
What Big Tech and Big Tobacco research funding have in common

Amid declining sales and mounting evidence that smoking causes lung cancer, tobacco companies in the 1950s undertook PR campaigns to reinvent themselves as socially responsible and to shape public opinion. They also started funding research into the relationship between health and tobacco. Now, Big Tech companies like Amazon, Facebook, and Google are following the same playbook to fund AI ethics research in academia, according to a recently published paper by University of Toronto Centre for Ethics PhD student Mohamed Abdalla and Harvard Medical School student Moustafa Abdalla.

The Abdalla brothers argue that Big Tech companies aren’t just involved in ethics discussions in academic settings but are leading them. The coauthors conclude that effective solutions to the problem will need to come from institutional or governmental policy changes.

“The truly damning evidence of Big Tobacco’s behavior only came to light after years of litigation. However, the parallels between the public facing history of Big Tobacco’s behavior and the current behavior of Big Tech should be a cause for concern,” the paper reads. “We believe that it is vital, particularly for universities and other institutions of higher learning, to discuss the appropriateness and the tradeoffs of accepting funding from Big Tech, and what limitations or conditions should be put in place.”

An analysis included in the report of tenure-track research faculty at four major AI research universities (MIT, Stanford University, UC Berkeley, and the University of Toronto) found that nearly 60% of those with known funding sources have taken money from Big Tech.

Last week, Google fired AI ethics researcher Timnit Gebru in what Google employees described as “a retaliatory firing” following “unprecedented research censorship.” In an interview with VentureBeat earlier this week, Gebru said AI research conferences are heavily influenced by industry and that the world needs better options for AI research funding than corporate and military sources.

The Grey Hoodie Project name is meant to hark back to Project Whitecoat, a deliberate attempt that began in the 1980s to obfuscate the impact of secondhand smoke. The Partnership on AI (PAI), the coauthors argue, plays the role of the Council for Tobacco Research, a group that supplied funding to academics studying the impact of smoking on human health. Created in 2016 by Big Tech companies like Amazon, Facebook, and Google, PAI now has more than 100 participating organizations, including the ACLU and Amnesty International. The coauthors argue that by participating in meetings, research, and other initiatives, nonprofit and human rights groups end up legitimizing Big Tech companies.

In a December 2019 account published in The Intercept, MIT PhD student Rodrigo Ochigame called AI ethics initiatives from Silicon Valley “strategic lobbying efforts” and quoted an MIT Media Lab colleague as saying “Neither ACLU nor MIT nor any non-profit has any power in PAI.”

Earlier this year, the digital human rights organization Access Now resigned from the Partnership on AI, in part because the coalition had been ineffective in influencing the behavior of its corporate partners. In an interview with VentureBeat responding to questions about ethics washing, PAI director Terah Lyons said it takes time to change the behavior of Big Tech companies.

In addition to funding academic research, Big Tech companies also fund AI research conferences. For example, the coauthors note that the Fairness, Accountability, and Transparency (FAccT) conference has never had a year without Big Tech funding, and that NeurIPS has had at least two Big Tech sponsors every year since 2015. Apple, Amazon Science, Facebook AI Research, and Google Research are all among the platinum sponsors of NeurIPS this year.

Abdalla and Abdalla suggest academic researchers consider splintering AI ethics into a separate field from computer science, akin to the way bioethics is separated from medicine and biology.

The Grey Hoodie Project follows analysis released this fall about the de-democratization of AI and a compute divide forming between Big Tech, elite universities, and the rest of the world. The Grey Hoodie Project paper was initially published this fall and was subsequently accepted by the Resistance AI workshop, which takes place Friday as part of NeurIPS, the largest annual gathering of AI researchers in the world. Also for the first time this year, NeurIPS required authors to state financial conflicts of interest and the potential societal impact of their work.

The topic of corporate influence over academic research came up at NeurIPS on Friday morning. During a panel conversation, Black in AI cofounder Rediet Abebe said she refuses to take funding from Google and that more senior faculty in academia need to speak up. Next year, Abebe will become the first Black woman to serve as an assistant professor in the Electrical Engineering and Computer Science (EECS) department at UC Berkeley.

“Maybe a single person can do a good job separating out funding sources from what they’re doing, but you have to admit that in aggregate there’s going to be an influence. If a bunch of us are taking money from the same source, there’s going to be a communal shift towards work that is serving that funding institution,” she said.

The Resistance AI workshop at NeurIPS explores how AI has shifted power into the hands of governments and corporations and away from marginalized communities, and how to shift that power back to the people. Organizers include the founders of groups like Disability in AI and Queer in AI, as well as members of the AI community who describe themselves as abolitionists, advocates, ethicists, and AI policy experts, such as J Khadijah Abdurahman, who this week penned a piece about the moral collapse of AI ethics, and Marie-Therese Png, who coauthored a paper earlier this year about anticolonial AI and how to keep AI free of exploitative or oppressive technology.

A statement from Google Brain research associate Raphael Lopes and other workshop organizers said the Resistance AI group was formed following a meetup at an AI conference this summer and is designed to include people marginalized in society today.

“We were frustrated with the limitations of ‘AI for good’ and how it could be coopted as a form of ethics-washing,” organizers said. “In some ways, we still have a long way to go: many of us are adjacent to big tech and academia, and we want to do better at engaging those who don’t have this kind of institutional power.”

Other work presented today as part of the event includes the following:

  • “AI at the Borderlands” explores surveillance along the U.S.-Mexico border.
  • In a paper VentureBeat has written about, Alex Hanna and Tina Park urged tech companies to think beyond scale to properly address societal issues.
  • “Does Deep Learning Have Politics?” asserts that a shift toward deep learning and increasingly large datasets “centers the power of these algorithms in corporations or the government, which thus leaves its practice vulnerable to the institutional racism and sexism that is so often found there.”
  • A paper analyzing research submitted to major conferences found that performance, accuracy, understanding, and building on recent work are among the top values reflected in machine learning research.

On Saturday, another NeurIPS workshop will examine harms caused by AI and the broader impact of AI research on society.
