AI Weekly: The trends that shaped 2020

A few days ago, I published a story about books that I read throughout the year to improve and inform my job covering artificial intelligence and adjacent industries. In all, the multi-part review contains nine books published in 2020 that explore subjects like business strategy, policy, and geopolitics, as well as the human rights consequences associated with AI deployments.

Too Smart, for example, looks at the smart city and smart home and their role in technopolitics and power. Monopolies Suck examines how big businesses fleece the average person. And in a year filled with calls to dismantle the social hierarchy of white supremacy, the explorations of Afrofuturism and Black joy detailed in Black Futures and Distributed Blackness were very welcome to me.

That process of reviewing books I read throughout the year put me in a reflective mood, and so in this final AI Weekly of 2020, we look back at the kinds of stories VentureBeat saw recur throughout 2020. Given how much news a year of unprecedented history produced, it seems like a good idea.

2020 kicked off with AI startups bringing in more funding than any previous year, according to CB Insights. Companies built on data, like Palantir and Snowflake, went public, while a collection of AI startup acquisitions helped Big Tech businesses concentrate their power.

The year began with COVID-19 spreading around the world and ended with algorithms deciding who gets the vaccine, after a year of watching Black and brown people die at disproportionate rates. Questions about who receives the vaccine, and when, continue to be raised.

At the beginning of 2020, the world learned the story of Clearview AI, a company that scraped billions of photos from the internet to make its facial recognition software and has extensive ties to far-right and white supremacist groups. Despite public outrage, being forced out of Canada, and an alleged biometric law violation, at the end of the year, news emerged that Clearview AI landed a Department of Defense contract.

In another harrowing case, on December 28, reports emerged of a Black man in New Jersey who was incorrectly identified using Clearview AI facial recognition and arrested. According to NJ.com, he spent a year fighting charges that could have carried a penalty of up to 20 years of prison time. This incident came to light less than six months after the first reported case of a false arrest stemming from facial recognition, in Detroit; Robert Williams, the innocent man in that incident, was also Black.

The year ends with additional policy reverberations for facial recognition. Boston and Portland passed citywide bans, and New York signed into law a bill placing a moratorium on facial recognition use in schools; meanwhile, a statewide ban stalled in Massachusetts. One of my favorite anecdotes from books I read this year comes from Too Smart, which argued that when you consider what smart cities look like, you shouldn't think of the futuristic metropolises sold in PR campaigns — think of New Orleans and the predictive policing that perpetuates historic bias.

Palantir and many other companies sold policing tools to New Orleans over the years, but 2020 ends with the New Orleans City Council passing a ban on predictive policing and facial recognition. The Gulf Coast news outlet The Lens reports that legislation is watered down from its original version, but considering the fact that two years ago the city council didn’t know police were using Palantir, it’s a story worth remembering.

“It’s here,” New Orleans councilmember Jason Williams told The Lens. “The technology is here before there’s laws guiding the technology. And I think it’s a very dangerous position for communities to be in.”

In early 2020, Ruha Benjamin warned the AI community it needs to take steps to include historical and social context or risk becoming party to gross human rights violations like IBM, which provided technology used to document the Holocaust. Earlier this month, news reports emerged that Alibaba and Huawei are developing facial recognition for tracking and detaining members of Muslim minority groups in China. With more than one million people detained today, it’s a phenomenon that often draws comparisons with Nazi concentration camps and the Jewish genocide of World War II. IBM agreed to stop selling facial recognition software in June.

There were also two reports in 2020 named The State of AI. One, from Air Street Capital, found evidence of brain drain from academia to industry. Another, from McKinsey, found that businesses were increasing their use of AI but that few business managers who took part in its survey are meeting 10 major measures of risk mitigation. The consequences of that shortfall typically fall on marginalized communities, but it can also leave businesses themselves vulnerable, a situation that appeared to change little from the same survey administered a year earlier.

This is of course an incomplete collection of trends. I’ve got no grand statement or takeaway to offer here that ties all these together, but given these trends and that we are currently living in the deadliest month in the deadliest year in American history, it only seems right that people end the year by sticking up for humanity. In the ML community, that means confronting issues of inequality and the potential to automate oppression within its own ranks, and not allowing events of bias or human rights violations to become normalized.

The need to continue emphasizing this is underscored by a few important recent events. Amid the fallout from Google firing Timnit Gebru and other events that seriously call into question the objectivity of AI research produced with corporate funding, an AI research survey coauthored in part by Google AI researchers called for a major culture change. Following high-profile instances of AI bias revealed in computer vision and natural language models, the survey's coauthors say the machine learning community needs to shift away from massive, poorly curated datasets and toward treating data not as mere numbers but as something subject to human privacy and property rights.

This wasn’t a year anyone saw coming, full of challenges new and old. Happy New Year to everyone reading this. Let’s ensure that as we confront the challenges of 2021, we keep our humanity intact and fight to defend the humanity of others so that we can together ensure AI is a technology that serves everyone, not just a handful of engineers and Big Tech companies.

For AI coverage, send news tips to Khari Johnson and Kyle Wiggers and AI editor Seth Colaner — and be sure to subscribe to the AI Weekly newsletter and bookmark our AI channel, The Machine.

Thanks for reading,

Khari Johnson

Senior AI Staff Writer

VentureBeat

