This week, as thousands of protestors marched in cities around the U.S. to bring attention to the death of George Floyd, police brutality, and abuses at the highest levels of government, members of the AI research community made their own small gestures of support. NeurIPS, one of the world’s largest AI and machine learning conferences, extended its technical paper submission deadline by 48 hours. And researchers pledged to match donations to Black in AI, a nonprofit promoting the sharing of ideas, collaborations, and discussion of initiatives to increase the presence of black people in the field of AI.
“NeurIPS grieves for its Black community members devastated by the cycle of police and vigilante violence. [We] mourn … for George Floyd, Breonna Taylor, Ahmaud Arbery, Regis Korchinski-Paquet, and thousands of black people who have lost their lives to this violence. [And we stand] with its black community to affirm that, today and every day, black lives matter,” the NeurIPS board wrote in a statement announcing its decision.
In a separate, independent effort aimed at spurring mentors to reach out to black researchers as they finalize their NeurIPS submissions, Google Brain scientist Nicolas Le Roux and Google AI lead Jeff Dean pledged to contribute $1,000 to Black in AI for every person who receives assistance.
For the AI community, acknowledgment of the movement is a start, but research shows that it — much like the rest of the tech industry — continues to suffer from a lack of diversity. According to a survey published by New York University’s AI Now Institute, as of April 2019, only 2.5% of Google’s workforce was black, while Facebook and Microsoft were each at 4%. This lack of representation is problematic on its face, but it also risks replicating or perpetuating historical biases and power imbalances, like image recognition services that make offensive classifications and chatbots that channel hate speech. In something of a case in point, a National Institute of Standards and Technology (NIST) study last December found that facial recognition systems misidentify black people more often than white people.
“Despite many decades of ‘pipeline studies’ that assess the flow of diverse job candidates from school to industry, there has been no substantial progress in diversity in the AI industry. The focus on the pipeline has not addressed deeper issues with workplace cultures, power asymmetries, harassment, exclusionary hiring practices, unfair compensation, and tokenization that are causing people to leave or avoid working in the AI sector altogether,” the AI Now Institute report concluded. “[AI] bias mirrors and replicates existing structures of inequality in the [industry and] society.”
Some solutions proposed by the AI Now Institute and others include greater transparency with respect to salaries and compensation, harassment and discrimination reports, and hiring practices. Others are calling for targeted recruitment to increase employee diversity, along with commitments to bolster the number of people of color, women, and other underrepresented groups at leadership levels of AI companies.
But it’s an uphill battle. An analysis published in Proceedings of the National Academy of Sciences earlier this year found that women and people of color in academia produce scientific novelty at higher rates than white men, but those contributions are often “devalued and discounted” in the context of hiring and promotion. And Google, one of the largest and most influential AI companies on the planet, reportedly scrapped diversity initiatives in May over concern about a conservative backlash.
As my colleague Khari Johnson recently wrote, many AI companies pay lip service to the importance of diversity. That was never acceptable, particularly considering that venture capital for AI startups reached record levels in 2018. But at this juncture, as Americans are forced to come to terms with systemic racism, it seems downright inexcusable.