Avaaz: Facebook continues to fail at flagging false and misleading posts about U.S. elections

Facebook is still failing to spot and flag false and misleading posts about elections, according to a new report published by Avaaz. The U.S.-based nonprofit analyzed a cross section of election misinformation on Facebook ahead of the pivotal U.S. Senate runoff in Georgia. Avaaz found that 60% of the false and misleading posts it detected reached thousands of voters without a fact-check label.

The report comes as investigations suggest Facebook is failing to stem the overall spread of misinformation, disinformation, and hate speech on its platform. In January, Seattle University associate professor Caitlin Carlson published results of an experiment in which she and a colleague collected more than 300 posts that appeared to violate Facebook’s hate speech rules. (Only about half the posts were ultimately removed.) Separately, according to the Washington Post and others, allies of U.S. President Donald Trump have received few penalties under Facebook’s rules. Former employees told the publication that Trump-aligned accounts have been protected against strict enforcement because of concerns about the perception of anti-conservative bias.

Avaaz documented and analyzed 204 Facebook posts published between November 4 and November 23 that promoted 12 different false election-related claims about Georgia and had been fact-checked by PolitiFact, Snopes, Reuters, USA Today, and other independent fact-checkers. The report showed that as of November 2020, these misinformation posts — which promoted false claims about senatorial candidates, the state’s election recount, and alleged voter fraud and intimidation — had reached a combined total of 643,406 interactions, and 112 did not have a fact-check label applied.

In June, Facebook began labeling — but not fact-checking — posts from politicians. Some studies have shown that these labels reduce people’s tendency to share misinformation, and researchers at MIT, among others, have asserted that posts lacking fact-check labels take on higher levels of authority, with many users assuming they’re correct. But according to BuzzFeed News, Facebook’s internal data shows labels placed on Trump’s misleading posts about the election have done little to slow their spread.

In the course of its audit, Avaaz found that 61 of the 204 posts it identified as misleading or false carried a generic Facebook election information label but no fact-check label or detailed correction. Meanwhile, 82 of the posts had a fact-check label, and 61 had no label at all. Avaaz says 59 of the posts with generic labels should have received a fact-check label, as only two of them came from elected leaders or campaigns Facebook would deem exempt. Moreover, the nonprofit points out that a profile called Qu Ed, which shared Georgia election misinformation with over 6,500 followers, appears to be inauthentic and promotes QAnon content, violating the ban on QAnon content Facebook announced in early October.

The posts with a generic label had a combined total of 361,262 interactions, the posts with a fact-check label had 269,971, and the unlabeled posts had 12,173, together accounting for the 643,406 interactions cited above. As Avaaz notes, Facebook’s failure to act might have further harmed trust in elections and could influence voter turnout and behavior ahead of early voting in Georgia on December 14.

Avaaz recommends that Facebook correct the record for all users exposed to any of the posts it identified, a step the company once considered but decided against for political reasons, according to the New York Times. Research has shown that retroactive corrections can reduce belief in disinformation by almost 50% when issued quickly.

Avaaz is also urging Facebook to label all variations of the same misinformation across its platforms and train its AI systems to spot “near-duplicate” versions so problematic pages and groups can be demoted. There is a limit to what AI can accomplish, however, particularly with respect to content like memes. When Facebook launched the Hateful Memes dataset, a benchmark designed to assess models that detect hate speech in memes, the most accurate algorithm achieved 64.7% accuracy, while humans demonstrated 85% accuracy on the dataset. A New York University study published in July estimated that Facebook’s AI systems make about 300,000 content moderation mistakes per day, and problematic posts continue to slip through Facebook’s filters. In one Facebook group that was created in November and rapidly grew to nearly 400,000 people, members calling for a nationwide recount of the 2020 U.S. presidential election swapped unfounded accusations about alleged election fraud and state vote counts every few seconds.
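To give a sense of what “near-duplicate” matching means in practice, the sketch below is a purely illustrative toy example, not Facebook’s actual system: it flags a reworded post as a variant of an already fact-checked claim by comparing character n-gram shingles with Jaccard similarity. The shingle size, similarity threshold, and sample claim are all arbitrary assumptions made for the example.

```python
# Purely illustrative sketch of "near-duplicate" text matching; not Facebook's
# actual system. Shingle size and similarity threshold are arbitrary assumptions.

def shingles(text: str, n: int = 5) -> set[str]:
    """Return the set of lowercase character n-grams (shingles) in the text."""
    normalized = " ".join(text.lower().split())  # lowercase, collapse whitespace
    return {normalized[i:i + n] for i in range(max(len(normalized) - n + 1, 1))}

def jaccard(a: set[str], b: set[str]) -> float:
    """Jaccard similarity: overlap of two shingle sets relative to their union."""
    if not a or not b:
        return 0.0
    return len(a & b) / len(a | b)

def is_near_duplicate(post: str, fact_checked_claim: str, threshold: float = 0.5) -> bool:
    """Flag a post whose shingles overlap heavily with an already fact-checked claim."""
    return jaccard(shingles(post), shingles(fact_checked_claim)) >= threshold

if __name__ == "__main__":
    # Hypothetical claim already rated false by fact-checkers, and a reworded variant.
    claim = "Thousands of fraudulent ballots were counted in the Georgia recount."
    variant = "Thousands of fraudulent ballots were counted in Georgia's recount!!!"
    print(is_near_duplicate(variant, claim))  # True: the reworded post is flagged
```

At Facebook’s scale, pairwise comparison like this would not suffice; production systems generally rely on approaches such as locality-sensitive hashing or learned text and image embeddings to find variants efficiently.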

Technological challenges aside, Facebook’s inconsistent, unclear, and in some cases controversial content moderation policies must be addressed, says Avaaz. “Election officials, including the secretary of state of Georgia, are receiving threats due to disinformation,” the organization wrote in its report. “Defending democracy and ensuring voters make fact-based decisions about voting, and are not deceived, requires the urgent implementation of this solution now.”
