AI Weekly: A biometric surveillance state is not inevitable, says AI Now Institute

In a new report called “Regulating Biometrics: Global Approaches and Urgent Questions,” the AI Now Institute says regulation advocates are beginning to believe a biometric surveillance state is not inevitable.

The report’s release couldn’t be more timely. As the pandemic drags on into the fall, businesses, government agencies, and schools are desperate for solutions that ensure public safety. With measures ranging from temperature checks at points of entry to health wearables, surveillance drones, and facial recognition systems, there’s never been a greater need to balance the collection of biometric data against individual rights and freedoms. Meanwhile, a growing number of companies are selling biometric-driven products and services that seem benign but could become problematic or even abusive.

Surveillance capitalism is presented as inevitable precisely to discourage individuals from daring to push back. It is an especially easy illusion to maintain as COVID-19 continues to spread around the globe: people reach for immediate solutions, even if that means acquiescing to a new and possibly longer-lasting problem in the future.

When it comes to biometric data collection and surveillance, there’s often a lack of clarity around what is ethical, safe, and legal — and what laws and regulations are still needed. The AI Now report methodically lays out all of those challenges, explains why they’re important, and advocates for solutions. It then provides shape and substance through eight case studies that examine biometric surveillance in schools, police use of facial recognition technologies in the U.S. and U.K., national efforts to centralize biometric information in Australia and India, and more.

All citizens — not just politicians, entrepreneurs, and technologists — need to acquire a working understanding of the issues around biometrics, AI technologies, and surveillance. Amid a rapidly changing landscape, the report could serve as a reference for understanding the novel questions that continue to arise. It would be an injustice to try to summarize the whole 111-page document in a few hundred words, but several broad themes run through it.

Laws and regulations pertaining to data, rights, and surveillance are lagging behind the development and implementation of various AI technologies that monetize biometrics or adapt them for government tracking. This is why companies like Clearview AI are thriving — what they do is offensive to many and may be unethical, but it is — with some exceptions — still legal.

The very definition of biometric data remains unsettled, and some experts want to pause the implementation of these systems while we create new laws and reform or update others. Others seek to ban the systems entirely on the grounds that some things are perpetually dangerous, even with guardrails.

To effectively regulate the technology, average citizens, private companies, and governments need to fully understand data-powered systems that involve biometrics and their inherent tradeoffs. The report suggests that “any infringement of privacy or data-protection rights be necessary and strike the appropriate balance between the means used and the intended objective.” Such proportionality also means ensuring a “right to privacy is balanced against a competing right or public interest.”

This raises the question of whether a situation warrants the collection of biometric data at all. It is also necessary to monitor these systems for “function creep” and make sure data use doesn’t extend beyond the original intent.

The report considers the example of facial recognition used to track student attendance in Swedish schools. The Swedish Data Protection Authority eventually banned the technology on the grounds that facial recognition was disproportionate to the task at hand. There were also concerns about function creep: such a system captures rich data on many children and teachers. What else might that data be used for, and by whom?

This is where rhetoric around safety and security becomes powerful. In the Swedish school example, it’s easy to see how facial recognition fails the test of proportionality. But when the stated purpose is safety or security, it’s much harder to push back. If the purpose of a system is not taking attendance but scanning for weapons or identifying people who aren’t supposed to be on campus, the conversation takes a different turn.

The same holds true for the push to get people back to work and to keep returning students and faculty safe from the spread of COVID-19. People are amenable to more invasive and extensive biometric surveillance if it means maintaining their livelihood while reducing the risk of becoming a pandemic statistic.

It’s tempting to default to a simplistic position that more security equals more safety, but that logic can fall apart in real-life applications. First of all: more safety for whom? If refugees have to submit a full battery of biometric data at the border, or civil rights advocates are subjected to facial recognition while exercising their right to protest, whose safety is protected? And even if there is some need for security in such situations, enhanced surveillance can have a chilling effect on a range of freedoms. People fleeing for their lives may balk at invasive conditions of asylum. Protesters may be afraid to speak freely, which hurts democracy itself. And kids could suffer from the constant reminder that their school is under threat, hampering mental well-being and the ability to learn.

A related problem is that regulation may come only after these systems have been deployed, as the report illustrates with the case of India’s controversial Aadhaar biometric identity project. The report describes it as “a centralized database that would store biometric information (fingerprints, iris scans, and photographs) for every individual resident in India, indexed alongside their demographic information and a unique 12-digit ‘Aadhaar’ number.” The program ran for years without proper legal guardrails. In the end, instead of using new regulations to roll back the system or address its flaws and dangers, lawmakers essentially fashioned the law to fit the system, thereby encoding its problems for posterity.

And then there are issues of how well a given measure works and whether it’s even helpful. You could fill entire tomes with research on AI bias and examples of how, when, and where those biases cause technological failures and result in abuse. Even when models are benchmarked, the report notes, their scores may not reflect how well they perform in real-world settings. Fixing bias problems in AI, at multiple levels of data processing, product design, and deployment, is one of the most important and urgent challenges the field faces today.

Keeping a human in the loop is one way to mitigate the errors AI coughs up. In police departments, for instance, biometric scans run against a database are meant only to generate leads, which officers then follow up on before taking action. But these systems often suffer from automation bias, which is when people rely too much on the machine and overestimate its credibility. That defeats the purpose of having a human in the loop and can lead to horrors like false arrests, or worse.

Efforts to improve efficacy also raise moral considerations. Many AI companies say they can determine a person’s emotions or mental state by using computer vision to examine their gait or their face. Though the reliability of such tools is debatable, some people believe their very goal is immoral. Taken to the extreme, such predictive efforts result in absurd research that amounts to AI phrenology.

Finally, none of the above matters without accountability and transparency. When private companies can collect data without anyone knowing or consenting, when contracts are signed in secret, when proprietary concerns take precedence over demands for auditing, when laws and regulations between states and countries are inconsistent, and when impact assessments are optional, human rights are lost. And that’s not acceptable.

The pandemic has revealed cracks in governmental and social systems and has brought simmering problems to a boil. As we cautiously return to work and school, the biometrics issue remains front and center. We’re being asked to trust biometric surveillance systems, the people who made them, and the people who profit from them, all without sufficient transparency or regulation. It’s a steep price to pay for purported protections for our health and the economy. But you can at least understand the issues at hand, thanks to the AI Now Institute’s latest report.
