AI Weekly: Workplace surveillance tech promises safety, but not worker rights

All of the issues around the pandemic-driven rash of surveillance and tracking that emerged for society at large are coalescing in the workplace, where people may have little to no choice about whether to show up to work or what sort of surveillance to accept from their employer.

Our inboxes have simmered with pitches about AI-powered workplace tracing and safety tools and applications, often from smaller or newer companies. Some are snake oil, and some seem more legitimate, but now we’re seeing larger tech companies unveil more about their workplace surveillance offerings. While the solutions coming from large, well-established tech companies presumably perform the functions they promise and offer critical safety tools, they don’t inspire confidence when it comes to workers’ rights or privacy.

Recently, IBM announced Watson Works, which it described in an email as “a curated set of products that embeds Watson artificial intelligence (AI) models and applications to help companies navigate many aspects of the return-to-workplace challenge following lockdowns put in place to slow the spread of COVID-19.” There were curiously few details in the initial release about the constituent parts of Watson Works. It mainly articulated boiled-down workplace priorities — prioritizing employee health; communicating quickly; maximizing the effectiveness of contact tracing; and managing facilities, optimizing space allocation, and helping ensure safety compliance.

IBM accomplishes all of this by collecting and monitoring external and internal data sources, which it uses to track conditions, produce information, and inform decisions. Those data sources include public health information as well as “WiFi, cameras, Bluetooth beacons and mobile phones” within the workplace. Though there’s a disclaimer in the release that Watson Works follows IBM’s Principles for Trust and Transparency and preserves employees’ privacy in its data collection, serious questions remain.
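IBM hasn’t published how Watson Works turns those signals into tracking or tracing decisions, but proximity systems built on Bluetooth beacons commonly estimate distance from received signal strength (RSSI) using a path-loss model. The sketch below is a minimal, hypothetical illustration of that general technique, not IBM’s implementation; the calibration constant, path-loss exponent, and contact threshold are all assumptions.

```python
# Hypothetical sketch: a log-distance path-loss model that converts a
# Bluetooth RSSI reading into a rough distance estimate, then flags
# possible close contacts. All constants are illustrative assumptions.

def estimate_distance_m(rssi_dbm: float,
                        tx_power_dbm: float = -59.0,
                        path_loss_exponent: float = 2.0) -> float:
    """Rough distance in meters from a single RSSI reading.

    tx_power_dbm is the calibrated RSSI at 1 meter; the path-loss
    exponent is roughly 2 in open space and higher indoors. Real
    systems smooth readings over time because single samples are noisy.
    """
    return 10 ** ((tx_power_dbm - rssi_dbm) / (10 * path_loss_exponent))

def possible_contact(rssi_dbm: float, threshold_m: float = 2.0) -> bool:
    """Flag a reading as a potential close contact for tracing."""
    return estimate_distance_m(rssi_dbm) <= threshold_m

if __name__ == "__main__":
    for rssi in (-55.0, -70.0, -85.0):
        print(f"RSSI {rssi:.0f} dBm -> ~{estimate_distance_m(rssi):.1f} m, "
              f"possible contact: {possible_contact(rssi)}")
```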

After VentureBeat reached out to IBM via email, an IBM representative replied with some answers and more details on Watson Works (and at this point, there’s a lot of information on the Watson Works site). The suite of tools within Watson Works includes Watson Assistant, Watson Discovery, IBM TRIRIGA, Watson Machine Learning, Watson Care Manager, and IBM Maximo Worker Insights, the last of which vacuums up and processes real-time data from the aforementioned sources.

Judging by its comments to VentureBeat, IBM’s approach to how its clients use Watson Works is rather hands-off. On the question of who bears liability if an employee gets sick or has their rights violated, IBM punted to the courts and lawmakers. The representative clarified that the client collects data and stores it however and for whatever length of time the client chooses. IBM processes the data but does not receive any raw data, like heart rate information or a person’s location. The data is stored on IBM’s cloud, but the client owns and manages the data. In other words, IBM facilitates and provides the means for data collection, tracking, analysis, and subsequent actions, but everything else is up to the client.

This approach to responsibility is what Microsoft’s Tim O’Brien would classify as a level one. In a Build 2019 session about ethics, he laid out four schools of thought about a company’s responsibility for the technology it makes:

  1. We’re a platform provider, and we bear no responsibility (for what buyers do with the technology we sell them)
  2. We’re going to self-regulate our business processes and do the right things
  3. We’re going to do the right things, but the government needs to get involved, in partnership with us, to build a regulatory framework
  4. This technology should be eradicated

IBM is not alone in its “level one” position. A recent report from VentureBeat’s Kyle Wiggers found that drone companies are largely taking a similar approach in selling technology to law enforcement. (Notably, drone maker Parrot declined comment for that story, but a couple of weeks later, the company’s CEO explained in an interview with Protocol why he’s comfortable having the U.S. military and law enforcement as customers.)

When HPE announced its own spate of get-back-to-work technology, it followed IBM’s playbook: It put out a press release with tidy summaries of workplace problems and HPE’s solutions without many details (though you can click through to learn more about its extensive offerings). Yet in those summaries are a couple of items worthy of a raised eyebrow, like the use of facial recognition for contactless building entry. As for guidance for clients about privacy, security, and compliance, the company wrote in part: “HPE works closely with customers across the globe to help them understand the capabilities of the new return-to-work solutions, including how data is captured, transmitted, analyzed, and stored. Customers can then determine how they will handle their data based on relevant legal, regulatory, and company policies that govern privacy.”

Amazon’s Distance Assistant appears to be a fairly useful and harmless application of computer vision in the workplace. It scans walkways and overlays green or red highlights to let people know whether they’re maintaining proper social distancing as they move around the workplace. On the other hand, the company is under legal scrutiny and dealing with worker objections over a lack of coronavirus safety measures in its own facilities.
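Amazon hasn’t detailed Distance Assistant’s internals, but the behavior it describes (detect people, measure spacing, color the feedback) maps onto a simple pairwise distance check over detections from any off-the-shelf person detector. The following is an illustrative sketch under those assumptions; the detector, the camera-to-floor mapping, and the 2-meter threshold are stand-ins, not Amazon’s implementation.

```python
import math
from itertools import combinations

# Illustrative sketch, not Amazon's implementation: given per-person
# floor positions (assumed to come from an upstream person detector
# plus a calibrated camera-to-floor mapping), mark anyone standing
# within the distancing threshold of someone else as "red".

Position = tuple[float, float]  # (x, y) in meters on the floor plane

def classify_people(people: list[Position],
                    min_distance_m: float = 2.0) -> list[str]:
    """Return 'green' or 'red' for each person, in input order."""
    colors = ["green"] * len(people)
    for (i, a), (j, b) in combinations(enumerate(people), 2):
        if math.hypot(a[0] - b[0], a[1] - b[1]) < min_distance_m:
            colors[i] = colors[j] = "red"
    return colors

if __name__ == "__main__":
    # Two people 1.5 m apart and one person standing alone.
    print(classify_people([(0.0, 0.0), (1.5, 0.0), (6.0, 0.0)]))
    # -> ['red', 'red', 'green']
```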

In a chipper fireside chat keynote at the Conference on Computer Vision and Pattern Recognition (CVPR), Microsoft CEO Satya Nadella touted the capabilities of the company’s “4D Understanding” in the name of worker safety. But in a video demo, you can see that it’s just more worker surveillance: tracking people’s bodies in space relative to one another and tracking the objects on their workstations to ensure they’re performing their work correctly and in the right order. From the employer’s perspective, this sort of oversight equates to improved safety and efficiency. But what worker wants literally every move they make subjected to AI-powered scrutiny?
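Microsoft hasn’t published how the demo’s step checking works, but the behavior shown (verifying that detected actions happen in a prescribed order) reduces to matching an observed label stream against an expected sequence. Here’s a toy sketch of that kind of check; the step names and the upstream action-recognition model are assumed for illustration, not drawn from Microsoft’s system.

```python
# Toy sketch of an order-of-operations audit, not Microsoft's system.
# Assumes an upstream vision model emits action labels; step names
# here are hypothetical.

EXPECTED_STEPS = ["pick_part", "attach_bracket", "torque_screws", "place_on_belt"]

def audit_step_order(observed: list[str],
                     expected: list[str] = EXPECTED_STEPS) -> list[str]:
    """Return human-readable violations where the observed stream
    deviates from the expected sequence."""
    violations = []
    next_idx = 0
    for step in observed:
        if next_idx < len(expected) and step == expected[next_idx]:
            next_idx += 1  # correct step, advance to the next one
        else:
            want = expected[next_idx] if next_idx < len(expected) else "nothing"
            violations.append(f"saw '{step}', expected '{want}'")
    if next_idx < len(expected):
        violations.append(f"missing steps: {expected[next_idx:]}")
    return violations

if __name__ == "__main__":
    print(audit_step_order(
        ["pick_part", "torque_screws", "attach_bracket", "place_on_belt"]))
```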

To be fair to IBM, it’s out of the facial recognition business entirely — ostensibly on moral grounds — and the computer vision in Watson Works, the company representative said, is for object detection only and isn’t designed to identify people. And most workplaces that would use this technology are not as fraught as the military or law enforcement.

But when a tech provider like IBM cedes responsibility for ethical practices in workplace surveillance, that puts all the power in the hands of employers and thus disempowers workers. Meanwhile, the tech providers profit.

We do need technologies that help us get back to work safely, and it’s good that there are numerous options available. But it’s worrisome that the tone around so many of the solutions we’re seeing, including those from larger tech companies, is morally agnostic, and that the solutions themselves appear to give no power to workers. We can’t forget that technology can be a weapon just as easily as it can be a tool, and that without cultural and historical context (like people desperate to hang onto their jobs amid historically high unemployment), we can’t understand its potential harms (or benefits).
