These are the AI risks we should be focusing on

Since the dawn of the computer age, humans have viewed the rise of artificial intelligence (AI) with some degree of apprehension. Popular depictions of AI often involve killer robots or all-knowing, all-seeing systems bent on destroying the human race. These sentiments have similarly pervaded the news media, which tends to greet breakthroughs in AI with more alarm or hype than measured analysis. In reality, the true concern should be whether these overly dramatized, dystopian visions pull our attention away from the more nuanced, yet equally dangerous, risks posed by the misuse of AI applications that are already available or under development today.

AI permeates our everyday lives, influencing which media we consume, what we buy, where and how we work, and more. AI technologies are sure to keep disrupting our world, from automating routine office tasks to solving urgent challenges like climate change and hunger. But as incidents such as wrongful arrests in the U.S. and the mass surveillance of China's Uighur population demonstrate, we are already seeing negative impacts from AI. Focused on pushing the boundaries of what's possible, companies, governments, AI practitioners, and data scientists sometimes fail to see how their breakthroughs could cause social problems until it's too late.

Therefore, the time to be more intentional about how we use and develop AI is now. We need to integrate ethical and social impact considerations into the development process from the beginning, rather than grappling with these concerns after the fact. And most importantly, we need to recognize that even seemingly benign algorithms and models can be used in harmful ways. We're a long way from Terminator-like AI threats, and that day may never come, but there is work happening today that merits equally serious consideration.

How deepfakes can sow doubt and discord

Deepfakes are realistic-looking artificial images, audio, and videos, typically created using machine learning methods. The technology to produce such "synthetic" media is advancing at breakneck speed, with sophisticated tools now freely and readily accessible, even to non-experts. Malicious actors already deploy such content to ruin reputations and commit fraud, and it's not difficult to imagine other injurious use cases.

Deepfakes create a twofold danger: that the fake content will fool viewers into believing fabricated statements or events are real, and that their rising prevalence will undermine the public’s confidence in trusted sources of information. And while detection tools exist today, deepfake creators have shown they can learn from these defenses and quickly adapt. There are no easy solutions in this high-stakes game of cat and mouse. Even unsophisticated fake content can cause substantial damage, given the psychological power of confirmation bias and social media’s ability to rapidly disseminate fraudulent information.
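
To make the detection side of this arms race concrete, here is a minimal sketch of how many published deepfake detectors are framed: an ordinary image classifier fine-tuned to label face crops as real or manipulated. The backbone choice, dataset layout, and hyperparameters below are illustrative assumptions, not a description of any specific production tool.

```python
# Sketch: deepfake detection framed as binary image classification.
# Assumes face crops are stored in data/train/fake and data/train/real;
# the paths and hyperparameters are placeholders for illustration only.
import torch
import torch.nn as nn
from torchvision import datasets, models, transforms

transform = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],
                         std=[0.229, 0.224, 0.225]),
])

# ImageFolder assigns class indices from the subfolder names ("fake", "real").
train_data = datasets.ImageFolder("data/train", transform=transform)
loader = torch.utils.data.DataLoader(train_data, batch_size=32, shuffle=True)

# Start from a generic pretrained backbone and replace the classification
# head with a single logit for the fake-vs-real decision.
model = models.resnet18(pretrained=True)
model.fc = nn.Linear(model.fc.in_features, 1)

criterion = nn.BCEWithLogitsLoss()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)

model.train()
for images, labels in loader:
    optimizer.zero_grad()
    logits = model(images).squeeze(1)
    loss = criterion(logits, labels.float())
    loss.backward()
    optimizer.step()
```

The catch, as noted above, is that a forger can train a generator against exactly this kind of classifier, which is why detection alone does not settle the cat-and-mouse game.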

Deepfakes are just one example of AI technology that can have subtly insidious impacts on society. They showcase how important it is to think through potential consequences and harm-mitigation strategies from the outset of AI development.

Large language models as disinformation force multipliers

Large language models are another example of an AI technology developed with benign intentions that still merits careful consideration from a social impact perspective. These models learn to write humanlike text using deep learning techniques, trained on patterns in massive datasets, often scraped from the internet. Leading AI research company OpenAI's latest model, GPT-3, boasts 175 billion parameters, more than 100 times as many as its predecessor, GPT-2. This scale allows GPT-3 to generate almost any kind of text with minimal human input, including short stories, email replies, and technical documents. In fact, the statistical and probabilistic techniques that power these models are improving so quickly that many of their use cases remain unknown. For example, early users discovered only by accident that the model could also write code.
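
GPT-3 itself is reachable only through OpenAI's gated API, but the prompt-in, text-out workflow described above can be illustrated with its much smaller, openly available predecessor, GPT-2, via the Hugging Face transformers library. This is a sketch of the interaction pattern rather than of GPT-3's capabilities; the prompt and sampling settings are arbitrary.

```python
# Sketch: prompt-driven text generation with the open GPT-2 model.
# Shows the "minimal human input" workflow; GPT-3 is accessed through a
# hosted API instead, but the pattern is the same.
from transformers import pipeline

generator = pipeline("text-generation", model="gpt2")

prompt = "Dear customer, thank you for reaching out about your order."
completions = generator(
    prompt,
    max_length=80,           # total tokens, including the prompt
    num_return_sequences=2,  # produce two alternative continuations
    do_sample=True,          # sample rather than decode greedily
)

for i, out in enumerate(completions):
    print(f"--- completion {i} ---")
    print(out["generated_text"])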

However, the potential downsides are readily apparent. Like its predecessors, GPT-3 can produce sexist, racist, and discriminatory text because it learns from the internet content it was trained on. Furthermore, in a world where trolls already impact public opinion, large language models like GPT-3 could plague online conversations with divisive rhetoric and misinformation. Aware of the potential for misuse, OpenAI restricted access to GPT-3, first to select researchers and later as an exclusive license to Microsoft. But the genie is out of the bottle: Google unveiled a trillion-parameter model earlier this year, and OpenAI concedes that open source projects are on track to recreate GPT-3 soon. It appears our window to collectively address concerns around the design and use of this technology is quickly closing.

The path to ethical, socially beneficial AI

AI may never reach the nightmare sci-fi scenarios of Skynet or the Terminator, but that doesn’t mean we can shy away from facing the real social risks today’s AI poses. By working with stakeholder groups, researchers and industry leaders can establish procedures for identifying and mitigating potential risks without overly hampering innovation. After all, AI itself is neither inherently good nor bad. There are many real potential benefits that it can unlock for society — we just need to be thoughtful and responsible in how we develop and deploy it.

For example, we should strive for greater diversity within the data science and AI professions, including taking steps to consult with domain experts from relevant fields like social science and economics when developing certain technologies. The potential risks of AI extend beyond the purely technical; so too must the efforts to mitigate those risks. We must also collaborate to establish norms and shared practices around AI like GPT-3 and deepfake models, such as standardized impact assessments or external review periods. The industry can likewise ramp up efforts around countermeasures, such as the detection tools developed through Facebook’s Deepfake Detection Challenge or Microsoft’s Video Authenticator. Finally, it will be necessary to continually engage the general public through educational campaigns around AI so that people are aware of and can identify its misuses more easily. If as many people knew about GPT-3’s capabilities as know about The Terminator, we’d be better equipped to combat disinformation or other malicious use cases.

We have the opportunity now to set incentives, rules, and limits on who has access to these technologies, how they are developed, and the settings and circumstances in which they are deployed. We must use this power wisely, before it slips out of our hands.

Peter Wang is CEO and Co-founder of data science platform Anaconda. He's also the creator of the PyData community and conferences and a member of the board at the Center for Humane Technology.
