The older I get, the more I wish I could stop time so I could read more books. Books that earn my time and attention are those that promise to enrich me as a person and deepen my understanding of AI for the work I do as senior AI staff writer at VentureBeat.
This year, I read more than a dozen books, some published in recent months and others in years past, like The Curse of Bigness by Tim Wu, a great read for anyone interested in understanding antitrust, and the novel Parable of the Sower by Octavia E. Butler, one of my favorite books of all time.
Facts and insights from the books I read often find their way into my stories. For example, last year I wrote about how AI ethics is all about power in a work that drew heavily on Race After Technology by Ruha Benjamin and The Age of Surveillance Capitalism by Shoshana Zuboff. Reading Amy Webb’s The Big Nine also helped deepen my understanding of what could go wrong if companies like Amazon, Facebook, and Google grow without challenge for the next 50 years.
As the year winds down, here’s a rundown and some thoughts on nine books I read in 2020 that touch on artificial intelligence. Some books on this list are more about art or the study of Black tech cyberculture than AI, but each offers thought-provoking insights; a unique perspective; or a window into how AI impacts business, balances of power, or human rights. Best of all, many of the books included here attempt to imagine an alternative tech future without gross violations of human rights or accelerating inequalities.
Black Futures
This book came together following some Twitter DMs a few years ago and is the best blend of words and imagery on this list. Black Futures was edited by New York Times Magazine staff writer Jenna Wortham and art curator Kimberly Drew and released December 1.
With more than 100 contributors and nearly 500 pages of short reads and rich visuals, Black Futures is a collection of poems, memes, original essays, photography, and art. It’s designed to be read in no particular order, and Drew and Wortham encourage you to read along with an internet-connected device so you can search for names, terms, and websites mentioned in the text.
You can read a soliloquy from a repertory theater play on one page and learn about the video game Hair Nah on the next. You can laugh at #ThanksgivingWithBlackFamilies memes and then follow that up with a piece about Black queer culture, Black political action, or Black power naps. There’s also a mix of practical advice, like how to survive a police riot or how to build an archive for a Black future, such as the Octavia E. Butler collection maintained by the Huntington Library near Los Angeles.
When it comes to AI, one of my favorite parts is the story of Alisha B. Wormsley, a self-described Black sci-fi nerd who bought a billboard in Pittsburgh simply to advertise the mantra “There are Black people in the future.” This gets at the whiteness of AI in science fiction and pop culture that in many cases seeks to erase Black people from existence, according to research released earlier this year.
That piece also cites a favorite Martin Luther King, Jr. quote on automation that says, “When machines and computers, profit motives, and property rights are considered more important than people, the giant triplets of racism, economic exploitation, and militarism are incapable of being conquered.”
Another piece in the book calls dreaming of a Black disabled future a radical act. Former Google Ethical AI co-lead Timnit Gebru touched on the idea of envisioning a more inclusive world when she told VentureBeat in an interview earlier this month that she wants young Black people and women who witnessed her mistreatment to know that their perspectives are an invaluable part of imagining alternative futures.
Contributors to this book include writers like Hannah Giorgis; Ta-Nehisi Coates; Nikole Hannah-Jones; Wesley Morris, who co-hosts the podcast Still Processing with Jenna Wortham; and singer Solange Knowles.
One of my favorite things about Black Futures might be that one of the book’s 10 sections is dedicated to Black joy. I’d never seen that before. Once you can actually invite people inside your home again, Black Futures will make a beautiful coffee table book that lets guests flip through, dive in, and get lost in a good way.
Monopolies Suck: 7 Ways Big Corporations Rule Your Life and How to Take Back Control
This is a book for people who feel helpless in the face of powerful businesses. In Monopolies Suck, Sally Hubbard makes the case that anticompetitive behavior and market concentration are benefiting not just Big Tech giants like Amazon, Apple, Facebook, and Google, but companies throughout virtually every major industry in the United States today.
She recognizes the deleterious consequences of market concentration beyond tech, like rising prices in the airline industry, price gouging on pharmaceutical drugs, and unhealthy effects on the food we eat. In outlining these harms, Hubbard compares health care business conglomerates to organized crime mafias.
She also argues that monopolies erode the American dream, ramp up inequality, cripple innovation, and threaten democracy.
The book considers the viewpoints and influence of key figures in the history of antitrust, namely former Ohio senator John Sherman, whose Sherman Antitrust Act (1890) provides the basis for antitrust law today, and Robert Bork, whose conservative views have shaped the attitudes of judges and lawmakers. Hubbard also examines the role algorithms, data, and surveillance play in consolidating power for large corporations and how those businesses lobby lawmakers.
Hubbard used to work in the antitrust division of the Department of Justice. Today she works at the Open Markets Institute. She also testified as an expert in the antitrust investigation a congressional subcommittee completed this fall.
What I enjoy about this book is that the author takes time to recognize how powerless market concentration can make people feel. At times, Hubbard seems to stop just to tell readers they aren’t crazy, that they really are making less money and enjoying fewer opportunities now than in the past.
Each chapter ends with a section titled “Your Life, Better” that summarizes the way monopolies lower your pay or crush the American dream, sometimes supplying advice for how you can take back control.
Monopolies Suck came out this fall, shortly after the DOJ lawsuit against Google and the completion of a congressional subcommittee’s Big Tech antitrust investigation. Both represent antitrust action of a kind not seen in decades, and both may have implications for the AI and tech corporations that this year ranked among the 10 companies with the highest market caps in the world. Part of me wishes this book had come out after those historic events so it could have included Hubbard’s response.
If you’re looking for a book that pulls punches and defers to Big Tech complaints that regulation could negatively impact innovation and the economy, this isn’t it. But if you or someone you know might appreciate a careful examination of corporate influence and a guide that can empower everyone to act, Hubbard offers powerful insight.
Competing in the Age of AI: Strategy and Leadership When Algorithms and Networks Run the World
This book is for business executives and decision-makers anxious to grasp the ways artificial intelligence will transform business and society. Harvard Business School professors Marco Iansiti and Karim Lakhani explore how incumbent companies and digital challengers will clash and how businesses must re-architect firms and factories in the age of AI. While occasionally drawing on insights from their own research, the coauthors closely examine the forces that helped companies like Ant Financial and WeChat achieve unprecedented dominance.
Competing in the Age of AI is an excellent book for anyone in need of a primer on how data and AI transform business in an increasingly digital economy, producing what Iansiti and Lakhani call a “new breed of companies.” It’s packed full of simple-to-understand business strategy and insights into what business leaders from inside and outside the U.S. need to do to adapt and thrive.
As the authors explain, the book was written to “give readers the insight to prepare for collisions that will inevitably affect their businesses.”
More specifically, it gets into the wisdom this change requires of company leaders. The book notes that failure to adapt can leave businesses vulnerable to data-driven competition. Part of that shift will require managers to learn some machine learning essentials: “Just as every MBA student learns about accounting and its salience to business operations without wanting to become a professional accountant, managers need to do the same with AI and the related technology stack,” the book reads.
The book devotes time to examining the network effect, referring to it as an essential part of strategy for digital operating models, and it lists questions managers should ask themselves if they want to form sound strategies. Competing in the Age of AI focuses primarily on opportunities, but it also briefly touches on the need to address risks associated with AI deployments.
Turning Point: Policymaking in the Era of Artificial Intelligence
Turning Point is a book by Brookings Institution vice president Darrell West and Brookings president John Allen, a retired U.S. Marine Corps four-star general and former NATO commander. Both men have testified before Congress and advised lawmakers shaping AI policy in the U.S. In a House Budget Committee hearing this fall about the role AI will play in the country’s economic recovery, West talked with Rep. Sheila Jackson Lee (D-TX) about how tech is accelerating inequality.
As it is a Brookings Institution publication, you get to hear from experts like Brookings scholar William Galston, who argues that government use of facial recognition should be treated with the same legal weight as search warrants. You will also hear about a cross section of major influences on policy and the regulation of artificial intelligence. This is also one of the only books I’ve ever come across that uses free Unsplash stock imagery for cover art.
Allen and West support increased government spending to address issues associated with artificial intelligence in the years ahead. Areas of concern range from education to national defense. Turning Point also addresses policy considerations across a broad spectrum of issues, from the datafication of businesses and geopolitics to levels of inequality and people moving into cities, a trend now happening at rates unseen in human history.
I like that the book doesn’t get beyond page two without recognizing AI’s potential to concentrate wealth and power. I also appreciate that Allen and West acknowledge how startups like Kairos and Affectiva have refused to accept government or surveillance contracts. But the reader in me also wants to hear the authors examine ties between other AI startups and white supremacy groups or look into the motives of companies that are eager to provide surveillance software to governments.
Turning Point was released in July and, as its subtitle suggests, is focused on policymaking with artificial intelligence in mind. I felt the authors achieved their goal of defining how AI is impacting fundamental aspects of people’s lives and shaping the strategies and investments of nation states.
But I disagree with their assertion that there’s no use trying to ban autonomous weapons or rein in their use. In fact, some countries have already tried to rally the world’s governments around a ban on the use of lethal autonomous weapons. And thus far, about 30 nations, including China, have called for a ban on fully autonomous weapons at meetings of the UN Convention on Conventional Weapons (CCW), according to a Human Rights Watch analysis.
I also wish the ethics chapter appeared earlier in the book, instead of being relegated to the final chapters. While Allen and West commit time to ethics in early applications, business opportunities are considered before the risks. Turning Point isn’t alone in this. Other books on this list, like Competing in the Age of AI, adopt the same structure.
Turning Point briefly touches on the role machine learning plays in the targeted detainment of Uighur Muslims in China, a subject of importance to many of the authors on this list. In a notable recent development, news reports earlier this month found that both Alibaba and Huawei have tested or sold facial recognition for tracking members of the Muslim minority group in China.
I also appreciate that the authors took time to recognize the major risk the United States is incurring by failing to graduate enough people proficient in science, technology, engineering, and math (STEM). They also suggest policy approaches to address this issue, which they deem a threat to national security. Recent cyberattacks President-elect Joe Biden described as a “grave risk” to the United States illustrate this point.
This is a compelling book for anyone anxious to understand how data collection and AI are changing business, education, defense, and health care. It also prescribes policy solutions, like the creation of a national data strategy, cybersecurity protection for the nation’s infrastructure, and the establishment of a national research cloud. The latter approach is supported by lawmakers in Congress and major businesses, as well as researchers concerned about growing inequality among AI researchers in the age of deep learning.
My recommendation comes with the caveat that this book is cowritten by a retired general and is less critical of the military’s history of influence on the field of AI than other books on this list, like Artificial Whiteness.
Data Feminism
Data Feminism encourages people to adopt a framework informed by direct experience based on intersectional feminism and co-liberation. Throughout the book, authors Catherine D’Ignazio and Lauren Klein focus on the work of Black female scholars like Kimberlé Crenshaw. Notable endorsers of the book include Algorithms of Oppression author Safiya Noble, Race After Technology author Ruha Benjamin, and DJ Patil, who coined the title “data scientist” and was the first White House Chief Data Scientist.
The hype around big data and AI, the coauthors write, is “deafeningly male and white and techno-heroic.” They add that “the time is now to reframe that world with a feminist lens.”
Written by two white women, Data Feminism acknowledges that people who experience privilege can be unaware of oppression experienced by other people, something the authors term “privilege hazard.”
“The work of data feminism is first to tune into how standard practices in data science serve to reinforce these existing inequities and second to use data science to challenge and change the distribution of power,” the authors write. “Our overarching goal is to take a stand against the status quo — against a world that benefits us, two white women college professors, at the expense of others. To work toward that goal, we have chosen to feature the voices of those who speak from the margins.”
The book describes instances when data is used to prove inequality, ranging from Christine Darden’s experience at NASA to Joy Buolamwini’s critical work analyzing commercial facial recognition systems. The authors detail ongoing efforts to redress inequities, including the Library of Missing Datasets and other work to gather data that governments do not collect. The book further asserts that governmental data collection is often a reflection of who has power and who doesn’t. Examples include a femicide data-gathering project in Mexico that follows in the footsteps of Ida B. Wells’ work to gather data about lynchings of Black people in the U.S.
Data Feminism was released in February and was written for data scientists interested in the ways intersectional feminism can move the profession toward justice and help feminists embrace data science. The authors strive for inclusion and note that the book is not only for women.
The book joins a number of works introduced this year that urge people to think differently about approaches to developing artificial intelligence. During the Resistance AI workshop at the NeurIPS AI research conference earlier this week, the authors shared seven principles of data feminism. A number of AI ethics researchers have also called on data scientists to center the experiences of marginalized communities when designing AI systems and to consider the harm AI systems can inflict on such groups.
Researchers from Google’s DeepMind have also called for decolonizing AI in order to avoid producing AI systems that generate exploitation or oppression, a message echoed by research on data colonization in Africa. There’s also work calling for AI informed by the philosophy of ubuntu, which acknowledges the ways people are interconnected. Queer and indigenous AI frameworks were also introduced this year.
Distributed Blackness: African American Cybercultures
Distributed Blackness was written by André Brock Jr., an associate professor of digital media at Georgia Tech who contributed a brief essay to Black Futures about why BlackPlanet was a pioneering social media network.
Distributed Blackness includes an exploration of digital spaces like Black Twitter and covers some of the early entrepreneurs who built the first Black online spaces in the 1990s. Brock wrote that his book is meant to hark back to The Negro Motorist Green Book, which helped Black people travel and gather in safe spaces when moving across the United States.
“I am arguing that Black folks’ ‘natural internet affinity’ is as much about how they understand and employ digital artifacts and practices as it is about how Blackness is constituted within the material (and virtual) world of the internet itself. I am naming these Black digital practices as Black cyberculture,” Brock writes.
He says Black digital practices include “libidinal online expressions and practices of joy and catharsis about being Black.” He also examines forms of online activity he refers to as “ratchet digital practice.” He defines ratchery as the enactment and performance of ratchet behavior and aesthetics.
Examples include creative Twitter display names like Optimus Fine, Zora Neale Hustlin’, and Auntie Hot Flash Summer. Brock also explains why the book omits examples of ratchery like “WorldStar!” and why he made that choice with class issues in Black America and the work of W.E.B. Du Bois in mind. The book also examines factors that shape the Black digital experience, like the fact that roughly 55% of Black people have home broadband but 80% have smartphones.
One of the greatest aspects of this book is that it ruptures the idea of the internet and people in technology operating on a white default and offers a pointed critique of a tech culture that treats white as the norm and everyone else as “other.” It also takes a close — and at times critical — look at Afrofuturism, which Brock calls “an alternative path to analyzing Black technoculture.”
Algorithms do come up briefly in the book, but Distributed Blackness is not really about AI. It’s an exploration of Black expression and creativity online, an examination of technoculture as the “interweaving of technology, culture, self, and identity.”
The book’s vernacular bounces comfortably between academic terminology, social media references, and terms Brock coined himself. That can make parts of the book tough to read, but it’s rewarding. Distributed Blackness made me cackle at times and think at others.
Too Smart: How Digital Capitalism is Extracting Data, Controlling Our Lives, and Taking Over the World
We’ve all heard the marketing pitch: The smart device, smart car, smart home, and smart city will improve your life. But Too Smart author Jathan Sadowski argues that smart tech’s modest conveniences are what you get in exchange for not asking too many questions about a world full of data-collecting machines connected to the internet.
“This book will be called dystopian. It might even be accused of alarmism. Such reactions are to be expected in a culture that teaches us to trust in technology’s benevolent power,” he writes.
Sadowski notes that over time, people get used to “offending events” or privacy violations that come with smart tech, which he says gives businesses the ability to control, manage, and manipulate people. Smart tech, Sadowski writes, prioritizes the interests of corporate technocratic power over democratic rights and the social good. He argues that tech is not neutral and that the question is not whether it is political but what the politics behind it are.
“The key concern is not with control itself but rather with who has control over whom,” he writes. These companies “are technocrats creating systems that shape society and govern people. By neglecting the politics of smart tech, we allow powerful interests to reside in the shadows and exercise undue influence over our lives.”
The smart world, sometimes referred to as the internet of things (IoT), has grown from 8 billion devices in 2017 to 20 billion in 2020. The surveillance this enables, and the systems of control and manipulation it powers, can fundamentally reshape society, Sadowski writes, and form the foundation of capitalism in a digitized world.
He notes that data collected through smart devices can be used to predict consumer interests and upsell products or services, as with Amazon’s recommendation systems, or to power applications for the growing smart city sector.
Sadowski is critical of deterministic views of technopolitics because he believes such an approach cedes power to executives, engineers, and entrepreneurs.
Too Smart calls datafication a form of violence and says companies like Amazon and Google want to become, to borrow a phrase Tom Wolfe used to describe Wall Street titans in the 1980s, “masters of the universe.”
One of my favorite parts of this book is a chapter in which Sadowski details smart tech deployments in major U.S. cities and argues that when people think of smart cities, they need to think about New Orleans, not depictions of futuristic metropolises. New Orleans has a history of working with surveillance companies like Palantir and using predictive policing. In 2018, The Verge teamed up with the Investigative Fund to report that the New Orleans Police Department’s work with Palantir was such a closely kept secret that members of the city council didn’t even know about it. Earlier this month, the New Orleans City Council voted to ban facial recognition and predictive policing tools.
If you don’t trust the “smart” agenda for homes and cities or are concerned about growing rates of AI-powered surveillance technology used by democratic and authoritarian governments, you might want to read Too Smart.
Girl Decoded: A Scientist’s Quest to Reclaim Our Humanity by Bringing Emotional Intelligence to Technology
Girl Decoded is a book Affectiva CEO Rana el Kaliouby wrote about her journey from growing up in Cairo, Egypt, to building a U.S.-based company that uses AI to classify human emotion. (Full disclosure: I moderated an onstage conversation at an Affectiva conference in Boston in 2018.)
It could be the amount of AI-related reading and writing I do, but what stood out to me wasn’t the technical side per se, though I did appreciate el Kaliouby divulging that, as a consequence of her work, she has a deep knowledge of the muscles responsible for facial expressions.
The book is about emotional intelligence, so I guess it’s predictable that I enjoyed reading about el Kaliouby’s family, her faith, and her journey to starting a company. Girl Decoded also details how el Kaliouby ended up in Boston working with people like MIT Media Lab professor and Affective Computing Group leader Rosalind Picard.
While Girl Decoded focuses on the opportunities of emotional intelligence tech, AI practitioners and researchers have raised questions about the validity of using AI to predict human emotion. And a paper recently accepted to the Fairness, Accountability, and Transparency conference (FAccT) questions the field of affective computing.
But el Kaliouby argues that ethical emotional intelligence can benefit society. Examples range from helping people on the autism spectrum identify human emotion and interact with others to recognizing when a driver is experiencing road rage or drowsiness or is otherwise distracted, a threat that has become more common since the advent of the smartphone.
She also writes about how consumers can punish corporations that engage in unethical behavior, like companies selling technology to spy on ethnic minorities.
You will also hear about how robots can change human behavior in positive ways. For example, Mabu is a home robot that uses Affectiva’s emotional intelligence to evaluate the responses of patients dealing with congestive heart failure. Its AI is trained using data from an American Heart Association knowledge graph to answer a patient’s questions. Affectiva’s technology has also been used in SoftBank Robotics’ Pepper and to scan the faces of shoppers watching advertisements in a supermarket setting.
This might be a good book for entrepreneurs interested in the arc of a founder’s story or anyone curious to hear arguments in favor of emotion recognition systems and human-centric technology.
Artificial Whiteness: Politics and Ideology in Artificial Intelligence
Because I read the previously mentioned paper “The Whiteness of AI” earlier this year, I expected to hear more about whiteness in science fiction and pop culture, but this is not that book. Artificial Whiteness was written by Yarden Katz, a fellow in Harvard Medical School’s Department of Systems Biology and an MIT graduate.
The book delivers a view of AI history not through significant technical advances, but through moments of collaboration between academia, industry, and government. It also examines the influence of an AI expert industry made up of the media, think tanks, and universities.
Artificial Whiteness references scholars like Angela Davis, Frantz Fanon, Toni Morrison, and W.E.B. Du Bois, but that comes in later chapters. It begins with a history of artificial intelligence in academia and its early ties to military funding. In exploring AI’s roots, Katz talks about how “artificial intelligence” is as much a marketing term as it is a field of computer science and industry.
“The all too real consequences of whiteness come from its connection to concrete systems of power. From colonial America to the present, whiteness has been intertwined with capitalist conceptions of property inscribed into law,” the book reads. “AI’s new progressive rebranding is not a real departure from the field’s imperial roots but rather an adaptation to changing political sensibilities.”
Katz writes about how whiteness is used to sustain oppressive relationships, but you’ll hear more about Henry Kissinger, geopolitics, and efforts to maintain American dominance in the first 100 pages than about the social hierarchy of white supremacy.
Among solutions Katz offers are acts of refusal, which he argues can be generative. Examples of this include early AI researchers Terry Winograd and Joseph Weizenbaum, who made a point of refusing military funding. Today, AI researchers have also refused to take money from Google.
“When the neoliberal logic surrounding the university pushes for more partnerships, more interdisciplinary collaboration, and the creation of more institutes that naturalize the military-industrial-academic machine, it seems to me that a different disposition — one of refusal — becomes even more essential,” Katz writes.
This is a book about how white supremacy can be found at the roots of artificial intelligence, an ongoing influence confirmed by links between AI startups and white supremacists. It’s also about naming powerful forces in the industry, like AI experts and universities. And the book gives readers insight into the integral role marketing played and continues to play in the history of AI, a relationship that comes to mind when a survey finds 40% of AI startups don’t actually use AI in ways material to their business.
Final thoughts
It should come as no surprise to anyone who regularly follows my work that the nine books on this list touch on policy, discrimination, human rights violations, and harms associated with deploying artificial intelligence. I try to keep these insights in mind when I hear Microsoft is working on tech to enable e-carceration or when companies make claims about the efficacy of an AI system.
I’m already looking forward to Your Computer Is on Fire, a collection of stories about how to fix a broken computing industry. Know a book I should read to inform my reporting about artificial intelligence in 2021, or one I should have read this year? You can send me a DM on Twitter @kharijohnson or send me an email.