Michael Kanaan: The U.S. needs an AI ‘Sputnik moment’ to compete with China and Russia

In his book, “T-Minus AI,” Michael Kanaan calls attention to the need for the U.S. to wake up to AI in the same way that China and Russia have — as a matter of national importance amid global power shifts.

In 1957, the Soviet Union launched the Sputnik satellite into orbit. Kanaan writes that it was both a technological and a military feat. As Sputnik orbited Earth, the U.S. was suddenly confronted with its Cold War enemy demonstrating rocket technology potentially capable of delivering a weaponized payload anywhere in the world. That moment led to the audacious space race, which resulted in the U.S. landing people on the moon just 12 years after Sputnik launched. Kanaan's larger point is that although the space race was partially about national pride, it was also about the need to keep pace with growing global powers. He posits that the dynamics of AI and global power echo that moment in world history.

An Air Force Academy graduate, Kanaan has spent his entire career to this point in various roles in the Air Force, including his current one as director of operations for the Air Force/MIT Artificial Intelligence Accelerator. He's also a founding member of the controversial Project Maven, a Department of Defense AI project in which the U.S. military collaborated with private companies, most notably Google, to improve object recognition in military drones.

VentureBeat spoke with Kanaan about his book, the ways China and Russia are developing their own AI, and how the U.S. needs to understand its current (and potential future) role in the global AI power dynamic.

This interview has been edited for brevity and clarity.

VentureBeat: I want to jump into the book right at the Sputnik moment. Are you saying essentially that China is sort of out-Sputniking us right now?

Michael Kanaan: Maybe 'Sputniking' — I guess it could be a verb and a noun. Every single day, American citizens deal with artificial intelligence, and we're very fortunate as a nation to have access to the digital infrastructure and internet that we know of, right? Computers in our homes and smartphones at our fingertips. And I wonder, at what point do we realize how important this topic of AI is — something more akin to electricity, but not necessarily oil.

And you know, it's the reason we see the ads we see, it's the reason we get the search results we get, it drives your 401k. I personally believe it has in some ways ruined the game of baseball. It makes art. It generates language — the same issues that make fake news, of course, right? Like true computer-generated content. There are nations around the world, like China, putting it to very 1984-style dystopian uses.

And my question is, why has nothing woken us up?

What needs to happen for us to wake up to these new realities? And what I fear is that the day comes where it's something that shakes us to our core, or brings us to our knees. I mean, early machine learning applications arguably played no small part in the stock market crash that millennials are still paying for.

China woke up to such realities because of the significance of that game — the game of Go [when Go champion Lee Sedol was defeated by DeepMind's AlphaGo in 2016].

And similarly for Russia — albeit in very brute-force, early terms, arguably not even machine learning — it was Deep Blue. Russia prided itself on chess on the global stage; there is no doubt about that.

So, are they out-Sputniking us? It’s more [that] they had their relative Sputnik.

VB: So you’re saying that Russia and China — they’ve already had their Sputnik moment.

MK: [For Russia and China], it's like the computer has taken a pillar of my culture. And here's what we don't talk about — everyone talks about the Sputnik moment as, we look up into the sky and they can go to space. But, as I talk about in the book, it was the underlying rocket technology, which could re-enter the atmosphere and reach our once-perceived high ground, our geographically safe location. So there is a real material fear behind the moment.

VB: I thought that was [an] interesting way that you framed it, because I never read that piece of history that way before. You’re saying that [the gravity of the moment] was not because of the space part, it was because we were worried about the threat of war.

MK: Right. It was the first iteration of a functional ICBM.

VB: I think your larger point is we haven’t hit our Sputnik moment yet, and that we really need to, because our world competitors have already done it. Is that a fair characterization?

MK: That’s the message. The general tagline of the American citizen is something like this: At a time of the nation’s needing, America answers the call, right? We always say that. I sit back and I say, “Well, why do we need that moment? Can we get out ahead of it because we can read the tea leaves here?” And furthermore, the question is, yeah we’ve done that, what — three or four times? That’s not even enough to generate a reasonable statistic or pattern. Who’s to say that we’ll do it again, and why would we use that fallback as the catch-all, because there is no preordained right to doing that.

VB: When you imagine what America’s Sputnik moment might look like […] What would that even be?

MK: I think it has to be something in the digital sphere, perpetuated broadly to [make us] say, “Wait a second, we need to watch this AI thing.” Again, my question is “what does it take?” I wish I could figure it out because I think we’ve had more than a few moments that should have done that.

VB: So, China. One of the things that you wrote about was the Mass Entrepreneurship and Innovation Initiative. [As Kanaan describes it in the book, China's government helps fund a company and allows the company to keep most of the profit, which the company then reinvests, in a virtuous cycle.] It seems like it's working really well for China. Do you think something similar could work in the U.S.? Why or why not?

MK: Yeah. This gets at the idea of digital authoritarianism. Our central premise is that the more data you have, the better your machine learning applications are, and the better the capability is for the people using it, who re-inform it with new data — this whole virtuous cycle that ends up happening. And when it comes to digital authoritarianism… it works. In practice, it works well.

Now, here’s the difference, and why I wrote the book: What we need to talk about is, we need to make a different argument. And it is not very simple to say: Global customer X, by choosing to leverage these technologies and make the decisions you’re making on surveillance technologies and the way in which China sees the world … you are giving up this principle of the things we talk about: Freedom of speech, privacy, right? No misuse. Meaningful oversight. Representative democracy.

So at any moment, what you'll find in an AI project is, they're like "Ugh, if only I had that other data set." But you can see how that turns into a very slippery slope very, very quickly. So that's the tradeoff. Once upon a time, we could make the foundational moral argument, and the intellectual wants to say, "No, no, no. We see right in the world."

But that’s a tough argument to make — you’re seeing it play out in Tik Tok right now. People are saying, “Well, why should I get off that platform, you haven’t given me something else?” And it’s a tough pill to swallow to say, “Well let me walk you through how AI is developed, and how those machine learning applications for computer vision can actually [be used against] Uighurs — and millions of them — in China.” That’s tough. So, I see it as a dilemma. My mindset is, let’s stop trying to out-China China. Let’s do what we do best. And that’s by at least being accountable, and having the conversation that when we make mistakes, we at least aim to fix it. And we have a populace to respond to.

VB: I think the thing about Chinese innovation in AI is really interesting, because on the one hand, it’s an authoritarian state. They have really … complete … data [on people]. It’s complete, [and] there’s a lot of it. They force everyone to participate. […] If you didn’t care about humanity, that’s exactly how you would design data collection right? It’s pretty amazing.

On the other hand … the way that China has used AI for evil to persecute the Uighurs … they have this advanced facial recognition. Because it's an authoritarian state, the goal is not necessarily accuracy; the point of identifying these people is subjugation. So who cares if their facial recognition technology is precise and perfect — it's serving a different purpose. It's just a hammer.

MK: I think there’s a disconcerting underlying conversation that people are like, “Well it’s their choice to do with it what they want.” I actually think that anyone along the chain — and strangely now the customer is all of a sudden the creator of more accurate computer vision — that’s very strange, it’s that whole model of if you’re not paying for it, you’re the product. So, being a part of it is making it more informed, more robust, and more accurate. So I think that from the developer to the provider to literally the customer, in the digital age, has some responsibility to sometimes say no. Or to understand it to the extent of how it could play itself out.

VB: One of the unique things about AI among all technologies is that ensuring it's ethical, reducing bias, etc., is not just the morally right thing to do. It's actually a requirement for the technology to work properly. And I think that stands in big contrast to, say, Facebook. Facebook has no business incentive to cull misinformation or create privacy standards, because Facebook works best when it increases engagement and collects as much data about users as possible. So Facebook is always bumping into this thing where it's trying to appease people by doing something morally right — but it runs counter to its business model. When you look at China's persecution of the Uighurs using facial recognition, doing the morally right thing is not the point. I suppose that could mean that because China doesn't have these ethical qualms, it probably isn't slowing down to build ethical AI — which is to say, it's possible China is being very careless with the efficacy of its AI. And so, how can China expect to export that AI, and beat the U.S. and Russia and the EU, when it may not have AI that actually works very well?

MK: So here’s the point: When taking a computer vision algorithm from [a given city in China] or something, and not retraining it in any manner, and then throwing it into a completely new place, would that necessarily be a performant algorithm? No. However, when I mentioned AI is more of the journey than the end state — the practice of deploying AI at scale, the underlying cloud infrastructure, the sensors themselves, the cameras — they are incredibly effective with this.

It is a contradiction. You say "I want to do good," but here's the issue — and we'll do a thought experiment for a moment. I want to commend — truly — companies like Microsoft and Google and OpenAI, and all these ethics boards that are setting principles and trying to lead the cause. Because as we have said, the commercial sector leads development in this country. That's what it's all about, right? Market capitalism.

But here’s the deal: In America, we have a fiduciary responsibility to the shareholder. So you can understand how quickly when it comes to the practice of these ethical principles that things get difficult.

That’s not to say we’re doing wrong. But it’s hard to maximize the business profits, while simultaneously doing “right” in AI. Now, break from there: I believe there’s a new argument to shareholders and a new argument to people. That is this: By doing good and doing right…we can do well.

VB: I want to move on a bit and talk about Russia, because your chapter on Russia is particularly chilling. With regard to AI, they're developing military applications and propaganda. How much influence do you think Russia had in our 2016 presidential election, and what threat do you think Russia poses to the 2020 election? And how are they using AI within that?

MK: Russia’s use of AI is very — it’s very Russia. It’s very Ivan Drago, like, no kidding, I’ve seen this story before. Here’s the deal. Russia is going to use it to always level the playing field. That’s what they do.

They lack certain things that the rest of us — other nations, Westernized countries, those with more natural resources, those with warm water ports — have naturally. So they’re going to undercut it through the use of weapons.

Russian weapon systems don't subscribe to the same laws of armed conflict. They don't sit in some of the same NATO groups and everything else that we do. So of course they're going to use it. Now, the concern is that Russia makes a significant amount of money from selling weaponry. So if there are likewise countries that don't necessarily care quite as much about how those weapons are used, or whose populace doesn't hold them to account the way it does in America or Canada or the U.K., then that's a concern.

Now, on the aspect of mis- and disinformation: Whether anything they do materially affects anything is not my call. It's not what I talk about. But here is the reality, and I don't understand why this is not more widely known: It is public knowledge, acknowledged by the Russian government and military, that they operate in mis- and disinformation and conduct propaganda campaigns, which include political interference.

And this is all an integral, important part of national defense to them. It is explicitly stated in the Russian Federation doctrine. So it should not take us by surprise that they do this.

Now, when we think about computer-generated content … are these people just writing stories? You see language automation and prediction technology like GPT (and this is why OpenAI rolled it out in phases) that ultimately has far broader and more significant reach. And if most people don't necessarily catch a slip-up in grammar, or the difference between a semicolon and a comma … well, language prediction right now is more than capable of making only little mistakes like that.

And the most important piece, and the one that I believe so much — because again, this is all about Russia leveling the playing field — is the Hannah Arendt quote: "And a people that no longer can believe anything cannot make up its mind. It is deprived not only of its capacity to act but also of its capacity to think and to judge. And with such a people you can then do what you please."

Mis- and disinformation has existed among private business competitors, nation-state actors, Julius Caesar, and everyone else, right? This is not new. But the extent of the reach — that is new, and it can be perpetuated, then further exported, and [contribute to the growth of] these echo chambers that we see.

Ultimately, I make no calls on this. But, you know, read their policy.

VB: So, regarding Russia’s military AI. You wrote that Russia is competitive in that regard. How concerned should we be about Russia using AI-powered weapons, exporting those weapons, and how might that spark an actual AI arms race between Russia and the United States.

MK: Did you ever watch the short film "Slaughterbots"? […] I don't think slaughterbots are that complex. If you had someone fairly well-versed on GitHub, and had a DJI [drone], how much work would it actually take to make that come into reality, to make a slaughterbot? Not a ton.

We've looked at it as an obligation to develop this technology publicly in a lot of ways — which is the right thing. But we do have to recognize the inherent duality behind it. That is: take a weapon system, add a fairly well-versed programmer, and voilà, you have "AI-driven" weapons.

Now, break from that. There's a Venn diagram here. We use the word "automation" interchangeably with "artificial intelligence," but they're two different things that certainly overlap. We've had automated weapons for a long time — very rules-based, very narrow. So first, our conversation needs to separate the two: automation doesn't equal AI.

Now, when it comes to AI weapons — and there is plenty of public-domain material on Russia developing AI guns, AI tanks, etc., right? This is nothing new. Does that necessarily make them better weapons? I don't know — maybe in some cases, maybe not. The point is this: When it comes to the strict measures that are currently in place — again, we put this AI conversation up on a pedestal, like everything has changed, like there is no law of armed conflict, like there is no public law on meaningful human oversight, like there aren't automation documents that have for a long time addressed automated weaponry — the conversation hasn't changed. And that's despite the presentation of AI, which in most cases is more like illuminating a pattern you didn't see than automating a strike capability.

So certainly, robotic guns and automated weapons are something we have to pay close attention to. But the real concern behind the "arms race" — which is specifically why I did not put "race" in the title of this book — is the pursuit of power.

We’re going to have to always keep these laws in place. However, I have seen, except in the far reaches of science fiction, not the realities of today, that laws don’t work for artificial intelligence, as it stands now. We are strictly beholden to them, and are accountable for those.

VB: There’s a single passage in the book in italics. [The passage refers to the Stamp Act, a tax that England levied against the American colonies in which most documents printed in the Americas had to be on paper produced in London.] “Consider the impact: in an analog age, Britain’s intent was to restrict all colonial written transactions and records to a platform imposed upon the colonies from outside their cultural borders. In today’s digital atmosphere, China’s aspirations to spread its 5G infrastructure to other nations who lack available alternatives, and who will then be functionally and economically dependent upon a foreign entity, is not entirely different.” Is there a reason that one paragraph is in italics?

MK: We’ve seen this before, and I don’t know why we make the conversation hard. Let’s look at the political foundations, the party’s goals, and the culture itself to figure out how they’ll use AI. It’s just a tool, it’s an arrow in your quiver that’s sometimes the right arrow to pick and sometimes not.

So what I’m trying to do in that italicized sentence is pull a string for the reader to recognize that what China is doing is not characteristically much different than why we rose up and why we said “We need to have representative governments that represent the people. This is ridiculous.” So what I’m trying to do is inspire that same moment of: Stop accepting the status quo for those who are in authoritarian governments and to be holed into their will, where you can’t make these decisions, and it’s patently absurd you can’t.

VB: Along the lines of figuring out what we're doing as a country and having sort of a national identity: Most of the current U.S. AI policies and plans seem to be more or less held over from the late Obama administration. And I can't quite tell how much was changed by the Trump-era folks — I know some of the same people are there making those policies; of course, a lot of it's the same.

MK: What the Obama administration did … he was incredibly prescient about how he saw AI playing out in the future. He said, perhaps this allows us to reward different things. Maybe we start paying stay-at-home dads and art teachers and everything else, because we don't have to do these mundane computer jobs that humans shouldn't do anyway. He set forth a lot of stuff, and there's a lot of work [that he did]. And he left office before it was quite done.

AI is an incredibly bipartisan topic. Think about it. We're talking about holdover work from NSF and NIST and everyone else in the Obama administration, and then it gets approved in the Trump administration and publicly released? Do we even have another example of that? I don't know. The AI topic is bipartisan in nature, and that's awesome — it's one thing we can rally around.

Now, the work done by the Obama administration set the course. It set the right words, because it's bipartisan; we're doing the right thing. Then the Trump administration started living the application — exercising that policy, getting cash out, and all of that. So I would say they've done a lot. Chiefly, the National Security Commission on AI is awesome; [I would] just commend, commend, commend more stuff like that.

So I don’t actually tie this AI effort to either administration, because it’s just inherently the one bipartisan thing we have.

VB: How do you think U.S. AI and policy investment may change, or remain the same, under a second Trump term versus a Biden administration?

MK: Here’s what I do know: Whatever the policies are — again, being bipartisan — we know that we need a populace that’s more informed, more cognizant. Some experts, some not.

China has a 20-some-odd-volume machine learning course that starts in kindergarten [and runs] all the way through primary school. They recognize it, right? There are a number of … Russia announcing STEM competitions in AI, and everything else.

The thing that matters most right now is to create a common dialogue, a common language on what the technology is and how we can grow the workforce for the future to use it for whatever future they see fit. So regardless of politics, this is about the education of our youth right now. And that’s where the focus should be.

Let’s block ads! (Why?)

VentureBeat

About

Check Also

The scale of ambition in gaming is getting bigger | Brian Ward fireside chat

The scale of ambition for Saudi Arabia when it comes to moving into the games …