
Tech execs urge Washington to accelerate AI adoption for national security

Tech company CEOs may be heading to Washington next week to take part in antitrust hearings in Congress, but this week high-profile executives from companies like Amazon, Microsoft, and Google gave the president, Pentagon, and Congress advice on how the United States can maintain AI supremacy over other nations. Today, the National Security Commission on AI released a set of 35 recommendations ranging from the creation of an accredited university for training AI talent to speeding up Pentagon applications of AI in an age of algorithmic warfare.

The National Security Commission on AI (NSCAI) was created by Congress in 2018 to advise on national AI strategy as it relates to defense, research investments, and strategic planning. Commissioners include AWS CEO Andy Jassy, Google Cloud chief AI scientist Andrew Moore, and Microsoft chief scientist Eric Horvitz. Former Google CEO Eric Schmidt acts as chairman of the group. Given concern over China's rise as an economic and military power and the growing use of AI by businesses and governments, the group's recommendations may have a long-lasting impact on the United States government and the world.

To bolster U.S. competitiveness in AI, the commission recommends steps such as creating a National Reserve Digital Corps, modeled on military reserve corps, to give machine learning practitioners a way to contribute to government projects on a part-time basis. Unlike the U.S. Digital Service, which asks tech workers to serve for one year, the NRDC would ask for no less than 38 days a year.

Commissioners also recommend the creation of an accredited university called the U.S. Digital Services Academy. Graduates would pay for their education with five years of work as a civil servant. Classes would include American history as well as mathematics and computer science. Students would participate in internships at government agencies and in the private sector.

A joint Stanford-NYU study found that only a small percentage of federal agencies are using complex forms of machine learning, and that trustworthiness of systems used for what it calls algorithmic governance will be critical to citizen trust. Released in February, the report urges federal agencies to acquire more internal AI expertise.

The quarterly report also includes an outline for putting ethics principles into practice with the goal of aligning American principles with engineering practices in major parts of the AI lifecycle.

“We hope that the key considerations we’re laying out now have the potential to form the foundation for international dialogue, on areas of cooperation and collaboration, even with potential adversaries,” said Horvitz, who gave a presentation on the subject during a meeting Monday where the recommendations received unanimous approval by commissioners.

Also among recommendations:

  • Train U.S. State Department employees in emerging technologies like AI that, as an NSCAI staff member put it, “define global engagement strategies.”
  • Encourage the Department of Defense to adopt commercial AI systems for things like robotic process automation. At VentureBeat’s Transform 2020 conference last week, Joint AI Center acting director Nand Mulchandani, a former Silicon Valley executive, stressed that the military will grow its reliance on private industry.
  • Build a certified AI software repository for the U.S. military to accelerate creation of AI and support research and development.
  • Create a database to track research and development projects within the U.S. military.
  • Have military leaders adopt an open innovation model for DoD to accelerate the Pentagon’s ability to create AI.
  • Integrate AI-enabled applications into “all major joint and service exercises,” as well as wargames and table-top exercises.
  • Invest in research and development for testing AI systems for compliance and for verifying results.

Google Cloud’s Moore said testing is important “because it won’t be long before 90% of the entire length of an AI project [pipeline] is testing and validation and only 10% will be the initial development. So we have to be good at this, or else we will see our country’s speed of innovation grind to a halt.”

Former deputy secretary of Defense and NSCAI commissioner Robert Work referred to the competition in AI as a competition in values, but stressed that testing and validation to prove results is also important for military leaders to confidently adopt AI applications.

Following the release of the NSCAI interim report to Congress last fall, the group began releasing quarterly recommendations to advise national leaders on how to maintain America’s edge in AI. Recommendations in the first quarterly report ranged from building public-private partnerships and government funding of semiconductor development to using the ASVAB military entrance exam to identify recruits with “computational thinking.”

A major topic of discussion throughout the meeting Monday was international relations — how the U.S. cooperates with allies, and how it treats adversaries. While many commissioners in the meeting Monday stressed the need to defend American AI supremacy over other nation-states, Microsoft’s Horvitz said, “Our biggest competitor is status quo and actually innovation, to be honest.”

NSCAI Commissioner Gilman Louie is founder of In-Q-Tel, the CIA’s investment arm. He said he welcomes healthy competition in the development of AI for exploration, science, health, and the environment, but that being the best in AI is a matter of national security, particularly with the rise of adversarial machine learning.

Louie said increased government adoption of AI is not just a matter of technical expertise or compute resources, but also a matter of cultural change. Once that change happens, he said, it can have a drastic and disruptive impact.

“I think there’s going to be a point somewhere maybe five or six years from now, when we get our hands around the basic uses of AI, that we will have a choice to make: whether or not we’ll continue to use AI for incremental improvement versus highly disruptive change,” he said. “When you think about offensive uses, defensive uses, support uses of AI, we tend to liken these new technologies within the department and national security apparatus in a way that doesn’t change the wiring diagram. It makes us a little bit faster, a little bit better, but we don’t want to change the way we think about our mental models of operating or constructs of organizations. I think the power of AI is that it could disrupt all of that, and if we’re not willing to disrupt ourselves, we’re going to let potential adversaries and competitors disrupt us.”

Katharina McFarland, chair of the National Academies of Science Board of Army Research and Development, said she’s seen machine learning deployments accelerate inside and outside the military during the COVID-19 pandemic. “There’s some hope here because people are starting by having to — not because they want to, but because they have to — to start having and developing some confidence in these tools,” she said.

The commission also discussed potential next steps such as testing and validation framework recommendations and ways to put ethical principles into practice.

Additional NSCAI recommendations are due out in the fall. The NSCAI is a temporary group that is scheduled to deliver a final report to Congress next spring and will dissolve in October 2021.

In other news at the intersection of AI and policy, today the Senate Committee on Commerce, Science, and Transportation advanced two bills that, if passed into law, would help shape U.S. AI policy.


VentureBeat
