
AI Supremacy: Russia, China Could Edge Out US, Experts Warn

Cyberattack and Disinformation Risks From AI Loom Large, Commission Warns

The U.S. is in danger of falling behind China and Russia in developing artificial intelligence technologies and countering cybersecurity threats that could develop as AI use becomes more widespread, according to a newly released report from the National Security Commission on Artificial Intelligence.


The 750-page report offers more than 60 recommendations for how the U.S. could become the world leader in developing and deploying AI technologies as well as how to counter threats that leverage AI, including disinformation campaigns and cyberattacks by nation-states. The commission recommends many of the same changes to national policy that the Cyberspace Solarium Commission advocated in a 2020 report.

"AI is deepening the threat posed by cyberattacks and disinformation campaigns that Russia, China and others are using to infiltrate our society, steal our data and interfere in our democracy," according to the report's executive summary. "The limited uses of AI-enabled attacks to date represent the tip of the iceberg."

The report, released Monday, is the result of nearly two years' worth of work by the 15-member commission, which was created as part of the 2019 National Defense Authorization Act.

In addition to recommending that the government rethink its approach to artificial intelligence, the report calls on Congress to authorize billions of dollars in spending to fund the development of AI, machine learning and related technologies, both to help the U.S. compete more effectively and to protect critical assets from security threats.

The commission's report recommends increasing spending on research and development to $8 billion by 2025, up from $1.5 billion in 2022. Commissioners also recommend that the Office of the Director of National Intelligence spend $1 billion annually from 2022 to 2032 to study the effects that AI will have on a host of cybersecurity and national security issues.

The report also calls for the U.S. to spend millions developing programs to foster engineering and other skills needed to create, harness and deploy AI and machine-learning technologies.

In an opening letter to the report, Eric Schmidt, the chair of the National Security Commission on Artificial Intelligence and former CEO of Google, and Robert O. Work, the vice chair, note that the U.S. needs a holistic approach that balances the development of AI technologies with countermeasures to fend off threats from nation-states and hackers deploying AI.

"AI is dual-use, often open source and diffusing rapidly," Schmidt and Work note. "State adversaries are already using AI-enabled disinformation attacks to sow division in democracies and jar our sense of reality. States, criminals and terrorists will conduct AI-powered cyberattacks and pair AI software with commercially available drones to create 'smart weapons.'"

Lauren Christopher, an associate professor of electrical and computer engineering at Indiana University-Purdue University Indianapolis who has studied the effects of AI, believes that the commission was right to focus on how China and Russia are developing artificial intelligence technologies and that the U.S. needs to invest more to keep pace.

"Of the technologies outlined in the report, increasing the talent pool and bringing chip design back on-shore are spot-on," Christopher tells Information Security Media Group. "The U.S. needs to invest more and have leadership strategies in place. There is much already done in the U.S. on AI topics, but much more to do."

National Security Concerns

The report notes that while AI and machine learning are still in early stages of development, they're already being misused. This includes using deepfakes to spread disinformation (see: How Deepfakes Can Defeat Video ID Verification Tests).

"Rival states are already using AI-powered malign information," the report notes. "In the United States, the private sector has taken the leading role in combating foreign malign information. Social media companies in particular have extensive operations to track and manage information on their platforms. But coordination between the government and the social media firms remains ad hoc. We need a more integrated public-private response to the problem of foreign-generated disinformation."


The report adds that the theft of personally identifiable information, and the use of this data to feed algorithms that target potential victims, also remains a major concern. The 2015 theft of personal data from the U.S. Office of Personnel Management should serve as a warning about how nation-states can steal data and then attempt to use it to track individuals.

"For the government to treat the data of its citizens and businesses as a national security asset, substantial changes are required in the way we think about data security and in our policies and laws to strengthen it," the report notes.

The commission makes several recommendations. It calls for:

  • Creating a security development life cycle for AI systems, including conducting red team testing to ensure data is kept private as well as using federated and anonymized databases that only hold personal data for a limited time;
  • Increasing screening of foreign investment in AI and machine-learning systems developed in the U.S. and ensuring that supply chains meet security standards;
  • Developing national data privacy legislation to protect and regulate the use of U.S. citizens' data, including limiting the ability of nation-states to buy this data.
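The commission's first recommendation pairs two concrete controls: storing personal data under anonymized keys and holding it only for a limited time. The report does not prescribe an implementation, but the general idea can be sketched as follows. All names, the salt-rotation note and the 30-day window are hypothetical illustrations, not details from the report; identifiers are replaced with keyed hashes (pseudonyms) so stolen records cannot be trivially linked back to individuals, and entries past a retention window are treated as gone.

```python
import hashlib
import hmac
import time

RETENTION_SECONDS = 30 * 24 * 3600   # hypothetical 30-day retention window
SECRET_SALT = b"rotate-me-regularly"  # kept server-side; rotating it unlinks old records

def pseudonymize(identifier: str) -> str:
    """Replace a raw identifier (e.g. an email address) with a keyed hash,
    so records alone cannot be linked back to a person without the salt."""
    return hmac.new(SECRET_SALT, identifier.encode(), hashlib.sha256).hexdigest()

class LimitedRetentionStore:
    """Toy store that keys records by pseudonym and expires them."""

    def __init__(self):
        self._records = {}  # pseudonym -> (stored_at, payload)

    def put(self, identifier: str, payload: dict, now=None):
        now = time.time() if now is None else now
        self._records[pseudonymize(identifier)] = (now, payload)

    def get(self, identifier: str, now=None):
        now = time.time() if now is None else now
        entry = self._records.get(pseudonymize(identifier))
        if entry is None or now - entry[0] > RETENTION_SECONDS:
            return None  # expired or absent: personal data is not retained
        return entry[1]

    def purge_expired(self, now=None):
        """Physically delete entries older than the retention window."""
        now = time.time() if now is None else now
        self._records = {k: v for k, v in self._records.items()
                         if now - v[0] <= RETENTION_SECONDS}
```

In a real system the purge would run on a schedule, and the salt would live in a secrets manager; the sketch only shows the shape of the two controls the commission names.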

The report also warns that AI and machine-learning technology will be able to spread malware much faster and give operators new ways to create malicious code that can quickly take advantage of vulnerabilities in 5G networks and IoT devices.

Polymorphic malware already accounts for more than 90% of malicious executable files spotted in the wild, the commission says.
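The reason polymorphic malware matters is that each variant carries a different byte sequence, so static signatures keyed to file hashes never match twice. A minimal illustration of the mechanism (hypothetical, not from the report): XOR-encoding the same payload with different keys, a trivial stand-in for a polymorphic packer, yields files with distinct hashes even though the decoded behavior is identical.

```python
import hashlib

PAYLOAD = b"malicious-payload-logic"  # stand-in for a malware body

def polymorphic_variant(payload: bytes, key: int) -> bytes:
    """XOR-encode the payload with a one-byte key, mimicking a trivial
    polymorphic packer: same behavior, different on-disk bytes."""
    return bytes([key]) + bytes(b ^ key for b in payload)

def decode(variant: bytes) -> bytes:
    """Recover the original payload, as the stub loader would at runtime."""
    key = variant[0]
    return bytes(b ^ key for b in variant[1:])

# Two "generations" of the same malware:
v1 = polymorphic_variant(PAYLOAD, 0x41)
v2 = polymorphic_variant(PAYLOAD, 0x7F)

# A hash-based signature sees two unrelated files...
assert hashlib.sha256(v1).hexdigest() != hashlib.sha256(v2).hexdigest()

# ...even though the underlying payload is identical.
assert decode(v1) == decode(v2) == PAYLOAD
```

This is why defenders lean on behavioral detection and fuzzy matching rather than exact file hashes once polymorphism dominates, as the commission's 90% figure suggests it does.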

Military Use of AI

Besides addressing national security and cybersecurity concerns, the commission examined the deployment of AI throughout the military, including the development and deployment of autonomous weapon systems.

The U.S. Department of Defense has developed autonomous systems that have certain safeguards, such as requiring a human operator to authorize their use, the report notes.

The U.S. should take a lead role in promoting the responsible use of AI in weapons systems, especially those that can control nuclear weapons, the commission says.

"The United States should make a clear, public statement that decisions to authorize nuclear weapons employment must only be made by humans, not by an AI-enabled or autonomous system, and should include such an affirmation in the DoD's next Nuclear Posture Review," the report notes. "The United States should also actively press Russia and China, as well as other states that possess nuclear weapons, to issue similar statements."

Urging Action

Schmidt testified before the Senate Armed Services Committee last month, urging lawmakers to read the report and take action on its recommendations.

"A bold, bipartisan initiative can extend our country’s technology advantage - but only if we act now," Schmidt told the panel. "Success matters for more than our companies' bottom lines and our military's battlefield edge. To that end, I urge Congress again to adopt all of our AI commission recommendations, which provide a clear blueprint to win a technology competition that is centered around AI."


About the Author


Scott Ferguson

Managing Editor, News Desk

Ferguson is the managing editor for the news desk at Information Security Media Group. He's been covering the IT industry for more than 13 years. Before joining ISMG, Ferguson was editor-in-chief at eWEEK and director of audience development for InformationWeek. He's also written and edited for Light Reading, Security Now, Enterprise Cloud News, TU-Automotive, Dice Insights and DevOps.com.