
UK's AI Safety Summit to Focus on Risk and Governance

Topics at First-Ever Summit to Include AI Cybersecurity and Nation-State Threats

Cybersecurity and safety risks tied to frontier artificial intelligence will be a key focus of the U.K. government's first-ever global AI summit.


Unveiling the plan for its AI Safety Summit scheduled Nov. 1-2 at Bletchley Park, Buckinghamshire, the U.K. government said the event will focus on preventing the misuse of emerging AI capabilities that are deemed dangerous enough to pose "severe risks to public safety."

"We have already seen the dangers AI can pose: teens hacking individuals' bank details, terrorists targeting government systems, cybercriminals duping voters with deepfakes and bots, even states suppressing their peoples," Deputy Prime Minister Oliver Dowden said last week. "Our efforts need to preempt all of these possibilities - and to come together to agree to a shared understanding of those risks.

Organizers warned that the potential misuse of AI could help nation-state groups or other adversaries execute cyberattacks against critical infrastructure or develop bioweapons capable of causing "significant harm" or "loss of life."

Noting that the capabilities of the technology "are very difficult to predict," even for the AI model developers, the government said in a statement that the summit will lead discussions on risks posed by "narrow AI" designed to perform a single task, such as that used for bioengineering, as well as by generative AI.

The summit represents a move not only to urgently address ways to mitigate AI risks but also to ensure that the government, British academics and businesses have a role in promoting global cooperation on AI development.

Although Prime Minister Rishi Sunak is eager to turn the U.K. into the next AI hub, British lawmakers have called out the government's slow response to regulating AI. In a letter published last month, lawmakers on the U.K. Parliament's Science, Innovation and Technology Committee said Britain's interim AI strategy, published in March, could impede the country's AI development because the government does not plan to introduce any new legislation in the near term. As a result, other jurisdictions, "principally the European Union and the United States," may well be the ones "to set international standards," they warned (see: Mitigating AI Risks: UK Calls for Robust Guardrails).

Unless Britain introduces its own legislation, British lawmakers fear that AI standards, governance and enforcement may follow the same path as the EU General Data Protection Regulation. Namely, if the EU articulates its position first, the U.K. may find it "difficult to deviate" if it favors a different approach.


About the Author

Akshaya Asokan


Senior Correspondent, ISMG

Asokan is a U.K.-based senior correspondent for Information Security Media Group's global news desk. She previously worked with IDG and other publications, reporting on developments in technology, minority rights and education.



