UK Government Urged to Publish Guidance for Electoral AI
Need to Prevent Use of AI to Create False or Misleading Info, Researchers Say

Artificial intelligence has had a limited impact on the outcome of specific elections, said the U.K.'s Alan Turing Institute, but evidence suggests its use in campaign settings creates second-order risks such as polarization and damaged trust in online sources.
The threats AI poses to election integrity aren't new, the think tank researchers said in a report. But AI has the potential to enhance those threats, which include phishing, cyber intrusions and fake news. Threat actors may use bots or inauthentic accounts to amplify reach or micro-target voters.
The United Kingdom is preparing for a snap election on July 4, giving regulators "limited time to make significant changes to election security." Nonetheless, the researchers said, government officials should publish guidance that sets "clearer expectations on political parties regarding fair use of AI in the election period."
The guidance should require political parties to clearly label AI-generated materials, provide a list of certified deepfake tools and create live repositories of AI-generated material from recent elections to help voters identify AI-generated content, the researchers said.
"With a general election just weeks away, political parties are already in the midst of a busy campaigning period," said, Sam Stockwell, research associate at The Alan Turing Institute. "Right now, there is no clear guidance or expectations for preventing AI from being used to create false or misleading electoral information. That's why it's so important for regulators to act quickly before it's too late."
Turing researchers analyzed data on 112 national elections held or scheduled between January 2023 and the upcoming U.K. general election. Only 19 showed indicators of AI interference. "As of May 2024, evidence demonstrates no clear signs of significant changes in election results compared to the expected performance of political candidates from polling data," the report says.
"We nevertheless must use this moment to act and make our elections resilient to the threats we face. Regulators can do more to help the public distinguish fact from fiction," said Alexander Babuta, director of the Turing Institute's Center for Emerging Technology and Security.
The report comes just days after the U.K. Parliament's Science, Innovation and Technology Committee published a report on AI development in the country. The committee identified 12 risks posed by AI systems, including bias, data protection challenges and copyright infringement.
The committee lauded the government's pro-innovation stance of not imposing binding AI regulation but said it should be ready to introduce an AI-specific law if the current sector-specific rules or voluntary commitments from companies fail.
On May 23, members of the Joint Committee on the National Security Strategy wrote a letter to the prime minister urging his government to release guidance on spotting election-related deepfakes and misinformation.