AI Disinformation Likely a Daily Threat This Election Year

Disinformation Campaigns Could Have Real-World Impact, as 50 Countries Host Polls

Bad actors will disseminate artificial intelligence-crafted misinformation on a daily basis by this summer, predict researchers from George Washington University who constructed a mathematical model to forecast the rise of propaganda on social media platforms during a record election year.

The predicted deluge could be large enough to affect the outcomes of elections being held in more than 50 countries - including the United States, the United Kingdom and India - the researchers warn.

In what is touted as a first-of-its-kind quantitative analysis of how bad actors will misuse AI globally, the researchers extrapolated the frequency of attacks from data on two historical, technologically similar incidents involving the manipulation of online information systems, then examined those trends against the current pace of AI progress.
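
The team's model has not been published as runnable code, but the extrapolation the article describes can be sketched generically: fit a growth trend to the frequency of past manipulation incidents, then solve for when that trend crosses a once-a-day rate. The Python sketch below illustrates that kind of calculation with invented numbers; the exponential form, the threshold and the incident counts are assumptions for illustration, not the study's actual model or data.

```python
# Illustrative sketch only - not the GWU team's model or data.
# Fit a growth trend to hypothetical incident frequencies, then solve
# for when activity would cross a roughly once-per-day threshold.
import numpy as np

# Hypothetical monthly incident counts from two earlier manipulation
# campaigns (placeholder values, not the study's measurements).
months = np.array([0, 3, 6, 9, 12, 15, 18])
incidents_per_month = np.array([1, 2, 4, 7, 13, 24, 45])

# Fit an exponential trend: log(incidents) ~ a * t + b.
a, b = np.polyfit(months, np.log(incidents_per_month), 1)

# Solve for the month t at which the rate reaches ~30 incidents per
# month, i.e., roughly one AI-driven disinformation event per day.
daily_threshold = 30.0
t_daily = (np.log(daily_threshold) - b) / a
print(f"Trend crosses one incident per day around month {t_daily:.1f}")
```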

The study was published just days after authorities in New Hampshire opened an investigation into apparently AI-generated robocalls mimicking the voice of U.S. President Joe Biden, an attempt to suppress voter turnout in a Tuesday primary election.

ChatGPT maker OpenAI has announced steps to deter malicious use of its models in the 2024 U.S. election. But the GWU researchers said bad actors need only basic large language models - not the more advanced GPT-3 or GPT-4, which carry more guardrails - to manipulate and bias information on social media platforms.

Neil Johnson, lead study author and a professor at the university, emphasized the significance of the researchers' scientific approach. "You cannot win a battle without a map of the battlefield, but nobody has one," Johnson told Information Security Media Group.

The battlefields on which bad actors thrive are online communities - social media platforms large and small - forming an ecosystem of more than 1 billion potential targets, the report said. Johnson said the bad actors are "next door: the threat actors are connected directly into more than 1 billion individuals in mainstream online communities that have people with similar interests, such as parenting communities and pet lovers, and can hence influence them directly."

The assumption is that large social media platforms - such as X, formerly Twitter, and Facebook - are the key players, but that is misguided, Johnson said. Instead, there is a huge number of small social media platforms that connect people to larger platforms. "This is really important since it means that it doesn't matter much what the European Union and other governments force major platforms like Facebook and X to do. They are missing the elephant in the room - the strong web of smaller platforms that are connected into these larger platforms and keep the bad-actor battlefield strong."
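
As a toy illustration of that point, the sketch below builds a hypothetical graph in which fringe communities attach to small platforms that interlink with one another and also feed into two major platforms. The platform names, counts and wiring are invented for illustration, not the researchers' measured battlefield map; the point is that removing the major platforms from the graph leaves the small-platform mesh, and every community on it, fully connected.

```python
# Toy illustration of the "web of smaller platforms" described in the
# report - hypothetical structure, not the study's actual network data.
import networkx as nx

G = nx.Graph()
major = ["Facebook", "X"]
small = [f"small_platform_{i}" for i in range(8)]
communities = [f"community_{i}" for i in range(12)]

# Small platforms interlink in a mesh and feed into the majors;
# bad-actor communities attach to small platforms, not to majors.
for i, s in enumerate(small):
    G.add_edge(s, small[(i + 1) % len(small)])
    G.add_edge(s, major[i % 2])
for i, c in enumerate(communities):
    G.add_edge(c, small[i % len(small)])

# Moderating only the major platforms removes those nodes, but the
# small-platform mesh carrying content between communities survives.
G.remove_nodes_from(major)
print(nx.is_connected(G))  # True: every community remains reachable
```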

Johnson said the researchers were "amazed" that major players such as Facebook, X and YouTube did not necessarily understand the informational pathways feeding into their platforms. "This explains why their efforts to deal with harmful content have and will be so ineffective," he said.

The threat is especially dire since criminals don't need advanced GPT systems to manipulate information - a basic model will do. Basic models are actually more attractive for creating misinformation than their advanced counterparts since they can run on a laptop and are good enough to replicate the extreme views posted in online communities, the researchers said. Even before OpenAI released GPT-3 and GPT-4, AI experts forecast that by 2026, 90% of online content would be generated by machines without human intervention.

Instead of removing specific pieces of content, social media companies should deploy widespread, coordinated tactics to contain the threat, the report said. "Though everyone assumes that governments and platforms focus on getting rid of this bad actor activity, the better approach is containment," Johnson said.

"It will be COVID misinformation on steroids since AI doesn't have to sleep, eat or take a break. It works 24/7."


About the Author

Rashmi Ramesh

Assistant Editor, Global News Desk, ISMG

Ramesh has seven years of experience writing and editing stories on finance, enterprise and consumer technology, and diversity and inclusion. She previously worked at the formerly News Corp-owned TechCircle, the business daily The Economic Times and The New Indian Express.



