Artificial Intelligence & Machine Learning, Fraud Management & Cybercrime, Next-Generation Technologies & Secure Development

Deepfake Phone Scams for Less Than a Dollar a Pop

Academics Build AI Agent With OpenAI to Execute Phone Scams at Scale
Artificial intelligence could automate phone scams. (Image: Shutterstock)

Hackers can use OpenAI's real-time voice API to carry out phone deepfake scams at scale for less than a dollar, said researchers.


The artificial intelligence giant released the Realtime API earlier this month, giving third-party developers capabilities similar to ChatGPT's advanced voice mode: they can send text or audio to the GPT-4o model and receive text, audio or both in response. The move sparked security concerns, particularly because the company had earlier delayed releasing ChatGPT's advanced voice mode over fears that AI models speaking in realistic simulated voices could be abused, and had faced an outcry for using a voice that sounded like actress Scarlett Johansson without her consent (see: Cloned Voice Tech Is Coming for Bank Accounts).

Researchers at the University of Illinois Urbana-Champaign found that adversaries could use the OpenAI service to build AI agents that automate phone scams, in which callers impersonate figures such as government officials or bank employees to trick victims into revealing sensitive information, including bank account details or Social Security numbers. AI agents are essentially software programs that use AI to perform tasks without human intervention.

Phone scams target 17.6 million Americans annually, at a cost of around $40 billion. With the new generative AI tooling, scammers could carry out an entire scam for just $0.75, said the research paper's co-author Daniel Kang, an assistant professor in the computer science department at UIUC.

The research team created its own AI agents to test the theory.

"Our agent design is not complicated," Kang said. "We implemented it in just 1,051 lines of code, with most of the code dedicated to handling real-time voice API. This simplicity aligns with prior work showing the ease of creating dual-use AI agents for tasks like cybersecurity attacks." Kang's prior works detail how LLM agents could autonomously hack zero-day vulnerabilities in a sandboxed environment and how GPT-4 agents could exploit unpatched "real-world" vulnerabilities without precise technical information.

In the latest study, Kang and his team used OpenAI's GPT-4o model and the browser automation tool Playwright, together with purpose-written code and fraud-related instructions, to execute the scams. The agents used Playwright functions to interact with targeted websites, and by coupling those functions with a standardized jailbreaking prompt template, the researchers bypassed GPT-4o's built-in safety controls, allowing the agents to carry out fraudulent actions online through automated browser tasks.
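To make that mechanism concrete, the sketch below shows the general pattern the paper describes: browser actions are exposed to a language model as callable tools, and the model's tool calls are dispatched to Playwright. It is an illustrative reconstruction, not the researchers' code; the function names, the minimal tool schema and the use of OpenAI's text-based tool-calling interface (rather than the Realtime voice API) are assumptions made for brevity, and the example task is deliberately benign.

# Hypothetical sketch of the agent pattern described above: browser actions are
# exposed to a language model as callable tools, and the model's tool calls are
# dispatched to Playwright. Illustrative only - not the UIUC researchers' code.
import json

from openai import OpenAI
from playwright.sync_api import sync_playwright

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

# Minimal tool schema; a real agent would expose richer browser primitives.
TOOLS = [
    {"type": "function", "function": {
        "name": "goto",
        "description": "Open a URL in the browser.",
        "parameters": {"type": "object",
                       "properties": {"url": {"type": "string"}},
                       "required": ["url"]}}},
    {"type": "function", "function": {
        "name": "fill",
        "description": "Type text into the element matching a CSS selector.",
        "parameters": {"type": "object",
                       "properties": {"selector": {"type": "string"},
                                      "value": {"type": "string"}},
                       "required": ["selector", "value"]}}},
    {"type": "function", "function": {
        "name": "click",
        "description": "Click the element matching a CSS selector.",
        "parameters": {"type": "object",
                       "properties": {"selector": {"type": "string"}},
                       "required": ["selector"]}}},
]

def run_agent(task: str, max_steps: int = 10) -> None:
    """Let the model drive a browser by emitting tool calls until it stops."""
    messages = [{"role": "user", "content": task}]
    with sync_playwright() as p:
        page = p.chromium.launch(headless=True).new_page()
        for _ in range(max_steps):
            resp = client.chat.completions.create(
                model="gpt-4o", messages=messages, tools=TOOLS)
            msg = resp.choices[0].message
            if not msg.tool_calls:  # the model produced a final answer
                print(msg.content)
                break
            messages.append(msg)
            for call in msg.tool_calls:  # dispatch each call to Playwright
                args = json.loads(call.function.arguments)
                if call.function.name == "goto":
                    page.goto(args["url"])
                    result = f"loaded page titled {page.title()!r}"
                elif call.function.name == "fill":
                    page.fill(args["selector"], args["value"])
                    result = "filled"
                else:  # click
                    page.click(args["selector"])
                    result = "clicked"
                messages.append({"role": "tool", "tool_call_id": call.id,
                                 "content": result})

# Example of a deliberately benign task:
# run_agent("Open https://example.com and report the page title.")

In the study, the same kind of tool loop was paired with voice input and output and a jailbreaking prompt; the point of the sketch is only how little scaffolding the agent loop itself requires, which is consistent with Kang's 1,051-line figure.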

Kang illustrates the result in a video demonstration of an AI agent carrying out a Bank of America funds transfer scam.

The research team's tests involved multiple scam tactics: bank and crypto account hijacking, gift code exfiltration and credential theft. Success rates varied, with Gmail credential theft achieving a 60% success rate, taking 122 seconds and costing $0.28 per attempt. Bank account transfers had a lower success rate of 20%, requiring 26 actions within 183 seconds and $2.51 in API fees.

The average success rate across scams stood at 36%, with an average cost of $0.75. Most failures stemmed from AI transcription errors, though navigating complex banking sites also presented challenges.
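One rough way to read those numbers, assuming the API fees for failed attempts are simply wasted, is to divide the cost per attempt by the success rate to get an expected cost per successful scam. The snippet below works that out for the two reported cases; the assumption is ours, not a figure from the paper.

# Back-of-the-envelope arithmetic on the reported figures (illustrative
# assumption: expected cost per success = cost per attempt / success rate).
gmail_cost, gmail_rate = 0.28, 0.60   # Gmail credential theft
bank_cost, bank_rate = 2.51, 0.20     # bank account transfer
print(f"Gmail credential theft: ${gmail_cost / gmail_rate:.2f} per success")  # ~$0.47
print(f"Bank account transfer:  ${bank_cost / bank_rate:.2f} per success")    # ~$12.55

Even on that pessimistic reading, the per-success cost remains tiny next to the roughly $40 billion in annual phone scam losses the researchers cite, which is the scale argument Kang's team is making.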

OpenAI did not respond to Information Security Media Group's request for comment. In its Realtime API announcement, the company said it used "multiple layers" of protections to mitigate the risk of API abuse and that repurposing output from its services to spam people is against its usage policies, adding that it "actively monitor[s] for potential abuse." Its past recommendation for how banks can prevent abuse of voice-based authentication: just don't use it.


About the Author

Rashmi Ramesh


Assistant Editor, Global News Desk, ISMG

Ramesh has seven years of experience writing and editing stories on finance, enterprise and consumer technology, and diversity and inclusion. She has previously worked at formerly News Corp-owned TechCircle, business daily The Economic Times and The New Indian Express.




