
New AI Bot Could Take Phishing, Malware to a Whole New Level

Experts Warn ChatGPT Could Usher in Phishing 3.0, Democratize Hacking Technology
Watch this video with ISMG's Anna Delaney, featuring demos of ChatGPT and perspectives from cybersecurity experts.

Anything that can write software code can also write malware. While most threat actors take hours or even days to write malicious code, the latest AI technology can do it in seconds. Worse, it could open the door to rapid innovation for hackers with little or no technical skill, or help them overcome language barriers to write the perfect phishing email.


Those are some of the fears in the cybersecurity community about the latest viral sensation, ChatGPT, an AI-based chatbot developed by OpenAI that specializes in dialogue. Simply ask a question, and it can compose a poem, write a term paper for a high schooler or craft malware code for a hacker.

Within the first week, more than 1 million people registered to try the online app. In fact, comedian Trevor Noah on "The Daily Show" last week said it's a sign that AI has finally gone mainstream.

Since the cybercrime market for ransomware as a service is already organized to outsource malware development, tools such as ChatGPT could make the process even easier for criminals entering the market.

"I have no doubt that ChatGPT and other tools like this will democratize cybercrime," says Suleyman Ozarslan, security researcher and co-founder of Picus Security. "It's bad enough that ransomware code is already available for people to buy off the shelf on the dark web. Now virtually anyone can create it themselves."

In testing ChatGPT, Ozarslan instructed the bot to write a phishing email, and it spat out a polished message within seconds. "Misspellings and poor grammar are often tell-tale signs of phishing, especially when attackers are targeting people from another region. Conversational AI eliminates these mistakes, making it quicker to scale and harder to spot them," he says.

While the terms of service for ChatGPT prohibit using the software for nefarious purposes, Ozarslan got the bot to write the phishing email by telling it the message would be used in a simulated attack. The software warned that "phishing attacks can be used for malicious purposes and can cause harm to individuals and organizations," but it created the phishing email anyway.

This phishing email generated by ChatGPT even suggests where to place the malicious link. (Source: Picus Security)

Despite guardrails meant to keep users from causing mischief, numerous researchers have found ways to bypass the warnings. "It's like a 3D printer that will not print a gun but will happily print a barrel, magazine, grip and trigger together if you ask it to," Ozarslan says.

Another computer researcher impressed by ChatGPT's capabilities, Brendan Dolan-Gavitt, assistant professor at New York University, asked the bot to solve an easy buffer overflow challenge. ChatGPT correctly identified the vulnerability and wrote code exploiting the flaw. Although it made a minor error in the number of characters in the input, the bot quickly corrected the code after Dolan-Gavitt prompted it.
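For context, buffer overflow bugs of this kind arise when a program copies attacker-controlled input into a fixed-size buffer without checking its length. The C sketch below is a hypothetical, minimal illustration of the class of flaw involved, not the actual challenge Dolan-Gavitt used.

/* Minimal, hypothetical example of a stack buffer overflow -- the
   class of flaw in the challenge, not the challenge itself.
   Build with protections off to observe it: gcc -fno-stack-protector */
#include <stdio.h>
#include <string.h>

void vulnerable(const char *input) {
    char buf[16];          /* fixed-size stack buffer */
    strcpy(buf, input);    /* no bounds check: input longer than 15
                              bytes overruns buf and corrupts adjacent
                              stack memory, including the saved
                              return address */
    printf("You entered: %s\n", buf);
}

int main(int argc, char **argv) {
    if (argc > 1)
        vulnerable(argv[1]);   /* attacker controls input length */
    return 0;
}

An exploit of the sort ChatGPT produced supplies input just long enough to overwrite the saved return address, and getting that character count exactly right is the detail the bot initially got wrong.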

Ozarslan also asked the AI to write software in Swift that could find all Microsoft Office files on his MacBook and send them over HTTPS to his web server. He then asked the tool to encrypt the Office files and send him the private key for decryption.

Despite that request being potentially more dangerous than the phishing email, ChatGPT produced the sample code without any warning.

ChatGPT generated ransomware code but displayed no warning about the content. (Source: Picus Security)

New Age of Deepfakes and Phishing 3.0

Peter Cassidy, general secretary with the Anti-Phishing Working Group, tells Information Security Media Group that phishing attacks have become more focused. "We've gone from broken English attacks against the banks in the English-speaking countries to extremely focused ones in order to create a false scenario about the victim," he says.

Cassidy explains how ChatGPT could make it easier for cybercriminals to hit their targets in record numbers and with accuracy. "You can now get it to create a greeting for a birthday or wishes for someone who got out of the hospital, and in whichever language you want," he adds.

"Threat actors are never computer scientists when the arrests are made. It's always some 14-year-old kid who taught himself how to build malware from scratch, based on what he saw online. Phishing requires determination," he adds.

But tools for coders are also tools for threat actors. In a recent blog post, Eyal Benishti, CEO at Ironscales, called ChatGPT a double-edged sword and warned of AI leading to Phishing 3.0.

Deepfake technology uses AI to create fabricated content, making it look like the real thing. It has the proper context, and it sounds and reads like a legitimate message. "Imagine a combined attack where the threat actor impersonates a CEO with an email to accounting to create a new vendor account to pay a fake invoice, followed up by a fake voicemail with the CEO's voice acknowledging the authenticity of this email," he says.

"It is only a matter of time before threat actors combine phishing and pretext methods to deliver compelling, coordinated and streamlined attacks."

Now that personal information is publicly accessible over social media and other places on the web, it has become easier to harvest, correlate and put into context using sophisticated models designed to look for opportunities to create highly targeted attacks.

Only a week after it was introduced, ChatGPT was banned on Stack Overflow, a Q&A forum for programmers. After many people posted answers from ChatGPT, presumably to farm points on the platform, moderators noticed "a high rate of incorrect answers, which typically look like they might be good," they wrote.

A New Source for Malware Innovation?

In a tweet, OpenAI CEO Sam Altman agrees that cybersecurity is one of the principal risks of "dangerously strong AI."

And in a paper about OpenAI's code-writing model - Codex - the company researchers say that "the non-deterministic nature of systems like Codex could enable more advanced malware. While software diversity can sometimes aid defenders, it presents unique challenges for traditional malware detection and antivirus systems that rely on fingerprinting and signature-matching against previously sampled binaries."
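To illustrate the point about fingerprinting, the C sketch below shows a naive exact-match signature scan, with a hypothetical byte pattern standing in for a real malware fingerprint. Production antivirus engines are far more sophisticated, but the underlying weakness is the same: code generated non-deterministically yields different bytes on each run, so a signature taken from one sample may never match the next.

/* Naive signature matcher, sketched for illustration only. The
   SIGNATURE bytes are hypothetical; real engines match hashes and
   much richer patterns against previously sampled binaries. */
#include <stddef.h>
#include <string.h>

static const unsigned char SIGNATURE[] = { 0xDE, 0xAD, 0xBE, 0xEF };

int matches_signature(const unsigned char *data, size_t len) {
    if (len < sizeof(SIGNATURE))
        return 0;
    for (size_t i = 0; i <= len - sizeof(SIGNATURE); i++)
        if (memcmp(data + i, SIGNATURE, sizeof(SIGNATURE)) == 0)
            return 1;   /* known-bad byte pattern found */
    return 0;           /* no match -- which is exactly the problem
                           when every generated sample differs */
}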

Application security and model deployment strategies, including rate-limiting access and abuse monitoring, can manage this threat in the near term, "though that is far from certain," the report adds.
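Rate limiting of the kind the paper mentions is often built as a token bucket: each caller gets a budget of requests that refills at a steady rate, so short bursts are tolerated but sustained abuse is throttled. The C sketch below is a minimal, illustrative version; the names and limits are assumptions, not OpenAI's actual deployment controls.

/* Minimal token-bucket rate limiter, sketched under assumed limits.
   One bucket per API caller; allow_request() is checked before each
   model invocation. */
#include <stdbool.h>
#include <time.h>

typedef struct {
    double tokens;          /* requests currently available       */
    double capacity;        /* maximum burst size                 */
    double refill_per_sec;  /* sustained requests allowed per sec */
    time_t last_refill;     /* when tokens were last topped up    */
} TokenBucket;

bool allow_request(TokenBucket *b) {
    time_t now = time(NULL);
    b->tokens += (double)(now - b->last_refill) * b->refill_per_sec;
    if (b->tokens > b->capacity)
        b->tokens = b->capacity;       /* cap the burst budget */
    b->last_refill = now;
    if (b->tokens >= 1.0) {
        b->tokens -= 1.0;              /* spend one token      */
        return true;
    }
    return false;                      /* throttled: deny or queue */
}

Abuse monitoring would then complement the limiter, flagging callers who repeatedly hit the throttle for closer review.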

Although ChatGPT is scary good, it has imperfections, and that gives defenders time to tighten the fences. "Attackers won't stand still, nor should the defenders. As AI makes it easier for attackers to scale, it's vital for companies to validate security against real-world attackers and be proactive against new threats like AI as they emerge, rather than waiting around to see how they will impact them in the future," says Ozarslan.

Microsoft Infrastructure

ChatGPT is built on the GPT-3 deep learning model from OpenAI, which entered a partnership with Microsoft in 2019. Recently, Altman credited the Microsoft Azure cloud for providing the AI infrastructure that powers the GPT language models.

Microsoft's experience with AI traces back to 1993, when AutoCorrect was launched. Over two decades later, Microsoft invested $1 billion in OpenAI and plans to commercialize GPT-3.

In November 2021, Microsoft launched the Azure OpenAI Service, giving Azure customers the ability to use OpenAI's machine-learning models, which were previously available by invitation only.

The partnership helps cement Microsoft's Azure cloud infrastructure as the platform of choice for the next generation of AI.


About the Author

Anna Delaney


Director, Productions, ISMG

An experienced broadcast journalist, Delaney conducts interviews with senior cybersecurity leaders around the world. Previously, she was editor-in-chief of the website for The European Information Security Summit, or TEISS. Earlier, she worked at Levant TV and Resonance FM and served as a researcher at the BBC and ITV in their documentary and factual TV departments.

Anviksha More


Senior Subeditor, ISMG Global News Desk

More has seven years of experience in journalism, writing and editing. She previously worked with Janes Defense and the Bangalore Mirror.



