Euro Security Watch with Mathew J. Schwartz

The Dark Side of AI: Previewing Criminal Uses

Threats Include Social Engineering, Insider Trading, Face-Seeking Assassin Drones
Advertisement for a real-time voice cloning tool (Source: "Malicious Uses and Abuses of Artificial Intelligence")

"Has anyone witnessed any examples of criminals abusing artificial intelligence?"

That's a question security firms have been raising in recent years. But a new public/private report into AI and ML identifies likely ways in which such attacks might occur - and offers examples of threats already emerging.

The most likely criminal use cases will involve "AI as a service" offerings, as well as AI-enabled or AI-supported offerings, as part of the wider cybercrime-as-a-service ecosystem. That's according to the EU's law enforcement intelligence agency, Europol; the United Nations Interregional Crime and Justice Research Institute - UNICRI; and Tokyo-based security firm Trend Micro, which prepared the joint report: "Malicious Uses and Abuses of Artificial Intelligence."

AI refers to finding ways to make computers do things that would otherwise require human intelligence - such as speech and facial recognition or language translation. A subfield of AI, called machine learning, involves applying algorithms to help systems continually refine their success rate.

Defined: AI and ML (Source: "Malicious Uses and Abuses of Artificial Intelligence")
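
To make that definition concrete, here is a minimal, illustrative sketch (not taken from the report) of the loop machine learning describes: a model is retrained on progressively more data, and its success rate on unseen examples rises. The synthetic dataset and scikit-learn classifier are stand-ins for any real task, such as face or speech recognition.

```python
# Illustrative only: a toy machine-learning loop in which a model's
# success rate improves as it is trained on more data.
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

# Synthetic stand-in for a real task such as spam or face classification.
X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Retrain on progressively larger slices of the training data; accuracy
# on the held-out test set typically rises as the training set grows.
for n in (50, 200, 800, len(X_train)):
    model = LogisticRegression(max_iter=1000).fit(X_train[:n], y_train[:n])
    acc = accuracy_score(y_test, model.predict(X_test))
    print(f"trained on {n:4d} examples -> test accuracy {acc:.2f}")
```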

Criminals' Top Goal: Profit

If that's the high level, the applied level is this: Criminals have never shied away from finding innovative ways to earn an illicit profit, whether through refinements to social engineering, new business models or the adoption of new types of technology (see: Cybercrime: 12 Top Tactics and Trends).

And AI is no exception. "Criminals are likely to make use of AI to facilitate and improve their attacks by maximizing opportunities for profit within a shorter period, exploiting more victims and creating new, innovative criminal business models - all the while reducing their chances of being caught," according to the report.

Thankfully, all is not doom and gloom. "AI promises the world greater efficiency, automation and autonomy," says Edvardas Šileris, who heads Europol's European Cybercrime Center, aka EC3. "At a time where the public is getting increasingly concerned about the possible misuse of AI, we have to be transparent about the threats, but also look into the potential benefits from AI technology."

Emerging Concerns

The new report describes some emerging law enforcement and cybersecurity concerns about AI and ML, including:

  • AI-supported hacking: Already, Russian-language cybercrime forums are advertising a rentable tool called XEvil 4, which uses neural networks to bypass CAPTCHA security checks. Another tool, Pwnagotchi 1.0.0, uses a neural network model to improve its Wi-Fi hacking performance. "When the system successfully de-authenticates Wi-Fi credentials, it gets rewarded and learns to autonomously improve its operation," according to Trend Micro.
  • AI-assisted password guessing: For credential stuffing, Trend Micro says it found a GitHub repository earlier this year with an AI-based tool "that can analyze a large dataset of passwords retrieved from data leaks" and predict how users will alter and update their passwords in the future, such as changing 'hello123' to 'h@llo123' and then to 'h@llo!23.' Such capabilities could improve the effectiveness of password-guessing tools, such as John the Ripper and HashCat (a toy illustration of this kind of mutation logic appears after this list).
  • Small assassination drones: AI-powered facial recognition drones carrying a gram of explosives are now being developed, the report warns. "These drones are specifically for micro-targeted or single-person bombings. They are also usually operated via cellular internet and designed to look like insects or small birds. It is safe to assume that this technology will be used by criminals in the near future."
  • Insider trading: Criminals already attempt to profit from insider knowledge. But banking insiders, in particular, might create shadow AI models that cash in, based on inside knowledge about massive trades planned or executed by their organization, all while keeping the illicit trades small enough to avoid controls designed to detect money laundering, terrorism financing or insider trades.
  • Human impersonation on social networks: AI can be used to create bots that resemble actual humans. One AI-enhanced bot being advertised on the Null cybercrime forum claims to be able "to mimic several Spotify users simultaneously" while using proxies to avoid detection, Trend Micro says. "This bot increases streaming counts - and subsequently, monetization - for specific songs. To further evade detection, it also creates playlists with other songs that follow human-like musical tastes rather than playlists with random songs, as the latter might hint at bot-like behavior."
  • Deepfakes: In 2018, Reddit banned photos and videos in which a celebrity's face was superimposed over explicit content. Since then, however, a variety of tools have made it easier to generate such content. Although several social media platforms have banned deepfakes and pledged to maintain defenses to spot and block them, concerns remain. Election security experts, for example, have warned that they could be used as part of disinformation campaigns.
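
To make the password-mutation idea above concrete, here is a minimal, hypothetical Python sketch. It hard-codes a few substitution rules of the kind that tools such as HashCat already support; the AI-based tool Trend Micro describes instead learns such patterns from leaked-password corpora and ranks candidates by likelihood.

```python
# Hypothetical sketch: rule-based password mutation of the kind an
# AI-assisted guessing tool would learn from leaked-password corpora.
from itertools import product

# A few substitutions users commonly make when "strengthening" passwords.
SUBSTITUTIONS = {"e": ["e", "@", "3"], "o": ["o", "0"], "1": ["1", "!"]}

def mutate(password: str) -> set[str]:
    """Generate candidate variants of a known or leaked password."""
    choices = [SUBSTITUTIONS.get(ch, [ch]) for ch in password.lower()]
    return {"".join(combo) for combo in product(*choices)}

# A learned model would rank candidates by likelihood rather than
# enumerate them exhaustively, as this toy version does.
print(sorted(mutate("hello123")))  # includes 'h@llo123' and 'h@llo!23'
```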

Criminals Keep Seeking Small Improvements

The attacks described in the paper are largely theoretical. Recently, Philipp Amann, head of strategy for Europol's EC3, told me that there are as yet few known criminal cases involving AI and ML.

In one case, "criminals allegedly used an online tool to emulate the voice of the CEO," says Europol's Philipp Amann

Even criminal uptake of deepfakes has been scant. "The main use of deepfakes still overwhelmingly appears to be for non-consensual pornographic purposes," according to the report. It cites research from last year by the Amsterdam-based AI firm Deeptrace, which found 15,000 deepfake videos online, of which 96% were pornographic; 99% of those mapped the faces of female celebrities onto the bodies of pornographic performers.

Maybe that's because criminals are still searching for good use cases?

For example, Amann told me that one known case allegedly involved "an online tool to emulate the voice of the CEO" at a company. A fraudster appears to have phoned a German senior financial officer based in the U.K. The officer reported that the voice on the other end sounded like a native German speaker who self-identified as the CEO and was seeking an urgent money transfer.

Access to such tools makes it easier for criminals to potentially increase the success of their attacks by making their social engineering more effective. "It's just another way of convincing you that you actually are talking to your counterpart," Amann said. "So the social engineering is something that we need to be aware of and which requires training, awareness and education, on an ongoing basis."

'Malicious Innovations'

Criminals rarely reinvent the wheel. Ransomware, for example, is just the latest variation on the old kidnapping-and-ransom racket (see: Ransomware: Old Racket, New Look).

Expect criminals to use anything that makes the latest attacks more automated, easier to execute at scale, less costly, and more reliable and effective.

"Cybercriminals have always been early adopters of the latest technology and AI is no different," says Martin Roesler, head of forward-looking threat research at Trend Micro. "It is already being used for password guessing, CAPTCHA-breaking and voice cloning, and there are many more malicious innovations in the works."



About the Author

Mathew J. Schwartz

Executive Editor, DataBreachToday & Europe, ISMG

Schwartz is an award-winning journalist with two decades of experience in magazines, newspapers and electronic media. He has covered the information security and privacy sector throughout his career. Before joining Information Security Media Group in 2014, where he now serves as executive editor of DataBreachToday and of European news coverage, Schwartz was the information security beat reporter for InformationWeek and a frequent contributor to DarkReading, among other publications. He lives in Scotland.



