Artificial Intelligence & Machine Learning , Next-Generation Technologies & Secure Development
Slack Patches Prompt Injection Flaw in AI Tool Set
Hackers Could Exploit Bug to Manipulate Slack AI's LLM to Steal DataChat app Slack patched a vulnerability in its artificial intelligence tool set that hackers could have exploited to manipulate an underlying large language model to phish employees and steal sensitive data.
See Also: Vá à luta com armas mais inteligentes: acelere seu SOC com IA
The Salesforce-owned text client offers Slack AI as an add-on service. It says the function uses the "conversation data already in Slack to create an intuitive and secure AI experience tailored to you and your organization."
Researchers at PromptArmor found a flaw in the form of a prompt injection vulnerability. Prompt injection flaws exist because LLMs cannot recognize whether an instruction is malicious or not. "As such, if Slack AI ingests any instruction via a message, if that instruction is malicious, Slack AI has a high likelihood of following that instruction instead of, or in addition to, the user query," the researchers said.
Hackers with a Slack workspace account could have exploited the flaw to steal business files or sensitive data shared on the collaboration platform simply by querying the AI tool to fetch it for them, presenting a significant data exposure risk, the researchers said.
When prompted, Slack AI retrieves data from both public and private channels, even those the querying employee is not part of. This can expose API keys and sensitive customer data in private channels, which a hacker can potentially exfiltrate and abuse.
Bad actors could have also exploited the flaw to inject malicious prompts to phish employees to gain wider access into the target organization. Workplaces can connect Slack to third-party storage, increasing the risk surface area for the latest flaw. Hackers could inject malicious instructions in the documents. "The issue here is that the attack surface area fundamentally becomes extremely wide. Now, instead of an attacker having to post a malicious instruction in a Slack message, they may not even have to be in Slack," the researchers said.
The PromptArmor team said that the attack is "very difficult" to trace, as Slack AI does not cite the attacker's message as a source for the output. The researchers advised changing Slack AI's settings to restrict analysis of documents to limit access to sensitive information.
They said they disclosed the vulnerability to Slack on Aug. 14 and worked with the company to fix the issue over the period of a week.
Slack described the flaw to the researchers as "intended behavior," the researchers said. In a blog post, Slack said it was a low-severity bug that could allow a hacker with an account in the same workspace to phish users for "certain data" under "very limited and specific circumstances." Slack said there's no evidence of unauthorized access or exploitation of customer data at this time.