Study: Attacks Can Manipulate Medical Imaging, AI Outcomes
UPMC Researchers Say 'Adversarial' Attacks Can Trick AI, Doctors Into Making Wrong Diagnoses

Artificial intelligence-based image recognition technology used by radiologists to help improve the speed and accuracy of medical diagnoses - such as detecting breast cancer in mammography images - is vulnerable to cyberattacks that can trick the AI, as well as doctors, into potentially making the wrong diagnoses, a new study says.
The study, conducted by University of Pittsburgh Medical Center researchers and published on Tuesday in Nature Communications, examined whether adversarial attacks, including cyberattacks, that manipulate medical images can lead AI software and human clinical experts to arrive at faulty conclusions, including incorrect medical diagnoses.
The UPMC findings are similar to those of a 2019 study by researchers at Ben-Gurion University in Israel, which also found that attackers can potentially use deep learning AI to add or remove evidence of medical conditions - such as cancerous tumors - from 3D medical imaging scans.
Study Details
To understand how AI would behave under an adversarial attack, the UPMC research team used mammogram images to develop an AI model for detecting breast cancer.
The researchers first trained a deep learning algorithm to distinguish malignant from benign cases with more than 80% accuracy. Next, they developed a generative adversarial network, or GAN, which is a computer program that generates false images by inserting or removing cancerous regions from the medical images, UPMC says in a statement.
The researchers then tested how the AI model classified these manipulated images. Those experiments found that the adversarial samples tricked the model into outputting a wrong diagnosis in about 69% of the cases that the AI-based computer-aided detection model had initially classified correctly.
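The researchers' code is not reproduced in this article, but the general shape of the setup - an image-to-image generator that inserts or removes lesion-like structure, trained adversarially against a discriminator and then evaluated against the separately trained classifier - can be sketched as follows. This is a minimal, illustrative PyTorch sketch; the class names, layer sizes and 64x64 patch size are assumptions, not the UPMC team's implementation.

```python
import torch
import torch.nn as nn

# Illustrative sketch only - not the UPMC study's code. A small
# image-to-image generator adds a bounded residual to a grayscale
# mammogram patch, which is how a GAN can "insert or remove" lesion-like
# regions; a discriminator judges real vs. generated patches.
class LesionGenerator(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 16, 3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 1, 3, padding=1), nn.Tanh(),  # bounded residual
        )

    def forward(self, x):
        # Add or subtract structure, then keep pixel values in [0, 1].
        return torch.clamp(x + self.net(x), 0.0, 1.0)


class PatchDiscriminator(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(1, 16, 4, stride=2, padding=1), nn.LeakyReLU(0.2),
            nn.Conv2d(16, 32, 4, stride=2, padding=1), nn.LeakyReLU(0.2),
            nn.Flatten(),
            nn.LazyLinear(1),  # real-vs-fake logit
        )

    def forward(self, x):
        return self.net(x)


# Hypothetical usage: generate manipulated patches, then feed them to a
# pretrained breast-cancer classifier (not shown) to count how often its
# prediction flips - the kind of evaluation the study describes.
generator, discriminator = LesionGenerator(), PatchDiscriminator()
fake = generator(torch.rand(8, 1, 64, 64))    # batch of manipulated patches
print(fake.shape, discriminator(fake).shape)  # (8, 1, 64, 64) and (8, 1)
```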
In the second part of the experiment, the researchers asked five radiologists to distinguish whether manipulated mammogram images were real or fake. The radiologists identified the fakes with an accuracy of between 29% and 71%, depending on the individual, the report says.
"Many of the adversarial images in this study not only fooled the AI model, but they also fooled experienced human readers," said the study's senior author, Shandong Wu, who is an associate professor of radiology, biomedical informatics and bioengineering at the University of Pittsburgh and director of the Intelligent Computing for Clinical Imaging Lab and the Pittsburgh Center for AI Innovation in Medical Imaging.
The manipulations of the medical images ranged from tiny changes that alter the AI's decision but are essentially imperceptible to the human eye, to more sophisticated modifications that target sensitive content of the image, such as cancerous regions, making them more likely to fool a human as well, Wu says.
"Such attacks could potentially be very harmful to patients if they lead to an incorrect cancer diagnosis."
Potential motives for such medical image manipulation attacks include insurance fraud by healthcare providers looking to boost revenue, or companies trying to falsify clinical trial outcomes, Wu says.
Vulnerable Devices
The most worrisome type of potential cyberattack involving medical images in the near term is one in which hackers gain unauthorized access to patient records and manipulate the imaging data in ways that humans cannot easily identify, Wu tells Information Security Media Group in a statement.
To date, there have been no known "real-life" attacks involving such scenarios, Wu says.
Ido Hoyda, cyber analyst team leader at healthcare cybersecurity firm CyberMDX, says medical imaging suites are among the most vulnerable in the medical device ecosystem.
One of the primary reasons is that many of these devices are Windows-based and run very outdated or obsolete operating systems, which makes their security maintenance and patching very problematic, he says.
"The lack of vendor support due to the end-of-life software leaves them exposed to many vulnerabilities and makes them susceptible to spreading malware," he says. Also, these machines use cloud servers and services that expose them to the internet, unlike other medical devices, such as infusion pumps, anesthesia machines, and respirators, which are generally in more of a "closed" ecosystem with minimum internet exposure, he says.
"As with any third-party software, adding AI tools on top of the machine increases the amount of potential known and yet-to-be-discovered vulnerabilities that can be exploited. If the software developed by the third arty contains vulnerabilities that enable the hacker to impact the device or software, the effects could lead to incorrect diagnosis and other patient safety risks."
'Even Scarier' Attacks
Michael Holt, president and CEO of healthcare security firm Virta Labs, says that "any cyber-physical system" in healthcare, including those using AI, is inherently vulnerable to manipulation.
But the most concerning near-term threat, he says, would be an attack "on a medical manufacturer's end-to-end patient monitoring solutions - including devices, workstations, and telemetry services - that covers a large market share of hospitals and a large footprint of IT systems within each hospital."
And looking ahead, a potential data manipulation attack involving gene sequencing "is even scarier," he adds.
Taking Action
There are several different ways to better protect patients against potential cyberattacks that might allow adversaries to manipulate medical images and diagnostic outcomes, Wu says in a statement to Information Security Media Group.
Those include better securing hospital IT systems and infrastructure to reduce the risk of unauthorized access to patient data, blocking malware, educating cybersecurity personnel on adversarial attacks, and building AI models that are resistant to adversarial samples.
"There is active research in this regard, such as using adversarial training to improve adversarial robustness of AI models," he says.
In the meantime, Hoyda says, patching wherever possible to keep software up to date is one of the best defenses against falling victim to these sorts of attacks.
"Although this is probably the biggest pain across cybersecurity teams in healthcare, having an updated operating system and software versions in those devices will significantly reduce the attack surface," he says.
"On top of the software concerns, restricting the devices' communications to known endpoints, using only necessary communication protocols, will put these devices in some sort of a safe box and will help prevent unwanted and potentially dangerous communication from unknown devices."
A Food and Drug Administration spokeswoman said the agency's cyber experts planned to review the UPMC research but declined ISMG's request for immediate comment on the study's findings.