
Open Source Genomic Analysis Software Flaw Patched

Do Data Integrity Concerns Pose Potential Patient Safety Risks?
A flaw in open source software for genomic analysis left DNA-based medical diagnostics vulnerable to cyberattacks.

A cybersecurity vulnerability discovered in open source software used by organizations conducting genomic analysis could potentially have enabled hackers to affect the accuracy of patient treatment decisions. But the vulnerability was patched before hackers took advantage of it, researchers believe.


Genomics researchers use the open source software for “personalized medicine” projects that involve using a patient’s genetic information to guide more customized medical treatments.

While some experts say the security vulnerability raised serious concerns, others contend the chances of hackers taking advantage of such a flaw are remote.

Sandia National Laboratories, which conducts research for the U.S. Department of Energy on a range of topics, discovered the vulnerability in the open source software known as the Burrows-Wheeler Aligner, or BWA.

The flaw exposed the software to man-in-the-middle attacks, the researchers determined.

In a potential attack, a hacker could have intercepted the standard genome sequence being transmitted from a government server to a BWA user, then added a malicious program that altered genetic information obtained from genetic sequencing, the lab says.
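A common defense against this kind of in-transit tampering is to verify a cryptographic checksum of the downloaded reference data before using it for alignment. The Python sketch below illustrates the idea only; the URL, file name and expected digest are hypothetical placeholders rather than details from the Sandia report, and in practice the expected digest would be obtained from the data provider through a separate, trusted channel.

```python
import hashlib
import urllib.request

# Hypothetical values -- the real reference genome location and its
# published checksum depend on the data provider being used.
REFERENCE_URL = "https://example.gov/reference/GRCh38.fa.gz"
EXPECTED_SHA256 = "0123abcd..."  # published out-of-band by the provider


def download_and_verify(url: str, expected_sha256: str, dest: str) -> None:
    """Download a reference file over HTTPS and refuse to use it if its
    SHA-256 digest does not match the published value."""
    urllib.request.urlretrieve(url, dest)

    digest = hashlib.sha256()
    with open(dest, "rb") as fh:
        # Hash the file in 1 MB chunks so large genomes fit in memory.
        for chunk in iter(lambda: fh.read(1 << 20), b""):
            digest.update(chunk)

    if digest.hexdigest() != expected_sha256:
        raise RuntimeError(
            f"Integrity check failed for {dest}: the file may have been "
            "tampered with in transit and should not be passed to BWA."
        )


if __name__ == "__main__":
    download_and_verify(REFERENCE_URL, EXPECTED_SHA256, "GRCh38.fa.gz")
```

A check of this kind only helps if the expected digest itself travels over a channel the attacker cannot also intercept, which is why providers typically publish checksums on a separate, authenticated page.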


“The malware could then change a patient’s raw genetic data during genome mapping, making the final analysis incorrect without anyone knowing it,” the researchers report. “Practically, this means doctors may prescribe a drug based on the genetic analysis that, had they had the correct information, they would have known would be ineffective or toxic to a patient.”

Cause for Major Concern?

Commenting on the discovery of the vulnerability, Mac McMillan, CEO of security consulting firm CynergisTek, says: “Simply put, the implications of this are nothing short of devastating. If you are able to corrupt DNA analysis and alter the outcomes, you put at risk years of research, medical advancement, law enforcement advances and societal benefits of families finding one another [through DNA matches]. The possible impacts could be huge.”

David Finn, executive vice president at CynergisTek, adds: “In the case of the integrity of genomic data, the impact is beyond measurement in dollars - it is really about human life.

“If a doctor prescribed medication or therapy based on the genetic analysis and it were incorrect, they may be prescribing something that was at best, ineffective and, at worst, toxic. There could be a broad range of impacts in between, but few of them would be good for the impacted patient.”

Low Risk?

Other security experts, however, contend that while attacks targeting genetic data are highly worrisome, the risk of such incidents is low.


“The possible impact is serious – if someone could successfully pull off a ‘man-in-the-middle’ attack,” says Tom Walsh, president of consulting firm tw-Security.

“In my opinion, to be successful, the attacker would have to have some understanding about genomic information to alter the data just enough to keep from raising suspicion with the researcher. A novice attacker may not understand the impact if they just randomly altered the data. It comes down to a researcher doing their due diligence validating the data and running more than one analysis before making a crucial conclusion that could impact a human life or ruin the researcher’s professional career.”

Walsh contends that it’s highly unlikely a hacker could manipulate genomic data for a targeted attack.

“What would be the motivation for an attacker to spend the amount of time and effort in setting up a ‘man-in-the-middle’ attack and altering the genomic information about an individual?” he asks. An attack of that nature would likely require involvement of an insider with specific knowledge about the targeted patient, he adds. And targeted attacks involving sensitive research-related data are made even more difficult, he contends, because in healthcare much of that data is de-identified.

Ben Ransford, president of healthcare cybersecurity firm Virta Labs, offers a similar assessment.

“It's probably too hard to engineer an attack on someone's genetic data that would both have a desired effect and be difficult to detect. The overlap between malicious hackers and trained geneticists is small,” Ransford says. “An easier attack with a more predictable outcome would be to hold troves of genetic data for ransom."

News like this, however, underscores the importance of threat modeling for healthcare organizations, Ransford says.

Protecting Data Integrity

And others point to the importance of safeguarding patient data integrity.

“Historically, IT in healthcare focused on availability: Is the system up? With HIPAA we began to move toward confidentiality: Are only the right people getting the data? We are now using the data itself in care - not just the systems that store and move the data we’ve collected,” Finn says.

“We are using the patient’s data to diagnose, prescribe, develop therapies and treatments. When the data is integral to care as in personalized medicine, the focus will have to shift to integrity - is this data for the patient correct and for the correct patient - on top of availability and confidentiality.”

Medical Device Similarities

Finn notes that the data integrity issues with genomic data are similar to the evolving data accuracy concerns related to cyberattacks on medical devices.

“I believe we face the same issues around medical device security - while the devices in the hospital may be better protected, the definition of medical device is changing,” he says.

“The medical device sending in your data may be a Fitbit or your iPhone. And, as in this case, we don’t always know how that data is being transmitted. Is it encrypted for the entire journey - device to device and at rest? What kind of applications are being used to collect or transmit the data - are they secure and configured properly?”

Walsh says that encryption is the best way to preserve data integrity for data in transit and data at rest. “However, encryption is sometimes viewed as an inconvenience by researchers because it can slow down data transmission and data analysis,” he notes. “Time involved to conduct a test or analyze the data is one of the reasons why research is conducted on high performance and/or supercomputing environments.”
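For data at rest, authenticated encryption speaks directly to Walsh's integrity point, because ciphertext that has been altered will fail to decrypt. The Python sketch below is a minimal illustration using the third-party cryptography package's Fernet interface; the file names are placeholders, and the key handling is simplified for illustration - in a real deployment the key would live in a key-management system rather than being generated in the script.

```python
from cryptography.fernet import Fernet

# For illustration only: generate a key on the fly. In practice the key
# would be created once and stored in a key-management system.
key = Fernet.generate_key()
fernet = Fernet(key)

# Encrypt a genomic data file at rest (the file name is a placeholder).
with open("patient_reads.fastq", "rb") as fh:
    ciphertext = fernet.encrypt(fh.read())

with open("patient_reads.fastq.enc", "wb") as fh:
    fh.write(ciphertext)

# Later, decrypt for analysis. Fernet raises InvalidToken if the
# ciphertext has been modified, which doubles as an integrity check.
plaintext = fernet.decrypt(ciphertext)
```

The trade-off Walsh describes shows up here as well: encrypting and decrypting large sequencing files adds time to every analysis run, which is one reason researchers sometimes resist it.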

Open Source Risks?

The vulnerability identified by Sandia Labs was discovered in open source software. But Walsh contends that open source software is not necessarily more at risk than commercial software.

“In general, open source code is as secure or maybe a bit more secure than commercially developed software,” Walsh says. “Open source code is supported by a group or community of users that can dig deep into the code. Lots of different eyes on the source code means an increased likelihood of finding vulnerabilities and getting them fixed. There is no motivation to ignore vulnerabilities.”

Commercially developed software relies on the company that developed the code to do a self-check for vulnerabilities or to hire a third party to do a security code review, Walsh notes.

“A for-profit company is under pressure to quickly get the software program to market and make money. Often we find the ‘ship now; fix later’ mentality from commercially developed software. Also, vendors tend to be protective of their intellectual property, which means fewer eyes looking at the source code - therefore, less likelihood of finding all of the vulnerabilities in the code.”

One downside of open source software, Finn says, is that because so many are working with it, users assume “everyone else is looking for bugs or vulnerabilities - but that just isn’t the case. We clearly need more due diligence on open source software security. We also need a standard way of documenting security on open source projects - most do not have security documentation in place.

“No one really knows who is doing what to fix code or notify users of the code. Once again, this falls back to the organization deploying the component - it is about testing and communicating.”


About the Author

Marianne Kolbasuk McGee

Executive Editor, HealthcareInfoSecurity, ISMG

McGee is executive editor of Information Security Media Group's HealthcareInfoSecurity.com media site. She has about 30 years of IT journalism experience, with a focus on healthcare information technology issues for more than 15 years. Before joining ISMG in 2012, she was a reporter at InformationWeek magazine and news site and played a lead role in the launch of InformationWeek's healthcare IT media site.



