Biden's Executive Order on AI: What's in It for Healthcare?
Experts: Promote AI Innovation But Protect Patients From Harm, Privacy, Bias Issues

President Joe Biden's recent executive order on artificial intelligence could affect the healthcare sector in a variety of ways, putting checks and balances on plans to promote AI innovation and wider use of the technology while also safeguarding patients against potential harms.
Among other actions, the executive order signed on Oct. 30 directs the Department of Health and Human Services to establish a safety program to collect reports of harmful or unsafe healthcare practices involving AI and act to remedy them.
The order also prioritizes grantmaking and other awards to help promote the advancement of "responsible AI innovation" for the welfare of patients and workers in the healthcare sector (see: White House Issues Sweeping Executive Order to Secure AI).
That includes initiatives that explore ways to improve the quality of healthcare data to support the development of AI tools for clinical care, real-world evidence programs, population health, public health and related research.
"AI is the brave new frontier. We cannot afford to not take this extremely seriously," said attorney Lee Kim, senior principal of cybersecurity and privacy at the Healthcare Information and Management Systems Society.
"This signals both significant risk and opportunity for artificial intelligence, its procurement, development and deployment. It can be a win for both society and healthcare. Our patients' lives depend upon us getting it right. No one deserves any less."
The executive order also directs HHS, in consultation with the departments of Defense and Veterans Affairs, to establish an HHS AI Task Force charged with developing, within a year, a strategic plan that includes policies and frameworks - including potential regulatory action, as appropriate - on the responsible deployment and use of AI and AI-enabled technologies in the health and human services sector.
The task force will focus on research and discovery, drug and device safety, healthcare delivery and financing, and public health. Among a long list of duties, the group is charged with identifying appropriate guidance and resources to promote the incorporation of safety, privacy and security standards into the software development life cycle to protect personally identifiable information, including measures to address AI-enhanced cybersecurity threats in the health and human services sector.
"We are seeing the administration’s efforts to get out in front of the development and commercialization of AI establishing a foundation on which to build a regulatory structure to ensure fair competition, equity and consumer protection," said privacy attorney David Holtzman of consulting firm HITprivacy LLC.
"Compare that to government’s laissez faire approach to cybersecurity, patient safety, and privacy in the development and introduction of health IT," he said, referring to the federal government's efforts - beginning about 20 years ago - to push the healthcare sector to replace paper patient medical charts with electronic health records.
"I am encouraged that the Biden administration and Congress are working together to prevent the same mistakes in the commercialization of health IT that have not lived up to its promise to help lower costs and increase patient safety," Holtzman said.
Chelsea Arnone, director of federal affairs at the College of Healthcare Information Management Executives, said the professional association of healthcare CIOs and CISOs "appreciates" that the order's directives "rely heavily" on engagement with industry stakeholders in carrying out their respective assignments.
"Given the numerous federal agencies and offices that will now start working to complete these directives, we look forward to working with the administration," she said. "CHIME members are at the forefront and the front lines of implementing AI within their respective healthcare delivery organizations and can offer invaluable insight," she said.
Striking a Balance
Experts say the executive order tries to strike a balance between the promise AI has for healthcare and the various risks it presents, including safety, privacy, bias and discrimination.
"As with many aspects of healthcare privacy, the healthcare use case in AI is a bit of a perfect storm," said privacy attorney Kirk Nahra of the law firm WilmerHale.
"It is clear that there are enormous opportunities for AI to benefit the healthcare system and improve the overall quality of healthcare that is provided. At the same time, it is also clear that there are potential risks from AI - with some of these risks 'known' - such as discrimination and bias in data sets resulting in bad models for some audiences - and other risks not yet known," Nahra said.
"The EO overall tries to walk this line between encouraging innovation and addressing potential risks," he said.
The healthcare use case is particularly difficult and complicated, Nahra said. "So we see in the EO a broad effort to identify where these risks may occur and assign responsibility for evaluation of how best to control these risks on a broad basis," he said.
The executive order requires a tremendous amount of work in a relatively short time frame - with a workforce that may not necessarily have the skills to handle all of these issues, Nahra said. "So it will be particularly important to watch how these assignments in the EO actually play out, to see if we can capitalize on the opportunity of AI while still ensuring that there are not new or greater harms that result from its use."