AI in healthcare raises need for guidelines to protect patients

Several studies in recent years have identified potential problems with the use of AI in healthcare settings.
A 2019 analysis published in the journal Science found that a commercial algorithm from Optum, used by a health system to select patients for a care management program, assigned less healthy black patients the same risk level as white patients, meaning black patients were less likely to be identified as needing extra care.
An Optum spokesperson said in a statement that the algorithm is not racially biased and that the researchers mischaracterized a cost prediction algorithm based on one health system’s incorrect, unrecommended use of the tool.
“The algorithm is designed to predict future costs that individual patients may incur based on past healthcare experiences and does not result in racial bias when used for that purpose – a fact with which the study authors agreed,” the spokesperson said.
In 2021, researchers at the University of Michigan Medical School published a peer-reviewed study finding that a widely used sepsis prediction model from electronic health record giant Epic Systems failed to identify 67% of patients who had sepsis. The model also generated 43% more sepsis alerts even as the hospital’s overall patient population declined by 35% in the early days of the pandemic. Epic did not make the team working on the AI sepsis model available for an interview.
The White House’s Office of Science and Technology Policy included both cases, without naming the companies, in a report accompanying its “AI Bill of Rights” blueprint, intended as a guideline for various industries.
Although the framework lacks an enforcement mechanism, it outlines five protections to which the public should be entitled: algorithms must be safe and effective, be nondiscriminatory, be fully transparent, protect the privacy of those they affect, and provide for alternatives, opt-out choices and feedback.
Jeff Cutler, chief commercial officer at Ada Health, a healthcare AI company that offers symptom monitoring for patients, said his organization follows the five principles when developing and deploying algorithms.
“It’s very important that the industry takes the ‘Bill of Rights’ very seriously,” Cutler said. “It is important that users and companies embracing these platforms ask the right questions about clinical effectiveness, accuracy, quality and safety. And it is important that we are transparent with users.”
But experts say real regulation is needed to make a difference. Although the Food and Drug Administration is tasked with overseeing software as a medical device, including AI, experts say the agency is struggling to keep pace with the growing number of algorithms developed for clinical use. Congress could step in to define AI in healthcare and set mandatory standards for health systems, developers and users.
“There will need to be enforcement and oversight to ensure that algorithms are developed with discrimination, bias and privacy in mind,” said Linda Malek, chair of the healthcare practice at law firm Moses & Singer.
Dr. John Halamka, president of Mayo Clinic Platform, a portfolio of businesses from the Rochester, Minnesota-based health system focused on integrating new technologies, including AI, into healthcare, said more policies could be on the way.
The Office of the National Coordinator for Health Information Technology is expected to coordinate much of the regulatory guidance from various government agencies, including the FDA, the Centers for Disease Control and Prevention and the National Institutes of Health, as well as federal agencies outside HHS, said Halamka, who has advised ONC and the federal government on numerous healthcare technology initiatives but is not directly involved in oversight.
Halamka expects significant regulatory and sub-regulatory guidance within the next two years.