Research examines ethics of machine learning in medicine

May 28, 2018, 11:40 p.m.
David Magnus, director of the Stanford Center for Biomedical Ethics, in front of Rodin’s Gates of Hell.
(Courtesy of Stanford Medicine/Steve Fisch Photography)

According to a March perspective piece by three Stanford researchers in the New England Journal of Medicine, machine learning holds tremendous potential to aid in expanding electronic records, mining data efficiently and monitoring patient health, but significant challenges may hinder the efficacy of machine-learning systems in medical practice.

David Magnus Ph.D. ’89, senior author of the piece and director of the Stanford Center for Biomedical Ethics, divided the major ethical concerns into three levels: the design of the algorithm, the data set being utilized and the societal implementation of machine learning.

Magnus explained the ways in which bias can be built into an algorithm both unintentionally and intentionally.

“The thing I’m most worried about is intentional bias built into the algorithm depending on the interest of the developer,” he said. “We’re already feeling all these different pressures, even in a not-for-profit hospital, for thinking about issues around cost.”

Magnus gave the example of an algorithm that allocated intensive care unit (ICU) beds based on the insurance status of patients, gathered from electronic health records. While this framework may serve the interests of private-sector designers, he said, it sacrifices the best interests of the patients. Magnus argues that this discrepancy creates tension between the goals of improving health and generating profit.

According to medicine associate professor and paper co-author Nigam Shah postdoc ’07, machine learning has potential applications in diagnosis, prognosis and suggested courses of treatment in a clinical setting. However, the recommended courses of action that machine learning algorithms propose are dependent upon existing data sets that may contain inherent biases.

“If a certain kind of individual with disease X is never treated and then you feed all that data to an algorithm, the algorithm will conclude that if you have disease X, you always die,” Shah said. “If you use that prognostic prediction to decide if this person is worth treating, you’ll always conclude that they are not worth treating, and then you have this self-fulfilling prophecy. We run this risk of never bothering to consider the alternative option.”
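The self-fulfilling prophecy Shah describes can be made concrete with a minimal, hypothetical Python sketch (not from the paper): if every past patient with disease X went untreated and died, a naive prognostic model built from those records will rate survival at zero and recommend against treatment, so the data set never changes.

```python
# Hypothetical illustration of the selection-bias feedback loop Shah describes.
# In this toy data set, every historical patient with disease X went untreated
# and died; the "model" is simply the observed survival rate for each disease.

historical_records = [
    {"disease": "X", "treated": False, "survived": False},
    {"disease": "X", "treated": False, "survived": False},
    {"disease": "Y", "treated": True,  "survived": True},
    {"disease": "Y", "treated": True,  "survived": False},
]

def predicted_survival(disease, records):
    """Naive prognostic 'algorithm': survival rate among past patients with the disease."""
    matches = [r for r in records if r["disease"] == disease]
    return sum(r["survived"] for r in matches) / len(matches)

def recommend_treatment(disease, records, threshold=0.1):
    """Recommend treatment only if predicted survival clears a threshold."""
    return predicted_survival(disease, records) >= threshold

print(predicted_survival("X", historical_records))   # 0.0 -- disease X looks "always" fatal
print(recommend_treatment("X", historical_records))  # False -- so X is never treated, and
                                                     # no contrary evidence ever enters the data
```

Because the algorithm's own recommendation keeps patients with disease X untreated, the records never accumulate evidence that treatment could help, which is the alternative Shah warns we may never consider.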

For machine-learning medical systems to be most effective, their shortcomings must be understood and accounted for. According to Magnus, however, algorithm developers and the clinicians who ultimately use their tools tend to be disconnected during development. As a result, Magnus said, end users might not be fully aware of the limitations of the programs they employ. He pointed, for example, to genome-wide association studies, which currently lack racial heterogeneity.

“I think coming to understand [machine learning] and its vulnerabilities and pitfalls will be an important part of being a critical thinker,” said lead author Danton Char, assistant professor of anesthesiology, perioperative and pain medicine.

Magnus agreed, adding that he expects to see a shift in clinician training to account for the growing presence of machine learning in clinical settings.

“Physicians are going to have to learn to unlock the black box and start to better understand [machine learning],” Magnus said. “If you’re going to be a physician, you’re going to have to eventually learn some computer science.”

While it has been proposed that machine learning systems may one day replace specialists such as radiologists and pathologists, Char said that these systems have not yet demonstrated the ability to think independently of what they have been taught.

Shah agreed that there must be further research and development before machine learning can be relied on to a greater extent.

“We need to start using algorithms so we can then shape them and design them properly, but we can’t have blind faith in those algorithms,” Shah said. “The authority of decision-making still remains with the doctor and the patient. The algorithm is like the result of a laboratory test: you interpret it in context.”


Contact Olivia Mitchel at omitchel ‘at’ stanford.edu.


Olivia Mitchel '21 is a staff writer for the academics news beat. Born and raised in San Francisco, she fell in love with classical piano and various styles of dance at an early age. A biology major on the pre-med track, Olivia plans to complete the Notation in Science Communication.
