
X-ray images of the human brain

SoIC professor’s AI paper appears in The Lancet Digital Health

May 12, 2022

By its nature, medical imaging does not deal in externals. X-rays, MRIs, and CT scans can detect broken bones, infection, or internal bleeding. But radiology professionals cannot determine a patient’s race from their medical images.

However, artificial intelligence can. And that is raising concerns related to equality in health care.

Saptarshi Purkayastha, Ph.D.

Saptarshi Purkayastha, Ph.D., director of health informatics at SoIC, is a co-author of a paper examining how AI is altering the way we look at medical scans. “Reading Race: AI Recognizes Patient’s Racial Identity in Medical Images” has been published in The Lancet Digital Health.

Prior studies have shown AI is able to determine the race of patients from their medical images. In this paper, the researchers investigated various methods that deep learning models might be using to identify race.

“Our published work in the Lancet Digital Health is a call to researchers to focus on research to explain how AI makes decisions in medical imaging,” Purkayastha says, “and prevent any possible unknown harms.”

A collaborative effort

Co-authors on the paper from the IU School of Informatics and Computing at IUPUI are Purkayastha, an assistant professor of data science and health informatics, and John L. Burns, an adjunct lecturer in health information management, a Ph.D. student in health and biomedical informatics, and director of informatics for the department of radiology and imaging sciences at the Indiana University School of Medicine.

Purkayastha leads a talented team of researchers at the Purkayastha Lab for Health Innovation at SoIC. They explore innovations in health informatics, including radiology information systems, as well as biomedical data analysis, mobile health, and electronic health records.

He’s also affiliated with the Data to Action (DATA) Lab at the School of Informatics and Computing. Purkayastha has worked as a consultant to ministries of health on behalf of the World Health Organization (WHO) in the South-East Asia Region, supporting the implementation of health information systems in countries including Bangladesh, Nepal, Bhutan, and North Korea.

AI and medical imaging

Why is artificial intelligence examining medical images to begin with?

It comes down to diligence.

Software developed using deep learning models can act as a second set of eyes, analyzing images for anomalies and detecting potential problems that a human radiologist might miss.

But as Purkayastha and the paper’s other authors demonstrate, once an AI system is given data that correlates race to a set of medical images, the algorithms can develop the ability to categorize other medical images by race as well—something a human radiologist cannot do.

Disturbing possibilities

This ability raises serious concerns among some researchers. Such software might group patients, or influence their care, by factoring in race. These types of categorizations could lead to inequality in providing health care and making recommendations.

And medical professionals might not even be aware this is happening.

“If an AI model secretly used its knowledge of self-reported race to misclassify all Black patients, radiologists would not be able to tell, using the same data the model has access to,” the paper’s authors note.

The paper’s other authors are affiliated with the Mayo Clinic; Massachusetts Institute of Technology; University of Adelaide; University of Toronto; Arizona State, Emory, Florida State, and Stanford universities; and National Tsing Hua University in Taiwan.

They include Imon Banerjee, Ananth Reddy Bhimireddy, Leo Anthony Celi, Li-Ching Chen, Ramon Correa, Natalie Dullerud, Marzyeh Ghassemi, Shih-Cheng Huang, Po-Chih Kuo, Matthew P. Lungren, Lyle Palmer, Brandon J. Price, Saptarshi Purkayastha, Ayis Pyrros, Luke Oakden-Rayner, Chima Okechukwu, Laleh Seyyed-Kalantari, Hari Trivedi, Ryan Wang, Zachary Zaiman, Haoran Zhang, and Judy W. Gichoya.

“Deeper understanding of bias in AI algorithms is necessary to make them trustworthy.”

Saptarshi Purkayastha
Director, Health Informatics, SoIC

How is it doing that?

Exploring how AI is able to identify race from medical images is a challenge for researchers, as they seek to develop ways to guard against algorithmic bias and provide equal access to medical care.

“As soon as we discovered this issue,” Purkayastha says, “a number of researchers from across the world have come to us with possible experiments to go deeper into identifying the causes of such high accuracy of AI in using race as a conduit to make clinical decisions.”

The paper’s authors explored potential explanations for how these deep-learning models are able to determine the race of patients based only on their scans. It’s especially puzzling because even experienced medical imaging professionals are unable to make such determinations.

They were able to rule out a number of possibilities for race detection, including how race correlates to factors such as underlying disease distribution.

The researchers determined that the algorithms’ ability to identify race from scans persists even when images are cropped or of poor quality. And it holds true regardless of which body part is scanned.
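The kind of degradation test described above can be sketched in a few lines: degrade a scan in several ways and check whether a model’s prediction survives. This is a minimal illustration, not the paper’s actual pipeline; `crop_center`, `add_noise`, `downsample`, and the placeholder model are all hypothetical names introduced here for the example.

```python
import numpy as np

def crop_center(img, frac=0.5):
    """Keep only the central fraction of the image."""
    h, w = img.shape
    dh, dw = int(h * frac) // 2, int(w * frac) // 2
    return img[h // 2 - dh : h // 2 + dh, w // 2 - dw : w // 2 + dw]

def add_noise(img, sigma=0.2, seed=0):
    """Corrupt the image with Gaussian noise, clipped to valid intensities."""
    rng = np.random.default_rng(seed)
    return np.clip(img + rng.normal(0.0, sigma, img.shape), 0.0, 1.0)

def downsample(img, factor=4):
    """Reduce resolution by block-averaging, simulating a low-quality scan."""
    h, w = img.shape
    h2, w2 = h - h % factor, w - w % factor
    blocks = img[:h2, :w2].reshape(h2 // factor, factor, w2 // factor, factor)
    return blocks.mean(axis=(1, 3))

def robustness_check(model, img):
    """Does the model's prediction on the original scan persist under degradation?"""
    baseline = model(img)
    return {
        "cropped": model(crop_center(img)) == baseline,
        "noised": model(add_noise(img)) == baseline,
        "low_res": model(downsample(img)) == baseline,
    }
```

With a real classifier in place of the toy model, a finding like the paper’s would show up as the prediction remaining stable across all three degraded versions.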

“An enormous risk”

“Our findings that AI can trivially predict self-reported race—even from corrupted, cropped, and noised medical images—in a setting where clinical experts cannot, creates an enormous risk for all model deployments in medical imaging,” the authors note.

Purkayastha recognizes that even as the research has eliminated some potential explanations, it also raises new questions.

“The impact of our findings presented in this paper has posed a challenge to medical AI researchers,” he says, “that deeper understanding of bias in AI algorithms is necessary to make them trustworthy.”

Media Contact

Joanne Lovrinic