Medical software shows racial bias against Black patients

Study shows Black patients less likely to receive accurate skin disease diagnosis due to AI and human bias

Facial recognition software is not as accurate for people with darker skin.

Black patients are less likely to receive an accurate skin disease diagnosis, even if the doctor has help from artificial intelligence. 

A study published Feb. 5 found that when physicians were assisted by AI in diagnosing skin disease, their accuracy did not improve for Black patients, a gap attributed to the data the AI was trained on. AI models are improving the accuracy of skin disease diagnosis overall, but not for Black patients.

Medical and facial recognition software have shown bias against Black people for years.

Facial recognition software has been shown to be biased against Black women in particular, according to a prior study by Timnit Gebru and Joy Buolamwini at MIT.

The study evaluated facial recognition software created by IBM, Microsoft, and Face++, finding that although all three systems had high overall accuracy, they performed significantly better on men and on lighter skin types.

The discrepancy occurs because there is less training and benchmark data for darker-skinned faces, Buolamwini said in a video explaining the study.

“We have entered the age of automation overconfident, yet underprepared,” Buolamwini said in the video. 

“If we fail to make ethical and inclusive artificial intelligence, we risk losing gains made in civil rights and gender equity under the guise of machine neutrality,” Buolamwini added.
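The pattern Buolamwini describes only becomes visible when results are broken out by subgroup. The sketch below uses entirely synthetic data and illustrative error rates, not the study’s actual numbers; it simply shows how an aggregate accuracy figure can look strong while one intersectional subgroup fares far worse.

```python
# Sketch of a disaggregated evaluation in the spirit of the MIT study.
# All data here is synthetic and the error rates are illustrative only.

import numpy as np

rng = np.random.default_rng(1)
n = 4_000
gender = rng.integers(0, 2, size=n)   # 0 = male, 1 = female
skin = rng.integers(0, 2, size=n)     # 0 = lighter, 1 = darker

# Hypothetical per-subgroup error rates echoing the study's pattern:
# darker-skinned women see the worst performance.
err = np.select(
    [(gender == 0) & (skin == 0),
     (gender == 0) & (skin == 1),
     (gender == 1) & (skin == 0)],
    [0.01, 0.06, 0.07],
    default=0.30)
correct = rng.random(n) > err

print(f"overall accuracy: {correct.mean():.1%}")   # looks high in aggregate
for g, gname in enumerate(["male", "female"]):
    for s, sname in enumerate(["lighter skin", "darker skin"]):
        m = (gender == g) & (skin == s)
        print(f"  {gname}, {sname}: {correct[m].mean():.1%}")
```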

It’s not the first study in which AI medical software has shown algorithmic bias against Black people.

A study led by Ziad Obermeyer found that AI software commonly used in healthcare to determine which patients face severe illness is biased against Black patients: the software was more likely to flag white patients as needing extra attention than Black patients who were just as ill.

The discrepancy occurred because the algorithm concluded Black patients needed less health funding: historical data showed they spent less on health care in the early stages of illness, Alex Hanna said in an interview with Harvard’s Advanced Leadership Initiative: Social Impact Review. Hanna is a trained sociologist and the Director of Research at the Distributed AI Research Institute (DAIR).

“The basic issue is that the problem and variables were framed incorrectly, and the algorithm didn’t know how spending was dispersed nor what kind of institutions surrounded the issue,” Hanna said.
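The framing problem Hanna describes can be reproduced in a few lines. The sketch below uses entirely synthetic data and hypothetical variable names; it is not the study’s model. It simply shows that a risk score built on spending will flag equally ill patients from a lower-spending group less often.

```python
# Hypothetical sketch of the proxy-label problem described above.
# All data is synthetic; names are illustrative, not from the study.

import numpy as np

rng = np.random.default_rng(0)
n = 10_000

illness = rng.normal(size=n)           # true (unobserved) health need
group = rng.integers(0, 2, size=n)     # 0 = group A, 1 = group B

# Assumption for the sketch: group B spends less for the same level
# of illness, e.g. due to unequal access to care.
spending = illness - 0.5 * group + rng.normal(scale=0.3, size=n)

# A "risk score" built on spending inherits that gap: at equal
# illness, group B scores lower and is flagged for extra care less.
threshold = np.quantile(spending, 0.9)   # top 10% get extra attention
for g in (0, 1):
    sick = (illness > 1.0) & (group == g)
    flagged = (spending > threshold) & sick
    print(f"group {g}: {flagged.sum() / sick.sum():.0%} of equally ill patients flagged")
```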

There are steps to limit how much an algorithm discriminates, but no “surefire way,” Hanna said. How bias is mitigated depends on whether the data is images or text; with text, one approach is to measure how strongly certain traits are linked to positive or negative words (see the sketch below). But even a deliberate search can miss bias that appears only at the intersection of traits, such as race and gender combined.
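For text, the kind of check Hanna alludes to can be sketched as an embedding-association test, in the spirit of methods like WEAT. The vectors below are random toy stand-ins rather than real embeddings, and the word lists are hypothetical; a real audit would use a trained model’s vectors.

```python
# Toy embedding-association check: does a "trait" word sit closer to
# pleasant or unpleasant words? Vectors here are random stand-ins;
# real embeddings (e.g. GloVe) carry associations learned from text.

import numpy as np

def cosine(a, b):
    """Cosine similarity between two vectors."""
    return a @ b / (np.linalg.norm(a) * np.linalg.norm(b))

def association(trait, pleasant, unpleasant, emb):
    """Mean similarity to pleasant words minus mean similarity to unpleasant words."""
    pos = np.mean([cosine(emb[trait], emb[w]) for w in pleasant])
    neg = np.mean([cosine(emb[trait], emb[w]) for w in unpleasant])
    return pos - neg

rng = np.random.default_rng(2)
words = ["group_a", "group_b", "joy", "love", "agony", "failure"]
emb = {w: rng.normal(size=50) for w in words}   # hypothetical embeddings

for trait in ["group_a", "group_b"]:
    score = association(trait, ["joy", "love"], ["agony", "failure"], emb)
    print(f"{trait}: {score:+.3f}")
# A systematic gap between the groups' scores would signal bias.
# Per Hanna's caveat: testing one trait at a time can miss bias that
# only appears at intersections, e.g. race and gender combined.
```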

Businesses also need people who understand social contexts, said Hanna, who has spent 15 years studying technology as a sociologist.

“This interaction is going to be complicated, and it is going to take a different knowledge far beyond the technical,” Hanna added. “There are models that can be replicated or improved for corporate social responsibility.”

According to Hanna, companies need to expand hiring for roles that evaluate the social impact of projects, or work with external organizations to do that evaluation.
