AI Fairness in Healthcare: Study Reveals Critical Gaps and Bias Challenges
June 14, 2025
A recent scoping review and evidence gap analysis on AI fairness in clinical settings, published on June 14, 2025, by Mingxuan Liu and colleagues in npj Digital Medicine, highlights significant shortcomings in current research.
While attributes such as gender, race, and age have been frequently examined, many other factors that influence fairness remain underexplored.
The review also found that most studies relied heavily on publicly available datasets, revealing a concerning lack of diversity in the data types used across medical fields.
The authors analyzed over 11,000 papers, ultimately identifying only 467 studies that focus on AI fairness, underscoring the scarcity of research across various medical specialties.
The review identifies critical gaps in AI fairness research, particularly the limited attention to bias-relevant attributes and an overemphasis on performance equality metrics.
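To make "performance equality" concrete: such metrics compare a model's error rates across patient groups. A common example is the equal-opportunity gap, the difference in true-positive rates between groups. The sketch below uses illustrative toy data (the group labels, predictions, and outcomes are not from the study):

```python
# Minimal sketch of a performance-equality check: compare true-positive
# rates (equal opportunity) between two patient groups.
# All data below is a hypothetical toy example, not from the review.

def true_positive_rate(y_true, y_pred):
    """Fraction of actual positives the model correctly flags."""
    positives = [p for t, p in zip(y_true, y_pred) if t == 1]
    return sum(positives) / len(positives) if positives else 0.0

def equal_opportunity_gap(y_true, y_pred, group):
    """Absolute TPR difference between group 0 and group 1."""
    tprs = []
    for g in (0, 1):
        idx = [i for i, gi in enumerate(group) if gi == g]
        tprs.append(true_positive_rate([y_true[i] for i in idx],
                                       [y_pred[i] for i in idx]))
    return abs(tprs[0] - tprs[1])

# Toy example: the model catches 3/3 true positives in group 0
# but only 1/2 in group 1, giving a gap of 0.5.
y_true = [1, 1, 1, 0, 1, 1, 0, 0]
group  = [0, 0, 0, 0, 1, 1, 1, 1]
y_pred = [1, 1, 1, 0, 1, 0, 0, 0]
print(equal_opportunity_gap(y_true, y_pred, group))  # 0.5
```

The review's point is that focusing on closing such gaps alone can miss other dimensions of fairness, such as which attributes are measured in the first place.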
Specific examples of AI bias were cited, including the underrepresentation of darker skin tones in dermatology datasets, which leads to diagnostic inaccuracies, and the bias of the Model for End-stage Liver Disease (MELD) score against women in liver transplant allocation.
Despite advancements in technology, there is a significant disconnect between technical solutions for AI fairness and their practical applications in clinical settings, indicating an urgent need for improvement.
The article suggests that varying definitions of fairness across medical contexts complicate the development of standardized solutions for bias detection and mitigation.
To address these gaps, the authors propose actionable strategies aimed at enhancing AI fairness, ultimately striving for more equitable healthcare outcomes.
The authors emphasize that addressing AI fairness in healthcare is an ethical necessity, essential for mitigating bias and promoting equitable treatment.