Study Reveals Gender Bias in AI Tools: Women's Health Needs Downplayed by Google's 'Gemma'

August 11, 2025
  • Research from the London School of Economics and Political Science has revealed significant gender bias in AI tools used in social care, particularly in large language models such as Google's 'Gemma'.

  • The study analyzed 29,616 case notes and found that Gemma applied more negative descriptors to men than to women, meaning that comparable health needs in women were described in less serious terms.

  • These findings align with broader research indicating that 44% of AI systems across various industries exhibit gender bias, and 25% demonstrate racial discrimination.

  • Dr. Sam Rickman, the study's lead author, warned that because care is allocated on the basis of perceived need, these biased summaries could result in women receiving less care than they require.

  • Notably, Google's model exhibited more gender bias than other models tested; Meta's Llama 3, for example, did not show similar disparities.

  • These findings underscore the urgent need for transparency, rigorous testing for bias, and legal oversight in the deployment of AI systems in public sectors, especially in social care.

  • The study, published in BMC Medical Informatics and Decision Making, highlights that LLMs downplay women's health needs relative to men's when summarizing case notes.

  • The analysis demonstrated that similar care needs in women were often omitted or described in less serious terms, potentially trivializing their health issues.

  • The study calls for fair regulation of large language models in long-term care to address these biases and promote algorithm fairness.

  • While AI tools like Gemma are intended to ease social workers' workloads, there is a concerning lack of transparency about which models are being used in healthcare decisions.

  • In response to the findings, Google acknowledged the study and said it would review the report, noting that the research was based on the first generation of Gemma, which has since reached its third generation.

  • Rickman advocates for transparency and rigorous testing of AI systems to detect biases, urging regulators to measure bias in large language models used in long-term care.

Summary based on 4 sources
