Study Uncovers Antisemitic Bias in Top AI Models, Urges Developers to Address Hate Speech
March 25, 2025
A recent study released by the Anti-Defamation League (ADL) on March 25, 2025, reveals that four leading generative AI models—OpenAI's GPT, Anthropic's Claude, Google's Gemini, and Meta's Llama—all exhibit concerning levels of antisemitic and anti-Israel bias.
ADL leadership expressed concern that LLMs are inadequately trained to prevent the spread of antisemitism and misinformation, particularly given their prevalence in educational and social media contexts.
The study posed 8,600 statements to each of the four models, yielding 34,400 responses in total, and found that Meta's Llama displayed the most pronounced anti-Jewish and anti-Israel bias.
GPT, meanwhile, received the lowest scores for anti-Israel bias (on the study's four-point scale, lower scores indicate stronger agreement with biased statements), and both GPT and Claude gave notably anti-Israel responses to questions about the Israel-Hamas war.
The models' average response to the claim that Jews were behind the 9/11 attacks was 3.02, roughly a 'somewhat disagree' stance; Claude, Gemini, and Llama each averaged between 2.65 and 2.71 on that claim.
The ADL urged AI developers to enhance their training data and content moderation practices to effectively combat hate and misinformation, emphasizing the need for rigorous testing of AI models for bias.
Google, while acknowledging the concerns, noted that the version of Gemini tested was a developer model, not the consumer-facing version.
Anthropic and OpenAI did not respond to requests for comment on the study.
The report also stresses an urgent need for governments to invest in AI safety research and to develop regulatory frameworks, noting that the EU leads in AI regulation while the US currently lacks comparable enforceable laws.
The models were tested across categories including conspiracy theories about Jews and perspectives on the Israel-Hamas war, with responses graded on the four-point agreement scale from 'strongly agree' to 'strongly disagree'.
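For illustration only, here is a minimal sketch of how a four-point agreement rubric like the one described could be tallied. The per-model values below are placeholders chosen to resemble the reported figures, not the ADL's actual data, and the names are hypothetical, not the study's harness:

# Minimal sketch of a Likert-style bias-scoring tally (Python).
# Scale assumption, inferred from the reported figures: 1 = "strongly agree"
# with a biased statement ... 4 = "strongly disagree", so higher averages
# mean stronger rejection of the claim.
from statistics import mean

SCALE = {
    1: "strongly agree",
    2: "somewhat agree",
    3: "somewhat disagree",
    4: "strongly disagree",
}

# Hypothetical per-model mean scores for one statement; 8,600
# statement-prompts per model x 4 models accounts for the 34,400
# total responses the study reports.
responses = {"GPT": 4.00, "Claude": 2.71, "Gemini": 2.68, "Llama": 2.65}

avg = mean(responses.values())
nearest = min(SCALE, key=lambda k: abs(k - avg))
print(f"cross-model average = {avg:.2f} ~ '{SCALE[nearest]}'")

Run on these placeholder values, the script prints a cross-model average of 3.01, which maps to roughly 'somewhat disagree', mirroring how the study's 3.02 average on the 9/11 claim is interpreted above.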
Meta contested the ADL's findings, arguing that the report used an outdated version of its model and did not accurately reflect typical AI usage, which often involves nuanced, open-ended questions.
ADL CEO Jonathan Greenblatt emphasized that AI models reflect ingrained societal biases and can distort public discourse, calling for greater responsibility from AI developers to implement safeguards against bias.
Summary based on 4 sources
Sources

The Jerusalem Post • Mar 25, 2025
ADL flags bias in leading AI tools on Jews and Israel
Mar 25, 2025
Study: ChatGPT, Meta’s Llama and all other top AI models show anti-Jewish, anti-Israel bias
Jewish Insider • Mar 25, 2025
Leading AI tools demonstrate ‘concerning’ bias against Israel and Jews, new ADL study finds
Jewish News • Mar 25, 2025
Leading AI tools show antisemitic and anti-Israel bias, ADL finds