OpenAI Launches $2M Grant Program for Culturally Grounded AI and Mental Health Research
December 8, 2025
OpenAI is issuing a request for proposals that covers cultural variability in distress expression, lived experiences of mental health, clinician use of AI tools, AI-assisted behavior guidance and harm reduction, multilingual safeguard robustness, adolescent interaction guidelines, stigma in AI outputs, multimodal research on body image issues and eating disorders, and compassionate support for grief.
The goal of the program is to generate new insights, resources, and evaluation tools to strengthen internal safety work and the broader field of AI and mental health research.
Applications are submitted online and the program emphasizes interdisciplinary, culturally grounded studies aimed at improving safety, well-being, and trust in AI systems.
Selected projects should deliver tangible outputs such as datasets, evaluation frameworks, rubrics, qualitative perspectives, cultural analyses, or linguistic studies, with rolling selections and notifications by January 15, 2026.
OpenAI notes AI use is expanding into emotionally sensitive areas and seeks broader participation beyond internal teams to advance safety in this early-stage domain.
OpenAI's grant program offers up to $2 million in funding for external research on AI and mental health safety, focusing on culturally grounded topics and safer AI development, with applications open through December 19, 2025.
The initiative seeks interdisciplinary proposals that bring together technical experts, mental health professionals, and people with lived experience, aiming for practical outputs such as datasets, evaluation methods, or safety insights.
The program aims to broaden participation beyond internal studies, supporting independent inquiry and a collective understanding of AI's impact on mental health. Funding decisions are expected by mid-January as part of broader efforts to support well-being and safety research across diverse cultural and linguistic contexts.
OpenAI emphasizes the need for better evaluation frameworks, ethical datasets, and annotated examples to guide safer AI development across the field.
Research areas include patterns of distress in specific communities, influence of slang and vernacular, challenges in recognizing symptoms by current systems, and how AI tools are used in care settings, including practical uses, limitations, and risks.
Summary based on 2 sources
Sources

Digital Watch Observatory • Dec 3, 2025
OpenAI expands investment in mental health safety research
Pulse 2.0 • Dec 8, 2025
OpenAI Launches Up To $2 Million Research Grant Program Focused On Mental Health Safety