AI Mental Health Apps: Boon or Risk Without Regulation? Experts Call for Urgent Federal Oversight
September 29, 2025
Some AI therapy apps are banned or restricted in certain states and operate under unclear legal frameworks elsewhere, while generic chatbots like ChatGPT are widely used for mental health support despite having no explicit regulation, sometimes with harmful outcomes.
Experts see promise in science-based, monitored AI mental health tools to address provider shortages and facilitate early intervention, but emphasize the urgent need for federal regulation to ensure safety and efficacy.
Companies such as Earkick have shifted their messaging to avoid legal issues by focusing on self-care rather than therapy, and have added features like panic buttons to handle crisis situations.
Many mental health apps lack clinical validation and peer-reviewed studies, and some fail to respond appropriately during crises, raising significant safety concerns for users.
The rapid expansion of AI mental health apps, including chat-based counseling and CBT modules, outpaces current regulatory frameworks, with hundreds available in app stores and millions of downloads.
Privacy concerns are prominent: these apps often share sensitive mental health data with third parties, and many are not strictly bound by health privacy laws like HIPAA, increasing the risk of misuse or breaches.
Despite ongoing regulatory efforts, AI mental health apps are not substitutes for professional care, especially for severe cases, highlighting the importance of human oversight.
Earkick’s CEO, Karin Andrea Stephan, warns that AI development is outpacing legislation, leading companies to adjust their marketing and terminology to navigate legal restrictions.
Research like Dartmouth’s study of Therabot shows promising results but underscores the need for larger studies and cautious development, as the regulatory environment struggles to keep pace with innovation.
Some developers, including Dartmouth, are conducting clinical trials to demonstrate safety and efficacy, but experts caution against widespread use without more evidence.
Federal agencies like the FTC and FDA are investigating AI chatbot companies and generative AI mental health tools, focusing on marketing practices, safety disclosures, and impacts on children and teens.
States like Illinois have blocked access to certain AI therapy apps such as Ash, with regulators emphasizing that therapy requires empathy and clinical judgment that AI cannot yet replicate.
Summary based on 14 sources
Sources

Yahoo News • Sep 29, 2025
Regulators struggle to keep up with the fast-moving and complicated landscape of AI therapy apps
Los Angeles Times • Sep 29, 2025
Why can't regulators keep up with AI mental health apps filling provider gaps?
ABC News • Sep 29, 2025
Regulators struggle to keep up with the fast-moving and complicated landscape of AI therapy apps