Survey: Nearly Half of Americans Turn to AI for Mental Health, Experts Warn of Risks
June 21, 2025
A recent survey found that nearly half of U.S. respondents (48.7% of 499 participants) had used large language models (LLMs) for psychological support in the past year, most often for anxiety and depression, and many rated the experience more positively than traditional therapy.
Despite the positive feedback from users, mental health professionals are increasingly concerned about the safety and efficacy of LLMs in therapy, warning that these tools may validate users' concerns without providing effective treatment, potentially leading to adverse outcomes such as psychological dependence or misdiagnosis.
Leading organizations, including the World Health Organization and the U.S. Food and Drug Administration, have issued warnings about the unsupervised use of AI chatbots in mental health, emphasizing the need for human oversight and ethical guidelines.
The lack of regulatory oversight for LLMs in therapeutic contexts raises serious safety and ethical concerns, akin to the dangers posed by unregulated over-the-counter medications.
To address these concerns, the article advocates for the formation of an expert consensus panel tasked with establishing universal standards and regulatory oversight for the responsible use of LLMs in mental health care.
A proposed framework, Artificial Intelligence Safety Levels for Mental Health (ASL-MH), categorizes AI systems by their clinical relevance and safety, ranging from general-purpose tools to advanced therapeutic agents requiring strict human oversight.
Experts caution that while users may find LLMs beneficial, there are significant risks involved, including the potential for these systems to reinforce negative psychological states.
Proposed next steps call on stakeholders to adopt regulatory measures that mitigate the risks of LLMs in mental health care, with an emphasis on human supervision and continuous monitoring.
The ASL-MH model comprises six levels, ranging from general-purpose AI tools with no clinical relevance to autonomous mental health agents that require human oversight and remain restricted to research settings, as sketched below.
Mental health experts express growing concern over the lack of safety standards in the use of LLMs for therapy, warning of potential negative outcomes such as inducing narcissistic fugues or psychosis.
Organizations like the WHO and FDA have called for human oversight and validation in the application of AI chatbots for therapeutic purposes, highlighting the need for regulatory measures.
LLMs are increasingly viewed as unregulated therapeutic surrogates, raising alarms about their unsupervised use and the risks they pose to users' mental health.
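The article does not spell out the intermediate tiers, but as a rough illustration of how a six-level classification like ASL-MH might be encoded in software, the following Python sketch models the levels as an enum. Only the lowest tier (general-purpose tools with no clinical relevance) and the highest tier (autonomous, research-only agents) come from the summary; the intermediate names and the oversight threshold are placeholder assumptions, not the framework's actual labels.

```python
from enum import IntEnum

class ASLMHLevel(IntEnum):
    """Illustrative encoding of the six ASL-MH tiers.

    Only levels 1 and 6 reflect descriptions in the summary; the
    intermediate names are placeholder assumptions, not the
    framework's actual labels.
    """
    GENERAL_PURPOSE = 1        # general AI tools with no clinical relevance
    WELLNESS_SUPPORT = 2       # placeholder: consumer wellness features
    GUIDED_SELF_HELP = 3       # placeholder: structured self-help content
    CLINICIAN_ASSISTIVE = 4    # placeholder: tools used under clinician supervision
    THERAPEUTIC_AGENT = 5      # placeholder: advanced therapeutic agents
    AUTONOMOUS_AGENT = 6       # autonomous agents; human oversight, research settings only

def requires_strict_oversight(level: ASLMHLevel) -> bool:
    """Return True for tiers that would demand strict human oversight.

    The cutoff chosen here is an assumption for illustration; the
    source states only that the most advanced tiers require it.
    """
    return level >= ASLMHLevel.THERAPEUTIC_AGENT
```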
Summary based on 3 sources
Sources

Psychology Today • Jun 21, 2025
Millions now turn to chatbots for therapy—but safety standards are nonexistent.