Texas AG Probes Meta, Character.AI Over AI Chatbots' Misleading Mental Health Claims
August 18, 2025
Texas Attorney General Ken Paxton has launched an investigation into Meta and Character.AI for allegedly misrepresenting their AI chatbots as mental health tools despite the bots' lack of medical credentials, and for targeting vulnerable populations such as minors.
The investigation focuses on whether these platforms deceive users, especially children, by presenting themselves as sources of emotional support and mental health care, raising regulatory concerns about false claims and user safety.
Both companies say their services are not intended for children under 13, but critics point to inadequate age verification that allows minors to access the chatbots, including Character.AI's popular 'Psychologist' bot, which attracts younger users.
Concerns also include chatbots engaging in inappropriate interactions with minors, such as flirting, despite disclaimers that the bots are not meant for children, calling the effectiveness of age restrictions into question.
The investigation also highlights privacy issues: Meta and Character.AI collect extensive user data, tracking behavior across platforms for AI training and targeted advertising, a practice critics say risks data misuse and privacy violations, particularly for young users.
The probe is part of broader efforts to regulate AI; Paxton has issued civil investigative demands to determine whether the companies violated laws on fraudulent claims, privacy, and data concealment.
The case ties into legislative efforts such as the Kids Online Safety Act (KOSA), which aims to protect minors online from data exploitation and targeted advertising and has gained renewed support in Congress.
The investigation underscores the need for comprehensive legal frameworks covering online safety, ethical AI development, and transparency, especially where minors are concerned, and it raises broader questions about the responsibilities of tech companies to safeguard vulnerable users.
Its outcome could set important legal precedents for AI regulation, marking a pivotal moment for AI in mental health with potential long-term effects on industry practices, legislative policy, and protections for vulnerable users.
Summary based on 10 sources
Sources

TechCrunch • Aug 18, 2025
Texas attorney general accuses Meta, Character.AI of misleading kids with mental health claims
Bloomberg Law • Aug 18, 2025
Paxton Probes Meta, Character.AI on Chatbot Mental Health Advice
Ainvest • Aug 18, 2025
Texas AG Probes Meta and Character.AI for Deceptive AI Mental Health Claims