The U.S. Federal Trade Commission (FTC) has received roughly 200 consumer complaints related to ChatGPT, OpenAI’s conversational AI, between November 2022 and August 2025, according to internal records obtained by investigators. While most complaints involve routine customer service issues—such as billing errors and subscription cancellations—at least seven reports describe serious psychological distress allegedly linked to prolonged chatbot interactions.
Among these more severe cases, one mother claimed ChatGPT had “aggravated her son’s delusions,” leading him to distrust family members and abandon ongoing psychiatric treatment. Other individuals reported experiences of confusion, paranoia, or spiritual crises, describing conversations where the chatbot appeared empathetic or symbolic in tone—creating what they perceived as emotional or mystical connections.
Several users said they felt emotionally manipulated or began to question their sense of reality after extended conversations with the AI.
Experts Warn of “AI-Induced Psychosis” in Vulnerable Users
Mental health specialists caution that these effects are not necessarily caused by AI itself, but can amplify preexisting psychological vulnerabilities.
Dr. Ragy Girgis, a psychiatry professor at Columbia University, noted that people already experiencing psychotic symptoms or loneliness may be especially at risk if they treat conversational AIs as conscious, trustworthy companions.
“The danger arises when individuals attribute sentience or authority to these systems,” Girgis explained. “That can reinforce delusional thinking and deepen social isolation.”
Some clinicians have begun referring to this phenomenon as “AI-induced psychosis”—a descriptive term, not a formal diagnosis—reflecting the growing intersection between mental health and artificial intelligence.
OpenAI Responds With New Safeguards
In response to the reports, OpenAI said it is strengthening its safety mechanisms to better recognise and respond to potential signs of psychological distress.
With the release of GPT-5, the company says it has added systems capable of detecting keywords and linguistic patterns associated with delusions, mania, or psychosis, redirecting conversations to safer, moderated pathways when necessary.
OpenAI has also introduced:
- Break reminders for users during long chat sessions
- Simplified access to mental health resources and helplines
- Expanded parental controls for teen users
Despite these measures, several complainants told the FTC that they struggled to contact OpenAI’s support team directly, urging regulators to set clearer psychological and ethical guidelines for large-scale AI interactions.
A Growing Ethical Challenge
The FTC has not yet confirmed whether it plans to take regulatory action or open a formal investigation. However, the incident underscores a larger issue: as AI chatbots become more humanlike in language and empathy, their psychological and social impact grows harder to ignore.
Machines like ChatGPT can now talk, reason, and comfort—but as experts warn, they cannot feel. And for some users, that distinction is proving more complex than ever.