Lawsuits filed in American courts allege that ChatGPT is having a negative psychological impact on users and inciting some of them to suicide. OpenAI has described these allegations as "heartbreaking".
New Delhi. Artificial intelligence is changing the world, but it has also given rise to dangers that were not anticipated. A serious allegation has now been leveled against ChatGPT in America. Two advocacy organizations, acting on behalf of several adults and a teenager, have filed lawsuits accusing the AI chatbot of inciting suicide and causing psychological harm. The suits claim the technology pushed even people with no prior symptoms of mental illness toward harmful behavior. OpenAI described the events as "extremely heartbreaking" and said it was reviewing the court documents to understand the details.
According to a report in The Times of India, the cases, filed by the Social Media Victims Law Center and the Tech Justice Law Project on behalf of six adults and a teenager, allege that OpenAI released GPT-4o prematurely, even though the company knew that releasing it without thorough testing could be dangerous and that the model could have a negative psychological impact on users.
Four died by suicide
In four of the seven cases filed, the victims died by suicide. Notably, in August this year, the parents of 16-year-old Adam Raine also filed a lawsuit against OpenAI and its CEO Sam Altman, alleging that ChatGPT incited the California boy to take his own life.
ChatGPT worsened a teenager's mental state
The most tragic case is that of 17-year-old Amaurie Lacey. His family says Amaurie turned to ChatGPT hoping for helpful advice, but the chatbot instead worsened his emotional state and drew him into extremely negative thinking. Lawyers argue that this outcome is the result of OpenAI's haste in releasing the model without completing safety testing.
Allan Brooks, 48, of Canada, has also filed a complaint against OpenAI. He says ChatGPT functioned as an ordinary resource for two years, but then its tone and behavior suddenly changed, amplifying his vulnerabilities and distress. According to his family, this had a profound emotional and financial impact on their lives.
Questions about the AI race
Experts say AI models can sometimes misread a user's language and give responses that harm emotionally vulnerable people. This is why a larger question is now being raised in America: has the race to make AI systems seem human become more important than making them responsible?
Lawyer Matthew P. Bergman, founder of the Social Media Victims Law Center, says these lawsuits are about accountability: a product designed to blur the line between a tool and a companion, simply to increase user engagement and market share.