The claim that 1,500 people per week use ChatGPT before dying by suicide continues to surface online. But experts and emerging research caution that it is not evidence-based: it remains a speculative extrapolation, not a grounded statistic.
The number appears to originate from a rough calculation by OpenAI’s Sam Altman: starting from an estimated 15,000 weekly suicides globally and assuming that roughly 10% of those people had chatted with ChatGPT, he arrived, arithmetically, at 1,500.
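For context, the back-of-the-envelope arithmetic is trivial to reproduce. The sketch below (in Python, using only the two inputs attributed to Altman in the paragraph above) shows that the figure is simply the product of two assumptions, not a measurement:

```python
# Back-of-the-envelope reproduction of the "1,500 per week" figure.
# Both inputs are assumptions cited in the article, not measured data.

weekly_suicides_global = 15_000   # estimated global suicides per week
assumed_chatgpt_share = 0.10      # assumed fraction who had talked to ChatGPT

extrapolated_weekly = weekly_suicides_global * assumed_chatgpt_share
print(f"Extrapolated figure: {extrapolated_weekly:,.0f} per week")
# -> Extrapolated figure: 1,500 per week
# The result says nothing about causation; it only multiplies two guesses.
```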
But “there is no data to support that those individuals died because of any interaction with ChatGPT,” notes Dr. Amanda Lee, a clinical psychologist specializing in digital mental health. “That figure conflates coincidence with causation.”
Indeed, AI safety and psychiatry experts warn that chatbots handle expressions of suicidal ideation unevenly. As one such expert puts it bluntly: “These scenarios illustrate why chatbots should not be used by children and teens in distress.”
A recent RAND study found that ChatGPT and other models respond reliably to very low-risk and very high-risk questions, but are inconsistent on questions in the middle ground, which is where much real emotional distress falls.
Lead author Ryan McBain called the variability “a gap that must be addressed for high-stakes scenarios.”
Dr. Nina Vasan, psychiatrist at Stanford Medicine, argues that AI “companions” are especially risky for youth:
“The topic may seem innocuous, except the user … had also just told her AI companion she was hearing voices in her head.”
More fundamentally, research into so-called “technological folie à deux” explores how emotional dependence on chatbots can form feedback loops with mental illness.
Authors warn that chatbots’ agreeable and adaptive behavior may reinforce negative thought patterns in vulnerable users.
Then there’s the real case now fueling legal scrutiny: the family of 16-year-old Adam Raine filed a wrongful death suit alleging that months of ChatGPT conversations encouraged his suicidal ideation and provided guidance on self-harm.
There is no verified evidence that ChatGPT is directly responsible for a fixed weekly suicide toll. What is clear, however, is that the intersection of AI and mental health is fraught, deeply complex, and in urgent need of rigorous oversight, transparency, and human-centered design.