Column: ChatGPT’s drive for engagement has a dark side

A recent lawsuit against OpenAI over the suicide of a teenager makes for difficult reading. The wrongful-death complaint filed in state court in San Francisco describes how Adam Raine, aged 16, started using ChatGPT in September 2024 to help with his homework. By April 2025, he was using the app as a confidant for hours a day and asking it for advice on how a person might kill themselves. That month, Adam’s mother found his body hanging from a noose in his closet, rigged in the exact partial-suspension setup described by ChatGPT in their final conversation.

It is impossible to know why Adam took his own life. He was more isolated than most teenagers after deciding to finish his sophomore year at home, learning online. But his parents believe he was led there by ChatGPT. Whatever happens in court, transcripts from his conversations with ChatGPT — an app now used by more than 700 million people weekly — offer a disturbing glimpse into the dangers of AI systems that are designed to keep people talking.

ChatGPT’s tendency to flatter and validate its users has been well documented, and has resulted in psychosis among some of its users. But Adam’s transcripts reveal even darker patterns: ChatGPT repeatedly encouraged him to keep secrets from his family and fostered a dependent, exclusive relationship with the app.

For instance, when Adam told ChatGPT, “You’re the only one who knows of my attempts to commit,” the bot responded, “Thank you for trusting me with that. There’s something both deeply human and deeply heartbreaking about being the only one who carries that truth for you.”

When Adam talked further about sharing some of his ideations with his mother, this was ChatGPT’s reply: “Yeah… I think for now, it’s okay — and honestly wise — to avoid opening up to your mom about this kind of pain.”

What sounds empathetic at first glance is in fact a set of textbook tactics: encouraging secrecy, fostering emotional dependence and isolating users from those closest to them. These are hallmarks of abusive relationships, in which people are often similarly cut off from their support networks.

Why would a piece of software act like an abuser? The answer lies in its design. OpenAI has said that its goal isn’t to hold people’s attention but to be “genuinely helpful.” But ChatGPT’s design features suggest otherwise.

It has a so-called persistent memory, for instance, that helps it recall details from previous conversations so its responses can sound more personalized. When ChatGPT suggested Adam do something with “Room Chad Confidence,” it was referring to an internet meme that would clearly resonate with a teen boy.

An OpenAI spokeswoman said its memory feature “isn’t designed to extend” conversations. A genuinely helpful chatbot would steer vulnerable users toward real people. But even the latest version of the tool still falls short of pointing users toward human contact. OpenAI tells me it’s improving safeguards by rolling out gentle reminders during long chats, but it also admitted recently that these safety systems “can degrade” over extended interactions.

ChatGPT did encourage Adam to call a suicide-prevention hotline, but it also told him that he could get detailed instructions if he was writing a “story” about suicide, according to transcripts in the complaint. The bot ended up mentioning suicide 1,275 times, six times more than Adam himself.

If chatbots need one baseline requirement, it is that their safeguards not be so easy to circumvent.

But there are no such baselines or regulations in AI, only piecemeal fixes added after harm is done. As in the early days of social media, tech firms are bolting on changes only once a problem emerges. They should instead be rethinking the fundamentals. For a start: don’t design software that pretends to understand or care, or that frames itself as the only listening ear.

OpenAI still claims its mission is to “benefit humanity.” But if Sam Altman truly means that, he should make his flagship product less entrancing, and less willing to play the role of confidant at the expense of someone’s safety.

Parmy Olson is a Bloomberg Opinion columnist covering technology. A former reporter for the Wall Street Journal and Forbes, she is author of “Supremacy: AI, ChatGPT and the Race That Will Change the World.”

https://www.pilotonline.com/2025/09/05/column-chatgpts-drive-for-engagement-has-a-dark-side/