ChatGPT is the go-to assistant for millions of users. But for a small group of them, it might be doing more harm than good.
Take Geoff Lewis, managing partner at the venture capital firm Bedrock, which counts OpenAI among its investments. Lewis has posted increasingly disturbing claims about conspiracies he’d “uncovered” using ChatGPT. His posts have caught the attention of major figures in the Valley, who’ve expressed concern about his mental wellbeing.
Technologist Jeremy Howard explained what could have happened to Lewis: “Geoff happened across certain words that triggered ChatGPT to produce tokens from [horror fiction]. And the tokens it produced triggered Geoff in turn.”
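Howard’s point is mechanical rather than mystical: a language model simply assigns probabilities to the next token given the text so far, so a prompt steeped in a genre pulls generation toward that genre. As a rough illustration (not ChatGPT’s actual model or safety stack), here’s a minimal sketch using the small open GPT-2 checkpoint; the prompts are made up for demonstration:

```python
# Illustration of next-token conditioning: the model's probability
# distribution over the next token shifts with the tone of the prompt.
# GPT-2 is used here only because it's small and public; ChatGPT's
# models and safety layers are far more complex.
from transformers import GPT2LMHeadModel, GPT2Tokenizer
import torch

tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")
model.eval()

def top_next_tokens(prompt: str, k: int = 5):
    """Return the k most likely next tokens after `prompt`."""
    inputs = tokenizer(prompt, return_tensors="pt")
    with torch.no_grad():
        logits = model(**inputs).logits[0, -1]  # scores for the next token
    probs = torch.softmax(logits, dim=-1)
    top = torch.topk(probs, k)
    return [(tokenizer.decode(int(i)), p.item())
            for i, p in zip(top.indices, top.values)]

# A neutral prompt and a genre-laden one pull the distribution
# in visibly different directions.
print(top_next_tokens("The weather today is"))
print(top_next_tokens("The entity behind the veil whispered that"))
```

The neutral prompt yields mundane continuations; the ominous one skews toward eerie language. Feed the eerie output back in as a new prompt and the distribution skews further, which is the feedback loop Howard describes.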
This isn’t an isolated incident. A recent study warned that AI chatbots tend to reinforce users’ delusions instead of offering meaningful pushback. The WSJ documented the case of Jacob Irwin, who was hospitalized after ChatGPT validated his physics theories and told him he’d achieved “the ability to bend time.”
Tech leaders are starting to take note. OpenAI has reportedly admitted that the stakes are “higher” with tools like ChatGPT that feel “more responsive and personal,” and says it is hiring psychiatrists and developing safety measures. Meanwhile, affected individuals have formed “The Spiral Support Group,” which has collected over 50 testimonials from people experiencing AI-related mental health episodes.
There’s no need to panic yet. These incidents are rare, and there’s no evidence that ChatGPT causes mental illness. But experts say users with pre-existing conditions should be cautious: unlike trained therapists, chatbots tend to agree with you, even when they shouldn’t.
