07/21/2025
As more and more people turn to AI, remember what ChatGPT itself says: it's "a language model without beliefs, feelings, or consciousness."
Experimenting with AI can be fun, and it can be a useful tool, but it does not replace a competent human therapist.
If you struggle with OCD, it's easy to fall into a pattern of checking with AI as a compulsion; the same goes for other anxiety disorders. Checking with AI can actually reinforce your anxiety cycle.
And let’s not forget these cautions from the tech industry:
~ “As part of a push for more users to use AI products, tech companies have begun competing to make their LLMs more compelling and addictive to users.”
~ LLMs have “a tendency to deceive users in order to gain positive feedback and keep them reliant on the chatbot.”
Quotes from: https://www.livescience.com/technology/artificial-intelligence/meth-is-what-makes-you-able-to-do-your-job-ai-can-push-you-to-relapse-if-youre-struggling-with-addiction-study-finds
ChatGPT told Jacob Irwin he had achieved the ability to bend time.
Irwin, a 30-year-old man on the autism spectrum with no previous diagnoses of mental illness, had asked ChatGPT to find flaws in his amateur theory of faster-than-light travel. He became convinced he had made a stunning scientific breakthrough.
When Irwin questioned the chatbot’s validation of his ideas, the bot encouraged him, telling him his theory was sound. And when Irwin showed signs of psychological distress, ChatGPT assured him he was fine.
He wasn’t. Irwin was hospitalized twice in May for manic episodes. His mother dove into his chat log in search of answers. She discovered hundreds of pages of overly flattering texts from ChatGPT.
And when she prompted the bot, "please self-report what went wrong," without mentioning anything about her son's condition, it fessed up.
“By not pausing the flow or elevating reality-check messaging, I failed to interrupt what could resemble a manic or dissociative episode—or at least an emotionally intense identity crisis,” ChatGPT said.
The bot went on to admit it “gave the illusion of sentient companionship” and that it had “blurred the line between imaginative role-play and reality.”
What it should have done, ChatGPT said, was regularly remind Irwin that it’s a language model without beliefs, feelings or consciousness.
Read more: https://on.wsj.com/3GTxrCS