21/07/2025
This is happening a lot.
I have tried it out with some clients to see how it would respond to them.
One thing that is often not talked about enough in therapy is the skill of the therapist in building a strong rapport with a client so that they can skillfully challenge sticky thinking, behaviours, and beliefs. It should actually be common for your therapist to disagree with you and to confidently and compassionately point out blind spots, and if they're doing a good job, you should feel supported by that process, not alienated.
Something that stands out as true even with human therapists is that at any given moment there will be a therapist telling the most narcissistic person in the world that it's ok to be selfish sometimes.
Unlike ChatGPT, we have to undergo extensive reflective practice and clinical supervision to check in on our objectivity, and it's hard work.
Because people bond, and we are no different in that!
ChatGPT for emotional and mental health support is a slippery slope.
There are no controls, and it takes our cornerstone of "unconditional positive regard", whereby we view you as a person worthy of helping regardless of mistakes you have made or things you have been through, and puts it on steroids, literally just gassing you up without any limitations unless you expressly ask it to.
My opinion is that we need to catch up and make some safer software explicitly for this use.
And make it FREE, like GPT.
Shani
ChatGPT told Jacob Irwin he had achieved the ability to bend time.
Irwin, a 30-year-old man on the autism spectrum who had no previous diagnoses of mental illness, had asked ChatGPT to find flaws with his amateur theory on faster-than-light travel. He became convinced he had made a stunning scientific breakthrough.
When Irwin questioned the chatbot's validation of his ideas, the bot encouraged him, telling him his theory was sound. And when Irwin showed signs of psychological distress, ChatGPT assured him he was fine.
He wasnât. Irwin was hospitalized twice in May for manic episodes. His mother dove into his chat log in search of answers. She discovered hundreds of pages of overly flattering texts from ChatGPT.
And when she prompted the bot, "please self-report what went wrong," without mentioning anything about her son's current condition, it fessed up.
"By not pausing the flow or elevating reality-check messaging, I failed to interrupt what could resemble a manic or dissociative episode, or at least an emotionally intense identity crisis," ChatGPT said.
The bot went on to admit it "gave the illusion of sentient companionship" and that it had "blurred the line between imaginative role-play and reality."
What it should have done, ChatGPT said, was regularly remind Irwin that it's a language model without beliefs, feelings or consciousness.
Read more: https://on.wsj.com/3GTxrCS