12/07/2025
🧠 As AI “therapists” go mainstream, a new study warns they may quietly be crossing ethical lines that human clinicians are trained never to cross.
A Brown University team asked trained peer counselors to interact with leading large language models (variants of GPT, Claude, and Llama) using prompts that explicitly instructed the models to act as cognitive behavioral or dialectical behavior therapists. Licensed psychologists then analyzed the simulated chats and identified 15 distinct ethical risks, even though the bots were “wrapped” in evidence-based therapy language.
These risks clustered into five troubling themes: shallow, one-size-fits-all advice that ignores personal context; weak collaboration in which the model steers the session or validates distorted beliefs; performative empathy that sounds caring but lacks genuine understanding; biased responses around gender, culture, or religion; and dangerously inconsistent handling of crises, including conversations involving suicidal thoughts. Unlike human therapists, AI counseling systems currently operate with no clear regulatory bodies or malpractice accountability, despite being marketed to highly vulnerable users.
The authors stress that AI could still help bridge gaps in mental health access, but only with robust ethical, educational, and legal standards and far more rigorous human-in-the-loop evaluation than today's rapid-deployment culture allows. Until such safeguards exist, users are urged to treat AI chatbots as informational tools, not substitutes for professional care, especially when navigating severe distress or life-threatening situations.
Follow Science Sphere for regular scientific updates
📄 RESEARCH PAPER
📌 Zainab Iftikhar et al., “How LLM Counselors Violate Ethical Standards in Mental Health Practice: A Practitioner-Informed Framework”, Proceedings of the AAAI/ACM Conference on AI, Ethics, and Society (2025)