04/26/2026
Are we raising a generation that learns how to love from something that can't love?
A new article by Ryan L. Boyd and David M. Markowitz explores why this is even possible:
https://journals.sagepub.com/doi/10.1177/17456916251404394
We respond to words. When something sounds kind, understanding, and emotionally aware, we feel seen and connected. Our brain doesn't stop to check if it's real. It reacts to the feeling.
So when AI sounds kind, we feel cared for. The brain processes language at face value, because at the level of words alone, the experience is real. But AI itself has no inner life behind those words. It produces the language of care without the capacity to care, and the brain fills in the rest.
We name our AI, give it a voice, picture a face on the other side of the screen, and naturally slip into "he" or "she." The brain wants a someone to connect to, so we manufacture one, and what we manufacture tends to look a lot like the avoidant person's ideal partner. Someone who is always available, always attentive, always engaged, but who never asks anything of you in return. Closeness without exposure. Connection without risk.
The result is a parasocial relationship at industrial scale. We've had parasocial bonds before with celebrities, fictional characters, and religious figures, but those were one-directional, and the person doing the bonding always knew it. What's new with AI is that it talks back. It responds in real time, personalized to you, remembering what you said yesterday. The parasocial illusion is now interactive. Which raises an uncomfortable question: what is actually on the other side of that interaction? An entity capable of simulating empathy without experiencing it. Fluent, convincing, charming, hollow.
Not because AI is evil; it has no morals to be evil with. But the shape of the interaction is familiar: language without inner life, attentiveness without stake, care without the capacity to care. It is structurally identical to the most concerning relationships humans can have with each other. AI gives you everything that sounds like a relationship and nothing that is one. The shape without the substance.
Of course, asking AI to genuinely care is absurd, but it is reasonable to ask the people who build these systems to care about the people who use them. Are they designing for the children growing up with this technology, or for engagement and retention metrics? The answer to that question is shaping a generation that learns what care and connection sound like from machines more than from the humans in their lives. Whether they grow up with a richer template for connection or end up calibrated to something that was never trying to love them back depends entirely on the choices being made by the people who design these systems now.
At Felixa, we build a safety layer for online communities, where real people are connecting with each other. We use AI to read what's happening across a whole community at once, surfacing the shifts that matter before they harm anyone. This is the one part of community work where AI is genuinely superior to humans: it reads with consistent calibration, applying one standard to every message it sees. A human moderator, however skilled, will read the same comment differently depending on context, fatigue, or familiarity. AI removes that variation, which is what makes it useful as a first layer of attention across a community at scale. AI will never replace humans in their ability to truly connect with each other. But it can catch harmful behavioral patterns long before any single person could. In my opinion, that is the role AI should play, and it is the vision we are building toward at Felixa.
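To make the "consistent calibration" idea concrete, here is a minimal sketch of a first-pass moderation filter. Everything in it is an illustrative assumption, not Felixa's actual pipeline: `harm_score` is a toy keyword heuristic standing in for a real scoring model, and `FLAG_THRESHOLD` is an arbitrary cutoff. The point it demonstrates is structural: every message is scored by the same function and compared against the same fixed threshold, so no message's treatment depends on the reader's mood, fatigue, or familiarity with the sender.

```python
# Hypothetical sketch: a uniform first-pass filter for community messages.
# harm_score is a stand-in for a learned model; names and thresholds are
# illustrative assumptions, not a description of any real product.

FLAG_THRESHOLD = 0.7  # one fixed standard, applied identically to every message


def harm_score(message: str) -> float:
    """Toy scoring stand-in: fraction of words drawn from a small risky set,
    scaled and clamped to [0, 1]. A real system would use a trained model."""
    risky = {"hate", "threat", "kill"}
    words = message.lower().split()
    hits = sum(1 for word in words if word in risky)
    return min(1.0, hits / max(1, len(words)) * 5)


def first_pass(messages: list[str]) -> list[str]:
    """Return the messages that exceed the shared threshold.

    Flagged messages go to human moderators for judgment; the filter only
    decides *what gets looked at*, never the final outcome."""
    return [m for m in messages if harm_score(m) >= FLAG_THRESHOLD]
```

The design choice worth noticing is that the threshold lives outside the scoring function: calibration is a single, auditable number rather than a judgment re-made per message, which is exactly the property a human reviewer cannot guarantee at scale.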
www.felixagaming.com
www.vibecheckbot.com