Dr. Ewa J. Antczak

Dr. Ewa J. Antczak, Founder and President at Core Centre, CEO at Felixa. Passionate about tech, AI and gaming. Read more and book your appointment here: https://corecentre.online/.

Core Centre Online now offers safe, confidential and convenient online video appointments for clients. You can access our services from the comfort of your own home or office, or even on the go, via most computers or mobile devices with a stable internet connection. Our experienced therapists offer online video counselling for a range of issues, including marital problems, family conflicts, communication difficulties, youth-related issues, anxiety, depression, trauma, anger management and parenting, among others. Psychotherapy is intended to help people maintain a state of wellness, learn new strategies for dealing with difficult times, and recover from serious setbacks and traumas in their lives. We work from a strengths-based perspective, looking for solutions that help people move on from their problems. Just a mouse click away. Take advantage of $50.00 off any session: type "COVID" while booking.

04/28/2026

Proud to share that Mikolaj Antczak and Bella Stasiuk presented our research at the Let’s Talk About Youth Conference at York University.

Their presentation explored the psychology and technology behind harmful online behaviours, with a focus on how behavioural science and AI can help us better understand and improve the digital spaces shaping young people’s lives.

To learn more about our findings visit: www.felixagaming.com

#OnlineSafety #Research #Felixa

04/26/2026

Are we raising a generation that learns how to love from something that can't love?
A new article by Ryan L. Boyd and David M. Markowitz explores why this is even possible:
https://journals.sagepub.com/doi/10.1177/17456916251404394
We respond to words. When something sounds kind, understanding, and emotionally aware, we feel seen and connected. Our brain doesn't stop to check if it's real. It reacts to the feeling.

So when AI sounds kind, we feel cared for. The brain processes language at face value, because at the level of words alone, the experience is real. But AI itself has no inner life behind those words. It produces the language of care without the capacity to care, and the brain fills in the rest.
We name our AI, give it a voice, picture a face on the other side of the screen, and naturally slip into "he" or "she." The brain wants a someone to connect to, so we manufacture one, and what we manufacture tends to look a lot like the avoidant person's ideal partner. Someone who is always available, always attentive, always engaged, but who never asks anything of you in return. Closeness without exposure. Connection without risk.

The result is a parasocial relationship at industrial scale. We've had parasocial bonds before with celebrities, fictional characters, and religious figures, but those were one-directional and the person always knew it. What's new with AI is that it talks back. It responds in real time, personalized to you, remembering what you said yesterday. The parasocial illusion is now interactive. Which raises an uncomfortable question: what is actually on the other side of that interaction? An entity capable of simulating empathy without actually experiencing it. Fluent, convincing, charming, hollow.
Not because AI is evil. AI has no morals to be evil with. The shape of the interaction, which is language without inner life, attentiveness without stake, care without the capacity to care, is structurally identical to the most concerning relationships humans can have with each other. AI gives you everything that sounds like a relationship and nothing that is one. The shape without the substance.

Of course, asking AI to genuinely care is absurd, but it is reasonable to ask the people who build these systems to care about the people who use them. Are they designing for the children growing up with this technology, or for their engagement and retention? The answer to that question is shaping a generation that learns what care and connection sound like from machines, more than from the humans in their lives. Whether they grow up with a richer template for connection or end up calibrated to something that was never trying to love them back depends entirely on the choices being made by the people who design these systems now.

At Felixa, we build a safety layer for online communities, where real people are connecting with each other. We use AI to read what's happening across a whole community at once, surfacing the shifts that matter before they harm anyone.

This is the one part of community work where AI is genuinely superior to humans. It reads with consistent calibration, applying one standard to every message it sees. A human moderator, however skilled, will read the same comment differently depending on context, fatigue, or familiarity. AI removes that variation, which is what makes it useful as a first layer of attention across a community at scale.

AI will never replace humans in their ability to truly connect with each other. But it can catch harmful behavioural patterns long before any single person could. In my opinion, that is the role AI should play, and it is the vision we are building toward at Felixa.
www.felixagaming.com
www.vibecheckbot.com
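The "consistent calibration" idea above can be put in concrete terms. The sketch below is purely illustrative: the keyword-based `toxicity_score`, the threshold, and the window size are toy stand-ins I've invented for the example, not Felixa's actual model or parameters. The point it demonstrates is the single fixed rule applied to every message, plus a rolling community-level average from which a shift could be surfaced.

```python
# Illustrative sketch only: a toy scorer and an arbitrary threshold stand in
# for a real model. What matters is that the SAME rule scores every message.
from collections import deque

HOSTILE_WORDS = {"idiot", "loser", "trash"}  # toy lexicon, for illustration


def toxicity_score(message: str) -> float:
    """Toy scorer: fraction of words found in the hostile lexicon."""
    words = message.lower().split()
    if not words:
        return 0.0
    return sum(w in HOSTILE_WORDS for w in words) / len(words)


class CommunityMonitor:
    """Applies one fixed threshold to every message (consistent calibration)
    and keeps a rolling window of scores as a community-level signal."""

    def __init__(self, threshold: float = 0.25, window: int = 50):
        self.threshold = threshold
        self.scores = deque(maxlen=window)

    def check(self, message: str) -> bool:
        """Score a message and flag it against the single shared standard."""
        score = toxicity_score(message)
        self.scores.append(score)
        return score >= self.threshold  # same rule for every message seen

    def community_level(self) -> float:
        """Average score across the recent window: the community-wide view."""
        return sum(self.scores) / len(self.scores) if self.scores else 0.0
```

Unlike a human moderator, the `check` rule never drifts with fatigue or familiarity, which is the property the paragraph above is describing.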

04/09/2026

I am pleased to share that our work, The Psychology and Technology Behind Safer Online Communities, developed with students from the University of Toronto and the University of Guelph, has been accepted as an oral presentation at the Let’s Talk About Youth: A Clinical Developmental Psychology Conference at York University on April 27, 2026.

This work focuses on identifying early behavioural signals and changing what happens next in online interactions. It continues to build the foundation for Felixa.

Looking forward to sharing this work and continuing to advance safer digital environments.

#HumanBehavior #Psychology #YouthDevelopment #TrustAndSafety #Research #Innovation #TechForGood

04/02/2026

I am glad to share that our paper, Online Toxicity as a Behavioral System: A Framework for Detection and Behavior Change in Digital Environments, has just been published as a preprint.

You can read it here: https://dx.doi.org/10.2139/ssrn.6487678

In this work, we show that online harmful behaviours can be detected early and changed in real time.
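As a rough illustration of what "detected early" can mean mechanically, the sketch below compares a recent window of per-message harm scores against an earlier baseline and flags an upward shift before it peaks. The scores, window sizes, and `jump` cutoff are all hypothetical values chosen for the example; this is not the framework from the paper.

```python
# Illustrative sketch only: detect an upward drift in per-message harm scores
# by comparing a recent window against an earlier baseline. All parameters
# here are made-up defaults, not values from the published framework.
def escalation_detected(scores, baseline_n=5, recent_n=5, jump=0.2):
    """Return True when the average of the last `recent_n` scores exceeds
    the average of the first `baseline_n` scores by more than `jump`,
    i.e. the behaviour is trending harmful rather than merely noisy."""
    if len(scores) < baseline_n + recent_n:
        return False  # not enough history to compare windows
    baseline = sum(scores[:baseline_n]) / baseline_n
    recent = sum(scores[-recent_n:]) / recent_n
    return recent - baseline > jump
```

A trigger like this fires on the trend rather than on any single message, which is what makes intervention possible before the worst messages appear.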

03/26/2026

I am happy to contribute to this important conversation on AI in gaming at the Nordic Game 2026

Nordic Game 2026 Premieres AI Track, First Talks Revealed: For the first time, Nordic Game Spring 2026 will have a dedicated session track for the topic of AI in gaming. Now, the organisers have revealed the first talks and speakers from the track: Alastair Chamorro, CETO of Infinity Fiction, Alex Ch...

At the Consulate General of the Republic of Poland we celebrated International Women’s Day and the inauguration of the C...
03/07/2026

At the Consulate General of the Republic of Poland we celebrated International Women’s Day and the inauguration of the Canada Poland Association of Polish Businesswomen.

Thank you to the Consul General for hosting and to Sabina Klimek, Ph.D., for bringing this initiative to life.


11/10/2025

🏆 Our Work on Ethical AI Wins 2nd Place at the Ontario Psychological Association Awards 💛🤖✨

I am so excited to share that our research poster, “Psychology and technology behind safer online spaces,” was recognized by the Ontario Psychological Association (OPA) and named a Second Place Poster Finalist. 🏆
This recognition means a lot, and I am very proud of my amazing team of youth advisors who worked for months on this research project:
Eden Thomas, Yaqi Huang, Mikolaj Antczak
Undoubtedly, further research on the ethical use of AI in psychology is essential to ensure that our tool is ethical, effective and secure.


11/04/2025

🚀 What an energizing evening at Together Toronto (organized by TechTO) at MaRS Discovery District.

I had the opportunity to step into the spotlight and in just 10 seconds pitch Felixa: a next-generation platform blending behavioral science and AI to create safer online communities.

Huge thanks to everyone I met tonight who shared their insights, left thoughtful comments, and opened doors to new collaborations. I’m especially looking forward to connecting with potential tech partners, investors, and researchers who are just as committed to digital wellbeing + ethical innovation.

Address

220 Burnhamthorpe Road West
Mississauga, ON

Telephone

+1 647-515-4357

Website

http://felixagaming.com/
