
28/06/2025
MIT Study Highlights Potential Cognitive Risks Associated with Prolonged Use of AI Language Models
A recent study from the Massachusetts Institute of Technology has raised important concerns about the neurocognitive implications of prolonged reliance on large language models (LLMs), such as ChatGPT, for academic writing and other cognitive tasks.
The longitudinal study followed 54 university students over a four-month period. Participants were divided into three cohorts: those using ChatGPT, those using traditional search tools (e.g., Google), and those working unaided. Researchers employed electroencephalography (EEG) to monitor brain activity during and after task completion.
The findings showed that regular use of ChatGPT for writing tasks was associated with significantly reduced neural activity in brain regions linked to memory consolidation and executive function. Specifically, LLM users exhibited lower cognitive engagement, poorer recall of their own written content, and reduced originality in their output, a phenomenon the authors described as a form of “mental passivity.”
The study, titled “Your Brain on ChatGPT: Accumulation of Cognitive Debt when Using an AI Assistant for Essay Writing Task,” also identified a risk of over-reliance on AI-generated responses, which can foster echo chambers and discourage critical appraisal. Notably, even after transitioning to unaided tasks, previously frequent LLM users continued to show blunted neural engagement, suggesting a possible lingering effect on cognitive flexibility.
Conversely, participants who initially performed tasks without AI support showed increased neural activation when later introduced to AI tools, suggesting that establishing foundational cognitive engagement first may enable more effective and balanced use of AI in cognitive workflows.
These findings suggest that while AI tools offer substantial efficiency benefits, their integration into educational and clinical environments must be approached cautiously. AI may serve best as an adjunct to, rather than a replacement for, active human cognition.