The first brain-scan study on AI users reveals a hidden cognitive cost.

In a preliminary study published in June 2025 titled “Your Brain on ChatGPT: Accumulation of Cognitive Debt when Using an AI Assistant for Essay Writing Task,” researchers provided the first concrete data on the neurological effects of AI usage.

Empathy

“The key to success is sincerity,” said actor and comedian George Burns. “If you can fake that, you’ve got it made.” Empathy, which involves understanding and sharing the feelings of others, may be even harder to fake. Yet we fake it all the time. We hide our schadenfreude—the pleasure derived from the misfortunes of obnoxious people—behind sweet expressions of sympathy. A weary doctor may not feel, or be able to express, the empathy needed when delivering bad news. But don’t worry, there’s an app for that—really.

A 2023 study found that a panel of licensed healthcare professionals preferred ChatGPT’s responses to patient questions over human physician responses nearly 80% of the time, and rated ChatGPT’s responses as empathetic nearly ten times as often. Subsequent research supports this finding. A recent study tested six large language models (LLMs) on emotional intelligence (EI) assessments designed for humans. The LLMs averaged 82% correct answers, significantly higher than the 56% scored by human participants. In other words, LLMs—which have no emotions of their own—outperformed humans on cognitive assessments of emotional intelligence, including measures of emotional empathy. Given how well they predict what humans might think or feel, it is hard not to credit LLMs with a “theory of mind,” or at least a very convincing simulation of one.

The Amnesia Effect: Users Couldn’t Recall Their Own Work

One of the most striking behavioral results was the immediate impact on memory. Participants were asked to quote a sentence from the essay they had just completed.

  • The AI Group: A staggering 83.3% of participants who used ChatGPT could not provide a correct quotation from their own essay in the first session, and not a single participant in this group produced a fully correct quote. Let that sink in: zero.
  • The Control Groups: In contrast, those who wrote without AI assistance fared far better: only 11.1% of participants in the “Brain-only” and “Search Engine” groups had the same difficulty.

The implication is clear: when AI generates the text for you, you may process the ideas, but you don’t fully internalize them.

The Neural Dimmer Switch: Brain Connectivity Collapsed

The behavioral data was mirrored by the neurophysiological evidence. The researchers found that mental engagement systematically decreased with more AI assistance. The brain essentially powers down parts of its creative network when AI takes over.

  • The “Brain-only” group showed 79 significant neural connections in the alpha band — a frequency associated with internal attention and creative ideation.
  • The LLM group showed just 42 connections.

That’s a dramatic 47% reduction in neural engagement during creative work.
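The “47%” figure follows directly from the two connectivity counts reported above; a quick sketch of the arithmetic:

```python
# Significant alpha-band connections reported in the study
brain_only = 79  # "Brain-only" group
llm_group = 42   # ChatGPT-assisted group

# Relative reduction in measured connectivity
reduction = (brain_only - llm_group) / brain_only
print(f"Reduction in alpha-band connectivity: {reduction:.1%}")  # 46.8%, i.e. roughly 47%
```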

The Atrophy Effect: The Brain Didn’t Bounce Back

Perhaps most concerning, when regular AI users were forced to write without the tool in a later session, their brains didn’t simply return to a “normal” state.

  • Their brains showed significant “under-engagement of alpha and beta networks” compared to those who had practiced without AI all along.
  • The study supports the idea that frequent AI use can lead to skill atrophy in tasks like brainstorming and problem-solving.

Like a muscle that’s forgotten how to work, the neural pathways for independent thought were measurably weaker.

The “Soulless” Output: The Human Difference

While AI-assisted essays often scored well on technical metrics, human teachers evaluating the work had a different perspective. They described the LLM-generated essays as “soulless,” noting that many sentences were empty of content and that the essays lacked personal nuance. Participants themselves called the AI’s output “robotic.”

The bottom line? Relying too heavily on AI weakens memory, reduces neural engagement, and diminishes originality. However, this isn’t a reason to avoid AI; it’s a reason to rethink how we use it.

Why This Feels Familiar: The Ghost of Technologies Past

This phenomenon of cognitive offloading isn’t new. The study cites historical parallels showing that we’ve been here before.

  • The Calculator Precedent: The paper notes educational observations that students who rely heavily on calculators can struggle more when those aids are removed because they haven’t internalized the problem-solving process.
  • The Google Effect: The study also discusses the well-documented “Google Effect,” where reliance on search engines changes how we remember information. We stop retaining the information itself and instead remember where to find it, which discourages deeper cognitive processing.