When AI tools promote a form of ‘metacognitive laziness’

Opinion: Students using AI tools from the outset are not gaining a fundamental understanding of the subject they are studying, say Alex Sims and Dulani Jayasuriya.

In discussions about AI and teaching, it’s almost impossible to avoid the buzzword phrases ‘AI literacy’ and ‘prompt engineering’. However, imagine a medical student who aces every AI-assisted assignment but can’t diagnose a patient without ChatGPT. Or a computer science graduate who writes flawless code with Copilot but can’t debug a simple loop manually. These aren’t dystopian scenarios; they’re emerging realities.

A groundbreaking MIT study (its working paper was published in June this year) highlights an uncomfortable truth: people can produce well-written material, yet struggle to learn when they use AI to assist with creating that material.

MIT’s Media Lab monitored the brain activity of 54 participants tasked with writing essays. Participants were divided into three groups: those who used AI (ChatGPT) to assist them; those who used a search engine (Google) to find sources; and those who relied on their own brains (memory) alone.

The AI group showed dramatically reduced neural engagement (measured by examining electrical activity in the brain); their brains essentially went into standby mode. When asked to recreate their work without the help of AI an hour later, they couldn’t remember what they’d written. In contrast, participants who used Google maintained neural engagement nearly identical to those using their brains alone, and recall in both of those groups was significantly better than in the AI group.

Why is there such a large discrepancy between the groups? The acts of searching, evaluating sources, comparing information and making connections keep the brain active and assist with learning. It’s the cognitive equivalent of the difference between watching someone else swim and physically learning to swim in the water: swimming is learned through practical experience and repeated movement.

The MIT Media Lab study affirms what many teachers at schools, universities and elsewhere are seeing every day: students are developing what appear to be sophisticated capabilities. Text generated by AI tools such as ChatGPT outperforms human writing in terms of logic, vocabulary, sentence structure, text linking, syntactic complexity and composition. Yet students using AI tools from the outset are not gaining a fundamental understanding of the subject they are studying.

MIT’s Media Lab study also showed the wider harmful effects of reliance on AI by testing a crossover condition the researchers called ‘LLM-to-Brain’. When participants who originally wrote essays with AI assistance were asked to write essays without it, they showed diminished activation of critical thinking skills. This finding resonates with other studies, which found that those who often used AI bypassed deep engagement with the material, leading to ‘skill atrophy’ in tasks such as brainstorming and problem solving.

But when participants who originally wrote from memory were asked to write an essay with AI assistance (‘Brain-to-LLM’), they wrote substantially better essays. Using AI tools to help with rewriting was more engaging for the brain, which suggests that AI can be a useful tool for learning.

This hybrid approach maintained higher cognitive engagement while still leveraging AI’s capabilities. However, to avoid the negative effects and harness the positive effects, AI must be deployed at a later stage in a learner’s journey, after they have gained a good grounding in the content. This highlights a fundamental truth: how novices use AI must be radically different from how experts use it.

It’s hardly surprising that students taking the ‘hard’ route of learning the work by themselves learn more than those taking the easy and convenient route of using AI tools from the outset. As the MIT study found, using AI tools early promotes a form of ‘metacognitive laziness’, as students offload their cognitive and metacognitive responsibilities to AI.

Care must always be taken in the AI debate. In another MIT study, ‘State of AI in Business 2025’, one professional observed that ChatGPT is “excellent for brainstorming and first drafts”. Yet that professional was knowledgeable about the subject matter they were working on. For such experts, ChatGPT and similar AI tools are useful, and increasingly essential. However, students are not approaching the subject matter with the same level of knowledge, and, as we have seen, not only does AI assistance hinder their learning, it also diminishes their ability to brainstorm and problem solve.

An experienced programmer instantly recognises when GitHub Copilot suggests inappropriate code. A student cannot. Handing both groups the same tools with the same instructions is as dangerous as giving medical students and surgeons identical authority in an operating room.

The ability to think must come before the ability to prompt. Understanding must precede automation. And validation must accompany any generative AI output.

The choice is clear from a teaching perspective: use AI in a way that strengthens human cognition, or accept that we’re raising a generation whose minds have outsourced knowledge and thinking. Because if one lesson is clearer than any other, it is this: the ability to generate sophisticated-looking outputs that may or may not be correct, without understanding them, is the opposite of education.

What makes the Brain-to-LLM finding interesting is that it shows AI doesn’t have to diminish cognition. The sequence and process of learning matter more than the tool. Writing first, then enhancing with AI, maintains the neural pathways essential for learning while still capturing AI’s efficiency gains. It’s the difference between using a calculator after understanding math and never learning math because calculators exist, a distinction that could determine whether we’re educating minds or merely training prompt engineers.

Dr Alex Sims is a professor in the Department of Commercial Law at the University of Auckland Business School.

Dr Dulani Jayasuriya is a senior lecturer at the University of Auckland Business School. 

This article reflects the opinion of the authors and not necessarily the views of Waipapa Taumata Rau University of Auckland.

This article was first published on Newsroom, When students’ brains go quiet, 2 October 2025.

Media contact

Margo White | Research communications editor
Mob: 021 926 408
Email: margo.white@auckland.ac.nz