Research Alert
March 1, 2026 | Vol. 83, No. 6

When Are AI Shortcuts Bad for Student Writing?

Artificial Intelligence | Reading & Writing
[Illustration: two halves of a brain, one pink and one dark grey. Credit: Anna / Adobe Stock]
      A trio of recent scientific studies from around the world seem to flash a warning that the very thing generative AI is so good at doing—reducing the cognitive load of curating information and wrangling words onto a page—could make it bad for learning.
      The studies were grounded in cognitive load theory—the concept that learning requires working memory to juggle three types of mental load: intrinsic (dealing with the complexity of the material), extraneous (filtering out irrelevant details and distractions), and germane (processing information and building mental schema to anchor learning). For writing tasks, one might think AI tools could reduce intrinsic and extraneous load by curating information and presenting it in a digestible way so students’ brains can focus on the germane load of writing about what they’ve learned.
      Yet when German researchers randomly assigned university students to research and write about the potential risks of nanoparticles in sunscreen using either a traditional web search or ChatGPT, students using ChatGPT wrote less accurate and nuanced responses despite finding the task to be much easier (Stadler et al., 2024). Students using traditional web searches, on the other hand, found the task harder and had to think more deeply about the content, but had stronger written responses.
      Researchers in China found similar results after examining the impact of university students using AI to support revisions to their writing (Fan et al., 2025). They randomly sorted students into four types of support for revising essays: (1) a ChatGPT bot aligned to a grading rubric, (2) live online chats with a writing expert, (3) a self-evaluation checklist based on the rubric, and (4) no support at all. The essays from the students who used ChatGPT were scored highest of all four groups, yet those students engaged in less metacognitive evaluation; many simply cut and pasted text from ChatGPT into their revisions. Although they got a better grade, that score didn’t reflect any greater knowledge of the content than the other groups.
      At issue seems to be what students think (or don’t think) about while writing. Researchers in India measured students’ brain waves during a writing task completed with and without AI and found lower brain wave activity when students relied on AI (Dhawan et al., 2025). Those students also recalled fewer details of what they had read and written.
      What should educators do with these findings? First, replace writing tasks that AI can readily do (e.g., summarizing historical events) with those it cannot (e.g., drawing on personal ethics to develop and defend historical arguments). Second, help students understand that while AI makes researching and writing easier, easier does not mean better. Ultimately, students only learn what they think about, which is why the process of writing matters more than the product: Writing forces us to do the tough mental work of arranging concepts into our own mental schema and become, in a word, educated.
      References

      Dhawan, N., Bhasin, S., Gupta, A., & Khalkho, J. T. (2025). Understanding the impact of AI on the cognitive thinking of students. SSRN, 5433395.

      Fan, Y., Tang, L., Le, H., Shen, K., Tan, S., Zhao, Y., et al. (2025). Beware of metacognitive laziness: Effects of generative artificial intelligence on learning motivation, processes, and performance. British Journal of Educational Technology, 56, 489–530.

      Stadler, M., Bannert, M., & Sailer, M. (2024). Cognitive ease at a cost: LLMs reduce mental effort but compromise depth in student scientific inquiry. Computers in Human Behavior, 160, 108386.

      Bryan Goodwin is the head of the McREL Institute at Region 13. Goodwin is a former teacher and journalist and writes a monthly research column for Educational Leadership. He presents research findings and insights to audiences across the United States and in Canada, the Middle East, and Australia.

From our issue: Literacy in the Age of AI