Opinion · llm · reading pedagogy · whole language
Author Critiques LLMs For Encouraging Predictive Thinking
Relevance Score: 6.1
In an essay dated January 17, 2026, a high-school teacher and essayist argues that large language models (LLMs) such as ChatGPT mirror the discredited "whole-language" approach to reading instruction: both rely on probabilistic, context-based guessing rather than direct attention to the evidence on the page. He warns that token-by-token prediction reduces texts and people to predictable patterns and dulls the capacity for surprise, and he urges educators to teach evidence-based reading and open attention so that readers can uncover unexpected insights.
