Paul Hewett Critiques llms.txt as a Marketing Shortcut

Paul Hewett, CEO of In Marketing We Trust, wrote an opinion piece for Mumbrella, published April 30, 2026, in which he quotes his reaction to llms.txt as "How fucking stupid." Hewett describes llms.txt as a markdown file placed in a site's root, proposed in 2024 as a way to map content for language models. He argues the file has been repackaged for marketers as a shortcut to AI visibility and does not address the deeper problem of content extraction and synthesis by platforms. Hewett says the modern web routes attention and revenue to platforms that serve synthesized answers rather than to originators, reducing clicks and citations for creators. The piece counsels marketers that llms.txt is a distraction from the larger attribution and distribution challenges posed by generative AI.
What happened
Paul Hewett, CEO of In Marketing We Trust, published an opinion piece in Mumbrella on April 30, 2026, arguing that llms.txt is ineffective for marketing aims. Hewett quotes his immediate reaction as "How fucking stupid," and describes llms.txt as "a markdown file you place in the root directory of your website," which he says was proposed in 2024 as a documentation workaround. The article reports Hewett's view that the file has since been marketed to marketers as a way to achieve AI visibility and to be cited in generative-AI responses.
Technical details
Hewett writes that llms.txt is intended as a curated, plain-text map of a site's content to help language models find high-quality material. The article frames the format as analogous to familiar web primitives such as robots.txt but notes it was not designed as a universal SEO fix.
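For readers unfamiliar with the format, the 2024 llms.txt proposal sketches a simple markdown layout: an H1 title, a blockquote summary, and H2 sections containing annotated links. A minimal illustrative example (the site name, section names, and URLs below are hypothetical, not taken from the article) might look like:

```markdown
# Example Site

> A one-paragraph plain-text summary of what the site covers,
> intended to orient a language model before it follows any links.

## Docs

- [Getting started](https://example.com/docs/start.md): Installation and setup guide
- [API reference](https://example.com/docs/api.md): Endpoint and parameter details

## Optional

- [Changelog](https://example.com/changelog.md): Release history
```

The file sits at the site root (e.g. `/llms.txt`), which is what invites the robots.txt comparison Hewett draws.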
Editorial analysis - technical context: Simple, root-level text manifests (for example robots.txt) historically influenced crawler behaviour because search engines honoured those signals. Modern generative-AI pipelines, however, often rely on large-scale scraping, licensed datasets, and retrieval systems that do not necessarily respect site-level plain-text manifests. Companies and projects proposing file-based heuristics for model citations therefore face structural limitations when competing with platform-level aggregation and synthesis.
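The robots.txt analogy can be made concrete with Python's standard-library `urllib.robotparser`, which implements the crawler-side honouring of such a manifest. The user-agent names and paths below are illustrative; the point is that the signal only works when the crawler chooses to consult it:

```python
from urllib import robotparser

# A hypothetical robots.txt blocking one AI crawler while allowing others.
robots_txt = """\
User-agent: GPTBot
Disallow: /

User-agent: *
Allow: /
"""

rp = robotparser.RobotFileParser()
rp.parse(robots_txt.splitlines())

# A compliant crawler checks before fetching; nothing enforces this server-side.
print(rp.can_fetch("GPTBot", "https://example.com/article"))        # -> False
print(rp.can_fetch("SomeOtherBot", "https://example.com/article"))  # -> True
```

Enforcement is entirely voluntary on the crawler's side, which is the structural limitation the analysis above points to: llms.txt inherits the same property without the decades of incentive alignment that made robots.txt widely honoured.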
Industry context
Editorial analysis: Hewett frames the core issue as a shift in how web content is consumed: platforms increasingly deliver synthesized answers that may not surface original links or creator attribution. Comparable reporting and commentary have raised similar concerns about referral traffic, citation, and creator revenue when content is transformed into synthesized outputs.
What to watch:
- whether search and AI-platform operators adopt any standardized provenance or citation layers that reference origin URLs or metadata
- commercial tooling that pairs provenance metadata with licensing or paywalled access, rather than relying on root-file heuristics
- marketing vendor uptake of llms.txt services and whether those services materially change referral or citation metrics
Bottom line
Hewett's piece expresses skepticism that llms.txt delivers meaningful marketing outcomes and places the question within the broader challenge of attribution and distribution under generative-AI-driven consumption. The article is an editorial call for marketers to scrutinize simple, familiar fixes that may not address structural changes in how content is aggregated and served.
Scoring Rationale
The story is a sector-relevant critique of a proposed tooling trend (llms.txt) that matters to marketers and platform integrators but does not introduce new models or standards. It highlights practical limitations practitioners should consider when investing in SEO-for-AI tactics.