Author Critiques Widespread, Performative AI Use

On KittyGiraudel.com, Kitty Giraudel publishes a first-person essay arguing that AI adoption in industry and on social platforms has become pervasive and often performative. She writes that coding models such as Opus 4.5 and GPT-5.2 became game-changers in late 2025 and that adoption accelerated across workplaces. She recounts joining Scilife in October 2024, when AI use there was still limited, and says she now uses tools such as Cursor and Claude for drafting and review. The essay's TL;DR, as stated by the author: AI can be useful, but forced and performative adoption harms individuals and the open web.
What happened
Kitty Giraudel published a personal essay titled "You Cannot Spell 'Pain' Without AI" on KittyGiraudel.com describing her frustrations with contemporary AI use in industry and on social platforms. She writes that AI had moved toward workplace ubiquity by 2024, and that coding models such as Opus 4.5 and GPT-5.2 became genuine game-changers in late 2025. She reports joining Scilife in October 2024, when AI adoption there was limited. She also describes routine personal use of AI for drafting, research, and editorial review with tools she names, including Cursor and Claude, and calls performative, AI-written content on LinkedIn a recurring irritation.
Editorial analysis: technical context
Industry reporting and practitioner experience document a rapid acceleration in developer-focused and general-purpose foundation models since 2024. Comparable accounts note that the faster launch cadence of models with stronger coding and reasoning capabilities compresses evaluation and integration cycles for engineering teams. For practitioners, this compressed cycle increases pressure to establish robust evaluation, monitoring, and provenance controls before deploying AI-generated outputs into customer-facing or record-keeping workflows.
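As a minimal sketch of what such a pre-deployment control might look like, the following gate requires AI-generated drafts to clear explicit checks before reaching a customer-facing workflow. The `Draft` structure, the check list, and the model identifier are illustrative assumptions, not anything described in the essay.

```python
from dataclasses import dataclass

@dataclass
class Draft:
    text: str
    model: str          # hypothetical model identifier, e.g. "claude-opus-4.5"
    human_reviewed: bool

def passes_guardrails(draft: Draft) -> bool:
    """Return True only if the draft clears every deployment check."""
    checks = [
        bool(draft.text.strip()),   # non-empty output
        len(draft.text) < 10_000,   # size sanity bound (arbitrary threshold)
        draft.human_reviewed,       # human-in-the-loop sign-off
    ]
    return all(checks)

draft = Draft(text="Release note drafted with AI assistance.",
              model="claude-opus-4.5", human_reviewed=True)
print(passes_guardrails(draft))  # True
```

The point of the sketch is that the gate is explicit and auditable: a draft that skips human review fails the check rather than silently shipping.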
Industry context
Observed patterns in similar transitions show that public platforms often amplify low-effort outputs because algorithmic engagement favors clarity and brevity. This creates incentives for actors to outsource voice and interaction to models, which can degrade signal, complicate moderation, and blur provenance of authorship. Those are industry-level dynamics, not claims about any single organization's motives.
What to watch
For practitioners: watch how teams operationalize guardrails for AI-assisted writing and commenting, including provenance tagging, human-in-the-loop policies, and documentation practices. Observers should also track platform-level moderation and verification features that attempt to surface when content is AI-assisted, and the ecosystem of tools that audit or watermark model outputs.
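One possible shape for the provenance tagging mentioned above is a small metadata record attached to each piece of AI-assisted content. This is a hypothetical sketch: the field names ("model", "assistance_level") are assumptions for illustration, not an established standard such as C2PA.

```python
import hashlib
import json
from datetime import datetime, timezone

def provenance_tag(text: str, model: str, assistance_level: str) -> dict:
    """Build a minimal, auditable provenance record for a piece of text.

    assistance_level is an assumed vocabulary: "drafted", "edited", or "none".
    """
    return {
        "sha256": hashlib.sha256(text.encode("utf-8")).hexdigest(),
        "model": model,
        "assistance_level": assistance_level,
        "tagged_at": datetime.now(timezone.utc).isoformat(),
    }

tag = provenance_tag("Post drafted with AI help.", "claude", "drafted")
print(json.dumps(tag, indent=2))
```

Hashing the content ties the record to one exact version of the text, so a later edit invalidates the tag and forces re-tagging, which is the behavior a provenance scheme generally wants.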
Bottom line
The essay is a practitioner's critique that balances personal adoption of AI with concerns about performative use and downstream effects on discourse and the open web. The piece is qualitative and experiential rather than a data-driven study, and the author does not provide organizational roadmaps or proprietary metrics.
Scoring Rationale
This is an experiential essay that highlights cultural and operational issues around AI adoption rather than new technology or data. It is relevant to practitioners as a prompt to examine provenance, moderation, and governance, but it does not introduce technical breakthroughs or new tooling.