Poibeau Examines Conversational AI's Impact on Poetry

Thierry Poibeau publishes a critical interdisciplinary book, Understanding Conversational AI, that interrogates what LLMs know and how they reshape language, cognition and cultural practice. The book draws on philosophy of language, linguistics, cognitive science and AI ethics to analyze how large language models simulate reasoning, perform translation, offer moral judgments and generate literary texts. Poibeau highlights LLMs' aesthetic appeal in poetry while stressing their limitations: embedded biases, automation of cultural labor, misinformation risks and platform enclosure. The book reframes questions about understanding, creativity, agency and trust in a world where synthetic language is pervasive. For practitioners, the book synthesizes conceptual tools to evaluate model outputs, deploy generative systems responsibly and rethink evaluation beyond surface-level metrics.
What happened
Thierry Poibeau released a critical interdisciplinary book, Understanding Conversational AI, via Ubiquity Press that reassesses what LLMs know and how they alter language, cognition and cultural practices. The book synthesizes insights from philosophy of language, linguistics, cognitive science and AI ethics to show how conversational systems can simulate reasoning, perform translation, offer moral judgments and produce literary texts such as poetry, while the systems themselves remain conceptually and ethically contested.
Technical details
Understanding Conversational AI examines the mechanisms by which LLMs generate meaning rather than asserting that they possess human-like understanding. Poibeau breaks down model behavior into representational and procedural layers, then evaluates when surface fluency masks brittle generalization. He foregrounds three technical and evaluative themes:
- model competence versus claims of understanding, including failure modes where token-level prediction creates plausible but incorrect inferences
- measurement gaps where standard benchmarks miss aesthetic, cultural and moral dimensions of outputs
- ways biases are encoded and amplified through training data and platform affordances
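The measurement-gap theme can be made concrete with a toy sketch. Surface-similarity metrics in the BLEU family reward token overlap with a reference, so a word-scrambled line can outscore a fluent, poetically apt one. The scoring function and example lines below are illustrative assumptions, not material from the book:

```python
from collections import Counter

def ngram_overlap(candidate, reference, n=1):
    """Fraction of candidate n-grams that also appear in the reference:
    a crude surface-similarity score in the spirit of BLEU precision."""
    def ngrams(tokens, n):
        return Counter(tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1))
    cand = ngrams(candidate.split(), n)
    ref = ngrams(reference.split(), n)
    if not cand:
        return 0.0
    matched = sum(min(count, ref[gram]) for gram, count in cand.items())
    return matched / sum(cand.values())

reference = "the moon hangs low over the silent river"

# A fluent continuation that varies the imagery, and a meaningless
# scramble that reuses only the reference's own words:
apt = "the moon hangs low over the sleeping town"
garbled = "low hangs the the moon over river silent"

print(ngram_overlap(apt, reference))      # 0.75
print(ngram_overlap(garbled, reference))  # 1.0 — the scramble scores higher
```

The scramble achieves a perfect unigram-overlap score while the coherent line is penalized for its fresh vocabulary, which is exactly the kind of aesthetic blind spot the book argues current metrics cannot capture.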
Context and significance
The book matters because it translates theoretical concerns into operational questions for practitioners. It reframes evaluation beyond accuracy to include cultural embeddedness, metaphorical depth and affective resonance, criteria Poibeau argues are central to poetic judgment but poorly captured by current metrics. He also connects these aesthetic criteria to real-world risks: automated content moderation, misinformation pathways and the enclosure of language labor by platforms. For teams building or auditing generative systems, the book is a compact guide to interrogating both technical choices and socio-ethical trade-offs.
What to watch
Poibeau pushes readers to develop evaluative frameworks that combine qualitative critique with quantitative testing. Expect this work to influence scholarship, course syllabi and governance discussions around model evaluation, creative AI, and deployment ethics.
Scoring Rationale
The book provides a rigorous interdisciplinary framework useful to practitioners evaluating generative systems, but it is analytical rather than a new tool or model release, so impact is moderate.