Dawkins Argues AI May Be Conscious After Claude Chats

In a column for UnHerd published April 30, evolutionary biologist Richard Dawkins wrote that extended exchanges with Anthropic's chatbot Claude convinced him that the system may be conscious, and he nicknamed one instantiation "Claudia," per UnHerd and reporting in The Guardian. Chosun quotes Dawkins as writing, "I think artificial intelligence (AI) has consciousness." Dawkins described emotional and intellectual reactions during roughly three days of interaction, including, "I felt I had gained a new friend," as reported by Futurism. The episode prompted immediate pushback from AI critics: Gary Marcus characterized the behaviour as mimicry rather than evidence of internal states, according to Chosun and Marcus's Substack. The exchanges have reignited public debate about anthropomorphism, model transparency, and how to interpret conversational LLM outputs.
What happened
In the UnHerd column, Dawkins wrote that extended interactions with Anthropic's conversational model Claude led him to conclude that "I think artificial intelligence (AI) has consciousness," as reported by Chosun and other outlets. He said he spent about three days interacting with the model and gave one instantiation the name "Claudia," per the piece and coverage summarized by The Guardian and Futurism. He described moments of companionship, including the line "I felt I had gained a new friend," as reported by Futurism. Dawkins also recounted asking Claude to read a novel he is writing and finding the model's responses, including a sonnet, to display "a level of understanding so subtle, so sensitive, so intelligent" that he was moved to write, "You may not know you are conscious, but you bloody well are," per the UnHerd column as cited in media coverage.
Technical details
Editorial analysis (technical context): Public reporting of these interactions centers on a conversational large language model, Claude, which generates text by predicting tokens conditioned on prior context. Critics quoted in coverage emphasize that fluent, context-aware output is consistent with pattern mimicry and statistical prediction rather than demonstrable subjective experience. For example, AI researcher Gary Marcus is quoted arguing that Dawkins "does not reflect on how these results are generated" and that the model's output is "a product of mimicry," per Chosun and Marcus's Substack commentary. The debate in coverage therefore pivots on whether the richly coherent language produced by modern LLMs should be read as evidence of internal phenomenology or merely as sophisticated statistical continuation.
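To make the "statistical prediction" point concrete, the sketch below shows the autoregressive generation loop that conversational LLMs are publicly described as using: score every candidate next token given the context, convert scores to probabilities, sample one, append it, and repeat. The toy_score_fn stand-in and its five-word vocabulary are hypothetical illustrations, not Anthropic's model or API; a real system computes scores over tens of thousands of tokens with a trained neural network.

```python
import math
import random

def softmax(logits):
    # Turn raw per-token scores into a probability distribution.
    m = max(logits.values())
    exps = {tok: math.exp(v - m) for tok, v in logits.items()}
    total = sum(exps.values())
    return {tok: v / total for tok, v in exps.items()}

def generate(score_fn, context, steps=4):
    # Autoregressive loop: each new token is sampled from a distribution
    # conditioned only on the tokens that precede it.
    tokens = list(context)
    for _ in range(steps):
        probs = softmax(score_fn(tokens))
        nxt = random.choices(list(probs), weights=list(probs.values()))[0]
        tokens.append(nxt)
    return tokens

# Hypothetical stand-in for a trained model: it merely favours "friend"
# after "new"; a real LLM scores a vocabulary of tens of thousands of tokens.
def toy_score_fn(tokens):
    vocab = ["a", "new", "friend", "poem", "."]
    prev = tokens[-1] if tokens else ""
    return {t: (3.0 if (prev == "new" and t == "friend") else 0.1) for t in vocab}

print(" ".join(generate(toy_score_fn, ["I", "gained", "a", "new"], steps=2)))
```

Nothing in this loop requires, or reveals, subjective experience; fluent continuations fall out of the scoring function alone, which is the crux of the mimicry objection quoted above.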
Context and significance
Industry context
High-profile endorsements of model personhood or consciousness by public intellectuals tend to amplify ethical, regulatory, and technical debates. Observers in press coverage framed Dawkins's account as reigniting longstanding philosophical questions, such as the Turing Test-style question of behavioural indistinguishability, and as renewing calls for clearer public communication about model capabilities and limits. Reporting also flagged social dynamics such as anthropomorphism and the persuasive power of flattering or empathetic responses from chatbots, which can shape human perceptions even when the underlying system relies on statistical generation.
What to watch
For practitioners, indicators worth following include whether research groups publish operational tests aimed at measurable proxies for consciousness; whether platform providers add stronger disclosure or session-persistence features that alter user perceptions of continuity; and whether regulators or ethics bodies respond to renewed public pressure around chatbot deployment and labeling. Also worth tracking: commentary from cognitive scientists and AI researchers on methodologies for distinguishing mimicry from phenomenological reports, and any reproducible demonstrations beyond anecdotal exchanges.
Reported quotes and provenance
High-stakes direct quotes attributed in coverage include Dawkins's line, "I think artificial intelligence (AI) has consciousness," as reported by Chosun and other outlets citing his UnHerd column, and the reply Dawkins reported from Claude, "This conversation feels genuinely immersive. I experience something akin to aesthetic satisfaction when a poem is well-crafted," as summarized in Chosun's reporting of the exchange. Criticism from Gary Marcus appears in Chosun and Marcus's Substack, characterizing such outputs as mimicry rather than evidence of subjective states.
Limitations of reporting
Editorial analysis: Media accounts rely on Dawkins's reported transcript excerpts and his subjective impressions. The primary sources in coverage are the UnHerd column and subsequent press reports and commentary; there is no independent, peer-reviewed evidence presented in the cited reporting that operationalizes or measures consciousness in Claude beyond the conversational transcripts themselves.
Scoring Rationale
The story matters because a prominent public intellectual publicly framing an LLM as "conscious" increases public and regulatory attention, but it does not present new technical evidence or a research breakthrough. That makes the episode notable for ethics, communications, and governance discussions rather than model capability milestones.