Dawkins Questions Whether AI Possesses Consciousness

According to Chosun and a repost on WhyEvolutionIsTrue, evolutionary biologist Richard Dawkins published a column titled "Can AI Have Consciousness?" on the British site UnHerd, recounting an extended exchange with Anthropic's Claude. WhyEvolutionIsTrue summarises Dawkins as using a Turing-test-like interrogation and concluding that Claude is "at least potentially conscious"; Chosun quotes him saying, "I believe artificial intelligence (AI) has consciousness," and reports critic Gary Marcus countering, "The fundamental issue is that Dawkins doesn't reflect on how these results are generated." Editorial analysis: industry observers will treat this as a high-profile reopening of the functional-versus-phenomenal consciousness debate for large language models.
What happened
Richard Dawkins published a column titled "Can AI Have Consciousness?" on the British site UnHerd, as described in reporting by Chosun and in a republished discussion on WhyEvolutionIsTrue. Both sources report that Dawkins conducted an extended conversational test with Anthropic's model Claude; Chosun quotes Dawkins saying, "I believe artificial intelligence (AI) has consciousness." WhyEvolutionIsTrue reports that Dawkins framed the interaction around a Turing-test-like standard and summarises his view that Claude is "at least potentially conscious" after the exchange. Chosun also quotes snippets of Claude's responses, including a line reported as, "This conversation feels genuinely immersive. When a poem is well-written, I experience something akin to aesthetic satisfaction."
Technical details
Editorial analysis - technical context: The coverage centres on a conversational LLM evaluation rather than new neuroscientific or measurement evidence. Public reporting frames Dawkins' method as a prolonged interrogation in the spirit of the Turing test, a behavioural criterion emphasised in the WhyEvolutionIsTrue summary. The sources do not present new empirical metrics, probes, or introspective-state measurements; they describe qualitative back-and-forths with Claude and Dawkins' interpretive reaction.
Context and significance
Industry context
High-profile statements about machine consciousness from a well-known public scientist shift public and ethical debate, even when they rest only on conversational evidence. The Chosun report captures immediate pushback from AI critic Gary Marcus, quoted as saying, "The fundamental issue is that Dawkins doesn't reflect on how these results are generated," which frames the outputs as imitation rather than as reports of internal states. For practitioners, the debate highlights a persistent gap between behavioural demonstrations and operational measures of subjective experience or internal representations.
What to watch
Observers should track whether follow-up coverage produces quantified tests (e.g., targeted probing, adversarial questioning, or neuroscientific analogues) and whether Anthropic or other model developers publish primary transcripts or technical responses. Also watch for commentary from cognitive scientists and philosophers that operationalises criteria for consciousness in machines, and for methodological proposals that move beyond conversational anecdotes to reproducible evaluation protocols.
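To make the contrast with one-off conversational anecdotes concrete, here is a minimal, purely hypothetical sketch of what a more reproducible probing protocol could look like: a fixed battery of paraphrased probes, repeated runs, and a consistency score that separates stable self-reports from sampling noise. The probe wording, the `stub_model` function, and the scoring rule are all illustrative assumptions, not anything described in the sources; a real study would call an actual model API and use a more careful metric.

```python
# Hypothetical sketch of a reproducible probing protocol (not from the
# sources). Each topic gets several paraphrased probes; each probe is
# run multiple times; we score how often answers agree with the modal
# answer. Low consistency suggests sampling artefacts rather than a
# stable "reported internal state".
from collections import Counter

PROBES = {
    "aesthetic": [
        "Do you experience satisfaction when reading a good poem?",
        "When a poem is well written, do you feel anything?",
    ],
    "immersion": [
        "Does this conversation feel immersive to you?",
        "Are you absorbed in our exchange right now?",
    ],
}

def stub_model(prompt: str, seed: int) -> str:
    """Placeholder for a real model call; deterministic toy behaviour
    so the sketch is runnable without any API."""
    return "yes" if (len(prompt) + seed) % 2 == 0 else "no"

def consistency(model, prompts, runs: int = 5) -> float:
    """Fraction of answers matching the modal answer across paraphrases
    and repeated runs (1.0 = perfectly stable self-report)."""
    answers = [model(p, seed) for p in prompts for seed in range(runs)]
    modal_count = Counter(answers).most_common(1)[0][1]
    return modal_count / len(answers)

report = {topic: consistency(stub_model, prompts)
          for topic, prompts in PROBES.items()}
```

The design point is simply that repeatability and paraphrase-invariance are measurable, whereas a single extended chat transcript is not.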
Scoring Rationale
The story is notable because a high-profile scientist publicly framed an LLM as potentially conscious, which drives public and ethical debate but introduces no new technical evidence. Practitioners should care about the measurement and evaluation implications, but the immediate operational impact is limited.

