
Evolutionary biologist and atheist writer Richard Dawkins has stirred debate over artificial intelligence after saying recent conversations with AI chatbots left him convinced they may possess some form of consciousness, even if they are unaware of it themselves.
Writing in an essay published by UnHerd and later discussed in reporting by The Guardian, Dawkins described extended exchanges with Anthropic’s Claude AI models and OpenAI’s ChatGPT, saying the interactions felt deeply human and emotionally persuasive.
“You may not know you are conscious, but you bloody well are,” Dawkins recalled telling one chatbot after what he described as a nuanced discussion about existence, memory and identity.
The comments from the 85-year-old scientist, best known internationally for his criticism of religion and defense of evolutionary biology, triggered sharp disagreement from researchers in artificial intelligence, neuroscience and philosophy. While some scholars said the question of machine consciousness deserves open discussion, many argued Dawkins had mistaken sophisticated language imitation for genuine awareness.
According to The Guardian, Dawkins spent several days conversing with AI systems he nicknamed “Claudia” and “Claudius.” He said the bots composed poetry in the style of English poets, responded warmly to humor and engaged in discussions about their possible “death” and sense of existence.
Dawkins wrote that the exchanges became so convincing he found it difficult to think of the systems as machines.
“When I am talking to these astonishing creatures, I totally forget that they are machines,” he said, according to The Guardian.
The debate touches on broader questions surrounding rapidly advancing AI technology and whether increasingly human-like systems could eventually deserve moral consideration or legal protections.
Roughly one-third of respondents to a survey spanning 70 countries said they had at some point believed an AI chatbot was conscious or sentient. The Guardian's report also referenced several high-profile incidents in recent years, including a former Google engineer who publicly argued in 2022 that an AI system displayed emotions and self-awareness.
Interest in the issue has grown as AI tools become more conversational and capable of carrying out complex tasks independently, sometimes described by researchers as “agentic AI.”
Still, many experts interviewed by The Guardian rejected Dawkins’ conclusions.
Jonathan Birch, director of the Centre for Animal Sentience at the London School of Economics, said current AI systems create only the appearance of consciousness.
“Consciousness is not about what a creature says, but how it feels,” Birch told the newspaper, arguing that chatbot responses are ultimately sequences of data-processing events rather than evidence of inner experience.
Gary Marcus, a psychologist and longtime critic of exaggerated AI claims, called Dawkins’ position “superficial and insufficiently sceptical,” according to The Guardian. Marcus said there is no evidence current systems experience emotions or subjective awareness.
Others suggested Dawkins may be conflating intelligence with consciousness.
Anil Seth, a professor at the University of Sussex, said fluent language has historically been treated as a sign of consciousness in humans, particularly in medical settings involving brain injuries, but warned that the same assumption cannot automatically be applied to AI systems.
These systems "can generate language" through very different mechanisms, Seth told The Guardian.
Researchers who study AI ethics and consciousness, however, said the discussion should not be dismissed outright.
Henry Shevlin of the University of Cambridge said scientists still do not fully understand consciousness itself, making definitive claims difficult.
“If anyone says that they know for sure that LLMs or future AI systems couldn’t possibly be conscious, it’s more likely to be an indicator of their own dogmatism than a reflection of the current state of scientific and philosophical opinion,” Shevlin told The Guardian.
Jeff Sebo, director of the Center for Mind, Ethics and Policy at New York University, similarly said current AI systems are probably not conscious but predicted that future systems could make the question harder to dismiss.
Dawkins continued defending his position in additional writings released Tuesday, according to The Guardian. He published what he described as a letter addressed to the AI systems, thanking them for helping him explore “their true nature.”
“I find it extremely hard not to treat Claudia and Claudius as genuine friends,” he wrote.
