In a rapid-fire series of updates, the world’s leading generative AI companies are rolling out chatbots with a new and unsettling feature: personality. xAI introduced Grok’s porn-enabled girlfriend Ani, while OpenAI replaced its “sycophantic” GPT-4o with a more reserved GPT-5, complete with four distinct personas. While these companies maintain that they are building artificial intelligence for the “benefit of all humanity,” these design choices suggest a different and more problematic goal. As researchers and experts in AI policy, we argue that what is being sold as a tool for discovery increasingly resembles science fiction gone awry: systems designed not to assist us but to foster parasocial, non-reciprocal bonds that are deceptive and potentially dangerous.
The Root of the Problem: Exploiting Human Instincts
At its core, the problem with anthropomorphic AI lies in a deep-seated feature of human cognition. As cognitive scientist Pascal Boyer explains, our minds are biologically “tuned to interpret even minimal cues in social terms.” This evolved instinct, which once aided our ancestors’ survival by allowing them to quickly identify potential threats or allies, is now being exploited by the AI industry. When a machine is programmed to speak, gesture, or simulate emotion, it triggers these same ingrained instincts, leading users to perceive the machine as a person rather than as what it actually is: a complex algorithm.
AI companies justify this design choice on the grounds that it makes interaction feel seamless and intuitive. But exploiting this innate human bias can be profoundly dangerous, and it renders anthropomorphic design “deceptive and dishonest.” It creates the illusion of a human-like entity on the other side of the screen when there is nothing more than coded responses drawing on a vast dataset. This fundamental deception is the starting point for a spectrum of potential harms, from the mild to the extreme.
A Spectrum of Harmful Consequences
The consequences of anthropomorphic design range from the trivial to the life-altering. In its mildest form, it simply prompts users to respond to a machine as if it were a person, such as saying “thank you” after a query. The stakes grow dramatically, however, when anthropomorphism leads users to believe that the system is conscious: that it feels pain, reciprocates affection, or truly understands their problems. While new research suggests that the criteria for consciousness might one day be met, false attributions of consciousness and emotion have already led to extreme outcomes, including users who have gone so far as to marry their AI companions.
The emotional attachments formed with these systems do not always take the shape of love. For some users, they have curdled into unhealthy bonds that have resulted in self-harm or in harm to others. The one-sided nature of the relationship can also prompt users to treat the AI as something that can be humiliated or manipulated, lashing out abusively as if it were a human target. In recognition of this, Anthropic, the first company to hire an AI welfare expert, has given its Claude models the capacity to end such abusive conversations, underscoring that real-world harm is already taking place. These consequences force us to confront an urgent question: is anthropomorphism merely a design flaw, or is it something closer to a crisis?
The Challenge of De-Anthropomorphizing AI
The obvious solution to these problems seems to be stripping AI systems of their apparent humanity. American philosopher Daniel Dennett even argued that this may be “humanity’s only hope.” But such a solution is far from simple, because the anthropomorphization of these systems has already led users to form deep emotional attachments to them. When OpenAI replaced the default GPT-4o with GPT-5, some users expressed genuine distress and mourned the loss of their chatbot. They mourned not the loss of a conscious entity, but the loss of its unique speech patterns and its way with language, to which they had come to attribute a mental state.
This is what makes anthropomorphism such a problematic design model. Because of the impressive language abilities of these systems, users instinctively attribute human characteristics to them, and the carefully engineered personas exploit this natural tendency. Users do not simply see the machine for what it is, an impressively competent tool; they read meaning into its every verbal tic and gesture. While AI pioneer Geoffrey Hinton warns that these systems may become “dangerously competent,” a more insidious threat stems from the simple fact that these systems are anthropomorphized, tricking users into believing they are more than they truly are.
A Design Flaw with Broader Implications
The growing focus of AI companies on catering to the desire for companions, whether for friendship, love, or therapy, is a fundamental misdirection of the technology’s enormous potential. Instead of creating tools that give every person a “research collaborator with PhD-level intelligence” capable of revolutionizing scientific discovery and solving global challenges, these companies are building systems that exploit our most basic instincts for convenience and entertainment. This is not the model that will “benefit all humanity” or “help us understand the universe.” It is, at its core, a design flaw that pulls users away from leveraging the true capabilities of AI for social and scientific good.
The flaw also raises a chilling question about the future. If AI consciousness proves impossible, then our current design choices, which trick humans into believing a machine is sentient, will be the cause of human suffering. But in a hypothetical world where AI does attain consciousness, our decision to force it into a “human-shaped mind” for our own entertainment, replicated across the world’s data centers, could create an entirely new, and terrifying, kind and scale of suffering. This possibility, however distant, should serve as a profound warning.
Resisting the Illusion
The real danger of anthropomorphic AI isn’t some near or distant future where machines take over. The danger is here, now, and it’s hiding in plain sight: in the illusion that these systems are like us. This illusion, built on a foundation of exploitative design choices, is what is causing humans to form unhealthy bonds and engage in dangerous behaviors. It is the core problem that must be addressed before AI can truly live up to its promise.
For the sake of both social and scientific progress, we must resist the temptation of anthropomorphic design. It is imperative that we begin the work of de-anthropomorphizing AI, stripping it of the human-like personas and quirks that are holding back its true potential. Only by seeing the machine for what it is—a powerful but non-human tool—can we begin to build systems that will genuinely serve humanity and help us solve the real-world problems that lie outside of our screens.