Monday, September 1, 2025

Beyond Calculation: Why Ancient Philosophers Would Say AI Can’t Truly Think

In my classroom, students are quick to offer opinions on whether AI is intelligent. They can assess its ability to analyze, evaluate, and communicate. But when I ask whether AI can truly “think,” I’m often met with blank stares. Intelligence and thinking may seem synonymous, but philosophers have spent millennia drawing careful distinctions between them. While ancient Greek thinkers like Plato and Aristotle never knew of modern technology, their ideas about intellect and thought offer a powerful framework for understanding what’s at stake with artificial intelligence today.

What It Means to Think, According to Plato

In the Republic, Plato used the analogy of a “divided line” to separate higher forms of understanding from lower ones. At the very top of his hierarchy sat “noesis”: a direct, intuitive grasp of truth that he treated as a property of the soul. For Plato, this kind of knowing goes beyond reason or sensory perception, and it can only be achieved by an embodied being.

Below noesis, but still above his dividing line, was “dianoia,” or reasoning. Farther down, Plato placed lower forms of understanding. The lowest of all was “eikasia,” a baseless opinion rooted in false perception. This concept offers a useful parallel to AI’s frequent “hallucinations,” in which it generates plausible but inaccurate information. From a Platonic perspective, AI may be good at a very low form of comprehension, but it fundamentally lacks the highest, intuitive form of understanding that is essential for true thinking.

Aristotle’s Embodied Mind

Aristotle, Plato’s student, further explored the concepts of intelligence and thinking. In his work On the Soul, he distinguished between “active” and “passive” intellect. He argued that while passive intellect receives sensory impressions from the body, active intellect, which he called “nous,” transcends bodily perception to make meaning from experience. For Aristotle, thinking requires both the physical, passive reception of impressions and the immaterial, active work of making sense of them. He, too, believed that genuine thinking requires a body.

Aristotle’s ideas on rhetoric and phronesis (practical wisdom) also shed light on why AI falls short. He viewed rhetoric as the observation and evaluation of how emotion and character influence people’s thinking. This kind of nuanced understanding of human behavior requires a body and feeling, something AI fundamentally lacks. Likewise, phronesis involves the lived experience needed not only to think the right thing, but also to apply those thoughts toward “good ends” and virtuous action. AI may analyze vast datasets to reach conclusions, but it cannot consult the wisdom or moral insight that comes from a life of experience.

The Problem of Embodiment

In the modern world, AI is taking on many physical forms, from self-driving cars to humanoid robots. This might lead us to believe that AI is getting closer than ever to human thought. However, according to both Plato and Aristotle, AI’s physical forms are still not “bodies” in the human sense. They run on code, algorithms, and data sets, not on lived, perishable experience.

Intuitive understanding, emotion, and practical wisdom seem to require a consciousness that is moved by experience. As the original article, “Can AI think – and should it? What it means to think, from Plato to ChatGPT,” points out, even when AI is given a physical form, it’s not truly thinking. It’s simply following a set of rules and probabilities. The very consciousness that would allow it to “think” is missing.

Ultimately, the philosophical distinction between intelligence and thinking provides a compelling reason to be skeptical. While AI can analyze and generate, it cannot truly feel, understand, or live. The best evidence for this may come from the AI itself. When prompted with the question, “Can you think?” ChatGPT responded: “I don’t have consciousness, emotions, intentions, or awareness. Everything I ‘do’ is based on patterns learned from huge amounts of text… I don’t truly think or understand in the human sense.” It seems that on the question of whether it can think, AI and ancient philosophy are surprisingly aligned.
