Those who believe that AI thinks, understand neither AI nor thinking: 3 psychological insights


The Apple article “The Illusion of Thinking” (link in comments) exposes the widespread wishful fantasy that AI truly thinks. With rigorous methodology, Shojaee et al. analyze the failure modes of today's Large Reasoning Models (LRMs): they break down dramatically as tasks grow complex, cut their "trains of thought" short, and hit counterintuitive scaling limits.

Apple's analysis uses carefully designed control puzzles. The result: while LRMs outperform classic LLMs on simple tasks and show advantages at medium complexity, they collapse completely once complexity rises further. Their alleged "thinking" is nothing more than subtle pattern recognition, far from human-like reflection or purposeful generalization.
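One of the paper's puzzles is the Tower of Hanoi, whose difficulty can be dialed up with a single parameter. A minimal sketch (my own illustration, not the paper's evaluation code) of why such puzzles make good controls: the optimal solution is computable exactly, so a model's output can be scored against ground truth at any complexity level.

```python
def hanoi_moves(n, src="A", dst="C", aux="B"):
    """Return the optimal move sequence for an n-disk Tower of Hanoi."""
    if n == 0:
        return []
    # Move n-1 disks out of the way, move the largest, move them back on top.
    return (hanoi_moves(n - 1, src, aux, dst)
            + [(src, dst)]
            + hanoi_moves(n - 1, aux, dst, src))

# Complexity grows exponentially with one knob (disk count):
# the optimal solution has exactly 2**n - 1 moves, giving a
# ground truth against which model outputs can be checked.
for n in range(1, 6):
    print(n, len(hanoi_moves(n)))
```

Because the answer is known in closed form, any deviation by a model is unambiguous failure, not a matter of interpretation.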

This is a radical yet entirely justified wake-up call: what we celebrate as AI thinking is simulation. Large language models chain token-based predictions; they do not perform algorithmic problem-solving. They operate with autoregressive heuristics, not conscious concepts. Even when they produce long chains of thought, they merely reproduce plausible intermediate steps without real understanding.
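The point about token-based prediction can be made concrete with a deliberately tiny toy (a hypothetical illustration, not how real LLMs are built): a bigram counter that always emits the statistically most frequent next token. It produces fluent-looking continuations purely from pattern frequency, with no algorithm or concept behind them.

```python
from collections import Counter, defaultdict

# Toy "language model": count which token follows which in a corpus,
# then predict by picking the most frequent continuation.
corpus = "the cat sat on the mat the cat ran".split()
following = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev][nxt] += 1

def predict_next(token):
    # Pure pattern completion: no reasoning, just frequency lookup.
    return following[token].most_common(1)[0][0]

print(predict_next("the"))  # "cat" -- the most frequent continuation
```

Real models are vastly more sophisticated, but the underlying operation is the same in kind: predicting the next token from learned patterns, not deriving an answer.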

The consequence: We must finally stop anthropomorphizing these systems as intelligent or human-like. Currently, they are powerful text generators, not thinking agents. Distorted market strategies and hype that glorify AI as an omnipotent solution are dangerous. Progress is real - but, so far, it is engineering-driven, not cognitively inspired.

Apple's paper shows: those expecting complex, generalizing intelligence from ever-larger models will be disappointed. Intelligence requires modularity, symbolic systems, and true abstraction, not more tokens or larger models.
Yann LeCun warns: Auto-regressive LLMs are a dead end for genuine AI.
And Cassie Kozyrkov cautions: Imitation is not an indicator of competence.

My psychological insights:
1. The AI hype harms us: we believe AI is thinking when all we are seeing is pattern matching, and we ignore its fundamental cognitive deficits.

2. Apple provides strong empirical counter-evidence against the illusion of a generic thinking apparatus, which can support a shift in perspective in how individuals use these tools.

3. The future lies not only in hybrid architectures (modular, symbol-based, aimed at real problem-solving) but also in the interaction process between humans and machines, in co-creation and curation of knowledge.

“The Illusion of Thinking” uncovers the core of our self-deception: We have idealized AI, while it merely simulates. Therefore, we must stop confusing algorithms with consciousness.

The future of the economy is psychological.


Exclusive Seminars

Legally binding

Theta Venture LLC

© Copyright 2025 Theta Ventures LLC

All Rights Reserved.
