Anyone who believes that AI thinks understands neither AI nor thinking: insights from psychology

The Apple paper "The Illusion of Thinking" (link in the comments) debunks the omnipresent wishful thinking that AI truly thinks. With an uncompromising methodology, Shojaee et al. analyse how today's Large Reasoning Models (LRMs) break down: they fail dramatically as soon as tasks become more complex, abandon their "thought threads", and hit counterintuitive scaling limits.
Apple's analysis uses well-designed control puzzles whose complexity can be dialled up precisely. The result: while LRMs show advantages on medium-complexity tasks (and classical LLMs even hold their own on simple ones), both collapse completely once complexity crosses a threshold. Their supposed "thinking" is nothing but sophisticated pattern matching, far from human reflection or deliberate generalisation.
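One reason such puzzles make good complexity dials: their optimal solutions grow predictably with a single parameter. As an illustration (assuming, as reported, that Tower of Hanoi is among the paper's puzzles), here is a minimal sketch showing how each added disk doubles the length of the required plan, which is exactly the kind of compositional depth at which the measured collapse occurs:

```python
# Illustration only: Tower of Hanoi as a controllable-complexity puzzle.
# The optimal solution for n disks has 2^n - 1 moves, so difficulty scales
# exponentially in one knob (n) while the rules stay trivially simple.

def hanoi_moves(n, src="A", aux="B", dst="C"):
    """Return the optimal move sequence for n disks as (from_peg, to_peg) pairs."""
    if n == 0:
        return []
    return (
        hanoi_moves(n - 1, src, dst, aux)   # clear n-1 disks onto the spare peg
        + [(src, dst)]                      # move the largest disk to the goal
        + hanoi_moves(n - 1, aux, src, dst) # restack the n-1 disks on top of it
    )

# Each increment of n doubles the plan length: 1, 3, 7, 15, 31, ...
for n in range(1, 8):
    assert len(hanoi_moves(n)) == 2**n - 1
```

The point of the sketch: a system that genuinely executes the algorithm solves every n the same way, whereas a pattern matcher degrades as the plan length grows, which is the gap the paper's methodology is designed to expose.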
This is a radical but absolutely justified wake-up call: what we celebrate as AI thinking is simulation. Large language models chain token-based predictions; they do not perform algorithmic problem-solving. They operate with autoregressive heuristics, not conscious concepts. Even when they output long chains of thought, they merely reproduce plausible-looking intermediate steps without genuine understanding.
The consequence: we must finally stop anthropomorphising these systems as intelligent or human. They are currently powerful text generators, not thinking agents. Distorted market narratives and hype that glorify AI as an omnipotent solution are dangerous. The progress is real, but so far it is engineering-driven, not cognitively inspired.
The Apple paper shows: those who expect complex, generalising intelligence from these systems will inevitably be disappointed. Intelligence requires modularity, symbol systems and real abstraction, not more tokens or larger models.
Yann LeCun warns: Auto-regressive LLMs are a dead-end for genuine AI.
And Cassie Kozyrkov cautions: Imitation is not an indicator of competence.
𝐌𝐲 𝐩𝐬𝐲𝐜𝐡𝐨𝐥𝐨𝐠𝐢𝐜𝐚𝐥 𝐢𝐧𝐬𝐢𝐠𝐡𝐭𝐬:
1. The AI hype harms us: we believe in AI thinking even though we see only patterns, and we ignore the systems' fundamental cognitive deficits.
2. Apple provides hard empirical counter-evidence against the illusion of a generic thinking apparatus, which should prompt each of us to rethink how we use these tools.
3. The future lies not only in hybrid architectures (modular, symbol-based, geared towards real problem-solving), but also in the interaction process between humans and machines, in the co-creation and curation of knowledge.
"The Illusion of Thinking" reveals the core of our self-deception: we have idealised AI while it merely simulates. We must therefore stop confusing algorithms with consciousness.