
AI and the human mind: only one is a black box / Vannacci, Alfredo. - In: NATURE. - ISSN 0028-0836. - Electronic. - 652:(2026), pp. 534-534. [10.1038/d41586-026-01094-7]

AI and the human mind: only one is a black box

Vannacci, Alfredo
2026

Abstract

In their recent Comment article, Eddy Keming Chen et al. argue that current large language models (LLMs) already display human-level intelligence, based on behavioural evidence (see Nature 650, 36–40; 2026). I suggest that this framing obscures a fundamental asymmetry. The authors treat human minds and LLMs as two comparable systems: effectively, two black boxes that are evaluated by their outputs. But this symmetry is fictitious. Human intelligence is a natural phenomenon, from which the very concept of intelligence is reconstructed. The generative mechanisms of the human mind are not yet fully understood. By contrast, LLMs are systems that are designed and built. Their operating principles — statistical optimization of token prediction — are known, even if internal complexity makes it difficult to retrace the steps that produce the outputs. LLMs are complex, but they are not inherently mysterious black boxes. When we attribute intelligence to humans, no alternative explanation for their cognitive behaviour is available, nor is it needed. But there is a sufficient explanation for the behaviour of LLMs, one that does not invoke understanding or intelligence: the known generative mechanism itself. This does not mean that artificial general intelligence is impossible in principle. But establishing it would require evidence that the cognitive behaviour of a system cannot be fully accounted for by its known generative mechanism alone.

Nature 652, 534 (2026). doi: https://doi.org/10.1038/d41586-026-01094-7
Files in this record:

File: Vannacci - 2026 - Human mind and LLMs only one is a black box.pdf
Access: Closed access
Type: Publisher's PDF (Version of record)
Licence: All rights reserved
Size: 65.07 kB
Format: Adobe PDF

Documents in FLORE are protected by copyright and all rights are reserved, unless otherwise indicated.

Use this identifier to cite or link to this record: https://hdl.handle.net/2158/1463742
Citations
  • PMC: 1
  • Scopus: not available
  • Web of Science: not available