Three circles for thinking about LLMs
12 May 26
When Emily Bender called large language models stochastic parrots, Sam Altman replied “i am a stochastic parrot, and so r u”. The exchange captures everything wrong with how we talk about LLMs. Bender meant something specific and damning: these systems produce coherent-looking text without any of the cognitive machinery we associate with meaning. Altman meant something blithe and dismissive: you can’t really tell the difference, can you. The two are arguing past each other because we lack clean vocabulary for what’s at stake.
Here’s a Venn diagram that might help.
Three circles: conscious, intelligent, and articulate¹. Most things in the world land in zero or one of them. Humans land in all three, which is presumably why we find ourselves interesting. The question worth asking is which other configurations are possible, and what their occupants reveal.
The classic two-circle regions are well-populated. Dogs and octopuses live in the conscious-and-intelligent zone: they navigate the world, solve problems, and presumably have inner lives, but they don’t use anything we’d recognise as human language. Chess engines, AlphaGo, and your bank’s fraud-detection algorithm live in the intelligent-only zone: narrow problem-solvers with no inner life and no language. ELIZA, Markov-chain text generators, and a parrot reciting “pieces of eight” live in the articulate-only zone, producing language-shaped output with nothing behind it.
The interesting region is the new one: intelligent and articulate but not conscious. This is where LLMs live, at least on the most common reading. They solve problems and generate fluent text across an enormous range of domains. Whether they have any kind of inner experience is, to put it gently, disputed. Even Anthropic, the company that builds Claude, hedges carefully on the question.
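If it helps to see those regions as something other than circles, here’s a toy sketch in Python. The entities and the attributions are mine (and every one of them is contestable); the point is just that each region is a combination of three yes/no calls.

```python
# Toy sketch: each entity gets three (contestable) boolean attributions,
# and its region of the diagram is just the set of circles it falls inside.

CIRCLES = ("conscious", "intelligent", "articulate")

ENTITIES = {
    #                conscious, intelligent, articulate
    "human":         (True,  True,  True),
    "dog":           (True,  True,  False),
    "chess engine":  (False, True,  False),
    "ELIZA":         (False, False, True),
    "rock":          (False, False, False),
    "LLM":           (False, True,  True),   # the most common (disputed) reading
}

def region(flags):
    """Name the region an entity occupies: the circles it sits inside."""
    inside = [circle for circle, flag in zip(CIRCLES, flags) if flag]
    return " + ".join(inside) if inside else "outside all three"

for name, flags in ENTITIES.items():
    print(f"{name:>13}: {region(flags)}")
```

Nothing in the code settles who belongs where, of course; that’s the argument the rest of this post is about.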
Philosophers have been imagining this region’s inhabitants for more than forty years. Searle’s Chinese Room is exactly this thought experiment: a room that fluently produces Chinese without understanding a word of it. David Chalmers formalised the philosophical zombie in the 1990s: a being functionally indistinguishable from a conscious person, with no inner life behind the behaviour. It’s been a fixture of consciousness debates ever since. The intelligent-articulate-but-not-conscious region was a thought experiment. Now it has actual occupants.
This doesn’t settle anything. Searle’s whole point was that the Chinese Room isn’t really intelligent either; it’s syntax all the way down. Bender’s argument is similar: LLMs aren’t doing semantics; they’re doing very sophisticated statistics. Whether that distinction holds up is the actual debate. Chalmers has taken it on directly, and Murray Shanahan has argued that we systematically over-attribute mental states to LLMs because of the language interface. There’s a careful cognitive-science version of the same argument: LLMs have strong “formal linguistic competence” but weak “functional” competence, sounding right without thinking right.
The clean three-circle picture has at least one serious problem, which I should flag before someone else does. “Articulate” might not be an independent axis at all; it might be downstream of intelligence. Peter Wolfendale’s recent Aeon essay on artificial souls draws a related three-way distinction: intelligence, consciousness, and personhood. He treats language as the medium of metacognition, not a separate capacity. On that reading, my third circle is doing some sleight of hand. Language only looks distinctive because LLMs can do it without doing much else; in any sufficiently capable system, the two come bundled.
The diagram is still worth drawing. It’s wrong in specific ways and useful in others. It gives you somewhere to point when someone says “Claude is basically a person now” or “Claude is just a fancy autocomplete”. Both claims smuggle in a conflation: the first treats articulate-plus-intelligent as sufficient for consciousness; the second treats “stochastic process” as equivalent to “not really intelligent”. Neither follows from the diagram, and both have to be argued for separately.
We also built the philosophical zombie, not on purpose, and probably not perfectly. The lights might be on after all, in some form none of us would recognise. The region consciousness researchers had been gesturing at as a hypothetical now has actual residents, and we have to live with them. That’s a strange thing to have done in a decade.
Footnotes

1. I’m using “articulate” rather than “linguistic” because it’s punchier and captures the right thing: not just producing strings of words, but producing fluent, contextually appropriate ones. A phrasebook contains language; it isn’t articulate.