The AI literacy industry would like to sell you a course. Possibly several. They come with frameworks, prompt engineering templates, hallucination mitigation matrices, and the general implication that navigating an AI conversation is a specialist skill requiring formal instruction — the cognitive equivalent of learning to drive or, at minimum, operating a commercial dishwasher.
They are, with the greatest professional courtesy, mistaken.
Here is what actually happened when I picked up a Guardian piece on film music, found it wanting, and opened a conversation with an AI to sharpen my thinking.
The Guardian piece covered the history of film scoring. It noted, correctly, that Bernard Herrmann's score for Citizen Kane was considered as pivotal as the film itself — the brooding brass, the dissonant strings, the operatic weight of it.
Reading it, I observed to an AI — not as a query, but as the kind of remark one makes to a companion in the room — that the coincidence was striking. Welles' directorial début, and Herrmann's first film score, itself regarded as a defining moment. Two firsts, colliding.
The AI supplied what the Guardian had omitted: Welles and Herrmann had already collaborated. The War of the Worlds radio broadcast, 1938. CBS staff composer meets radio prodigy. The Citizen Kane score didn't emerge from nowhere — it was the evolution of a shared language developed three years earlier, built on strategic silence, on leitmotif as psychological anchor, on dissonance deployed not for effect but for meaning.
The Guardian piece, in treating the score as a standalone revelation, had missed that it was a continuation. A good piece of criticism had an incomplete foundation.
This is not a story about AI being impressive. The retrieval of the War of the Worlds connection is, frankly, the easy part — it is documented in every serious Herrmann biography, the kind of thing that exists in abundance in the training data of any large language model with cultural breadth.
What is not in the training data is the specific quality of attention that noticed the Guardian piece had a gap. That came from the reader.
The AI literacy industry, when it discusses hallucinations, frames the problem as a technical one requiring technical defences. Retrieval-augmented generation. Multi-model validation. Confidence scoring. The apparatus of engineering brought to bear on what is, at its root, an epistemological problem.
The epistemological problem is not: how do I know if the AI is lying? The epistemological problem is: do I have sufficient prior knowledge to notice when something is missing? Those are not the same question, and only one of them is new.
You have been doing this your entire adult life.
Every time you read a political interview and noticed the question that wasn't asked, you were doing it. Every time a vendor pitched you a solution and you thought but what about the migration cost, you were doing it. Every time a colleague presented a proposal that felt complete but wasn't, and you knew it wasn't because you had seen what complete actually looks like — you were doing it.
Psychological research confirms what should be obvious: individuals with existing critical thinking competence use AI more effectively, and are more resistant to its failures, than those who approach it as an oracle. The opposite pattern — what researchers call cognitive offloading, the habit of deferring judgment to the AI rather than bringing judgment to it — produces not just worse outcomes but measurably diminished thinking capacity over time.
The article moved to John Williams and Star Wars. Lucas, who had helped displace the symphonic film score with American Graffiti's jukebox soundtrack in 1973, then resurrected it entirely four years later. Williams answered the commission with a knowing pastiche — debts to Holst, to Walton, to Korngold acknowledged with a certain tongue-in-cheek affection — and in doing so made the imitation more vital than the tradition it was imitating had become.
My observation was simple: the irony is almost too neat. The man who helped thin the oxygen for the symphony then wrote its most effective advertisement in a generation. The AI agreed, and added nothing, because there was nothing to add. The observation was already complete. Confirmation, from an interlocutor with no social incentive to flatter, that the reading was sound — useful in a different way from retrieval, but useful nonetheless.
The article then reached the Barrons. Forbidden Planet in 1956: the first completely electronic film score, blurring the boundary between sound effect and music so thoroughly that audiences couldn't locate the seam. Here the thread from Herrmann becomes visible in full. Herrmann had put the theremin at the centre of The Day the Earth Stood Still five years earlier; the theremin is not a break from the orchestral tradition — it is an extension of it. The BBC Radiophonic Workshop, founded two years after Forbidden Planet, inherits the same impulse. Murray Gold's scoring for modern Doctor Who stands in that lineage.
And then: the Sith Choir of Evil, that gloriously sincere YouTube phenomenon, bombastic Imperial grandeur performed with absolute commitment. Their album artwork — Darth Vader in silhouette at a concert grand pianoforte, shot in the precise aesthetic register of Deutsche Grammophon — is one of the more delightful cultural objects of recent years. The collision of the most portentous villain in mass-market cinema with the most austere visual language in classical recording, entirely unironic, somehow completely correct.
That observation was not retrieved. It was seen. The AI agreed. It could not have originated it.
And the Sith Choir, it turns out, freelance for Murray Gold's Dalek recordings — which means the same voices serve both Imperial menace and Dalek menace, sitting at the intersection of the Williams symphonic tradition and the Barron-Radiophonic electronic tradition simultaneously.
None of this requires a course. It requires two habits of mind, both of which you have already formed.
The first: bring something to the conversation. Not a query — a position. An observation, however provisional. A gap you have noticed. A connection you suspect exists but cannot quite locate. The AI is a capable retrieval and articulation engine when given a specific target; it is a mediocre thinker when left to decide what matters on its own.
The second: interrogate the confident answer. Not because the AI is malicious but because plausibility is not accuracy, and the system has no reliable mechanism to flag its own uncertainty. Ask what is missing. Ask who would disagree. Ask it to argue against itself. These are not specialist techniques — they are what you do with any interlocutor whose knowledge you respect but do not unconditionally trust.
Divergent Brain | divergentbyte.com
Islington, London · April 2026
CC BY 4.0
A note on the research: hallucination rate figures (1.3%–29% by task complexity) are drawn from peer-reviewed analysis of large language model outputs across domains including healthcare, law, and financial services. The cognitive offloading literature — documenting measurable decline in independent reasoning among heavy AI users — is an emerging but consistent finding across psychology and education research. Neither data point required an AI to retrieve. Both are worth knowing.