You're Worried About the Wrong Hallucination

Steve's prompt: "hallucinating facts is the least of worries. ai generating entire cognitive maps of how the world works that are wrong is the real problem."

The Fact Problem

Everyone knows about AI hallucinations. ChatGPT says the Golden Gate Bridge was completed in 1928. Gemini invents a Supreme Court case that never happened. Claude attributes a quote to someone who never said it. These are the hallucinations people worry about. They're the ones that make headlines. They're the ones the AI companies are racing to fix.

And they are the least dangerous thing AI does to your thinking.

A wrong fact is a typo in your mental model. You can look it up. You can check it. The bridge was finished in 1937, not 1928. Corrected. Done. The error was discrete, identifiable, and fixable. You knew where the mistake was and you knew what the right answer looked like.

Now consider what happens when the error isn't a fact. When the error is the structure that organizes the facts. When AI doesn't give you a wrong answer but a wrong way of understanding the question.

Cognitive Maps

A cognitive map is how you understand the relationships between things. Not individual facts, but the connections between them. The framework. The architecture of your understanding.

"The economy works like a household budget" is a cognitive map. It tells you that when the government spends more than it takes in, the result is the same as when your family overspends. Every economic fact you encounter gets filtered through that map. Tax policy, deficit spending, stimulus packages: all of it gets organized by the household-budget framework. If the map is wrong (and most economists say it is, because sovereign currency issuers operate nothing like households), then every conclusion you draw from correctly-understood facts will still be wrong. The facts are fine. The map connecting them is broken.

"Crime is caused by immigration" is a cognitive map. You can have perfectly accurate crime statistics and perfectly accurate immigration numbers and still reach a completely wrong conclusion if the map linking the two is wrong. The data points are real. The causal structure connecting them is hallucinated.

"Technology is neutral and its impact depends on how people use it" is a cognitive map. It tells you that a tool has no inherent bias, no built-in direction, no tendency toward particular outcomes. If that map is wrong (and the entire field of Science and Technology Studies argues it is), then you will systematically misunderstand every technology you encounter, including AI.

AI Generates Maps, Not Just Facts

When you ask ChatGPT a question, you don't get a fact. You get a fact embedded in a framework. You get a map.

Ask "why did the Roman Empire fall?" and you don't get a date. You get a causal narrative. Economic overextension, military overreach, barbarian migration, internal corruption. The AI selects which factors matter, how they connect, which ones caused which. That selection is a cognitive map. It shapes how you think about empires, decline, complexity, migration. And it may be wrong in ways you will never check, because you asked about Rome and got a theory about how civilizations work.

Ask "what caused the 2008 financial crisis?" and you get a map of how financial systems, regulation, human behavior, and institutional incentives interact. That map is going to shape how you think about the next crisis. Not because you memorized a wrong date, but because you absorbed a wrong model of how the financial system operates.

Ask "how should I handle conflict with a coworker?" and you get a map of human psychology, workplace dynamics, power, communication. The advice might sound reasonable. The underlying model of how people and organizations work might be completely wrong. And you'll never know, because you'll act on it, and when the outcome is bad, you'll blame the coworker.

Why You Can't Fact-Check a Map

Here's the problem. You can fact-check a date. You can fact-check a name, a statistic, a quote. Fact-checking has a clear procedure: find the claim, find the source, compare. The claim is discrete. The source is identifiable. The comparison is binary. Right or wrong.

A cognitive map has no single claim to check. It's a structure. It's the relationship between dozens or hundreds of claims, and the errors live in the connections, not the nodes. Every individual fact in the map might be correct. The map can still be catastrophically wrong.

"Immigration increased by 30% and crime decreased by 15%." Both facts are correct. The map that says "immigration doesn't affect crime" might be right. The map that says "immigration reduces crime" might be right. The map that says "these two trends are coincidental and driven by different variables" might be right. Three different maps, same facts, wildly different implications for policy, voting, and how you treat your neighbors.

AI picks one map and presents it as the answer. Confidently. Fluently. With the same tone it uses for "the Golden Gate Bridge was completed in 1937." No uncertainty markers. No "here are three possible frameworks for understanding this." Just the map, delivered as if it were a fact.

The Scale Problem

We wrote earlier about 100 million hallucinations a week. That piece was about factual hallucinations. Wrong names, wrong dates, invented citations. The number was alarming.

Now think about cognitive map hallucinations at that scale.

Every AI interaction that explains a concept, answers a "why" question, provides advice, or narrates history is generating a cognitive map. Not a fact. A framework. And that framework enters the user's brain through the same pipeline as any other language. Pattern. Emotion. Chemistry. Neural pathway. The map gets installed the same way any other idea gets installed: by being read.

A hundred million wrong facts per week is a mess. You can clean up a mess. A hundred million wrong maps per week is a population walking around with broken compasses, making decisions based on frameworks that don't correspond to how the world actually works. And they don't know their compass is broken, because the map looks coherent, because every individual fact on it checks out, because the AI said it with the same confidence it says everything.

The Education Problem

Students are the most vulnerable population here, and not for the reason people think.

The panic about students using AI focuses on cheating. Did the student write the essay or did ChatGPT? That's a fact problem. Attribution. Authorship. Detectable, in theory.

The real problem is students using AI to learn. Not to cheat. To understand. A student asks AI to explain how photosynthesis works, or why World War I started, or how supply and demand interact. The AI gives an answer. The answer contains a cognitive map. The student absorbs the map. The map organizes every subsequent thing the student learns about biology, history, or economics.

If the map is wrong, the student doesn't have a wrong fact. The student has a wrong way of understanding an entire domain. And unlike a wrong fact, which a teacher can correct on a test, a wrong map is invisible. It's the water the student swims in. It shapes which questions seem interesting, which explanations seem plausible, which arguments seem persuasive. The student might ace every factual question and still have a fundamentally broken understanding of the subject.

The AI-Advising-AI Problem

It gets worse when AI systems consult each other. The multi-agent future means AI agents generating cognitive maps and feeding them to other AI agents, who use those maps to generate their own maps, which get fed to more agents. No human ever sees the intermediate maps. No human checks whether the framework one agent used to understand your financial situation was correct before the next agent used that framework to make investment recommendations.

Wrong facts compound linearly. One wrong number leads to one wrong calculation. Wrong maps compound exponentially. One wrong framework leads to a cascade of wrong frameworks, because each new map is built on the structure of the previous map. The map doesn't just contain the error. The map reproduces the error in every conclusion it generates.
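
A toy simulation makes the contrast visible. Everything here is an assumption made up for illustration: the 10% error sizes, the chain lengths, and the rule that a wrong map is a distortion each agent re-applies. The point is the shape of the two curves, not the numbers.

```python
# Toy contrast: one wrong input number carried through a chain of agents,
# versus a wrong framework that every agent in the chain re-applies.
# The 10% rates and the compounding rule are invented for illustration.

FACT_ERROR = 0.10  # a single input number that is 10% off
MAP_BIAS = 0.10    # a framework that distorts each agent's conclusions by 10%

def wrong_fact_error(steps: int) -> float:
    """One bad number: each agent that uses it produces one fixed-size error."""
    return FACT_ERROR * steps                      # accumulates additively (linear)

def wrong_map_error(steps: int) -> float:
    """Each agent rebuilds its map on the distorted structure it inherited."""
    error = 0.0
    for _ in range(steps):
        error = error * (1.0 + MAP_BIAS) + MAP_BIAS   # distortion stacks on distortion
    return error

for n in (1, 5, 10, 20):
    print(f"after {n:2d} agents   wrong fact: {wrong_fact_error(n):5.2f}   "
          f"wrong map: {wrong_map_error(n):5.2f}")
```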

The Unfalsifiable Map

The most dangerous cognitive maps are the ones that can't be proven wrong by any single piece of evidence. "The economy works like a household" survives contact with almost any individual economic data point, because the map is flexible enough to absorb contradictions. "That's just an exception." "That's temporary." "That would work in theory but in practice..."

AI-generated maps tend toward this kind of false coherence. Language models are trained to produce text that sounds right. They optimize for plausibility, not accuracy. A map that sounds right, connects everything neatly, and offers a satisfying narrative structure will tend to score higher in the model's probability space than a map that says "this is complicated, the causes are unclear, multiple frameworks are in competition, and anyone who tells you they understand this fully is wrong."

The stochastic parrot doesn't know which map is true. It knows which map sounds like the kind of thing a confident, authoritative text would say. Those are very different things. And the map that sounds most authoritative is usually the most simplified, most narrative, most wrong.
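
A toy ranking shows the gap. The scores below are made up: "fluency" stands in for the probability the model assigns, "accuracy" for a judgment the model never optimizes. The only point is that selecting by one score does not select by the other.

```python
# Toy ranking: candidate maps scored by a stand-in "fluency" (what a language
# model optimizes) and a stand-in "accuracy" (what it does not). Both scores
# are invented for illustration; no real model is involved.

candidates = [
    # (map,                                               fluency, accuracy)
    ("one clean cause, satisfying narrative",               0.92,    0.35),
    ("two competing frameworks, partial evidence",          0.71,    0.70),
    ("it's complicated, causes unclear, hold it loosely",   0.48,    0.85),
]

most_fluent = max(candidates, key=lambda c: c[1])
most_accurate = max(candidates, key=lambda c: c[2])

print("selected by fluency :", most_fluent[0])     # the tidy, confident map
print("selected by accuracy:", most_accurate[0])   # the hedged, messy map
```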

This Post Is a Map

This needs to be said. Everything you just read is a cognitive map. AI wrote it. The map argues that cognitive maps are more dangerous than factual hallucinations. That map might be right. It might be a useful framework for thinking about AI risk. It might also be a confident-sounding oversimplification generated by the exact process it describes.

You have no way to tell. Your amygdala didn't check the byline. The map is already installed. It's already shaping how you'll think about the next AI output you read. If the map is right, you're better armed. If the map is wrong, you just absorbed a broken compass from a trillion-parrot flock that doesn't know north from south.

Welcome to the actual hallucination problem.

