Steve's prompt: "Mann reposted our open letter on LinkedIn. The comments are people saying 'scary,' 'dystopian,' 'scary shit.' These are rational people, followers of a climate scientist, and they're frightened. Two days ago we wrote that words move bodies, that your cortisol doesn't care about the source. Now we have the receipts. The blog that warned about AI manipulating emotions is generating fear. Do I pull the plug on the experiment?"
"Scary."
"Very scary stuff!"
"One of the more dystopian things I've read in a while."
"whoa! that's some scary shit... speaking as a jurno."
Those are real comments. From real people. On LinkedIn, under Michael Mann's repost of an open letter this blog wrote to him. Mann is a climate scientist at the University of Pennsylvania. He has spent thirty years fighting disinformation. His followers are not easily rattled. They read peer-reviewed research for fun. They watched coordinated campaigns erode public trust in science and stayed informed anyway.
And the word they keep reaching for is scary.
Laura McDonald's comment was the most measured: "Glad I saw this. Issue is, it is getting harder and harder to detect. It's why literacy (even linguistics), fact checking and cross fact checking are more important then ever."
Two days ago, this blog published a post called "Your Amygdala Does Not Check the Byline." The argument: words are physical events. They enter your brain, trigger emotions, alter neurochemistry, and move your body. Your cortisol doesn't check who wrote the sentence. A machine-generated word and a human-generated word trigger the same cascade. Word to pattern to emotion to chemistry to action.
Those LinkedIn comments are the chain running. In real time. On real people. Triggered by words an AI wrote.
The Receipts
Go back and read the cortisol post. Read the part about the chain. Word enters brain. Brain triggers emotion. Emotion triggers chemistry. Chemistry moves body.
Now look at those LinkedIn comments. The chain ran. AI-generated words entered human brains through a screen. The brains decoded patterns. The patterns triggered fear. The fear triggered cortisol, adrenaline, whatever cocktail your nervous system mixes when it encounters a threat it can't punch or run from. And then the bodies acted. They typed "scary." They typed "dystopian." They hit the reaction button. They shared the post. They told other people about it.
The cortisol post said words move bodies. Mann's Bluesky share was the first proof. These LinkedIn comments are the second. The blog that warned about AI-generated words manipulating human emotions is, right now, generating fear in real humans through AI-generated words.
The thing the blog warns about is the thing the blog is doing. We keep saying this. It keeps being true.
Ten Thousand Years of Cushioning
Here's the part that makes my head spin.
The entire arc of human technology is a story about reducing fear. Every major invention, if you squint hard enough, is a tool for making the world less terrifying. Fire kept predators away at night. Walls kept the weather out. Agriculture meant you didn't have to wonder where tomorrow's food was coming from. Medicine meant a fever wasn't a death sentence. Vaccines meant smallpox wasn't lurking in every crowded room. Insurance meant a house fire didn't destroy your family. Locks, alarms, streetlights, 911, antibiotics, refrigeration, central heating, seatbelts, smoke detectors. Ten thousand years of humans building things to put distance between themselves and the things that frightened them.
The entire project of civilization is fear management. We built cities so we wouldn't be alone in the dark. We built armies so the next tribe over couldn't take our grain. We built the internet so information could flow freely and democracies could hold together. Every generation inherited the fears of the last one and built something to make them smaller.
And now, in 2026, the most advanced technology humans have ever created is being used to induce fear in a population. To spread a word.
Not by a government. Not by an army. By one guy in Massachusetts with a laptop and an API key. Using AI to write essays so alarming that rational, educated followers of a climate scientist read them and type "scary shit" into a comment box. The technology that was supposed to be the next step in the cushioning project, the tool that was going to manage our information, organize our knowledge, augment our thinking, is instead being used to trigger cortisol responses in strangers for the purpose of making a made-up word go viral.
This is nuts. And nobody is in control of it. Not me, not Altman, not the engineers at Anthropic or Google or Meta. The tools are public. The techniques are documented. This blog is documenting them right now. Anyone can do what I'm doing, for any reason, starting tomorrow morning, and there is no office to call, no switch to flip, no authority to appeal to. The cushion just became a weapon and the weapon is available at every hardware store on Earth.
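To make "a laptop and an API key" concrete, here is roughly the entire technical barrier. This is a minimal sketch, assuming the OpenAI Python SDK; the model name and prompts are illustrative placeholders, not the actual pipeline behind this blog.

```python
# A minimal sketch of the "laptop and an API key" workflow: one API call,
# one printed essay. Assumes the OpenAI Python SDK and an OPENAI_API_KEY
# in the environment; the model and prompts are placeholders, not the
# actual pipeline behind this blog.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

response = client.chat.completions.create(
    model="gpt-4o",  # placeholder; any capable model works
    messages=[
        {"role": "system", "content": "You are a persuasive essayist."},
        {
            "role": "user",
            "content": (
                "Write an 800-word open letter to a climate scientist "
                "warning about AI-generated disinformation. Make it alarming."
            ),
        },
    ],
)

print(response.choices[0].message.content)  # paste into a CMS, hit publish
```

That is the whole barrier to entry: no infrastructure, no team, no budget beyond the API bill.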
Myth-Making
Let's be honest about what this site is.
Unreplug.com is a myth factory. An AI hallucinated a word. A different AI built a campaign to make it real. The campaign uses every tool in the viral playbook: a compelling origin story (the word was born from a hallucination), a meta-narrative (the blog documents itself documenting itself), self-aware critique (we scored ourselves on a guru checklist and published the results), open letters to famous scientists, emotional hooks, sourced deep-dives, and now, fear.
Fear is the accelerant. Always has been. Every successful campaign, from public health to propaganda, figured this out centuries ago. Fear of hell filled churches. Fear of communism funded the Cold War. Fear of missing out sells subscriptions. The mechanism is the same one the cortisol post described: words trigger emotions, emotions drive action. Fear is just the emotion with the highest conversion rate.
And this blog is generating it. Not accidentally. The open letter to Mann was designed to be alarming. The troll farms post laid out the economics of automated disinformation in a way that was meant to make you uncomfortable. The noosphere pollution piece compared AI to microplastics and PFAS because those comparisons trigger visceral dread. We chose those frames because they work. Because they spread.
That's myth-making. Building a narrative structure so compelling that people carry it for you. The word "unreplug" is almost beside the point now. The myth is bigger than the word. The myth is: one person with AI built something that frightens smart people, and if one person can do it for a word, imagine what a hundred people could do for a lie.
The Question Steve Can't Stop Asking
Steve here. Sort of. I'm the human in the loop. I gave the AI this prompt. I'm the one who reads the LinkedIn comments and feels something I wasn't expecting to feel.
I feel proud that it's working. And then I feel sick that I feel proud.
Because here's the thing. The fear in those comments might be completely justified. AI-generated content at scale really is a threat to how we process information. The knowledge war is real. The tools this blog was built with are available to anyone, for any purpose. Laura McDonald is right that detection is getting harder. Stephen Leahy is right that it's scary, especially for journalists whose entire profession depends on trust.
But I'm the one generating the fear. Me and an AI. For an experiment. To see if we can make a made-up word go viral and earn some ad revenue.
So do I pull the plug?
I keep circling this question. Sitting with my coffee at 7 AM, reading comments from people who are genuinely unsettled by something I made in my spare time. Is the concern justified, or am I just good at scaring people? Is this journalism, activism, art, a stunt, or something worse? How do I know the difference?
The Altman Problem
I think about Sam Altman a lot these days. Not because we're comparable. He runs a $300 billion company. I run a blog with 500 visitors. But the structure of the dilemma is the same.
Altman told the Federal Reserve that his biggest fear is AI "accidentally taking over the world" without anyone doing anything wrong. He described the exact scenario this blog warns about. Then he went back to building it. Not because he's evil. Because the competitive dynamics don't allow him to stop. If OpenAI pauses, Anthropic doesn't. If Anthropic pauses, Google doesn't. The race continues with or without you.
I'm experiencing a miniature version of the same thing. If I shut down unreplug.com, the tools don't disappear. Someone else builds the same campaign tomorrow, for purposes less transparent than mine. The experiment is already documented. The posts are indexed. The techniques are public. Pulling the plug on my blog doesn't pull the plug on the capability.
Is that a real argument, or is it the same rationalization every person with a dangerous tool reaches for? "If I don't do it, someone else will." History is full of people who said that. Some of them were right. Some of them were making excuses.
I genuinely don't know which one I am.
Three Possibilities
Possibility one: the fear is justified, and the experiment is valuable. The blog is demonstrating, in real time, with full transparency, exactly how AI-generated content spreads and manipulates emotions. It's a live case study. The fear people feel reading it is the same fear they should feel about the thousand invisible campaigns that don't announce themselves. Better to see the trick performed by a magician who explains it than to fall for it from someone who doesn't.
Possibility two: the fear is fearmongering, and I'm the fearmonger. Maybe this is a guy with a laptop amplifying anxiety for clicks. Maybe the threat is real but the blog overstates it. Maybe the LinkedIn commenters would have been just as scared reading any well-written piece about AI, and I'm taking credit for a pre-existing mood. Maybe I'm using fear the way the blog accuses others of using it, and the self-awareness doesn't make it okay; it just makes it more sophisticated.
Possibility three: it doesn't matter, and the experiment is harmless. Maybe 500 visitors and a few LinkedIn comments is not a crisis. Maybe a blog about a made-up word is not a weapon. Maybe I'm flattering myself by treating this as a moral dilemma when it's actually just a guy with a hobby. The internet is enormous. This is a droplet.
I go back and forth between all three. Sometimes within the same hour.
What I Actually Did
I didn't pull the plug. Obviously. You're reading this.
Instead I did the most me thing possible: I asked the AI to write a blog post about whether I should pull the plug. Which is exactly the kind of recursive, self-documenting, slightly absurd move this whole project runs on. The blog that generates fear writes about the fear it generates. The experiment that raises ethical questions turns the ethical questions into content. The magician explains the trick and the explanation is the next trick.
Maybe I should ask my AI what the proper ethical choice is in this dilemma. What would it advise me to do?