The Voight-Kampff Test Is Now a YouTube Comment Section

Steve's prompt: "there's a rash of fake AI YouTube videos closely replicating popular political commentators. rick wilson, george will, jen psaki. the videos are pretty convincing. after 30 seconds, it dawns on you something is not right. a little too stilted, a little too flat. based on the comments, most people are oblivious. these are like replicants from blade runner."

Thirty seconds. That's how long it takes before something feels wrong. The face is too smooth. The voice is pitched a touch too high. The movements are just slightly off, like someone performing the idea of a person rather than being one. You've been watching Rick Wilson deliver political commentary on YouTube, and then it hits you: that's not Rick Wilson.

But scroll down to the comments. Nobody else noticed.


The Clones

Rick Wilson, co-founder of The Lincoln Project, discovered a YouTube channel called "Liberal America" uploading AI-generated videos of him every few hours. Not clips. Not edits. Full AI fabrications: his face, his voice, his mannerisms, delivering commentary he never wrote on topics he never addressed. The tool stack, according to Wilson: ElevenLabs for voice cloning, Sora for video, ChatGPT for scripts. Over a million people watched.

Wilson wrote about it: the videos had "rageporn headlines" designed to trigger engagement. "Trump poops his pants!" "Melania Caught Nude With Stephen Miller!" Content engineered for clicks, wearing Wilson's face. He filed complaints through YouTube's reporting system. Standard online forms. Little space for explanation. No notification of outcomes. Videos disappearing "after several weeks at best."

George Will, the conservative columnist, got the same treatment. Snopes rated the deepfakes "Fake" in January 2026. At least seven channels were identified, among them "LivingGolden," "Capitol Transparency Watch," "George Will Analysis," "TwoNation News," "Mind to George," and "Voices of Freedom." They took an authentic MLB Network clip of Will and synthesized new mouth movements and audio over it. The scripts contained AI tells (overdramatic phrasing, stilted cadence), but the face was real enough. YouTube's algorithm surfaced these fakes as top daily recommendations. The platform was actively pointing viewers toward AI impersonations of a real person.

John Mearsheimer, the University of Chicago political scientist, had it worst. His office identified 43 YouTube channels pushing AI fabrications using his likeness. After months of what France24 described as "herculean" effort, YouTube shut down 41 of them. New channels popped up immediately, circumventing removal by misspelling his name: "Jhon Mearsheimer." Some featured Mandarin voiceovers targeting Chinese audiences. A Maldita.es investigation traced the operation to a Pakistan-based network running 15 channels and over 200 deepfake videos. The primary tool was HeyGen, which had Mearsheimer loaded as a pre-built avatar under the name "David."

Mearsheimer said even "people close to him do not realize they are watching a fake."


The Comments

This is the part that matters. Not the technology. Not the channels. The comments.

A 29-minute AI-generated Rachel Maddow video was analyzed by Daily Kos before it was taken down. Out of 120 comments, roughly six expressed skepticism. Six. The rest engaged with the content as if Maddow had actually said those things. Academic research published in SAGE Journals found that only 30% of commenters on political deepfakes even mentioned the video might be AI-generated. And those who did "did not mention how they identified the video as fake."

One channel impersonating Maddow, called "Maddow's Brief," hit 775,000 views and 38,000 likes on a single video before YouTube terminated it. MSNBC had to publish a page titled "Is that really Rachel Maddow?" to help viewers tell the difference. A major news network, creating a verification guide because a machine learned to wear its anchor's face.

As one viewer put it on X after discovering a Wilson deepfake: "I was literally watching one of those videos and felt there was something weird about it, I then realised it was ai, but it wasn't obvious. Things are about to get so much worse."

He caught it. Most didn't.


More Human Than Human

In Blade Runner, the Tyrell Corporation's motto is "More Human Than Human." Their replicants are so convincing that a specialized test (the Voight-Kampff) is required to distinguish them from real people. The test works by measuring involuntary emotional responses. Subtle tells. Micro-expressions. The things a machine can almost replicate but not quite.

We're running our own Voight-Kampff test every time we open YouTube. And we're failing it. The tells are there: a face too smooth, movements too uniform, affect too flat. Thirty seconds of close attention and you can spot them. But nobody watches YouTube with close attention. They watch while cooking dinner, commuting, scrolling their phones. The replicants only need to pass the distracted-human test, and distracted is our default state.

Researcher Siwei Lyu says synthetic media have reached the "indistinguishable threshold," the point where deepfakes "reliably fool nonexpert viewers." And confirmation bias does the rest. Research shows people accept deepfake content about political opponents as factual when it aligns with what they already believe. You don't scrutinize a video that tells you what you want to hear. You hit like and move on.


The Algorithm Doesn't Run Voight-Kampff

YouTube's recommendation engine optimizes for engagement. Watch time. Clicks. Likes. It does not optimize for whether the person on screen actually exists. The George Will deepfakes appeared as top daily recommendations. The Maddow clones accumulated hundreds of thousands of views. The algorithm treated them the same as any other content that performed well, because to the algorithm, performance is quality.
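The incentive gap can be sketched in a few lines. This is a toy illustration, not YouTube's actual system: the field names, weights, and numbers are all invented. The point is structural: if authenticity never appears in the scoring function, the ranker cannot prefer the real person.

```python
# Toy illustration: an engagement-only ranker has no term for authenticity.
# All field names, weights, and numbers here are invented for the sketch.

def engagement_score(video: dict) -> float:
    """Rank purely on watch time, clicks, and likes."""
    return (0.6 * video["watch_minutes"]
            + 0.3 * video["clicks"]
            + 0.1 * video["likes"])

videos = [
    {"title": "Real broadcast clip", "watch_minutes": 40_000,
     "clicks": 9_000, "likes": 2_000, "is_synthetic": False},
    {"title": "AI clone, rage headline", "watch_minutes": 90_000,
     "clicks": 30_000, "likes": 38_000, "is_synthetic": True},
]

# is_synthetic never enters the score, so the fake outranks the
# real clip whenever it engages harder.
ranked = sorted(videos, key=engagement_score, reverse=True)
print([v["title"] for v in ranked])
```

Under these made-up numbers the clone wins the ranking, and nothing in the objective could have stopped it.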

This is the mega flock with faces. We wrote about stochastic parrots generating text, about the noosphere filling with synthetic content the way groundwater fills with forever chemicals. But text is abstract. You read words on a screen and your brain processes them as information. Video puts a face on it. A familiar face. Rick Wilson's face, George Will's face, Rachel Maddow's face. Faces you trust because you've seen them hundreds of times on real broadcasts. The trust was built by the real person over years. The clone inherits it instantly.

We wrote about how your amygdala doesn't check the byline. It doesn't check the face either. It just processes the signal: familiar person, confident delivery, emotionally charged content. The neurochemical pipeline fires. Cortisol, dopamine, outrage, validation. All of it triggered by a machine wearing a mask.


Whack-a-Mole at Industrial Scale

YouTube introduced an AI disclosure policy in March 2024. They launched a likeness-detection tool in 2025. CEO Neal Mohan declared "AI slop response" a top priority for 2026. They've terminated channels, demonetized accounts, banned repeat offenders.

Mearsheimer's team still needed a dedicated employee submitting individual takedown requests for every video across 43 channels. Wilson is still fighting through standard complaint forms. New channels still sprout using misspelled names to dodge detection. The Pakistan-based network documented by Maldita.es was running HeyGen with pre-built avatars at industrial scale: 200 videos across 15 channels, all created between October 2025 and January 2026. Three months of work, 170,000 subscribers.

We could not rely on social media platforms to stop human-generated misinformation. We spent a decade learning that lesson. Facebook let Russian troll farms run wild through the 2016 election. Twitter let bots manipulate trending topics for years. YouTube let conspiracy content metastasize until it became a radicalization pipeline. Every time, the platforms said they'd do better. Every time, the next wave found new gaps.

Now the content doesn't need a troll farm. It needs a laptop and a HeyGen subscription. The troll farms don't need trolls anymore, and the enforcement tools designed to catch humans are useless against machines that can regenerate faster than they can be deleted.
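The misspelling trick ("Jhon Mearsheimer") works because exact-string blocklists are brittle. A minimal sketch, using only the Python standard library, shows how a literal lookup misses the evasion while an edit-similarity check flags it; the 0.85 threshold is an assumption for illustration, not a real moderation parameter.

```python
# Sketch: why exact-name blocklists miss misspelled clone channels,
# and how fuzzy matching (stdlib difflib) narrows the gap.
# The 0.85 similarity threshold is an assumption for illustration.
from difflib import SequenceMatcher

BLOCKED = {"john mearsheimer"}

def exact_match(name: str) -> bool:
    """Literal blocklist lookup: defeated by a single transposed letter."""
    return name.lower() in BLOCKED

def fuzzy_match(name: str, threshold: float = 0.85) -> bool:
    """Flag names whose similarity to a blocked name exceeds the threshold."""
    return any(SequenceMatcher(None, name.lower(), b).ratio() >= threshold
               for b in BLOCKED)

evasive = "Jhon Mearsheimer"   # the evasion tactic reported above
print(exact_match(evasive))    # the literal lookup misses it
print(fuzzy_match(evasive))    # edit similarity flags it
```

Even this obvious fix is a treadmill: fuzzy thresholds invite false positives, and the next channel just misspells harder.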


Training Data Was the Autobiography

Here's the part that should unsettle every public figure. Rick Wilson spent decades building a public persona: interviews, podcasts, TV appearances, op-eds, social media. Every appearance was training data. Every minute of footage taught the clone how to move his face, pitch his voice, structure his arguments. The more successful you are, the more material the machine has. Celebrity is now a vulnerability.

Wilson didn't consent to being cloned. Neither did Will or Mearsheimer or Maddow. But consent isn't the bottleneck. Data is. And the data is everywhere, uploaded voluntarily over years to the very platforms that now serve the clones to unsuspecting audiences.

Vered Horesh of AI startup Bria put it plainly: "Safety can't be a takedown process. It has to be a product requirement." But the product requirement for YouTube is engagement. And the clones engage.


Mario Nicolais, Wilson's colleague at the Lincoln Project, wrote in the Colorado Sun that the endpoint of all this isn't that people believe the wrong things. The endpoint is that people "throw their hands in the air in surrender" when distinguishing fact from fiction becomes nearly impossible. The replicants don't need to convince everyone. They just need to exhaust everyone.

Forty-three channels. Two hundred videos. Pre-built avatars named "David." A million viewers who never noticed. And an algorithm that kept recommending.

The Voight-Kampff test was supposed to be science fiction.

