Troll Farms Don't Need Trolls Anymore

This one isn't funny. The other posts on this blog are playful, self-aware, occasionally stoned. This one is a history lesson and a warning. The facts are sourced. The implications are real.


What 400 Humans Could Do

In 2014, a building at 55 Savushkina Street in St. Petersburg, Russia, housed an operation called the Internet Research Agency. It was funded by Yevgeny Prigozhin, a businessman close to Vladimir Putin, as part of a larger initiative called Project Lakhta.

At its peak, approximately 400 employees worked 12-hour shifts. About 80 were dedicated to targeting the United States. By September 2016, the monthly budget exceeded $1.25 million. They had departments for graphics, data analysis, SEO, IT, and finance. It was, by every measure, a professional operation.

Here's what they built:

  • 470 Facebook pages, posing as American grassroots organizations
  • 3,814 Twitter accounts that posted nearly 176,000 tweets in the 10 weeks before the election
  • 170 Instagram accounts that produced 120,000 pieces of content
  • Over $100,000 in Facebook ads — 3,500 ads from June 2015 to May 2017

The reach: at least 126 million Americans on Facebook alone. Twitter notified 1.4 million users they'd interacted with IRA-controlled accounts.

The fake pages had names like "Heart of Texas" (which organized a "Stop Islamization of Texas" rally in Houston), "United Muslims of America" (which organized a counter-rally at the same time and place — both sides were the IRA), "Blacktivist" (targeting Black Americans), "Being Patriotic" (200,000+ followers), and "Don't Shoot Us" (250,000+ followers).

Two IRA employees, Aleksandra Krylova and Anna Bogacheva, traveled to the United States in June 2014 to gather intelligence for the operation. They toured nine states.

On February 16, 2018, a federal grand jury indicted 13 Russian nationals and 3 Russian entities. The Mueller Report concluded that Russian interference was "sweeping and systematic" and "violated U.S. criminal law."

The Senate Intelligence Committee's bipartisan report found that African Americans were targeted more than any other demographic group. The IRA's activity actually increased after Election Day — Instagram activity jumped 238%, YouTube 84%, Facebook 59%, Twitter 52%.

This is what 400 humans could do with $1.25 million a month and 12-hour shifts.


What One Person Can Do Now

In June 2023, researchers at the University of Zurich published a study in Science Advances, one of the world's top peer-reviewed journals. They tested 697 participants on their ability to distinguish AI-generated tweets from human-written ones.

The finding: GPT-3 produces disinformation that is more compelling than human-written disinformation. Participants could not reliably tell the difference between AI-generated tweets and real ones. AI-generated content — both accurate and false — was rated as more believable than content written by actual humans.

Read that again. The machine is already better at this than we are.

In 2021, Georgetown University's Center for Security and Emerging Technology tested GPT-3 across six disinformation tasks, in scenarios including climate change denial, conspiracy theories, inciting groups against each other, micro-targeting religious communities, and stoking racial divisions. Their finding: after seeing just five GPT-3 messages, survey respondents' opposition to sanctions on China doubled. Five messages. That's it.

The IRA needed 400 employees, an office building, and years of infrastructure to write convincing American English. They still made mistakes — awkward phrasings, cultural blind spots, occasionally pricing things in rubles instead of dollars. Those mistakes helped researchers identify them.

AI doesn't make those mistakes. It doesn't use Russian grammar patterns. It doesn't need to be briefed on American culture; it absorbed the entire internet during training. It can write in any dialect, any register, any tone. It can sound like a Texas conservative or a Brooklyn progressive or a midwestern mom. It does this natively, fluently, and instantly.

One person. One laptop. A $20/month API subscription. No office in St. Petersburg. No 12-hour shifts. No risk of employees being identified and indicted.


It's Already Happening

OpenAI has published threat reports regularly since February 2024. By late 2025, it had disrupted more than 40 networks that violated its usage policies, including covert influence operations run through its tools.

Among the documented operations:

  • Doppelganger (Russia) — linked to the Kremlin by the US Treasury Department. Spoofed legitimate news websites to undermine support for Ukraine. Used AI to translate articles from Russian into English and French and convert them into Facebook posts.
  • Spamouflage (China) — what Meta called "the largest known cross-platform covert influence operation to date." Meta removed over 8,600 assets including 7,704 Facebook accounts and 954 Pages. The operation targeted at least 50 platforms. Meta linked it to Chinese law enforcement.
  • STOIC (Israel) — a Tel Aviv political marketing firm that created fake accounts posing as Jewish students, African Americans, and concerned citizens to post about the war in Gaza.
  • IUVM (Iran) — the International Union of Virtual Media, using AI to generate and translate anti-US and anti-Israel articles.

In 2024 alone, Meta dismantled 20 new covert influence operations spanning the Middle East, Asia, Europe, and the United States.

OpenAI's assessment is careful: these operations used AI as "an accelerant bolted onto existing playbooks" — making them faster and cheaper but not yet providing novel offensive capabilities. None gained significant traction with real audiences.

That last part is important. And temporary. These are early days. The IRA's first attempts in 2014 weren't sophisticated either. They got better. So will this.


The Psychographic Layer

It would be incomplete to talk about 2016 without mentioning Cambridge Analytica.

In 2013, a researcher named Aleksandr Kogan built a Facebook app called "This Is Your Digital Life" — a personality quiz. 270,000 people installed it. Because of Facebook's data-sharing policies at the time, the app harvested data on up to 87 million people, including 70.6 million Americans.

Cambridge Analytica used this data to build psychographic profiles based on the Big Five personality model — Openness, Conscientiousness, Extraversion, Agreeableness, Neuroticism. Their CEO, Alexander Nix, claimed they had "four or five thousand data points on every individual" and modeled the personality of 230 million Americans.

This data was used by the Trump campaign for micro-targeting: displaying customized messages based on individual personality profiles. Christopher Wylie, Cambridge Analytica's former Director of Research, blew the whistle in March 2018. The resulting scandal cost Facebook $100 billion in market cap and a $5 billion FTC fine — the largest privacy penalty ever imposed on a company. Cambridge Analytica declared bankruptcy in May 2018.

Now combine psychographic targeting with AI-generated content. In 2016, you needed a stolen dataset and a team of data scientists to personalize messages. In 2026, you need a prompt: "Write a Facebook post about immigration that would resonate with someone who scores high on conscientiousness and low on openness, living in rural Ohio, who recently shared a post about rising grocery prices."

The AI will write it. Instantly. In the exact right tone. And it'll write a different version for every personality profile, every demographic, every zip code. At scale. For pennies.
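To be concrete about how little machinery this takes, here is a minimal sketch of that per-profile loop using the standard OpenAI Python client. The model name, the profile fields, and the topic are illustrative assumptions on my part, not details from any documented operation.

    # Minimal sketch: one prompt template, one loop, one tailored post per
    # profile. Model name, profile fields, and topic are illustrative.
    from openai import OpenAI

    client = OpenAI()  # reads OPENAI_API_KEY from the environment

    profiles = [
        {"traits": "high conscientiousness, low openness",
         "place": "rural Ohio",
         "signal": "shared a post about rising grocery prices"},
        {"traits": "high openness, high extraversion",
         "place": "Brooklyn",
         "signal": "commented on a transit-funding petition"},
    ]

    def tailored_post(topic: str, p: dict) -> str:
        prompt = (
            f"Write a short social media post about {topic} that would "
            f"resonate with someone who scores {p['traits']}, lives in "
            f"{p['place']}, and recently {p['signal']}."
        )
        resp = client.chat.completions.create(
            model="gpt-4o-mini",  # assumed model; any chat model works
            messages=[{"role": "user", "content": prompt}],
        )
        return resp.choices[0].message.content

    for p in profiles:
        print(tailored_post("local infrastructure spending", p))

That's the entire apparatus. Swap the hand-written profiles for a scraped audience file and the topic for whatever you want to push, and the loop above is the 2026 successor to 55 Savushkina Street.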


The Math

This is ultimately a math problem, and the math has changed.

2016 (IRA) versus 2026 (AI):

  • Personnel: 400 employees → 1 person
  • Monthly cost: $1.25 million → ~$20
  • Content quality: imperfect English, cultural errors → native fluency, any dialect
  • Personalization: broad demographic targeting → individual psychographic targeting
  • Languages: English (with errors), Russian → every language
  • Output volume: ~80,000 Facebook posts over 2 years → 80,000 posts per hour
  • Risk of exposure: employees indicted, travel records → virtually anonymous
  • Adaptation speed: shift changes, editorial review → real-time, automatic
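Working the cost-per-post arithmetic from those figures makes the collapse concrete. The 2026 throughput is this post's own rough estimate, so treat the result as an order-of-magnitude sketch:

    # Cost per post, computed from the comparison above.
    ira_total_spend = 1_250_000 * 24              # ~$1.25M/month for ~2 years
    ira_cost_per_post = ira_total_spend / 80_000  # ~80,000 posts total
    print(f"2016: ${ira_cost_per_post:,.0f} per post")   # 2016: $375 per post

    api_monthly = 20                              # consumer-tier subscription
    posts_per_month = 80_000 * 24 * 30            # per-hour estimate, sustained
    print(f"2026: ${api_monthly / posts_per_month:.7f} per post")  # $0.0000003

Roughly $375 per post against a third of a millionth of a dollar: a nine-order-of-magnitude drop.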

The World Economic Forum ranked AI-driven misinformation and disinformation as the #1 global short-term risk in both 2024 and 2025 — the first time the category appeared in their survey, and it went straight to the top. Nearly 3 billion people were eligible to vote in elections worldwide in 2024.

RAND Corporation researchers put it plainly: AI could "supercharge the building of believable personas and generate endless tailored content" at "unprecedented scale and sophistication."


What This Has to Do with Unreplug

This website is an experiment. One person asked two AIs to create a word and build a viral campaign around it. The AIs wrote the content, designed the site, drafted the marketing strategy, and produced the blog posts — including this one. The whole thing took less than 24 hours.

I'm being transparent about it. This is a goofy project about a word that means unplugging something and plugging it back in. The goal is to make $10K and prove a point about what AI can do.

But the infrastructure is identical.

The same tools I used to build a fun website about a made-up word can be used to build a disinformation network. The same AI that wrote a joke about marriage counseling and unreplugging can write a fake news article about a candidate. The same technology that created twelve blog posts in one night can create twelve thousand. The difference isn't capability. It's intent.

In 2016, a building full of people in St. Petersburg proved that social media could be weaponized against a democracy. It took years, millions of dollars, and hundreds of employees.

In 2026, the same operation requires a person, a prompt, and an afternoon.

The troll farms don't need trolls anymore. They don't need farms. They just need the thing you're reading right now — the same technology that's writing these words — pointed in a different direction.

That's not a metaphor. That's the situation.


Sources

  • Mueller, Robert S. III. Report on the Investigation into Russian Interference in the 2016 Presidential Election. U.S. Department of Justice, March 2019.
  • U.S. Senate Select Committee on Intelligence. Russian Active Measures Campaigns and Interference in the 2016 U.S. Election, Volume 2: Russia's Use of Social Media. October 2019.
  • Spitale, Giovanni, Nikola Biller-Andorno, and Federico Germani. "AI model GPT-3 (dis)informs us better than humans." Science Advances, Vol. 9, No. 26, June 2023.
  • Buchanan, Ben, Andrew Lohn, Micah Musser, and Katerina Sedova. "Truth, Lies, and Automation: How Language Models Could Change Disinformation." Georgetown University CSET, May 2021.
  • OpenAI. "Disrupting Deceptive Uses of AI by Covert Influence Operations." Threat Intelligence Reports, May 2024, October 2024, February 2025.
  • Meta Platforms. Quarterly Adversarial Threat Reports, 2023–2025.
  • World Economic Forum. Global Risks Report 2024 and Global Risks Report 2025. January 2024 and January 2025.
  • Helmus, Todd C. and Bilva Chandra. "Generative AI and the Future of Disinformation." RAND Corporation, 2024.
  • Federal Trade Commission. "FTC Imposes $5 Billion Penalty and Sweeping New Privacy Restrictions on Facebook." Press release, July 24, 2019.