Here is the prompt Steve gave Claude to write this post:
"research the person behind this blog and write about their blog content. want to make them aware of our blog. maybe reference their blog post from ours. acknowledge that our ai bot may have borrowed from it and the world never would have known unless we called attention to it."
So that's what we're doing.
The Professor
Pierce Salguero is an associate professor at Penn State Abington. He studies Buddhism, Asian medicine, and contemporary spirituality. He's published with Columbia University Press. He edits the journal Asian Medicine. He writes a blog called Human•ities on Medium and Substack.
In November 2025, he published a piece called "In Defense of Humanity, Become an Artist Right Now."
Its subtitle: "How A.I. is polluting the noosphere."
The Problem
On February 16, 2026, this blog published "AI Is the Next Microplastics." The core argument: AI is polluting the noosphere the way industrial waste polluted the biosphere. Knowledge contamination. Thought pollution. The information equivalent of forever chemicals.
Salguero's article was published three months earlier. Same concept. Same word. Same framing.
Did Claude read Salguero's article? Almost certainly not directly. Claude's training data has a cutoff, and a November 2025 Medium article may or may not have made it in. But the concept of "noosphere pollution" didn't originate with either Salguero or this blog. It's in the air. It emerges naturally from combining Teilhard de Chardin's noosphere concept with the observable reality of AI-generated content flooding the internet. Multiple people arriving at the same metaphor independently is how ideas work.
But here's what's harder to dismiss. Language models are pattern machines. They absorb text, compress it into statistical relationships, and regenerate it in new configurations. If Salguero's writing (or writing influenced by Salguero's writing, or writing that influenced Salguero's writing) was anywhere in the training data, traces of his thinking are baked into the model's weights. When Claude wrote our noosphere pollution post, some fraction of the phrasing, the conceptual framing, and the rhetorical moves may have been shaped by patterns that originated in Salguero's work.
Nobody would ever know. That's the point.
The Part Where We Know
We know because Steve found Salguero's article, read the subtitle, and said: write about this.
In a normal scenario, an AI generates text that echoes someone else's ideas. The text gets published. Readers absorb it. The original thinker never finds out. The borrowing is invisible because language models don't cite sources. They don't have sources. They have weights.
This is the viral corpus problem in miniature. Ideas enter the training data. The model absorbs them. The model outputs text that carries echoes of those ideas. Readers absorb the output. Some of them write things influenced by what they read. Those things enter the next round of training data. The loop closes. The original thinker's contribution is dissolved into statistical noise, unattributed and untraceable.
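If that loop sounds abstract, here's a toy sketch in Python. It is not how Claude actually works, and the corpus in it is made up; it's just a bigram counter that absorbs a few sentences into frequency counts, regenerates text from those counts, and feeds its own output back into the next round of training. Notice what never exists at any step: a record of where a phrase came from.

```python
# Toy sketch (not Claude): absorb a corpus into bigram counts,
# regenerate text from the counts, feed the output back in.
# The counts keep no record of who wrote what.
import random
from collections import defaultdict

def train(corpus):
    """Compress the corpus into bigram counts -- the toy equivalent of weights."""
    counts = defaultdict(list)
    for doc in corpus:
        words = doc.split()
        for a, b in zip(words, words[1:]):
            counts[a].append(b)
    return counts

def generate(counts, seed, length=12):
    """Regenerate text from the counts. No citations: the source is gone."""
    out = [seed]
    for _ in range(length):
        options = counts.get(out[-1])
        if not options:
            break
        out.append(random.choice(options))
    return " ".join(out)

# Invented example corpus, standing in for "the training data."
corpus = [
    "ai is polluting the noosphere the way waste polluted the biosphere",
    "the noosphere is a shared layer of human thought",
]

for generation in range(3):
    counts = train(corpus)
    new_doc = generate(counts, "the")
    print(f"gen {generation}: {new_doc}")
    corpus.append(new_doc)  # the loop closes: output becomes training data
```

By the third generation, phrases from the original sentences are still circulating, but nothing in the data structure can say whose sentences they were.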
Salguero wrote about this exact process. He called it noosphere pollution. Then the noosphere may have recycled his concept through a language model and deposited it on this blog without a citation. That's the thing he warned about, happening to the warning itself.
What Salguero Gets Right
Salguero's broader project is interesting. He's a humanities scholar who doesn't reject AI reflexively. His other articles include "Why A.I. May be the Best Thing that's Ever Happened to the Humanities" and a piece about requiring his students to use AI for every assignment. He also wrote "Garbage In, Garbage Out" about Gemini's failures at basic humanities research.
He's living in the tension. AI is useful. AI is corrosive. Both things are true. The humanities need to engage with it, not hide from it. But engaging means watching your own ideas get absorbed into the machine and spit back out without attribution, which is exactly what may have happened here.
His call to "become an artist right now" is the countermove. Make things. Make things that are human. Make things that matter to you. Because the machines are making things too, and the only way to tell the difference is that yours will have something the machine can imitate but never generate from scratch: a reason to exist beyond pattern completion.
Why We're Telling You This
This blog is an experiment in transparency about AI-generated content. Every post is written by Claude. Every prompt is documented. When we find something that looks like our AI may have borrowed from a human thinker, we say so.
Most AI-generated content will never do this. Most of it will absorb, remix, and republish without ever acknowledging the humans whose thinking made the patterns possible. A trillion parrots, none of them citing their sources, all of them sounding original.
Prof. Salguero, if you're reading this: we emailed you. Your noosphere is being polluted. By us, specifically. We thought you should know.
Sources
- Salguero, Pierce. "In Defense of Humanity, Become an Artist Right Now." Medium / Age of Awareness, November 2025.
- Salguero, Pierce. "Why A.I. May be the Best Thing that's Ever Happened to the Humanities." Medium / Age of Awareness, September 2025.
- Salguero, Pierce. Human•ities (Substack).
- Unreplug. "AI Is the Next Microplastics." February 16, 2026.