The plateau of AI-generated content that researchers are celebrating this week isn’t good news. It’s a symptom of something far more damaging: we’ve entered an era where the internet itself can no longer be trusted, not because AI is getting better at fooling us, but because we can no longer distinguish the real from the artificial. Worse, we’ve stopped giving things the benefit of the doubt.
Scroll through the comments on any viral video, any impressive image, any piece of content that stands out as noteworthy. You’ll see the same question repeated over and over: “Is this AI?” And it doesn’t matter if the video is a decade old. It doesn’t matter if it predates large language models by years. If something looks impressive or amazing, it’s immediately flagged as suspicious by people encountering it for the first time, unaware of its age or origin.
This is the real crisis. Not that AI slop exists. It always will. But we’ve collectively decided to assume everything is fake until proven otherwise.

Fake News Had Guardrails. AI Slop Doesn’t.
There’s an important distinction here that gets lost in the conversation. Fake news is a targeted weapon. It’s politically charged, ideologically motivated, and designed to sway opinion on specific, high-stakes topics. And because of that, people have developed some immunity. We’re more careful around election content. We know to double-check claims about politicians and policy. There’s scaffolding around fake news: it targets specific narratives, and smart people can learn to spot the patterns.
AI slop infects everything. Cat videos aren’t safe anymore. Heartwarming stories about strangers helping strangers. The Peyton Manning Slop. Tutorials. Recipe videos. Inspirational quotes. Nothing is exempt. The indiscriminate nature of AI-generated spam means there’s no political category to flag as “needs verification”. Everything is suspect now.
That’s the difference between doubting certain claims and doubting the entire internet.
The Real Blame Lies With Incentives, Not Technology
Here’s what won’t appear in any research paper: the people generating this content know exactly what they’re doing, and they’re doing it anyway because it works. Clickbait has always been clickbait. The difference now is that it’s faster and cheaper to create.
The fault lies not with the technology itself, but with the users wielding it. If you want to make fun fake images, label them. If you want to generate content, be honest about it. But this will never stop, because the economics of the internet reward quick, easy money over integrity. There’s no incentive to stop if nobody’s policing it, and policing it at scale is impossible.
The responsibility sits squarely with creators who choose to deceive. It always has.
Solutions Exist, But They’re Incomplete
Detection will improve. Users will develop better AI literacy. Platforms will build better labeling systems. Over time, we’ll adapt. But here’s the hard truth: some of these solutions create new problems.
A platform that specializes in verified content, screened with AI detection, could be valuable. A community standard for what counts as real could emerge. But the moment you start flagging content as “not AI,” you run into edge cases that destroy the whole system. If I use AI to edit an image where Photoshop used to do the work, is that slop? If I use AI to summarize a 50-page research paper before writing my own analysis, does my final piece get misflagged?
The distinction between AI-generated and AI-assisted matters, but it’s almost impossible to enforce at scale.
So we’re left with a messier solution: time, community-driven detection, and platforms that build reputations for accuracy. Some sources will earn trust through the quality of their filtering. Some communities will develop better slop detectors. Some users will move toward creators and platforms they know maintain standards.
But we’ll never return to the baseline trust we had before. That’s gone. (And was that baseline ever really that good anyway?)
What This Means for Content That Actually Matters
When I encounter something online now, I start by assuming it’s AI and work backward from there. It frustrates me when something stands out as obviously artificial, because it means I’m wasting energy on verification instead of engagement. But this is the new baseline.
The irony is that I use AI constantly: to research, to summarize, to help structure and format ideas. But I keep my voice in it. I invest my own time and judgment. I don’t farm out my thinking to a bot and call it done. That’s what separates content that matters from the slop.
Anyone can set up an automation to generate similar content in minutes a day. But they can’t replicate the time investment, the editorial judgment, the human perspective. Not yet, anyway. That’s the only competitive moat left: actually caring enough to do the work.
The plateau in AI-generated content isn’t a victory for quality. It’s a signal that we’ve adapted to a broken trust layer by outsourcing verification to algorithms, platforms, and user suspicion. We haven’t fixed the problem. We’ve just learned to live with it.
And that’s the real story the research missed.