Smart Ass Take
There’s a scene in Pluribus — one of the most thoughtful series on Apple TV about a hive mind that absorbs humanity — where all the consumed humans are practically begging Carol to write another book. Not because they’re bored. Because they’ve lost the ability to surprise themselves. They’ve pooled every thought, every memory, every scrap of creativity into one enormous collective intelligence, and it turns out that an ocean of shared knowledge is about as creatively fertile as Oklahoma farmland in 1935.
I thought about that scene a lot while reading Bright Simons’ essay “The Social Edge of Intelligence” in The Ideas Letter. It’s one of the most carefully argued pieces I’ve encountered on AI — not because it screams that the robots are coming (they are) or that everything will be fine (it won’t), but because it identifies a dependency so fundamental that most of Silicon Valley hasn’t bothered to look at it.
Here’s the thesis, and it’s worth sitting with: AI doesn’t really think. It remembers how we thought together. And we’re rapidly creating conditions where we’ll stop giving it anything worth remembering.
The Devastation Is Real — Let’s Not Pretend Otherwise
The economic carnage ahead is not speculative. IBM announced plans to replace 7,800 roles with AI. Duolingo cut a tenth of its contractors. Klarna’s AI assistant now does the work of 700 customer service employees, and the company’s stated goal is to shrink its workforce below 2,000. Jack Dorsey wants Block’s headcount flat while AI carries the growth.
This is not a drill, and it’s not a blip. The internal logic is merciless: routine cognitive work gets automated, junior roles evaporate, productivity gains compound. For any board reviewing cost structures, it’s the cleanest investment case since the internal combustion engine retired the horse. There’s even a moral momentum to it — hesitate and you fall behind, and nobody wants to be the last company still paying humans to do things a model can do for pennies.
If you’re in your twenties right now, entering a workforce that’s being systematically thinned at the bottom, I won’t sugarcoat it: the next decade is going to be brutal. Entire career ladders are being pulled up. The entry-level positions that used to teach people how to think inside an organization? Many of them are already gone.
But Here’s the Thing Nobody’s Talking About
Simons describes a 2024 experiment where roughly 300 writers were asked to produce short fiction — some with GPT-4’s help, some without. On the surface, the results confirmed the AI hype: AI-assisted stories were rated more creative by independent judges. Individually, writers got a boost, with the less inherently creative among them gaining the most.
But when the researchers looked at the full body of stories rather than individual ones, a different picture emerged. The AI-assisted stories were more similar to each other. Each writer had been individually elevated. Collectively, they had converged.
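To make “converged” concrete: the homogeneity of a set of stories can be summarized as their average pairwise similarity, where a higher group-level average means the texts are more alike. Here’s a minimal sketch of that kind of measurement, using TF-IDF cosine similarity as a crude stand-in for the embedding-based measures a study like this would actually use; the function name and sample texts are mine, purely for illustration.

```python
# Minimal sketch: quantify how "converged" a set of stories is via the
# mean pairwise cosine similarity of their TF-IDF vectors. TF-IDF is a
# crude stand-in for real embedding models; the texts are placeholders.
from itertools import combinations

from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity


def mean_pairwise_similarity(stories: list[str]) -> float:
    """Higher values mean the stories are more alike as a group."""
    vectors = TfidfVectorizer().fit_transform(stories)
    sims = cosine_similarity(vectors)
    pairs = list(combinations(range(len(stories)), 2))
    return sum(sims[i, j] for i, j in pairs) / len(pairs)


solo = ["a tale about a clockmaker losing time",
        "a story set aboard a trawler during a storm"]
assisted = ["a tale about a lighthouse keeper's secret",
            "a story about a lighthouse keeper's past"]

# The study's collective finding, restated: each story may score better
# on its own, while the group as a whole grows more self-similar.
print("solo:    ", mean_pairwise_similarity(solo))
print("assisted:", mean_pairwise_similarity(assisted))
```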
The researchers called it a tragedy of the commons. I’d call it something more plainly terrifying: the slow-motion extinction of surprise. We will end up with the hive mind from Pluribus. Desperately craving Carol’s next novel because it wants something it hasn’t already seen.
AI Eats Its Own Tail
Simons builds his case on a chain of research that’s hard to argue with. In 2024, a team led by Ilia Shumailov published a study in Nature showing that AI models trained on AI-generated data start to collapse. The distribution narrows. Minority viewpoints, rare knowledge, unusual formulations — the weird, edge-case stuff that represents actual intellectual diversity — vanishes first. What’s left is statistically average. Fluent, plausible, and hollow. Or, as Bruce Cockburn put it, “the trouble with normal is it always gets worse.”
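You can watch the mechanism in miniature. The sketch below is a toy caricature of recursive training, not the paper’s actual setup: each generation of a “model” is just a frequency table fitted to samples from the previous one, the Zipf-shaped vocabulary and sample sizes are invented for illustration, and any rare idea that happens to draw zero samples is gone for good.

```python
# Toy illustration of model collapse: each "generation" is retrained
# only on samples drawn from the previous generation's distribution.
# A caricature of Shumailov et al. (Nature, 2024), not a reproduction.
import numpy as np

rng = np.random.default_rng(42)

V = 1_000                           # distinct "ideas" in the human corpus
probs = 1.0 / np.arange(1, V + 1)   # Zipf-ish: a few common, a long rare tail
probs /= probs.sum()

for gen in range(8):
    alive = int((probs > 0).sum())
    nz = probs[probs > 0]
    entropy = float(-(nz * np.log2(nz)).sum())
    print(f"gen {gen}: ideas surviving = {alive:4d}, entropy = {entropy:.2f} bits")
    # "Train" the next model by counting a finite sample of the current one.
    sample = rng.choice(V, size=5_000, p=probs)
    counts = np.bincount(sample, minlength=V)
    probs = counts / counts.sum()   # ideas with zero counts never come back
```

Run it and both numbers fall generation after generation, with the long tail dying first: the same qualitative pattern the paper reports.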
Meanwhile, Epoch AI estimates that the total stock of quality human-generated text available for training will be exhausted between 2026 and 2032. Most people frame this as a resource problem, like running out of oil. But Simons sees something deeper: the reservoir isn’t just being drained. The springs feeding it are drying up.
Because here’s the feedback loop from hell: AI replaces human workers. Fewer humans doing complex cognitive work means less diverse, friction-rich human language production. Less diverse language production means less valuable training data. Less valuable training data means AI systems start to degrade. The technology is quietly consuming the very substrate it depends on.
Is the Thesis Airtight? Let’s Push Back
Before we get too comfortable with this narrative, it’s worth stress-testing it.
First, the synthetic data problem may be solvable. Researchers are already working on techniques to filter AI-generated text from training sets and to generate synthetic data that preserves distributional diversity. It’s not unreasonable to think that clever engineering could mitigate model collapse — at least partially, at least for a while.
Second, the “exhaustion of human text” timeline assumes current architectures and training methods. Breakthroughs in reasoning models, multimodal learning, or entirely new paradigms could change the equation. We’ve been surprised before.
Third, there’s an argument that AI could increase cognitive diversity by lowering barriers to entry — giving more people from more backgrounds the tools to participate in complex knowledge work. That’s not nothing.
But here’s why I ultimately find Simons’ argument more convincing than these objections: every counterargument assumes human behavior will remain unchanged in the presence of powerful cognitive offloading tools. And we have decades of evidence that it won’t. A Microsoft and Carnegie Mellon study of 319 knowledge workers found that in 40% of AI-assisted tasks, participants exercised no critical thinking whatsoever. Anthropic’s own research shows that users pause to double-check AI output only 8.7% of the time.
We’re not just outsourcing tasks. We’re outsourcing the effort of thinking. And effort, it turns out, is where the interesting stuff happens.
The Hope Buried in the Wreckage
So here’s where the Sevenelles brain kicks in — the part of me that refuses to let a clear-eyed assessment of reality collapse into the nihilistic need for another gin & tonic.
If Simons is right — and the research increasingly suggests he is — then the very things that make humans inefficient, frustrating, and expensive are also the things AI literally cannot survive without. Disagreement. Friction. The stubborn insistence of someone who sees the problem differently. The junior employee who asks the dumb question that turns out not to be dumb. The messy, ego-bruising, time-consuming process of humans actually engaging with each other.
The organizations that figure out how to use AI to create more human interaction — more debate, more cross-pollination, more productive friction — will be the ones that thrive. This isn’t wishful thinking. It’s a logical consequence of the dependency Simons identifies. If AI’s intelligence is a function of the social complexity of the civilization that feeds it, then protecting and enriching that social complexity isn’t a nice-to-have. It’s the whole game.
What This Means for You and Me
The transition is going to be ugly. Let’s not pretend otherwise. Millions of jobs will disappear before the correction kicks in. People will suffer real economic pain while executives learn the hard way that you can’t automate the source of your own intelligence.
But the correction will kick in. And when it does, the premium won’t be on people who can do what AI does — process, summarize, generate plausible output. The premium will be on people who can do what AI can’t: think in genuinely novel ways, hold productive disagreements, bring perspectives that haven’t been averaged into the training data, and do the unglamorous, essential work of keeping human knowledge diverse and alive.
Every consumed human in the Pluribus hive mind wanted Carol to write that book. Not because her prose was technically superior to what the collective could produce. Because she could still surprise them. Because surprise requires a mind that hasn’t been averaged.
That’s the edge the human mind holds. Not efficiency. Not productivity. The capacity to be unpredictable in ways that matter.
