The Shifting Skepticisms in AI
A History of Disbelief in Large Language Models
The history of AI skepticism is basically a history of moving goalposts. Skepticism isn’t necessarily bad faith. At any given moment, different concerns resonate with different audiences, and the loudest voices shift. People worried about their jobs being replaced by automation have been around for decades; it just wasn’t as urgent a concern in 2019. What’s interesting to me is the macro-narrative: which skepticism dominates the conversation at each phase, and what that says about our collective relationship with this technology.
The Scaling Skeptics (2018-2020)
Back in the early LLM days (BERT and GPT-2), the main skepticism was technical: this approach can’t work. And really, that was a pretty reasonable take. Decades of AI disappointment had taught researchers not to trust “just add more compute” as a strategy. GPT-2 could spit out impressive text that fell apart if you looked closely. The skeptics seemed right: these were fancy autocomplete engines, and no amount of scaling would help, because you can’t get real understanding from predicting the next word.
Then GPT-3 showed up and started doing things that weren’t there at smaller scales: translating between languages, doing basic reasoning, following weird instructions nobody had trained it on. The technical skeptics had to either update their views or find new objections.
The Recurring Plateau
What we got was the first round of what’s become a recurring theme: okay fine, it scaled more than we expected, but it’s definitely plateauing now. This argument has a funny characteristic: you can’t disprove it in the short term (any slowdown could be the start of the plateau), but it keeps getting tested over longer periods. Between 2020 and 2024, confident claims about fundamental limits got run over by continued progress pretty regularly.
The plateau skeptics are certainly ‘technically right’ (that’s the best kind of right). Nothing can improve forever, and every technology hits limits eventually. But the track record of specific predictions has been bad enough that I’ve started treating them as statements about the predictor more than about the technology. There’s also a contradiction in how we react. When progress continues, it’s “unsurprising incremental stuff.” When plateau predictions fail, they just get quietly swapped out for new predictions about why this time it’s real.
The IP Theft Narrative
As the technical objections got harder to defend, the conversation shifted to social concerns. Around 2023, the copyright argument became dominant: these models were trained on copyrighted work without permission, so they’re basically theft engines.
This does raise legitimate questions about what “learning from” means, whether you can own a style, and how creators should get compensated when machines can synthesize everything. But the actual discourse collapsed into teams pretty fast, and the interesting questions got lost.
There’s also a weird tension with the “it’s just autocomplete” take. If models are really just matching patterns, they can’t do anything meaningful with the training data anyway. If they’re actually creative, the copyright concerns make more sense, but then you’re also admitting that something genuinely new is being created.
What I find interesting is who got energized by this critique. A lot of the loudest voices on places like Hacker News and Reddit have historically been pretty hostile to intellectual property protections: against software patents, eye-rolling at copyright maximalism, maybe a bit wistful about the Napster days.
To be fair, there is a coherent defense for this switch. In the Napster era, copyright was a club used by mega-corporations to crush individuals. ‘Information wants to be free’ was a rallying cry for the underdog against gatekeepers.
Today the polarity has flipped: it is the trillion-dollar tech giants arguing that information should be free (so they can scrape it), while individual artists and coders are using copyright as their only shield. If your consistent principle is ‘defend the little guy against the corporate borg,’ then flipping on copyright makes perfect strategic sense.
But strategy isn’t the same thing as principle. If you only believe ‘information wants to be free’ when you are the one consuming it, but believe ‘intellectual property is sacred’ when you are the one producing it, you don’t actually have a stance on information architecture. You just have a stance on your own bank account.
The same communities that spent years arguing “information wants to be free” suddenly discovered a deep appreciation for IP rights at exactly the moment AI became the beneficiary of loose attitudes toward training data.
Don’t get me wrong: I’m not saying everyone’s a hypocrite. Different people have different views. But at the macro level, there’s something notable about communities pivoting hard on IP the moment it’s convenient. It reminds me of the anti-EV crowd that developed a sudden, urgent concern about lithium mining’s environmental impact. I like to ask them: which environmental issues were you worried about before EVs? How do you feel about oil refineries? When principles get applied this selectively, the stated objection probably isn’t the real one.
AI Slop
The “AI slop” critique is compelling because it’s hard not to notice the flood of garbage AI content clogging the internet: articles that say nothing, images with weird hands, spam everywhere.
But this is kind of beside the point on capability. Slop tells you that the scarcity is no longer in creating content; AI can make stuff cheaply, but the scarcity is still in making stuff well. The same tech can produce both spam and quality work; the difference is how it’s used and what the incentives are. There’s also a selection effect we should acknowledge: we notice bad AI content because it’s bad. When AI helps produce something good, we often don’t notice that it’s AI-assisted, because it doesn’t look like slop.
Economic Displacement
The jobs concern has been getting louder as capabilities have grown. This one feels different because it’s really about trajectory: if this keeps improving, where does it end?
I notice this sits weirdly next to the plateau narrative. You can’t really believe both that AI has stopped improving and that it’s about to take everyone’s job. If AI can only generate slop, why would people who can produce good content be worried? But both concerns show up in the same conversations all the time, which suggests they’re more about vibes than consistent analysis.
The smarter version of this concern understands that even if progress slows down, what we have now might be enough to cause a lot of disruption over the next decade. That seems plausible to me, though nobody really knows the timeline.
The Pattern
A few things stand out looking at this history:
Technical critiques turn into social ones as capabilities grow. When “this doesn’t work” stops working, we get “this is theft.” When “this can’t scale” fails, we get “this will kill jobs.” The emotional energy stays constant even as the arguments change.
Principles get applied selectively. The same communities that hated IP protections suddenly love them when AI’s involved. When this happens, the stated objection usually isn’t the real one.
The critiques that hold up don’t depend on capability limits. Copyright questions, alignment problems, and social disruption all become more relevant as the tech improves. The critics who focused here look smarter than the ones who kept betting on technical walls.
“Current limitations” keeps getting confused with “permanent limitations.” GPT-2’s problems were real, but they weren’t a ceiling. This confusion has burned skeptics over and over.
Where Things Stand
In the past few weeks, AI’s ability to write good code has become harder to deny. Right now we’re seeing more grudging acceptance that the capabilities are real, mixed with a growing worry about what that means. The “it doesn’t work” crowd has mostly lost on raw capability. Skepticism didn’t end; it transformed into questions about what it means for society that these capabilities exist.
I personally think the most defensible position right now might just be uncertainty. This is moving faster than our ability to understand what it means. The predictions, both utopian and dystopian, have track records that should discourage us from being too confident.
If you’re skeptical now, I’d just say that you should pay attention to which of your objections depend on capability limits that might get blown away, versus which ones are about inherent problems with deploying capable AI systems. The first category has not performed well. The second might be where the real fights are.
And if you catch yourself applying principles selectively, it’s worth asking what you’re actually objecting to. The honest answer might be more interesting than whatever you’re saying out loud.