The Epistemic Horizon Problem
You Cannot Predict a Future Built by Tools That Don’t Exist Yet
Could you have predicted NFTs in 1993?
Not just “no” — the question is structurally malformed. NFTs require a stack of prerequisites that didn’t exist: persistent internet infrastructure, the cultural normalization of digital media as a real asset class, a blockchain trust layer, a crypto-native speculative culture, and a specific psychological anxiety about scarcity in a world of infinite copies. None of those things existed in 1993. More importantly, the combination of those things — which is what actually produced NFTs — was not merely unpredicted but unpredictable. You cannot extrapolate to a concept whose prerequisite concepts don’t yet exist.
This is not a failure of imagination. It is a structural property of how novel futures are generated.
We are now in a position far more extreme than the one we occupied in 1993. And most of the conversation around AI, in both its optimist and its doomer variants, fails to account for this.
The Horizon Is Not a Fog
The standard framing of uncertainty is epistemic fog: we know roughly where we’re going, we just can’t see clearly. More data, better models, sharper analysis — and the fog lifts.
That is not the situation we are in.
The situation we are in is closer to a horizon problem: the future is not obscured; it lies beyond the boundary of what current conceptual frameworks can reach. The concepts required to describe it don’t exist yet. They will be generated in the process of building and deploying the technology itself.
There is a meaningful difference between “we don’t know what AI will do to the labor market” — a forecasting problem — and “we cannot name the categories of impact AI will introduce” — an epistemic problem. The first is fog. The second is horizon.
Most AI discourse treats this as the first kind of problem. The uncomfortable truth is that it’s the second.
Why This Time Is Structurally Different
Every major technology revolution extended human reach. The printing press extended the reach of ideas. The railroad extended the reach of commerce. The internet extended the reach of communication and coordination.
Extension is predictable in structure even when unpredictable in detail. You can reason about what happens when more people can communicate faster, even if you can’t predict Twitter.
AI is not primarily an extension of reach. It is, at minimum, an extension of synthesis — the ability to hold and cross-reference more patterns across more domains than any human mind can manage simultaneously. At maximum, it is an extension of thought itself: a system capable of generating novel conceptual combinations that humans would not produce independently, or would only produce across decades of accumulated research.
This distinction matters enormously for prediction.
When you extend reach, the future is a scaled version of the present. When you extend the capacity for novel concept generation, the future contains things that are not scaled versions of anything — they are genuinely new. And genuinely new things cannot be predicted from first principles, because first principles are, by definition, what you currently have.
The NFT Stack, Generalized
The reason NFTs couldn’t be predicted in 1993 is that they required a conceptual stack — layer upon layer of infrastructure, culture, and incentive structure — none of which existed in complete form.
Now consider the conceptual stack that AI will generate.
We do not know what it looks like. We can identify some of the inputs: advances in biology, materials science, energy physics, economics, cognitive science. AI will process those inputs in combination, at scale, across domain boundaries that human researchers rarely cross because of the transaction costs of deep specialization.
What comes out the other side is not predictable. Not because we lack data. Because the output will be concepts we do not currently have the vocabulary to name.
This is the actual magnitude of what is being built. Not smarter search. Not better automation. A system that may introduce genuinely novel ideas into the world — ideas that will become the prerequisites for the next generation of unpredictable futures.
This Is Not Doom. It Is Not Hype. It Is Structure.
The natural response to “we cannot predict the future” is anxiety. The doomer reads it as: therefore something terrible is coming. The optimist reads it as: therefore something incredible is coming.
Both responses are wrong in the same way. They are importing emotional valence into a structural observation.
The correct read is simpler: the upside of this technology is not bounded by what we can currently imagine, and neither is the downside. Both tails are longer than most models assume. The distribution is not normal. The tails are fat in both directions, and the fat part of the tail contains concepts that don’t have names yet.
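To make “fat in both directions” concrete, here is a minimal sketch in Python (assuming scipy is available; this essay ships no code) comparing how much probability a thin-tailed normal distribution and a fat-tailed Student’s t assign to extreme outcomes. The df=2 choice is an arbitrary illustration, not a calibrated model of AI outcomes.

```python
# Illustrative only: two-sided tail mass of a thin-tailed normal versus a
# fat-tailed Student's t (df=2). Neither distribution models AI outcomes;
# the point is how fast they diverge as you move into the tails.
from scipy.stats import norm, t

for k in (3, 5, 10):
    p_thin = 2 * norm.sf(k)    # P(|X| > k) under a standard normal
    p_fat = 2 * t.sf(k, df=2)  # P(|X| > k) under Student's t, 2 degrees of freedom
    print(f"beyond +/-{k}: normal {p_thin:.1e}, fat-tailed {p_fat:.1e}, "
          f"ratio {p_fat / p_thin:.1e}")
```

Three units out, the two differ by a factor in the tens. Ten units out, the normal says “effectively impossible” while the fat-tailed distribution still assigns real probability, a gap of roughly twenty orders of magnitude. Any forecast that implicitly assumes thin tails fails precisely in the region this essay is about.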
This is a reason to take it seriously — not as a promise, not as a threat, but as a genuine civilizational variable of a kind we have not encountered before.
Why You Build It Anyway
The argument against building AI — or against building it at this pace — usually rests on the unknowability of its consequences. If we can’t predict it, we should slow down.
But unknowability is not symmetrically distributed in time. The longer you wait, the more the concepts generated by early AI compound into prerequisites for the next layer of AI-enabled discovery. The horizon keeps moving. And the breakthroughs that AI may unlock in medicine, energy, materials, and cognition are not waiting on a schedule. They are waiting on the conceptual tools that make them possible.
The case for building is not that we know it will go well. The case is that the alternative — not building the concept-generating machine while civilizational-scale problems accumulate — carries its own version of the unknowable tail risk.
Neither path is safe. One path produces new concepts. The other does not.
What Honest Analysis Looks Like
An honest framework for thinking about AI probably has the following properties:
It does not predict specific outcomes beyond a short horizon. It does not produce confident scenarios about 2045. It acknowledges that the most important consequences of the technology will be ones we cannot currently name.
It tracks the structural conditions rather than the narratives: capital flows, research velocity, regulatory friction, the pace at which AI output is being incorporated into further AI development.
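As a sketch of what tracking those conditions could look like in practice, here is a hypothetical structure, again in Python. Every field name, scale, and weight is an invented placeholder; nothing below is a published methodology or a real dataset.

```python
# Hypothetical sketch of tracking structural conditions rather than narratives.
# All field names, scales, and weights are illustrative placeholders.
from dataclasses import dataclass

@dataclass
class StructuralConditions:
    capital_flows_usd_bn: float  # annual AI-directed capital, in billions of dollars
    research_velocity: float     # e.g., significant model releases per quarter
    regulatory_friction: float   # 0.0 (frictionless) to 1.0 (prohibitive), a judgment call
    recursion_rate: float        # 0.0-1.0: share of AI output fed back into AI development

    def horizon_velocity(self) -> float:
        # Toy composite: higher means the conceptual horizon is moving faster.
        # The weights are arbitrary, chosen only to make the shape legible.
        return (
            (self.capital_flows_usd_bn / 100.0)
            * self.research_velocity
            * (1.0 - self.regulatory_friction)
            * (1.0 + self.recursion_rate)
        )

# Usage: the point is the shape of the tracking, not these particular numbers.
snapshot = StructuralConditions(
    capital_flows_usd_bn=200.0,
    research_velocity=4.0,
    regulatory_friction=0.2,
    recursion_rate=0.3,
)
print(f"horizon velocity (toy units): {snapshot.horizon_velocity():.2f}")
```

The value of a structure like this is not the number it prints; it is that it forces the analysis to name observable conditions instead of narratives.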
It holds the uncertainty as information rather than noise. The fact that we cannot predict the future with AI is itself a meaningful signal — it tells us something about the magnitude of what is being built, and should calibrate how seriously we take any confident forecast in either direction.
And it resists the temptation to resolve the uncertainty prematurely — into optimism, into fear, or into the comfortable middle ground of “it’ll probably be fine.”
The honest position is less comfortable than any of those.
We are building something that will generate futures we cannot imagine. We do not fully understand what it will do to society. This is not a failure of analysis. It is the correct conclusion of rigorous analysis.
The horizon is real. The fog is not.
Market Nihilist publishes analysis at the intersection of capital, structure, and civilizational change. Mechanism first. Narrative second.