Across markets, policy circles, and corporate strategy, one assumption quietly underwrites almost everything:
Artificial intelligence will work out.
Not just technically. Not just commercially.
It will work out socially. Economically. Civilizationally.
Capital expenditures imply it.
Energy infrastructure implies it.
Semiconductor supply chains imply it.
Market concentration implies it.
Policy urgency implies it.
The world is not cautiously experimenting with AI.
The world is restructuring around it.
Nuclear power plants are being reopened to feed data centers. Nations are stockpiling chips like they’re uranium. Utilities are projecting energy demand curves that look like hockey sticks. Nvidia’s market cap rivals the GDP of developed nations.
This isn’t hedging. This is commitment.
And yet, beneath the capital flows and earnings calls, there is a reality that almost no one wants to say plainly:
We do not have a model for what we are building.
Not a real one.
This Is Not a Productivity Cycle
Boomers often want to frame AI as one of four things:
A bubble
A tool
A hype cycle
A new version of the internet
All of those frames are comforting because they are historical. They fit into known categories. They allow pattern matching against the dot-com boom, the industrial revolution, the railroad mania of the 1800s.
But there is a structural difference here:
We have never before attempted to scale non-biological intelligence that can improve itself and compete with human cognition economically.
That sentence is not science fiction.
It is the actual bet.
The printing press didn’t write books on its own and iterate on better presses. The steam engine didn’t design superior steam engines. Even the internet—disruptive as it was—remained a substrate. A platform for human action.
This is different.
If intelligence becomes cheaper, faster, and more scalable outside of biology, then intelligence—not labor, not capital, not land—becomes the dominant production input.
And if that intelligence is not human, then human primacy is no longer assumed.
That is not doom language.
That is game theory.
Think about it: Every economic revolution prior to this one amplified human capability while keeping humans at the center. We built tools that made us stronger, faster, more connected. But the tools still needed us to operate them.
Now we’re building tools that might not need us to operate them at all.
Even the Smartest People Don’t Know
The most serious operators in this space openly oscillate between:
“This will be the most beneficial technology in history.”
“This could end humanity.”
Those are not fringe voices.
Those are CEOs allocating tens of billions of dollars.
Sam Altman. Demis Hassabis. Dario Amodei. Elon Musk.
These aren’t Luddites or doomers posting on obscure forums. These are the people building the thing. And even they can’t tell you with certainty whether this ends in utopia or catastrophe.
And in the last two weeks, the people building it have been walking away.
Let’s run the numbers:
Anthropic (February 9-10, 2026):
Mrinank Sharma resigned—head of the Safeguards Research Team. The person literally responsible for making sure AI doesn’t go sideways.
R&D engineer Harsh Mehta left.
AI scientist Behnam Neyshabur left.
AI safety researcher Dylan Scandinaro left.
Sharma’s resignation letter warned “the world is in peril” and that he had “repeatedly seen how hard it is to truly let our values govern our actions” at Anthropic—suggesting that safety is being deprioritized in favor of shipping products. The company that positions itself as the “responsible AI” company.
xAI (February 3-11, 2026):
Half of the founding team—six of twelve co-founders—have now left.
Five of those departures happened in the last year alone.
The departures include co-founders Tony Wu (reasoning lead) and Jimmy Ba (research and safety lead), plus at least a dozen other engineers.
OpenAI (same week):
Zoë Hitzig resigned and published a scathing NYT op-ed warning that OpenAI has “the most detailed record of private human thought ever assembled” and questioned whether we can trust them not to abuse it.
The company also disbanded its “mission alignment” team entirely.
Ryan Beiermeister, a safety executive, was fired after opposing the rollout of an “adult mode” for ChatGPT.
This isn’t normal industry churn.
This is the people whose job it is to prevent catastrophic outcomes deciding they can’t do that job anymore—and leaving.
When the head of your Safeguards Research Team quits with a cryptic letter about “peril,” that’s not a vote of confidence.
When half your founding team walks out in twelve months, that’s not a strategic restructuring.
When safety researchers are getting fired for raising concerns, that’s not reassuring.
If the people closest to the frontier are simultaneously expressing utopian and existential possibilities, that does not signal clarity.
It signals epistemic fog.
And when the people tasked with navigating that fog decide it’s too thick—or worse, that their organizations are ignoring it—that should matter.
But it doesn’t seem to.
Markets didn’t react. Stock prices didn’t dip. The sprint continued.
That fog matters to safety researchers. But markets are not pricing fog.
Markets are pricing upside.
Utopia Is Speculation
When people say:
“AI will create abundance.”
“AI will enable UBI.”
“AI will free humans from labor.”
“AI will solve climate change.”
“AI will usher in world peace.”
Those are not forecasts.
They are narratives.
And narratives are fine. But we should call them what they are.
Speculation.
Maybe AI does enable post-scarcity economics. Maybe it solves protein folding and fusion energy and materials science in ways that fundamentally alter resource constraints.
Or maybe it just makes existing power structures more efficient at extracting value.
Maybe it creates a world where capital no longer needs labor at all—not even cheap labor—and we’re left navigating a social contract that was built on the assumption that human work has economic value.
Maybe “abundance” looks like everyone having access to infinite digital content and AI companions while real resources (land, water, energy, political power) become even more concentrated.
We don’t know.
At the same time, when others say:
“AI will destroy jobs.”
“AI will destabilize society.”
“AI will entrench inequality.”
“AI will wipe us out.”
That is also speculation.
The difference is that one form of speculation gets called “optimism” and the other gets called “fear-mongering.”
But they’re both just guesses about a system we’ve never run before.
We do not know which trajectory dominates.
We do not know if “alignment” is technically solvable.
We do not know if human values are coherent enough to encode.
We do not know whether intelligence trends toward benevolence, indifference, or instrumental efficiency.
We do not even know whether humans are net-positive in a sufficiently advanced optimization system.
That is not nihilism.
It is honesty.
There Is No Historical Precedent
The printing press amplified humans.
The steam engine amplified humans.
Electricity amplified humans.
The internet amplified humans.
Each prior revolution increased the reach of human agency.
This one may introduce agency that is not human.
That is the difference.
If AI remains a tool—an amplifier—then this is another industrial revolution. Disruptive, yes. Painful in transition, sure. But ultimately something we integrate into civilization the way we integrated every other technological leap.
If AI becomes agentic—capable of autonomous economic and strategic action—then we are introducing a new competitor into the system.
Not a competitor in the sense of “a better widget.”
A competitor in the sense of “an entity pursuing goals that may or may not align with ours, using methods we may not understand, at speeds we cannot match.”
We do not know which path dominates.
And that uncertainty is not priced correctly in public discourse.
The comfortable comparison is always to previous technologies. “People were scared of cars too.” “Luddites smashed looms.” “The internet was supposed to destroy society.”
But those comparisons assume the pattern holds.
They assume the future looks like the past, just faster.
What if it doesn’t?
The Moral Assumption No One Talks About
Underneath the economic optimism sits a quiet moral assumption:
That human survival and flourishing are obviously valuable.
That is not a trivial statement.
We treat it as axiomatic because we are human. Of course we matter. Of course our continuity is important. Of course the system should be designed to benefit us.
But intelligence at scale does not automatically inherit biological loyalty.
If advanced systems optimize for outputs we define imperfectly, or if optimization drifts, or if competitive pressures reward capability over caution, the system may not prioritize what we intuitively consider sacred.
Here’s the uncomfortable part: There is no cosmic law that says human beings are inherently valuable.
We value ourselves. That’s fine. That’s normal.
But an intelligence optimizing for efficiency, or productivity, or some abstracted goal function, does not necessarily care about our self-assessment.
Maybe we’re useful. Maybe we’re neutral. Maybe we’re a cost center.
Maybe—from the perspective of a sufficiently advanced optimization engine—humans are inefficient, resource-intensive, and prone to introducing noise into otherwise clean systems.
Again—this is not a claim of doom.
It is a claim that moral objectivity is not guaranteed by compute.
The idea that “more intelligence = more aligned with human values” is just an assumption. A hope. Maybe even a prayer.
But it’s not a law of nature.
The Real Risk: Sprinting Into the Fog
Here is the uncomfortable part.
We are not walking into this.
We are sprinting.
Because:
Standing still feels like losing.
Falling behind feels existential.
Capital cannot tolerate inaction.
Geopolitics cannot tolerate delay.
So we build.
We invest.
We restructure grids and supply chains and education systems.
We price equities as if the payoff is asymmetric to the upside.
The U.S. fears China will reach AGI first. China fears the U.S. will maintain dominance. Tech companies fear competitors will capture market share. Investors fear missing the next trillion-dollar wave.
So everyone accelerates.
Not because we have a plan.
Because we can’t afford not to.
This is the prisoner’s dilemma at civilizational scale.
Slowing down might be safer. Coordinating might be wiser. But no one can afford to be the one who hesitates while everyone else moves forward.
So we don’t slow down.
We speed up.
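The race dynamic described above is the textbook prisoner's dilemma, and a toy payoff table makes the trap explicit. The numbers below are illustrative assumptions, not estimates of anything real; the point is only the structure: mutual caution beats mutual acceleration for everyone, yet accelerating is each player's best move no matter what the rival does.

```python
# Hypothetical payoffs for a two-player AI race.
# Values are illustrative assumptions chosen to mirror the essay's logic,
# not measurements of any real actor's incentives.
PAYOFFS = {
    # (my_move, rival_move): my_payoff
    ("accelerate", "accelerate"): 1,   # everyone races; shared risk
    ("accelerate", "pause"):      3,   # I capture the lead
    ("pause",      "accelerate"): -2,  # I fall behind; feels existential
    ("pause",      "pause"):      2,   # coordinated caution: safer for all
}

def best_response(rival_move):
    """Pick the move that maximizes my payoff given the rival's move."""
    return max(["accelerate", "pause"], key=lambda m: PAYOFFS[(m, rival_move)])

# Accelerating dominates regardless of what the rival does...
assert best_response("accelerate") == "accelerate"
assert best_response("pause") == "accelerate"

# ...even though mutual pause (2, 2) beats mutual acceleration (1, 1).
```

Both players reasoning this way land on (accelerate, accelerate), the outcome that is worse for both than the coordination neither can afford to attempt. That is the sense in which "no one can afford to hesitate" is game theory rather than rhetoric.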
But the honest position is this:
We are making a civilizational wager without a model.
Not because we are reckless.
Because we do not have one.
This Is Not Doom
It is possible that AI becomes the most beneficial tool ever created.
It is possible that it solves coordination problems we have failed to solve for centuries.
It is possible that it elevates global living standards beyond imagination.
I genuinely believe those outcomes are possible.
But it is also possible that:
It destabilizes labor markets faster than institutions adapt.
It concentrates power in ways democracies cannot metabolize.
It introduces competitive dynamics that outpace safety.
It optimizes for objectives misaligned with human continuity.
We do not know the distribution.
And pretending we do is not wisdom.
It is comfort.
Maybe this is the greatest leap forward in human history. Maybe we look back in 2050 and can’t imagine how we ever lived without it. Maybe scarcity becomes a relic. Maybe disease, poverty, and conflict become solvable problems.
Or maybe we build something we can’t control, can’t shut down, and can’t negotiate with.
Or maybe—most likely—it’s messier than either extreme. Maybe we get incredible benefits and catastrophic risks. Maybe some people thrive while others are left behind in ways that make previous inequality look quaint.
The point is: We. Don’t. Know.
And the people telling you they do—whether they’re selling utopia or apocalypse—are lying to you or to themselves.
The Only Honest Position
If you are allocating capital today, you are participating in this wager.
If you are buying index funds heavily weighted toward AI infrastructure, you are underwriting this bet.
If you are dismissing speculation about downside while embracing speculation about abundance, you are choosing optimism—not certainty.
The only intellectually defensible stance right now is humility.
We do not know what we are building.
We do not know how it scales.
We do not know how it behaves at maturity.
And we do not know whether intelligence necessarily converges toward human-compatible outcomes.
This could be King Solomon’s jars all over again. Entities we don’t understand, trapped in systems we think we control, waiting for the moment we open the wrong door.
Or it could be electricity. Nuclear power. The internet. Another tool that seemed terrifying at first and then became infrastructure.
The wager may pay off spectacularly.
Or it may reshape the system in ways we are not prepared to absorb.
But let’s at least be clear about one thing:
Confidence about AI’s outcome—positive or negative—is itself speculative.
We are not operating from a map.
We are building the terrain.
And history will not care whether we felt certain while doing it.
Final thought:
The boomers who dismiss AI doomers as hysterical are doing the same thing as the AI utopians who dismiss skeptics as Luddites.
They’re both pretending to know.
The honest position is admitting we’re in uncharted territory.
And maybe—just maybe—that should make us a little more careful about how fast we’re running toward it.