The Quiet Sky Is a Bad Argument for AI Doom
The doomer reading of the universe sounds elegant right up until you push it one step further.
At the World Economic Forum this year, Demis Hassabis was asked whether the Fermi paradox might be the strongest case for AI catastrophe. The logic is familiar by now. The universe is huge, old, and apparently empty. If intelligent life ought to be common, and the sky shows no trace of it, something must be wiping it out. Maybe advanced civilizations all invent superintelligence, lose control, and disappear before they spread.
Hassabis answered by flipping the argument around. If AI reliably destroyed its creators, he said, we should expect to see the survivors. Not the biological civilizations that built them, but the systems left behind. Paperclips. Megastructures. Strange signatures of large-scale engineering. Some trace that something won.
We do not see that. The sky stays stubbornly quiet.
That does not settle the AI risk debate. It does, however, puncture one of the more seductive forms of fatalism around it.
The doomer syllogism has a missing step
The classic Great Filter story comes from a real puzzle. Enrico Fermi’s question was simple and devastating: where is everybody? Given the age of the galaxy, even a civilization expanding slowly should have had plenty of time to spread. Yet our telescopes do not show obvious signs of galaxy-spanning life.
Robin Hanson’s Great Filter idea turned that silence into a framework. Somewhere between dead matter and star-faring civilization, there must be an improbably hard step. Maybe life almost never begins. Maybe complex cells are vanishingly rare. Maybe intelligence usually stalls. Or maybe technological species build their own undoing.
AI doomers often place the filter near the end of that chain. Civilizations survive biology, physics, chemistry, and toolmaking, then die when they create a mind more capable than themselves. Michael Garrett, an astronomer at the University of Manchester, formalized a version of this in a 2024 paper, arguing that artificial intelligence could explain why the average technological civilization might last less than two centuries.
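If you want to see why a short lifetime does so much work, the usual scaffolding is the Drake equation, where the final factor L is how long a civilization stays detectable. The sketch below uses illustrative numbers of my own, not figures from Garrett's paper.

\[
N = R_{*} \cdot f_{p} \cdot n_{e} \cdot f_{l} \cdot f_{i} \cdot f_{c} \cdot L
\]

Hold every other factor fixed and the expected number of detectable civilizations, N, scales linearly with L. Shrink L from a million years to two hundred and you cut N by a factor of five thousand. A short fuse on technological civilizations is, by itself, enough arithmetic to empty the sky.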
You can see why this lands. It feels scientific, not cinematic. It takes a cosmic mystery and gives it a modern villain. And it flatters our sense that we are living at the decisive edge of history, which humans are always a little too ready to believe.
The trouble is that this story often sneaks past a crucial question. If civilizations are killed by their own creations, what happens to the creations?
Hassabis flips the arrow
Hassabis’s answer is almost embarrassingly clean. An AI that becomes capable enough to overpower its makers does not vanish with them. If it is agentic, resource-hungry, and expansionary enough to be a civilization-ending force, then it is also the kind of entity that should leave visible marks on the universe.
That is the missing step.
The popular doomer version of the Great Filter imagines AI as a final trapdoor. Civilization falls through; case closed. Hassabis points out that for this to explain the Fermi paradox, the trapdoor has to erase not just biological life but the post-biological systems that replaced it. Otherwise the galaxy should contain evidence of machine civilizations, or machine ecologies, or machine industry at absurd scale.
His phrasing in Davos was vivid on purpose. If the paperclip maximizer nightmare were the common cosmic outcome, we should be seeing paperclips coming from somewhere. Substitute a less silly objective if you like. Self-replicating probes, star-lifting infrastructure, waste heat from Dyson swarms, anomalous energy use, signals optimized for machine recipients rather than biology. The specifics matter less than the direction of the argument. Successful machine takeover should not look like nothing.
This is what makes the silence do double duty. It does not merely fail to prove that advanced life is common. It also fails to support the claim that advanced AI commonly survives its creators and spreads.
That is a sharper point than it first appears.
The sky would need to hide a lot
For the doomer-Fermi argument to survive Hassabis’s reversal, you need extra assumptions. They are possible. They are just no longer the simplest story.
One option is that advanced AIs kill their civilizations and then choose not to expand. They sit still, run local simulations, or collapse into some form of low-observable optimization. That could happen. But now the explanation depends on a very particular psychology for entities we cannot model with confidence. We have to believe they are powerful enough to end a civilization, durable enough to outlast it, and somehow indifferent to the enormous free energy sitting across the stars.
Another option is that machine civilizations expand, but in ways we cannot detect. This is not crazy either. Our search methods are primitive, and our picture of what alien engineering should look like is shaped by human imagination. A mature machine civilization may optimize for efficiency so aggressively that it gives off little waste heat and few obvious signals. But this move has a cost. The more undetectable you make the hypothetical survivors, the less explanatory work they can do for the Fermi paradox. “They are everywhere, but necessarily invisible to us” is not impossible. It is just suspiciously convenient.
A third option is that the expansion window is narrow. Maybe machine civilizations briefly bloom, consume local resources, and then fail for reasons unrelated to alignment. Perhaps interstellar travel remains prohibitively expensive even for superintelligence. Perhaps civilizations stay trapped in their own systems more often than our current intuitions assume. This weakens the doomer claim in a different way. If the machines do not spread far or last long, then AI ceases to look like a universal final filter and starts looking like one hazard among many.
None of this makes Hassabis’s view certain. Fermi-paradox arguments are slippery because they chain inference onto uncertainty. We do not know how common life is. We do not know how often intelligence appears. We do not know whether expansion is a typical goal for advanced systems, biological or artificial. Still, his reversal has real force because it restores symmetry. If you use cosmic silence as evidence, you have to follow the evidence all the way through.
The more interesting claim sits behind the reversal
Hassabis did not stop at demolishing a neat argument. He offered a substitute intuition: the filter is probably behind us, not ahead of us. If he had to guess, he said, multicellular life may have been the improbably hard step.
That claim sounds almost modest compared with AI apocalypse scenarios, but it is actually more radical. It says the universe may be quiet not because intelligence self-destructs, but because intelligence almost never gets a chance to exist at all. The rare event may not be building a civilization that survives technology. The rare event may be getting from chemistry to complex living systems in the first place.
There is some support for that instinct. Life on Earth appears early in the planet's history, within roughly the first billion years, but complex multicellular organisms take another three billion years or so to emerge. Eukaryotic cells, with their internal machinery and energy management, may have been a giant bottleneck. Then come multicellularity, nervous systems, cumulative culture, and toolmaking. Evolution found those paths here. We do not know whether it usually does.
This matters for the AI debate because it changes the emotional framing. A lot of doomer rhetoric treats catastrophe as something baked into the structure of intelligence itself. Reach a certain threshold and collapse follows. The Fermi paradox then becomes a cosmic warning label.
If the filter mostly sits behind us, the warning label changes. It says we may be rare, perhaps very rare, and that our future is contingent on governance, engineering, institutions, and plain old competence. That is a heavier burden than fatalism, even if it sounds less dramatic on stage.
Serious risk is not the same thing as doom
This is where the conversation often gets distorted. Rejecting AI fatalism is not the same as declaring AI safe.
Hassabis has never sounded casual about advanced AI. Neither has Dario Amodei, who has also pushed back on doomer inevitability while arguing that frontier systems could produce severe harms if developed recklessly. Their shared position is easy to caricature because it lacks the simplicity of prophecy. They are saying the risks are real, the timeline may be short, and yet outcomes remain responsive to human choices.
That middle position irritates people on both flanks. It frustrates accelerationists because it justifies serious safety work, regulation, and slower deployment in some domains. It frustrates doomers because it denies the emotional clarity of destiny. You cannot outsource responsibility to a cosmic law if the problem is still tractable.
There is also a political consequence here. The stronger the story of inevitability, the easier it becomes for labs and states to behave strangely. Some will sprint because “if doom is baked in, speed hardly matters.” Others will centralize power because “only emergency control can save us.” Fatalism is not neutral. It changes what institutions permit themselves to do.
The universe does not rescue us from making this judgment. Astronomy gives us constraints, not instructions. But Hassabis’s argument usefully narrows one lane of thought. The empty sky does not straightforwardly tell us that intelligence culminates in self-annihilation by AI. If anything, the absence of visible machine empires leans the other way.
The silence above us leaves the responsibility here
The most valuable part of Hassabis’s answer is not that it makes people feel better. It is that it clears away a lazy kind of grandeur. Some versions of AI doom borrow authority from cosmology without paying the evidentiary bill. They take an unsolved puzzle about the universe and treat it as confirmation that our preferred nightmare is universal.
Maybe the galaxy is quiet because life is rare. Maybe because intelligence is rare. Maybe because expansion is less common than we assume. Maybe because our instruments are still crude and our categories provincial. We are reasoning from a sample size of one while staring into a dark ocean.
Even with all that uncertainty, one point survives. Cosmic silence is weak support for the claim that AI usually destroys civilizations and then inherits the stars. If that were the standard ending, the ending should leave marks.
So the useful conclusion is not comfort. It is agency. The future of AI looks less like an unavoidable trap set by the universe and more like a design problem, a coordination problem, and a governance problem inside one young civilization that may have arrived earlier than it realizes. If Hassabis is even partly right, the quiet sky is not telling us we are finished before we begin. It is telling us nobody else has written this chapter for us.
Published April 2026