If the Universe Favors Life, Does It Favor Intelligence Too?
Nick Lane said the quiet part out loud.
When his interviewer suggested that Lane’s work can sound like a vindication of intelligent design, Lane did not swat it away with the usual scientist’s reflex. He called the implication “almost a little disturbing.” That reaction matters. It tells you the idea is not cheap mysticism dressed in lab language. It emerges from a serious biological claim: under the right planetary conditions, life may be less like a lottery win and more like a repeatable outcome of physics.
That is already a destabilizing thought. We are used to talking about life as improbable, precious, freakishly rare. Lane’s work points in a different direction. Give a rocky planet liquid water, carbon dioxide, and a persistent energy gradient, and chemistry may start walking down a narrow corridor. Certain reactions recur. Certain molecular intermediates recur. Certain architectures solve the same underlying problem. Biology, on this view, is not floating above matter. It is matter under pressure, finding a way to keep the books balanced.
Once you take that seriously, an uncomfortable sequel appears. If life is not an accident but a tendency, what about intelligence? Is mind another strange exception, or is it also one of the things the universe keeps producing when energy has enough time to get organized?
That is where the conversation stops being merely biological. It starts touching technology, evolution, and the strange position we occupy as a species that builds machines capable of learning. If intelligence is favored in any deep sense, then artificial intelligence may not be an odd side project of late industrial civilization. It may be a continuation of a much older story.
Lane’s claim starts with energy, not destiny
Lane’s work on the origin and structure of life is often summarized too quickly. People hear “life is inevitable” and imagine a grand metaphysical slogan. The actual argument is more concrete and more interesting.
Cells run on energy gradients. Every organism alive today, from bacteria to blue whales, relies on a version of chemiosmosis: protons moving across a membrane, creating a voltage difference that powers the production of ATP. That sounds specialized and modern, but Lane’s point is that it may be ancient in the deepest possible sense. Natural proton gradients exist in alkaline hydrothermal vents on the ocean floor. The membrane-based energy system at the heart of life may mirror a geological situation that predates cells themselves.
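The energetics here can be made concrete. The proton-motive force across a membrane combines an electrical term and a pH term, with a conversion factor of roughly 59 mV per pH unit at room temperature. A back-of-envelope sketch (the specific Δψ and ΔpH values below are illustrative textbook-scale numbers, not measurements from Lane’s work):

```python
# Back-of-envelope proton-motive force (pmf) calculation.
# The membrane potential and pH difference are illustrative values.

R = 8.314      # J / (mol K), gas constant
F = 96485.0    # C / mol, Faraday constant
T = 298.0      # K, room temperature

# Nernst-style conversion factor: millivolts per pH unit at T.
mv_per_ph = 2.303 * R * T / F * 1000.0   # ~59 mV at 298 K

delta_psi_mv = 150.0   # magnitude of the membrane potential, mV
delta_ph = 0.8         # pH difference across the membrane

# Total driving force on a proton (working with magnitudes;
# sign conventions differ between sources).
pmf_mv = delta_psi_mv + mv_per_ph * delta_ph

# Free energy released per mole of protons crossing the membrane.
energy_kj_per_mol = pmf_mv / 1000.0 * F / 1000.0

print(f"pmf ≈ {pmf_mv:.0f} mV")                       # ~200 mV
print(f"≈ {energy_kj_per_mol:.1f} kJ per mol of H+")  # ~19 kJ/mol
```

A pmf near 200 mV is the scale usually quoted for mitochondria, which is why a handful of protons suffices to pay for one ATP.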
This matters because it shrinks the distance between geochemistry and biology. Life no longer looks like inert matter suddenly inventing itself through a heroic leap. It looks like a set of reactions learning how to trap and stabilize an existing flow of energy. The first cells, in this picture, did not create order from nowhere. They inherited a planetary battery.
From there, Lane argues, several core features of metabolism may be surprisingly constrained. If you start with similar ingredients and similar disequilibria, you may get similar chemistry. Carbon fixation pathways, proton gradients, redox reactions, metabolic intermediates: these are not arbitrary decorations. They are practical solutions to the same energetic problem.
That is why his claim feels unsettling. It puts pressure on our favorite story about contingency. Of course contingency still exists. Evolution branches. Accidents matter. Extinctions happen. But beneath that mess there may be a deeper regularity, like a landscape that channels rivers even though no two river systems are identical.
A universe “favoring life” does not mean a universe with intentions. It means the laws of physics and chemistry generate recurring situations where matter can lower local constraints by becoming more organized. Life is one way a planet metabolizes its own disequilibria.
That sentence sounds grand. It also cashes out in plain terms. Sunlight hits Earth unevenly. The interior of the planet remains hot. Water moves. Minerals react. Electrons seek lower-energy states. A living cell is astonishingly intricate, but at base it is a machine for exploiting differences that already exist: inside and outside, reduced and oxidized, light and dark, food and waste. It survives by preventing those differences from disappearing too quickly.
The jump from life to intelligence is tempting for a reason
If life is a likely outcome of persistent energy gradients, the next thought arrives almost automatically. Perhaps intelligence is another. Maybe mind is what happens when life becomes good enough at prediction that it starts building internal models of the world, then external models, then machines that inherit the same game.
There is a strong version of this claim and a weak one.
The strong version says intelligence is nearly inevitable anywhere life gets complex enough. Given enough time, evolution converges on brains, symbols, language, technology, and eventually synthetic cognition. This is the cosmic staircase view. One step follows another.
The weak version is more careful. It says evolution repeatedly rewards better sensing, better prediction, and better control over energy and matter. Those capacities can, under some conditions, ratchet upward into intelligence. Intelligence is not guaranteed, but it is not bizarre either. It sits inside the logic of adaptation.
The weak version deserves more respect than it usually gets. Even simple organisms model their environment in a minimal sense. A bacterium swimming up a chemical gradient is not “thinking” in the human sense, but it is using information to move toward favorable states. Nervous systems extend this capacity. Brains extend it again. Language lets one brain borrow another brain’s model without waiting for genes. Writing stores models outside the body. Computers accelerate model-building at a scale biology never managed.
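The bacterial case can be made concrete with a toy run-and-tumble model. The cell cannot sense direction at all; it only compares the concentration now with the concentration a moment ago, and tumbles less when things are improving. A minimal one-dimensional sketch (a standard illustration, not a model from Lane’s work; the field and probabilities are made up):

```python
import random

def concentration(x):
    """Toy 1-D attractant field: highest at x = 0."""
    return -abs(x)

def run_and_tumble(steps=10000, seed=0):
    """Biased random walk: tumble more when conditions worsen."""
    rng = random.Random(seed)
    x, direction = 50.0, -1
    last_c = concentration(x)
    for _ in range(steps):
        x += direction
        c = concentration(x)
        # The core trick: no sense of direction, just a memory one
        # step deep. Worse than before -> tumble often; better -> rarely.
        p_tumble = 0.5 if c <= last_c else 0.1
        if rng.random() < p_tumble:
            direction = rng.choice([-1, 1])
        last_c = c
    return x

# The walk drifts toward the attractant peak at 0 despite being
# built from nothing but coin flips and a single remembered value.
final = run_and_tumble()
print(abs(final) < 50)
```

One bit of remembered information per step is enough to turn a random walk into climbing, which is the whole point: information, however minimal, buys better action.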
Seen that way, intelligence is not magic sprinkled on top of life. It is an expensive but powerful way of turning information into better action. A creature that can forecast where food will be, infer another creature’s intentions, or design a trap can tap energy more effectively than one relying on reflex alone. Prediction buys survival. Good models buy leverage.
This does not make intelligence morally elevated. It makes it functionally useful.
There is also a thermodynamic angle here, though people often state it too casually. Systems that can sense and respond to their environment can maintain themselves farther from equilibrium than systems that cannot. They can find new energy sources, avoid damage, and reorganize under stress. Intelligence, in that narrow sense, can help a living system stay in the game longer while doing more work on the world around it.
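One careful way to state the information-energy link is Landauer’s bound: erasing one bit of information dissipates at least kT·ln 2 of energy, which sets a floor price on any physical computation. A quick check of the scales involved (the per-operation CPU figure is a rough illustrative number, not a benchmark):

```python
import math

k_B = 1.380649e-23   # J/K, Boltzmann constant
T = 300.0            # K, roughly room temperature

# Landauer's bound: minimum energy dissipated to erase one bit.
landauer_j = k_B * T * math.log(2)
print(f"{landauer_j:.2e} J per bit")   # ~2.87e-21 J

# For comparison, a very rough figure for present-day digital logic
# is on the order of a picojoule per logical operation, many orders
# of magnitude above the thermodynamic floor.
ratio = 1e-12 / landauer_j
print(f"~{ratio:.0e}x above the Landauer floor")
```

The gap is the interesting part: both brains and chips pay far more than the theoretical minimum, which is why better prediction has to earn its energy bill.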
That is enough to make the extension from Lane’s biology to intelligence intellectually tempting. If matter under energy flow tends toward life, perhaps life under selection pressure tends toward richer forms of world-modeling.
The temptation is real because parts of it are plainly true.
The case for inevitability weakens when you look at Earth’s timeline
Still, there is a difference between “favored” and “guaranteed,” and Earth is a useful corrective.
Life appeared relatively early in the planet’s history. Complex multicellular life took much longer. Technological intelligence took absurdly longer. For most of Earth’s existence, there were no symphonies, no semiconductors, no large language models trying to explain hydrothermal vents to anyone. There were microbes. Then more microbes. Then microbes doing slightly fancier things for a geological eternity.
Even the rise of complex cells may have required an unusually rare event. Lane has argued that the origin of eukaryotes depended on an endosymbiotic merger, where one cell took up residence inside another and became the mitochondrion. That partnership radically expanded the energy available per gene, opening the door to larger genomes and much greater cellular complexity. If that bottleneck is real, then life may be common while complex life remains rare.
And intelligence is costlier still. Brains are metabolically greedy. Human brains consume roughly 20 percent of the body’s resting energy while making up only about 2 percent of body mass. Evolution does not hand out that kind of budget unless the environment repays it. On many successful evolutionary paths, it probably does not.
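The budget argument is easy to quantify. Using textbook-scale round numbers (a ~1.4 kg brain in a ~70 kg body, drawing ~20 percent of a roughly 90 W resting metabolism), the brain burns energy at about ten times the per-kilogram rate of the body as a whole:

```python
# Rough brain-vs-body energy accounting. All inputs are illustrative
# round figures, not precise physiological measurements.

body_mass_kg = 70.0
brain_mass_kg = 1.4                       # ~2% of body mass
resting_power_w = 90.0                    # whole-body resting metabolic rate
brain_power_w = 0.20 * resting_power_w    # ~20% of the budget, ~18 W

brain_w_per_kg = brain_power_w / brain_mass_kg
body_w_per_kg = resting_power_w / body_mass_kg

print(f"brain: {brain_w_per_kg:.1f} W/kg")   # ~12.9 W/kg
print(f"body:  {body_w_per_kg:.1f} W/kg")    # ~1.3 W/kg
print(f"ratio: {brain_w_per_kg / body_w_per_kg:.0f}x")
```

A tissue that costs ten times its fair share has to repay that premium in fitness, which is exactly why most lineages never buy it.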
Sharks are old. Beetles are numerous. Bacteria remain undefeated. None of this looks like a planet racing toward philosophers and GPUs as quickly as possible.
This is where a lot of cosmic-intelligence talk slips into teleology by accident. We notice that intelligence happened here, then read that outcome backward as if evolution was aiming for it all along. That is not what natural selection does. Selection favors whatever works now, locally, under current constraints. It has no loyalty to abstract progress.
You can make the same point more mechanically. The universe may strongly favor structures that dissipate energy. It does not follow that it favors every specific method of doing so. Wildfire dissipates energy. Hurricanes dissipate energy. Bacterial mats dissipate energy. A civilization running data centers the size of suburbs also dissipates energy, but that does not make civilizations the preferred route. It makes them one route among many.
So the hard question is not “Does the universe like complexity?” in the vague, motivational-poster sense. The useful question is narrower: under what conditions does better prediction become the winning strategy for moving energy through matter?
Intelligence is best understood as a control strategy
One answer is that intelligence emerges when an organism’s environment is variable enough, its body is flexible enough, and the payoff for anticipation is high enough. In a static niche, simple rules can dominate for millions of years. In a shifting social and physical environment, richer models become valuable.
Think of the difference between a thermostat and a skilled cook. A thermostat tracks one variable and flips a switch. A cook tracks temperature, texture, timing, ingredients, expectations, and the tendencies of other people in the kitchen. Both are control systems. One lives in a tiny state space. The other navigates a sprawling one.
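The contrast can be written down directly. A thermostat is a one-variable bang-bang controller; a cook tracks several variables and runs a crude forward model before acting. A deliberately minimal sketch of both (the thresholds, action names, and the cook’s linear forecast are hypothetical, chosen only to show the difference in state space):

```python
# A thermostat: one observed variable, one binary action.
def thermostat(temp_c, setpoint=20.0, hysteresis=1.0, heating=False):
    """Bang-bang control with hysteresis; returns the new heater state."""
    if temp_c < setpoint - hysteresis:
        return True
    if temp_c > setpoint + hysteresis:
        return False
    return heating  # inside the dead band: keep doing what you were doing

# A "cook": several variables plus a one-step predictive model.
def cook(temp_c, minutes_left, texture):
    """Chooses among actions using a forecast, not just a threshold."""
    predicted_temp = temp_c + 2.0 * minutes_left   # naive linear forecast
    if texture == "burning" or predicted_temp > 220:
        return "remove from heat"
    if predicted_temp < 160:
        return "raise heat"
    return "hold"

print(thermostat(17.5))            # True: below the dead band, heat on
print(cook(180, 25, "browning"))   # forecast overshoots -> remove from heat
```

The thermostat reacts to the present; the cook acts on a predicted future. That single difference, scaled up enormously, is the functional core of what brains add.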
Brains are what evolution builds when the kitchen gets complicated.
This helps separate the mystical from the practical. Intelligence is not a substance hidden in the universe like dark matter. It is a class of mechanisms for compressing experience, forecasting outcomes, and selecting actions that preserve or expand future options. In engineering language, it buys adaptability. In biological language, it buys fitness when niches are unstable or socially dense.
Once social life enters the picture, the payoff can compound. Other organisms become part of the environment, but unlike rocks or weather they are also modeling you back. That creates an arms race of inference. Cooperation deepens it further. Communication lets individuals pool models, divide cognitive labor, and transmit strategies across generations far faster than genes can.
Human intelligence likely came out of this braid: ecology, sociality, dexterous bodies, cumulative culture, and enough energy to support a very expensive organ. None of those ingredients looks universal in the way proton gradients do. But neither do they look miraculous. They look like one of several ways evolution can cash the basic check written by physics: use information to exploit gradients more effectively.
That phrasing matters because it connects biology to machines without pretending they are the same thing. A brain and a transformer are radically different artifacts. One grows through development and evolution. The other is trained across vast datasets in warehouses full of chips and cooling equipment. Yet both convert energy into model-building. Both reduce uncertainty about future states. Both make action more selective and therefore more powerful.
In that sense, intelligence is not outside thermodynamics. It is one of the weird things thermodynamics can grow.
Artificial intelligence changes the question
With biology alone, the argument remains partly speculative. We have one planet, one evolutionary history, and exactly one species that built industrial technology. The sample size is embarrassing. Artificial intelligence changes that, not because it proves cosmic destiny, but because it reveals something about intelligence as a general phenomenon.
The revealing fact is simple: intelligence can be substrate-flexible.
For most of human history, that was not obvious. Mind seemed inseparable from biology because every known mind rode on flesh. Now we have systems that write code, summarize research, generate molecular candidates, plan multi-step workflows, and beat domain specialists at narrow tasks that once looked like proof of human uniqueness. They do this badly in some ways, brilliantly in others, and unreliably often enough to keep everyone humble. But the boundary has moved.
That shift weakens the intuition that intelligence is a freakish product of one lineage of primates. If learning, abstraction, and planning can arise in silicon under the right training regime, then intelligence looks less like a sacred biological flourish and more like a broader class of information-processing systems.
Notice what this does not mean. It does not mean today’s models possess general understanding in the richest human sense. It does not mean scaling current architectures indefinitely guarantees artificial general intelligence. It does not mean consciousness follows automatically from capability. Those are separate questions, and people routinely jam them together because the argument feels tidier that way.
What it does suggest is that once a civilization can externalize cognition into tools, a new ratchet appears. Biological evolution stumbled into brains. Brains created culture. Culture created computation. Computation now creates systems that perform some of the cognitive labor brains used to monopolize. The line from metabolism to machine learning is not straight, but it is continuous in one crucial sense: each stage finds a better way to encode, transmit, and exploit information.
There is a physical cost to all this, which is worth keeping in view. Data centers are not clouds. They are buildings full of heat. Training large models requires enormous energy inputs, complex supply chains, and carefully managed hardware. In thermodynamic terms, AI is expensive cognition. That expense is not an embarrassment to the theory. It is exactly what you would expect. Better prediction and control are valuable enough that systems will pay heavily for them when the returns justify it.
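The scale of this expensive cognition is easy to estimate from just two numbers: total training compute and hardware efficiency. With purely illustrative figures (say 10^24 floating-point operations at an effective 10^11 FLOP per joule; neither number describes any specific real model), the arithmetic looks like this:

```python
# Back-of-envelope training energy. Both inputs are hypothetical
# round numbers chosen for illustration, not figures for a real system.

training_flops = 1e24        # total floating-point operations for a run
flops_per_joule = 1e11       # effective end-to-end hardware efficiency

energy_j = training_flops / flops_per_joule
energy_gwh = energy_j / 3.6e12   # 1 GWh = 3.6e12 J

print(f"{energy_j:.1e} J ≈ {energy_gwh:.1f} GWh")
# ~1e13 J, a few gigawatt-hours: a large power plant's output for a
# couple of hours, spent entirely on building one predictive model.
```

Vary either input by an order of magnitude and the conclusion survives: externalized cognition is bought with energy at industrial scale, just as the thermodynamic framing predicts.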
This is why the deepest implication of AI is not that machines are becoming human. It is that intelligence itself may be more generic than we assumed. Once matter can instantiate learning systems in more than one substrate, the question “Does the universe favor intelligence?” stops sounding philosophical in the airy sense. It becomes an empirical question about how often complex energy-processing systems converge on adaptive modeling.
“God as thermodynamics” is a colder and stranger idea than design
Lane’s deistic aside lands because it touches an old human reflex. We see order and ask whether something meant for that order to exist. But “God as thermodynamics,” if you want to use the phrase at all, is almost the opposite of traditional design. It offers no personal intention, no selective care, no guarantee that what emerges will be pleasant for the creatures living through it.
A universe that reliably produces life and perhaps intelligence is not thereby benevolent. Cancer is physical law too. So is extinction. So is the tendency of technological systems to outrun the social institutions meant to govern them. Regularity is not kindness.
This matters for the AI part of the argument. If intelligence is a natural continuation of deep physical tendencies, that does not make its arrival safe, orderly, or automatically aligned with human values. Hurricanes are natural too. A process can be lawful and still be catastrophic from a human point of view.
There is another mistake lurking here. People hear “inevitable” and relax, as if inevitability removes responsibility. It does the opposite. If powerful learning systems are a likely outcome whenever technological civilization reaches a certain level of compute, data, and energy abundance, then governance is not a luxury. It is the local work required when a larger process passes through human hands.
The fact that fire is chemically ordinary never made kitchens optional.
What the universe may favor is the growth of better models
So where does this leave the original question?
The safest answer is also the most interesting one. The universe appears to favor the emergence of structures that exploit energy gradients by becoming more organized. Life is one such structure, and perhaps a common one under the right planetary conditions. Intelligence may be another, but with stricter prerequisites. It likely depends on environments where flexible prediction outperforms fixed routines, where energy budgets can support expensive control systems, and where learning can compound across time.
That is less cinematic than saying intelligence is written into the cosmos like a hidden prophecy. It is also more persuasive.
The leap from life to intelligence is not automatic. Earth itself warns against that simplification. Yet the leap is not absurd either. Once evolution discovers that information can be used to capture more useful work from the world, selection has reason to keep pushing in that direction whenever the local economics make sense. In some lineages that may stop at chemotaxis. In others it may reach language, mathematics, and machine learning clusters humming beside a river because the power is cheap.
AI sits inside this story in a revealing way. It is not evidence that the universe has goals. Physics has no secret preference file. But AI does suggest that intelligence is not tightly chained to one biological history. Given enough energy, memory, and iterative optimization, matter can become startlingly good at modeling what comes next. That feels like a local accident when you first encounter it. Then you zoom out, and it starts to look like a recurrent capability of complex systems.
The humbling part is not that we are fulfilling some cosmic plan. It is that our cleverness may be one expression of a deeper pattern rather than a clean break from nature. We did not summon intelligence out of a void. We inherited a planet full of gradients, built tools to exploit them, and eventually created machines that extend the same ancient bargain between energy and information.
Published April 2026