The complexity bottleneck: will AGI have its endosymbiosis?
Life showed up on Earth early. Complex life took its time.
That gap is easy to glide past because we already know the ending. We see animals, forests, nervous systems, and assume complexity was the natural next step once chemistry got interesting. Nick Lane’s work pushes against that instinct. His argument, in broad strokes, is that bacterial-style life may be common, maybe even close to inevitable under the right conditions. Complex cells were different. They appear to descend from one freakish merger between two simpler organisms, and every plant, fungus, and animal is still living inside the consequences.
That is a useful lens for AI right now. Useful AI is everywhere. It writes code, summarizes meetings, generates images, routes customer support, and quietly runs the recommendation loops that shape what billions of people see. The field looks crowded with intelligence. But if Lane is even directionally right as an analogy, abundance at the simple end tells us very little about the jump to a much more integrated kind of mind. You can have a planet full of microbes and wait two billion years.
Earth spent a very long time being good enough
For most of Earth’s history, life did not look stalled from the inside. Prokaryotes were not failed eukaryotes waiting to become us. They were wildly successful. They spread everywhere, exploited every energy source they could reach, adapted to heat, cold, acid, radiation, and pressure. They remade the atmosphere. They built the biochemical foundations that everything else would later inherit. If you were placing bets during those first two billion years, microbes would have looked like the finished product.
That matters because it changes the emotional shape of the analogy. A plateau is not the same thing as stagnation. You can get immense local competence, ecological spread, and evolutionary durability without crossing the threshold into higher-order complexity.
Lane’s contribution was to give this old fact a sharper mechanism. The issue was not that prokaryotes lacked time or variation. It was that they were energetically constrained. A bacterial or archaeal cell has to generate energy across its surface membrane. As cells get larger, surface area does not keep up with volume. That limits how much energy is available for running and regulating a larger internal system. Genomes can grow, but not freely. Regulatory overhead starts to bite. Complexity becomes expensive in a very literal way.
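The surface-to-volume squeeze is simple arithmetic, and a toy calculation makes it vivid (this is an illustration of the scaling, not Lane's actual bioenergetic model): for a spherical cell, membrane area grows with the square of the radius while volume grows with the cube, so the energy budget per unit of interior falls off as 1/r.

```python
# Toy illustration of the surface-to-volume constraint on a spherical cell.
# Energy supply scales with membrane surface area (4*pi*r^2); the cost of
# running the cell scales with volume (4/3*pi*r^3). Their ratio falls as 1/r,
# so doubling the radius halves the energy available per unit of machinery.
import math

def energy_per_unit_volume(radius: float) -> float:
    surface = 4 * math.pi * radius ** 2        # energy-generating membrane
    volume = (4 / 3) * math.pi * radius ** 3   # energy-consuming interior
    return surface / volume                    # simplifies to 3 / radius

for r in (1.0, 2.0, 4.0, 8.0):
    print(f"radius {r:>4}: budget per unit volume = {energy_per_unit_volume(r):.3f}")
```

Each doubling of radius halves the per-volume budget, which is why a prokaryote cannot simply grow its way to a bigger, more regulated interior. Mitochondria sidestep the squeeze by multiplying the energy-generating membrane inside the volume.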
The proposed escape hatch was endosymbiosis. An archaeal host somehow took in a bacterium and, instead of digesting it or being poisoned by it, stabilized the relationship. Over time that bacterium became the mitochondrion. The host gained access to distributed internal energy production. That changed the energy budget per gene so dramatically that larger genomes, more regulation, and eventually multicellular complexity became feasible.
The key point is not just that a merger happened. It is that, as far as we can tell, it happened once in the lineage that led to all complex life on Earth. One improbable integration rewrote the possible.
The singular event was a management problem, not a magic trick
People hear “endosymbiosis” and picture a dramatic moment, like a snake swallowing a lightbulb and suddenly becoming modernity. Biology was messier than that. A host cell engulfing another cell is common enough. Making the arrangement stable is the difficult part.
The guest can keep reproducing for itself. It can become a parasite. The host can lose control of it. Their division cycles can drift apart. Genes can remain in the wrong place. Protein transport has to evolve. Waste has to be managed. Conflicts of interest need to be reduced before the whole arrangement tears itself apart. Two formerly independent systems must become interdependent without destroying the ability of either one to function.
That is what makes Lane’s story so interesting for technology. Complexity is often described as an accumulation of capabilities, but many real bottlenecks are coordination bottlenecks. The hard part is not adding another useful subsystem. It is getting several useful subsystems to share a stable identity, a resource budget, and a common fate.
Biology solved that once at the cellular level and then spent the next stretch of deep time elaborating on the result. After mitochondria, the rest of the story is still astonishing, but the energy architecture had changed. There was finally room to build more.
When people say current AI is advancing exponentially, they usually point to visible performance: better reasoning benchmarks, longer context windows, lower hallucination rates in some settings, more competent coding, stronger multimodal perception. All of that is real. It may still be the wrong place to look for the decisive threshold. If the missing ingredient is architectural integration rather than raw capability, then progress can feel dramatic and still circle around the bottleneck.
Today’s AI is prolific without being deeply consolidated
Calling present-day AI “simple” sounds ridiculous if you have spent any time with frontier models. These systems are technically elaborate, expensive, and often uncanny. They can explain code, imitate styles, solve textbook problems, generate a camera move for a video shot, and talk their way through a legal clause with unnerving fluency. “Simple” needs a narrower meaning here.
A better word might be unconsolidated.
Most of the systems shipping into the world today are very good at pattern compression and local response. They map an input to a plausible output. Some do this with extraordinary breadth. But when you ask for durable agency over time, the seams show. Memory is often external. Planning is scaffolded by prompts or orchestration code. Tool use is wrapped around the model, not metabolized into it. Long-horizon goals drift. Context gets polluted. The system does not really know what should remain stable about itself from one hour to the next.
That is why so many demos still require a hidden human shepherd. Somebody resets the state, chooses the tools, narrows the objective, interprets the failure, and restarts the run. The wrapper stack can look less like a mind than a very polite Rube Goldberg machine.
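The "hidden shepherd" pattern is easy to caricature in code. The sketch below uses entirely hypothetical names (`call_model` stands in for any model API; no real agent framework is implied) to show how much of what looks like agency actually lives in the wrapper: memory is an external list, planning is a prompt template, and success judgment and recovery are hand-written glue.

```python
# A caricature of today's agent "glue": the model only maps input to output;
# everything that looks like durable agency lives in this wrapper.
# All names are hypothetical; call_model stands in for any LLM API.

def call_model(prompt: str) -> str:
    # Placeholder for a real model call; returns a canned answer here.
    return f"plausible output for: {prompt[:40]}..."

external_memory: list[str] = []        # "memory" lives outside the model

def run_task(goal: str, max_retries: int = 3) -> str:
    for attempt in range(max_retries):
        # "Planning" is a prompt template, not an internal process.
        prompt = f"Goal: {goal}\nNotes so far: {external_memory}\nNext step:"
        result = call_model(prompt)
        external_memory.append(result)  # the wrapper decides what persists
        if "plausible" in result:       # the wrapper also judges success
            return result
        external_memory.clear()         # on failure, the shepherd resets state
    return "gave up"

print(run_task("summarize the quarterly report"))
```

Every line outside `call_model` is coordination work the system cannot yet do for itself, which is the sense in which the stack is a Rube Goldberg machine rather than an organism.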
This is not an insult. Bacteria are not insults either. They are one of the great inventions of the universe. Current AI is already economically transformative precisely because local intelligence is valuable. Most businesses do not need a synthetic colleague with lifelong memory and coherent self-maintenance. They need software that drafts the first version, triages the inbox, detects fraud, or accelerates one part of a workflow. A huge market can be built on that layer alone.
The problem appears when we mistake spread for convergence. If conversational agents, image models, coding systems, and ranking engines proliferate everywhere, it is tempting to assume they are all walking the same road toward generality. Evolution gives us a reason to doubt that. You can fill the planet with competent systems and still be missing the one architectural bargain that opens a new regime.
A computational endosymbiosis would change the internal economics
If there is an analogue to endosymbiosis in AI, it probably will not look like a bigger chatbot. It will look like a merger that changes the internal economics of cognition.
Right now, many ingredients associated with more general intelligence already exist in partial form. We have predictive models with broad priors. We have tool use. We have retrieval systems, planners, reinforcement learning loops, simulators, world models, sensors, execution environments, and policy layers. The missing piece may be that they remain neighboring services rather than one durable organism. They cooperate because engineers force the handshake, not because the system has an internal scheme for maintaining itself across tasks and timescales.
A more integrated architecture would likely need several properties at once. It would need persistent memory that is not just a document dump. It would need a way to distinguish transient context from identity-level knowledge. It would need to allocate compute the way living systems allocate energy, spending more when uncertainty or stakes rise and coasting when they do not. It would need conflict resolution between short-term rewards and longer-term commitments. It would need a model of its own limitations that is usable rather than decorative.
The energetic analogy is especially helpful here. Biological complexity did not arrive because cells suddenly desired greatness. It arrived because a new arrangement made the overhead affordable. AI may face a parallel issue. A system that can reason, perceive, act, remember, learn online, check itself, and remain coherent under changing conditions imposes enormous coordination costs. You do not get that for free by slapping a vector database onto a language model and telling it to be serious from now on.
One way to phrase the challenge is genome size. In biology, mitochondria helped support a much larger active genome and a more complex regulatory apparatus. In AI, the equivalent may be the ability to maintain a much larger body of live internal structure without becoming unstable. That structure could include parameters, specialized modules, self-models, procedural memories, world representations, and standing goals. Today we often store these pieces in different places and hope the prompt can knit them together. Prompts are impressive glue. Glue is not the same as a circulatory system.
This is why “just scale it” can be both sensible and insufficient. Scaling helps if the bottleneck is missing capacity. It helps far less if the bottleneck is organizational. A company can hire more brilliant people and still fall apart if the teams cannot coordinate. A model can absorb more data and still remain strangely shallow over time if the architecture has no robust way to preserve and update itself.
There are research programs circling this territory from different directions: recurrent memory, agentic loops, active inference, model-based reinforcement learning, neurosymbolic systems, differentiable databases, tool-native training, self-reflection methods, hierarchical planners. Some may turn out to be blind alleys. Some may be pieces of the answer. The interesting possibility is that the decisive leap is not any single component, but the moment several of them stop behaving like hired contractors and start functioning as one system with a stable center of gravity.
Smooth curves can hide a discontinuity
This is where the analogy becomes uncomfortable for forecasting. Many timeline arguments rely on visible curves. Loss falls. Capabilities improve. Benchmarks move. Inference gets cheaper. Tool use gets cleaner. From that picture it feels natural to project forward and imagine a fairly direct path to something recognizably general.
History suggests another possibility. Systems can improve steadily inside one regime and still fail to cross into another. For a very long time, bacteria remained incredibly successful bacteria.
That does not mean AGI is decades away or impossible. It means extrapolation is fragile when a singular bottleneck may be involved. If the key requirement is a rare architectural integration, progress may look disappointingly continuous right up until it no longer does. Then, because software copies instantly, the spread after discovery could be much faster than anything biology ever allowed. A hard bottleneck and rapid diffusion are not opposites. They fit together.
This also explains why debate in AI feels so jagged. One camp sees rapid local progress and assumes continuity. Another sees the missing pieces around memory, agency, reliability, and self-correction, then assumes the field has hit a wall. Both camps may be reading different layers of the same situation. There is obvious momentum. There may also be a hidden chokepoint.
The analogy should not be stretched until it squeaks. Engineering is not evolution. Labs can search deliberately, reuse ideas, simulate variants, and borrow concepts from neuroscience, control theory, and computer systems. Biology had no whiteboards, no debuggers, and no cloud budget. A singular event in nature does not prove a singular event in technology. It only warns against treating higher complexity as a guaranteed continuation of whatever currently works.
The practical consequence is stranger than doom or hype
If AGI needs its own endosymbiotic moment, the immediate consequence is not reassurance. Narrow and semi-general systems can still reorder labor markets, education, software, media, and bureaucracy. Societies can be transformed by tools that are useful without being fully coherent agents. Microbial life changed a planet long before mammals arrived. There is no comfort prize in saying a deeper threshold may be rare.
The more interesting consequence is epistemic. It asks us to stop using ubiquity as evidence of inevitability. We are surrounded by systems that can mimic understanding, compress expertise, and slot into almost every white-collar process. That environment makes it psychologically hard to imagine a true bottleneck. Everything feels like a gradient. Yet some gradients lead to plateaus, and some plateaus only end when the structure of the system changes.
Lane’s picture of life offers a disciplined kind of humility. The universe may produce simple competence readily and reserve integrated complexity for exceptional arrangements. AI could follow a similar curve. We may be living through the explosive spread of useful intelligence while still lacking the merger that makes a more durable, self-maintaining, genuinely general system possible. If that is right, the central question is not how much more capability we can squeeze from current methods, but whether they can absorb their neighboring functions and remain coherent under their own weight.
End of entry.
Published April 2026