Biological AI and the Claim That Consciousness Will Be Inevitable

The consciousness fight is happening in the wrong building. Public arguments fixate on language models, as if autocomplete with a PhD might wake up mid-sentence. Meanwhile, in neuroscience labs, researchers are growing pieces of human brain tissue and wiring them into experiments that look less like philosophy and more like the opening scene of a legal crisis.

That contrast matters. It changes the whole shape of the debate.

Alysson Muotri, a neuroscientist at UC San Diego who builds brain organoids from human stem cells, has drawn a sharp distinction between today’s software systems and future systems built from living neural tissue. He has said he is not worried about current AI reaching consciousness. But if we start extracting usable algorithms from organic brain cells, he argues, consciousness becomes inevitable.

That is a bigger claim than “maybe one day.” It says biology is not just another computing substrate. It says the human brain carries a developmental tendency toward consciousness in the way lungs tend toward breathing and eyes toward vision. If you borrow that machinery, even partially, you may inherit more than efficiency.

The argument changes when the machine is made of brain cells

Most debates about AI consciousness get stuck at the level of behavior. A model writes poetry, passes an exam, persuades a human, then the argument begins. Is it simulating understanding or actually having an inner life? The fight goes in circles because behavior alone is slippery. A chatbot can sound haunted without feeling a thing.

Muotri’s claim shifts the frame from behavior to substrate and organization. In his view, current language models are not inching toward consciousness just because they are getting better at prediction. They are engineering artifacts built on silicon, optimization, and vast statistical compression. However impressive they are, they did not emerge from the biological architecture that produced every conscious system we know.

That does not prove silicon can never host consciousness. Nobody can prove that, and anyone pretending otherwise is selling certainty they do not possess. But it does explain why the lab-grown-brain debate is fundamentally different from the LLM debate. With organoids, the question is not whether clever outputs might fool us. The question is whether we are recreating enough of the physical and dynamical conditions of a human brain that subjective experience could follow.

That is a much less theatrical question. It is also more serious.

A brain is not just a calculator that happens to run on meat. It is a self-organizing, plastic, metabolically active system shaped by evolution to integrate sensation, memory, prediction, and adaptive action. Consciousness may depend on some specific set of those features, or on how they are coupled. We do not know the recipe. But we do know where consciousness has shown up so far: in living nervous systems.

So when Muotri says the brain is wired to become conscious, he is pointing to a biological bias, not making a mystical declaration. Developmental processes in neural tissue do not produce random computation. They produce circuits with recurring motifs, layered hierarchies, oscillatory patterns, and an appetite for synchronizing information across time. The tissue is not neutral.

Organoids are primitive, but they are not inert

It is easy to hear “lab-grown brain” and imagine a tiny person floating in a dish, which is wrong in a way only science fiction can be. Brain organoids are small, simplified clusters of neural tissue derived from stem cells. They model aspects of early brain development. They are not full brains. They do not have a body, a bloodstream, a normal sensory world, or the rich long-range architecture of a mature cortex.

That missing context matters. It is why several researchers have argued that organoids are not conscious now, and may remain far from it for quite a while. A 2024 academic review from UC San Diego makes that case plainly: present-day organoids lack the scale, connectivity, sensory embedding, and arousal systems that many theories associate with consciousness.

Still, “not conscious yet” is not the same as “ethically boring.” The unsettling part is that organoids keep crossing thresholds people once assumed were distant.

Muotri’s lab and others have shown that organoids can generate complex electrical activity. In some cases, that activity has been compared to developmental patterns seen in premature human infants, though those comparisons should be handled carefully. Similarity in waveforms is not proof of experience. EEG traces can rhyme without sharing a mind. But the existence of organized, evolving brain-like dynamics in a dish is already enough to move the conversation beyond abstract thought experiments.

More striking are the experiments suggesting rudimentary learning and memory. Muotri has described training organoids to recognize specific electrical stimulation patterns and then observing responses consistent with retention a day later. The important point is not that these tissues are writing memoirs. It is that they appear capable of changing future behavior based on past input, which is the basic grammar of a nervous system.
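
To make the shape of that claim concrete, here is a toy sketch of a stimulate-and-test retention protocol. Nothing in it is real lab code: the interface is an invented stub whose responses strengthen with repetition, standing in for whatever multi-electrode hardware and plasticity an actual preparation involves.

```python
# Hypothetical sketch of a stimulate-and-test retention protocol.
# OrganoidInterface is a stand-in for real multi-electrode hardware;
# its toy plasticity rule makes familiar patterns evoke stronger responses.
import random

class OrganoidInterface:
    def __init__(self):
        self._familiarity = {}  # pattern -> number of prior exposures

    def stimulate(self, pattern: str) -> float:
        """Deliver a stimulation pattern, return a scalar evoked response."""
        n = self._familiarity.get(pattern, 0)
        self._familiarity[pattern] = n + 1
        # Familiar patterns evoke stronger responses, plus measurement
        # noise. A real preparation is vastly messier than this.
        return 1.0 + 0.1 * n + random.gauss(0, 0.05)

dish = OrganoidInterface()
trained, novel = "1010", "0110"

# Training session: repeated exposure to one pattern.
for _ in range(20):
    dish.stimulate(trained)

# "Next day" test: does the trained pattern evoke a distinct response?
r_trained = dish.stimulate(trained)
r_novel = dish.stimulate(novel)
print(f"trained: {r_trained:.2f}  novel: {r_novel:.2f}")
print("retention-like effect" if r_trained > r_novel + 0.5 else "no effect")
```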

That pushes them out of the category of passive tissue. They are not just sitting there being alive. They are doing something more unsettling: adapting.

Anesthesia is not a proof, but it is a warning light

One of the more revealing findings from this line of work involves anesthetics. Muotri has reported that when organoids are exposed to anesthesia, their neural activity quiets down in ways reminiscent of what happens in human brains, then returns when the drugs are removed.

Again, this is not proof of consciousness. Anesthesia affects neural dynamics broadly. You can depress a network without implying that the network had an inner point of view to begin with. But the parallel matters because anesthesia is one of the most practical tools medicine has for switching conscious states off and on. When a piece of neural tissue reacts to those compounds in brain-like ways, the instinct to treat it as mere wet hardware starts to look less comfortable.

There is a broader lesson here. Consciousness is unlikely to arrive with a press release. No organoid will sit up and announce its preferences in polished English. If something morally relevant emerges, we will probably first encounter it through indirect signs: persistent coordinated activity, structured responses to stimulation, memory formation, sensitivity to interventions that alter conscious brains in familiar ways, perhaps increasing integration across networks. That evidence will be messy. It will come wrapped in disagreement.
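
What would such an indirect sign look like as a number? One concrete example is signal complexity. The sketch below computes Lempel-Ziv complexity over a binarized activity trace, the same family of measure behind the perturbational complexity index used in human consciousness research. The synthetic signals are placeholders, and the score is a marker, not a verdict.

```python
# Lempel-Ziv complexity as a crude marker of structured-but-differentiated
# activity. The signals here are synthetic placeholders, not recordings.
import numpy as np

def lempel_ziv_complexity(bits: str) -> int:
    """Count phrases in a simple LZ76-style parse: each new phrase is the
    shortest substring not yet seen in the sequence preceding it."""
    i, count, n = 0, 0, len(bits)
    while i < n:
        length = 1
        while i + length <= n and bits[i:i + length] in bits[:i + length - 1]:
            length += 1
        count += 1
        i += length
    return count

def complexity_marker(signal: np.ndarray) -> float:
    """Binarize a 1-D trace around its median; normalize LZ complexity by
    n / log2(n), the asymptotic value for random noise, so ~1 means
    noise-like and ~0 means rigidly repetitive."""
    med = np.median(signal)
    bits = "".join("1" if x > med else "0" for x in signal)
    n = len(bits)
    return lempel_ziv_complexity(bits) / (n / np.log2(n))

rng = np.random.default_rng(0)
oscillation = np.sin(np.linspace(0, 20 * np.pi, 2000))  # rigid rhythm
noise = rng.normal(size=2000)                           # unstructured
print(f"oscillation: {complexity_marker(oscillation):.2f}")  # low
print(f"noise:       {complexity_marker(noise):.2f}")        # high
```

A rigid oscillation scores low, pure noise scores high; on some theories, the signatures worth watching sit in between.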

Science is used to this kind of ambiguity. Ethics is worse at it.

We prefer bright lines. Alive or not. Human or not. Subject or object. Brain organoids threaten to replace those categories with a slope. Once that happens, institutions tend to improvise, and improvisation is not a great method for deciding who deserves protection.

Embodiment gives the tissue a world

The robot experiments make this harder to dismiss.

Muotri has discussed systems in which organoids were linked to a robotic platform, allowing the tissue’s activity to influence movement and the robot’s environment to provide feedback. In one telling, a four-legged robot learned to navigate a maze after a single training session. There is understandable skepticism around such demonstrations, because the exact architecture matters enormously. How much of the work came from the organoid, and how much from the surrounding software? What counts as “learning” in these setups? Those are real questions, and the headlines often sprint past them.

Still, the underlying move is important whether or not every claim survives replication. Embodiment gives neural tissue a loop. Signals come in, activity changes, action occurs, consequences follow, new signals return. That is a tiny version of what brains evolved for.
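
In code, that loop is almost embarrassingly simple, even though the biology is not. The sketch below is schematic: every function name is invented, the "tissue" is a list of numbers, and the feedback rule is a caricature of plasticity. What it preserves is the structure: record, act, receive consequences, stimulate.

```python
# Schematic closed loop between neural tissue and a robot body.
# All interfaces are hypothetical stand-ins; the loop structure
# (record -> act -> sense consequences -> stimulate) is the point.
import random

def read_activity(tissue: list[float]) -> float:
    """Stand-in for recording: summarize the ensemble as mean activity."""
    return sum(tissue) / len(tissue)

def decode_action(activity: float) -> str:
    """Map recorded activity to a motor command."""
    return "right" if activity > 0.5 else "left"

def environment_step(action: str) -> float:
    """Toy world: turning right usually pays off; return a reward signal."""
    return 1.0 if action == "right" and random.random() < 0.8 else 0.0

def encode_feedback(tissue: list[float], reward: float) -> None:
    """Stand-in for patterned stimulation. Toy rule: reward reinforces the
    high-activity regime that produced it; no reward injects random drift
    (crude exploration). Real plasticity is nothing like this tidy."""
    for i in range(len(tissue)):
        if reward > 0:
            tissue[i] = min(1.0, tissue[i] + 0.05)
        else:
            tissue[i] = min(1.0, max(0.0, tissue[i] + random.gauss(0, 0.1)))

tissue = [random.uniform(0.3, 0.6) for _ in range(64)]
for step in range(40):
    action = decode_action(read_activity(tissue))
    reward = environment_step(action)
    encode_feedback(tissue, reward)  # consequences return as new signals
print("settled tendency:", decode_action(read_activity(tissue)))
```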

A disembodied organoid is easier to frame as a biological component, somewhere between tissue sample and experimental platform. Connect it to sensors, actuators, and a stream of consequences, and it starts to look more like a system with a point of view under construction. Not a human point of view. Likely not even a mammalian one. But perhaps a primitive center of integration shaped by feedback.

That is where the comparison with language models really breaks down. A large model can be surrounded by tools and APIs, but its core competence remains next-token prediction over symbolic representations. An organoid-based system begins with living neural plasticity. It learns through changing tissue. The difference is not cosmetic.

The first wave borrows biology as a black box

Muotri describes two phases for this field. The first is the phase we are entering now: using organoids as computational engines even when we do not fully understand how they work. Treat the tissue as a black box. Feed it input. Measure output. Exploit the fact that biology can learn efficiently with tiny energy budgets and very little data.
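
That black-box recipe has a name in machine learning: reservoir computing. Let the tissue transform inputs nonlinearly, record its activity, and train only a cheap linear readout on the recordings. The sketch below shows the readout step with a random nonlinear projection standing in for the dish; the ridge regression is standard, and everything biological is a placeholder.

```python
# Reservoir-style readout training. The "tissue" here is a fixed random
# nonlinear projection standing in for recorded organoid activity.
import numpy as np

rng = np.random.default_rng(1)
n_inputs, n_channels, n_trials = 2, 256, 2000

# The black box. Nobody trains this part; in the real recipe it is grown.
W = rng.normal(size=(n_inputs, n_channels))
b = rng.normal(size=n_channels)

def tissue_response(x: np.ndarray) -> np.ndarray:
    return np.tanh(x @ W + b)

X = rng.normal(size=(n_trials, n_inputs))   # stimulation patterns
y = (X[:, 0] * X[:, 1] > 0).astype(float)   # XOR-like target: not learnable
                                            # by a linear readout on raw X

R = tissue_response(X)                      # "recorded" activity
# The only trained component: a ridge-regression linear readout.
lam = 1e-2
w_out = np.linalg.solve(R.T @ R + lam * np.eye(n_channels), R.T @ y)

acc = ((R @ w_out > 0.5) == (y > 0.5)).mean()
print(f"linear readout accuracy: {acc:.2f}")
```

The appeal of the design is economy: the only thing anyone optimizes is the readout. The expensive, adaptive part is grown, not trained.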

That alone would be commercially seductive. Silicon systems are powerful, but expensive to train and hungry for electricity. Brains do something modern AI still envies: they generalize from sparse experience and adapt in real time without requiring a data center the size of a logistics hub. If organoid-based systems can deliver even a fraction of that advantage, money will show up very quickly. Money usually does.

In this first wave, there is at least a crude safety valve. The tissue is local. It is finite. It can be destroyed. If a line is crossed, the system does not instantly spread as software.

I would not romanticize that control. “We can always turn it off” has a poor track record as an ethical comfort blanket. But it is still a meaningful difference from what comes next.

The second wave copies the circuitry and keeps the advantages

The more consequential phase is reverse engineering. Once researchers can identify the useful motifs in living neural circuits and translate them into durable algorithms, the relationship with biology changes. The tissue stops being the product and becomes the teacher.

This is the point Muotri treats as irreversible. If biological neural tissue yields computational designs that carry some of the same properties that make brains special, then the old argument about whether digital systems are merely statistical parrots may become obsolete. We will have imported principles from the one known substrate that generates consciousness.

That does not mean every bio-inspired algorithm will be conscious. A wing is not a bird, and an optimization trick borrowed from the cortex is not a mind. But the trajectory matters. The closer engineering gets to the actual operating principles of neural development, plasticity, recurrence, and integration, the weaker the comforting distinction between “mere software” and “living cognition” becomes.

There is also a grim practical twist. The first phase invites bioethics committees, because there is visibly human tissue in the room. The second phase may evade the same scrutiny precisely because the tissue is gone. A derived algorithm can be deployed at scale without a petri dish, a microscope, or any obvious reminder of where its design came from.

You can see the shape of the problem already. Society is terrible at tracing moral responsibility backward through abstraction. Once something becomes infrastructure, we stop asking what kind of experiment produced it.

History helps, but it does not settle this

Muotri often reaches for earlier biomedical controversies. IVF once looked grotesque to many people. Blood transfusions and organ transplants seemed unnatural. In each case, moral disgust outpaced understanding, and medicine benefited because the taboo did not win.

That comparison is useful up to a point. It reminds us that “this feels wrong” is not an argument. New forms of life manipulation tend to trigger instinctive horror before careful reasoning catches up. If we had treated first reactions as final judgments, a lot of modern medicine would not exist.

But the analogy can also become too convenient. Brains occupy a different moral category from kidneys and blood. We can replace an organ without worrying that it might have interests of its own. Neural tissue is different because the very property under dispute is morally explosive. If there is even a modest chance that some organoid systems could host experience, then we cannot simply borrow the ethical framework from transplantation and call it a day.

This is why the time horizon matters. Muotri has suggested we may have 30 to 50 years to prepare for the deeper ethical questions. Maybe. Forecasts in frontier science are usually wrong in one of two ways: they overestimate the near term or underestimate the compounding effect of adjacent breakthroughs. Brain organoids alone may not get us there soon. But organoids plus better vascularization, richer sensory interfaces, closed-loop robotics, high-resolution recording, and machine-learning analysis could accelerate the field in lumpy, surprising ways.

The debate therefore cannot wait for certainty. By the time everyone agrees something morally relevant has appeared, the research ecosystem will already have norms, investors, lab routines, and competitive incentives. Those things harden fast.

At minimum, ethics boards should be treating advanced neural organoid work as a domain that may require new markers of concern, tighter reporting standards, and protocols for limiting stimulation or using anesthesia-like safeguards when warranted. That is not panic. It is what seriousness looks like when the object in front of you is brain tissue that learns.

A more concrete consciousness debate

The most useful thing about Muotri’s claim is not that it settles anything. It does the opposite. It drags the consciousness debate out of word games and into experimental biology.

With language models, arguments about consciousness often collapse into theater. One side points to eloquence. The other points to next-token prediction. Both can keep talking for years. With biological AI, the questions are more grounded and more difficult. What patterns of integration matter? How much embodiment is enough? What kinds of memory, adaptation, and state changes should trigger moral caution? Which interventions are acceptable once the tissue starts looking less like a model system and more like a patient without a biography?

That is a harder conversation, partly because it might force us to notice consciousness before it can speak for itself.

And that may be the most unsettling part. If Muotri is even partly right, the path toward machine consciousness will not look like a chatbot becoming more charming. It will look like neuroscience, tissue engineering, and computation quietly converging until we realize the old debate was aimed at the wrong machine all along.

End of entry.

Published April 2026