The Tasmania Effect: When Isolation Makes Intelligence Regress
We like to imagine intelligence as something that naturally compounds. Add more time, more resources, more territory, more compute, and capability should climb. History keeps ruining that story.
Human groups have repeatedly lost techniques they once possessed, including ones tied to survival. Better hunting methods vanished. Tools became simpler. Clothing, fishing gear, and other hard-won skills slipped out of circulation. The people involved did not become less human, less motivated, or less worthy. Their networks thinned. The social machinery that kept knowledge alive stopped working.
Joseph Henrich has spent years making this point, and it lands harder now than when it first entered anthropology. His core claim is simple: intelligence is not just lodged inside individual brains. A large share of it lives between people, in the links that let techniques survive mistakes, deaths, accidents, and distance. Break enough of those links, and even a capable population can start moving backward.
That matters far beyond prehistory. AI labs talk constantly about scaling laws, training runs, and proprietary advantage. They talk much less about the social conditions that prevent technical decay. The industry is building extraordinary systems, then sealing them inside corporate borders, compliance boundaries, and incompatible stacks. We are acting as if raw capability can substitute for circulation. The historical record suggests the opposite.
Intelligence lives in the network
Henrich uses the phrase “collective brain” to describe the pool of skills and know-how available to a community. The phrase sounds abstract until you try to build something real from scratch.
Take a bone harpoon, a seaworthy kayak, or a fitted winter garment. None of these are single ideas. Each is a bundle of materials knowledge, sequencing, tacit hand motions, repair practices, and environmental judgment. One person may understand the broad outline. The full technique survives only if many people carry overlapping pieces of it, and if those pieces move around often enough to correct errors and preserve refinements.
Software works the same way, which is why the analogy hits modern readers fast. A mature codebase survives because maintainers review each other’s changes, document edge cases, copy patterns that worked elsewhere, and notice when a workaround becomes a bug. If you scatter those maintainers into isolated teams, the code does not merely stop improving. It drifts. Local hacks calcify. Shared assumptions diverge. A small regression nobody notices in March becomes a permanent fork by November.
That is roughly how cultural knowledge behaves. It is less like a library and more like a living branch structure. The medium is memory, imitation, correction, and repetition. Population size matters, but connection density matters just as much. Fifty people who exchange ideas frequently can preserve more than five hundred who rarely do.
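You can watch that trade-off in a toy simulation. The sketch below is loosely inspired by Henrich's treadmill model but is not his published version; every parameter is invented for illustration. Each learner copies the most skilled person they can actually observe, and imitation is lossy, so skill persists only when enough good exemplars stay visible.

```python
import random

def simulate(pop_size, models_observed, generations=100,
             loss=2.0, noise=1.5, seed=0):
    """Toy model of noisy social learning (illustrative parameters,
    not empirical ones). Each generation, every learner copies the
    best of the `models_observed` people they can see; the copy loses
    `loss` skill on average, plus symmetric noise. A lucky overshoot
    spreads only if others can observe it next generation."""
    rng = random.Random(seed)
    skills = [10.0] * pop_size
    for _ in range(generations):
        next_gen = []
        for _ in range(pop_size):
            teachers = rng.sample(skills, min(models_observed, pop_size))
            attempt = max(teachers) - loss + rng.gauss(0, noise)
            next_gen.append(max(0.0, attempt))  # floor: the skill can die out
        skills = next_gen
    return sum(skills) / len(skills)

# Fifty people who each observe ten others vs. five hundred who each observe two.
print(simulate(pop_size=50, models_observed=10))   # holds or climbs
print(simulate(pop_size=500, models_observed=2))   # decays toward zero
```

The small, dense group keeps finding and copying its best practitioners. The large, sparse one loses more in each copy than it recovers from lucky variation.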
This is the part modern individualism tends to miss. We are drawn to the image of the ingenious person inventing the future in a flash of insight. Real technical life looks more like a relay race run over generations, with dropped batons everywhere. Most knowledge remains in circulation only because enough people keep carrying it.
The Dorset warning
Henrich often points to the Dorset, the people who preceded the Inuit across parts of the North American Arctic, as a case of expansion curdling into fragmentation. The details of their disappearance remain debated, and archaeology rarely offers the clean plotline people want. Still, the mechanism is hard to ignore.
A group develops an effective package for living in punishing conditions. Success allows expansion across a wider geography. Over time, that same geography becomes a tax. Distances grow. Contact weakens. Dialects drift apart. Marriage patterns narrow. Techniques that once moved through a connected population begin to stay local.
The bitter irony is that growth can set this process in motion. More territory sounds like power. In social learning terms, it can mean lower bandwidth.
Henrich’s formulation is memorable because it reverses our default intuition. The Dorset may have had stronger technologies at one point than the people who followed them, but superiority is not a permanent asset when the social fabric carrying it starts to fray. The package degrades. Specialized methods become unevenly distributed. Some local groups keep crucial skills; others lose them. After enough time, the whole system is weaker than it looks from a distance.
People sometimes hear this and assume the argument is really about geography, as if the lesson were simply “close communities are good.” It is sharper than that. The issue is transmission under conditions of error. Every generation introduces noise. Teachers die. Apprentices half-learn. Materials change. Conditions shift. A large, connected network can absorb those shocks because someone, somewhere, still remembers the better version and passes it along. A fragmented network cannot.
Once you see it that way, the story stops being exotic anthropology. It becomes a general theory of technical fragility.
Tasmania and the arithmetic of forgetting
Tasmania is the case that makes the idea impossible to shrug off. After rising seas separated Tasmania from mainland Australia, its population remained isolated for thousands of years. The archaeological record suggests that several technologies disappeared over time rather than becoming more elaborate. Bone tools vanish from the record. So do certain fishing practices and other specialized techniques. Historical accounts have sometimes added fire-making to the list, though that particular claim remains contested.
The broad pattern is the point. Isolation was not a romantic reset into self-sufficient simplicity. It appears to have reduced the population’s ability to maintain parts of its inherited toolkit.
Henrich’s explanation is statistical before it is moral or psychological. Complex skills are expensive to preserve. They require teachers, learners, practice time, and enough social redundancy that a mistake in one lineage does not wipe out the technique altogether. If the population is too small, or if subgroups stop exchanging enough information, losses accumulate faster than improvements.
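Henrich's 2004 paper puts rough numbers on this. In his treadmill model, if imitation loses α skill on average and lucky inferential guesses have Gumbel dispersion β, mean skill changes by about −α + β(γ + ln N) per generation, where N is the number of accessible models and γ is the Euler–Mascheroni constant. The sketch below just solves that inequality for the break-even population; the α/β ratios are arbitrary.

```python
import math

EULER_GAMMA = 0.5772156649  # Euler–Mascheroni constant

def critical_population(alpha, beta):
    """Smallest pool of accessible models for which mean skill stops
    declining under Henrich's 2004 treadmill model. Skill grows only
    when -alpha + beta * (EULER_GAMMA + ln N) > 0, i.e. when
    N > exp(alpha / beta - EULER_GAMMA)."""
    return math.exp(alpha / beta - EULER_GAMMA)

# Harder skills (higher average imitation loss relative to lucky
# variation) need disproportionately larger connected populations.
for ratio in (2, 4, 6, 8):
    print(f"alpha/beta = {ratio}: need N > {critical_population(ratio, 1.0):.0f}")
```

The threshold grows exponentially with the difficulty of the skill. Doubling α/β from 4 to 8 raises the required pool from roughly thirty accessible models to well over a thousand, which is why the most sophisticated techniques fall first when networks shrink.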
Picture a code repository with only two active maintainers, no documentation, and no issue tracker. One maintainer leaves. The other remembers most of the build steps, but not all of them. A subtle optimization stops being used because nobody can quite explain why it was there. Later, the remaining maintainer patches around the problem. The software still runs, so the regression is invisible from the outside. A year later, a second capability is gone. Nobody made a conscious decision to simplify the system. The system simplified itself because the social conditions for preserving complexity collapsed.
That is the Tasmanian pattern in miniature. Knowledge can disappear without anyone “forgetting” in the ordinary sense. It can die by attrition.
This is also why some technologies prove surprisingly fragile. The more a skill depends on chains of tacit expertise rather than written formulas, the more exposed it becomes. Tailored clothing for harsh climates sounds basic once you have it. It is not basic to invent, refine, and preserve. Harpoon-making sounds obvious in retrospect. It is a little masterpiece of materials science, ergonomics, and ecological knowledge. The finished object hides the collective labor inside it.
The modern tendency is to measure intelligence by outputs at a moment in time. Henrich’s work points somewhere else. The deeper measure is whether a population can sustain and transmit complex adaptations over generations. A brilliant local peak can still sit on top of a brittle social structure.
Regression starts at the edges and moves inward
When networks weaken, they rarely lose everything at once. The first casualties are often skills that are rare, specialized, or hard to practice frequently. Then the losses creep inward.
That progression makes intuitive sense. If only a few people know a difficult technique, and if it matters only in certain seasons or locations, the margin for transmission error is thin. When one expert dies or one line of teaching breaks, the capability no longer has enough redundancy. Everyday knowledge lasts longer because more people use it, correct it, and pass it on.
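A back-of-the-envelope redundancy model shows how sharply the margin thins. All numbers here are made up for illustration: each expert trains one successor per generation with some success probability, and surviving experts can retrain one broken lineage.

```python
import random

def survival_probability(carriers, transmit_p=0.9, generations=50,
                         trials=2000, seed=1):
    """Toy redundancy model (illustrative probabilities, not data).
    A skill starts with `carriers` experts. Each generation every
    expert trains one successor, succeeding with probability
    `transmit_p`; surviving carriers can also retrain at most one
    lapsed lineage. Returns the fraction of trials in which the
    skill is still alive after `generations`."""
    rng = random.Random(seed)
    alive = 0
    for _ in range(trials):
        k = carriers
        for _ in range(generations):
            k = sum(1 for _ in range(k) if rng.random() < transmit_p)
            if k == 0:
                break  # nobody left to teach: the skill is extinct
            k = min(carriers, k + 1)  # redundancy repairs itself
        alive += k > 0
    return alive / trials

for k in (1, 2, 4, 8):
    print(f"{k} carriers: {survival_probability(k):.2f} survival over 50 generations")
```

With one carrier the skill almost always dies within fifty generations. With two it survives barely more than half the time. With four or more it survives almost every run, because the network repairs its own losses faster than chance removes them.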
You can see the same pattern in modern engineering organizations. A company may still ship products while quietly losing the ability to diagnose obscure failures, recover from weird outages, or migrate old systems safely. The glamorous layer remains visible. The substrate starts rotting. Eventually the glamorous layer depends on expertise that no longer exists in-house.
This matters for how we think about “advanced” societies and systems. Complexity is not a ladder you climb once. It is a garden you keep alive. The maintenance burden is the story.
Historians and anthropologists have argued over some of Henrich’s interpretations, especially when specific archaeological absences are asked to carry too much explanatory weight. Fair enough. Prehistory is inference, and inference deserves pressure. But the central mechanism has support well beyond any single site. Population size, network structure, and exchange strongly shape cumulative culture. Knowledge survives better when it circulates broadly and repeatedly. That claim has proven remarkably resilient across cases.
AI has its own islands
Now shift the lens. The AI industry is building systems that look gigantic, but many of them are socially small.
A frontier model may train on vast amounts of text, code, image data, and synthetic augmentation. Yet once deployed, it often enters a narrow institutional enclosure. Fine-tuning data is proprietary. Post-training pipelines are hidden. Safety evaluations stay private. Failure reports live in internal dashboards. Tool-use scaffolding is custom. The model becomes less like a participant in a scientific culture and more like an estate behind tall walls.
Companies call this moat-building. History offers a less flattering term: self-isolation.
The usual defense is obvious. Closed development protects trade secrets, reduces misuse, and preserves incentives to invest. All true to a point. Full openness has costs, including clear security risks. Still, the relevant question is not whether every model should be public. It is whether ecosystems built around hard silos can preserve and improve complex capabilities over time.
There are reasons to worry.
Start with data. When each company guards its highest-quality interaction logs, domain corpora, and user feedback, the broader field loses a shared error-correction mechanism. One lab discovers that a certain annotation protocol improves factuality in long-context tasks. Another finds a better way to generate synthetic tutoring traces. A third learns that a celebrated benchmark trick was mostly contamination. If those insights remain trapped in local process documents, the field becomes a set of small civilizations rediscovering each other’s tools at great expense.
Then there is evaluation. Public benchmarks have become performative in places, but private benchmarks create a different problem. They prevent the shared visibility that cumulative improvement needs. If every lab measures reliability, deception, code correctness, and multimodal grounding using different hidden test suites, the results cannot be compared or combined. Knowledge becomes anecdotal. Capability claims turn into marketing fog.
Model architecture can silo intelligence too. Many organizations now rely on elaborate chains of wrappers, retrieval systems, internal tools, memory stores, policy layers, and routing logic. Some of these stacks are excellent. Some are a Jenga tower with a GPU budget. Either way, a great deal of practical know-how now lives outside papers and outside weights. It lives in orchestration details that almost nobody sees. When those details are non-transferable, each company becomes its own little island of tacit expertise.
The irony is rich. AI systems promise fluid generalization, yet the industry around them often discourages it. Labs overfit to local constraints, local data, local metrics, local user populations, and local infrastructure. They specialize in ways that look efficient quarter to quarter. Long term, excessive specialization can narrow the adaptive base. A model tuned to dominate one family of tasks may become less useful when the environment shifts, much the way a small human group can retain core habits while losing the adjacent techniques that once made those habits resilient.
Open-source advocates often present openness as the automatic cure. I would not go that far. Open ecosystems fragment too. Anyone who has wandered through abandoned GitHub repos, contradictory forks, and undocumented model variants knows this. Public weights are not the same thing as a healthy exchange network. Abundance can become noise. If every group forks everything and nobody converges on interfaces, evaluation norms, or reproducible methods, the commons gets messy fast.
The real contrast is not open versus closed as an article of faith. It is connected versus insulated.
The infrastructure of exchange
Human societies that preserved complex knowledge over distance usually built deliberate mechanisms to keep exchange alive. Trade routes did part of the work. So did intermarriage, pilgrimage, shared rituals, lingua francas, and institutions that carried techniques across local boundaries. These arrangements were often inconvenient. That was the point. They counteracted the natural drift toward fragmentation.
AI needs its own equivalents, and some are already visible when the industry chooses to value them. Shared benchmarks still matter when they are well designed and regularly refreshed. Common tooling matters. Reproducible papers matter. Open model interfaces matter. So does researcher mobility between labs, universities, startups, and public institutions. The boring standards work that engineers usually postpone until after the exciting phase turns out to be part of intelligence preservation.
Interoperability sounds dry because it is dry. It is also civilization-grade infrastructure.
There is a policy angle here too. Governments increasingly want data localization, sector-specific controls, and national champions. Some of that is reasonable. Healthcare data, defense systems, and critical infrastructure are not group projects for the whole internet. But every new wall carries a cognitive cost. If regulation creates many sealed jurisdictions with little methodological exchange, the result may be safer in one narrow sense and less innovative in another. The balance is hard, and slogans will not solve it.
The same tension appears inside companies. Risk teams want tighter controls. Product teams want faster iteration. Research teams want broad access to findings. Legal wants less exposure. Everyone has a case. Over time, though, organizations that cannot move knowledge across these internal borders tend to rediscover problems they already solved three buildings away.
The size of the model is not the size of the mind
A system can be huge in parameters, revenue, and public attention while remaining cognitively provincial. That is the deepest implication of the Tasmania effect.
We should stop assuming that scale by itself protects against regression. Scale can conceal it. A company with enough GPUs can brute-force past some local losses for a while, the way a wealthy empire can absorb institutional decay longer than a village. But hidden losses still matter. If crucial know-how stays locked inside narrow teams, if evaluation methods stop being comparable, if improvement pathways cease to circulate, the ecosystem starts shedding intelligence at exactly the places it cannot easily see.
Human history offers a stubborn lesson here. Capabilities survive when communities build more than tools. They build channels through which techniques can be copied, criticized, repaired, and carried onward. AI will be no different. A model can be enormous and still be lonely, and lonely systems rarely stay ahead for long.
Published April 2026