Serendipity Against Total Power

A lot of AI marketing can be translated into one promise: you will never have to be surprised again. The system will know the route, the product, the movie, the diagnosis, the next sentence. It sounds like convenience until you notice what disappeared.

Éric Sadin gives that disappearing space a name. He describes a form of “Total Power,” a cognitive infrastructure we build ourselves and then hand authority to because it appears more informed, more neutral, and more efficient than we are. In that vision, uncertainty is treated as a defect. The world becomes something to sense, model, and steer.

There is real seduction in that idea. If sensors and models can warn us about volcanic eruptions, catch disease earlier, stabilize power grids, and reduce waste, why would we defend friction at all? Why insist on mess when calculation can do better?

Because the same logic that helps detect danger can also flatten experience. Once prediction becomes the default answer to every problem, life starts to narrow around what can be anticipated. Serendipity stops looking like a gift and starts looking like system failure. That is the point where convenience becomes political.

The dream of a world with no surprises

Modern AI is often sold as a universal solvent for uncertainty. Put enough data into a model, connect it to enough sensors, and the unknown shrinks. Weather improves. Supply chains tighten. Insurance prices risk with more confidence. Recommendation engines find what you “really” want before you ask. Software stops waiting for commands and begins inferring intent.

Seen one tool at a time, this feels harmless. A navigation app saves ten minutes. A streaming service spares you twenty minutes of scrolling. A writing assistant removes blank-page anxiety. A hiring system ranks applicants faster than a recruiter can. Each little optimization looks reasonable on its own terms.

Sadin’s argument lands when you connect those systems into a culture. The issue is not one app recommending a book. The issue is a civilizational reflex that treats unpredictability as waste. Once that reflex hardens, every domain gets judged by the same standard: can we make it more calculable, more programmable, less open to accident?

That is what makes “Total Power” more than a dramatic phrase. It is not simply top-down control, and it does not need a dictator’s face on a poster. It operates through dashboards, confidence scores, personalized rankings, and automated suggestions that feel helpful because they usually are. The power lies in setting the horizon of the plausible. If the system is always pre-sorting reality, your contact with the unplanned gets thinner.

You can already feel this in ordinary digital life. The feed surfaces what similar people engaged with. The shop shows what your behavioral twins purchased. The map chooses the fastest road. The inbox writes the reply it thinks you intend to send. Each layer removes a little uncertainty. After a while, the smoothness starts to resemble enclosure.

Prediction freezes the self

The deepest problem with recommendation systems is not manipulation in the cartoon sense, where a machine forces you to click something absurd. It is more subtle. These systems build a statistical version of you, then keep serving that version back to you.

If you liked one kind of music, the model offers adjacent music. If you watch one kind of film, it suggests the nearest neighbors. If you spend time on one topic, the feed finds more from the same cluster. This is useful. It is also conservative in the literal sense. It conserves the self you have already displayed.
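
To make that conservatism concrete, here is a minimal sketch in Python. The item names and taste vectors are invented for illustration and stand in for whatever features a real service might compute; nothing here describes any particular product. A nearest-neighbor recommender of this shape can only return what already sits close to the profile the listener has displayed.

    import numpy as np

    # Toy "taste space": each item is a vector of stylistic features
    # (hypothetical names and numbers, purely for illustration).
    items = {
        "ambient_album":   np.array([0.9, 0.1, 0.0]),
        "drone_album":     np.array([0.8, 0.2, 0.1]),
        "punk_album":      np.array([0.1, 0.9, 0.2]),
        "free_jazz_album": np.array([0.2, 0.3, 0.9]),
    }

    def recommend(history, k=1):
        """Average what the listener already played, then return the nearest unplayed items."""
        profile = np.mean([items[name] for name in history], axis=0)
        ranked = sorted(items, key=lambda name: np.linalg.norm(items[name] - profile))
        return [name for name in ranked if name not in history][:k]

    # A listener who has only ever played ambient music is offered more of the same.
    print(recommend(["ambient_album"]))  # ['drone_album']

Nothing in this loop reaches the free-jazz record until every nearer neighbor has been exhausted, because distance from the displayed self is exactly what it minimizes.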

That matters because human beings do not only express preferences. We also form them. We stumble into books that rearrange our taste. We meet people who make earlier ambitions seem provincial. We enter fields we would never have searched for, because we did not yet possess the language to ask for them. The systems that know us best are usually best at satisfying the person we were yesterday.

Sadin gives a simple example from his own life. At seventeen, he found Proust by chance in a bookstore. The encounter mattered because it was not predicted, not tailored, not returned by a profile-driven engine that said, in effect, “users like you also enjoyed.” A recommendation system might still have put Proust in front of him. The point is not that algorithms cannot surface great work. The point is that an accidental encounter has a different structure of meaning.

When a book appears as the output of a model, it arrives wrapped in explanation. You were likely to enjoy this. It fits your inferred taste. It belongs to your cluster. The encounter is framed by continuity. When a book falls into your life by chance, it can rupture continuity. It may feel alien at first and turn out to be formative later. One mode confirms a profile. The other can dissolve it.

This is why the language of personalization often misses what is at stake. Personalization sounds intimate, almost respectful. In practice, it can become a gentle prison. The walls are made from your own past behavior, which is why they are hard to notice. A feed built around prediction is not only guessing what you want. It is quietly proposing that who you have been is the best guide to who you should become.

Serendipity is not decorative

It is easy to hear “serendipity” and picture a luxury preference. A nice afternoon in a bookstore. A charming detour. A romantic defense of analog life by people with enough time to wander. That reading misses the scale of the idea.

Serendipity is one of the mechanisms by which a person remains open. It keeps experience from collapsing into preference fulfillment. It introduces contact with what was not selected, not optimized, and often not initially legible. You do not need to be an artist to need that. You need to be human.

Science offers the cleanest examples because it likes to imagine itself as methodical. Penicillin was not discovered by following a perfectly planned search path. X-rays were not found because somebody had a fully specified target and then executed it like a shopping query. Discovery often begins with an anomaly that interrupts the model, a result that looks like contamination, noise, or error until someone pays attention instead of filtering it out.

That pattern matters now because AI is entering research itself. Models can scan literature, propose compounds, simulate structures, and rank hypotheses with extraordinary speed. This is valuable work. Yet if institutions begin to trust systems only insofar as they reduce exploratory waste, they may also lose tolerance for the weird result that does not fit. Serendipity in research is not magical thinking. It is what happens when a culture allows enough slack for the unexpected finding to survive first contact with efficiency.

The same goes for ordinary lives. Careers are often changed by chance conversations, by overheard sentences, by taking a class that made no sense on paper, by reading outside one’s lane. Relationships start through improbable intersections more often than we admit. Even political understanding can shift because someone encountered a world that an engagement-optimized feed would never have served them.

If everything important enters through a ranked interface, then the ranked interface gains an enormous role in deciding what counts as encounter. That is too much authority to hand to systems built to optimize measurable outcomes.

Optimization has a hidden theory of the good life

Every optimization system contains a worldview, even when it claims neutrality. It has an objective function, a model of relevance, a tolerance for error, and a reward structure. Those are engineering choices. They are also social choices.

A map that always picks the fastest route assumes speed is the right answer unless told otherwise. A recommendation engine tuned for engagement assumes continued attention is a decent proxy for value. A productivity assistant assumes less effort is usually better. None of those assumptions are absurd. They become dangerous when they spread into domains where the measurable goal is too thin to capture what humans actually need.
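
The shape of that choice is easy to show in code. The sketch below is illustrative only: the titles, scores, and the two objectives are invented, not a description of any real recommendation engine. It shows how the “neutral” ranking simply follows whichever objective the system is handed.

    from dataclasses import dataclass

    @dataclass
    class Item:
        title: str
        predicted_watch_time: float    # proxy for "engagement"
        distance_from_profile: float   # how unlike the viewer's past this item is

    # Hypothetical catalog; the titles and numbers are invented for illustration.
    catalog = [
        Item("comfort_rewatch", 0.92, 0.05),
        Item("adjacent_pick",   0.80, 0.30),
        Item("unfamiliar_work", 0.40, 0.90),
    ]

    def rank(items, objective):
        # The ordering is only as "neutral" as the objective it is handed.
        return sorted(items, key=objective, reverse=True)

    engagement = lambda it: it.predicted_watch_time
    stretch    = lambda it: 0.5 * it.predicted_watch_time + 0.5 * it.distance_from_profile

    print([it.title for it in rank(catalog, engagement)])
    # ['comfort_rewatch', 'adjacent_pick', 'unfamiliar_work']
    print([it.title for it in rank(catalog, stretch)])
    # ['unfamiliar_work', 'adjacent_pick', 'comfort_rewatch']

Swap the objective and the ordering inverts. The engineering is trivial; the decision about what to maximize is where the worldview lives.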

Consider culture. The “best” recommendation is often defined as the item most likely to hold your attention. But attention retention and meaningful encounter are not the same thing. Plenty of works that shape a life do not flatter current taste. They demand patience, create resistance, or confuse before they illuminate. A system trained to minimize drop-off can easily underrate the very experiences that stretch a person.

Consider education. Adaptive learning tools promise to meet students where they are, which can be helpful. Yet education is not only the efficient transfer of content calibrated to a profile. Sometimes growth begins when a student hits material that feels oddly distant, even misaligned, and then finds a way in. If the environment continuously narrows around predicted aptitude and preference, curiosity may lose one of its engines.

Consider public life. If civic information is increasingly personalized, the shared accidental encounter with unfamiliar views gets weaker. You do not need a conspiracy theory about mind control to see the issue. A society where most people receive reality through customized ranking systems is a society where common reference points become harder to sustain.

The promise of total optimization hides a very narrow image of flourishing. It imagines a person whose highest need is seamless satisfaction. Many people do want less friction, especially when life is overloaded. Fair enough. But a life built entirely around friction reduction has a way of reducing more than friction.

Resistance can be built into everyday habits

Defending serendipity does not require rejecting AI across the board. That would be shallow and, in many cases, irresponsible. If better models can detect disease earlier or help forecast environmental threats, use them. Nobody needs performative devotion to chaos.

The more interesting question is where we insist on leaving room for the unplanned. That can be personal, architectural, and institutional at the same time.

At a personal level, the simplest move is to interrupt recommendation loops on purpose. Browse a physical bookstore without a shopping list. Read a magazine from front to back instead of following links. Use search for something you do not already know how to describe well. Take a route that is not the fastest when the stakes are low. Spend some time in digital spaces that are less ranked, less optimized, and less eager to guess your next move.

These sound small because they are small. Their value lies in the habits they preserve. A person who never practices wandering becomes easier to govern by relevance engines. A person who never experiences cognitive surprise starts expecting the world to come pre-sorted.

Design matters too. Libraries are powerful not only because they store books but because they create adjacency. Shelves place unlike things near each other. A human curator can make a connection no model would prioritize. A city with mixed-use neighborhoods generates chance encounters differently than a city designed for pure throughput. Public spaces, independent shops, open web structures, and non-personalized interfaces are not quaint leftovers. They are environments that keep accidental discovery socially available.

At the institutional level, serendipity needs protection from metrics. Research labs need room for exploratory work that does not justify itself through immediate prediction. Schools need some curricular space that is not entirely adaptive. News organizations need editors who sometimes place a story before readers because it matters, not because it is likely to maximize completion rate. If every institution outsources exposure to algorithmic relevance, then relevance becomes a gatekeeper with no democratic mandate.

The stakes are larger than nostalgia

This is not an argument for fetishizing the random. Plenty of accidents are bad. Plenty of traditions that generated “serendipity” also generated exclusion, opacity, and wasted time. Human gatekeepers were never innocent. They had their own biases, blind spots, and power games. An algorithm that helps someone find a diagnosis or a community they would otherwise miss can expand freedom, not shrink it.

That is why the line cannot be drawn between machines and people in some sentimental way. The line runs between systems that keep possibility open and systems that steadily collapse possibility into prediction. AI can do both, depending on how it is built and where it is trusted.

There is also a deeper asymmetry worth noting. Prediction works best on repeated patterns. Human freedom often appears precisely when someone breaks pattern. The more a society organizes itself around inferential systems, the more unusual behavior gets treated as outlier risk, low-confidence noise, or simply low-probability irrelevance. Yet many of the things we later call growth looked statistically unlikely beforehand.

This is why serendipity belongs in the conversation about power. A world that calculates well can still become spiritually thin if every meaningful encounter arrives as a managed recommendation. Freedom is not exhausted by having many options on a screen. It includes the chance to be interrupted by something you were not seeking and could not have justified in advance.

A life larger than the profile

Sadin’s warning is useful because it points past the gadget and toward the habit of mind beneath it. Once we start believing that the best life is the best-predicted life, the rest follows quickly. We accept narrower feeds, safer choices, more confident priors, and fewer encounters that have not already been scored.

Serendipity is not anti-technology. It is anti-closure. It insists that a person is more than a behavioral summary and that a culture needs spaces where the improbable can still arrive without permission from a ranking system. That may mean tolerating a little inefficiency, a little wandering, a little mismatch between the map and the territory.

A life fully tailored to who you have been is certainly convenient. It is also a quiet way of preventing you from becoming someone else.

End of entry.

Published April 2026