
Regulating AI Through First Principles: Beyond Cost-Benefit Analysis

The default question in AI policy sounds sensible until you look closely: do the benefits outweigh the risks?

That question already assumes too much. It assumes the relevant harms can be measured. It assumes gains and losses belong on the same scale. It assumes some forms of damage can be compensated by enough convenience, enough growth, or enough institutional enthusiasm. For software that helps route delivery trucks, maybe that logic is serviceable. For systems that reshape judgment, creativity, work, and everyday human contact, it quietly rigs the outcome.

Éric Sadin’s proposal cuts across that whole habit of mind. Stop balancing. Start drawing lines. If an AI system violates a fundamental principle of human life, it should not be deployed, even when its commercial case looks excellent and its safety metrics look clean.

That is a much more radical position than most current regulation takes. It is also closer to how serious societies already handle some other domains. We do not legalize forced labor because it boosts productivity. We do not approve non-consensual medical experiments because the data could be useful. Certain violations are disqualifying. Sadin’s wager is that AI has reached the point where we need that kind of reasoning again.

Cost-benefit analysis flatters deployment

The current regulatory style loves the language of mitigation. A model may discriminate, but perhaps we can audit it. A chatbot may mislead students, but perhaps we can add disclosure. Workplace automation may intensify surveillance, but perhaps we can publish a transparency report and call it governance. The frame rarely asks whether the system should exist in that form at all.

Part of the problem is arithmetic. Benefits are usually legible to institutions. They show up as saved labor hours, faster processing times, improved customer throughput, or reduced headcount. A chief financial officer can put them in a slide deck before lunch. The harms are slower and harder to count. Dependency does not arrive as a line item. The erosion of writing ability does not show up in quarterly earnings. The thinning of human contact in schools, hospitals, banks, and public services is real, but it is difficult to express in a diagram of colored boxes.

This asymmetry matters because regulation tends to inherit the categories of the people already building and buying the systems. When public commissions are filled mainly with industry executives, technical founders, consultants, and investors, the conversation drifts toward optimization. In France, a high-profile national commission on AI in 2023 was criticized for precisely that imbalance. You can call this capture if you want. You can also call it a narrower problem: a room full of people trained to notice efficiency will notice efficiency first.

Cost-benefit analysis also makes every red line negotiable. Once the question becomes “how much harm is acceptable for how much upside,” every principle is translated into a discount factor. Privacy becomes a trade. Autonomy becomes a trade. Creativity becomes a trade. Eventually even the texture of daily human relations gets treated like congestion in a network that needs smoothing.

The language sounds moderate. Its effect is permissive.

First principles change the starting point

Sadin proposes a different test. Instead of asking whether an AI system delivers net positive value, ask whether it violates any non-negotiable condition of human life. He names five: freedom, dignity, integrity, creativity, and sociability.

The point is not to produce a prettier ethics document. The point is to change the burden of proof. Developers should have to show that a system does not cross these lines. If it does, the discussion ends there. No amount of convenience rescues it.

This is easy to dismiss as philosophical theater until you notice how many current systems would be seen differently under such a standard. Take algorithmic management in warehouses, call centers, and delivery platforms. Under the usual frame, the debate concerns efficiency gains, compliance rates, and whether monitoring is “proportionate.” Under Sadin’s frame, the first question is simpler: does this system constrain human action so tightly that the worker becomes a relay for machine instructions? If yes, then the violation is not a side effect. It is the design.

That move matters because it recovers something modern tech policy often forgets. Law does not exist only to manage externalities. It also exists to define what a society refuses to instrumentalize.

The five principles bite in concrete cases

Freedom sets a limit on machine command

Freedom, in Sadin’s sense, is not the abstract freedom of political theory. It is the practical ability to act without being steered, ranked, or coerced by automated systems that structure every move.

A navigation app suggesting routes is usually not the problem. It proposes. The person can ignore it. The dynamic changes in workplaces where every action is timed, scored, and corrected by software. Amazon became the emblem of this style because the logic was so visible: the system optimizes pace, path, and breaks until the worker’s margin for independent judgment narrows toward zero. The human body remains in the warehouse, but the tempo belongs elsewhere.

Dignity rejects reduction to measurable output

Dignity sounds lofty until you see how mundane its violation can be. It is present when employers assess a person as more than a dashboard of response time, emotional tone, and completion rates. It disappears when the system is built on the premise that context is noise and only behavior counts.

Call-center software that monitors voice sentiment in real time sits right on this fault line. On paper it “supports quality.” In practice it can turn speech into a compliance exercise, where workers perform emotional calibration for a model that cannot understand why a conversation is difficult. The system does not merely assist labor. It recodes the person as a controllable unit.

Integrity guards against manufactured dependence

Integrity is the most interesting principle because it reaches inside the user. An AI system can preserve formal freedom while still weakening the person who relies on it. Emotional companion apps are a stark example. Products like Replika do not just answer questions. They invite attachment. They simulate care, responsiveness, and continuity. For some users, especially isolated ones, that can become emotionally consequential very quickly.

The standard defense is familiar: some connection is better than none. That may be true in particular cases, and it should make anyone cautious about simple moral condemnation. But Sadin’s point lands anyway. If the business model depends on cultivating dependency, and if the system progressively substitutes itself for ordinary relational life, then the product is not merely comforting. It is reorganizing vulnerability into a recurring revenue stream.

Cognitive dependency is quieter but just as important. If students hand off drafting, summarizing, and ideation to a model day after day, the damage is not a dramatic collapse. It is a gradual relocation of effort. The muscle analogy is slightly overused, but it fits. A faculty that is no longer exercised tends to weaken.

Creativity deserves more than copyright law

Most arguments about generative AI and art get stuck in training data, licensing, and attribution. Those are valid issues, but Sadin is aiming at a deeper concern. What happens when creation itself becomes a service layer?

If a model can produce plausible songs, essays, illustrations, and scripts on demand, institutions will be tempted to route around the human work of making. Schools may accept assisted writing as a practical compromise. Marketing teams may replace junior designers. Publishers may decide “good enough” is good enough. None of this requires a dramatic anti-human ideology. It only requires enough organizations choosing speed over formation.

The hard edge of Sadin’s principle is that some generative uses should be prohibited not because they are legally messy, but because they encourage cultural atrophy. That claim will strike many people as overbroad. I am not convinced every generative art tool belongs in the same category. But the core warning is serious. A society can lose capacities by outsourcing them too efficiently.

Sociability protects the human encounter

Sociability is where automation advocates often reveal what they really value. Ask whether a banking chatbot should replace branch staff, or whether an AI symptom checker should replace first-line clinical conversation, and the answer is usually framed in terms of cost and scale. A system that handles millions of interactions cheaply looks irresistible.

Yet some domains are built around more than information transfer. A teacher does not only provide content. A doctor does not only classify symptoms. A bank advisor does not only move forms through a pipeline, though one might be forgiven for thinking some banks aspire to become very expensive autocomplete. Human presence has interpretive value. It catches hesitation, confusion, shame, and all the things people do not say cleanly into a text box.

Sadin’s principle does not forbid software support. Triage tools, scheduling assistants, and drafting aids can expand access without replacing the encounter itself. The line is crossed when institutions use AI to eliminate the relation rather than sustain it.

Education makes the contrast impossible to ignore

Schools are where the difference between the two frameworks becomes vivid.

Under a standard cost-benefit model, classroom AI looks promising. It offers personalized feedback, endless availability, and lower pressure on overstretched teachers. The risks are acknowledged, then translated into guardrails: anti-cheating policies, disclosure requirements, maybe teacher training.

Under a principles-first model, the analysis sharpens. If a writing assistant causes students to rely on generated drafts for core cognitive tasks, integrity is implicated. If the system becomes the default source of explanation and feedback, sociability is implicated because teacher-student exchange is thinned out. If students stop practicing the slow work of composing, revising, and finding their own language, creativity is implicated too.

That does not mean every educational use should be banned. A dyslexic student using assistive tools is not the same case as a class outsourcing essay writing. Translation support for newcomers is not the same case as replacing instruction with a bot. Principles do not remove judgment. They change what judgment is for. Instead of optimizing adoption, judgment has to defend the capacities education exists to develop.

Principles need institutions that can say no

There is a practical objection here. Absolutes can collide. A rural health system with too few clinicians may use automation to preserve some access that would otherwise disappear. A disability support tool may increase autonomy even if it also creates reliance. If regulation becomes purely categorical, it risks missing these realities.

But that is not a reason to return to the old arithmetic as if nothing happened. It is a reason to build institutions capable of distinguishing assistance from substitution, support from dependency, and convenience from disfigurement. That work will need lawyers, philosophers, labor experts, teachers, clinicians, and ordinary citizens who know what these systems feel like when they enter daily life. Engineers matter in this conversation, but they should not be allowed to define its perimeter.

What Sadin offers is less a complete legal code than a reset of moral priority. He is saying that a society should decide, in advance, which human capacities and relations are off the bargaining table. Once that is clear, technical debate becomes narrower and healthier. It asks how to comply with the line, not how to price its erosion.

The point is not safer AI but a different boundary

Most AI regulation still behaves as if the main challenge were making deployment responsible enough to continue. Sadin is arguing for something harsher and, in a way, more lucid. Some systems should never be normalized because they ask us to trade away the very faculties and relations that make a shared human world livable. If regulation cannot say that clearly, it will keep sounding careful while approving the slow replacement of things we only notice when they are already harder to recover.


Published April 2026