
The Era of Indistinction: When Synthetic Images Dissolve the Real

A photograph used to make a simple promise. Something stood before a lens, and light struck film or a sensor.

That promise was never pure. Photos were staged, cropped, edited, and weaponized from the start. Propaganda is older than the internet, and darkroom tricks predate Photoshop by decades. Still, the ordinary social meaning of a photograph rested on a stubborn fact: a camera had to encounter a scene. Even when images lied, they usually lied by bending something that had existed.

Synthetic image generators cut that link while preserving the visual authority that link created. That is the deeper problem. The trouble is not only fake news, or even deepfakes in the narrow sense. It is a wider collapse in the practical distinction between record and projection, between what happened and what someone wanted to appear to have happened. Eric Sadin gives this shift a useful name: the rise of the fantasmatic image. Once that enters everyday circulation at scale, public life gets stranger, meaner, and harder to hold together.

The camera stops resisting us

For most of modern media history, reality pushed back.

If you wanted an image of a politician taking a bribe, you needed a bribe, or a body double, or serious production resources. If you wanted a photograph of soldiers committing an atrocity, you needed access to a battlefield, archival footage, compositing skill, and time. There were always ways to fake visual evidence. They were just costly, specialized, and often traceable. The world retained a certain friction.

Generative image systems change the economics first, then the culture. They let anyone produce a plausible scene from language, references, and style cues in seconds. You do not start with a captured event and modify it. You summon a visually coherent event that never occurred. The result can look like a press photo, a smartphone shot, a CCTV still, a satellite image, or a family snapshot. It arrives pre-adapted to the formats that audiences already trust.

That matters because most people do not authenticate images through forensic methods. They authenticate through familiarity. Does it look like something I have seen before? Does it fit the visual grammar of evidence? Does it match my expectations about the world? Synthetic systems are very good at supplying exactly that texture of plausibility.

This is not the same as saying every generated image fools everyone. Plenty still look off. Hands go strange. Backgrounds smear. Text melts. But the trajectory is clear, and the threshold for social damage is lower than technical perfection. A fake image does not need to survive lab analysis. It only needs to travel fast enough, or hit a receptive audience, before doubt catches up.

Detection becomes an arms race nobody really wins

The standard response is technical confidence: watermark the outputs, detect the artifacts, label the provenance.

In practice, that confidence looks thin. Watermarks can be removed, metadata can be stripped, screenshots erase context, and open models ignore whatever safety standards large vendors adopt. Detection systems work until the next model update changes the artifact profile. Then they fall behind again. This is not a solvable puzzle with one clever classifier. It is an adversarial loop, and image generation is improving faster than trust infrastructure is spreading.
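To make "metadata can be stripped" concrete, here is a minimal Python sketch using Pillow. It attaches an EXIF description tag to an image, then re-saves the file the way a screenshot tool or a platform re-encode would. The file names and tag contents are illustrative; the behavior shown is Pillow's documented default of not carrying EXIF across a save unless it is passed explicitly.

```python
# Illustrative only: a plain re-save silently drops embedded EXIF metadata.
from PIL import Image

# Create a small image and attach an EXIF ImageDescription tag (tag 270).
img = Image.new("RGB", (64, 64), color="gray")
exif = Image.Exif()
exif[270] = "captured by: example-camera, 2026-04-01"  # hypothetical provenance note
img.save("original.jpg", exif=exif)

# Reopen and re-save, as a screenshot tool or platform re-encode might.
# Pillow does not propagate EXIF on save unless you pass exif= yourself.
Image.open("original.jpg").save("reposted.jpg")

print(dict(Image.open("original.jpg").getexif()))  # {270: 'captured by: ...'}
print(dict(Image.open("reposted.jpg").getexif()))  # {}
```

Running this prints the tag for the original and an empty mapping for the re-saved copy, and nothing warns either user that the provenance is gone.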

Even robust provenance systems such as C2PA (the Coalition for Content Provenance and Authenticity standard) help only if a chain of custody is preserved from capture to publication. That is valuable for newsrooms, courts, and some official channels. It is much less useful in the swamp where most images now move: reposts, private groups, meme pages, screenshots of screenshots, compressed uploads, edited clips, and feeds designed to reward velocity rather than verification.
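For readers who want the mechanism, here is a schematic Python sketch of the chain-of-custody idea behind standards like C2PA. It is not the real C2PA manifest format; it only illustrates the principle that each handler signs what it received and what it passed on. All actor names are hypothetical, and the signatures use Ed25519 from the `cryptography` package.

```python
# Schematic sketch of a signed chain of custody; NOT the actual C2PA format.
# Each handler signs a manifest committing to the image bytes it handled and
# to the previous manifest, so gaps and reorderings become detectable.
import hashlib
from dataclasses import dataclass

from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey


@dataclass
class Manifest:
    actor: str         # hypothetical identifier, e.g. "camera-1234" or "photo-desk"
    image_hash: bytes  # SHA-256 of the image bytes this actor handled
    prev_hash: bytes   # digest of the previous manifest, b"" at capture
    signature: bytes   # actor's Ed25519 signature over the two hashes

    def digest(self) -> bytes:
        return hashlib.sha256(
            self.actor.encode() + self.image_hash + self.prev_hash
        ).digest()


def sign_step(key, actor, image_bytes, prev=None):
    """One handler (camera, editor, newsroom) appends a signed manifest."""
    image_hash = hashlib.sha256(image_bytes).digest()
    prev_hash = prev.digest() if prev is not None else b""
    return Manifest(actor, image_hash, prev_hash, key.sign(image_hash + prev_hash))


def verify_chain(chain, public_keys, published_bytes):
    """Check every signature, the linkage between steps, and the final bytes."""
    prev_hash = b""
    for m in chain:
        try:
            public_keys[m.actor].verify(m.signature, m.image_hash + m.prev_hash)
        except (KeyError, InvalidSignature):
            return False  # unknown actor or forged manifest
        if m.prev_hash != prev_hash:
            return False  # a step was removed, inserted, or reordered
        prev_hash = m.digest()
    # The last manifest must vouch for the bytes actually published.
    return chain[-1].image_hash == hashlib.sha256(published_bytes).digest()


# Usage: a two-step chain from capture to publication.
camera_key, desk_key = Ed25519PrivateKey.generate(), Ed25519PrivateKey.generate()
keys = {"camera-1234": camera_key.public_key(), "photo-desk": desk_key.public_key()}
image = b"...image bytes..."
m1 = sign_step(camera_key, "camera-1234", image)
m2 = sign_step(desk_key, "photo-desk", image, prev=m1)
print(verify_chain([m1, m2], keys, image))               # True
print(verify_chain([m2], keys, image))                    # False: capture step missing
print(verify_chain([m1, m2], keys, b"screenshot bytes"))  # False: bytes don't match
```

The last call fails for exactly the reason described above: a screenshot produces bytes that no manifest ever vouched for, so the chain does not so much break as vanish.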

There is also a social asymmetry here. The burden of proof sits with the person asking for restraint, not with the person posting the image. A sensational synthetic picture arrives with emotional force on first contact. A correction arrives later, often in text, and usually to a smaller audience. Vision lands before analysis. That is why “just teach media literacy” sounds responsible but feels inadequate. You are asking individuals to perform forensic caution inside environments built for instant reaction.

The fantasmatic image enters public life

Sadin’s idea of the fantasmatic image gets at something broader than deception. These systems do not merely fabricate facts. They materialize desire.

A conspiracy believer can generate scenes that “show” the hidden cabal. A partisan can produce images of an opponent as criminal, decadent, weak, diseased, or triumphant over enemies, depending on the mood required. A jealous ex can create compromising sexual images. A chauvinist community can picture migrants as invaders. A state can mint battlefield victories before they happen. A fandom can keep a dead celebrity alive in endless new tableaux. The point is not only that these images are false. The point is that they give visual body to wishes, resentments, and delusions.

That is a cultural break. Illustration used to announce itself as illustration. Satire had conventions. Fantasy art had a frame around it. Even a manipulated photo usually retained signs of intervention and required effort. The fantasmatic image travels in the style of documentation. It looks less like expression and more like evidence.

Once that becomes ordinary, the image changes function in public speech. It no longer records a shared world so much as helps each group project its preferred world back into circulation. The picture says: this is what happened. But underneath, it often means: this is what I need to have happened.

Social media is perfectly tuned for this mutation. Feeds reward intensity, identity confirmation, and repeatable templates. An image that flatters a group’s priors does not need to convince outsiders. It needs to consolidate insiders. In that sense, synthetic images behave like visual doctrine. They help communities inhabit a felt reality even when the underlying event never occurred.

Reality loses one of its oldest civic functions

There is a temptation to say none of this changes reality itself. The world remains stubbornly there. Buildings still stand or fall. Votes still get counted. Bodies still bruise. That is true at the physical level, but it misses the political problem.

Shared reality is never just the existence of facts. It is the social capacity to recognize, contest, and refer to them together. A society works because enough people can point at the same scene and argue over its meaning without first arguing over whether the scene existed. Courts rely on that. Journalism relies on that. Democratic disagreement relies on that. The common world does not require consensus, but it does require some stable objects of dispute.

Photographic culture, for all its flaws, supported that function. A photo could be challenged, contextualized, and manipulated, yet it still carried an indexical claim. It testified that some scene met some camera at some moment. That claim was not absolute truth, but it created a starting point.

Synthetic images weaken that starting point. They allow people to bypass the world’s refusal. If reality does not provide the visual proof your story needs, you can generate it. If an event is ambiguous, you can flood the zone with more vivid versions. If nothing happened at all, you can still populate the public square with convincing remnants of an imaginary occurrence. The image no longer disciplines belief through contact with a scene. Belief disciplines the image.

That changes the emotional logic of public debate. Reality used to interrupt fantasy more often than fantasy could overwrite reality. Now fantasy can arrive with documentary texture, and many audiences will meet it before any grounded account.

Violence scales when visual proof becomes synthetic

The most immediate harms are intimate.

A teenage girl no longer needs to have taken a photo to be shamed by one. An employee no longer needs to have attended a rally to “appear” in footage from it. A teacher can be placed in a compromising scene. A judge can be made to look corrupt. Revenge porn becomes detached from cameras and attached to resentment. The victim is forced to disprove an event that never happened, often under conditions where social damage arrives faster than formal exoneration.

These are not edge cases. They reveal the structure of the new environment. Visual evidence used to have scarcity on its side. If there was a compromising image, that fact itself carried weight because producing it usually required proximity to reality. Synthetic generation removes that scarcity. Anyone with motive can produce a plausible exhibit.

At larger scales, the same logic becomes combustible. A fabricated image of a desecrated religious site, a fake atrocity photo during ethnic tensions, a synthetic shot of ballot destruction during a close election, a bogus disaster scene during a crisis: each can trigger real behavior. People mobilize around pictures. Anger organizes faster around visible injury than around textual claims. By the time verification arrives, the crowd may already have moved.

This is why the “fake news” frame is too narrow. It suggests isolated falsehoods polluting an otherwise stable information system. What is happening is closer to a change in the medium of social belief. The image becomes a portable weapon for manufacturing witness. It gives anyone the ability to say, in effect, look for yourself, even when there was nothing to see.

Public space fragments into incompatible scenes

Democracy never depended on universal agreement. It depended on a common stage.

People could watch the same footage of a protest and argue over whether police acted lawfully. They could inspect the same image of a flooded street and debate climate policy. They could dispute motives, causes, and remedies because the visible event still functioned as a shared reference point. The argument began after the scene.

As synthetic images spread, the scene itself splinters. One community sees proof of fraud. Another sees proof of repression. A third sees proof that the whole event was staged. Each possesses visuals that fit its narrative, and each can circulate them inside platforms optimized for homophily. Public space starts to resemble a hall of mirrors where every faction carries its own archive of persuasive non-events.

That does not mean everyone becomes equally gullible. It means the cost of basic agreement rises. Institutions must spend more time establishing that something happened before they can debate what it means. Journalists shift from reporting events to validating artifacts. Courts face evidentiary pressure from media that looks immediate but lacks provenance. Ordinary citizens absorb a chronic uncertainty that corrodes trust even when an image is real.

Paradoxically, generalized fakery can also protect actual wrongdoing. Once people know convincing images can be generated, genuine footage becomes easier to dismiss. The liar’s dividend gets stronger. A politician caught on video can call it synthetic. A militia filmed committing crimes can claim manipulation. When false images become common, authentic images lose force by association.

So the damage runs in both directions. Fake visuals can create imaginary events, and the existence of fake visuals can dissolve confidence in real ones. The result is not simply deception. It is ambient evidentiary weakness.

Trust moves away from appearance and toward custody

The old habit was simple: see image, infer event.

That habit is becoming expensive. The credibility of an image can no longer live primarily inside the image. It has to come from the path around it: who captured it, where it traveled, what devices signed it, which newsroom verified it, which witnesses corroborated it, which institution is accountable for publishing it. In other words, trust shifts from visual plausibility to documented provenance.

This will feel unsatisfying because it asks for more mediation at a moment when digital culture trained us to expect immediacy. But appearance is no longer enough. If synthetic systems can mimic the surface features of documentary truth, then documentary truth needs stronger scaffolding than surface.

There is no clean return to an earlier visual order. Cameras themselves are becoming computational systems, editing tools are bundled into capture, and generation is blending with photography in consumer apps. The categories will keep blurring. That is exactly why the social response cannot rest on nostalgia for a pure photograph that never really existed. It has to build institutions and habits that survive impurity.

That includes better provenance standards, yes, but also slower editorial norms, legal remedies for synthetic sexual abuse, evidentiary protocols in courts and elections, and public education that teaches people to ask for source history rather than merely scan for weird fingers. None of this restores innocence. It gives society a way to assign credibility without pretending the eye can do the job alone.

The shared world becomes harder to hold

The deepest loss is not technical. It is civic.

A society can endure conflict, ideology, and propaganda. It has a harder time enduring the erosion of the visible world as a common reference. When any grievance can summon its own images, when any fantasy can dress itself as documentation, the basic act of showing one another what happened starts to fail. That weakens journalism, law, and democratic argument at the same time because all three depend on scenes that can be publicly pointed to and collectively examined.

The challenge now is not to rescue the purity of the image. That ship sailed long ago, probably before most of us were born. The challenge is to preserve shared procedures for establishing reality when images themselves can no longer bear that burden. Without those procedures, the loudest faction gets its own reality with pictures attached, and everyone else is left arguing inside a theater where the props no longer have to come from the world outside.

End of entry.

Published April 2026