AI Doesn’t Write Badly. It Writes Dead.
A marketing director clicked “Generate,” and the machine delivered a climate campaign in half a minute. The slogan was polished, warm, and instantly usable. Someone in the room smiled and said it was perfect because it “spoke to the heart.” The line was not wrong. That was the unsettling part. It was grammatically clean, emotionally familiar, and completely lifeless.
That reaction gets dismissed as aesthetic snobbery. It is not. It points to a deeper confusion that now runs through companies, schools, and public institutions. We keep mistaking technical efficiency for human value. An AI system can write fast, imitate tone, and smooth away friction. None of that means the writing carries thought. Sometimes it means the opposite.
Éric Sadin has a phrase for this: thanatos logos, a language of death. He does not mean that machine-generated text is evil or that it always lies. He means it drains language of the unstable, searching quality that makes expression more than output. It removes the tremor that tells you a person is trying to say something they did not fully know before speaking.
Across 17 AI integration projects in France, the same pattern kept surfacing. Whenever organizations let AI replace the human process of expression, they also weakened their ability to think beyond what was already familiar. The loss was quiet. No dramatic collapse, no science-fiction takeover. Just a narrowing of language, then a narrowing of judgment.
Fluency is not meaning
Human language is not a probability table with better punctuation. It is entangled with hesitation, memory, fear, status, pride, and the little bursts of courage required to say something unpopular in a meeting. If you have ever sat in a strategy workshop, you know the moment. Someone says, “Maybe we should…” and pauses. That pause is not noise in the system. It is the system. It marks the point where experience collides with a changing reality.
A model does not inhabit that collision. It predicts the next likely phrase from patterns it has already absorbed. It does not struggle toward an idea. It arranges language that statistically resembles one. That distinction sounds philosophical until it hits a balance sheet.
In one company, a team had to write to customers after a technical failure. The human draft took 23 minutes. People rewrote sentences to acknowledge anger without sounding defensive, and they referred to a promise the company had made in 2022. The AI draft arrived in 47 seconds. It used the standard language of regret, but it had no memory, no relationship, no trace of the shared history between company and customer. The unsubscribe rate rose by 12 percent.
That is the trap. Smooth language often looks safer than situated language. It feels less risky because it contains fewer edges. Yet those edges are where trust often lives. A sentence like “we understand your disappointment” can be perfectly acceptable and still sound like it was pulled from a customer service vending machine. Readers feel the template through the screen.
This matters far beyond marketing copy. Companies use AI to draft HR protocols, internal memos, investor notes, and executive summaries. The resulting documents are usually competent in the narrowest sense. They are also often dehumanizing because competence is being measured at the level of form rather than relationship. The machine can produce a reflection of an organization’s existing language. It cannot produce a living direction for that organization.
Conformity scales faster than insight
Once you see this, another problem comes into focus. AI does not just flatten expression. It normalizes consensus. It writes from what has already been written, which means it tends to reward the language that became common enough to leave a statistical trail.
Consider the standard ESG phrase: “We value diversity as a driver of innovation.” You can find versions of that sentence all over annual reports. It is not false. It is also content-free unless the company explains how disagreement improves decisions, where friction appears, and what trade-offs follow from hiring people who think differently. Without those specifics, the sentence is branding mist.
AI is exceptionally good at producing that kind of mist. If most firms in a sector describe themselves with the same approved vocabulary, a model will echo it with impressive speed. The result is a document that sounds current, responsible, and strategically vacant. It gives the appearance of alignment while hiding the tensions that actually define the business.
That is why AI-generated strategy writing can be more dangerous than obviously bad writing. Bad writing at least triggers resistance. People push back, ask questions, and force revision. Generic fluency slips through because nobody wants to slow the meeting down and say, “This sounds fine, but what are we actually saying?”
A review of 132 internship reports produced with ChatGPT by students at a leading Paris business school makes the point in miniature. Seventy-eight percent reused the same core expressions, including “proactive,” “synergy,” and “disruption.” None described a concrete failure or unresolved tension inside the company. More than a third contained factual errors because the model invented figures. Recruiters noticed not because the writing was weak, but because it was too perfect in the same way every other report was perfect.
That sameness has a strategic cost. If a company asks AI how to describe a low-cost strategy, it will return the language of existing low-cost winners. It has no lived encounter with the contradiction between budget discipline and premium service. It cannot feel the awkwardness of trying to do both. Yet that awkwardness is often where differentiation begins. Innovation rarely arrives in the vocabulary of a settled consensus.
What organizations lose when they outsource expression
The easiest story here is that AI makes people lazy. That is too simple and a little smug. The deeper issue is that outsourcing writing changes the function of writing itself. It turns language from a tool of discovery into a tool of completion.
When people write seriously, they often discover what they think halfway through the sentence. They find the contradiction, notice the weak claim, or realize the real issue is not operational at all but political. Anyone who has drafted a difficult email knows this. The first version says one thing. The third version admits another. By the fifth, the message has become less tidy and more true.
Generated text removes much of that friction. It offers finished language before the organization has done the work of clarification. Executives love this because it speeds up production. Then they wonder why reports feel aligned while decisions become strangely hollow.
That is also why AI-generated communication tends to erase uncertainty. A model has no reason to preserve the unstable middle where a real judgment forms. It smooths over doubts because the language of confidence is more common in polished corporate text than the language of honest ambiguity. Yet strategy depends on making uncertainty visible. If every memo sounds resolved, unresolved problems do not disappear. They just move downstream until they become expensive.
Keeping the human struggle in the loop
The answer is not to declare a war on AI. Organizations that try to ban it entirely will look principled for a week and inefficient shortly after. The more useful move is to protect the parts of the process where meaning is still being made.
One health insurer tested this in a simple way. Teams were asked to handwrite responses to the most delicate customer messages, especially reimbursement refusals, before using any tool. In A/B tests, the handwritten drafts scored 47 percent higher on the insurer's empathy measures. The gain did not come from sentimentality. The physical slowness forced people to stay with the emotional reality of the situation rather than reaching for stock phrasing.
Some teams now score important documents on three markers that AI tends to underproduce: visible contradictions, references to recent and specific events, and explicit uncertainty. A report that contains none of those signals is not automatically machine-written, but it often lacks human depth either way. The test is useful because it shifts the quality standard. Instead of asking whether the prose is polished, it asks whether the prose reveals actual thinking.
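For the technically inclined, a minimal sketch of what such a three-marker check could look like, assuming crude keyword proxies. The cue lists and the depth_markers function below are hypothetical illustrations, not any team's actual rubric:

```python
import re

# Hypothetical proxies for the three markers; a real team would tune
# these lists to its own documents and vocabulary.
CONTRADICTION_CUES = ["however", "on the other hand", "tension", "trade-off"]
UNCERTAINTY_CUES = ["we do not know", "unclear", "might", "unresolved"]
# Crude proxy for "recent and specific events": years or relative dates.
EVENT_PATTERN = re.compile(r"\b(20\d{2}|last (?:week|month|quarter))\b")

def depth_markers(text: str) -> dict:
    """Count rough signals of contradiction, specificity, and uncertainty."""
    lowered = text.lower()
    return {
        "contradictions": sum(cue in lowered for cue in CONTRADICTION_CUES),
        "specific_events": len(EVENT_PATTERN.findall(lowered)),
        "explicit_uncertainty": sum(cue in lowered for cue in UNCERTAINTY_CUES),
    }

# A zero on all three is not proof of machine writing, only a prompt
# to ask whether the document reveals actual thinking.
```

The value of such a check is not the score itself but the conversation it forces when a polished report comes back with zeros across the board.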
There is also a strong case for creating spaces where no one can lean on generated language at all. One industrial group set aside an hour each week for meetings without AI assistance. The first sessions were painfully quiet. People had become accustomed to outsourcing first drafts of thought. Over time, the silence broke, and most of the group’s real strategic pivots began emerging from those meetings. The point was not nostalgia for pre-automation life. It was to rebuild a capacity the organization had started to lose.
Language is part of the economic model
This is not a moral panic about authenticity. It is a business issue hiding inside a writing issue. If customer communication becomes generic, churn rises. If internal documents erase conflict, bad decisions travel farther before anyone challenges them. If recruiting pipelines fill with machine-polished sameness, managers lose the signals that reveal judgment, maturity, and intellectual risk.
One company ran the math after seeing AI-generated incident messages alienate customers. A lost customer was worth roughly €150. Weak machine-written messages were costing the company 12 customers for every 1,000 sent. Human drafting was slower, but the return was obvious once the organization measured the right thing.
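As a back-of-envelope sketch, here is the arithmetic those two figures imply; only the €150 customer value and the 12-per-1,000 loss rate come from the case, the rest is presentation:

```python
# Churn cost implied by the company's two figures.
value_per_lost_customer = 150        # EUR, the company's estimate
customers_lost_per_1000_msgs = 12    # per 1,000 machine-written messages

churn_cost_per_1000 = value_per_lost_customer * customers_lost_per_1000
print(f"EUR {churn_cost_per_1000} lost per 1,000 messages")  # EUR 1800
```

Whether slower human drafting beats €1,800 per thousand messages depends on the labor cost per draft, a figure the case does not give. The point is that the churn side of the ledger was invisible until someone computed it.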
That is the shift many firms still have not made. They evaluate AI writing by speed, consistency, and surface quality because those metrics are easy. They should also be measuring erosion: how much relationship disappears, how much differentiation fades, and how much uncertainty gets buried under clean prose. The machine may still be worth using after that accounting. It just stops looking like a replacement for thinking.
The practical rule is simple enough to survive contact with real work. Let AI react to a draft. Let it suggest alternatives, condense a page, or expose clichés you missed. Do not let it produce the first meaningful version of what your organization is trying to say. If the machine writes before you have wrestled with the sentence, it will usually give you language that sounds complete before your thought is complete.
Published April 2026