Job Apocalypse: This Time Schumpeter Won’t Save Us
The most soothing sentence in tech policy is also becoming the least useful.
Every time the subject turns to AI and jobs, someone reaches for the same historical sedative: people feared the loom, the tractor, the assembly line, the spreadsheet. They were wrong then. They are wrong now. Old work disappears, new work appears, and capitalism keeps moving.
That story has one huge advantage. It lets everyone relax.
It also depends on a condition that may no longer hold: when a machine destroys one category of labor, there must be another large category ready to absorb the displaced workers. Schumpeter’s idea of creative destruction was never magic. It was a social mechanism. People lost work in one place and found it somewhere else. The somewhere else mattered.
That is the weak point in the current analogy. AI is not mainly coming for the remaining pockets of routine manual labor. It is moving straight into the sectors that were supposed to be the refuge.
The absorption problem
In developed economies, the service sector accounts for roughly 70 to 80 percent of employment. That includes law, media, design, education, finance, administration, software, consulting, marketing, customer support, and a long tail of knowledge work that keeps office districts, Zoom calendars, and graduate schools in business.
For two centuries, advanced economies shifted workers upward through a rough sequence. Fewer people farmed. More people worked in factories. Then fewer people worked in factories. More people moved into services and professional roles. Each wave of mechanization made the next layer of human work more valuable.
The current wave points in the opposite direction.
Generative models are built to perform tasks that sit inside service-sector work: drafting, summarizing, coding, translating, sorting, classifying, searching, formatting, advising, illustrating, editing, and increasingly deciding. Even when the system is not fully autonomous, it compresses the amount of labor required to produce an acceptable result.
That compression matters more than the headline claim of “replacement.” An employer does not need AI to do 100 percent of a job to reduce headcount. If a team of ten can now produce the same output with six people and a subscription, four jobs have already vanished in economic terms. The pink slip may arrive later. The surplus labor arrives first.
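The arithmetic of compression is simple enough to sketch. Here is a minimal illustration using the hypothetical figures from the paragraph above (a team of ten, tooling that lets six match their output); the productivity multiplier is an assumed value chosen to reproduce that example, not a measured one.

```python
import math

def jobs_displaced(team_size: int, productivity_multiplier: float) -> int:
    """Workers rendered economically surplus once tooling multiplies
    per-worker output. All inputs here are illustrative assumptions."""
    still_needed = math.ceil(team_size / productivity_multiplier)
    return team_size - still_needed

# A team of ten whose members become roughly 1.7x as productive
# can ship the same output with six people.
print(jobs_displaced(10, 1.7))  # -> 4
```

The point of the ceiling function is the article's own point: the four jobs "vanish" in economic terms the moment the multiplier exists, regardless of when the layoffs are announced.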
Schumpeter works when destruction in sector A creates broad demand in sector B. The problem now is that sector B is the same place being automated.
Why historical analogies feel better than they fit
Horse-drawn transport did not survive the automobile. The blacksmith did not become the center of the modern economy. Still, new industries emerged: manufacturing, logistics, road construction, retail chains, insurance, tourism, advertising, suburban real estate. The machine destroyed old routines but generated a wide ecosystem of human activity around itself.
That is what optimists are betting on again. AI will erase certain roles, they say, while creating prompt designers, model auditors, AI product managers, synthetic media editors, safety evaluators, and industries we cannot yet name.
Some of that is true. New work will appear.
The issue is scale. If a technology erases or radically thins out a giant share of clerical, junior professional, and creative labor, the replacement sectors must be enormous to keep the social contract intact. “New jobs we can’t imagine” is not an argument by itself. It is an appeal to precedent. Precedent matters, but structure matters more.
The structure here is awkward. Most of the newly created AI-adjacent roles are either temporary, specialized, or leverage-heavy. One capable operator with strong tools can supervise work that once required a small team. A law firm does not need fifty AI workflow managers. A newsroom does not need a synthetic-content department the size of its old reporting staff. A software company does not hire armies of “prompt engineers” to replace the junior developer pipeline it quietly stopped building.
The comfortable analogy breaks because the destination sector looks too thin.
The first cuts hit the middle, not the margins
People often imagine automation as a force that clears away drudgery. That was a big part of industrial mechanization. Dangerous, exhausting, repetitive work was the obvious target. Many jobs were miserable, and few mourn the loss of the factory task that destroyed knees, lungs, and backs.
This time the first wave is hitting occupations that people actively wanted.
Writers did not spend years learning the craft because they enjoyed invoices. Designers did not train their eye so a model could generate forty moodboards before lunch. Junior lawyers did not suffer through legal education in order to watch contract review collapse into a workflow pane. Teachers did not choose the profession because they dreamed of becoming compliance monitors for machine-generated lesson plans.
That is why Éric Sadin’s use of Hannah Arendt lands so hard. Arendt distinguished between animal laborans, bound to necessity, and homo faber, the maker who shapes a world through skill and judgment. Industrial automation mainly attacked the first domain. Generative AI is moving into the second.
This is not just about income. It is about what a society values when it values work.
A great deal of modern dignity is tied to the belief that education, judgment, and cultivated ability lead to useful social roles. If the market starts treating those roles as optional overhead, the damage is not merely financial. It hits identity in the joints.
You can already hear it in ordinary professional speech. Architects describe using AI tools to produce concept variations at absurd speed, then feeling oddly detached from the act of design. Developers talk about becoming reviewers of code they no longer fully compose. Marketers become editors of machine drafts. The task remains, but the locus of authorship shifts. Some people love that shift. Others experience it as a hollowing-out.
Small economic decisions add up to a social event
The public debate still sounds abstract, but the mechanism is intimate.
A founder who once paid a freelance copywriter €2,000 a month now discovers that a model subscription plus an hour of editing yields something close enough. A manager who used to hire two junior analysts realizes one senior analyst with AI can assemble the deck faster. A support team that once needed twelve people can now route simple tickets through bots and keep four humans for escalation. A translation budget shrinks because “good enough” has a new baseline.
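The founder's calculation can be made explicit. A toy comparison with mostly assumed numbers: the €2,000 retainer comes from the example above, while the subscription price, editing hours, and editor's rate are hypothetical placeholders.

```python
# Monthly cost of the human baseline vs. the machine substitute.
# Only the 2,000 EUR retainer comes from the text; the rest are assumptions.

freelancer_monthly = 2000.0   # EUR, the retainer from the example
subscription_monthly = 60.0   # EUR, hypothetical model subscription
editing_hours = 4             # hypothetical hours of human cleanup per month
editor_hourly_rate = 80.0     # EUR/hour, hypothetical

substitute_monthly = subscription_monthly + editing_hours * editor_hourly_rate
savings = freelancer_monthly - substitute_monthly

print(f"substitute: {substitute_monthly:.0f} EUR, savings: {savings:.0f} EUR")
# -> substitute: 380 EUR, savings: 1620 EUR
```

With any plausible inputs the gap is wide enough that the decision does not feel historic to the person making it, which is exactly the mechanism the paragraph describes.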
None of these decisions feels historic in isolation. No CEO announces that civilization has crossed a threshold. Procurement changes. Hiring freezes. Intern classes shrink. Junior roles stop opening. Agencies keep the same client load with fewer contractors. Revenue holds, so executives call it efficiency.
This is how labor markets actually break. Rarely with a single dramatic event. Usually through thousands of rational substitutions that add up to a broken ladder.
The ladder matters because entry-level work has always been more than cheap labor. It is how people become senior. Strip out the junior layer in law, consulting, journalism, software, design, and finance, and you do not simply save money this quarter. You interrupt the reproduction of professional classes.
That creates a strange future. Firms still need experienced people, but they invest less in creating them. Students still train for professions, but the first rung is missing. Society ends up with a premium market for proven experts, a broad market for machine-assisted mediocrity, and a growing population that never gets enough paid repetition to become excellent.
Creative work was never “safe,” only expensive
A lot of the shock around generative AI comes from a quiet assumption that creative and intellectual work was protected by its very nature. It felt too contextual, too subjective, too human to scale through automation.
That was always partly a category mistake. What kept much of this work human was not sacred uniqueness. It was cost.
Hiring a designer, writer, editor, tutor, illustrator, or junior coder used to be the cheapest reliable way to convert ambiguous goals into output. Once models became cheap, fast, and broadly competent, the economics shifted before the culture could adjust. People still prefer human excellence when they can detect it and afford it. The market, unfortunately, often settles for adequacy delivered instantly.
This is already visible across media. Music platforms fill with AI-generated tracks optimized for ambience and volume rather than authorship. Low-stakes marketing content is increasingly synthetic. Image generation erodes whole layers of commercial illustration. Video remains harder, though not for long enough to comfort anyone whose work begins with “assemble a rough cut.”
Some defenders answer that AI output is slop, and much of it is. But labor markets do not require perfection to reorganize. They only require a sufficiently large zone where quality loss is tolerated.
That zone is larger than many professions hoped.
The junior bottleneck may become the real crisis
The most underrated labor issue in AI is not total replacement. It is career formation.
A junior developer once learned by implementing features, fixing bugs, reading code, and making mistakes under supervision. Now the machine writes the boilerplate, proposes the fix, and often explains the code. That sounds helpful, and often is. The catch is that firms may conclude they need fewer juniors in the first place. Productivity gains at the top shrink the demand for training capacity at the bottom.
The same pattern appears elsewhere. Junior associates drafted memos and reviewed documents. Junior consultants built research packets and slide decks. Junior journalists summarized reports, called sources, and assembled background. Those tasks were not glamorous, but they were the gym where professional judgment was formed.
If AI captures the gym, fewer people develop the muscles.
This is one reason the “humans will move up the value chain” line rings hollow. You do not move up a chain by teleportation. You climb. A society that automates away apprenticeship while praising higher-order judgment is performing a neat intellectual trick on itself.
Even the optimistic scenario still leaves this bottleneck unresolved. Suppose AI makes experts more productive and creates demand for a smaller number of elite professionals. How do new entrants become elite if the market no longer pays for the training path that produced the previous generation?
No historical analogy answers that cleanly, because the pipeline itself is now part of what gets optimized away.
The politics are late because the story is still early
Governments are poorly equipped for this shift, partly because the official statistics lag and partly because the damage arrives disguised as productivity. Falling entry-level hiring does not produce the same alarm as a factory closure. A worker who remains employed but loses bargaining power does not show up as a headline. A freelancer whose rates collapse because clients can now threaten substitution is still technically working.
That delay benefits the firms deploying the tools. By the time the social consequences are visible, integration is deeper, contracts are rewritten, habits are formed, and whole business models depend on reduced labor costs. Regulation then appears as interference with competitiveness rather than a defense of institutional balance.
The policy menu is real but narrow. Stronger data rights and licensing could slow the extraction pipeline that feeds model development. Sector-specific rules in law, education, and healthcare could block the most reckless forms of substitution. Labor law could require transparency when AI changes role expectations or shrinks teams. Tax policy could claw back part of the windfall. Public investment could support human-intensive sectors where social value exceeds market price.
None of that is impossible. None of it is likely to emerge at the required speed without organized pressure from professions that still believe they are individually safe.
That belief is fading, though unevenly. Hollywood writers saw the issue early because authorship and bargaining were visible. Many white-collar professions remain in denial because the first phase feels like assistance. By the time assistance turns into leverage against wages, the negotiation position is worse.
Schumpeter is not wrong. He is insufficient.
It is worth being precise here. The claim is not that innovation stops creating work. It never has. The claim is that the social mechanism described by creative destruction may fail to preserve mass employment at the scale advanced economies became used to during the twentieth century.
Schumpeter described capitalism as a process that constantly reorganizes production. He did not promise that every reorganization would be politically tolerable or morally acceptable. He certainly did not guarantee that every displaced class would find an equal place on the other side.
That distinction matters. A future can be highly productive, full of new products, rich in market value, and still terrible for millions of people whose skills were made cheap at once. Share prices can celebrate a transition that households experience as downward mobility. The economy can grow while the professional middle class thins out.
There are still open questions. Adoption may be slower than boosters claim. Regulation may bite harder in sensitive domains. Consumers may continue to reward trusted humans in law, medicine, education, and media more than cost models assume. Physical work tied to care, maintenance, and place may absorb some displaced labor. AI may complement more jobs than it replaces in certain sectors.
I would like those counterforces to be stronger than they currently look.
But the structural challenge remains. When the dominant employment sector is built around language, judgment, symbolic manipulation, and pattern recognition, a technology that attacks those capacities directly is not just another productivity tool. It is a destabilizer of the main employment architecture.
The real fear is social downgrading
People can survive many things economically if they retain a sense of trajectory. What breeds anger is not only loss, but demotion.
A society that tells educated workers to “adapt” while quietly moving them from meaningful professional tracks into lower-paid service roles is not offering progress in any serious sense. It is redistributing dignity downward while concentrating capital upward. The spreadsheet will still show gains. The culture will show strain.
You can already sketch the social map. A small class of owners, model builders, and highly leveraged experts does extremely well. A larger class becomes supervisors of machine output, still employed but more interchangeable. Beneath them sits a swollen layer of precarious service labor, local care work, logistics, maintenance, and ad hoc gigs that cannot be fully automated or are not worth automating yet. Somewhere around all of this, governments debate income supports while pretending the old professional bargain can be restored with enough reskilling slogans.
That bargain may not return.
If this diagnosis is even halfway right, the central issue is no longer whether AI will create value. It obviously will. The issue is whether societies can still translate technological gains into broad-based roles that people can actually build lives around. On current evidence, that translation looks fragile. Schumpeter described the engine. He did not provide a guarantee that everyone would still have a seat once it accelerated.
End of entry.
Published April 2026