When Your Job Title Stops Explaining Your Value

For most of the software era, job titles did more than describe tasks. They explained delays, budgets, status, and self-worth. “I’m the engineer” meant you controlled one gate. “I’m the product manager” meant you controlled another.

That arrangement lasted long enough to feel natural. Front-end, back-end, design, research, project management, QA: each discipline came with its own language, rituals, and veto rights. You did not just have a job. You had a stable sentence about who you were and why other people had to wait for you.

Then tools like Claude Code showed up and made that sentence wobble.

Paul Ford, who has spent decades writing about software and building companies around it, recently admitted something many people still avoid saying aloud. He can no longer reassure people, in good faith, that the old comfort line will hold: that human skills will obviously remain valuable in the familiar way. The pressure is no longer confined to repetitive tasks or junior work. It is hitting the categories themselves.

That emotional shock is still underestimated. Before economics, before policy, before the usual panel discussion about regulation, there is a simpler event taking place inside people’s heads: the thing that used to answer “what do you do?” is getting harder to trust.

Job titles were doing more work than we admitted

A job title was a compression algorithm for modern work. It told employers what to pay you, colleagues when to defer to you, and clients why the invoice looked the way it did. It also told you which frustrations counted as normal.

If software took weeks, that made sense because different specialists had to coordinate. If it cost a lot, that made sense because expertise was scarce and distributed. If meetings dragged on, that made sense because every function had its own incentives and vocabulary. The entire apparatus of digital work ran on those boundaries.

This is why the current shift feels stranger than a normal tooling upgrade. A better IDE does not threaten the identity structure of a company. A chat interface that can sketch the PRD, write the code, generate the copy, draft the tests, and propose a launch plan starts to do exactly that. It shortens the distance between intention and artifact, which means it also shortens the distance between one role and the next.

People feel this immediately. They may not use the language of identity formation or professional signaling. They still recognize the sensation. The workday no longer matches the label on the badge.

Everyone thinks the value will flow to their lane

One of the more revealing patterns in tech right now is how each profession imagines AI will finally let it remove the others.

Product managers see a path to building without engineers. Engineers see a path to building without product managers. Designers see a path to shipping without waiting on either side. The fantasy is always the same: the machine collapses the messy middle, and my discipline becomes the one that supervises the future.

You can see why this is appealing. Each function already believes it understands something the others underestimate. PMs believe they understand the problem. Engineers believe they understand feasibility. Designers believe they understand users. AI appears to grant each group a temporary passport into the others’ territory.

But that does not mean value lands neatly in one profession’s lap. It may simply mean the old borders stop mattering as much.

Picture an operations manager at a small company who needs an internal tool for tracking inventory exceptions. A year ago, that request might have bounced across planning, design, engineering, and QA. Now the ops manager opens a model, describes the workflow, iterates for an afternoon, and gets a rough application that mostly works. It is ugly in places. The authentication is shaky. The schema would make a senior engineer wince. Yet the warehouse team can use it on Monday.

What does the company remember from that story? Usually this: the thing got built fast, and the person closest to the pain got it done. Very few people ask which specialist would have supervised it better. Fewer still care about the purity of the process. The social reward moves toward the sentence “I made this,” even when a model did much of the making.

That shift matters because it strips prestige from coordination, not only from production.

Supervision is a weaker moat than it sounds

A lot of threatened professions retreat to a reassuring claim: fine, the model can produce outputs, but experts will remain indispensable because someone has to judge quality.

There is truth in that, especially in regulated fields, complex infrastructure, and high-stakes systems where failure is expensive and traceability matters. Hospitals, banks, aircraft software, tax platforms, and anything touching safety will not run happily on vibes and a green “looks good to me” button.

Still, much of ordinary digital work never operated at that standard. It lived in the world of good enough. The page should load. The report should run. The intake form should submit. The internal dashboard should stop breaking every Friday. Businesses paid specialists because specialization was the practical route to reliable enough outcomes.

If a model now gets them to that threshold faster, the aura around review starts to thin. Review turns into another delay people hope to route around. That does not erase expertise. It changes where expertise can still charge rent.

The GLP-1 analogy gets at the psychic part

Ford reached for an analogy that sounds odd until it clicks. He compared this moment to the arrival of GLP-1 drugs like Ozempic and Mounjaro, which changed how many people think about weight, appetite, and responsibility.

For decades, an entire belief system sat around obesity. The dominant explanations mixed biology, willpower, discipline, stigma, and medicine in unstable proportions. Then these drugs arrived and, for many people, made something long treated as a moral and behavioral struggle suddenly responsive to a different mechanism. The result was not just weight loss. It was the collapse of a narrative.

That is the useful part of the analogy. A whole explanatory framework can fall apart faster than people can emotionally process it.

AI is doing something similar to professional identity. For years, people understood software work through a familiar story: complexity requires specialists; specialists justify timelines; timelines justify budgets; budgets justify titles; titles explain value. When a model can blow holes through that chain, the disorientation is bigger than “my tool changed.”

The map that linked effort to worth starts to blur. People who spent years becoming fluent in one piece of the process are forced to watch the process itself become negotiable.

Even when the outcome is positive, that can feel awful. Relief and grief are not opposites here. A person can enjoy the new leverage and still mourn the loss of the framework that made their expertise feel legible.

New horizons do not cancel the loss

Dan Shipper offered a useful metaphor in the same conversation. Before better navigation, people imagined the ocean had an edge, and sailing too far meant falling off the world. In practice, the horizon kept receding. There was more territory.

That captures one part of the AI experience. Many encounters with these tools feel like reaching a limit and then discovering another field beyond it. What looked like a cliff becomes a passage. The model writes the boilerplate, then the tests, then the migration, then a passable design system. Each time you expect the trick to end, the horizon moves.

But the metaphor also flatters discovery. It makes expansion sound clean.

Real horizons come with wreckage. New routes reorder trade, language, labor, and power. AI has the same double character. It opens territory while degrading old landmarks. Knowing there is another horizon does not make it easier to watch your accumulated status turn into default functionality.

This is where a lot of commentary misses the human point. It treats fear as mere failure to appreciate opportunity. Sometimes fear is a reasonable response to watching the market recode your hard-won competence as a feature.

The camps forming around the tool

The reactions now settling across tech and adjacent fields are recognizable.

The inevitability crowd

These are the people who greet every advance as evidence that machine intelligence will subsume everything soon. They used to sound almost theological about it. Lately many have become more practical, which may have less to do with intellectual maturity than with the discovery that selling picks and shovels is more profitable than preaching the singularity.

Their main contribution is emotional pressure. They turn adaptation into a moral test of whether you have accepted destiny.

The preservationists

Writers, artists, and other cultural workers often take the opposite stance. Keep this stuff away from the work. Keep the prose human. Keep the process slow. Keep the room unoptimized.

This instinct deserves more respect than it usually gets. Not every human practice needs to be reorganized around speed and output. Some forms of value depend on friction, attention, and the knowledge that another person actually made the thing.

At the same time, protected zones are hard to maintain when the surrounding institutions change. Publishers, clients, schools, and audiences absorb the tools around them. Even a purist enclave still has to negotiate with a world that has become cheaper, faster, and less patient.

The pragmatic adopters

Nonprofits, scientists, climate groups, and small mission-driven teams often sound the clearest. They see the risks. They also see the leverage. If a model can compress administrative drag, summarize a dense literature review, or help a small staff build a usable internal tool, they want the gain.

There is something healthy in that posture. It is neither devotional nor allergic. It treats AI as a force multiplier that must be handled carefully, not as salvation or pollution.

The protective educators

Teachers and administrators often frame the issue as a defense of thinking itself. Students should learn to reason before they outsource cognitive labor. That concern is serious. A student who never learns to structure an argument will struggle long after the chatbot tab closes.

Still, the old educational model has a weak spot. It often confuses thinking with the production of neatly formatted abstractions. Essays, summaries, take-home reflections, generic explanations, and polished response papers were already imperfect proxies for thought. They were just easy to assign and grade.

Models expose that weakness. If students can generate the proxy on demand, schools may need to invest more in forms of learning that are harder to fake: close reading in discussion, oral defense, field observation, experiments, revision with traceable reasoning, and contact with the stubbornness of the physical world. Writing still matters, because writing can sharpen thought. The point is that the sharpened edge has to be visible somewhere besides the final paragraph.

Human adaptation is the slow system

The most unnerving part of this moment is how ordinary the interface feels. For decades, computing promised a simple fantasy: describe what you want and get it. That fantasy drove everything from spreadsheets to low-code tools to voice assistants. Most of those systems delivered it poorly.

Now the promise works often enough to change behavior. You type into a box and get a draft, a script, a mockup, a workflow, sometimes even a product.

The technical shock is obvious, but the social shock is deeper. Teams can adopt a model in a week. They cannot rebuild career ladders, compensation logic, training paths, and status hierarchies at the same speed. So organizations drift into an odd split reality. The org chart still says product, design, engineering, support. The actual day involves everyone crossing lanes constantly.

A designer writes SQL. A marketer builds an internal app. An engineer drafts sales copy because the model makes the first pass painless. A founder stops hiring for a role because the gap feels patchable with prompts and some patience. The categories remain on paper while the lived work mutates underneath them.

That mismatch is where a lot of anxiety comes from. People are judged by labels that no longer describe what fills their hours.

Value moves closer to consequences

If the old categories explain less, value does not disappear. It relocates.

It tends to move toward problem selection, domain knowledge, judgment, trust, integration, and accountability. Those words can sound suspiciously abstract until you put them back into real settings. In a hospital scheduling system, the scarce knowledge is often not the code. It is understanding legal constraints, staffing politics, patient safety, and what kinds of failure are tolerated by nobody. In a factory workflow, the leverage may lie in knowing downtime patterns, vendor behavior, safety procedures, and union agreements. A model can generate software. It cannot absorb blame or negotiate the consequences of being wrong.

That shift will not comfort everyone. In fact, it creates a serious problem for early-career workers. Routine tasks used to be how people learned the craft. You wrote the simple component before handling the architecture. You summarized the meeting before shaping strategy. You fixed the easy bug before owning the system. If those apprenticeship tasks evaporate, fields lose their on-ramp.

This is one reason the identity crisis is not sentimental. It is structural. A profession is not only a set of current experts. It is also a path by which novices become experts. AI is scrambling both at once.

Naming the loss is part of seeing the change

A lot of people still talk about this transition as if the emotional response were secondary, almost embarrassing, a temporary failure of nerve before the efficient future arrives. That misses the shape of what is happening.

When someone says, “I’m not sure what my role means anymore,” that is not evidence they are behind. It may be evidence they are paying attention. The categories that organized digital work are becoming porous faster than institutions can replace them. Many of the old assurances no longer fit cleanly in the mouth.

That does not mean every title disappears or every craft collapses into prompt paste. Large systems still need depth. Critical work still needs discipline. Plenty of organizations will keep specialized roles because accountability requires them. But the deeper shift is already underway. The sentence people used to use to explain their value is losing some of its force, and many replacements still sound thin.

Any serious conversation about AI and work should begin there, with the recognition that disorientation is part of the event itself. It is what it feels like when a machine does not just change the workflow, but reaches into the story you told about why your place in that workflow mattered.

Published April 2026