AI Isn’t Overrated. It’s Misread.
AI is either a miracle or a flop, depending on which dashboard you checked this morning.
On one side, tech executives talk as if software just discovered fire. On the other, skeptics point to surveys showing that most companies still cannot find real returns from generative AI. One widely cited figure says 95 percent of firms have seen no meaningful ROI. That number travels fast because it flatters a certain mood. It lets exhausted managers say, with some relief, that the emperor is wearing a GPU-shaped hat.
I think both camps are looking at the wrong clock.
AI is not overrated in the long run. It is misread in the short run. We keep treating it like a feature rollout when it looks much more like general infrastructure: closer to electricity than to a new app, closer to mechanization than to the arrival of social media. That does not mean instant abundance. It means a long period where the technology works, spending explodes, disappointment spreads, and the real gains remain strangely hard to see.
That pattern is not a bug in the story. It is usually how the story goes.
The mistake is about speed, not scale
When people say AI is overhyped, they are often smuggling in a timing claim. They are not really arguing that machine intelligence will leave the economy untouched. They are arguing that the dramatic promises made over the past two years have not yet appeared in GDP statistics, payroll data, or broad productivity numbers.
That part is true. The macro evidence is still modest. Some workers are clearly faster with AI. Some teams ship more. Some firms handle support, coding, or marketing operations with fewer hours. Yet the economy as a whole does not look like it has been hit by a civilization-level productivity shock. If you expected a visible step change in a handful of quarters, reality has been humbling.
Economic history says that should not surprise us.
There is a famous graph of world output that stays almost flat for centuries, then bends sharply upward around the industrial era. The machine age did not arrive as a clean quarterly beat. It arrived through a long reworking of production, transport, energy, management, labor, and capital allocation. A technology can be revolutionary and still spend years looking underwhelming from far away.
AI has that smell.
It reaches into writing, analysis, customer support, logistics, software, medicine, law, education, design, and planning. It touches the making of decisions as much as the making of things. That gives it unusually wide surface area. But wide surface area does not produce instant results. It produces a long queue of changes that institutions need to digest: new workflows, new controls, new interfaces, new job boundaries, new pricing models, new compliance routines, and eventually new expectations from customers.
The short-term argument against AI often assumes that a powerful tool should deliver value simply because it exists. That is rarely how broad technologies work. A new source of capability enters the system, and the system blunts it by running it through old habits. The first chapter is substitution. The real chapter is redesign.
Electricity only paid off after factories changed shape
The closest historical parallel is not perfect, but it is illuminating.
In the early decades after electric power became available, adoption was slow and the measured gains looked unimpressive. Roughly twenty years in, only a small minority of factories had switched over, and productivity had barely moved. That delay created the same kind of confusion we see today. If electric motors were such a big deal, why was the data so sleepy?
Because factories were still built for the old world.
Steam-era factories ran on central power. One big engine drove line shafts, belts, and pulleys across multiple floors. The building itself reflected that logic. You stacked production vertically. You organized machines around the limits of mechanical transmission. You tolerated huge amounts of friction because that was how power moved.
Early adopters often used electricity as a drop-in replacement. They swapped one power source for another while keeping the rest of the plant intact. It was like putting a modern engine into a city designed for horse traffic. The engine mattered, but the streets still belonged to the old system.
The real gains arrived when firms reimagined the factory around distributed electric motors. Once you no longer needed massive shafts and the columns supporting them, you could redesign the building. Production could move onto one floor. Material flow improved. Supervision changed. Construction got cheaper. The placement of workers and machines could follow the logic of the process rather than the logic of the power train.
That is where the huge gains came from. Not from the motor alone, but from what economists call co-invention: the complementary reinvention of processes, layouts, roles, and business practices that allows a new technology to express its full value.
This point matters because it rescues us from a fake argument. We do not have to choose between “AI changes everything next quarter” and “AI is a toy that never mattered.” A technology can be deeply important while its measured impact lags for years because organizations are still arranged around an older substrate.
A lot of AI today looks exactly like early electric power looked then.
Most AI projects disappoint for a boring reason
Companies keep grafting AI onto inherited workflows and then acting surprised when the payoff is modest.
That sentence explains more than many investment memos do.
A support team adds a chatbot but keeps the same escalation tree, the same knowledge base, the same approval bottlenecks, and the same quality controls. A software team adds code generation but leaves review norms, testing pipelines, deployment rules, and ownership boundaries untouched. A hospital adds ambient transcription but preserves the same note structure, coding process, billing pathway, and reimbursement logic. Each local improvement is real. None is large enough to transform the system around it.
This is why the “95 percent see no ROI” claim is less of a refutation than it appears. If most firms are still in the equivalent of electrifying a steam factory without changing the layout, low returns are exactly what we should expect. In some cases the result is worse than low returns. It is negative returns disguised as innovation theater: extra subscriptions, more compliance review, and yet another interface for employees to babysit.
There is also a measurement problem. Many firms ask AI to accelerate tasks that should not exist in their current form. The gain then appears as local efficiency instead of structural simplification. If a bloated workflow becomes 20 percent faster, finance might not notice much. If the workflow disappears, everybody notices. The first outcome is what most pilot programs aim for because it is easier to approve. The second is where the real value usually hides.
Think about the internet in the late 1990s. A narrow snapshot could have convinced you that it was mostly brochures on websites, banner ads, and companies setting money on fire. At the time, many of the business models that later defined the internet were either primitive or nonexistent. Search was young. Cloud software had not reorganized enterprise computing. E-commerce logistics had not matured. The protocols existed before the ecosystem that made them feel inevitable.
AI today is in a similar gap between capability and organization.
That does not mean every current spending plan makes sense. Plenty of it does not. Some firms are plainly buying prestige, not productivity. Some are licensing models they do not need because no executive wants to look asleep while competitors issue breathless press releases. But skepticism toward current deployments should be aimed at deployment quality, not at the underlying significance of the capability.
Bubbles are often how infrastructure gets built
Another mistake is to treat bubbles as evidence that the underlying technology is fake.
Sometimes bubbles form around nonsense. Sometimes they form around very real infrastructure that gets financed long before demand is ready. Railways, telecom networks, and internet build-outs all passed through periods where capital outran immediate use. Investors lost their shirts. The infrastructure remained.
That is the part worth remembering.
Railroads in their early phases did not produce smooth, rational returns. Many lines were built ahead of practical traffic. Speculation inflated expectations. Crashes followed. Yet rails still changed the shape of cities, retail, travel, hotels, farming, and industrial supply chains. A bubble did not cancel the railway age. It subsidized an overbuild that later generations learned how to use.
The same possibility hangs over AI data centers.
Right now, extraordinary amounts of capital are being directed toward chips, power, networking, cooling, and model training capacity. Some of that capacity will be misallocated. Some projects will never earn what their promoters promise. Some valuations will look absurd in hindsight. If you are waiting for a clean narrative without excess, you are waiting for a different century.
The more interesting question is what happens after the correction.
If installed compute capacity outruns near-term profitable demand, we may see a familiar sequence: prices fall, access broadens, use cases proliferate, and the infrastructure becomes ordinary enough for boring companies to build serious businesses on top of it. That is how a lot of technological revolutions stop looking revolutionary. They become cheap enough to disappear into the background.
This matters for executives because the existence of a bubble does not tell you whether to ignore the technology. It tells you to separate two claims that are often jammed together. One claim concerns financial pricing. The other concerns long-term productive capacity. A market can wildly overprice the leaders in a category while still underestimating how much the underlying capability will reshape real work.
Those are different bets. The market often confuses them. Managers should not.
The factory floor of knowledge work is about to be redesigned
The easiest way to see the coming change is to stop thinking about AI as a smart assistant and start looking at the workflow around it.
Take office communication. One person uses AI to draft a long email that sounds polished, comprehensive, and slightly haunted by the ghost of management consulting. The recipient uses another AI tool to summarize the message into three useful bullets. The productive content of the exchange was probably those three bullets all along. The rest was inherited ceremony: status signaling, hedging, formatting, and the old expectation that seriousness must arrive in paragraphs.
If that sounds trivial, it is not. Large organizations are full of knowledge-work equivalents of unnecessary factory motion. People spend time packaging information for other people to unpack. They turn decisions into documents and documents into presentations and presentations into meetings. AI makes this absurdity more visible because it can both generate and compress the same administrative fog.
The tempting move is to insert AI at both ends and call it optimization. The more radical and more plausible move is to ask why the fog still exists.
Once a system can capture intent more directly, route it to the right context, maintain state, surface needed evidence, and present only the relevant delta to the next person, whole classes of formal communication start to look like legacy overhead. That does not mean writing disappears. It means a lot of writing stops being the container through which work must travel.
The same logic shows up even more sharply in healthcare.
A seven-minute interaction between patient and clinician can trigger an elaborate downstream chain: notes, transcription cleanup, coding, claims preparation, routing to billing specialists, insurer review, and eventual reimbursement. Today many AI deployments target narrow segments of that chain. One tool drafts the clinical note. Another helps infer billing codes. Another tries to reduce denial rates. This can save time, and some of those gains are real enough to matter in an exhausted sector.
But the deeper opportunity lies elsewhere. If AI can convert the encounter into a structured, auditable representation of what happened, and if payer rules can be encoded and validated in near real time, much of the chain becomes unnecessary. The task is no longer “make every clerical step faster.” The task is “remove steps created by the old constraints.”
That sounds obvious when stated plainly. It is anything but obvious inside actual institutions, because institutions defend their seams. Departments, vendors, software contracts, compliance routines, and professional boundaries all solidify around the old process. The process starts to feel natural because so many livelihoods depend on it. Then a new capability arrives and suggests that a large part of the pipeline is there only because previous tools were weak.
That is the knowledge-work version of taking the drive shafts out of the factory.
Once you look for these hidden shafts, they are everywhere: analysts preparing decks for decisions that could be explored interactively, recruiters writing summaries of candidates for managers who could inspect the evidence directly, junior lawyers assembling first-draft patterns that models can now produce in seconds, operations teams moving data between systems because the systems were never designed to cooperate.
AI will automate some tasks. It will matter more by collapsing certain handoffs.
Learning loops create a second kind of advantage
There is another reason “we’ll wait until it matures” is a dangerous posture. AI is unusual in how directly it can improve through use.
That does not mean every deployment automatically gets smarter by magic. Plenty of systems stagnate. Some get worse when nobody maintains them. But firms that use AI seriously generate assets that late adopters do not: feedback data, evaluation sets, edge cases, human correction patterns, process maps, trust heuristics, and practical knowledge about where models fail inside their own operations.
These are not glamorous assets. They are often more valuable than the model choice itself.
A company that starts now can learn which customer intents need deterministic handling, which can be safely delegated, which prompts produce brittle behavior, where review is essential, and which parts of a workflow can be rewritten instead of accelerated. Over time, this becomes a compounding advantage. The organization is not simply consuming AI. It is teaching itself how to work with a new form of capability.
This is why the argument “the tools are imperfect, so we should wait” often backfires. Of course the tools are imperfect. Early electrical equipment was imperfect too. Early internet products were clumsy. Waiting can spare you some wasted spending, but it can also freeze the organizational learning required to capture future value. By the time the tools feel mature, other firms may already have years of workflow adaptation behind them.
Still, some caution is warranted. Not every company will build a durable moat from proprietary usage data. In many domains, models will become commoditized, vendor platforms will absorb a lot of the learning, and the value will shift to distribution, brand, regulation, or customer relationships. The lesson is not that every firm should train its own frontier system. Most should not. The lesson is that hands-on adoption teaches an organization what should change around the model, and that lesson is difficult to buy off the shelf later.
Management’s job is becoming architectural again
The practical implication is less glamorous than the sales pitch.
The crucial question for leaders is not whether employees can use a chatbot to save twenty minutes. It is which parts of the workflow were shaped by old limitations and can now be redrawn. Where are decisions delayed because information must be manually packaged? Where does compliance require transcription because systems cannot trust raw interaction data? Where do people spend time translating between tools, teams, or formats that exist only because previous software was dumb?
Those are architectural questions. They require product sense, operational judgment, and a willingness to disturb comfortable routines. They also require patience, because the organization usually needs a transition period where old and new methods coexist awkwardly. That in-between phase is expensive. It creates many of the disappointing ROI numbers people love to cite.
Yet that awkward phase is often the price of real change.
If AI were merely a better autocomplete engine, this would be a smaller story. You would measure gains task by task and move on. What makes the current moment more consequential is that AI interacts with language, judgment, search, pattern recognition, and software execution all at once. It sits close to the coordination layer of modern institutions. That is why the upside is larger than a normal tooling upgrade and why the path to that upside is slower than the market wants.
Misread, not overrated
The cleanest way to put it is this: AI is being judged as if its immediate ROI should reveal its eventual significance.
History suggests the opposite. Broad technologies often look disappointing when used inside structures built for an earlier era. Then, after enough co-invention, they stop looking like technologies at all. They become part of how work is organized. The excitement cools. The bubble may burst. The deeper transformation starts showing up after the headlines move on.
That does not guarantee every current claim will come true. Some promised applications will remain fragile, some sectors will move glacially, and some spending will age poorly. But the central error today is not excessive belief in the ultimate importance of AI. It is a shallow understanding of what that importance actually requires.
The payoff will not come from sprinkling models over existing processes and waiting for magic. It will come from redesigning the knowledge factory floor so information, decisions, and execution move through institutions with far less inherited friction. The companies that learn this early may look inefficient for a while. Later, they will look obvious.
Published April 2026