Most of the AI Boom Hasn’t Happened Yet
The loudest argument in AI is about model progress. Will the next release reason better, code faster, and make fewer mistakes? Arvind Jain thinks that debate points at the wrong bottleneck. His claim, coming from the middle of enterprise deployments, is almost absurd on purpose: “We’ve not even used 1% of current capabilities of these models.”
Take the number as a provocation, not a measurement. Nobody can audit a universal “1%” across every task and every company. But as a way to describe the gap between technical capability and actual business use, it lands hard. Most firms are still treating advanced models like polished demos rather than production infrastructure.
The bottleneck moved
For two years, the industry has behaved as if model quality were the main variable that matters. That made sense when the systems were brittle, expensive, and obviously limited. Each new release visibly changed what was possible. The center of gravity has shifted since then.
Today, many organizations already have access to models that can summarize dense documents, extract structured information, draft competent prose, translate across formats, answer questions over long context windows, and generate code that is often good enough to accelerate real work. The gap is no longer between impossible and possible. It is between possible and deployed.
That difference matters more than it sounds. A technical capability sitting in a chatbot is not the same thing as that capability sitting inside an approval flow, a support queue, a legal review process, or a sales workflow with the right permissions and data attached. Businesses do not buy intelligence in the abstract. They buy less time spent searching, fewer manual handoffs, and faster decisions with acceptable risk.
The market data already hints at this split. Stanford’s 2025 AI Index reports that 78% of organizations say they use AI, up sharply from 55% a year earlier. That sounds like saturation until you put it next to BCG’s finding that 74% of companies are still struggling to show tangible value from their AI efforts. Those numbers are not in conflict. They describe a market full of experiments and thin on operational depth.
Capability without context is mostly theater
A general model is impressive in the same way a new engine on a lab bench is impressive. You can admire the power. You cannot drive to work with it. The hard part is the vehicle around the engine: the wiring, the controls, the maintenance, and the messy environment where actual people use it.
This is especially true inside large companies. Ask a model, “Which customers are exposed to the new export restriction?” It can write a crisp paragraph about export controls. That is not an answer. To answer well, it needs the company’s product catalog, contract language, account ownership, geography data, internal policy updates, and a clear understanding of who is allowed to see what. It also needs to cite its reasoning so someone can check it before a sales team calls the wrong customer.
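The pattern described above, gathering only the data a given user is allowed to see and keeping provenance so the answer can be checked, can be sketched in a few lines. This is a minimal illustration, not any vendor's implementation; the document sources, role names, and the `assemble_context` helper are all hypothetical.

```python
from dataclasses import dataclass

@dataclass
class Doc:
    source: str          # where this snippet came from, e.g. a CRM record
    text: str
    allowed_roles: set   # roles permitted to see this document

def assemble_context(question: str, docs: list, user_roles: set):
    """Filter documents by the asking user's permissions and keep
    provenance, so a downstream model answer can cite its sources."""
    visible = [d for d in docs if d.allowed_roles & user_roles]
    context = "\n\n".join(f"[{d.source}] {d.text}" for d in visible)
    citations = [d.source for d in visible]
    return context, citations

# Hypothetical corpus: a sales user should not see legal-only material.
docs = [
    Doc("crm/accounts", "Acme Corp ships product X to region Y.", {"sales"}),
    Doc("legal/contracts", "Acme clause 4.2 covers export terms.", {"legal"}),
]
context, cites = assemble_context(
    "Which customers are exposed to the new export restriction?",
    docs, user_roles={"sales"},
)
```

Everything interesting happens before the model is ever called: the same question with a different role produces a different context, which is exactly why a bare chatbot cannot answer it well.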
That is why enterprise AI so often disappoints after a promising first week. The model seems magical in a blank text box, then strangely dumb in the real environment that matters. It is not actually dumb. It is underfed, underconnected, and wrapped in too little process. A smart system without context behaves like a brilliant contractor who arrives with no badge, no map of the building, and no memory of yesterday’s meeting.
Jain’s company, Glean, has made a business out of this exact problem. The pitch is not “we have a better model than everyone else.” The pitch is that getting value from any model requires deep connection to the company’s own knowledge, systems, and permissions. That sounds less cinematic than the race to the next frontier model. It is also where a large share of the money will be made.
Enterprise value begins after the prototype
The easiest way to misunderstand AI adoption is to confuse a successful demo with a finished product. A demo proves the model can do the task once. A product proves the task can be done repeatedly, with enough accuracy, at the right point in a workflow, by people who are not AI enthusiasts.
That second part is where most of the labor sits. Someone has to decide when the model should answer directly and when it should ask a clarifying question. Someone has to build retrieval over internal documents and keep the connectors from breaking every time a system changes. Someone has to set evaluation criteria that reflect real business standards rather than benchmark vanity. Someone has to define what happens when the model is unsure. In regulated contexts, someone also has to decide what level of human review is required and how an audit trail gets preserved.
None of that work looks glamorous. It looks like product management, data plumbing, security review, process redesign, and training. Which is exactly the point. The next wave of AI value will come from ordinary, unflashy execution around already-strong models.
This also explains why some companies that expected Microsoft or Google to solve everything with a bundled assistant have kept shopping for specialized tools. The big platforms can provide broad capability. They cannot magically know how a particular pharmaceutical company handles deviations, how a bank defines an exception, or how a manufacturer traces a defect across suppliers. Generic AI gets you to the door. The last part of the trip is stubbornly local.
Frozen models could still power years of growth
Jain’s deeper claim is the one many people miss: even if frontier models stopped improving today, there would still be several years of meaningful growth ahead. That sounds counterintuitive because the industry has trained itself to think in release cycles. But deployment curves often outlast innovation curves.
Electricity did not transform factories the day the generator appeared. Early factories simply swapped steam for electric motors and kept the same layout. The real productivity gains arrived later, when managers redesigned workflows around what distributed electric power actually allowed. AI is heading into a similar phase. Plenty of firms have added a chatbot to old habits. Very few have reorganized work around what these systems do well.
That matters because the remaining headroom is huge. There are still support teams searching six systems to answer a routine question. There are sales reps manually turning call notes into CRM updates. There are finance departments copying values from PDFs into structured fields. There are internal policy questions that bounce across Slack, email, and a wiki graveyard before someone gets a reliable answer. Current models are already capable enough to shrink a lot of this waste if the surrounding system is built properly.
Seen that way, “1%” starts to make strategic sense. It does not mean companies are only scratching the surface of pure model intelligence in some abstract laboratory sense. It means they have barely begun to package that intelligence into repeatable workflows that survive contact with reality.
The advantage shifts from access to extraction
A year ago, access to the best models looked like a moat. It no longer does. Most serious software companies can call excellent models through an API. The technical frontier still matters, especially for the firms building the models, but it is a thinner source of product differentiation than many executives hoped.
The durable advantage is moving elsewhere. It lives in who can extract value from models faster and more reliably. That means better context assembly, better workflow design, better evaluation, and better adoption inside teams. It means knowing which tasks deserve full automation, which need a human checkpoint, and which should stay manual because the error cost is too high.
For software vendors, this is a more uncomfortable shift than the usual “AI will replace SaaS” slogan suggests. The threat is not simply that a foundation model will make their category vanish overnight. The nearer threat is that their product starts feeling clumsy compared with a competitor that has woven AI directly into the experience. If one tool makes users hunt through menus, copy information between tabs, and rewrite the same status update every week, it will start to feel older than its release date.
That does not mean every product needs an AI sidebar and a cheerful sparkle icon. Users are already tired of ornamental intelligence. The winning products will be the ones where AI reduces friction in ways that feel almost boring: the support console that drafts a response with the right account context already attached, the procurement tool that flags a risky clause before legal reads the contract, the knowledge system that answers in the company’s vocabulary rather than in polished generalities.
AI fluency becomes a basic job skill
There is a human side to this that gets less attention than it deserves. Companies like Glean now talk about “AI fluency” as a skill for nearly every role, not just technical ones. That idea is easy to dismiss as corporate slogan material until you define it more carefully.
AI fluency is not clever prompting as parlor trick. It is practical judgment. It means knowing when a model is likely to help, what kind of context improves the answer, what kinds of errors it tends to make, and how much verification the task requires. A recruiter, an analyst, a support manager, and a designer all need some version of that competence because their tools are changing around them.
This is one reason the market feels more uneven than the investment numbers suggest. Private generative AI investment reached $33.9 billion in 2024, according to Stanford, yet many organizations still have employees using these systems in ad hoc ways, with little shared understanding of best practice. Money can buy licenses quickly. It does not instantly produce trust, literacy, or good process.
The awkward middle phase is already visible. Some teams overuse AI and create more review work than they save. Others underuse it because they had one bad experience with a hallucinated answer and concluded the entire category is unreliable. The organizations that pull ahead will be the ones that treat adoption as a management problem, not just a purchasing decision.
Better models still matter, but they are not the gating factor
None of this means model progress has become irrelevant. There are domains where current systems still fail in ways that block adoption. High-stakes reasoning remains uneven. Long chains of action still degrade. Accuracy in specialized scientific or legal contexts can be strong one moment and slippery the next. Cost also matters at scale, and latency matters more than enthusiasts sometimes admit.
Still, for a wide band of enterprise work, model limits are no longer the binding constraint. The binding constraints are integration and design. A frozen model with great retrieval, solid permissions, a clean user flow, and clear escalation paths can create more business value than a more powerful model dropped into a vague interface with no surrounding discipline.
That is the part of the story benchmark culture misses. Benchmarks measure isolated capability under controlled conditions. Companies live inside constraints, legacy systems, risk policies, and employee habits. A gain of a few percentage points on a leaderboard can matter. So can a connector that finally unifies the document systems employees actually use. Only one of those shows up in a launch livestream.
The next five years belong to deployment
If Jain is even partly right, the market is still early in the least glamorous phase of the shift. The big prize is not waiting for some future model to descend and solve business software by decree. It is in the slow conversion of existing capability into products people trust and workflows they can rely on.
That changes the strategic question for almost everyone. Buyers should spend less time trying to predict the winning model family and more time asking whether their data, permissions, and processes are ready to support useful AI in production. Software vendors should worry less about whether model providers will swallow their category whole and more about whether their own product is becoming the last manual stop in an otherwise accelerated workflow.
The strange part of this moment is that both the hype and the skepticism can be true at once. The hype is real because the tools are already powerful. The skepticism is real because most organizations have not yet done the work required to cash that power out. Between those two facts sits a large, very practical growth curve, and it does not require a miracle from the next model release to keep climbing.
Published April 2026