Your AI Moat Is Probably a Liability

The oldest idea in strategy suddenly looks shaky. For decades, companies were taught to build defenses: proprietary technology, hard-to-copy systems, distribution advantages, habits that kept customers from leaving. In AI, a lot of that instinct still sounds sensible. It is also how teams end up lovingly polishing the thing that will trap them.

Arvind Jain, who built Glean into one of the most closely watched enterprise AI companies, puts it with unusual bluntness: whatever you think of as your moat is probably a liability. That sentence lands because it reverses a business reflex so deep most founders barely notice it. We are trained to protect what we have built. In AI, the bigger risk is becoming emotionally, organizationally, and financially attached to it.

That does not mean durable advantages disappear. It means they move. The stack underneath your product is changing so quickly that code has started to behave less like an asset and more like inventory with a short shelf life. If you still treat last year’s architecture as sacred, you are not defending a moat. You are accumulating debt.

Code now expires on a different clock

Software used to age slowly enough that companies could amortize decisions over years. You chose a database, built a set of services, wrote internal tooling, and expected that foundation to carry a product line for a long time. Rewrites were expensive and risky, so institutions learned to avoid them. “Don’t touch it if it works” became a survival rule.

AI breaks that timing.

A retrieval pipeline that looked sophisticated in 2023 can look fussy today. Prompt chains that required custom orchestration get replaced by stronger model reasoning. Fine-tuning decisions get revisited when context windows expand or inference prices fall. Guardrails, evaluation stacks, agent frameworks, even the shape of the user interface can change when models gain a capability you had to simulate in code six months earlier.

This is what Jain is getting at when he says that if you built something last year, it should probably be obsolete. If it is not, maybe you found the rare stable layer. More often, you just have not pushed your imagination far enough.

That mindset sounds reckless until you look closely. He is not arguing for chaos or novelty worship. He is arguing against sentimental engineering. There is a difference. Sentimental engineering treats old code as earned wisdom simply because it survived. In a market where the substrate is improving continuously, survival can be a misleading signal. Some code persists because it still solves a real problem. Some code persists because nobody wants the pain of ripping it out.

Those are very different situations, and companies confuse them all the time.

The hidden cost of a technical moat

The danger is not only technical. It is psychological.

Once a team invests heavily in a particular approach, it becomes hard to evaluate alternatives honestly. The architecture is tied to careers, planning cycles, roadmaps, and the status of the people who championed it. A clean-sheet rethink starts to feel like a referendum on prior decisions. The more you spent, the more pressure you feel to justify that spending. Sunk cost is not a spreadsheet problem. It is an identity problem.

That is why a moat can turn into debt. It does not merely slow you down. It distorts your judgment.

You can see this in AI products that keep rebuilding proprietary layers around capabilities that foundation models soon absorb. A team creates a complex classification system because the base model is weak. Then the model improves. The custom layer still exists, so the team keeps tuning it, monitoring it, and defending it in planning meetings. The company is now paying interest on a technical decision whose strategic value is shrinking.

Traditional software had versions of this problem, but AI makes it harsher because the external rate of change is so high. When a capability becomes widely available through APIs, open weights, or infrastructure vendors, the competitive value of owning the plumbing collapses faster than most organizations can emotionally process. What looked like a fortress turns out to be scaffolding.

This is why some of the most interesting AI companies reward deletion. If engineers only get status from adding code, the organization drifts toward bloat. If removing code is treated as evidence of better judgment, the company has a chance of staying light enough to move.

Jain’s line about throwing things away matters because it names the real enemy: legacy is no longer just a stage of maturity. It can begin almost immediately. You can become a legacy stack while still calling yourself a startup.

Agility is not a slogan

Once you accept that code decays faster, “move faster” stops being a motivational poster and becomes a design requirement. Agility, in this context, is not about shipping more tickets per sprint. It is the ability to change assumptions without the company breaking in half.

That sounds abstract, so make it concrete. An agile AI company can swap model providers without a quarter-long drama. It can revisit a workflow when users behave differently than expected. It can kill a feature that looked impressive in demos but becomes redundant once models improve. It can reshape pricing when inference economics change. It can reassign engineers from maintaining brittle scaffolding to building product depth customers will still value next year.

Most companies are much less agile than they think. They have fast product teams sitting on slow coordination structures. The code can change quickly, but the roadmap cannot. Or the roadmap can change, but sales compensation cannot. Or engineering is ready to replace a system, but compliance, procurement, and customer success are all tied to the old promise. In AI, those frictions matter because the window between “promising idea” and “commodity feature” keeps shrinking.

The companies that adapt well tend to share a few habits. They modularize the pieces most likely to change. They avoid overfitting the entire product to one model vendor’s quirks. They invest in evaluation and observability so they can tell whether a new approach is actually better. They keep decision-making close to the people who can see model shifts early. None of this is glamorous. It is mostly institutional hygiene, which is why it often gets overlooked in excited conversations about agents conquering enterprise software by Tuesday.
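Modularizing the volatile layer can be as simple as putting every model vendor behind one narrow seam. The sketch below is purely illustrative: the provider names, the `Completion` shape, and the `complete` function are hypothetical stand-ins, not any real SDK.

```python
from dataclasses import dataclass
from typing import Callable, Dict

@dataclass
class Completion:
    text: str
    provider: str

# Hypothetical vendor adapters. In a real system each would wrap that
# vendor's actual SDK call; here they just tag the response.
def _call_vendor_a(prompt: str) -> Completion:
    return Completion(text=f"[vendor-a] {prompt}", provider="vendor-a")

def _call_vendor_b(prompt: str) -> Completion:
    return Completion(text=f"[vendor-b] {prompt}", provider="vendor-b")

PROVIDERS: Dict[str, Callable[[str], Completion]] = {
    "vendor-a": _call_vendor_a,
    "vendor-b": _call_vendor_b,
}

def complete(prompt: str, provider: str = "vendor-a") -> Completion:
    """The single seam the rest of the product depends on."""
    return PROVIDERS[provider](prompt)

# Swapping providers is now one argument, not a quarter-long migration.
result = complete("summarize the Q3 report", provider="vendor-b")
print(result.provider)  # vendor-b
```

The design point is the narrowness of the interface: everything vendor-specific (retries, prompt quirks, pricing logic) stays inside the adapters, so when the market shifts, only the adapter is thrown away.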

Agility is also cultural. Teams need permission to say, with a straight face, “the thing we built last quarter no longer deserves to exist.” That sentence is difficult in any company. In AI it should be normal.

The parts customers actually remember

If the code is unstable, what lasts?

For enterprise companies, one answer is the relationship with the customer, though that phrase sounds softer than the thing it names. This is not about account-manager charm. It is about earning the right to shape a customer’s behavior at the exact moment they are trying to figure out what AI is good for inside a real organization.

Glean is a useful example because enterprise search and knowledge work sound deceptively generic until you try to deploy them at scale. The hard part is not simply generating a plausible answer. It is grounding that answer in the right systems, respecting permissions, understanding how work actually flows, and fitting into governance structures large companies already have. That requires product decisions that emerge from repeated contact with real customers, not from a benchmark.
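The "respecting permissions" point has a concrete shape: access control has to be enforced before retrieval, so restricted text never reaches the model at all. This is a minimal sketch of that pattern; the document fields, group names, and keyword scorer are illustrative assumptions, not how any particular product works.

```python
from dataclasses import dataclass

@dataclass
class Doc:
    doc_id: str
    text: str
    allowed_groups: frozenset  # groups permitted to see this document

def retrieve(query: str, user_groups: set, corpus: list) -> list:
    # 1. Enforce permissions first: restricted documents are dropped
    #    before any ranking or generation happens.
    visible = [d for d in corpus if d.allowed_groups & user_groups]
    # 2. Rank the remainder with a trivial keyword-overlap score,
    #    a stand-in for a real retriever.
    terms = query.lower().split()
    scored = [(sum(t in d.text.lower() for t in terms), d) for d in visible]
    return [d for score, d in sorted(scored, key=lambda p: -p[0]) if score > 0]

corpus = [
    Doc("hr-1", "salary bands for 2026", frozenset({"hr"})),
    Doc("eng-1", "retrieval pipeline design notes", frozenset({"eng", "hr"})),
]

# An engineer cannot retrieve the HR-only document, even with a matching query.
hits = retrieve("salary bands", {"eng"}, corpus)
print([d.doc_id for d in hits])  # []
```

The ordering is the point: permission filtering sits upstream of relevance, which is exactly the kind of unglamorous constraint that emerges from deployments rather than benchmarks.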

When Jain talks about shared roadmaps, he is pointing to something that is easy to underestimate. In a shifting market, customers do not only buy a product. They buy confidence that the vendor will keep steering with them as the terrain changes. That confidence becomes sticky in a way raw features are not.

A new entrant can copy interface patterns. It can adopt the same models. It can even clone parts of your architecture. What it cannot instantly clone is years of accumulated understanding about how Fortune 500 procurement works, where permissioning breaks, which security objections are symbolic and which are real, why one workflow gets adoption while another dies after the pilot, or which integrations matter enough to influence renewal decisions. Those details are not romantic. They are just expensive to learn.

There is a fashionable line that data is the moat in AI. Sometimes that is true, especially when the data is genuinely proprietary and hard to recreate. Often, though, the more durable advantage is not the data itself but the context around it: who can access it, how it maps to business processes, how it should be interpreted, and which failure modes customers will tolerate. Context is stickier than code because it lives in the relationship between product and customer reality.

Depth starts to matter more as models get broader

There is another inversion happening. As foundation models become more general, product companies need to become more specific.

This can feel counterintuitive. If the underlying models can do more and more tasks, shouldn’t the winning product be the broadest one? At the infrastructure layer, maybe. At the application layer, breadth is often a trap. A general capability invites a thousand possible products. Most of them will be shallow. The companies that survive are usually the ones that go deeper than a generalist can justify.

Jain’s view on focus is unusually disciplined for a moment when everyone is tempted to expand sideways. If you try to do everything, you end up competing on the giant’s favorite terrain: generic capability, marketing volume, and distribution muscle. If you go deep on a constrained problem, you can build workflow knowledge, customer trust, and implementation detail that broader players tend to neglect.

This is why the giant-versus-startup story in AI is more subtle than people think. Yes, OpenAI, Microsoft, Google, Anthropic, and others can move up the stack. They will overlap with application companies. Some overlap will be painful. Yet their incentives also pull them toward breadth. A startup can win by caring obsessively about a narrow slice of work that is too small, too operational, or too domain-specific for a platform company to own end to end.

That does not mean the giants are harmless. It means the competitive game is not simply “who has the best model.” It is “who turns a general capability into daily usefulness inside a particular environment.” Those are different skills.

Legacy companies are not automatically dead

AI discourse loves a clean extinction story. Old SaaS companies are supposed to get blindsided, interfaces are supposed to melt into chat, and the market is supposed to reset for a new generation of native products. There is some truth in that story. There is also a lot of lazy thinking.

Incumbents have real problems. They carry revenue commitments, installed workflows, older codebases, and management habits built for a slower era. Many will stumble because they treat AI as a decorative layer rather than a product rethink. Bolting a chatbot onto yesterday’s interface is not transformation. Users can smell the difference almost instantly.

Still, the assumption that existing application companies simply disappear is too simple. Businesses do not buy software as an abstract celebration of technological purity. They buy tools that fit processes, permissions, reporting requirements, audit constraints, and organizational habits. Those structures do not vanish because conversational interfaces got better.

In many cases, AI will increase the number of product surfaces rather than collapse them into one universal assistant. A company may still need a sales system, a support system, an HR system, a knowledge system, and an analytics system, even if all of them become more agentic and more conversational. Work has too many specialized constraints for a single interface to absorb everything cleanly.

That creates room for incumbents, but only if they are willing to cannibalize parts of their own product before a competitor does. The firms that struggle most may not be the ones with old technology. They may be the ones with old self-conceptions. If management still thinks the historical feature set deserves protection, the company will waste precious time defending categories that users no longer value in the same way.

The relevant question is not whether a company started before the current AI wave. It is whether it can reorganize around the present one.

Hiring for people who can survive moving ground

A strategy built on constant revision changes what kind of people thrive.

Jain has said he values hunger, and that can sound like generic startup chest-thumping until you place it in context. In an AI company, the job often includes discarding work you were proud of, relearning tools, and tolerating the fact that your expertise has a shorter half-life than it used to. Some highly accomplished people adapt beautifully to that. Some do not. Past success can become a source of drag when it convinces someone the world should stay legible on terms they already mastered.

This does not mean experience is overrated. Enterprise software without experienced operators quickly turns into a science fair with a sales team. It means experience only compounds when paired with intellectual flexibility. The best people in this environment are not the ones who know the most static facts. They are the ones who can update their model of the market without melodrama.

You see this in engineering first, but not only there. Sales teams need to learn products that change underneath them. Customer success teams need to guide clients through evolving workflows. Product leaders need to resist overcommitting to features whose value may evaporate. Finance teams need to plan around costs that can drop sharply, then spike again depending on usage and vendor choices. AI compresses the time between “we understand this” and “we need to rethink this.”

A company that hires for prestige without testing adaptability is buying a polished version of yesterday.

The real asset is institutional willingness to invalidate itself

The phrase “competitive advantage” still matters. It just needs a harder, less flattering interpretation.

In AI, durable advantage is less about defending a static possession and more about maintaining a dynamic capacity. Can you keep rebuilding your product around what changed outside your walls? Can you delete expensive complexity before it defines your roadmap? Can you get close enough to customers that their real constraints, not your internal mythology, shape what survives? Can you stay focused enough to build depth while the market screams at you to sprawl?

Those capabilities are easy to describe and difficult to institutionalize because they ask companies to act against their own instincts. Organizations like continuity. Investors like visible assets. Engineers like elegant systems. Executives like plans they can present with confidence. AI punishes all of those preferences when they harden into attachment.

There is still a place for proprietary technology, especially in reliability, security, compliance, domain-specific evaluation, and workflow integration. Some code will continue to matter deeply. The point is not that software has become worthless. The point is that its value now depends partly on how replaceable you are willing to let it be. Code is no longer impressive because it exists. It is impressive when it helps a company stay fluid in a market that keeps shifting underfoot.

That is the strange discipline AI demands. You build seriously, but you hold lightly. You commit resources, but not your identity. You invest in systems, while assuming some meaningful share of them will deserve retirement sooner than your planning cycle suggests. In this market, durability belongs to the company that can replace parts of itself without losing the trust of the people it serves.

Published April 2026