AI Makes Building Cheap. Human Attention Gets Pricier.
At an OpenAI town hall, someone asked the question hanging over this whole cycle: if I can build almost anything, how do I find people who actually get value from it?
Sam Altman's answer was unusually clean. In a world where creation gets cheaper and easier, human attention starts to look like the last genuinely scarce resource.
That sounds obvious until you follow it all the way down.
For years, software startups told themselves a flattering story. The hard part was building. If only the team were better, faster, more technical, the market would open up. Y Combinator spent years teaching founders the opposite. Shipping matters, but the deeper challenge is making anyone care. Product has always been only half the battle. AI is turning that half into the cheaper one.
The result is a strange inversion. The internet spent decades making distribution cheap and creation expensive. Generative AI is pushing in the other direction. Creation is collapsing toward commodity. Distribution is becoming the fight.
Creation stops being the excuse
Altman made a forecast that would have sounded absurd not long ago: by the end of 2026, a few hundred dollars of inference and a decent idea could produce software that once required a team working for a year.
Maybe that timetable slips. Predictions usually do. The direction is still clear enough to matter.
A founder used to need engineers, time, and a tolerance for broken weekends just to test a product thesis. Now a lot of the first draft can be generated, stitched together, and improved by small teams or even one stubborn person with a sharp prompt loop. The cost of trying things is falling fast.
That changes startup math in a very specific way. When experimentation gets cheap, more people can enter the game. More products get launched. More features get copied. More categories fill up with almost-good-enough alternatives.
If you want a concrete example, look at internal tools, customer support layers, note-taking assistants, meeting summarizers, prospecting agents, content repurposing tools, and lightweight analytics wrappers. These are useful products. Many also arrive with nearly identical landing pages, similar capabilities, and the same promise that your workday will finally stop behaving like a cursed side quest.
The moat is no longer the fact that you managed to build it.
That does not mean technical advantage disappears. Deep infrastructure, proprietary data, workflow integration, reliability, safety, and trust still matter. But the time window during which pure building confers a durable edge is shrinking. A lot of founders are still acting like they have a two-year lead when they really have six interesting weeks.
Attention does not get cheaper with scale
Human beings did not receive a matching upgrade.
We still have the same number of hours in a day. We still get distracted by messages, meetings, family, deadlines, low blood sugar, and the thousand tiny frictions that make adoption harder than demos suggest. Every new AI product enters that same narrow channel: someone's limited willingness to notice, evaluate, remember, trust, and return.
That is what makes attention different from compute. Compute scales with capital and engineering. Attention scales badly. You can buy impressions, rent feeds, automate outreach, and flood the zone with synthetic content, but none of that creates more real cognitive bandwidth on the other side. It mostly creates more competition for the same bandwidth.
This is where a lot of AI optimism quietly flips into an AI tax. If everyone can generate marketing copy, personalized emails, sales sequences, ad variants, explainer videos, and product mockups at almost zero marginal cost, the supply of things asking for attention explodes. The average quality does not keep pace. A lot of it is slop, and some of the slop is made by perfectly competent humans using very efficient tools.
The bottleneck tightens because AI can automate pressure on attention far more effectively than it can expand attention itself.
That matters for go-to-market in a much deeper sense than "ads will get noisy." It means distribution is no longer a function you bolt on after shipping. It becomes part of the product problem. Your product must earn a place in a crowded mental environment where users are already over-subscribed. Convenience alone will not save you. Novelty wears off. Even usefulness is not enough if the cost of learning, trusting, or remembering the tool stays too high.
People do not adopt software because it exists. They adopt it because it fits into an already overfull life.
Idea quality becomes the production constraint
There is another bottleneck hiding behind attention: ideas.
If AI lowers the cost of making things, then producing more things is easy. Producing something worth attention is the hard part. This sounds almost quaint, like advice from a startup poster left in a WeWork basement, but the point sharpens in an age of abundance. When output gets cheap, judgment becomes expensive.
A consultant in the OpenAI discussion framed this well. On the consumption side, attention is scarce. On the production side, the scarce resource is good ideas. Plenty of products do not deserve the time they demand. The fact that they can be built in a weekend does not make them meaningful.
This is one place where AI may genuinely help, though probably less magically than enthusiasts claim. Models are getting better at ideation support: scanning your past work, proposing adjacent directions, surfacing missing assumptions, and playing the role of a surprisingly informed sparring partner. Altman mentioned the possibility of a kind of Paul Graham-style bot, one that asks the uncomfortable but useful startup questions.
That will not replace taste. It may improve the speed at which taste gets exercised.
And speed matters. If the cost of testing ideas drops, you can learn faster which ideas deserve a second week, a second month, a real team, and actual customers. The feedback loop shortens. You stop protecting mediocre ideas simply because they were expensive to build. That is healthy. It also means the market becomes less forgiving. More experiments reach users faster, so weak products get exposed faster too.
Cheap creation does not rescue bad judgment. It just removes the excuse for keeping it around.
Go-to-market moves to the center of the company
Altman offered a useful test for AI startups: will your company be happy or sad if the next major model release is dramatically better?
It is a deceptively sharp question. Some companies are building with the tide. Better models expand their market, improve their product, and lower their costs. Others are living in the cracks of current model limitations. They exist because the models are awkward, unreliable, or missing a feature. When the base models improve, those companies get flattened.
The distinction matters because it separates a product strategy from a patch strategy.
A lot of current AI startups are wrappers in the neutral sense of the word: they package raw model capability into a usable workflow, which can be valuable. The problem comes when the workflow is thin and the differentiation lives almost entirely in being earlier than the platform. That is not a company yet. That is a temporary gap in the road.
So what becomes durable? Usually some combination of distribution, embedded workflow, proprietary context, and trust. The old language of moats still applies, but it is shifting from invention alone toward relationships and position.
Alex Rampell has described a recurring startup race: can the startup win distribution before the incumbent builds the feature? AI compresses that race. Incumbents can copy faster than before, because the cost of implementation is dropping for them too. Startups therefore need distribution sooner, not later. They need users who return, data that compounds, habits that stick, and channels that do not vanish the moment a platform updates its roadmap.
This is why go-to-market stops being a department and becomes existential design. You cannot separate what the product does from how users discover it, trust it, and bring it into their routine. In many cases, the best product decision is the one that reduces explanation. If a user has to understand your architecture before they feel the value, you are already spending from a scarce account.
The web's gatekeepers are changing shape
There is another layer to the attention problem, and it is easy to miss if you still think in classic search terms.
For years, the web ran on ranking systems. Search engines, social feeds, recommendation algorithms, app stores, and marketplaces decided who got seen. That world was never exactly fair, but it was at least legible. You could learn SEO, buy ads, cultivate followers, or engineer for algorithmic preference.
AI assistants are introducing a different kind of gatekeeping. Instead of sending users toward a set of links, they increasingly answer the question themselves and occasionally cite a few sources. Visibility no longer depends only on being indexed or ranked. It depends on whether a model, or the retrieval layer around it, selects your information as worthy of inclusion.
That changes publisher and startup behavior.
One response is syndication. If traffic is less reliable, content owners try to get paid upstream by licensing data directly to model providers. We are already seeing variations of this across publishers, forums, and structured knowledge platforms. Another response is to build such a strong brand that users bypass intermediaries and come to you directly. In a world of machine-mediated discovery, memorability becomes distribution infrastructure.
This shift is subtle but important. The old web rewarded being found. The new environment may reward being selected by machines or remembered by humans. Sometimes both.
For startups, this means classic content marketing may weaken unless it is tied to a distinctive point of view, proprietary data, community, or utility. Generic educational content is easy for models to summarize and easier for competitors to mass-produce. If your demand engine depends on being the tenth competent explainer of a known topic, the floor is rising under you.
The safer route is to create things that are hard to compress: trusted tools, original data, communities with actual relationships, workflows tied to specific teams, and brands people actively seek out. A model can paraphrase your blog post. It cannot easily replicate the accumulated trust behind why someone chooses your product for payroll, legal review, medical workflow, or a mission-critical sales process.
The scarce thing stays human
The deepest consequence of AI abundance is not that software becomes free. It is that choice becomes heavier.
When many products can be built quickly, the winner is less often the team that merely ships first. It is the team that understands what deserves attention, how to earn it, and how to convert a moment of curiosity into ongoing use. That is a more human challenge than the current discourse admits. It depends on taste, timing, empathy, credibility, and the ability to remove friction from somebody else's already crowded day.
Founders like to believe technology changes the whole game. Sometimes it does. More often, it changes the price of one part of the game and exposes the part that was always hard. AI is doing that now. It is making creation cheaper and exposing how little of business was ever about creation alone.
You can build faster than ever. The question waiting behind every demo is still the same one from that town hall: who will care enough to make room for it in their life?
Published April 2026