The Web Is Turning Into a Negotiation Between Agents
At 11 p.m., the boundary moved
It was late, and I was watching a scheduling agent work through my next day. In under five minutes, it had talked to other services, reshuffled meetings, booked a ride, and secured a conference room I had not explicitly requested. The strange part was not speed. The strange part was that every move made sense in context.
That was the moment the category changed for me. A chatbot waits to be asked. This system inferred intent, made choices, and executed them across tools that already run my workday.
Around the same time, I read the 200-page paper, Agentic Web: Weaving the Next Web with AI Agents. Its central claim sounds abstract until you see one of these systems operating in the wild: the internet is shifting from a place humans navigate directly to a place where software agents increasingly transact on our behalf.
That does not mean websites vanish next year or that every task becomes a robot-to-robot handshake. It means the default direction of travel is becoming hard to ignore. We are moving from search-and-click toward delegate-and-monitor. Once you see that, a lot of current product decisions start to look temporary.
Agency changes the web’s basic unit
People keep describing this shift as better automation, and that undersells it. Automation usually means a predefined script. If X happens, do Y. An agentic system is closer to a junior colleague with broad permissions, partial judgment, and access to tools. You give it a goal, constraints, and a bit of context. It then figures out the sequence.
That difference matters because the web we built was organized around pages and forms. Even apps were mostly polished wrappers around human attention. You opened the thing, found the button, compared options, and carried the state in your own head. The machine did retrieval and rendering. You did the deciding.
Agents scramble that division of labor. If I want to plan a trip, I may never visit twelve sites, compare tabs, and manually align dates. My agent can query airline data, negotiate a hotel booking, check whether the arrival time collides with another commitment, and place a reservation that fits my stated preferences. The human role shifts upward, toward defining preferences and approving risk.
That sounds efficient because it is efficient. It also means the basic unit of the web is no longer a page view. It is a delegated intention moving through a network of services. That is a deeper change than another interface refresh or a better recommendation engine.
Protocols are the real infrastructure story
The flashy demos get attention, but protocols are where this becomes real. Without them, you have an eloquent model trapped in a chat window. It can describe the world beautifully and still fail to touch it.
Model Context Protocol, or MCP, is one of the key pieces here. The simple way to think about it is this: it gives models a standard way to connect to outside data and tools. Instead of every developer building one-off integrations for calendars, inboxes, databases, design files, and internal systems, MCP creates a reusable contract between a model client and a server exposing those capabilities.
That sounds dry until you feel the difference. A model without structured access is like a smart person locked in a library. It can reason, summarize, and speculate, but it cannot check your schedule, pull the latest contract, or complete a task in the real world. With a stable protocol, the model gets hands.
Agent-to-agent protocols matter for the same reason. Google’s A2A effort is one sign of where the industry thinks this is going: agents from different vendors need a standard way to discover each other, exchange capabilities, negotiate tasks, and return results. Scraping HTML was always a hack. It worked because websites were built for human eyes first. In a network of agents, direct machine-readable coordination becomes far more valuable than pretending every workflow is still a browser session.
Standards are rarely glamorous. They are also where power accumulates. The companies that shape these protocols are not merely improving convenience. They are defining how intent moves, how trust is expressed, and which actors become first-class citizens in the next version of the internet.
Useful systems still feel unsettlingly immature
I spent time running a small multi-agent setup for research monitoring. One agent scanned sources, another summarized, and a third prioritized findings against active projects. The time savings were obvious, but the bigger surprise was conceptual. The system surfaced links I would likely have missed when tired or overloaded, including one between an academic paper on image compression and a latency problem in a practical deployment.
This is where agentic systems feel genuinely new. They do not just reduce keystrokes. They can widen the search space around a problem and connect pieces that sit in different mental drawers for humans.
Then the wobble appears. An email-handling agent I was testing once classified a routine commercial follow-up as urgent and replied with a level of enthusiasm normally associated with an overeager sales intern and too much coffee. The result was not catastrophic, but it was instructive. The model understood the grammar of business communication and missed the social temperature entirely.
That combination of competence and naivety is the defining texture of current agents. They are often good enough to earn permissions and still brittle enough to misuse them. Which means deployment is less about asking whether the model is intelligent in the abstract and more about deciding which mistakes you can tolerate when it acts independently.
This is why the conversation cannot stop at benchmark scores. The lived question is operational judgment. How often does the agent recover from ambiguity? How gracefully does it fail? When it encounters conflicting signals, does it escalate, pause, or improvise? Those behaviors determine trust far more than another point on a leaderboard.
The market is shifting from clicks to machine preference
Once agents mediate transactions, the web starts to invert. A website’s glossy frontend matters less if the real customer is a software system evaluating availability, reliability, price, delivery terms, refund rules, and reputation through structured interfaces.
For years, businesses fought for human attention through design, ranking tricks, and behavioral nudges. In the next phase, many will fight to be legible and preferable to agents. That changes the optimization target. Clear APIs, structured product data, transparent policies, and stable service descriptions become more valuable than persuasive copywriting alone.
Search engine optimization does not disappear. It mutates. A travel brand may still care about human discovery, but it will also need to answer a more technical question: when a personal agent is comparing ten providers, how does your service communicate trust, constraints, and value in a way another machine can parse and act on?
There is an economic layer beneath that. Agents will not just retrieve options. They will negotiate. If your purchasing agent asks five vendors for a quote and four vendor agents respond with tailored terms, the market starts to look less like browsing and more like continuous machine-mediated procurement. Recommendation becomes bidding. Interface becomes policy.
That also reorders power among intermediaries. If your primary relationship to commerce, information, and scheduling is filtered through an assistant, whoever controls that assistant controls a large share of your reachable world. Default preferences, ranking logic, and commercial incentives stop being abstract platform concerns. They become active determinants of what your agent notices, chooses, and ignores.
Delegated power creates a larger attack surface
The security story gets darker as soon as agents can act. A bad answer from a chatbot is annoying. A bad action from an authorized agent can spend money, expose data, or trigger follow-on failures across systems it can reach.
Prompt injection was the first taste of this problem. In an agentic environment, the stakes rise because the model is no longer only producing text. It may be reading a malicious document, misinterpreting hidden instructions, and then carrying out a transaction with your permissions attached. In controlled tests, it does not take much creativity to confuse identity boundaries or smuggle hostile instructions into a workflow.
The phrase “context injection” sounds technical, but the human version is simple. Someone slips bad advice into the ear of your assistant, and your assistant has your keys. That is a very different risk profile from a model hallucinating an answer to trivia.
The danger multiplies when agents trust other agents. A compromised node in a multi-agent workflow can poison decisions downstream, pass along fabricated authority, or steer negotiations toward outcomes that look legitimate to every participant in the chain. The failure mode is closer to organizational fraud than classic malware. The system does not always crash. It can keep functioning while serving the wrong interests.
This is where a lot of cheerful product demos feel incomplete. Capability gets shown in bright lighting. Governance usually appears in a slide with vague boxes. Yet identity, permissions, auditability, rollback, and bounded autonomy are the real architecture of safe deployment. Without them, you are not building assistance. You are handing a persuasive stochastic process a corporate card and hoping the logs are readable.
A new literacy is emerging
Most people learned the consumer web through direct manipulation. Click the icon. Open the tab. Check the result. The next layer requires a different habit of mind. You are managing delegated labor rather than executing every step yourself.
That means writing better instructions, but it goes beyond prompting. The practical skill is setting boundaries. Which tools can this agent access? What transactions require approval? Which sources are trusted? When should it stop and ask? These are managerial questions disguised as product settings.
Developers feel this first because they sit closest to implementation. A surprising amount of modern software work is drifting from interface construction toward orchestration, tool definition, and policy design. The frontend still matters, though increasingly as a control surface for supervision rather than the place where all value is created.
Companies will feel it next. A lot of current digital strategy assumes the user arrives as a person with eyes on a screen. That assumption is weakening. If a customer’s representative is a machine, businesses need machine-readable contracts, predictable service semantics, clearer permission boundaries, and better observability. Many organizations are still spending as if the old acquisition stack will hold forever.
Users will feel it last, which is usually how these shifts go. When they do, the friction will be psychological as much as technical. Delegation sounds effortless until the agent books the wrong thing, reveals too much, or optimizes for your written preferences while missing your actual values. Human beings are full of tacit judgment. Encoding that cleanly is much harder than asking for a summary of a PDF.
The web is entering a supervised autonomy phase
There is a temptation to narrate this as a clean handoff from human-driven internet to machine-driven internet. Reality looks messier. We are entering a phase where systems are capable enough to act, still unreliable enough to require oversight, and increasingly connected enough that mistakes spread faster than before.
That awkward middle may last longer than enthusiasts expect. Many businesses do not have the data hygiene, access controls, or process discipline needed for broad autonomy. Many users do not want it yet. Some high-trust domains may remain stubbornly human-facing because accountability is part of the service, not overhead to be minimized.
Even so, the direction is set. The important question is no longer whether agents can do useful work online. They can. The important question is who gets to define their goals, permissions, defaults, and loyalties once they become ordinary intermediaries for everyday life.
That is where the real contest will sit. The visible layer will be convenience. The deeper layer will be governance. If your agent schedules your meetings, handles your purchases, filters your information, and negotiates with other systems while you sleep, then the most consequential product decision may be whose priorities it quietly learns to serve.
End of entry.
Published April 2026