The Per-Agent Era: When Every AI Will Need Its Own Computer
A hacked-together future
Last week, while configuring an n8n agent for a client, the setup stalled on something oddly mundane. The agent needed its own Google Workspace account. Its own Gmail inbox. Its own Drive. Its own permissions inside the company stack.
That felt improvised at first, the kind of workaround you accept because the cleaner abstraction does not exist yet. We like to imagine AI deployment as a model endpoint plus a few prompts and some glue code. In practice, useful agents crash into the same enterprise furniture humans use every day: email, files, calendars, spreadsheets, browser sessions, role-based access controls, retention policies, audit logs.
Then Satya Nadella described Microsoft’s emerging “per-agent” view of software, and the improvisation stopped looking temporary. It started looking like an early version of the real thing.
The line that matters is simple: don’t think only in terms of software sold per person. Think in terms of software, infrastructure, and identities provisioned per agent. That is not just a pricing tweak. It is a change in what counts as a worker inside a company’s systems.
The market has spent two years talking about models, benchmarks, and copilots. The less glamorous layer is arriving now. If agents are going to do actual work, they need somewhere to live, a way to authenticate, a set of permissions, access to tools, and a trail showing what happened when something goes wrong. In other words, they need the same scaffolding companies built for employees.
Microsoft’s answer to the per-user problem
Every large software vendor now gets a version of the same question from investors. If AI starts doing more of the work people do inside Office, Salesforce, ServiceNow, and similar stacks, what happens to a business model built around seats sold to humans?
Nadella’s answer is clever because it does not defend the old model. It expands it. If each employee can supervise multiple agents, then the commercial unit does not disappear. It multiplies. A company with 500 people might eventually manage 2,000 or 5,000 software actors that create documents, update records, analyze data, and move tasks through systems. The person count stays the same. The active identity count explodes.
That sounds self-serving, because of course it is partly self-serving. Microsoft would like the AI transition to increase demand for Microsoft software, not erode it. But there is a real operational truth underneath the investor-friendly phrasing. Companies do not deploy work by sending raw tokens into the void. They deploy work inside governed environments. As soon as an agent touches customer data, financial data, regulated documents, procurement workflows, or internal communications, it enters the same world of identity, compliance, and control that every enterprise tool has been shaped by for two decades.
This is the part many AI product demos politely skip. The demo shows an agent drafting a report or triaging support tickets. It rarely shows the directory group it belongs to, the conditional access policy applied to it, the retention setting on the mailbox it uses, or the logs that let legal reconstruct its actions six months later. Yet those details decide whether the agent is a toy, a pilot, or a production system.
So the “per-agent” frame is not just an accounting trick. It is a statement about where software value moves when AI leaves the chatbot window and enters an organization.
A computer for a nonhuman worker
One of the most interesting signals in Nadella’s recent comments was that companies deploying autonomous agents want to provision Windows 365 environments for them. At first pass, that sounds almost silly. Why would an AI need a Windows desktop?
Because most businesses are not clean collections of APIs. They are patchworks of old and new software, wrapped in policy. Some systems expose good programmatic access. Many do not. Some tasks are technically automatable but politically constrained, because the approved path runs through a browser, a desktop app, or a monitored workflow. Some data only becomes visible after a user signs in through the right identity provider, on the right managed device, with the right session controls.
A virtual machine solves a lot of this in one stroke. It gives the agent a persistent environment. It gives security teams a place to apply familiar controls. It gives administrators a boundary for networking, storage, patching, and observability. It gives compliance teams something legible. “This agent used this machine, under this identity, from this time to that time” is not elegant in the abstract. It is extremely useful when someone asks why a contract changed, why an email was sent, or why a report included restricted data.
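That "this agent used this machine, under this identity, from this time to that time" statement has a natural shape as data. A minimal sketch in Python of what a session-boundary record might look like; every field name here is illustrative, not any vendor's actual schema:

```python
from dataclasses import dataclass
from datetime import datetime, timezone

@dataclass(frozen=True)
class AgentSession:
    """One 'this agent used this machine' statement, as a record.

    Field names are hypothetical, not Microsoft's or anyone's schema.
    """
    agent_id: str        # directory identity the agent authenticated as
    machine_id: str      # the VM or Cloud PC it ran on
    started_at: datetime
    ended_at: datetime

    def active_at(self, t: datetime) -> bool:
        """Answer 'was this agent on this machine at time t?'"""
        return self.started_at <= t <= self.ended_at

session = AgentSession(
    agent_id="agent-invoice-recon@example.com",
    machine_id="cpc-7f3a",
    started_at=datetime(2026, 3, 2, 9, 0, tzinfo=timezone.utc),
    ended_at=datetime(2026, 3, 2, 17, 0, tzinfo=timezone.utc),
)
print(session.active_at(datetime(2026, 3, 2, 12, 0, tzinfo=timezone.utc)))  # True
```

The value is not the code; it is that the record can answer the legal team's question six months later without anyone reconstructing events from memory.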
The word “computer” also needs a small update here. It does not necessarily mean a laptop with a wallpaper and a coffee stain nearby. It means an execution environment with state. Somewhere credentials can be managed. Somewhere files can persist. Somewhere software can be installed, inspected, and constrained. Somewhere the agent can return tomorrow and continue working with memory that is not just context stuffed into a prompt.
Humans got desks because offices needed a way to place work in space. Agents get virtual desktops for a similar reason. Companies need a way to place machine work inside systems they already know how to govern.
That is why the Google Workspace account for an n8n agent matters. It is the same pattern in miniature. The agent needed an identity to send mail, access documents, and inherit permissions from existing tools. The improvised setup was not a bug in the future. It was a prototype.
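The pattern behind that prototype can be written down. A vendor-neutral sketch of what per-agent provisioning amounts to, with hypothetical names throughout; a real deployment would go through the directory and admin APIs of whichever stack the company runs:

```python
from dataclasses import dataclass

@dataclass
class AgentIdentity:
    """A provisioned agent. All names and defaults are illustrative."""
    email: str                 # the mailbox the agent sends from
    scopes: frozenset          # permissions it inherits from existing tools
    retention_days: int = 365  # how long its mail and files are kept
    suspended: bool = False    # the off switch every identity needs

def provision_agent(name: str, domain: str, scopes: set) -> AgentIdentity:
    """Issue the same scaffolding an employee would get, scoped down."""
    return AgentIdentity(
        email=f"{name}@{domain}",
        scopes=frozenset(scopes),
    )

bot = provision_agent("triage-bot", "example.com", {"mail.send", "drive.read"})
print(bot.email)                    # triage-bot@example.com
print("drive.write" in bot.scopes)  # False: least privilege by default
```

The point of the sketch is the shape, not the fields: an identity, a set of scoped permissions, a retention policy, and a suspension flag are the minimum for an agent to be legible to existing governance.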
Copilots and agents will share the same backbone
There is a useful distinction in Nadella’s framing, and it maps cleanly to what companies are actually buying.
In one mode, the human stays in the driver’s seat. AI drafts, summarizes, suggests, and searches, but a person initiates the action and approves the result. That is the copilot model. It slots neatly into current workflows because it preserves the familiar chain of responsibility. The email still comes from the employee. The spreadsheet still belongs to the analyst. The meeting still has an owner.
In the other mode, the company provisions an agent directly to carry out bounded work with varying degrees of autonomy. The human sets the goal, defines constraints, reviews exceptions, and monitors outcomes, but does not click through every step. That is the agent model.
These are often presented as competing futures. They are more likely to coexist for a long time, because organizations do not move from manual to autonomous in one clean leap. A finance team may use copilots for analysis while running autonomous agents for reconciliations. A sales team may draft outreach with assistance while handing lead routing, CRM hygiene, and reporting to agents. The boundary will shift task by task, not company by company.
The important point is that both modes lean on the same underlying backbone. Identity still matters. Storage still matters. Access control still matters. Discovery and retention still matter. If an agent creates a document, updates a record, sends a message, or touches regulated data, the company must still answer basic questions: which identity acted, what sources informed the action, what policy applied, and how the action can be reversed or audited.
That continuity is why incumbents are in a stronger position than some people expected. The AI layer may feel new. The governance layer is old, and old in enterprise software usually means “expensive to replace.” A company can swap a model more easily than it can redesign its identity architecture or compliance process.
Why agents will keep using Excel
There is another part of Nadella’s argument that sounds counterintuitive until you sit with it. Agents are not necessarily going to abandon human tools and do everything as pure code. In many cases, they will keep using those tools because doing so is more efficient.
Think about Excel. It is easy to sneer at spreadsheets as relics or as accidental databases with Stockholm syndrome. It is harder to ignore the amount of business logic sitting inside them. Pricing models, forecasts, reconciliations, approval structures, edge-case formulas, formatting conventions, domain assumptions, and hidden tabs full of delicate institutional memory all live there. Rebuilding that from scratch into bespoke software is expensive. Recreating it on the fly with model reasoning is wasteful and fragile.
Tool use changes the economics. If an agent can call into Excel, understand formulas, manipulate sheets, and preserve the existing logic, it spends far fewer tokens hallucinating the shape of the task. It uses the software as a compressed bundle of human intent. That is what Nadella means when he talks about tool use being more token-efficient.
This matters beyond spreadsheets. Email clients, CRMs, document editors, BI tools, ticketing systems, browsers, and vertical SaaS products are not just surfaces for human clicking. They are containers for workflows, permissions, schemas, and conventions. From an agent’s perspective, they can function like specialized instruments. You do not ask a model to reinvent bookkeeping every time it needs to close the month. You give it access to the bookkeeping system.
A lot of futurist talk assumes AI will erase interfaces. Nearer term, AI may increase their importance by turning them into stable tool endpoints. The software that survives will not only be the software humans like to use. It will be the software agents can reliably operate inside controlled environments.
Small companies will feel this sooner than they expect
For large enterprises, this story sounds like a continuation of familiar IT concerns with more automation layered on top. For small and midsize companies, it can sneak up faster.
A ten-person company that adds five meaningful agents does not suddenly resemble a ten-person company with a chatbot. It starts to resemble a fifteen-identity organization, then a twenty-identity organization, depending on how the agents are segmented. One agent may handle inbox triage. Another may manage lead enrichment. A third may reconcile invoices. A fourth may prepare weekly reporting. A fifth may monitor compliance exceptions. If each one needs distinct permissions and traceability, “just use one service account” stops being viable very quickly.
That shift creates a new kind of admin burden. Someone has to decide what an agent can access and what it cannot. Someone has to separate duties so the same agent cannot both create a vendor and approve a payment. Someone has to review logs when the agent performs an action that looks wrong but may not be wrong. Someone has to understand whether the failure came from the model, the prompt, the connector, the source data, or the surrounding permissions.
This is where a lot of current automation work still feels deceptively simple. Building the workflow is often the easy part. Deploying it so that security, finance, legal, and operations can all tolerate it is where the work becomes real.
The language of “digital employees” can be useful up to a point, because it reminds people that autonomous software needs onboarding, scoping, and oversight. It also breaks down if taken too literally. Agents do not need salaries, morale management, or ergonomic chairs. But they do create many of the same administrative questions employees create: who authorized this actor, what can it access, how do we suspend it, and how do we know what it did?
Those questions are going to land first in the lap of people who today call themselves automation builders, RevOps operators, IT generalists, or modern no-code consultants. The job is quietly broadening. It is becoming less about getting workflows to run and more about building trustworthy environments for machine work.
The real market is identity plus audit
Once you see the pattern, a lot of current vendor strategy becomes easier to read. The valuable layer in enterprise AI is not just model access. It is the combination of identity, data locality, permissions, observability, and policy enforcement wrapped around model access.
That is why Microsoft’s position is stronger than a simple “it owns Office” summary suggests. The company controls the directory, productivity apps, security stack, device management layer, and increasingly the AI interface sitting on top. If agents become first-class actors inside companies, those layers fit together in a way that is very hard for point solutions to match.
Google can play a similar game inside Workspace. Okta, Cloudflare, AWS, and others will try from adjacent angles. The common denominator is that the race is moving toward managed identities with action rights, not just conversational intelligence.
There is still genuine uncertainty here. Some workflows will become agent-native and bypass the old desktop metaphor entirely. New software will be built with machine operators in mind from day one. In those environments, the idea of provisioning a full virtual desktop for each agent may look wasteful, like giving a robot a cubicle because that is where the forms happen to be.
But transitions rarely begin in the clean room. They begin inside old structures. Companies will reach for the shortest path that makes an agent legible to their existing controls. A virtual machine, a Workspace account, a service identity with scoped permissions, a mailbox under retention, a monitored browser session: these are inelegant building blocks, and they are exactly the kind of building blocks organizations adopt.
The tidy future arrives later, after the invoices, incident reports, and audits have taught everyone what actually needs to be standardized.
The new admin layer
There is a temptation to describe all this as invisible plumbing and move on. That misses the human consequence.
When each productive agent needs its own environment, the center of gravity shifts from model cleverness to institutional design. Companies will need to decide which work deserves autonomy, which actions require human approval, how finely to separate machine identities, and how much traceability is enough before the overhead defeats the gains. Those are not merely technical settings. They are management decisions encoded into software.
That is why the little Google Workspace account for an agent feels important. It captures the moment AI stops being a feature someone can turn on and starts becoming a participant in the company’s structure. Once that happens, the supporting machinery stops being optional. Somebody has to issue credentials, scope access, review behavior, and clean up after failures. Somebody has to treat machine work as something that exists inside the business, not outside it.
The companies that understand this early will not necessarily have the flashiest demos. They will have something more durable: agents that can survive contact with real governance. And the people building those systems will be doing a job that barely had a name a year ago.
Published April 2026