AI Is Not an Extension of the Brain but of the Earth
AI is sold as weightless intelligence. Type a prompt, receive an answer, and the whole arrangement seems strangely clean, almost ethereal. That feeling is the first deception.
Kate Crawford has spent years arguing that we describe these systems with the wrong metaphor. We talk about them as if they were an extension of the mind, a digital halo around human thought. Her correction is sharper and more useful: this is an extension of the Earth. AI does not hover above material reality. It digs into it.
That shift in framing matters because the public conversation is still trapped in the language of abstraction. We debate model capabilities, benchmark scores, existential risk, and whether a chatbot sounds persuasive enough to pass as helpful. Meanwhile, an industrial system is expanding underneath the software. It pulls electricity from grids already under strain, water from municipal systems and watersheds, minerals from mines, and human labor from poorly protected parts of the global economy. The machine does not begin at the prompt box. It begins in land, energy, and logistics.
Once you see that, a lot of familiar language starts to look like camouflage.
The cloud was always a branding exercise
The word “cloud” did quiet political work. It turned a physical buildout into a metaphor, which made it easier to ignore. A cloud sounds diffuse, gentle, and borderless. A data center is none of those things. It is a building with a fence, a substation, diesel or gas backup, cooling systems, security staff, tax deals, and neighbors.
Generative AI intensifies that old sleight of hand. These models feel especially detached from matter because their interface is language. You ask for a poem, a market summary, or a Python function, and it arrives as text. Text has no obvious weight. The response appears to emerge from a void, as if the underlying process were mostly cognitive rather than industrial. But language models are not floating minds. They are compute systems, and compute systems are infrastructure.
That infrastructure has grown fast enough to distort the places around it. The leading facilities are not modest server rooms hidden in office parks. They are campuses, often planned around power availability before anything else. A modern AI cluster can involve tens of thousands of accelerators, dense networking, specialized memory, cooling hardware, and electrical equipment that looks more like utility engineering than software development. Companies present these expansions as innovation stories because “we are building a giant electricity consumer” does not play as well in a keynote.
There is also a timing problem. Software can scale faster than roads, substations, transmission lines, and new generation. Investor expectations move in internet time. The physical world does not. So the buildout starts to look improvised: temporary structures, emergency permits, generator fleets, and aggressive efforts to lock up power wherever it can be found. If you want to know what AI really is, do not start with the chatbot demo. Start with the procurement teams, the utilities, and the construction schedules.
Intelligence has a supply chain
The material story begins long before a model is trained. It starts with the hardware stack, and the hardware stack starts in extraction. That does not mean every chip is made out of the same “rare earths” slurry you see in simplified explainers. The reality is more mundane and more revealing. AI depends on a sprawling industrial base that includes high-purity silicon, copper, aluminum, gold, tin, steel, specialty chemicals, semiconductor gases, cooling equipment, backup batteries, transformers, and, across the wider system, minerals such as lithium, cobalt, nickel, graphite, and rare earth elements for motors and electrical components. The point is not one magic ingredient. The point is the whole chain.
The glamour object in the current cycle is the high-end GPU, whether it is an A100, H100, or the newer accelerators rushing into hyperscale deployments. These chips are engineering marvels. They are also dense bundles of matter, fabrication energy, intellectual property, global shipping, and scarce manufacturing capacity. Producing them requires some of the most complex industrial processes humans have ever built, with fabrication plants that consume enormous quantities of ultra-pure water and electricity. “Software” reaches the end user only after this heavy industrial prelude.
Crawford’s reporting has been valuable partly because she refuses to let these beginnings disappear. When she writes about AI as extractive, she means that literally. Mines are involved. Smelters are involved. Factory workers are involved. So are communities that absorb pollution and landscape damage while the finished product is marketed to distant consumers as convenience or productivity. The physical burden is usually not borne in the same place as the financial reward.
That geographic split helps the fantasy survive. A knowledge worker in Paris or San Francisco sees a writing assistant. A county in Virginia sees a giant new load request. A mining region sees pressure for more output. A neighborhood near a rapidly permitted facility sees trucks, heat, noise, and emissions. The system looks immaterial only from the right distance.
Chips are geology with firmware
There is a temptation to make all this sound metaphysical, as if the insight were simply that “everything digital is physical.” That is true, but it is too soft. The more exact point is that AI transforms geological and electrical capacity into synthetic language and image generation at massive scale. It converts Earth into probability distributions and then sells the result back as intelligence.
That is why the “brain extension” metaphor misleads. Brains are biological organs with extraordinary energy efficiency. A human brain runs on about twenty watts. Training frontier models takes datacenter-scale power over extended periods, and serving them to millions of users creates a continuous demand curve. When people say these systems resemble thinking, they often smuggle in an assumption that the underlying process is elegant in the same way cognition is elegant. It is not. It is a brute-force statistical method made practical by extraordinary quantities of hardware and energy.
That does not make the systems useless. It does mean their costs should be discussed in the same breath as their capabilities.
Every prompt touches the grid
Training gets the headlines because it is dramatic: huge clusters, months of computation, runs that cost fortunes. But the long tail is inference. The real bill arrives when millions of people, then hundreds of millions, ask models to do work all day. A single prompt does not look like much in isolation. At platform scale, inference becomes a utility question.
Precise numbers are maddeningly hard to get because companies disclose very little. Estimates for the energy cost of a prompt vary widely depending on model size, hardware generation, response length, routing tricks, caching, batching, and whether the system is retrieving information or generating from scratch. Comparisons with web search are similarly slippery because search is changing too. Still, the broad direction is clear enough. Generative responses tend to require more computation than classic keyword retrieval, and small per-query differences become enormous when the query volume reaches the billions.
That is the important part. Scale swallows caveats. You do not need a perfect number to see the trend. If a service handles billions of daily interactions, then even fractional watt-hours accumulate into serious demand. The industry is now planning around gigawatts, not gadgets. Utilities are reworking forecasts because datacenters have become one of the fastest-growing categories of electricity consumption in some regions. The discussion has shifted from “can we power a server room” to “which new generation, transmission upgrades, and backup systems will support this wave of campuses.”
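The arithmetic behind that claim is easy to sketch. The figures below are hypothetical placeholders chosen purely for illustration, since real per-query energy costs are undisclosed and vary widely, but they show how fractional watt-hours compound at platform scale:

```python
# Back-of-envelope illustration: per-query energy times query volume.
# Both inputs are assumed values for illustration, not measurements.
WH_PER_QUERY = 0.3        # hypothetical watt-hours per generative response
QUERIES_PER_DAY = 2e9     # hypothetical daily query volume at platform scale

daily_mwh = WH_PER_QUERY * QUERIES_PER_DAY / 1e6  # watt-hours -> megawatt-hours
avg_power_mw = daily_mwh / 24                     # implied continuous load

print(f"{daily_mwh:,.0f} MWh per day, roughly {avg_power_mw:,.0f} MW of continuous demand")
```

Even with a deliberately modest per-query figure, a single service implies a continuous industrial-scale load, and the industry runs many such services at once.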
In some places, this shift is already visible in public planning. Northern Virginia remains the most famous case in the United States, where datacenter concentration has turned local zoning fights into debates about substations, transmission corridors, tax revenue, noise, and the shape of statewide energy demand. In Ireland, datacenters have grown into a striking share of national electricity use. This is no longer a niche technical concern. It is part of how energy systems are governed.
The industry likes to imply that ever-improving efficiency will neutralize the problem. Efficiency matters, and hardware has improved fast. But there is an old rule in computing, a version of what economists call the Jevons paradox: when a capability gets cheaper, people use much more of it. Better chips lower the cost per unit of computation, which often expands total demand rather than shrinking it. If the product strategy is to place generative models inside search, office software, customer support, coding tools, media workflows, and every enterprise dashboard within reach, efficiency savings can disappear into growth.
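The rebound dynamic can be made concrete with an equally rough sketch. Suppose, hypothetically, that a new hardware generation halves the energy cost per unit of computation while product integration triples the computation demanded:

```python
# Illustrative rebound arithmetic: efficiency gain vs. demand growth.
# The two factors below are assumptions chosen for illustration only.
energy_per_unit = 1.0   # baseline energy per unit of computation
units_demanded = 1.0    # baseline total computation

efficiency_gain = 0.5   # new hardware uses half the energy per unit
demand_growth = 3.0     # product integration triples total computation

old_total = energy_per_unit * units_demanded
new_total = (energy_per_unit * efficiency_gain) * (units_demanded * demand_growth)

# Total energy grows despite the per-unit efficiency win.
print(f"total energy changes by a factor of {new_total / old_total:.1f}")
```

Under those assumed factors, total energy use grows by half even as each unit of computation becomes twice as efficient. The direction of the net effect depends entirely on whether demand growth outpaces efficiency, which is precisely the bet current product strategies are making.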
Cooling turns heat into water demand
Computation becomes heat, and heat has to go somewhere. This is where the environmental story gets more local and more concrete. Large AI facilities rely on a mix of cooling techniques, some of which can involve significant water use, especially under hot conditions. Operators prefer to highlight efficiency metrics and design improvements, which are real, but those metrics do not comfort residents when a new project arrives in a drought-prone region or when potable water enters the conversation.
Again, specifics are often obscured. Companies publish selective sustainability reports, and local reporting has to do the rest. What we know is enough to see the pattern. Datacenters need reliable cooling. In many places, water remains the simplest and cheapest way to achieve it. Even where reclaimed water is promised, the details matter: seasonal conditions, backup systems, local infrastructure, and whether municipal supplies are quietly carrying more of the burden than the marketing suggests.
The public debate around AI still treats electricity as its main resource, with water mentioned as an afterthought. That misses the way these systems bind the two together. Power generation uses water. Cooling uses water. Heat management shapes siting decisions. The result is not a generic footprint floating over “the environment.” It is a set of highly specific pressures landing on actual basins, aquifers, and utility systems.
The hidden workforce inside “automation”
There is another reason the brain metaphor flatters these systems: it hides labor. AI is often presented as the replacement for human effort, a machine that finally lets us bypass messy social processes. But the stack is saturated with work at every layer.
Some of that work is obvious once you look for it. People build and maintain the facilities. They manufacture components, pour concrete, run electrical systems, monitor operations, and perform the repetitive maintenance that keeps “always on” infrastructure from collapsing into “briefly on.” None of this appears in the chat window, but the chat window depends on it.
Some of the labor is hidden more deliberately. Data labeling, content moderation, safety review, reinforcement learning, and evaluation have often been pushed into global labor markets where wages are low and psychological protections are weak. Reporting over the last few years has shown how workers in countries such as Kenya were hired to clean up the ugliest outputs of generative systems or label data that companies later described in more abstract language. The story was sold as automation. In practice, many systems were automation wrapped around badly protected human labor.
Then there is the labor already embedded in the training data. Crawford’s point here is especially uncomfortable for the industry because it cuts through a favorite mythology. Models do not emerge from pure computation. They are trained on vast archives of human expression: books, articles, code repositories, forums, artwork, music, documentation, and the endless residue of people doing their jobs or making things online. Years of collective intellectual and creative effort become a raw material. The companies call this data. A more honest term, in many cases, would be uncompensated human contribution.
This matters economically and culturally. If the model’s value depends on ingesting the world’s creative and professional output, then “AI productivity” is not simply a machine generating wealth from nowhere. It is a system that centralizes returns from distributed human activity. That is part of why current fights over licensing, copyright, dataset access, and synthetic competition feel so bitter. They are not side issues. They are disputes over who gets to capture value from a new extractive pipeline.
Scarcity arrives in real places
For a while, it was possible to discuss AI as if its costs were mostly future tense. That period is over. The infrastructure race is now running directly into communities.
South Memphis is one of the clearest examples of what this looks like on the ground. xAI’s Colossus facility became a symbol of the current build-fast mood, with intense scrutiny over power supply, gas-fired equipment, permitting, and local air quality concerns. Residents and environmental groups did not need a seminar in machine learning to understand the issue. They could see a high-profile company receiving speed and deference while a historically Black community with longstanding health burdens was asked to absorb additional risk. The conflict was not philosophical. It was atmospheric.
That pattern repeats elsewhere with different local details. In Virginia, datacenter growth has collided with debates over transmission lines, rising demand, land use, noise, and who ultimately pays for grid expansion. Utility bills are not abstract. They are household constraints. When a region reorganizes its power system around server farms, the distributional question follows immediately. Which customers subsidize what? Which industries get priority access? Which neighborhoods inherit the visual and environmental footprint?
This is where the old language of “externalities” starts to feel inadequate. The effects are not outside the system. They are part of the business model. Cheap land, favorable regulation, tax incentives, fast permitting, municipal water access, grid connections, and pliable labor conditions are not incidental. They are inputs. If a company can secure them without paying the full social cost, the economics improve. Calling the harm an externality can make it sound accidental, when it is often a structural feature of how the project penciled out.
There is still room for tradeoffs. Some datacenter projects bring tax revenue, construction work, and local investment. Some AI applications will very likely prove useful enough to justify meaningful resource consumption, especially in science, medicine, or public service. The problem is not that every watt spent on AI is illegitimate. The problem is that the legitimacy question is barely asked before the land deal closes and the interconnection queue fills up.
Seeing AI as infrastructure changes the argument
Once AI is treated as infrastructure, several debates become less confused. Transparency is no longer a nice ethical add-on. It becomes basic accounting. How much power will this campus draw under realistic operating conditions? What water source will it rely on in August, not in the sustainability PDF? What emissions accompany the backup plan? What labor sits behind the data pipeline? What public subsidies are attached? Those questions sound less glamorous than AGI, which is probably why they matter more.
This reframing also punctures a common dodge in AI policy: the tendency to treat environmental costs as secondary because the technology is supposedly too important to slow down. That logic would be easier to take seriously if importance were actually demonstrated rather than assumed. Some uses of generative AI are substantial. Others amount to replacing a competent workflow with a more resource-intensive one because investors wanted a product story. A market can absorb a lot of waste when the waste is hidden in distant places.
There is a deeper cultural point here. For decades, the digital economy benefited from being imagined as an escape from physical limits. Information wanted to be free, software ate the world, and the internet felt like a realm where scale no longer had the same environmental texture as steel mills or railways. AI ends that illusion. It drags computation back into the visible world of land, water, factories, permits, and labor conflict. The code still matters. The model architecture still matters. But they matter inside an industrial apparatus, not outside one.
That is why Crawford’s formulation lands so hard. It is not just a poetic correction. It is a diagnostic tool. If we keep speaking about AI as if it were mostly a feat of disembodied cognition, we will continue asking the wrong questions and rewarding the wrong behaviors. We will praise sophistication while ignoring extraction. We will celebrate convenience while underpricing the landscape that makes it possible.
The useful debate starts lower to the ground. It asks what kind of buildout this technology requires, who is made more vulnerable by it, and which uses deserve the burden they impose. Until that accounting becomes ordinary, every “smart” response on a screen will carry a simpler truth beneath it: someone turned earth, water, electricity, and human work into this sentence.
Published April 2026