Corporate AI relies on the extraction of the planet's resources, people's labor, and data, contributing to the global climate crisis and environmental injustice.
Corporate AI systems are often celebrated as revolutionary tools that promise to transform industries, enhance productivity, and solve complex social problems. However, the development, maintenance, and use of these systems are built upon a deeply extractive relationship with the Earth. The planet is treated as an externality—a resource to be exploited rather than a living entity to be respected.
First of all, the production of AI is frequently portrayed as a purely digital or intellectual endeavor in the “clouds.” This abstraction hides the physical infrastructure and labor required to create and sustain AI systems. AI is not a distant, ethereal intelligence but a material network of people, servers, and computers located in real places on Earth, often in the form of massive data centers. These data centers depend on materials and energy extracted from the Earth, including rare earth metals, fossil fuels, and water. The extraction of these resources often occurs in marginalized communities, where environmental degradation and labor exploitation are rampant. For example, the mining of cobalt and lithium for hardware components has been linked to human rights abuses and ecological destruction in countries like the Democratic Republic of the Congo. The rapid expansion of data centers also pushes communities off the land they have lived and worked on for generations. The physical space AI occupies is not neutral; it is a site of conflict where corporate interests often override the rights and livelihoods of local communities.
Secondly, AI systems are inherently resource-intensive, requiring significant amounts of energy, water, and raw materials to function. Data centers consume enormous amounts of electricity, often generated from extractive industries like coal and natural gas. Training large AI models, such as OpenAI’s GPT-4 or Google’s Bard, requires computational power equivalent to the energy consumption of thousands of households for extended periods. Data centers also require vast amounts of water for cooling, straining local water supplies and exacerbating water scarcity in already vulnerable regions. When servers become obsolete due to the relentless drive for more processing power, they are discarded, contributing to the growing problem of electronic waste. The significant “embodied” carbon emissions of constructing data centers are also part of the life cycle of AI systems.
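The scale of that energy claim can be checked with a back-of-the-envelope calculation. The figures below are illustrative assumptions for a hypothetical training run, not reported values for GPT-4, Bard, or any specific model:

```python
# Rough estimate of a large training run's electricity use, compared to
# average household consumption. All inputs are illustrative assumptions.

GPU_COUNT = 10_000                # accelerators used for the run (assumed)
GPU_POWER_KW = 0.7                # average draw per accelerator, kW (assumed)
PUE = 1.2                         # data-center overhead factor (assumed)
TRAINING_DAYS = 90                # length of the run (assumed)
HOUSEHOLD_KWH_PER_YEAR = 10_500   # rough average annual US household use

hours = TRAINING_DAYS * 24
energy_kwh = GPU_COUNT * GPU_POWER_KW * PUE * hours
household_years = energy_kwh / HOUSEHOLD_KWH_PER_YEAR

print(f"Training energy: {energy_kwh:,.0f} kWh")
print(f"Equivalent to ~{household_years:,.0f} household-years of electricity")
```

Even with these conservative assumptions, a single run lands in the tens of millions of kilowatt-hours, on the order of a thousand or more household-years of electricity, which is consistent with the "thousands of households" comparison above.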
Finally, the dominant paradigm in corporate AI development is growth without limits, driven by the belief that bigger models and more data will unlock unprecedented economic value. This mindset treats the Earth as an infinite resource, ignoring the ecological and social costs of extraction and consumption that require more data, energy, and computational power. The extractive and resource-intensive nature of AI exacerbates the very problems it is often touted as solving, such as optimizing energy use or reducing waste. Additionally, marginalized communities, far removed from the corporate centers where AI products are used, disproportionately bear the costs of raw material extraction, data center construction, and energy generation. These communities face the immediate impacts of environmental degradation, displacement, and health hazards while reaping few of the benefits of AI advancements. The siting of these facilities can lead to protests, zoning conflicts, and political challenges. For example, in Arizona, fossil fuel projects are being expanded to help provide for more data centers while Black communities face continued health disparities caused by the projects, and Navajo Nation citizens continue to lack electricity on portions of the reservation. In Querétaro, the state government continues to provide incentives to these data centers in the midst of a drought.
The environmental impact and justice issues of corporate AI are starting to get the attention they deserve, though not nearly enough. Journalists, academics, and activists have been pointing out how AI systems are contributing to climate change, resource depletion, and environmental injustice. These critiques are vital, but they often focus on the symptoms of the problem rather than the root causes.
One of the most talked-about issues is the massive energy and water consumption of AI systems. Another major critique focuses on the environmental justice issues tied to AI. In response to these critiques, some tools have been developed to help mitigate AI’s environmental impact. For example, carbon calculators allow AI developers to estimate the carbon footprint of their models. These tools are a step in the right direction, but they often place the burden of responsibility on individual AI developers and users rather than addressing the systemic issues at play. While it’s important for individuals to be aware of their environmental impact, focusing on individual actions can distract from the larger problem: the role of corporations in driving the demand for resource-intensive AI systems. What’s needed is a shift in focus from individual responsibility to corporate accountability. Corporations must be held responsible for transitioning to green energy grids to power their data centers, designing hardware with longer lifespans, and ensuring that their supply chains are free from exploitation and environmental harm. This requires systemic change, not just individual action.
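The arithmetic behind the carbon calculators mentioned above is simple: energy consumed multiplied by the carbon intensity of the grid that supplies it. The sketch below illustrates that relationship; the function name, parameters, and grid-intensity figures are illustrative assumptions, not the API or data of any real calculator tool:

```python
# Minimal sketch of what a training-emissions calculator computes:
# energy used times the carbon intensity of the supplying grid.
# Names and figures are hypothetical, not any specific tool's API.

def estimate_emissions_kg(energy_kwh: float, grid_kg_co2_per_kwh: float) -> float:
    """Estimated CO2-equivalent emissions, in kilograms."""
    return energy_kwh * grid_kg_co2_per_kwh

# Illustrative grid carbon intensities (kg CO2e per kWh, rough figures):
GRIDS = {
    "coal-heavy": 0.9,
    "mixed": 0.4,
    "hydro-heavy": 0.03,
}

run_kwh = 1_000_000  # a hypothetical training run
for name, intensity in GRIDS.items():
    tonnes = estimate_emissions_kg(run_kwh, intensity) / 1000
    print(f"{name} grid: ~{tonnes:,.0f} tonnes CO2e")
```

Note what the arithmetic itself reveals: the same training run emits roughly thirty times more on a coal-heavy grid than on a hydro-heavy one. Which grid powers a data center is a corporate siting and procurement decision, not something an individual developer running a calculator can change, which underscores the shift from individual responsibility to corporate accountability argued for above.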
The existing dominant critiques shine a light on important issues, but they don’t go far enough in challenging the systems that create those issues in the first place. For example, while it’s great to talk about reducing AI’s carbon footprint, we also need to ask why AI is needed and why it is so resource-intensive in the first place. Is it because of technical necessity, or is it because corporations are chasing endless growth and profit? What’s often missing from these dominant critiques is a deeper discussion about values. Corporate AI is built on a worldview that sees the Earth and its inhabitants as resources to be exploited. A liberatory approach to AI, on the other hand, starts from a place of respect—for people, for communities, and for the planet. It draws from Indigenous knowledge and centers the communities that sustainably steward the land. It asks: What if AI wasn’t designed to extract resources from the Earth and exploit marginalized communities? What if it was built to respect ecological limits and prioritize justice? What if it was built upon systems of values that celebrate sustainability and harmony with the earth instead of an infinite growth mindset? These questions go beyond making AI “less bad”—they demand a fundamental rethinking of what AI is for and who it serves.
Further Reading
- Crawford, K. (2021). Atlas of AI: Power, Politics, and the Planetary Costs of Artificial Intelligence. Yale University Press.
- Escobar, A. (2018). Designs for the Pluriverse: Radical Interdependence, Autonomy, and the Making of Worlds. Duke University Press.
- Shiva, V. (2015). Earth Democracy: Justice, Sustainability, and Peace. North Atlantic Books.
- AI has an environmental problem. Here’s what the world can do about that.
- As generative AI asks for more power, data centers seek more reliable, cleaner energy solutions
- Why Microsoft made a deal to help restart Three Mile Island | MIT Technology Review
- How much water does AI consume? The public deserves to know – OECD.AI
- A bottle of water per email: the hidden environmental costs of using AI chatbots
- Microsoft’s Hypocrisy on AI
- In the shadows of Arizona’s data center boom, thousands live without power
- “This Has Nothing to Do With Clouds”: A Decolonial Approach to Data Centers in the Node Pole