About

Introduction

What is artificial intelligence? Where and how is it being used? How are its harms and benefits distributed? Is it “intelligent”? Is it inevitable? What are our alternatives?

Contributors to the liberatory AI ecosystem project came together because we share an interest in answering these questions. We are a group of researchers, artists, and practitioners from a variety of disciplines and institutions, united by our engagement with the current state of AI practice and theory.

From the field’s inception in the mid-twentieth century, systems called “artificial intelligence” have been deployed in contexts as far-ranging as cryptography, healthcare, language translation, and music production. Early proto-AI systems included chess-playing algorithms and therapy chatbots. Today, AI systems are being incorporated into many spheres of everyday life: students use ChatGPT to brainstorm project ideas, email programs run spam filters so that we only see relevant correspondence, and landlords use AI systems to run background checks on tenants (more on this later). The technologies behind these systems are actually quite diverse. They include models that structure data (e.g., topic modeling, k-means clustering), models that classify data (e.g., logistic regression, decision trees, convolutional neural networks), and models that generate data (e.g., large language models, diffusion models).
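For readers curious about what this diversity looks like in practice, here is a minimal, hypothetical sketch – our illustration, not drawn from any system discussed here, using the open-source scikit-learn library and invented data – contrasting two of the families above: a clustering model that structures unlabeled data, and a classifier that learns from human-supplied labels. The point is simply that very different techniques travel under the single banner of “AI.”

```python
# A minimal sketch contrasting two of the model families named above:
# k-means, which *structures* unlabeled data into groups, and logistic
# regression, which *classifies* data using human-provided labels.
# All data here is synthetic and purely illustrative.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# 100 two-dimensional points scattered around two centers, (0, 0) and (5, 5).
points = rng.normal(loc=[[0, 0]] * 50 + [[5, 5]] * 50, scale=1.0)

# Structuring data: k-means groups the points without ever seeing labels.
clusters = KMeans(n_clusters=2, n_init=10).fit_predict(points)

# Classifying data: logistic regression learns from labels we supply,
# then predicts a label for a new, unseen point.
labels = np.array([0] * 50 + [1] * 50)
model = LogisticRegression().fit(points, labels)
print(model.predict([[4.8, 5.1]]))  # most likely [1]
```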

What unites these systems is not a particular technological approach, but the values, hype cycles, and discourses through which they are planned and funded. That is, for the majority of AI systems being deployed today, AI is not (only) a set of pattern-recognition techniques applied to data. It also encompasses a specific organization of material resources and, through the marketing and hype surrounding AI, proposes a certain ideological ordering of the world.

As Luciano Floridi and others have pointed out, we are in the middle of a significant AI hype cycle. As Clea Bourne’s research has demonstrated, the hype cycle has been successful in part because it relies on appeals to emotion, framing AI as “a panacea, silver bullet, miracle solution, universal remedy, and magic answer for any problem, question, or issue one may have.” The popularity of artificial intelligence, then, is not based on any specific technical capability. Rather, AI’s popularity is based on speculation and rarely fulfilled promises of ease and a better future.

When we talk about AI, we’re sometimes talking about specific systems (e.g., ShotSpotter) in specific locations (e.g., Los Angeles). At other times, AI is not a computational system but an ideological one – a complex network of values, ethics, priorities, and resources – that shapes our material world.

Corporate AI Landscape

Our investigations into AI begin with the current state of corporate AI. Our focus is on corporate AI specifically, not on artificial intelligence as a subdiscipline of computer science. The term “corporate AI” draws attention to the very particular ways that corporations have set the terms of AI development. There are as many ways to approach AI development as there are innovative applications for AI technology, yet corporate AI has set a precedent for developing AI in ways that are extractive, environmentally harmful, and colonialist.

For example, as Amelia Dogan and Hongjin Lin write, within corporate AI practices “the planet is treated as an externality—a resource to be exploited rather than a living entity to be respected.” They point to the ways that corporate AI is entrenched in approaches that do not consider the welfare of the natural environment.  They echo our commitment to examining and denormalizing corporate AI approaches to resources:  “[W]hile it’s great to talk about reducing AI’s carbon footprint, we also need to ask why AI is so resource-intensive in the first place. Is it because of technical necessity, or is it because corporations are chasing endless growth and profit?” 

Ololade Faniyi builds on this critique as she explores the ways that corporate AI’s harms extend beyond the natural environment to the people working on AI. Tech giants rely on invisible labor, much of which is performed in the Global South. So many of these violences go unreported because, as Faniyi notes, “Workers are coerced into signing non-disclosure agreements that prevent them from speaking about their experiences – a contractual silencing that whistleblowers have accurately described as ‘modern-day slavery.’”

Throughout the essays, our contributors hold corporate AI to the highest standards, demanding that contemporary AI practice not perpetuate harms to the planet, to ourselves, and to viable futures for our communities. 

Dead Ends

It can be tempting to try to articulate a better way forward for each problem raised in the essays in the corporate AI section. But this is a trap – it limits our work to reacting to problems on the terms set by corporate AI. Rather than trying to suggest a solution for each critique, we have to accept that some of the paths corporate AI proposes are dead ends.

A dead end can be a stopping point because we reject the values that a particular approach prioritizes. For example, we are suspicious of attempts to make AI more “transparent” because transparency is not the same as accountability, and a system whose harms we understand is not better than a system that does not cause harm in the first place.

We also label some approaches dead ends because, frankly, we just don’t want to think about the problem in the way it has been described, and our time could be better spent elsewhere. For example, there are some contexts in which AI is simply not the solution. No amount of fast, accurate, “unbiased,” or “objective” AI can solve some types of problems, and we consider attempts to do so to be dead ends.

Ways Forward 

Our visions for other ways are not meant to be perfectly fitting solutions to the critiques and problems outlined above. Instead, they are speculative yet possible visions of other ways to be in relationship with technology. They are visions of reclamation and of harm reduction. They are calls to action in a moment designed to engineer freeze responses. We do not believe these visions will solve every problem with AI, but we also do not need to wait for perfect conditions to begin to rethink AI.

For example, Yujia Gao reminds us that communities can set their own standards for evaluating AI systems. Typically, AI systems are evaluated by the standards of their corporate creators. As Gao writes, “the failure to align evaluation metrics with community needs is not just a technical flaw – it is a structural issue rooted in capitalism and power asymmetries.” However, there is another way. She continues: “In a liberatory AI framework, evaluation is not an afterthought, a marketing tool, or a pretense of accountability – it is the fundamental process through which communities collaboratively define what ‘good AI’ looks like, who it serves, and how it aligns with their collective values.”

What’s Next?

We invite you to read and engage with the essays here, including each piece’s recommendations for further reading. As you read, you might find topics that we’ve missed, or ones that you think need a different treatment. We would love for you to join us in writing for the project and to have your help navigating the corporate landscape and its misleading dead ends. We are especially interested in your visions for liberation with (and without) AI.