Corporate AI, especially LLMs, reinforces colonial and capitalist structures by privileging Western, male, and Global North perspectives while excluding localized knowledges.
Corporations have developed and deployed (through markets) large language models (LLMs), particularly generative AI systems like ChatGPT, expanding their reach across geographic and political boundaries and broadening the range of languages and topics these models can process. On the one hand, LLMs developed by these corporations aim to represent and communicate with diverse realities. On the other, by prioritizing profit through broader market reach and scalability via larger models, they strategically promote a particular vision [LINK TO UNIVERSALITY] of the world, relying predominantly on English-language data and Western social perspectives. This reliance prevents these models from capturing the nuances of the cultures, languages, and societies in which they operate.
This strategic yet flawed attempt to represent the world universally while excluding local and situated contexts reveals a fundamental limitation of large language models (LLMs): the reproduction of stereotypes rooted in social structures such as racism and sexism. While corporations often promote their AI systems as universally applicable and beneficial to “all of humanity,” large-scale models systematically encode the perspectives of specific and typically privileged social groups: predominantly white, cisgender, Christian, English-speaking men from the Global North. Even though this kind of human being has historically been constructed as a universal way of being in the world, as demonstrated by centuries of colonization and its colonial consequences afterward (Grosfoguel 2006), by privileging this profile these LLMs embrace a particular and situated perspective.
Recent research illustrates how LLMs provide a limited perspective on different geopolitical regions, languages, and social groups. For example, Nyalleng Moorosi (2024) highlights the persistent limitations of large-scale AI models developed by Western tech companies, such as OpenAI, in processing African languages. ChatGPT, for instance, recognizes sentences in Hausa, Nigeria’s most widely spoken language, only 10–20% of the time. This demonstrates a constraint imposed by the model’s developers, as it lacks local linguistic and cultural contextualization.
This exclusion of diverse geographic and cultural perspectives is also evident in text-to-image generative models, which fail to capture regional diversity and instead reinforce dominant narratives and stereotypes (Hall et al. 2024). Research on these models in Europe, Africa, and Southeast Asia highlights how both automated and human evaluations of generated images vary significantly by location (Hall et al. 2024). Annotators from different regions frequently disagree on what constitutes geographic representation, reinforcing the broader issue of AI systems reflecting the prejudices of those who design and train them rather than fostering a situated perspective.
Additionally, a nearly universal gender gap in generative AI usage persists. Many generative AI systems are continuously trained on data acquired through use, such as the text of ChatGPT prompts. Data from 18 studies covering more than 140,000 individuals worldwide reveal that this gap exists across regions, sectors, and occupations (Otis et al. 2025). This disparity risks creating a self-reinforcing cycle: women’s underrepresentation in generative AI usage results in systems trained on data that inadequately capture women’s preferences and needs. Consequently, even if achieving gender-equal usage of generative models is not in itself a liberatory goal, these systems, marketed as general and universal, ultimately reproduce harmful gendered stereotypes, generate lower-quality recommendations and outputs for women and gender minorities, and further widen existing gender disparities in technology implementation.
By attempting to generalize the world, corporate AI fails to recognize the critical importance of context-situatedness. In doing so, it denies its own defining marker: its particularity and non-universality. Or, as Lauren Klein et al. (2025) state, it reflects a narrow definition of culture rooted in the terms of modernity.
Saying AI is colonialist implies that AI reinforces capitalism while also reproducing violent racial categories, as colonial modernity historically has. These models, built on Western-centric datasets, embody colonialidade do saber/coloniality of knowledge (Quijano 2005), privileging certain worldviews while marginalizing non-Western and local epistemologies. AI’s reliance on English makes this exclusion plain, naturalizing a narrow epistemology that frames Western knowledge, such as English-language training data, as universal while distorting or erasing alternative ways of knowing. This is clear in the examples above, which highlight how these prejudices manifest in AI’s poor performance in processing African languages (Moorosi 2024) and its failure to accurately represent non-Western cultural contexts (Hall et al. 2024). This asymmetry determines whose perspectives are considered valid and whose realities are ignored.
The neglect and erasure of culture is, in the last instance, a way of dominating different cultures; through this linguistic and epistemic hegemony, it translates into material consequences, exacerbating global inequalities through the consolidation of colonialidade do poder/coloniality of power (Quijano 2005). AI development remains concentrated in Western tech corporations that not only dictate technological and ethical standards but also extract data work from the Global South without equitable returns. This is what happens, for example, in Brazil, where 64% of AI data training workers are women, with two-thirds being mothers seeking additional income (Tubaro et al. 2025). At the same time, the outputs of these models reinforce racialized and gendered hierarchies, echoing historical and colonial patterns of domination (Santino Regilme 2024). For Dan McQuillan (2022, 66), “AI not only perpetuates racist discrimination but acts, at a deeper level, as a technology of racialization.” Think, for example, of the well-known cases of facial recognition technologies that misidentify Black individuals at higher rates than white individuals, leading to wrongful arrests and surveillance (Buolamwini and Gebru 2018). These errors are not mere technical flaws; they stem from how machine learning constructs and enforces categories of difference, “forcing closeness in data space” based on biased datasets that reflect existing social hierarchies (McQuillan 2022, 66). Even if efforts to improve facial recognition for marginalized groups do not necessarily lead to liberation, this systemic exclusion shows how AI strategically produces and sustains racialized systems of domination and exclusion, assigning different values and risks to individuals whose racial attributes differ from those of the dominant, so-called “universal” human being: usually white men from the Global North.
As María Lugones (2014) argues, colonialidade do gênero/coloniality of gender compounds these exclusions: generative AI models systematically misrepresent or erase gender-diverse identities while perpetuating harmful gender stereotypes. This digital coloniality does not merely “mirror” existing prejudices; it automates and scales them under the guise of technological neutrality. Os Keyes (2019) exemplifies this problem in their text “Counting the Countless.” For Keyes, the use of gender-recognition algorithms in facial recognition systems, which force individuals into rigid categories of “male” or “female” based on biological essentialism, misrepresents and erases trans and nonbinary people. These systems require legibility within a binary framework, often linking access to public resources or safety (such as bathroom access or legal ID recognition) to conformity with those categories. This is a form of gender coloniality: it automates, scales, and enforces a violent system of exclusion rooted in the sexism and heteronormativity of colonization, one that appears neutral or scientific but ultimately ignores situatedness in favor of a supposedly universal, yet colonial, way of being.
Beyond “misrepresentation,” these dominant practices and prejudices have a material impact on thousands of people. For example, besides delivering lower-quality outputs in non-English contexts, LLMs have been shown to exhibit broader social identity biases, such as favoring an ingroup while expressing negativity toward outgroups (Hu et al. 2025). These prejudices can reinforce societal discrimination and contribute to issues like political polarization. The challenge is not only improving representation but questioning who controls knowledge production, whose perspectives are embedded in AI systems, and how these systems shape global power dynamics, including authoritarianism, in the current historical moment.
Rooting the future of AI in our concrete collective experiences
I draw here on Rita Laura Segato (2025, 14) to argue that, instead of embracing abstract utopias shaped by evolutionist and Eurocentric perspectives on the future, utopias that can have an “authoritarian effect,” we should look toward the concrete experiences of peoples who are still today “communally and collectively organized.” As Segato suggests, the inspiration for possible futures is not to be found in the illusion of a pre-designed future, such as the colonizing dreams of Space, but rather in concrete experiences. These experiences are rooted in the lives of those who, after hundreds of years of genocide, exploitation, and violence, have continued to resist.
From solidarity economy practices (McQuillan 2022) to feminist data activism against feminicide (D’Ignazio 2024), many real-world experiences offer concrete practices for social change in relation to AI. One such example is the work of the Stop LAPD Spying Coalition, which challenges police surveillance in the Skid Row community. Their 2023 report exposes how the State’s “family policing” system uses AI and predictive analytics to punish mothers for systemic issues beyond their control, such as lack of housing or disability, reinforcing colonial patterns of domination (Stop LAPD Spying Coalition and Downtown Women’s Action Coalition 2023). These examples remind us that building just futures for AI must begin by rooting our efforts in the communal and situated struggles and solidarities that are already happening now.
References
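Buolamwini, Joy, and Timnit Gebru. 2018. “Gender Shades: Intersectional Accuracy Disparities in Commercial Gender Classification.” In Proceedings of the 1st Conference on Fairness, Accountability and Transparency, Proceedings of Machine Learning Research 81: 77–91. https://proceedings.mlr.press/v81/buolamwini18a.html.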
D’Ignazio, Catherine. 2024. Counting Feminicide: Data Feminism in Action. Cambridge, MA: MIT Press.
Grosfoguel, Ramón. 2006. “World-Systems Analysis in the Context of Transmodernity, Border Thinking, and Global Coloniality.” Review (Fernand Braudel Center) 29, no. 2: 167–87. http://www.jstor.org/stable/40241659.
Hall, Melissa, Samuel J. Bell, Candace Ross, Adina Williams, Michal Drozdzal, and Adriana Romero Soriano. 2024. “Towards Geographic Inclusion in the Evaluation of Text-to-Image Models.” In Proceedings of the 2024 ACM Conference on Fairness, Accountability, and Transparency (FAccT ’24), 585–601. Rio de Janeiro, Brazil. New York: Association for Computing Machinery. https://doi.org/10.1145/3630106.3658927.
Hu, Tiancheng, Yara Kyrychenko, Steve Rathje, Nigel Collier, Sander van der Linden, and Jon Roozenbeek. 2025. “Generative Language Models Exhibit Social Identity Biases.” Nature Computational Science 5: 65–75. https://www.nature.com/articles/s43588-024-00741-1.
Keyes, Os. 2019. “Counting the Countless: Why Data Science Is a Profound Threat for Queer People.” Real Life Magazine, April 8, 2019. https://reallifemag.com/counting-the-countless/.
Klein, Lauren, Meredith Martin, André Brock, Maria Antoniak, Melanie Walsh, Jessica Marie Johnson, Lauren Tilton, and David Mimno. 2025. “Provocations from the Humanities for Generative AI Research.” Preprint, arXiv, February 28, 2025. https://arxiv.org/abs/2502.19190.
Lugones, María. 2014. “Rumo a um Feminismo Descolonial.” Estudos Feministas 22, no. 3 (September–December): 935–952. https://periodicos.ufsc.br/index.php/ref/article/view/36755.
McQuillan, Dan. 2022. Resisting AI: An Anti-Fascist Approach to Artificial Intelligence. Bristol: Policy Press. https://rojen.uk/doc/resisting-ai-an-anti-fascist-approach-to-artificial-intelligence.pdf.
Moorosi, Nyalleng. 2024. “Better Data Sets Won’t Solve the Problem — We Need AI for Africa to Be Developed in Africa.” Nature 636, no. 8042 (December): 276. https://doi.org/10.1038/d41586-024-03988-w.
Otis, Nicholas, Solène Delecourt, Katelyn Cranney, and Rembrand Koning. 2025. “Global Evidence on Gender Gaps and Generative AI.” Harvard Business School Working Paper 25-023. https://www.hbs.edu/ris/Publication%20Files/25-023_8ee1f38f-d949-4b49-80c8-c7a736f2c27b.pdf.
Quijano, Aníbal. 2005. “A Colonialidade do Poder, Eurocentrismo e América Latina.” In A Colonialidade do Saber: Eurocentrismo e Ciências Sociais. Perspectivas Latino-Americanas, 117–142. Buenos Aires: CLACSO – Consejo Latinoamericano de Ciencias Sociales.
Santino Regilme, Salvador. 2024. “Artificial Intelligence Colonialism: Environmental Damage, Labor Exploitation, and Human Rights Crises in the Global South.” SAIS Review of International Affairs 44, no. 2 (Fall–Winter). Johns Hopkins University Press. https://muse.jhu.edu/article/950958.
Segato, Rita Laura. 2025. The War Against Women. Critical South Series. Cambridge, UK: Polity Press.
Stop LAPD Spying Coalition and Downtown Women’s Action Coalition (DWAC). 2023. DCF(S) Stands for Dividing and Conquering Families: How the Family Policing System Contributes to the Stalker State. https://stoplapdspying.org/dcfs/.
Tubaro, Paola, Antonio A. Casilli, Mariana Fernández Massi, Julieta Longo, Juana Torres Cierpe, and Matheus Viana Braz. 2025. “The Digital Labour of Artificial Intelligence in Latin America: A Comparison of Argentina, Brazil, and Venezuela.” Globalizations, published online February 19, 2025. https://doi.org/10.1080/14747731.2025.2465171.