Corporate AI is built by exploited laborers.

Corporate AI development depends on invisible labor, much of which is performed by workers in the Global South. This extractive reality stands in stark contrast to the techno-utopian promises of AI companies. My personal interactions with Kauna Malgwi, the Nigerian chairperson of the content moderators’ union, and my close attention to the whistleblowing exposés at MozFest, on BBC Africa, and on 60 Minutes revealed the deception and exploitation at the heart of AI content moderation. Workers are recruited under false pretenses – told they would be “language annotators” for African languages, with no indication of the traumatic content they would be made to review.

These workers arrive at the unassuming buildings of subcontractors such as Sama and Teleperformance, but once inside, they find themselves logging directly into the systems of Big Tech companies like OpenAI, Meta, and TikTok. Workers are coerced into signing non-disclosure agreements that prevent them from speaking about their experiences – a contractual silencing that whistleblowers have accurately described as “modern-day slavery.” They receive no psychological support despite being exposed to the most disturbing content imaginable: graphic violence, child abuse, and extreme hate speech. All this while being severely underpaid relative to the immense psychological toll of their work.

This human cost is devastating. Just days ago, a content moderator for TikTok was found dead in her apartment in Kenya. She had been dead for three days before anyone discovered her. Her name was Ladi Anzaki Olubunmi, and her story exemplifies the disposability with which these workers are treated. There is an added cruel irony here: TikTok pays Global North creators through its Creator Fund but systematically excludes African creators from the same compensation, even as it relies on African workers like Ladi to moderate the very content that generates its profits. Subcontractor companies like Teleperformance establish operations in Kenya, leveraging the country’s national call for tech investment to “provide jobs for Africans.” They recruit content moderators from across the continent with promises of work permits and annual tickets home. Yet these promises remain unfulfilled.

While Big Tech giants exploit African labor through content moderation, Chinese companies deploy a different but equally extractive approach. Chinese “smart safe city” initiatives, with scene-analysis and facial recognition technologies, are marketed across Africa as solutions for reducing crime. Bulelani Jili’s work has extensively documented this, and his findings suggest that there is no clear evidence these systems actually reduce crime rates. What is clear, however, is how authoritarian governments in Uganda and Zambia have weaponized these technologies to suppress anti-government opposition and dissenters.

The attraction of these systems to governments like those of Nigeria, Uganda, and Zambia becomes obvious: they extend state capacity for surveillance and control under the guise of public safety. These agreements are enabled by governmental greed and lust for power, offering new tools for monitoring and punishing dissent. At the same time, they extend Africa as a Chinese data territory, with people’s biometric and behavioral data flowing to Chinese companies and, by extension, the Chinese state – all without the consent of the citizens whose data is being harvested. Nor does this surveillance chain implicate only China: similar safety-marketed surveillance tech from Israel, the United States, the United Kingdom, and elsewhere connects several African countries to a global surveillance industry.

This dual exploitation – the Western and Eastern extraction of labor and data, and the extension of surveillance – represents two sides of the same colonial narrative. Ladi, denied a visa renewal and stranded in Kenya without the promised ticket home to Nigeria, died alone within this exploitative system. We may never get justice for her, as the insidious web of contractors, subcontractors, and multinational corporations diffuses responsibility and provides plausible deniability for tech giants.

This labor and data exploitation is not incidental but fundamental to how AI systems are built, maintained, and deployed globally. The racialized nature of this exploitation is also very familiar, as it follows centuries-old colonial patterns where Black and Brown bodies and data are extracted for wealth and power accumulation.

The pressure on African nations to align with competing techno-nationalist agendas creates an unending spiral of digital dependency. We see how Western and Eastern Big Tech corporations position themselves as saviors “bridging the digital divide” while actually extending colonial control. These implementations raise serious concerns about the intersection of state surveillance, data ownership, and labor and privacy rights.

The fundamental questions remain unaddressed: What data do the cameras, traffic surveillance systems, and phone decryption tools collect? How is this data perceived in authoritarian contexts—as an asset for the state or as something inevitably entangled with people’s rights? What are the implications of deploying technologies already demonstrated to exacerbate biases and heighten risks of misidentification, particularly within criminal justice systems?

In June 2024, Google posted images of African children playing football on the platform currently known as X. The caption of this now-deleted post shared that the most popular game in Africa was not, as some might think, “hide-and-seek with lions” but football. This post, coming on the back of Google’s own extraction, reflects the same tired narrative of exoticization and infantilization that accompanies technological engagement with the continent. Every time it seems that techno-critics are being killjoys about technology saviorism in Africa, we encounter content like this that further justifies our critique, especially when viewed through the lens of humanistic African feminist thought. This post exemplifies why the myth of “bridging the digital divide” functions as nothing but a colonial strategy to keep Africa tied to the imperial imaginary of the West.

African nations thus find themselves caught between competing forms of digital colonialism, which create perpetual debt conditions – financial, technological, and political – that restrict these nations’ autonomy while debilitating African workers in their saviorist wake.

Corporate AI presents itself as automated, objective, and magical. Yet it depends entirely on human labor – specifically, the judgment of workers who are deliberately hidden, underpaid, and psychologically damaged by that labor. This is not merely a failure of corporate ethics but a structural feature of how AI wealth accumulation operates: by extracting value from racialized bodies kept invisible to end users.

The concrete manifestation of corporate AI’s extractive logic is perhaps most cohesively illustrated in Marc Andreessen’s “The Techno-Optimist Manifesto,” which champions a vision of technological progress that depends entirely on the creation of a white, male, wealthy super-monohuman. This manifesto serves as a perfect case study of how technological discourse obscures the debilitating reality of AI development.

Andreessen’s opening claim that “Our civilization was built on technology” immediately raises the questions: Which civilization? Whose technology? And built on whose bodies? The manifesto’s universalizing “we” erases the differential impact of technological development across global populations.

This erasure is not accidental but structural. We have established how AI systems from Meta and OpenAI are quite literally built on the psychological trauma of African content moderators making $2/hour while being exposed to the worst content humanity produces. These workers bear the psychological burden of filtering graphic violence, child abuse, and hateful content so users can experience a “clean” online environment. Yet they remain invisible in techno-optimist narratives of progress.

The manifesto’s framing of technology as “lifting people out of poverty” obscures the reality that many technological supply chains depend on resource extraction that creates what Cameroonian historian Achille Mbembe terms “death-worlds” – zones where vast populations are subjected to conditions of living death. In the Democratic Republic of Congo, systematic sexual violence is used as a weapon of mineral control, as the techno-capital machine’s demand for cobalt and coltan drives conflict and exploitation. Yet the human cost of this extraction is conveniently absent from techno-optimist discourse.

This monohumanism represents what I identify as a third invention in Afro-Jamaican scholar Sylvia Wynter’s question of the Human within coloniality/modernity – what we might call ‘Man3’ – a conception shaped by our increasing entanglement with and fetishization of technology. We begin from the inescapable fact that the experiences discussed here are evidence of a racial capitalism whose roots lie in the invention of ‘Africa’ through the transatlantic slave trade, colonialism, and the persistence of coloniality. Just as Wynter describes the evolution from Man1 (the Renaissance political subject) to Man2 (the biological, Darwinian subject), we now witness Big Tech elites creating technologies whose production relies largely on racialized bodies categorized under ‘Man2’ – African and South Asian workers underpaid to perform the arduous, emotionally taxing work of data labeling and content moderation.

Our everyday lives become territories of surveillance, controlled by various platforms, from social media to work tools, health records, and transportation, to the scene-recognition tools of the streets. These systems increasingly separate us into those who adapt to machines and techno-hegemony and those who control these technologies – the data elites versus the data producers. The Techno-Optimist Manifesto reveals the extent to which this ‘Man3’ paradigm has become normalized, presenting a vision of progress that requires the continued exploitation of racialized bodies while promising a techno-utopian future accessible only to those already privileged within global hierarchies, whose prescriptive statements define our coloniality/modernity.

This monohumanism operates through a closed loop: African languages and stylistic choices are assimilated into AI systems through the labor of underpaid content moderators, only for those same people to later find their own writing flagged as “AI-generated” by detection systems – another cruel irony in which the very people whose labor makes AI systems possible are summarily excluded.

The concrete infrastructure of corporate AI reveals precisely what Wynter critiques – the overrepresentation of one vision of being human that makes other ways of being unthinkable. At this moment, the monohuman operates through Big Tech’s Euro-American-Chinese universalized imaginaries, while Africans remain perpetual laboring bodies positioned outside technological progress except as sites of extraction.

When the manifesto proclaims that “technology opens the space of what it can mean to be human,” we must then ask: Whose humanity is being expanded, and whose is being erased? In the techno-capital machine, who cleans? Who mines? Whose bodies labor? Whose knowledge is extracted then discarded? Can technology solve the problem of its own colonial logic? Who, in this vision, gets to be human?

The material reality of corporate AI development answers these questions clearly. Until these questions are confronted, any techno-optimism that ignores the differential distribution of technology’s benefits and harms simply reproduces colonial patterns of exploitation under a new name.

A growing number of scholars, whistleblowers, and activists are bringing these issues to light, though their work often remains marginalized in mainstream tech discourse. Abeba Birhane’s work on “The Algorithmic Colonization of Africa,” for instance, critiques how AI and algorithmic systems reproduce colonial power dynamics on the continent. Birhane demonstrates how AI systems developed primarily in Western contexts are deployed across Africa with little regard for local needs, contexts, or potential harms. Her framework helps us understand that what we are witnessing is a new form of colonization operating through algorithms and data extraction.

Bulelani Jili’s research on Chinese surveillance technology in Africa also provides well-researched documentation of how Chinese tech companies are expanding their surveillance infrastructure across the continent. Jili’s work reveals how authoritarian governance models are embedded within the technologies themselves. He also questions the rhetoric promoting these systems, which emphasizes crime prevention, accelerated emergency response, and technological modernization. Yet, as seen in the first such implementation in Nairobi, Kenya, there is a troubling lack of empirical evidence supporting claims about the effectiveness of these surveillance technologies. In fact, reports from Huawei frequently contradict those from Kenya’s National Police Service, raising questions about who benefits from these systems.

Whistleblowers such as Daniel Motaung, Kauna Malgwi, and Mophat Okinyi have brought firsthand accounts of exploitation to public attention, connecting digital rights to mental health care and decolonization. These former content moderators have risked everything to expose the traumatic working conditions at companies that contract with Meta, OpenAI, TikTok, and other platforms. Their testimonies have been featured in investigations by TIME magazine, the BBC, and 60 Minutes. Reports from Nigeria have also uncovered politicians weaponizing digital surveillance technologies to target their opponents and even spy on their mistresses, revealing how quickly surveillance tools shift from their stated purpose (crime reduction) to serving the personal and political interests of those in power.

Critical technology scholars, including Safiya Noble, Ruha Benjamin, Deb Raji, Joy Buolamwini, and Timnit Gebru, have further developed frameworks for understanding how technological systems encode and amplify existing social hierarchies. Nonetheless, we need a more holistic analysis – one that connects extractive labor practices, surveillance infrastructure, geopolitical technology competition, and the persistent devaluation of African lives – to see the full scope of how AI systems perpetuate coloniality. My own research aims to bridge these conversations by centering African feminist thought not just as a critique of existing systems but as a foundation for alternative frameworks for technological development.

Further Reading (Academic)

  • Barrett, T., Okolo, C. T., Biira, B., Sherif, E., Zhang, A. X., & Battle, L. (2025). African Data Ethics: A Discursive Framework for Black Decolonial Data Science. arXiv. https://arxiv.org/abs/2502.16043
  • Birhane, A. (2020). The Algorithmic Colonization of Africa. SCRIPTed, 17(2), 389-409.
  • Camp, S. M. H. (2005). Closer to Freedom: Enslaved Women and Everyday Resistance in the Plantation South. University of North Carolina Press.
  • Jili, B. (2022). Chinese ICT and Smart City Initiatives in Kenya. Asia Policy, 17(3), 40-50. https://dx.doi.org/10.1353/asp.2022.0051
  • Klein, L., & D’Ignazio, C. (2024). Data Feminism for AI. In Proceedings of the 2024 ACM Conference on Fairness, Accountability, and Transparency (FAccT ’24). https://doi.org/10.1145/3630106.3658543
  • Mbembe, A. (2019). Necropolitics. Duke University Press.
  • Ogundipe, M. (1994). Re-creating Ourselves: African Women & Critical Transformations. Africa World Press.
  • Steady, F. C. (1986). African Feminism: A Worldwide Perspective. In R. Terborg-Penn, S. Harley, & A. Benton Rushing (Eds.), Women in Africa and the African Diaspora (pp. 3-24). Howard University Press.
  • Weheliye, A. G. (2014). Habeas Viscus: Racializing Assemblages, Biopolitics, and Black Feminist Theories of the Human. Duke University Press.
  • Wynter, S. (2003). Unsettling the Coloniality of Being/Power/Truth/Freedom: Towards the Human, After Man, Its Overrepresentation—An Argument. CR: The New Centennial Review, 3(3), 257-337.

Further Reading (Popular Press)