Corporate AI threatens democracy

To reclaim our socio-digital futures, we must challenge Corporate AI’s capture of democratic politics and counter it by centering collective, solidaristic, and participatory forms of politics.

Business influence on politics is commonly thought of as discreet, happening behind closed doors and away from public view. Culpepper (2011) has called this “quiet politics”. But lately, the politics of Big Tech corporations, which control much of the AI pipeline, has been anything but quiet. The widely shared photos of CEOs from Meta, Amazon, OpenAI, Apple, and Google standing behind Trump at his 2025 inauguration—aptly captured in Barry Blitt’s satirical illustration below—are just one example of how this political influence has moved from back rooms to center stage. What this visibility reveals is a politics that is deeply particularistic or self-interested. Put simply, Corporate AI seeks to capture democratic political processes and imaginaries to advance its own interests—controlling algorithmic decision-making, steering regulation, and influencing democratic elections to prioritize profit and industry power over societal concerns and welfare.

Image credit: Barry Blitt for airmailweekly.

The term “capture” has been used by academics and commentators to describe various dynamics in AI development—for instance, how “tech oligarchs” seek to capture state resources and institutions to support personal gain (Cohen, 2025), or how the AI industry has taken control over academic research agendas (Whittaker, 2021). The term also has a long history in research on regulatory politics, where “capture” typically refers to the co-optation of regulatory bodies by the industries they are tasked with overseeing. For Carpenter and Moss (2013), capture is about consistently steering industry regulation, “in law or application”, away from the public interest.

Building on these perspectives, the language of capture is helpful for describing how decision-making processes and institutions that should serve broad publics are shaped by Corporate AI to serve private interests instead. But the relationship between AI and democratic politics extends beyond industry regulation proper. Political capture by Corporate AI takes place across multiple sites of politics that configure how AI systems are built and governed—and who benefits (or profits) from them in the long term. As we will see below, political capture begins with the technical development of AI systems—marked by opaqueness, data extraction, and centralized corporate control—but stretches to broader dimensions of democratic governance: shaping electoral processes, evading regulation, influencing policy and judicial processes through lobbying and campaign financing, and working to redefine our political imaginaries.

To grasp why Corporate AI’s grip on politics is a problem, imagine a world in which a single superintelligence, wired into every person’s brain, can instantly determine everyone’s political preferences, eliminating the need for elections or government. That’s pretty much the future that Andri Snær Magnason sketches in his dystopian novel LoveStar. The catch: this superintelligence is corporate-controlled, and no one can tell whether the preferences identified by its “Democracy machine” are real or manufactured. The result is little human agency and total corporate control, disguised as seamless efficiency.

Unfortunately, the possibility of replacing democratic institutions and collective decision-making with efficient algorithms is a seductive political vision for many in Corporate AI. In the United States, this imaginary is already being pursued by Elon Musk’s Department of Government Efficiency (DOGE), which hopes to replace civil servants with AI systems. But this vision also surfaces in the industry’s privileging of efficiency at the expense of public accountability (Neff, 2024). (Inefficiencies, in fact, may actually support civic innovation and help build trust with different publics; see Gordon and Mugar, 2020.) It surfaces, too, in slogans such as “move fast and break things” and “ask for forgiveness, not permission” (Lalka, 2024), which, as Gardiner (2024) sums up, celebrate innovation but can “often mask a darker, more authoritarian ethos.” As Cohen (2025, p. 9) writes, “technology oligarchs have systematically pursued a particular vision of technological progress that aims to advance by leaving messy humanity and messy humans behind.” This includes messy, unpredictable democracy.

What’s at stake, then, are the structures and processes through which we govern our societies. And Corporate AI is increasingly invested in reshaping these to serve its own interests. To understand how, it’s useful to follow McQuillan’s (2022, p. 2) framing of AI as a “layered and interdependent arrangement” comprising not just technology, but also institutions and ideology. Corporate AI operates politically across all these layers. Big Tech firms don’t just hold concentrated market power; they also control the digital infrastructures and services that governments, communities, and other actors increasingly rely upon (Rahman, 2017; Cohen, 2025). This infrastructural dominance enables them to convert technical control into political leverage and influence. For example, by lobbying for public investment in AI research and development, these companies can shape state priorities in ways that deepen public dependency on their technologies and reinforce their own market power (Whittaker, 2021). At the same time, control over key infrastructures and services makes it easier for them to sidestep public oversight and regulation (Schaake, 2024)—capturing political decision-making for themselves while shifting AI risks and harms onto society at large. 

How does Corporate AI wield power politically—and with what implications for democratic processes and institutions? We can think of political capture as occurring across various sites of political decision-making.

It begins at the scale of technology design and development, where decisions are made about how AI systems are built and trained. What and whose data are used, and with whose consent? Who gets to make decisions about safety, bias, or risk? Who governs and evaluates AI systems once they are developed? Corporate AI development has been marked by extractive and opaque data practices, with little public accountability. While a growing body of work seeks to counter these practices through participatory forms of AI development, open-source models, or decentralized models of ownership, the current industry landscape remains characterized by anti-democratic logics of “centralization and control” (Neff, 2024).

But corporate efforts to shape AI politics go well beyond system design or deployment—they cut into the core of democratic institutions and values. One clear example is direct investment in politics through lobbying and campaign financing—what political scientists call instrumental business power. (For a recent discussion of corporate political spending in the U.S. and how it undermines democracy, see Torres-Spelliscy’s Corporatocracy, 2024.) Big Tech’s political spending in the U.S. has grown steadily across local, state, and national levels; while the tech industry once leaned Democratic, several major companies backed Trump in the 2024 election, signaling a shift in political alignment. Amazon, for instance, increased its lobbying expenditures nearly twelve-fold between 2009 and 2022—from US$1.81 million to US$21.38 million, according to data from the Senate Office of Public Records (available on Statista). In fact, lobbying by tech firms hit a record high in 2024, with “one lobbyist for every two members of Congress”. Much of this political spending has focused on weakening or blocking regulation that might constrain corporate control and profits. And it’s not just legislatures—tech billionaires are also targeting state courts, pouring money into judicial elections like Wisconsin’s race for a new state Supreme Court justice. The outcome of this race—where the Musk-backed candidate lost—offers a glimmer of hope that voters can resist overt attempts by Corporate AI to buy political influence.

Under Trump’s second administration, we are also witnessing the direct capture of state institutions in ways that advance the interests of tech elites—most notably, Elon Musk. DOGE, spearheaded by Musk, has launched efforts to dismantle key federal agencies, including USAID and the Department of Education. DOGE’s restructuring approach closely follows a private equity playbook—identifying government functions as inefficient, then hollowing them out under the guise of reform. This power grab has also enabled Musk to weaken regulatory bodies overseeing his companies. Most strikingly, the Federal Aviation Administration has awarded Starlink a major federal contract, raising conflict-of-interest concerns given the agency’s ongoing role in regulating Musk’s ventures. (Potential conflicts of interest have also been raised in relation to USAID.) This explicit convergence of public authority and private interest marks a troubling new phase in Corporate AI’s capture of American politics.

This story, however, extends far beyond the United States. As multinationals, Big Tech firms—and the technologies they control—are shaping politics across the world. In Brazil, Elon Musk’s platform X (formerly Twitter) openly defied Supreme Court orders to curb disinformation networks, with Musk even encouraging users to bypass judicial restrictions. He has also suggested that the UK government should be overthrown and signaled alignment with far-right actors in Europe, including public support for Germany’s Alternative für Deutschland (AfD) party. In Turkey, X suspended opposition accounts amid mass protests, raising further concerns about authoritarian entanglements.

These issues point to a deeper concern: the role of AI-powered social media platforms in shaping democratic processes and elections. With business models built on the attention economy, platform algorithms often prioritize emotionally charged content to maximize user engagement, increasing the potential for misinformation to spread. In Brazil, such dynamics have visibly shaped electoral outcomes since 2018, with studies showing that manipulated content dominated WhatsApp groups and deepened political polarization. Meta’s recent rollback of content moderation policies has only intensified concerns over its impact on democratic processes. Beyond disinformation, unaccountable data practices and algorithmic systems are being weaponized to support mass surveillance and authoritarian control. As Anne Applebaum (2024) shows in Autocracy, Inc., autocratic regimes are not threatened by digital technologies—they are empowered by them.

Meanwhile, corporate leaders are increasingly open about the protection they expect from governments. Meta’s Zuckerberg, for example, has stated that he expects the U.S. government to shield American tech giants from EU regulation—suggesting that the defense of corporate power should take precedence over democratic governance on the global stage.

These interventions are not just about control over institutions or policy—as noted earlier, they are also about capturing our political imagination. Corporate AI seeks to influence the production of knowledge and public discourse on AI. As Whittaker (2021) shows, industry funding has captured academic research, building expert communities that legitimize corporate interests. At the same time, tech leaders promote belief systems—like longtermism—that shift focus from present harms to speculative futures. As Benjamin (2024) and Gebru and Torres (2024) argue, these ideologies sidestep issues like racism, sexism, and imperialism in favor of grand narratives about saving humanity. Underlying the struggle for democratic politics, then, is a struggle over our sense of political possibility. As Ruha Benjamin reminds us, domination doesn’t always announce itself; sometimes, it works by narrowing what we believe is possible.

As the discussion above makes clear, a growing body of interdisciplinary scholarship and public commentary is raising concerns about the anti-democratic implications of Corporate AI. If you want to dig deeper, the Further Reading section lists all the sources drawn upon here.

While these critiques offer important insights into the layered relationships between Corporate AI and democracy, much of the English-language literature on this problem remains anchored in U.S. and Western European contexts, often framed through liberal-democratic ideals within a capitalist order. This framing risks limiting our capacity to confront the deeper challenges posed by Corporate AI’s increasing concentration of power. In some analyses, for example, there is a disconnect: scholars go to great lengths to document how Big Tech routinely evades regulation and public accountability—only to conclude that the solution lies in stronger regulation and oversight within liberal democratic frameworks. But if the very structures meant to hold power in check have already been captured or hollowed out, does this response truly hold up?

If Corporate AI’s project of political capture is global—and, increasingly, framed as civilizational—then our analytical lenses and responses must be equally expansive, including a wider range of geographies, experiences, and perspectives on democratic politics. This means engaging with alternative imaginaries of democracy and resistance, especially those rooted in solidaristic, decolonial, anti-capitalist, and collective strategies. Inspired by Ruha Benjamin’s manifesto on imagination, we must ask: what other democratic futures become visible when we look beyond the dominant frameworks? And how might these reorient our understanding of what meaningful, democratic politics and alternative economic systems could look like?

Further Reading

  • Applebaum, A. (2024). Autocracy, Inc.: The Dictators Who Want to Run the World. Random House.
  • Benjamin, R. (2024). Imagination: A Manifesto. W. W. Norton & Company.
  • Carpenter, D., & Moss, D. A. (2013). Preventing Regulatory Capture: Special Interest Influence and How to Limit It. Cambridge University Press.
  • Cohen, J. E. (2025). Oligarchy, State, and Cryptopia (SSRN Scholarly Paper No. 5171050). Social Science Research Network. https://doi.org/10.2139/ssrn.5171050
  • Culpepper, P. D. (2011). Quiet Politics and Business Power: Corporate Control in Europe and Japan. Cambridge University Press.
  • Gardiner, B. (2024). How Silicon Valley is disrupting democracy. MIT Technology Review. https://www.technologyreview.com/2024/12/13/1108459/book-review-silicon-valley-democracy-techlash-rob-lalka-venture-alchemists-marietje-schaake-tech-coup/ 
  • Gebru, T., & Torres, É. P. (2024). The TESCREAL bundle: Eugenics and the promise of utopia through artificial general intelligence. First Monday. https://doi.org/10.5210/fm.v29i4.13636
  • Gordon, E., & Mugar, G. (2020). Meaningful inefficiencies: Civic design in an age of digital expediency. Oxford University Press.
  • McQuillan, D. (2022). Resisting AI: An anti-fascist approach to artificial intelligence. Policy Press.
  • Neff, G. (2024). Can Democracy Survive AI? Sociologica, 18(3), 137–146. https://doi.org/10.6092/issn.1971-8853/21108
  • Rahman, K. S. (2017). The New Utilities: Private Power, Social Infrastructure, and the Revival of the Public Utility Concept. Cardozo Law Review, 39, 1621.
  • Schaake, M. (2024). The Tech Coup: How to Save Democracy from Silicon Valley. Princeton University Press.
  • Torres-Spelliscy, C. (2024). Corporatocracy: How to Protect Democracy from Dark Money and Corrupt Politicians. NYU Press.
  • Whittaker, M. (2021). The Steep Cost of Capture (SSRN Scholarly Paper No. 4135581). Social Science Research Network. https://papers.ssrn.com/abstract=4135581