Over a decade ago, the Center for Land Use Interpretation took pictures of the internet. These were not photos of people at computer screens in libraries or cafes. Most were of anonymous-looking office buildings and squat structures behind manicured trees and plants. They were mostly photos of data centers, the specific places and spaces the internet occupies. The internet and AI are cables buried underground, smaller cables hanging perilously through tree branches to enter a house, and data centers in office parks around the country.
By contrast, corporate AI tries to convince us that the cloud is far away. As Gonzalez Monserrate writes, “[l]ike a puffy cumulus drifting across a clear blue sky, refusing to maintain a solid shape or form, the Cloud of the digital is elusive, its inner workings largely mysterious to the wider public.” Corporate AI benefits from divorcing us from the places and spaces it exists in. On its page selling cloud computing, Google tries to convince corporations that “cloud computing enables companies to access and manage resources and applications anywhere there’s an internet connection.” The corporate cloud and AI let users no longer be somewhere; they promise an untethered future. By abstracting away space and place, a form of universalizing, corporate AI can assert that its findings are universal, not influenced or bound by the specific locations it originates in.
For example, in a recent demo of one of Google Cloud’s products with Deutsche Telekom using the Gemini Multimodal Live API, a demonstrator seemingly takes a photo of a larger-than-human vibrant floral installation. The wall towers over the demonstrator with large green leaves and bright pink flowers, interspersed with red and yellow blooms. Once the app session starts, the camera identifies the flowers being shown, and the demonstrator prompts the device for more detail about a tiered yellow flower. The device identifies it as a Guzmania. The demonstrator then asks where the plant is from, and the device recites facts about the Guzmania originating in South and Central America. The minute-and-a-half demo then wraps up. In that short demo we are transported to a technology partnership with a German telecommunications firm, where a host of exotic plants has been imported and carefully arranged at a conference venue, we are not told exactly where, so that information about the plant’s origins can be shared. The only place named in the clip is South and Central America. We can see how the space and place where the AI is being demonstrated is treated as unimportant, much less which data center the AI was trained in. The AI is universal, coming from no specific space or place, but the flowers in their bright tangibility must be from somewhere else.
The actual infrastructure of AI must be located in a specific place: not only a geographical space, but a place embedded in a local context, like those in the photo series from the Center for Land Use Interpretation. This concern with corporate AI is discussed further under the principle of harmony with Earth. Abstracting out questions of space and place allows corporate AI to shun questions often investigated by critical geography, environmental history, and other fields using spatial methods. Dodging questions of space and place also allows corporate AI to cast off questions about how it concentrates wealth accumulation in a few tech hotspots in the Global North. Rather, the flowers must be the object of attention, not the AI itself.
Reading space and place into corporate AI raises questions not just of infrastructure but of where AI is being developed. Much of AI is produced by developers in the Global North, which erases other forms of knowledge produced around the world. The result is predictions and datasets based mostly on knowledge from specific places, which sell their cultural assumptions as universal. This concentration of AI also results in AI flattening our understandings of space and place.
A recent study of ChatGPT and DALL-E 2 asked the generative AI models to produce images of 64 cities around the world (Jang et al., 2024). The authors found that when asked to produce images of Boston’s white neighborhoods, the models returned well-kept brownstones, whereas when asked for images of the Black community, the images showed ill-maintained streetscapes with plain architecture. The generative AI was reinforcing racist notions of place when producing images of racialized communities. The study also found that while some of the generated images were place-specific, such as those highlighting New York’s pre-war architecture or Singapore’s rainforest vegetation, many flattened notions of place, omitting notable landmarks people may associate with cities (e.g., the Sydney Opera House) so that a “generic landscape of an urban environment is rendered.” These findings highlight how, in many generative AI models, the specificity of place can be erased through normative images constantly reproduced.
Previous work has shown disparities in how well object recognition AI performs across the places where images were taken (de Vries et al., 2019). The authors give the example of an image of soap taken in Nepal: the picture shows two bars of soap and a well-used sponge, and most object recognition systems identified it as food of some sort, while the same systems easily identified a bottle of soap on a sink in the UK as part of a sink or a soap dispenser. Many image datasets are disproportionately concentrated with images from the Global North. Researchers have offered datasets with common objects from more places, such as Africa (https://victordibia.com/cocoafrica/static/assets/paper/draft.pdf), but simple fixes to representation bias do not address the historical biases that corporate AI continues to perpetuate against specific places. Increasing representation from specific places does not make AI tools universal; it reinforces the idea that AI is meant to be abstracted from space, instead of grounded in specific geographies with specific contexts (link to the context principle).
Further Reading
- Does Object Recognition Work for Everyone?
- Jang, K.M., Chen, J., Kang, Y. et al. Place identity: a generative AI’s perspective. Humanit Soc Sci Commun 11, 1156 (2024). https://doi.org/10.1057/s41599-024-03645-7