Artificial Whiteness: Politics and Ideology in Artificial Intelligence

by Yarden Katz, Columbia University Press, 2020, 352 pp. 
Paperback $28.00 ISBN: 978-0-231-19491-4 

The recent proliferation of books, white papers, and think tank discussions on algorithmic bias, AI ethics, and AI for good might already be more than anyone can keep up with in one human lifetime. Yarden Katz's Artificial Whiteness: Politics and Ideology in Artificial Intelligence asks why there is suddenly so much talk about making AI ethical, fair, and good. The book begins by tracing the history of the label "artificial intelligence" and its usage, as it has fallen in and out of fashion over the decades since its initial appearance in the mid-1950s. Katz, who works in systems biology at Harvard Medical School and has done work under the label of AI, now questions the very use of that label.

The book examines the shifting definitions of AI and the varying uses of the label to brand different technologies. Katz finds that "attempts to ground AI in technical terms, along a set of epistemic considerations or even scientific goals, could never keep this endeavor going" (164). What then explains the appeal of the AI label? What keeps the endeavor going? Katz's way of answering these questions might seem outlandish. The author turns not to thinkers typically associated with AI but to Toni Morrison, Herman Melville, W. E. B. Du Bois, and Cedric Robinson. Reading these writers' accounts of whiteness, Katz noticed a resemblance to artificial intelligence. Like whiteness, artificial intelligence "gets its significance, and its changing shape, only from the need to maintain relations of power" (164).

Artificial Whiteness is not another book of AI ethics or AI for good. It is a critical genealogy of AI that is informed by scholarship on race. By looking at multiple projects over time that have used the AI label, Katz finds that their commonalities are less technical than ideological and financial. The book documents what different approaches to AI have had in common: funding by the U.S. military-industrial complex and marketing that uses the tropes of white settler-colonial manifest destiny. AI has an ideological life of its own beyond any particular computing process. 

The AI label came back into fashion in the mid-2010s. Projects previously described as Big Data began to rebrand as AI. Reports of platform companies using user data for behavioral manipulation and Edward Snowden's revelations about NSA surveillance had generated public scrutiny. Rebranding Big Data as AI, in Katz's view, helped deflect that scrutiny and shift the conversation to "futuristic machines" (69). We are now in a moment when, as Katz notes, "'AI' is applied to projects that use well-worn computer technologies that do not depend on either recent developments in parallel computing or particularly large data sets or neural networks" (68).

Artificial Whiteness provides a way of seeing AI as an ideology, a political and economic project that presents itself as technology. Katz outlines the ideology of AI in terms of three "epistemic forgeries" (94). The first forgery is the idea that AI is universal, as if intelligence could exist apart from any social context. The second forgery is the idea that AI surpasses human thought, as if human thought were only a calculation in a controlled setting like a game. The third forgery is the idea that AI arrives at knowledge on its own, as if AI's developers were not responsible for setting the conditions under which AI arrives at its knowledge.

There is a growing awareness of algorithmic bias, along with efforts to correct bias. Artificial Whiteness discusses, for example, projects that aim to improve machine learning for facial recognition so as to better recognize race and gender. Katz, however, warns that improving facial recognition’s sensitivity to a greater diversity of faces “ultimately enhances the carceral eye” (178). It is troubling to read that correcting algorithmic bias in facial recognition systems might exacerbate incarceration and policing, but it is a crucial reminder that oppression is not just a technical glitch. Artificial Whiteness warns that the ideology of AI, like the ideology of whiteness, can mutate to maintain oppressive institutions and social structures: “new computational engines come with old epistemic forgeries” (226).  

A highlight of the book is a section about a group in Los Angeles called the Stop LAPD Spying Coalition. Based in heavily policed communities, the group researches Los Angeles Police Department surveillance and data collection practices. With this research, the Stop LAPD Spying Coalition organizes against the use of predictive policing technology that targets their communities. The group’s work shows a way of questioning and challenging flawed technologies and the unjust social structures and institutions from which they emerge.  

Katz's book is a much-needed challenge to the mystique of the AI label. It arrives at a moment when the flood of discourse on ethical AI and AI for good may signal the continuation of AI's "epistemic forgeries" by other means. Linking the history of AI to the history of whiteness, and drawing inspiration from the grassroots research and organizing of the Stop LAPD Spying Coalition, Artificial Whiteness brings into focus the political and economic motivations and consequences of AI ideology.

Gregory Laynor, Thomas Jefferson University