The Promise of Artificial Intelligence: Reckoning and Judgment; Cloud Ethics: Algorithms and the Attributes of Ourselves and Others (Dual Review)

The Promise of Artificial Intelligence: Reckoning and Judgment by Brian Cantwell Smith, Cambridge, MA: MIT Press, 2019, 184 pages, $24.95 hardcover (ISBN: 9780262043045)

Cloud Ethics: Algorithms and the Attributes of Ourselves and Others by Louise Amoore, Durham, NC: Duke University Press, 2020, 232 pages, $25.95 paperback (ISBN: 9781478008316)


Brian Cantwell Smith’s latest work is a brief but serious engagement with the history and philosophy of artificial intelligence (AI). Its central thesis is that AI systems as we currently know them are excellent at a kind of informed calculation, which Smith terms reckoning, but that they are still far from being able to form the situated understanding of consequence typical of human decision making, which he terms judgement. With these two concepts as a scaffold, Smith embarks on an ambitious and brisk trip through AI’s history, its present, and its future, providing evidence for his core thesis and ultimately offering some initial prescriptions for how best to utilize AI for the benefit of society.

Smith builds his simple framing of reckoning and judgement into a tight yet powerful conceptual scheme for analyzing AI in relation to the human. The human capacity for judgement arises, Smith argues, from a normative deference towards the world. It is precisely this constituent of genuine intelligence that Smith claims AI systems are not yet capable of and predicts they will not be for the foreseeable future. Smith makes extensive use of concepts developed in his prior work, most notably On the Origin of Objects (1996), such as registration, objects, and ontological schemes. This pre-existing conceptual machinery evolves here into tools for distinguishing human-style judgement from machine-style reckoning and for explaining both in commensurable terms. Humans who hold their concepts accountable to the world have a sense of the stakes of their actions that is conspicuously absent from computers. In the words of John Haugeland, a major influence on Smith, computers “don't give a damn” (108). Smith elaborates on what “giving a damn” means in this context and shows how to determine when a system, human or otherwise, can be said to be capable of judgement.

Smith covers the failure of Good Old-Fashioned AI (GOFAI), the rigid Knowledge Representation-focused AI of the 1970s and 1980s, in an account informed by its history and illuminated by his framework (Chapters 2-4). GOFAI systems were incapable of dealing with anything that was not hard-coded into them ahead of time. In Smith’s terms, GOFAI systems were merely registering human registrations. Most damningly, these systems’ designers assumed that the world itself was neatly divisible into distinct objects with unambiguous properties, an assumption Smith traces to Descartes and certain brands of philosophical realism. The kind of systems that might suggest placing a kidney in boiling water to treat an infection inevitably made egregious errors because their connection to the world we actually live in, where boiling water both cures the infection and kills the patient, was unavoidably shallow and rigid. Starved of any ability to register the world directly, such systems had to be hand-fed increasingly verbose yet shallow encodings of human registrations, correcting these rigidities with ever more finely wrought rigidities. GOFAI systems ultimately found their uses and live on in technologies such as the Semantic Web, but they fell far short of what most would consider the promise of AI.

Leaping ahead a decade or three, Smith acknowledges the staggering successes of “second-wave AI,” such as deep learning, but uses his framework to qualify them as successes of reckoning and of higher-fidelity, conceptually open registration of the world (Chapter 5). Algorithms that work on large amounts of low-level data do not require the kind of ontological scaffolding that GOFAI approaches did and do not assume the world to be made of well-defined objects. Instead, they are capable of subconceptual nuance and use this nuance to produce semantically meaningful computations that align remarkably well with specific facets of our human understanding of the world.

Still, Smith is clear-eyed about these systems’ limitations and critiques their widespread use for classification. He argues that the conceptual collapse inherent in reducing a complex neural network’s output to a tag of an image’s contents or a credit score recapitulates the core failures of GOFAI. The difference is that no one ever placed a kidney in boiling water to kill an ear infection because a computer said to, but real decisions are made, often in an automated way, based on the output of second-wave AI systems every day. The harms may be more diffuse and subtle, but for Smith they are no less inevitable.

This leads to the main prescriptive thrust of the book: we should always incorporate human judgement into the use of these powerful reckoning systems (Chapters 11-13). Smith provocatively questions the potential of initiatives like “explainable AI,” reasoning that the terms with which AI systems would explain themselves demand the very conceptual collapse that is the source of their harms. This calls into question whether a technical “solution” to algorithmic bias is even possible. Instead, Smith commends research programs already underway that seek to keep humans in the loop or, even more intriguingly, seek to augment human judgement with algorithms in the loop.

This book’s main limitation is the obverse of its strength: its brisk pace and wide scope can leave a nagging sense of unexplored connections. The politics and ethics of classification lurk throughout Smith’s critique of judgementless reckoning and remain an important but unexplored extension of his concerns here. I would have liked Smith to relate Bowker and Star’s concept of torque, explored in Sorting Things Out: Classification and Its Consequences (1999), to the societal consequences of the conceptual collapse he critiques, and to consider how judgement might work to mitigate them. It would also have strengthened this account to hear Smith’s thoughts on when humans lack what he calls judgement. Odious ideologies such as racism and sexism, which we are now seeking to purge from AI systems, have resided within humans for far longer than they have within algorithms. In Smith’s terms, could we see these as a kind of conceptual collapse? What might that mean for Smith’s proposed solution of inserting human judgement into these processes? Finally, Smith’s identification of deference to the world as a core of judgement seems to ignore the fact that AI systems increasingly create parts of social reality (e.g., credit scores) themselves. Deference to the world thus increasingly means deference to algorithms and their output. How this might challenge or expand his project is left to future critiques or extensions of this work.

With provocative claims and tightly crafted conceptual frameworks, Smith has provided one vision of how philosophically and historically informed studies of information might more effectively contribute to the ongoing project of realizing AI’s benefits while avoiding its societal harms. Ultimately, few writers are able to bring so many disciplinary perspectives to bear upon this topic.

Political geographer Louise Amoore is one of these few. Cloud Ethics: Algorithms and the Attributes of Ourselves and Others is a far-reaching and ambitious take on the challenge of reconciling ourselves to algorithmically mediated existence. Amoore’s proposal for her titular cloud ethics construes algorithms as inherently political and interrogates their development, deployment, and co-constitution with modern society.

While Smith frames his distinction between reckoning and judgement around the ontological and epistemological preconditions for human-style intelligence, Amoore grounds her approach in the opacity of identity and the inherently political process of forming relations between self and other, phenomena she locates in both humans and algorithms. This is an original take on the mechanism by which algorithms become sociopolitical actors and an exciting update of the work of the thinkers she draws on. Amoore connects the opacity of how we attribute authorship, agency, and responsibility with individual human political identity and extends this account to algorithms. For Amoore, algorithmic systems acquire a political identity absent from prior kinds of information systems, like databases, precisely because of their opacity to us and even to their creators. Their outputs, like human actions, emerge from the unknowable. The reach of their power, and its danger, comes from algorithms’ ability to synthesize other political subjects as input and deploy pattern recognition on a global scale. She gives the example of facial recognition technologies deployed against protestors in Baltimore (Chapter 1), which may have been honed on protests in Turkey years before. Amoore’s compelling observation is that the global use of these algorithms has the power to limit what democratic participation can be, such as when would-be Baltimore protestors were detained on their way to the protest.

Amoore’s inherently ethicopolitical focus relieves her of the criticisms leveled at Smith above, but her work is far less digestible and actionable, particularly for practitioners. Amoore does call for hands-on engagement with algorithms and AI systems but is somewhat schematic or speculative about exactly how such explorations might combat the maladies she diagnoses.

As was the case with Smith’s book, though, these weaknesses are the obverse of the strengths of Amoore’s book. Amoore’s proposal for cloud ethics is speculative in a way that invites experimentation and preserves possibility. This is key to addressing what she sees as algorithms’ chief ill: the foreclosure of political possibility. This foreclosure stems from algorithms’ probabilistic outputs, which attempt to predict and anticipate possible futures. In doing so, particularly when deployed at scale by governments and corporations, they shape those futures in subtle yet increasingly pervasive ways. The kind of neat analytical framework available to Smith is incompatible with Amoore’s approach and goals. Amoore’s cloud ethics is not itself a system of thought but a proposal for how to think about new forms of political engagement with and through algorithms. Its success lies not in how well Amoore has articulated it but in whether artists, scholars, and activists take up her call and realize the as-yet unknown potentialities algorithms might offer.

This difference in approaches is perhaps best highlighted by the convergences the two books nonetheless produce. Take, for instance, algorithmic transparency, about which both authors are pessimistic. For Smith, the transparency of reckoning processes is incapable of endowing them with the kind of existential commitment he sees as critical for judgement. Amoore, however, sees calls for transparency as ignoring and obscuring the most important attribute of algorithms: their opacity. Having first linked opacity with identity and subjectivity, Amoore sees algorithms’ opacity as key to their “becoming-political” (158). Their inherent unaccountability mirrors that of human political subjects and constitutes a subtle but powerful mechanism for their political action and identity. A second convergence is that neither thinker believes in the possibility of reducing ethics to mere rules. For Smith, rules may be derived from a capacity for judgement but cannot embody it. Amoore, by contrast, draws on Foucault to move away from ethics as code and towards “ethics as the inescapably political formation of the relation of oneself to oneself and to others” (7). For both, the reasons behind this rejection are core to their respective projects.

Beyond these divergent convergences, there are promising areas of surface similarity that would bear careful investigation. In particular, it would be interesting to explore the relationship between Smith’s registration and Amoore’s concept of algorithmic aperture, between Amoore’s opacity and Smith’s concept of the ineffable and infinite richness of the world, or between Smith’s conceptual collapse and Amoore’s “crystalline certainty” of algorithmic output. One hopes future work will take up these productive contrasts.

Each book offers a rigorous, engaging, and ambitious take on how humans and algorithms relate to each other, and the areas in which they share form and content represent a potential consensus in this field of inquiry. Amoore and Smith both utilize frameworks that can analyze humans and algorithms in the same terms, but each stops short of the brand of posthumanism that fully de-centers the human. This seems to me like a best practice for ethical and especially ethicopolitical studies of AI. Amoore and Smith are also both skeptical that calls for algorithmic transparency will do much to ensure algorithms’ fairness or that rule-based ethics will offer a viable way forward. By reaching these conclusions from vastly different starting points, they lend support to efforts to move beyond such simple calls for solutions. Both authors ground their accounts in, and engage deeply with, the philosophical work key to their respective analyses, and both weave the particulars of historical events through those analyses. Taken together, their differences, similarities, ambition, and scope indicate the magnitude of the challenge facing researchers of this topic and offer many fruitful paths for future explorations.

Elliott Hauser, University of Texas at Austin