FAccT 2023
Conference paper

Trustworthy AI and the Logics of Intersectional Resistance


Abstract

Growing awareness of the capacity of AI to inflict harm has inspired efforts to delineate principles for 'trustworthy AI' and, from these, objective indicators of 'trustworthiness' for auditors and regulators. Such efforts run the risk of formalizing a distinctly privileged perspective on trustworthiness which is insensitive (or else indifferent) to the legitimate reasons for distrust held by marginalized people. By exploring a neglected conative element of trust, we broaden understandings of trust and trustworthiness to make sense of, and identify principles for responding productively to, distrust of ostensibly 'trustworthy' AI. Bringing social science scholarship into dialogue with AI criticism, we show that AI is being used to construct a digital underclass that is rhetorically labelled as 'undeserving', and highlight how this process fulfills functions for more privileged people and institutions. We argue that distrust of AI is warranted and healthy when the AI contributes to marginalization and structural violence, and that Trustworthy AI may fuel public resistance to the use of AI unless it addresses this dimension of untrustworthiness. To this end, we offer reformulations of core principles of Trustworthy AI - fairness, accountability, and transparency - that substantively address the deeper issues animating widespread public distrust of AI, including: stewardship and care, openness and vulnerability, and humility and empowerment. In light of legitimate reasons for distrust, we call on the field to re-evaluate why the public would embrace the expansion of AI into all corners of society; in short, what makes it worthy of their trust.