In previous work we have sought to characterize “trustworthy AI” (Varshney 2022, Knowles et al. 2022). In this work, we examine the case of AI systems that appear to render verdicts about our (human) trustworthiness, and we inquire into the conditions under which we can trust AI systems to trust us appropriately. We argue that the inability to take on another’s perspective (henceforth, the “empathy deficit”) can both explain and justify our distrust of AI in domains in which AI is tasked with forecasting the likelihood of human (un)trustworthiness. Examples include the use of AI to forecast parole and bail eligibility, academic honesty, and creditworthiness. Humans have an interest in ensuring that judgments of our trustworthiness are based on some degree of empathic understanding of our reasons and unique circumstances. The inability of AI to adopt our subjective perspective therefore calls into question our trust in AI systems’ assessments of human trustworthiness.