CHI EA 2022
Conference paper

Responsible Language Technologies: Foreseeing and Mitigating Harms


As increasingly powerful natural language generation, representation, and understanding models are developed, made available, and deployed across numerous downstream applications, many researchers and practitioners have warned about possible adverse impacts. Harmful impacts include, but are not limited to, disparities in quality of service, unequal distribution of resources, and the erasure, stereotyping, and misrepresentation of groups and individuals; such harms may also limit people's agency or affect their well-being. Given that language tasks are often complex, open-ended, and incorporated across a diversity of applications, effectively foreseeing and mitigating such harms has remained an elusive goal. Toward this goal, the Natural Language Processing (NLP) literature has only recently started to engage with human-centered perspectives and methods that are often central to HCI research. In this panel, we bring together researchers with expertise in both NLP and HCI, as well as in issues that pertain to the fairness, transparency, justice, and ethics of computational systems. Our main goals are to 1) explore how HCI perspectives and methodologies can help foresee potential harms of language technologies and inform their mitigation, 2) identify synergies between HCI and responsible NLP research that can help build common ground, and 3) complement existing efforts to facilitate conversations between the HCI and NLP communities.