Publication
IEEE Internet Computing
Paper
Gender Bias in Online Language Translators: Visualization, Human Perception, and Bias/Accuracy Tradeoffs
Abstract
Artificial intelligence (AI) systems that interact with humans, such as chatbots and language translators, have many useful applications. However, care must be taken to address concerns, such as the presence of bias, possible abusive language, and information leakage, that could hamper public trust in these systems. In this article, we focus on gender bias in online translators, as recognized and rated by a third-party assessor who does not have access to the training data, and we propose a visualization approach for such a rating. We then conduct a survey of how users perceive bias in translators, whether they appreciate the proposed bias rating visualization, and how they may use it to reason about bias-accuracy trade-offs.