Publication: IBM J. Res. Dev.
Paper
An instrument to evaluate the maturity of bias governance capability in artificial intelligence projects
Abstract
Artificial intelligence (AI) promises unprecedented contributions to both business and society, attracting a surge of interest from many organizations. However, there is evidence that bias is already prevalent in AI datasets and algorithms; although usually unintended, such bias is considered unethical, suboptimal, unsustainable, and challenging to manage. It is believed that the governance of data and algorithmic bias must be deeply embedded in the values, mindsets, and procedures of AI software development teams, yet there is currently a paucity of actionable mechanisms to support this. In this paper, we describe a maturity framework based on ethical principles and best practices that can be used to evaluate an organization's capability to govern bias. We also design, construct, validate, and test an original instrument for operationalizing the framework, covering both technical and organizational aspects. The instrument was developed and validated through a two-phase study involving field experts and academics. The framework and instrument are presented for ongoing evolution and use.