AI models and services are used in a growing number of high-stakes areas. As a result, a consensus is forming around the need for a clearer record of how these models and services are developed and deployed. Increased transparency gives AI consumers the information they need to understand how an AI model or service was created and to determine whether it is appropriate for their specific situation or need.
We are excited to announce the release of a new, informative website that shares the latest on AI FactSheets, a concept introduced by IBM Research AI. A research project more than two years in the making, AI FactSheets is designed to foster greater trust in AI by increasing transparency and enabling governance. The new website contains background on the project and IBM Research’s methodology, as well as several new technical papers, multiple new examples of FactSheets, and other resources.
In July 2020, IBM also released a policy view, “Accelerating the Path Towards AI Transparency,” that provides additional details and recommendations on how companies and policy makers can advance the ethical development and deployment of AI.
There is no denying the transformative power of AI today. It is prevalent across many areas of business and society and continues to unveil new ways to innovate. Yet, as exciting as this is, it has created an essential need to address and govern how AI models and services are constructed and deployed.
This process, which we call AI governance, enables an enterprise to specify and enforce policies describing how an AI model or service should be constructed and deployed. This is critical to prevent undesirable situations, such as models being trained with unapproved datasets, models having biases, or models having unexpected performance variations.
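To make the idea concrete, the policy checks described above can be sketched as simple rules evaluated against a model's recorded facts. This is a minimal, hypothetical illustration: the field names, approved-dataset list, and thresholds are all assumptions for the example, not part of IBM's methodology.

```python
# Hypothetical sketch: enforcing simple governance policies against
# recorded model facts. All names and thresholds are illustrative.

APPROVED_DATASETS = {"customer-loans-v3", "credit-bureau-2019"}

def check_policies(facts: dict) -> list:
    """Return a list of policy violations for a model's recorded facts."""
    violations = []
    # Policy 1: only approved training datasets may be used.
    if facts.get("training_dataset") not in APPROVED_DATASETS:
        violations.append("model trained on an unapproved dataset")
    # Policy 2: bias metric must fall in an acceptable band
    # (e.g., disparate impact within the commonly cited 0.8-1.25 range).
    di = facts.get("disparate_impact")
    if di is not None and not (0.8 <= di <= 1.25):
        violations.append(f"bias metric out of range: disparate impact={di}")
    # Policy 3: flag large accuracy drops between validation and production.
    drop = facts.get("validation_accuracy", 0) - facts.get("production_accuracy", 0)
    if drop > 0.05:
        violations.append(f"unexpected performance variation: accuracy dropped {drop:.2f}")
    return violations

model_facts = {
    "training_dataset": "scraped-web-data",   # not on the approved list
    "disparate_impact": 0.62,                 # outside the fair range
    "validation_accuracy": 0.91,
    "production_accuracy": 0.83,
}
print(check_policies(model_facts))
```

In practice such rules would be evaluated automatically against the facts collected in a FactSheet, so violations surface before deployment rather than after.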
Proposals for higher quality and more consistent AI documentation have emerged to address ethical and legal concerns and general social impacts of such systems. However, little is known about the needs of those who would produce or consume these new forms of documentation, as well as how to create this documentation.
In IBM Research’s new AI FactSheets website, we share two new technical papers that address these needs.
- The first paper, which appeared at the Late-Breaking Works session of the ACM CHI 2020 Conference on Human Factors in Computing Systems, uses semi-structured developer interviews and two document-creation exercises to develop a clearer picture of the needs and challenges faced in creating accurate and useful AI documentation. In this work, a team of IBM researchers provides multiple recommendations for easing the collection and flexible presentation of AI facts to promote transparency.
- The second paper is the first to describe a methodology for creating AI documentation that explicitly includes FactSheet consumers and producers in requirements gathering. We offer a seven-step methodology, used to create FactSheets for nearly two dozen models, along with issues to consider and questions to explore with the people who will be creating and consuming the AI facts in a FactSheet. This methodology increases the likelihood that FactSheets will provide the information needed to understand and mitigate potential harm or safety issues with an AI system, and it should accelerate the broader adoption of transparent AI documentation.
In the new website, we provide multiple completed FactSheets for models in an open source model catalog offered by IBM. Each FactSheet is presented in the full format used in the catalog, as well as in tabular and slide views, to illustrate how abbreviated versions of the same information may be displayed for different purposes.
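The idea of rendering one set of facts at several levels of detail can be sketched as a small data structure with multiple renderers. This is an illustrative sketch only; the field names and values below are assumptions for the example and do not reflect IBM's actual FactSheet schema.

```python
# Hypothetical sketch: one set of FactSheet facts rendered at different
# levels of detail. Field names and values are illustrative, not IBM's schema.

factsheet = {
    "model_name": "example-object-detector",
    "purpose": "Localize and identify objects in an image",
    "training_data": "COCO 2017",
    "caveats": "Not evaluated on medical or satellite imagery",
}

def render_full(fs: dict) -> str:
    """Full catalog-style view: every fact on its own labeled line."""
    lines = []
    for key, value in fs.items():
        lines.append(f"{key.replace('_', ' ').title()}: {value}")
    return "\n".join(lines)

def render_abbreviated(fs: dict, fields=("model_name", "purpose")) -> str:
    """Abbreviated view for a slide or table row: selected fields only."""
    return " | ".join(str(fs[f]) for f in fields if f in fs)

print(render_full(factsheet))
print(render_abbreviated(factsheet))
```

Keeping the facts separate from their presentation is what lets a single FactSheet serve a detailed catalog page, a summary table, and a slide without duplicating information.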
The website also contains several useful resources for the broader community, including related work, a glossary and FAQ, and a public Slack channel where the larger community can discuss and provide feedback on the FactSheet concept.
The new FactSheets website furthers our Trusted AI contributions to the scientific community and builds upon previous open source toolkits from IBM Research, including AI Fairness 360, AI Explainability 360, and the Adversarial Robustness Toolbox.
For years, IBM Research AI has been dedicated to delivering AI systems that are built responsibly and act fairly. To achieve this, we believe that multiple dimensions of technical trust should be addressed, including fairness, explainability, adversarial robustness and governance.
We invite you to join us and collaborate on AI FactSheets by sharing your feedback and ideas on our Slack channel.