This is a two-and-a-half-hour workshop, held over two days, on the theme of Trustworthy AI. The first day is a lecture and demo on the Trust 360 toolkits and their enhanced editions for making your machine learning models more fair, robust, explainable, and transparent. The second day starts with a demo of a new way to discover trust issues, multidimensional subset scanning, and concludes with a group discussion with the IBM Research Trustworthy AI team.
IBM Research has pioneered the field of Trustworthy AI since its very beginnings a decade ago. In this workshop, you will learn about the topic through the AI Fairness 360, AI Explainability 360, Adversarial Robustness 360, and Uncertainty Quantification 360 toolkits, which are considered de facto standards among data science practitioners using Python. Many of their capabilities come together in a transparency approach known as AI FactSheets 360, in a new AI Risk Assessment offering, in a synthetic data generation capability from IBM Research, and in IBM’s enterprise-grade software solution Watson OpenScale within the Cloud Pak for Data suite, including its new AI Governance offering.
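To give a flavor of what these toolkits measure, the sketch below computes disparate impact, one of the group fairness metrics that AI Fairness 360 provides. The helper function and toy data here are purely illustrative, not the AIF360 API: the metric is the ratio of favorable-outcome rates for the unprivileged group versus the privileged group, where a value near 1.0 suggests parity.

```python
# Illustrative sketch of the disparate impact fairness metric.
# The helper below is hypothetical and exists only to show what
# the metric computes:
#   P(favorable outcome | unprivileged) / P(favorable outcome | privileged)
# The common "80% rule" flags values below 0.8 as potentially unfair.

def disparate_impact(outcomes, groups, favorable=1, privileged="A"):
    """Ratio of favorable-outcome rates: unprivileged over privileged."""
    def rate(group):
        pts = [o for o, g in zip(outcomes, groups) if g == group]
        return sum(1 for o in pts if o == favorable) / len(pts)

    unprivileged = next(g for g in groups if g != privileged)
    return rate(unprivileged) / rate(privileged)

# Toy loan-approval data: group "A" is the privileged group.
outcomes = [1, 1, 1, 1, 0, 1, 1, 0, 0, 0]
groups   = ["A", "A", "A", "A", "A", "B", "B", "B", "B", "B"]
print(disparate_impact(outcomes, groups))  # 0.4 / 0.8 = 0.5
```

In AIF360 itself the same quantity is exposed through dataset metric classes rather than a standalone function, along with many other metrics and bias mitigation algorithms.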
Prerequisites: background in Data Science or Computer Science
Take part in a day of interactive learning as we explore the application of Trusted AI in real-world scenarios and share how the AI Fairness 360, AI Explainability 360, Adversarial Robustness 360, and Uncertainty Quantification 360 software toolkits are being creatively applied to solve major problems.
Kush Varshney, Distinguished Research Scientist and Manager, IBM Research
Moninder Singh, Research Staff Member, IBM Research
- Trust 360 toolkits and their enhanced editions for making your machine learning models more fair, robust, explainable, and transparent
- Distribution of a knowledge check so you can practice applying the APIs at home
- Modernize your approach by leveraging a state-of-the-art Trusted AI SDK in our IBM Research JupyterLab