Abstract
Responsible AI is built upon a set of principles that prioritize fairness, transparency, accountability, and inclusivity in AI development and deployment. As AI systems become increasingly sophisticated, exemplified by the rapid rise of generative AI, there is a growing need to address the ethical considerations and potential societal impacts of their use. Knowledge graphs (KGs), as structured representations of information, can enhance generative AI performance by providing context, explaining outputs, and reducing biases, thereby offering a powerful framework for addressing the challenges of responsible AI. By leveraging semantic relationships and contextual understanding, KGs facilitate transparent decision-making, enabling stakeholders to trace and interpret the reasoning behind AI-driven outcomes. Moreover, they provide a means to capture and manage diverse knowledge sources, supporting the development of fair and unbiased AI models. This workshop aims to investigate the role of knowledge graphs in promoting responsible AI principles, creating a cooperative space for researchers, practitioners, and policymakers to exchange insights and deepen their understanding of how KGs can contribute to responsible AI solutions.