DCOSS-IoT 2023
Conference paper

AI4Gov: Trusted AI for Transparent Public Governance Fostering Democratic Values


As Artificial Intelligence (AI) becomes more integrated into public governance, concerns about its transparency and accountability have grown increasingly important. The use of AI in decision-making processes raises questions about bias, fairness, and the protection of individuals' fundamental rights. To ensure that AI is used in a way that upholds democratic values, it is essential to develop systems that are trustworthy, transparent, and accountable. Trusted AI allows citizens to place greater trust in public organizations and their decision-making processes, while also enabling public authorities and policymakers to be more transparent and accountable, giving citizens greater visibility into how policies are developed. In addition, it encourages the use of AI in ways that promote fairness and equity, ensuring that decision-making processes are unbiased and free from discrimination against particular groups of individuals. This paper investigates how these desirable attributes can be realized in feasible and effective ways through the design of a holistic environment that incorporates AI and Big Data management mechanisms while ensuring that the AI technology is shaped around human rights, values, and societal needs. Societal change and evidence-based policies will be achieved by extending business and policy-making processes with advanced approaches such as eXplainable AI (XAI) and Situation-Aware Explainability (SAX). To this end, a novel approach is proposed that converges techniques and research from multiple domains, including social sciences, Trustworthy AI, Ethical AI, Big Data analytics, IoT, and blockchain, into a unified ecosystem.