IBM at the intersection of human-computer interaction and AI

The ACM International Conference on Intelligent User Interfaces (IUI) is the premier venue where the human-computer interaction (HCI) community meets the artificial intelligence (AI) community. Work presented at IUI focuses on improving the interaction between humans and AI systems by combining HCI approaches with state-of-the-art AI techniques from machine learning (ML), natural language processing (NLP), data mining, knowledge representation, and reasoning. IBM Research has actively engaged with the IUI community for decades.

Although IUI was cancelled this year due to COVID-19, several workshops are still taking place virtually. We would like to share our accepted work with the community – two full papers, three workshop papers, two demos, and four workshops – spanning key areas of IUI, including automated machine learning, explainable AI, conversational interfaces, and human-agent interaction.

Increasing Trust and Understandability of Automated AI

The automation of machine learning and data science is an emerging topic in the IUI community. At IBM, we developed a suite of technologies that make it easier for data scientists to produce high-quality models by automating different steps of the data science pipeline: joining disparate data sets and cleaning data, engineering features, crafting neural network architectures, tuning hyperparameters, and evaluating models for fairness and robustness. Collectively, these technologies are known as IBM AutoAI and are available for use in IBM Watson Studio.
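
AutoAI's internal search is far more sophisticated than anything we can show here, but the general pattern is straightforward to illustrate. The sketch below is our own minimal illustration using scikit-learn, not IBM's implementation: it enumerates transformer/estimator combinations and ranks the resulting pipelines by cross-validated ROC AUC, mirroring the pipeline leaderboard described later in this post.

```python
# Minimal sketch of the kind of pipeline search AutoAI automates.
# This is an illustration built on scikit-learn, not IBM's implementation.
from itertools import product

from sklearn.datasets import load_breast_cancer
from sklearn.decomposition import PCA
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler

X, y = load_breast_cancer(return_X_y=True)

# Candidate feature transformations and estimators to combine.
transformers = [("scaler", StandardScaler()), ("pca", PCA(n_components=10))]
estimators = [("logreg", LogisticRegression(max_iter=1000)),
              ("forest", RandomForestClassifier(n_estimators=100))]

# Enumerate every transformer/estimator combination and rank the
# resulting pipelines by ROC AUC, as a leaderboard would.
leaderboard = []
for (t_name, t), (e_name, e) in product(transformers, estimators):
    pipeline = Pipeline([(t_name, t), (e_name, e)])
    score = cross_val_score(pipeline, X, y, scoring="roc_auc", cv=5).mean()
    leaderboard.append((f"{t_name} -> {e_name}", score))

for name, score in sorted(leaderboard, key=lambda r: -r[1]):
    print(f"{name}: ROC AUC = {score:.3f}")
```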

IBM researchers identified features that increase people's trust and understanding of automated data science tools. In one study, researchers examined the role of different kinds of transparency — in data, in models, and in the process of creating models — and found that each was important for establishing users' trust in an AutoAI system. For example, increasing transparency by showing how feature engineering operated was associated with a large and significant increase in people's ratings of trust in the AutoAI system.

Building upon this work, IBM researchers also developed AutoAIViz, a novel visualization tool that increases process-oriented transparency. The visualization, based upon Conditional Parallel Coordinates charts, shows the alternatives AutoAI considered during model selection and feature transformation. In an evaluation of this system, researchers found that people's understanding of how AutoAI worked significantly increased.

In the visualization above, each colored line running from left to right represents an individual machine learning pipeline – a combination of feature transformation and modeling. Each pipeline also has a corresponding row in the Leaderboard, shown at the bottom. All pipelines consist of three steps: Transformation 1, Transformation 2, and Estimator Selection. Generated models are then evaluated on four metrics: group disparity (a measure of how outcomes differ across groups), prediction time (how long the model takes to make a prediction), and ROC AUC scores on the training data and the holdout data (a measure of how well the model distinguishes between outcome classes).

The next level of detail in the visualization contains conditional axes that enable users to view expanded detail of a pipeline stage. In the figure, the user has expanded the Sparse Random Projection transformer in the Transformation 2 step. The conditional axes allow users to make additional comparisons of pipelines across the transformer's hyperparameters: dense_output, density, eps, n_components, and random_state.
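
For readers who want a concrete anchor, these hyperparameter names match those of scikit-learn's SparseRandomProjection transformer. The sketch below illustrates where such a transformer sits in a Transformation 1 / Transformation 2 / Estimator pipeline; it is not an AutoAI-generated pipeline, and the hyperparameter values are arbitrary.

```python
# Illustrative pipeline with a Sparse Random Projection step, showing the
# hyperparameters that appear as conditional axes in AutoAIViz.
# Values are arbitrary; this is not an AutoAI-generated pipeline.
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.random_projection import SparseRandomProjection

X, y = load_breast_cancer(return_X_y=True)

pipeline = Pipeline([
    # Transformation 1
    ("scale", StandardScaler()),
    # Transformation 2: the expanded stage from the figure.
    ("srp", SparseRandomProjection(
        n_components=10,    # target dimensionality of the projection
        density="auto",     # sparsity of the random projection matrix
        eps=0.1,            # Johnson-Lindenstrauss tolerance (used when n_components="auto")
        dense_output=True,  # emit a dense array for downstream steps
        random_state=42,    # make the projection reproducible
    )),
    # Estimator Selection
    ("estimator", GradientBoostingClassifier()),
])

print("ROC AUC:", cross_val_score(pipeline, X, y, scoring="roc_auc", cv=5).mean())
```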

An evaluation of this visualization established that it was understood by users and effectively conveyed important information about machine learning pipelines and the process by which they were constructed.

Conversational, Explainable, and Generative AI

IBM researchers were involved in organizing four workshops at IUI this year, covering the topics of conversational agents, explainability, and generative AI.

Conversational AI

Conversational User Interfaces: A Workshop on New Theoretical and Methodological Perspectives for Researching Speech-based Conversational Interactions aims to tackle the significant gaps in theoretical frameworks, methodological approaches, and exploration of new paradigms within the research and design of Conversational User Interfaces (CUIs). These gaps include: a lack of validated design guidelines to improve the usability of CUIs; handling variability in speech, language, and conversation; error-recovery strategies that often lead to degraded user experiences; understanding how user individual behaviors and choices apply to specific CUI interactions; and issues of ethics and privacy.

This workshop will be held at CUI 2020 in July.

The 2nd Workshop on User-Aware Conversational Agents aims to bring together researchers who are interested in user awareness and adaptation from different communities, including user modeling, HCI, NLP, and ML. Through a focused and open exchange of ideas, this workshop aims to identify future research directions in user awareness and adaptation across user profiles, contexts, personalities, emotions, social dynamics, and conversational styles. An IBM paper in this workshop describes the design of a system that intelligently issues, suppresses, and aggregates notifications in a world in which users may experience “alert fatigue” from receiving too many notifications. This system is customized based on event severity and user preferences.
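
The paper describes this system at the design level; as a rough, hypothetical sketch of what severity- and preference-based suppression and aggregation can look like, consider the following. The class names, thresholds, and digest logic here are our own illustration, not the system from the paper.

```python
# Hypothetical sketch of severity- and preference-aware notification
# handling. Names and thresholds are illustrative, not from the paper.
from dataclasses import dataclass, field
from enum import IntEnum


class Severity(IntEnum):
    INFO = 1
    WARNING = 2
    CRITICAL = 3


@dataclass
class UserPreferences:
    min_severity: Severity = Severity.WARNING  # suppress anything below this
    digest_size: int = 3                       # aggregate this many before alerting


@dataclass
class NotificationAgent:
    prefs: UserPreferences
    pending: list = field(default_factory=list)

    def on_event(self, message: str, severity: Severity) -> None:
        # Suppress low-severity events outright to reduce alert fatigue.
        if severity < self.prefs.min_severity:
            return
        # Issue critical events immediately.
        if severity is Severity.CRITICAL:
            self.issue(message)
            return
        # Aggregate everything else into a periodic digest.
        self.pending.append(message)
        if len(self.pending) >= self.prefs.digest_size:
            self.issue(f"{len(self.pending)} updates: " + "; ".join(self.pending))
            self.pending.clear()

    def issue(self, text: str) -> None:
        print(f"[notify] {text}")


agent = NotificationAgent(UserPreferences())
agent.on_event("Job 17 queued", Severity.INFO)     # suppressed
agent.on_event("Disk 80% full", Severity.WARNING)  # held for the digest
agent.on_event("Service down", Severity.CRITICAL)  # issued immediately
```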

This workshop has been cancelled.

Explainable AI

Explainability and transparency are critical for users to understand, trust, and adopt AI systems. Transparency can also enable approaches that mitigate algorithmic biases even without access to a system's inner workings. The Workshop on Explainable Smart Systems (ExSS) & Intelligent User Interfaces for Algorithmic Transparency in Emerging Technologies (ATEC) aims to provide a venue for exploring issues that arise in designing, developing, and evaluating intelligent user interfaces that provide system transparency or explanations of their behavior. At the workshop, IBM researchers will present a paper, based on a study of brainstorming sessions, on the desired functionalities of an ML tool that supports geoscientists in analyzing seismic data. Their findings show how ML experts express concerns about AI explainability, even implicitly, while defining an ML tool's features.

This workshop will be held remotely on March 17, 2020. Please see the web site for details.

IBM researchers have also developed XAIT, a new tool for exploring the field of Explainable AI (XAI). XAIT presents an interactive, hierarchical taxonomy of XAI methods and algorithms. The tool was developed to help subject matter experts understand the landscape of XAI techniques and determine which methods are most appropriate for their tasks. It also provides links to the latest XAI research and tools.

Generative AI

Generative models are now enabling new kinds of user experiences around content creation. The HAI-GEN workshop will bring together researchers to form a better understanding of the new opportunities and challenges of generative modeling from a human-AI interaction perspective. One challenge is the potential for malicious use. To understand how businesses might be affected by misuse of generative technologies, IBM researchers created a set of design fictions describing near-future scenarios in which generative technologies are applied with malicious intent. These fictions are meant to spark discussion around the technological, legal, and societal measures needed to detect and mitigate harmful outcomes.

This workshop will be held remotely on March 17, 2020. Please see the web site for details.

Marriage of HCI and AI

At IBM Research, we believe that AI systems will always contain a human element — what we call a human-in-the-loop — in order to ensure that these systems are fair and unbiased, robust and secure, and applied ethically and in service to the needs of their users. HCI research is crucial for understanding how to design human-in-the-loop AI systems. HCI research methods help us understand who we are building AI systems for and evaluate how well those systems work for their end users.

Accepted Papers

  • Jaimie Drozdal, Justin D. Weisz, Dakuo Wang, Gaurav Dass, Bingsheng Yao, Changruo Zhao, Michael Muller, Lin Ju, Hui Su. 2020. Exploring Information Needs for Establishing Trust in Automated Data Science Systems. In Proceedings of the 25th International Conference on Intelligent User Interfaces (IUI ’20). ACM, New York, NY, USA. https://arxiv.org/abs/2001.06509
  • Daniel Karl I. Weidele, Justin D. Weisz, Eno Oduor, Michael Muller, Josh Andres, Alexander Gray, Dakuo Wang. 2020. AutoAIViz: Opening the Blackbox of Automated Artificial Intelligence with Conditional Parallel Coordinates. In Proceedings of the 25th International Conference on Intelligent User Interfaces (IUI ’20). ACM, New York, NY, USA. https://arxiv.org/abs/1912.06723

Accepted Demos

  • Dakuo Wang, Parikshit Ram, Daniel Karl I. Weidele, Sijia Liu, Michael Muller, Justin Weisz, Abel Nicolas Valente, Arunima Chaudhary, Dustin Torres, Horst Samulowitz, Lisa Amini. 2020. AutoAI: Automating the End-to-End AI Lifecycle with Humans-in-the-Loop.
  • Eno Oduor, Kun Qian, Yunyao Li, Lucian Popa. 2020. XAIT: An Interactive Website for Explainability for Text.

Workshop Papers

  • Yara Rizk, Yasaman Khazaeni, Vatche Isahagian, Merve Unuvar. A Snooze-less User-Aware Notification System for Proactive Conversational Agents. https://arxiv.org/abs/2003.02097
  • Stephanie Houde, Vera Liao, Jacquelyn Martino, Michael Muller, David Piorkowski, John Richards, Justin Weisz, Yunfeng Zhang. Business (mis)Use Cases of Generative AI. https://arxiv.org/abs/2003.07679
  • Juliana Jansen Ferreira, Mateus de Souza Monteiro. Do ML Experts Discuss Explainability for AI Systems? A case in the industry for a domain-specific solution. https://arxiv.org/abs/2002.12450