Publication
INFORMS 2020
Talk

Human Cognitive Biases in Interpreting Machine Learning


Abstract

People are the ultimate consumers of machine learning model predictions and explanations in many high-stakes applications. However, people’s perception and understanding are often distorted by their cognitive biases, such as confirmation bias, anchoring bias, and availability bias, to name a few. If our goal is to enable a human-machine collaboration that achieves the best possible classification accuracy (better than either the human or the machine working alone), we must mitigate these cognitive biases. In this work, we make progress toward this goal through both mathematical modeling and human experiments. Specifically, we focus our human experiments on collaborative decision-making in the presence of anchoring bias.
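
To make the collaboration goal concrete, below is a minimal, purely illustrative simulation (not taken from the paper) of why strong anchoring works against complementary team accuracy: if the human simply adopts the machine's suggestion with a fixed probability, the team can at best match the machine and never beat both parties. The accuracies, the anchoring strength `w`, and the decision model itself are all assumptions made for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical setup: binary classification where the machine and the human
# each have an assumed standalone accuracy. Anchoring is modeled in the
# simplest anchoring-and-adjustment style: with probability `w` the human
# adopts the machine's suggestion, otherwise they keep their own judgment.
n = 100_000
machine_acc = 0.80   # assumed machine accuracy
human_acc = 0.70     # assumed unaided human accuracy
w = 0.9              # assumed anchoring strength (probability of deferring)

y = rng.integers(0, 2, size=n)                                  # ground-truth labels
machine_pred = np.where(rng.random(n) < machine_acc, y, 1 - y)  # machine's predictions
human_pred = np.where(rng.random(n) < human_acc, y, 1 - y)      # human's own predictions

# Anchored collaboration: defer to the machine's suggestion with probability w.
adopt = rng.random(n) < w
team_pred = np.where(adopt, machine_pred, human_pred)

print(f"machine alone: {np.mean(machine_pred == y):.3f}")
print(f"human alone:   {np.mean(human_pred == y):.3f}")
print(f"anchored team: {np.mean(team_pred == y):.3f}")
```

In this toy model the anchored team's accuracy interpolates between the human's and the machine's, so it can never exceed both; beating them jointly would require the human to defer selectively, precisely when the machine is more likely to be right, which is the kind of calibrated collaboration that anchoring bias undermines.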