Abstract
HCI engages with data science through many topics and themes. Researchers have addressed the problem of biased datasets, arguing that bad data can cause innocent software to produce bad outcomes. But what if our software is not so innocent? What if the human decisions that shape our data-processing software inadvertently contribute their own sources of bias? And what if our data-work technology causes us to forget those decisions and operations? Grounded in feminisms and critical computing, we analyze forgetting practices in data work. We describe diverse beneficial and harmful motivations for forgetting. We contribute: (1) a taxonomy of data silences in data work, which we use to analyze how data workers forget, erase, and unknow aspects of data; (2) a detailed analysis of forgetting practices in machine learning; and (3) an analytic vocabulary for future work on remembering, forgetting, and erasing in HCI and the data sciences.