Publication
CIKM 2014
Workshop paper

Dare to compare: Motivating expertise building in the enterprise through intelligent user modeling interfaces

Abstract

Expertise and skill assessments are a common aspect of working in an enterprise, but manual assessments are onerous and quickly outdated. Automated assessments can alleviate these problems, albeit at the risk of being inaccurate. In this short paper, we focus on the problem of how to design an engaging learning system in the presence of potentially inaccurate automated expertise assessments, especially when users are in the early stages of using the system. We explore two dimensions associated with reporting automated expertise assessments to users: i) the inclusion of a social comparison, and ii) the precision with which expertise scores are presented. In a controlled experiment (N=60), we examined the impact of these dimensions on the perceived accuracy of the assessments, the perceived utility of the system, and people's willingness to share expertise scores within the enterprise.

Date

03 Nov 2014

Authors