Publication
Manuf. Serv. Oper. Manage.
Learning Personalized Product Recommendations with Customer Disengagement

Abstract

Problem definition: We study personalized product recommendations on platforms when customers have unknown preferences. Importantly, customers may disengage when offered poor recommendations.

Academic/practical relevance: Online platforms often personalize product recommendations using bandit algorithms, which balance an exploration-exploitation trade-off. However, customer disengagement, a salient feature of platforms in practice, introduces a novel challenge because exploration may cause customers to abandon the platform. We propose a novel algorithm that constrains exploration to improve performance.

Methodology: We present evidence of customer disengagement using data from a major airline's ad campaign; this motivates our model of disengagement, where a customer may abandon the platform when offered irrelevant recommendations. We formulate the customer preference learning problem as a generalized linear bandit, with the notable difference that the customer's horizon length is a function of past recommendations.

Results: We prove that no algorithm can keep all customers engaged. Unfortunately, classical bandit algorithms provably overexplore, causing every customer to eventually disengage. Motivated by the structural properties of the optimal policy in a scalar instance of our problem, we propose modifying bandit learning strategies by constraining the action space up front using an integer program. We prove that this simple modification allows our algorithm to perform well by keeping a significant fraction of customers engaged.

Managerial implications: Platforms should be careful to avoid overexploration when learning customer preferences if customers have a high propensity for disengagement. Numerical experiments on movie recommendations data demonstrate that our algorithm can significantly improve customer engagement.
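To make the idea concrete, below is a minimal simulation sketch of the kind of setup the abstract describes: a generalized linear (logistic) bandit in which a customer may disengage after irrelevant recommendations, and where the action space is constrained up front before learning begins. This is not the paper's algorithm: the integer-programming constraint is replaced by a simple threshold rule on a noisy prior estimate, the two-strikes disengagement rule and all parameter values are illustrative assumptions, and the learner is a basic epsilon-greedy policy rather than the authors' method.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical instance: d-dimensional item features, unknown preference vector theta.
d, n_items, n_rounds = 5, 50, 300
items = rng.normal(size=(n_items, d))
theta_true = rng.normal(size=d)

def p_like(x, theta):
    """Generalized linear (logistic) link: chance the customer likes item x."""
    return 1.0 / (1.0 + np.exp(-x @ theta))

# --- Step 1: constrain the action space up front -------------------------------
# Stand-in for the paper's integer program: keep only items whose relevance under
# a rough prior estimate of theta clears a threshold, so exploration never strays
# too far from what the customer plausibly likes.
theta_prior = theta_true + rng.normal(scale=1.0, size=d)   # noisy offline estimate
safe_items = [i for i in range(n_items) if p_like(items[i], theta_prior) > 0.5]
safe_items = safe_items or list(range(n_items))            # fall back if empty

# --- Step 2: run an epsilon-greedy bandit on the constrained set ---------------
# Assumed disengagement rule for illustration: the customer abandons the platform
# after two consecutive irrelevant recommendations.
theta_hat = np.zeros(d)
A = np.eye(d)            # regularized design matrix for a ridge-style update
b = np.zeros(d)
consecutive_misses, engaged_rounds = 0, 0

for t in range(n_rounds):
    if rng.random() < 0.1:                       # explore, but only within the safe set
        i = rng.choice(safe_items)
    else:                                        # exploit the current estimate
        i = max(safe_items, key=lambda j: items[j] @ theta_hat)
    liked = rng.random() < p_like(items[i], theta_true)

    # Ridge-regression style update of the preference estimate.
    A += np.outer(items[i], items[i])
    b += liked * items[i]
    theta_hat = np.linalg.solve(A, b)

    consecutive_misses = 0 if liked else consecutive_misses + 1
    engaged_rounds = t + 1
    if consecutive_misses >= 2:                  # customer disengages
        break

print(f"customer stayed engaged for {engaged_rounds} rounds")
```

Running the same loop with the constraint removed (safe_items replaced by all items) illustrates the abstract's point: unconstrained exploration recommends more irrelevant items early on, so the simulated customer tends to disengage sooner.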
