Publication
IJCNN 2022
Conference paper

Linear Upper Confident Bound with Missing Reward: Online Learning with Less Data


Abstract

We consider a novel variant of the contextual bandit problem (i.e., the multi-armed bandit with side information, or context, available to a decision-maker) where the reward associated with each context-based decision may not always be observed ('missing rewards'). This new problem is motivated by certain online settings, including clinical trial and ad recommendation applications. To address the missing-rewards setting, we propose to combine the standard contextual bandit approach with an unsupervised learning mechanism such as clustering. Unlike standard contextual bandit methods, by leveraging clustering to estimate missing rewards, we are able to learn from every incoming event, even those with missing rewards. Promising empirical results are obtained on several real-life datasets.
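The abstract describes the method only at a high level. Below is a minimal sketch of the idea, assuming LinUCB as the underlying contextual bandit and a simple k-means-style clustering of past contexts for reward imputation; the class name LinUCBWithImputation and the parameters alpha and n_clusters are illustrative, not taken from the paper.

```python
import numpy as np

class LinUCBWithImputation:
    """LinUCB-style learner that imputes missing rewards from a clustering
    of previously observed (context, reward) pairs. A sketch, not the
    authors' implementation."""

    def __init__(self, n_arms, dim, alpha=1.0, n_clusters=5):
        self.alpha = alpha
        self.n_clusters = n_clusters
        # Per-arm ridge-regression statistics: A = I + sum x x^T, b = sum r x.
        self.A = [np.eye(dim) for _ in range(n_arms)]
        self.b = [np.zeros(dim) for _ in range(n_arms)]
        # Buffers of rounds where the reward was actually observed.
        self.contexts, self.rewards = [], []

    def select_arm(self, x):
        # Standard LinUCB score: point estimate plus exploration bonus.
        scores = []
        for A, b in zip(self.A, self.b):
            A_inv = np.linalg.inv(A)
            theta = A_inv @ b
            bonus = self.alpha * np.sqrt(x @ A_inv @ x)
            scores.append(theta @ x + bonus)
        return int(np.argmax(scores))

    def _impute(self, x):
        # Cluster past contexts with a few Lloyd (k-means) iterations, then
        # return the mean observed reward of the cluster nearest to x.
        # Any clustering scheme could be substituted here.
        X = np.array(self.contexts)
        r = np.array(self.rewards)
        k = min(self.n_clusters, len(X))
        rng = np.random.default_rng(0)
        centroids = X[rng.choice(len(X), size=k, replace=False)]
        for _ in range(10):
            labels = np.argmin(
                ((X[:, None, :] - centroids[None, :, :]) ** 2).sum(-1), axis=1)
            for j in range(k):
                if np.any(labels == j):
                    centroids[j] = X[labels == j].mean(axis=0)
        nearest = np.argmin(((x - centroids) ** 2).sum(-1))
        mask = labels == nearest
        return r[mask].mean() if mask.any() else r.mean()

    def update(self, arm, x, reward=None):
        if reward is None:           # reward missing this round: impute it
            if not self.rewards:
                return               # nothing observed yet to impute from
            reward = self._impute(x)
        else:                        # reward observed: remember it for clustering
            self.contexts.append(x)
            self.rewards.append(reward)
        self.A[arm] += np.outer(x, x)
        self.b[arm] += reward * x
```

The key difference from plain LinUCB is in `update`: a standard contextual bandit would simply skip rounds with no reward, whereas here the imputed value lets the per-arm statistics grow on every event, which is the "learning with less data" benefit the abstract claims.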

Date

17 Jul 2022


Authors
