Publication
Management Science
Paper
Tiered Assortment: Optimization and Online Learning
Abstract
Due to the sheer number of available choices, online retailers frequently use tiered assortments to present products. In such an assortment, groups of products are arranged across multiple pages or stages, and a customer clicks “next” or “load more” to access them sequentially. Despite the prevalence of such assortments in practice, this topic has not received much attention in the existing literature. In this work, we focus on a sequential choice model that captures customers’ behavior when product recommendations are presented in tiers. We analyze multiple variants of the tiered assortment optimization (TAO) problem by imposing “no-duplication” and capacity constraints. For the offline setting with known customer preferences, we characterize the properties of the optimal tiered assortment and propose algorithms that improve computational efficiency over existing benchmarks. To the best of our knowledge, we are the first to study the online setting of the TAO problem. A unique characteristic of our online setting, absent from the one-tier MNL bandit, is partial feedback: products in lower-priority tiers are not shown when a customer has purchased a product or has chosen to exit at an earlier tier. Such partial feedback, along with product interdependence across tiers, increases the learning complexity. For both the noncontextual and contextual problems, we propose online algorithms and quantify their respective regrets. Moreover, we construct tighter uncertainty sets for the model parameters in the contextual case and thus improve performance. We demonstrate the efficacy of our proposed algorithms through numerical experiments.
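To make the sequential choice process and the resulting partial feedback concrete, the sketch below simulates a customer browsing a tiered assortment under an assumed per-tier MNL-style choice: within each viewed tier the customer buys a product, exits, or clicks “next” to reveal the following tier. The tier layout, utilities, exit weight, and continue weight are illustrative assumptions, not the paper’s exact model specification.

```python
# Illustrative sketch (assumed per-tier MNL-style choice, not the paper's exact model).
import numpy as np

rng = np.random.default_rng(0)

def simulate_tiered_choice(tiers, utilities, exit_weight=1.0, continue_weight=1.0):
    """Simulate one customer; return (purchased_item, tiers_viewed).

    tiers: list of lists of product indices (tier 1 first).
    utilities: dict mapping product index -> mean utility.
    Only tiers the customer actually reaches generate feedback,
    which is the partial-feedback structure described in the abstract.
    """
    viewed = []
    for t, tier in enumerate(tiers):
        viewed.append(tier)
        options = list(tier) + ["exit"]
        weights = [np.exp(utilities[i]) for i in tier] + [exit_weight]
        if t < len(tiers) - 1:                      # "next" is available only if more tiers remain
            options.append("continue")
            weights.append(continue_weight)
        probs = np.array(weights) / np.sum(weights)
        choice = options[rng.choice(len(options), p=probs)]
        if choice == "continue":
            continue                                # reveal the next tier
        purchased = None if choice == "exit" else choice
        return purchased, viewed                    # later tiers are never observed
    return None, viewed

# Example: 6 products split over 3 tiers of 2 (hypothetical utilities).
tiers = [[0, 1], [2, 3], [4, 5]]
utilities = {i: 0.2 * i for i in range(6)}
print(simulate_tiered_choice(tiers, utilities))
```

Note that when the customer purchases or exits in an early tier, the returned `viewed` list excludes the later tiers, so no choice data is collected for the products placed there; this is the feedback limitation an online learning algorithm for the TAO problem must contend with.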