
Approximate dynamic programming for stochastic linear control problems on compact state spaces

Abstract

This paper addresses Markov Decision Processes over compact state and action spaces. We investigate the special case of linear dynamics and piecewise-linear convex immediate costs under the average cost criterion. This model is very general and covers many interesting examples, for instance in inventory management. Due to the curse of dimensionality, the problem is intractable and optimal policies usually cannot be computed, not even for instances of moderate size. We show the existence of optimal policies and of convex and bounded relative value functions that solve the average cost optimality equation under reasonable and easy-to-check assumptions. Based on these insights, we propose an approximate relative value iteration algorithm that works with piecewise-linear convex relative value function approximations. Besides computing good policies, the algorithm also provides lower bounds on the optimal average cost, which allow us to bound the optimality gap of any given policy for a given instance. The algorithm is applied to the well-studied Multiple Sourcing Problem (MSP) from inventory management. Multiple sourcing is known to be a hard problem and is usually tackled by parametric heuristics. We analyze several MSP instances with two and more suppliers and compare our results to state-of-the-art heuristics. For the considered scenarios, our policies are always at least as good as the best known heuristic, and strictly better in most cases. Moreover, using the computed lower bounds, we show that for all instances the optimality gap never exceeds 5%, and that it is much smaller for most of them.
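
To illustrate the relative value iteration idea referred to in the abstract, the sketch below runs standard grid-discretized relative value iteration on a toy single-product inventory problem with linear dynamics and piecewise-linear convex costs, and reports lower/upper bounds on the optimal average cost obtained from the Bellman residual. All problem data, the truncation to a compact interval, and the grid discretization are illustrative assumptions; the paper's algorithm instead works with piecewise-linear convex relative value function approximations, so this is a minimal sketch of the underlying principle, not the authors' method.

```python
import numpy as np

# Minimal sketch: relative value iteration for a 1-D inventory problem with
# linear dynamics x' = clip(x + a - d, -B, S) and piecewise-linear convex
# immediate cost. All numbers are illustrative assumptions, not from the paper.

S, B = 20.0, 10.0                       # state space truncated to [-B, S] (assumed)
xs = np.linspace(-B, S, 121)            # discretized states
acts = np.linspace(0.0, 15.0, 61)       # order quantities (compact action space)
demand = np.array([2.0, 4.0, 6.0, 8.0]) # discrete demand support (assumed)
p_dem = np.array([0.2, 0.3, 0.3, 0.2])  # demand probabilities (assumed)

h_cost, b_cost, c_ord = 1.0, 9.0, 2.0   # holding, backlog, per-unit ordering cost

def stage_cost(x, a):
    # piecewise-linear convex immediate cost: ordering + holding/backlog
    return c_ord * a + h_cost * np.maximum(x, 0.0) + b_cost * np.maximum(-x, 0.0)

h = np.zeros_like(xs)                   # relative value function on the grid
x_ref = np.argmin(np.abs(xs))           # reference state used for normalization

for it in range(500):
    # one-step lookahead (Bellman operator) over all state/action pairs
    next_x = np.clip(xs[:, None, None] + acts[None, :, None]
                     - demand[None, None, :], -B, S)
    Eh = np.interp(next_x, xs, h) @ p_dem                 # expected continuation value
    Th = np.min(stage_cost(xs[:, None], acts[None, :]) + Eh, axis=1)

    # standard average-cost bounds from the Bellman residual T h - h
    resid = Th - h
    lower, upper = resid.min(), resid.max()
    h = Th - Th[x_ref]                                     # renormalize at reference state
    if upper - lower < 1e-6:
        break

print(f"optimal average cost in [{lower:.4f}, {upper:.4f}] after {it + 1} iterations")
```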

Date

16 Feb 2015

Publication

EJOR

