Distributionally robust optimization for sequential decision-making
The distributionally robust Markov decision process (MDP) approach seeks a strategy that achieves the maximal expected total reward under the most adversarial distribution of the uncertain parameters. In this paper, we study distributionally robust MDPs whose ambiguity sets are described in a format that easily accommodates both generalized-moment and statistical-distance information about the uncertainty. In this way, we generalize existing work on distributionally robust MDPs with generalized-moment-based and statistical-distance-based ambiguity sets, incorporating information from the former class, such as moments and dispersions, into the latter class, which critically depends on empirical observations of the uncertain parameters. We show that under this format of ambiguity sets, the resulting distributionally robust MDP remains tractable under mild technical conditions, and that a distributionally robust strategy can be constructed by solving a sequence of one-stage convex optimization subproblems through a Bellman-type backward induction.
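To illustrate the backward-induction structure, the following is a minimal sketch of robust finite-horizon dynamic programming in which the ambiguity set is a finite collection of candidate models; this is a toy stand-in for the paper's moment- and distance-based ambiguity sets, where the worst case would instead be found by solving a one-stage convex subproblem rather than taking a plain minimum. All names and the finite-model representation here are illustrative assumptions, not the paper's formulation.

```python
import numpy as np

def robust_backward_induction(horizon, ambiguity):
    """Finite-horizon robust dynamic programming over a finite
    ambiguity set (an illustrative simplification: the paper's
    ambiguity sets are infinite and require convex optimization
    at each stage instead of an elementwise minimum).

    ambiguity: list of candidate models (P, R), where
      P has shape (n_actions, n_states, n_states) -- transition
        probabilities P[a, s, s'], and
      R has shape (n_actions, n_states) -- expected rewards R[a, s].
    Returns the robust value function and a robust policy.
    """
    n_actions, n_states = ambiguity[0][1].shape
    V = np.zeros(n_states)                       # terminal value
    policy = np.zeros((horizon, n_states), dtype=int)
    for t in reversed(range(horizon)):
        # Worst-case Q-values over all candidate models (adversary's move)
        Q = np.min([R + P @ V for P, R in ambiguity], axis=0)
        policy[t] = Q.argmax(axis=0)             # greedy robust action per state
        V = Q.max(axis=0)                        # robust Bellman update
    return V, policy
```

By construction, the robust value is never larger than the value computed under any single model in the ambiguity set, which mirrors the "maximal expected reward under the most adversarial distribution" criterion in the abstract.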