An optimality principle for Markovian decision processes

Abstract

The following optimality principle is established for finite undiscounted or discounted Markov decision processes: If a policy is (gain, bias, or discounted) optimal in one state, it is also optimal for all states reachable from this state using this policy. The optimality principle is used constructively to demonstrate the existence of a policy that is optimal in every state, and then to derive the coupled functional equations satisfied by the optimal return vectors. This reverses the usual sequence, in which one first establishes (via policy iteration or linear programming) the solvability of the coupled functional equations, and then shows that the solution is indeed the optimal return vector and that the maximizing policy for the functional equations is optimal for every state.
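The abstract refers to the coupled functional equations without stating them. As a sketch only, the standard form for the undiscounted (average-reward) case uses notation assumed here rather than taken from the paper: rewards $r(i,a)$, transition probabilities $p(j \mid i,a)$, action sets $A(i)$, gain vector $g$, and bias vector $h$:

$$g(i) = \max_{a \in A(i)} \sum_{j} p(j \mid i,a)\, g(j),$$

$$g(i) + h(i) = \max_{a \in B(i)} \Big[ r(i,a) + \sum_{j} p(j \mid i,a)\, h(j) \Big], \qquad B(i) = \Big\{ a \in A(i) : g(i) = \sum_{j} p(j \mid i,a)\, g(j) \Big\}.$$

For the discounted case with discount factor $0 \le \beta < 1$, the corresponding single functional equation is

$$v_\beta(i) = \max_{a \in A(i)} \Big[ r(i,a) + \beta \sum_{j} p(j \mid i,a)\, v_\beta(j) \Big].$$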

Date

01 Jan 1976

Publication

J. Math. Anal. Appl.
