Publication
QEST 2006
Conference paper

Bound-preserving composition for Markov reward models

Abstract

Stochastic orders can be applied to Markov reward models and used to aggregate them while introducing only a bounded error. Aggregation reduces the number of states in a model, mitigating the state-space explosion and enabling the wider use of Markov reward models. Existing aggregation techniques based on stochastic orders are limited by a combination of strong requirements on the structure of the model and the complexity of determining the stochastic order and generating the aggregated model. We develop a set of general conditions under which models can be analyzed and aggregated compositionally, dramatically lowering the complexity of aggregating and solving the model. When these conditions are combined with a recently developed general stochastic order for Markov reward models, significantly larger models can be solved than was previously possible for a large class of models. © 2006 IEEE.
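
The following is a minimal, self-contained sketch of the bounded-error aggregation idea the abstract describes, not the paper's stochastic-order construction. A small discrete-time Markov reward model is aggregated over an ordinarily lumpable partition (an assumption made here purely for simplicity), and the per-block minimum and maximum rewards give guaranteed lower and upper bounds on the steady-state expected reward. All transition probabilities, rewards, and the partition are invented for illustration.

# Illustrative sketch: aggregate a 4-state Markov reward model to 2 states
# and bound its steady-state expected reward. Not the paper's method.
import numpy as np

# Full model: transition matrix P and per-state rewards r (invented values).
P = np.array([
    [0.5, 0.3, 0.1, 0.1],
    [0.2, 0.6, 0.1, 0.1],
    [0.1, 0.1, 0.4, 0.4],
    [0.1, 0.1, 0.5, 0.3],
])
r = np.array([1.0, 2.0, 5.0, 6.0])
blocks = [[0, 1], [2, 3]]   # partition chosen to be lumpable for this P

def steady_state(M):
    """Stationary distribution pi with pi M = pi and sum(pi) = 1."""
    n = M.shape[0]
    A = np.vstack([M.T - np.eye(n), np.ones(n)])
    b = np.append(np.zeros(n), 1.0)
    pi, *_ = np.linalg.lstsq(A, b, rcond=None)
    return pi

# Exact answer on the full (larger) model, kept only for comparison.
exact = steady_state(P) @ r

# Aggregated model: under lumpability every state in a block has the same
# probability of jumping into each block, so one representative row suffices.
Q = np.array([[P[b[0], c].sum() for c in blocks] for b in blocks])
pi_agg = steady_state(Q)

# Replacing each block's reward by its min/max brackets the exact reward,
# because the aggregated block probabilities equal the summed full-model ones.
lower = sum(pi_agg[i] * r[b].min() for i, b in enumerate(blocks))
upper = sum(pi_agg[i] * r[b].max() for i, b in enumerate(blocks))

print(f"exact = {exact:.3f}  bounds = [{lower:.3f}, {upper:.3f}]")

Running the sketch prints an exact reward that falls inside the computed bounds; the gap between the bounds is the "bounded error" paid for solving the smaller, aggregated model.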
