Revisiting stochastic loss networks: Structures and approximations
Abstract
We consider fundamental properties of stochastic loss networks, seeking to improve on the so-called Erlang fixed-point approximation. We propose a family of mathematical approximations for estimating the stationary loss probabilities and show that they always converge exponentially fast, provide asymptotically exact results, and yield greater accuracy than the Erlang fixed-point approximation. We further derive structural properties of the inverse of the classical Erlang loss function that characterize the region of link capacities ensuring a given workload is served within a prescribed set of loss probabilities. We then exploit these results to efficiently solve a general class of stochastic optimization problems involving loss networks. Computational experiments investigate various issues of both theoretical and practical interest, and demonstrate the benefits of our approach.
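As background for the baseline the abstract refers to, the following is a minimal Python sketch of the classical Erlang fixed-point (reduced-load) approximation; it is not the family of approximations proposed in the paper. The function names, the route-link incidence encoding (A, nu, C), and the successive-substitution iteration are illustrative assumptions based on the standard textbook formulation.

```python
import math


def erlang_b(load, capacity):
    """Erlang loss formula E(load, capacity) via the stable recursion
    B_0 = 1, B_c = load*B_{c-1} / (c + load*B_{c-1}).  Capacity is an integer."""
    b = 1.0
    for c in range(1, capacity + 1):
        b = load * b / (c + load * b)
    return b


def erlang_fixed_point(A, nu, C, tol=1e-10, max_iter=10_000):
    """Erlang fixed-point (reduced-load) approximation.

    A[j][r] = 1 if route r uses link j, nu[r] = offered load of route r,
    C[j]   = integer capacity of link j.  Returns per-link blocking
    probabilities B solving B_j = E(rho_j, C_j), where rho_j is the load
    offered to link j thinned by blocking on the other links of each route.
    """
    J, R = len(C), len(nu)
    B = [0.0] * J
    for _ in range(max_iter):
        B_new = []
        for j in range(J):
            rho_j = 0.0
            for r in range(R):
                if A[j][r]:
                    # thin route r's load by independent blocking elsewhere
                    passed = 1.0
                    for i in range(J):
                        if i != j and A[i][r]:
                            passed *= (1.0 - B[i])
                    rho_j += nu[r] * passed
            B_new.append(erlang_b(rho_j, C[j]))
        if max(abs(bn - b) for bn, b in zip(B_new, B)) < tol:
            return B_new
        B = B_new
    return B


if __name__ == "__main__":
    # Two links shared by three routes: route 0 uses both links,
    # routes 1 and 2 use one link each (a small hypothetical example).
    A = [[1, 1, 0],   # link 0 is used by routes 0 and 1
         [1, 0, 1]]   # link 1 is used by routes 0 and 2
    nu = [4.0, 2.0, 3.0]   # offered loads per route
    C = [8, 8]             # link capacities
    B = erlang_fixed_point(A, nu, C)
    # Approximate loss probability of route 0 under link independence:
    loss_route0 = 1.0 - (1.0 - B[0]) * (1.0 - B[1])
    print(B, loss_route0)
```

Under this approximation, a route's loss probability is estimated by treating its links as blocking independently, which is exact only in certain limiting regimes; the paper's contribution is a family of approximations with stronger convergence and accuracy guarantees than this fixed-point scheme.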