Publication
INFORMS 2022
Short paper
Distributionally Robust Optimization for Input Model Uncertainty in Simulation-based Decision Making
Abstract
We consider a new approach to solving distributionally robust optimization formulations that address nonparametric input model uncertainty in simulation-based decision-making problems. Our approach to these minimax formulations applies stochastic gradient descent to the outer minimization problem and efficiently estimates the gradient of the inner maximization problem through multi-level Monte Carlo randomization. Using theoretical results that shed light on why standard gradient estimators fail, we establish the optimal parametrization of our gradient estimators, trading off computation time against statistical variance. We apply the approach to nonconvex portfolio choice modeling under cumulative prospect theory, where numerical experiments demonstrate its significant benefits over previous related work.
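The sketch below is a minimal illustration (not the paper's code) of the overall structure the abstract describes: stochastic gradient descent on the outer minimization, with the gradient of the inner maximization estimated by a randomized multi-level Monte Carlo scheme. The KL-ball inner problem, the geometric level distribution, and every function name and parameter here are assumptions made for illustration only.

# Illustrative sketch: outer SGD with a randomized multi-level Monte Carlo (MLMC)
# gradient estimator for the inner maximization. All problem details are hypothetical.
import numpy as np

rng = np.random.default_rng(0)

def loss(x, xi):
    # Hypothetical per-scenario loss: quadratic in the decision x with random data xi.
    return 0.5 * np.dot(x - xi, x - xi)

def grad_loss(x, xi):
    return x - xi

def inner_max_grad(x, xis, delta=0.1):
    # Sample-average approximation of the inner max over a KL-divergence ball:
    # the worst-case distribution exponentially tilts the empirical losses, so the
    # gradient of the worst-case expectation is the tilted average of per-scenario gradients.
    losses = np.array([loss(x, xi) for xi in xis])
    w = np.exp((losses - losses.max()) / delta)
    w /= w.sum()
    return sum(wi * grad_loss(x, xi) for wi, xi in zip(w, xis))

def mlmc_grad(x, base_n=4, p_geo=0.5, max_level=10):
    # Randomized MLMC: draw a random level L from a geometric distribution, form the
    # difference between the estimator at 2^(L+1)*base_n samples and an antithetically
    # coupled estimator at 2^L*base_n samples, and reweight by the level probability
    # so the telescoping sum (approximately) removes the bias of the nested estimate.
    L = min(rng.geometric(p_geo) - 1, max_level)
    pL = p_geo * (1 - p_geo) ** L if L < max_level else (1 - p_geo) ** max_level
    n_fine = base_n * 2 ** (L + 1)
    xis = rng.normal(size=(n_fine, x.size))
    g_fine = inner_max_grad(x, xis)
    g_coarse = 0.5 * (inner_max_grad(x, xis[: n_fine // 2])
                      + inner_max_grad(x, xis[n_fine // 2:]))
    g_base = inner_max_grad(x, rng.normal(size=(base_n, x.size)))
    return g_base + (g_fine - g_coarse) / pL

# Outer stochastic gradient descent loop on the decision variable x.
x = np.zeros(3)
for t in range(200):
    x -= 0.05 / np.sqrt(t + 1) * mlmc_grad(x)
print(x)

In this sketch, the geometric level distribution (p_geo) plays the role of the estimator parametrization discussed in the abstract: it governs how often expensive fine-level samples are drawn, and hence the trade-off between computation time and statistical variance.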