Publication
ACC 2023
Conference paper
Optimal Control via Linearizable Deep Learning
Abstract
Deep learning models are frequently used to capture relations between inputs and outputs and to predict operating costs in dynamical systems. Computing optimal control policies from the resulting regression models, however, is challenging because of the nonlinearity and nonconvexity of deep learning architectures. To address this issue, we propose in this paper a linearizable approach to designing optimal control policies based on deep learning models, handling both continuous and discrete action spaces. When piecewise linear activation functions are used, one can construct an equivalent representation of recurrent neural networks as a set of mixed-integer linear constraints. The optimal control problem then reduces to a mixed-integer linear program (MILP), which can be solved with off-the-shelf MILP optimization solvers. Numerical experiments on standard reinforcement learning benchmarks attest to the good performance of the proposed approach.
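To illustrate the core idea, below is a minimal sketch of the standard big-M MILP encoding of a ReLU activation, written with PuLP and its bundled CBC solver. The paper's construction applies this per-neuron encoding across a recurrent network unrolled over a horizon; for brevity, the sketch encodes a single ReLU layer and minimizes a linear cost on its outputs over box-constrained inputs. The weights W, biases b, cost vector c, and the bound BIG_M are illustrative assumptions, not values from the paper.

```python
# Big-M MILP encoding of y = max(0, Wu + b), a minimal sketch.
# Assumes PuLP with the default CBC solver; all numbers are hypothetical.
import pulp

BIG_M = 100.0                  # assumed valid bound on every pre-activation
W = [[1.0, -2.0], [0.5, 1.0]]  # hypothetical 2x2 weight matrix
b = [0.5, -1.0]                # hypothetical biases
c = [1.0, 1.0]                 # hypothetical linear cost on layer outputs

prob = pulp.LpProblem("relu_control", pulp.LpMinimize)

# Control inputs, box-constrained as in a one-step control problem.
u = [pulp.LpVariable(f"u{j}", lowBound=-1.0, upBound=1.0) for j in range(2)]

# For each neuron i: pre-activation s_i = W_i u + b_i, output
# y_i = max(0, s_i), encoded exactly with a binary indicator z_i.
y = []
for i in range(2):
    s_i = pulp.lpSum(W[i][j] * u[j] for j in range(2)) + b[i]
    y_i = pulp.LpVariable(f"y{i}", lowBound=0.0)
    z_i = pulp.LpVariable(f"z{i}", cat="Binary")
    prob += y_i >= s_i                      # y >= s always
    prob += y_i <= s_i + BIG_M * (1 - z_i)  # z = 1  =>  y = s (active neuron)
    prob += y_i <= BIG_M * z_i              # z = 0  =>  y = 0 (inactive neuron)
    y.append(y_i)

# Minimize the network-predicted cost over admissible controls.
prob += pulp.lpSum(c[i] * y[i] for i in range(2))
prob.solve(pulp.PULP_CBC_CMD(msg=False))
print("optimal u:", [v.value() for v in u])
print("predicted cost:", pulp.value(prob.objective))
```

Because the encoding is exact rather than a relaxation, the MILP optimum coincides with the true minimum of the learned piecewise linear model; in practice, tightening BIG_M to per-neuron activation bounds is what keeps such formulations tractable for off-the-shelf solvers.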