Publication
IEEE TACON
Paper
An Improved Algorithm for the Solution of Discrete Regulation Problems
Abstract
This paper describes an improved algorithm for obtaining the steady-state feedback-gain matrix from the discrete matrix Riccati equation. This is of importance in the steady-state optimization of discrete linear systems with quadratic performance criteria. The solution of the Riccati equation by the natural iteration technique suggested by its dynamic programming derivation requires, in general, $n(3n^2 + 3r^2 + 3nr + n + 2r)/2 + r^2(r+1)/2$ multiplications per step, where $n$ is the order of the system and $r$ is the number of inputs. The improved algorithm requires only $r(n^2 + 2nr + n)/2 + r^2(r+1)/2$ multiplications per step, may converge in fewer iterations, and requires less storage. For the special case $R = 0$ (no weight on control effort), the number of multiplications can be reduced further to $r(n-r)(n+r+1)/2 + r^2(r+1)/2$ per iteration. The simplifications described above are accomplished in two ways. First, the characteristics of recently published canonical forms for controllable systems are exploited to reduce the number of free parameters appearing in the system matrices. Second, the concept of feedback-gain equivalence of performance criteria is used to derive a simply computed canonical form for the weighting matrix. © 1967, IEEE. All rights reserved.
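For orientation, the sketch below illustrates the baseline that the abstract refers to: the "natural" dynamic programming iteration of the discrete matrix Riccati equation, run until the feedback gain settles, with the steady-state gain recovered as $K = (R + B^{\mathsf T}PB)^{-1}B^{\mathsf T}PA$ for the control law $u_k = -Kx_k$. This is a minimal Python/NumPy sketch of that standard recursion, not the paper's improved algorithm; the matrices A, B, Q, R, the tolerance, and the function name are illustrative assumptions only.

```python
# Minimal sketch of the standard discrete Riccati iteration (the "natural"
# dynamic programming recursion), NOT the paper's improved algorithm.
# A, B, Q, R, tolerances, and names below are illustrative assumptions.
import numpy as np

def steady_state_gain(A, B, Q, R, tol=1e-10, max_iter=10_000):
    """Iterate P_{k+1} = Q + A'P_k A - A'P_k B (R + B'P_k B)^{-1} B'P_k A
    and return the converged feedback gain K = (R + B'PB)^{-1} B'PA."""
    P = Q.copy()                                    # start from the state weight
    for _ in range(max_iter):
        BtP = B.T @ P
        K = np.linalg.solve(R + BtP @ B, BtP @ A)   # current feedback gain
        P_next = Q + A.T @ P @ (A - B @ K)          # Riccati update
        P_next = 0.5 * (P_next + P_next.T)          # symmetrize against round-off
        if np.max(np.abs(P_next - P)) < tol:
            return K, P_next
        P = P_next
    raise RuntimeError("Riccati iteration did not converge")

# Example: a controllable second-order system with one input (n = 2, r = 1).
A = np.array([[1.0, 0.1], [0.0, 1.0]])
B = np.array([[0.0], [0.1]])
Q = np.eye(2)
R = np.array([[0.01]])
K, P = steady_state_gain(A, B, Q, R)
print("steady-state gain K =", K)
```

Each pass of this recursion performs the full matrix products that drive the multiplication count quoted above; the paper's contribution is to reduce that per-step cost by exploiting canonical forms for controllable systems and a canonical form for the weighting matrix.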