IEEE Trans. Inf. Theory

On the Statistical Efficiency of the LMS Algorithm with Nonstationary Inputs



A fundamental relationship exists between the quality of an adaptive solution and the amount of data used in obtaining it. Quality is defined here in terms of "misadjustment," the ratio of the excess mean square error (mse) in an adaptive solution to the minimum possible mse. The higher the misadjustment, the lower the quality. The quality of the exact least squares solution is compared with the quality of the solutions obtained by the orthogonalized and the conventional least mean square (LMS) algorithms with stationary and nonstationary input data. When adapting with noisy observations, a filter trained with a finite data sample using an exact least squares algorithm will have a misadjustment given by

M = n/N = (number of weights)/(number of training samples).

If the same adaptive filter were trained with a steady flow of data using an ideal "orthogonalized LMS" algorithm, the misadjustment would be

M = n/(4τ_mse) = (number of weights)/(4 × mse learning time constant),

where τ_mse is measured in training samples. Thus, for a given time constant τ_mse of the learning process, the ideal orthogonalized LMS algorithm will have about as low a misadjustment as can be achieved, since this algorithm performs essentially as an exact least squares algorithm with exponential data weighting. It is well known that when rapid convergence with stationary data is required, exact least squares algorithms can in certain cases outperform the conventional Widrow–Hoff LMS algorithm. It is shown here, however, that for an important class of nonstationary problems, the misadjustment of conventional LMS is the same as that of orthogonalized LMS, which in the stationary case is shown to perform essentially as an exact least squares algorithm. © 1984 IEEE
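The two misadjustment formulas quoted in the abstract can be compared directly. The following sketch (illustrative only; the function names are not from the paper) evaluates M = n/N for exact least squares and M = n/(4τ_mse) for ideal orthogonalized LMS, showing that the two coincide when the learning time constant satisfies τ_mse = N/4:

```python
def misadjustment_exact_ls(n_weights, n_samples):
    """Misadjustment M = n/N for an exact least squares solution
    trained on a finite sample of N noisy observations."""
    return n_weights / n_samples

def misadjustment_orthogonalized_lms(n_weights, tau_mse):
    """Misadjustment M = n/(4*tau_mse) for the ideal orthogonalized
    LMS algorithm, where tau_mse is the mse learning time constant
    measured in training samples."""
    return n_weights / (4.0 * tau_mse)

# Example: a 16-weight adaptive filter.
n = 16
print(misadjustment_exact_ls(n, 1000))           # -> 0.016
print(misadjustment_orthogonalized_lms(n, 250))  # -> 0.016
# With tau_mse = N/4 = 250, orthogonalized LMS attains the same
# misadjustment as exact least squares on N = 1000 samples,
# consistent with the abstract's claim that it performs essentially
# as exact least squares with exponential data weighting.
```

This also makes the data-versus-quality trade-off concrete: halving the misadjustment requires either doubling the training sample N or doubling the adaptation time constant τ_mse.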


01 Jan 1984

