Publication
IEEE Trans. Inf. Theory
Paper

The Least Mean Fourth (LMF) Adaptive Algorithm and Its Family

Abstract

New steepest descent algorithms for adaptive filtering have been devised which allow error minimization in the mean fourth and mean sixth, etc., sense. During adaptation, the weights undergo exponential relaxation toward their optimal solutions. Time constants have been derived, and surprisingly they turn out to be proportional to the time constants that would have been obtained if the steepest descent least mean square (LMS) algorithm of Widrow and Hoff had been used. The new gradient algorithms are insignificantly more complicated to program and to compute than the LMS algorithm. Their general form is $W_{j+1} = W_j + 2\mu K \epsilon_j^{2K-1} X_j$, where $W_j$ is the present weight vector, $W_{j+1}$ is the next weight vector, $\epsilon_j$ is the present error, $X_j$ is the present input vector, $\mu$ is a constant controlling stability and rate of convergence, and $2K$ is the exponent of the error being minimized. Conditions have been derived for weight-vector convergence of the mean and of the variance for the new gradient algorithms. The behavior of the least mean fourth (LMF) algorithm is of special interest. In comparing this algorithm to the LMS algorithm, when both are set to have exactly the same time constants for the weight relaxation process, the LMF algorithm, under some circumstances, will have a substantially lower weight noise than the LMS algorithm. It is possible, therefore, that a minimum mean fourth error algorithm can do a better job of least squares estimation than a mean square error algorithm. This intriguing concept has implications for all forms of adaptive algorithms, whether they are based on steepest descent or otherwise. © 1984 IEEE
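
To make the update rule concrete, the following is a minimal NumPy sketch of the mean-2K recursion given above, applied to a toy system identification problem. The filter length, step size, and signal model are illustrative assumptions, not values from the paper.

```python
import numpy as np

def mean_2k_update(w, x, d, mu, K=2):
    """One step of the general steepest descent recursion
    W_{j+1} = W_j + 2*mu*K * eps_j**(2K - 1) * X_j.
    K = 1 gives the Widrow-Hoff LMS rule; K = 2 gives least mean fourth (LMF).
    """
    e = d - w @ x                                    # present error eps_j
    return w + 2 * mu * K * e ** (2 * K - 1) * x, e

# Illustrative use: identify an unknown 3-tap FIR system from noisy data.
rng = np.random.default_rng(0)
h = np.array([0.5, -0.3, 0.1])     # unknown system (assumed for this demo)
w = np.zeros_like(h)               # adaptive weight vector W_j
mu = 1e-3                          # constant controlling stability and rate
for _ in range(20_000):
    x = rng.standard_normal(3)                   # present input vector X_j
    d = h @ x + 0.01 * rng.standard_normal()     # desired response plus noise
    w, _ = mean_2k_update(w, x, d, mu, K=2)      # LMF (mean fourth) update
print(w)  # approaches h as adaptation proceeds
```

Setting K = 1 in the same function recovers the LMS recursion, so one routine exercises the whole algorithm family.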

Date

01 Jan 1984
