A COMPARATIVE ANALYSIS OF
ADAPTIVE
FILTERING ALGORITHMS
by
Andreas Taber
B.S., Towson State University, 1986
A thesis submitted to the
University of Colorado at Denver
in partial fulfillment
of the requirements for the degree of
Master of Science
Electrical Engineering
1997
This thesis for the Master of Science
degree by
Andreas Taber
has been approved
by
Miloje Radenkovic
Tamal Bose
Tom Altman
Date
Taber, Andreas (M.S., Electrical Engineering)
A Comparative Analysis of Adaptive Filtering Algorithms
Thesis directed by Assistant Professor Miloje S. Radenkovic
ABSTRACT
Adaptive algorithms are employed as a means of filtering a stationary signal. A
comparative analysis between the Least Mean Squares algorithm, the Normalized Least
Mean Squares algorithm and the Recursive Least Squares algorithm is
undertaken to determine which most successfully converges upon the filter
coefficients required for optimal filtering. From a variety of examples using Matlab,
the Recursive Least Squares algorithm yields the best results.
This abstract accurately represents the content of the candidate's thesis. I recommend
its publication.
Signed
Miloje S. Radenkovic
ACKNOWLEDGMENT
The author wishes to thank his wife, Renee, and his advisor, Miloje, for their
continuous support. Without their help, the hill would have been a mountain.
CONTENTS
Chapter
1. Introduction ............................................ 1
2. Adaptive Noise Canceler ................................. 3
2.1 Noise Analysis ......................................... 5
2.2 Noise Minimization ..................................... 6
2.3 AR, MA, and ARMA Models ................................ 7
2.4 Cost Function .......................................... 9
2.5 Applications ........................................... 12
3. Least Mean Squares Algorithm ............................ 14
3.1 LMS Convergence ........................................ 17
3.2 LMS Robustness ......................................... 21
3.3 Normalized Least Mean Squares Algorithm ................ 21
4. Recursive Least Squares Algorithm ....................... 23
4.1 RLS Convergence ........................................ 25
5. Experiments and Results ................................. 28
5.0.1 Experiment I ......................................... 28
5.0.2 Experiment II ........................................ 29
5.0.3 Experiment III ....................................... 30
6. Conclusion .............................................. 60
Appendix
A.1 Matlab Programs ........................................ 62
Glossary ................................................... 80
References ................................................. 81
Endnotes ................................................... 82
1.0 Introduction
The 1950s witnessed the development of filters whose parameter coefficients can adjust
almost instantly to an incoming signal in order to extract the requisite information.
Known as adaptive filters, the impetus for their development was manifold. So vast
have their applications been that their use is now categorized into four basic classes:
identification, inverse modeling, prediction and interference canceling. Of the four,
this thesis is concerned with interference canceling, in particular, adaptive noise
cancellation.
The earliest work in adaptive noise cancellation started with two independent
researchers: Kelly of Bell Telephone Laboratories (1965) and Widrow of Stanford
University (1965). Kelly's work focused on developing a method to cancel the echoes
experienced during a telephone conversation, while Widrow's work entailed the
development of a device known as an adaptive line enhancer to cancel 60-Hz
interference at the output of an electrocardiographic amplifier and recorder [1]. Since
that time, a number of other adaptive noise cancellation schemes have been invented.
The implementation of an adaptive filter entails the development of an algorithm
whose iterative process not only has a fast rate of convergence upon the filter
coefficients but also maintains good tracking during the filters use. Of the variety of
algorithms that have been developed thus far, only the LMS, NLMS and RLS
algorithms will be studied. That is, a detailed analysis of each algorithm and their
comparative noise cancellation efficiencies will follow.
The format of this thesis has been established in such a way as to provide a
thorough understanding of the aforementioned algorithms. Beginning with the
development of a noise cancellation system and its respective probabilistic models in
chapter 2 (Adaptive Noise Canceler), chapters 3 and 4 develop the least mean squares
and the least squares algorithms respectively. Chapter 5 (Experiments and Results)
investigates the implementation of these algorithms in various adaptive noise filters. A
discussion and conclusion of the results ensue in chapter 6 (Conclusion).
2.0 Adaptive Noise Canceler
The process of adaptive noise cancellation is based upon the system shown in
figure 1.0.
As is illustrated, the adaptive noise canceler has two inputs: one from the primary
sensor and the other from the reference sensor. The signal received by the primary
sensor consists of some source signal, s(n), corrupted by an additive noise, v0(n),
which combined produce the primary signal

d(n) = s(n) + v0(n). (2.0.1)
Since s(n) and v0(n) are generated by two different sources, they are considered to be
independent signals. Consequently, the mean of s(n) is derived independently of v0(n),
i.e., the cross-correlation of these signals is the product of their means:

E[s(n)v0(n)] = E[s(n)]E[v0(n)], (2.0.2)

and their covariance

COV(s(n)v0(n)) = E[s(n)v0(n)] - E[s(n)]E[v0(n)] = 0. (2.0.3)

For the noise signals v0(n) and v1(n), whose source is the same, independence is
not a factor and therefore

E[v0(n)v1(n)] ≠ E[v0(n)]E[v1(n)]. (2.0.4)

Instead, there is some unknown value r(n-k) for a lag k; that is,

r(n-k) = E[v0(n)v1(n-k)] for all k, (2.0.5)

and their covariance

COV[v0(n)v1(n-k)] = r(n-k) - E[v0(n)]E[v1(n-k)] ≠ 0. (2.0.6)

By definition, independent signals are uncorrelated: their covariance is zero. For
correlated signals, the covariance is not zero. Hence, s(n) and v0(n) are uncorrelated
with each other while v0(n) and v1(n) are correlated to one another. By the same
reasoning, s(n) and v1(n) are independent and uncorrelated.
Upon input to the adaptive noise canceler, the reference signal, v1(n), is
processed by the adaptive filter. The result is the output signal

y(n) = Σ_{k=0}^{M-1} f_k(n) v1(n-k), (2.0.7)

where f_k(n) are the filter coefficients. If these filter coefficients are such that
the output

y(n) = v0(n), (2.0.8)

then the error signal e(n) becomes

e(n) = d(n) - y(n) = s(n) + v0(n) - v0(n) = s(n). (2.0.9)

In words, the system output e(n), given the proper filter coefficients, becomes the
uncorrupted signal generated by the signal source. This desired outcome is obtained
only through perfect filtering. For this to hold, an investigation of the noise source
and its minimization is in order.
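The two-sensor model above lends itself to a quick numerical check. The thesis's own programs are in Matlab; the following Python/NumPy sketch (the filter coefficients, signal period, and sample count are made up for illustration) shows that s(n) and v0(n) are nearly uncorrelated while v0(n) and v1(n), sharing a common source, are strongly correlated:

```python
import numpy as np

# Sketch of the two-sensor signal model of section 2.0 (illustrative values).
rng = np.random.default_rng(0)

n = np.arange(1000)
s = np.sin(2 * np.pi * n / 50)           # source signal s(n)
w = rng.standard_normal(n.size)          # common driving white noise

v1 = w                                   # reference-sensor noise v1(n)
v0 = np.convolve(w, [0.8, -0.3])[: n.size]  # correlated primary noise v0(n)

d = s + v0                               # primary signal d(n) = s(n) + v0(n)

# s and v0 come from independent sources, so their sample correlation is
# near zero; v0 and v1 share a source and are strongly correlated.
corr_s_v0 = np.corrcoef(s, v0)[0, 1]
corr_v0_v1 = np.corrcoef(v0, v1)[0, 1]
print(round(abs(corr_s_v0), 2), round(corr_v0_v1, 2))
```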
2.1 Noise Analysis
The noise signals v0(n) and v1(n) are assumed to be white noise whose mean is
zero. The justification of this assumption rests upon the fact that a nonzero mean, or
direct current value, can easily be removed by a highpass filter. Under this
condition, the mean of the product of s(n) and v0(n), Eq. (2.0.2), as well as that of s(n)
and v1(n), are zero, i.e.,

E[s(n)v0(n)] = E[s(n)v1(n)] = 0. (2.1.1)

Furthermore, the covariance of v0(n) and v1(n), Eq. (2.0.6), reduces to the
correlation

COV[v0(n)v1(n-k)] = r(n-k). (2.1.2)

To successfully establish these conditions, the placement of the reference sensor must
be such that it cannot detect the source signal; otherwise v1(n) and s(n) will be
correlated. In addition, it must be placed in close proximity to the noise source so that
v1(n) is highly correlated to the noise component of d(n), the primary sensor output.
If this relationship is not maximized, the correlation between these two signals will
approach zero; that is,

E[d(n)v1(n-k)] = 0 for all k. (2.1.3)

Given this state, the adaptive filter will not function properly and will actually force the
filter output y(n) to zero. The resulting adaptive noise canceler will have no effect on
minimizing d(n). Hence, the maximization of the corresponding signal-to-noise ratio
of the output signal e(n) is sacrificed.
2.2 Noise Minimization
The average power of the output signal,

E[e²] = E[s²] + E[(v0 - y)²] + 2E[s(v0 - y)], (2.2.1)

is a result of taking the square of

e(n) = s(n) + v0(n) - y(n) (2.2.2)

and finding its mean. Observing that the source signal s(n) and the noise signals
v0(n) and v1(n) are uncorrelated reduces Eq. (2.2.1) to

E[e²] = E[s²] + E[(v0 - y)²]. (2.2.3)

Hence, to maximize the signal-to-noise ratio of the output signal, the minimization of
the mean-square value of the noise v0(n) - y(n) is required. This holds true because
the power of the signal s(n) remains essentially constant. That is,

E[e²]_min = E[s²] + E[(v0 - y)²]_min. (2.2.4)

To optimize this outcome, the relationship of the filter coefficients to the error
signal e(n) must be established. The idea is that the proper filter will cause the
signal y(n) to converge upon v0(n) and thereby force v0(n) - y(n) to a minimum,
ideally zero.

2.3 AR, MA and ARMA Models
The idea that a random signal can be represented as the output of a linear
shift-invariant filter excited by white noise was first theorized by Wold in 1938.
Mathematically, this can be expressed as

x(n) = Σ_{i=0}^{∞} h(i) w(n-i), (2.3.1)

where h(i) are the filter coefficients and w(n-i) is the incoming noise. A model of this
system is illustrated in figure 2.0.
White noise w(n-i) → FILTER → Nondeterministic stationary random sequence x(n)
Figure 2.0 Random Signal (Stochastic) Model
The design of a filter with infinitely many coefficients is not practical. Therefore,
Wold's theorem is expressed as

x(n) = Σ_{k=0}^{N} b(k) w(n-k) - Σ_{k=1}^{M} a(k) x(n-k), (2.3.2)

where b(k) and a(k) are the filter parameters. Such a filter model is known as an
Autoregressive Moving Average (ARMA) model.

If N = 0 in Eq. (2.3.2), then

x(n) = b(0)w(n) - Σ_{k=1}^{M} a(k) x(n-k). (2.3.3)

This all-pole model invokes an Autoregressive (AR) or AR(M) process.

If M = 0 in Eq. (2.3.2), then

x(n) = Σ_{k=0}^{N} b(k) w(n-k). (2.3.4)

This all-zero model generates a Moving Average (MA) or MA(N) process.
The above models represent two types of filters commonly referred to as finite
impulse response (FIR) and infinite impulse response (IIR) filters. The MA model is
an FIR filter while the AR and ARMA models are both IIR filters. Note that the MA,
AR and ARMA designations are reserved for systems filtering white noise. It
would be improper to term either an FIR or IIR filter by these model names
if the incoming signal were other than white noise.

Due to the complexity of the ARMA model and the recursive nature of the AR
model, the MA model, or FIR filter, is employed in this thesis. All subsequent
algorithms will be designed with this filter in mind.
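The MA (all-zero) model of Eq. (2.3.4) is simply a difference equation, which can be sketched directly. The thesis's programs are in Matlab; this illustrative Python/NumPy version (the coefficients b(k) are made up) checks the direct implementation against a truncated convolution:

```python
import numpy as np

# Minimal sketch of the MA (all-zero / FIR) model of Eq. (2.3.4):
#   x(n) = sum_{k=0}^{N} b(k) w(n-k), driven by white noise w(n).
rng = np.random.default_rng(1)

b = np.array([1.0, 0.5, 0.25])           # MA(2) coefficients b(0), b(1), b(2)
w = rng.standard_normal(10_000)          # white noise input

# Direct implementation of the difference equation.
x = np.zeros_like(w)
for n in range(w.size):
    for k, bk in enumerate(b):
        if n - k >= 0:
            x[n] += bk * w[n - k]

# The same MA process via convolution, truncated to the input length.
x_conv = np.convolve(w, b)[: w.size]
print(np.allclose(x, x_conv))
```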
2.4 Cost Function
Having established the relationship of the filter coefficients to the incoming
noise signal v1(n), the error signal e(n) can be written in complex notation as

e(n) = d(n) - Σ_{k=0}^{M-1} f_k v1(n-k). (2.4.1)

Accordingly, the mean square of e(n), or J, the cost function, results in

J = E[e(n)e*(n)] (2.4.2)
  = E[|d(n)|²] - Σ_{k=0}^{M-1} f_k* E[v1(n-k)d*(n)] - Σ_{k=0}^{M-1} f_k E[v1*(n-k)d(n)]
  + Σ_{k=0}^{M-1} Σ_{i=0}^{M-1} f_k* f_i E[v1(n-k)v1*(n-i)],

from which four expectations have evolved. These are discussed as follows:
1. The first expectation is the variance of the signal d(n),

σ_d² = E[|d(n)|²], (2.4.3)

whose mean is assumed to be zero since the means of v0(n) and v1(n) are both
zero.

2. The second and third expectations are respectively

p(k) = E[v1(n-k)d*(n)] (2.4.4)

and

p*(k) = E[v1*(n-k)d(n)], (2.4.5)

where p(k) is the cross-correlation.

3. The final expectation is the autocorrelation of the noise input to the filter for a lag
of i-k, i.e.,

r(i-k) = E[v1(n-k)v1*(n-i)]. (2.4.6)

With the proper substitutions, the cost function can be rewritten as

J = σ_d² - Σ_{k=0}^{M-1} f_k* p(k) - Σ_{k=0}^{M-1} f_k p*(k) + Σ_{k=0}^{M-1} Σ_{i=0}^{M-1} f_k* f_i r(i-k). (2.4.7)

That is, the cost function is a second-order function of the filter coefficients, given that
both the input signal and the filter's response are in agreement with Wold's theorem.
Consequently, this function can be visualized as a bowl-shaped (M+1)-dimensional
surface with M degrees of freedom represented by the filter parameters. Therefore,
the minimum of the cost function is at the bottom of the bowl. This minimum value
of J, written as J_min, can be obtained by setting the gradient vector ∇J to zero,
differentiating the cost function with respect to the filter coefficient f_k, where

f_k = a_k + j b_k, (2.4.8)

∇_k J = ∂J/∂a_k + j ∂J/∂b_k = -2p(k) + 2 Σ_{i=0}^{M-1} f_i r(i-k), (2.4.9)

and

∇_k J = 0. (2.4.10)

In other words,

Σ_{i=0}^{M-1} f_{oi} r(i-k) = p(k),  k = 0, 1, ..., M-1. (2.4.11)
These equations are termed the Wiener-Hopf equations. Expressed in matrix form,
they become

R f_o = p, (2.4.12)

where R is an M-by-M correlation matrix:

R = [ r(0)       r(1)       ...  r(M-1)
      r*(1)      r(0)       ...  r(M-2)
      ...
      r*(M-1)    r*(M-2)    ...  r(0)   ],

p an M-by-1 cross-correlation vector:

p = [p(0), p(1), ..., p(M-1)]^T, (2.4.13)

and f_o an M-by-1 optimum filter coefficient vector:

f_o = [f_{o0}, f_{o1}, ..., f_{o(M-1)}]^T. (2.4.14)
To solve these equations for f_o, the correlation matrix R is assumed to be
nonsingular, i.e., its inverse exists. Therefore, the Wiener-Hopf equations can be
rewritten as

f_o = R^{-1} p. (2.4.15)

Computation of the optimum filter coefficients requires knowledge of both the
correlation matrix and the cross-correlation vector. When such information is not
easily obtainable, a stochastic gradient algorithm is used.
2.5 Applications
Adaptive filters have been employed with regard to noise cancellation in various
capacities. As stated earlier, one such use has been to cancel the 60Hz interference in
electrocardiography (ECG). As a tool to monitor heart patients, this method is very
sensitive to small electrical discharges. Minimizing noise interference is therefore
critical to maximizing the reliability of the apparatus. One such noise picked up by the
receiving electrode is from the surrounding electronic equipment which generates a
60-Hz periodic waveform. By applying the adaptive noise cancellation method
diagrammed in figure 1.0, the noise can be successfully canceled.

In addition, another application for the adaptive noise canceler has been to reduce
the noise experienced by pilots from the aircraft's engines. With communications
between all the individuals in the cockpit and the control tower being especially vital to
a successful flight, the need for minimized signal interference is stressed. Again, by
applying an adaptive noise canceler, adequate noise minimization can be achieved.

Obviously, adaptive noise cancellation can be found in many systems as a means to
cancel a random noise signal. The design of each application will be consistent with
that illustrated in figure 1.0.
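The Wiener-Hopf solution f_o = R^{-1} p of section 2.4 can be checked numerically. This illustrative Python/NumPy sketch (the thesis itself uses Matlab; the "true" filter h and the sample count are made up) estimates R and p from samples of a white reference noise and solves for the optimum coefficients:

```python
import numpy as np

# Numerical sketch of the Wiener-Hopf solution f_o = R^{-1} p (section 2.4).
rng = np.random.default_rng(2)

M = 4
h = np.array([0.9, -0.4, 0.2, 0.1])      # unknown filter the canceler must match
v1 = rng.standard_normal(50_000)          # reference (white) noise v1(n)
d = np.convolve(v1, h)[: v1.size]         # primary noise v0(n) = h * v1(n)

# Sample estimates of the M-by-M correlation matrix R (Eq. 2.4.12) and the
# M-by-1 cross-correlation vector p (Eq. 2.4.13); rows of X hold v1(n-k).
X = np.array([np.roll(v1, k) for k in range(M)])
X[:, :M] = 0.0                            # zero the wrapped-around samples
R = X @ X.T / v1.size
p = X @ d / v1.size

f_o = np.linalg.solve(R, p)               # f_o = R^{-1} p, no explicit inverse
print(np.round(f_o, 2))
```

Solving the linear system rather than forming R^{-1} explicitly is the usual numerical practice when R is well conditioned.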
3.0 Least Mean Squares Algorithm
The Least Mean Squares algorithm, originated by Widrow and Hoff (1960), is one
of a variety of stochastic gradient algorithms developed to iteratively converge upon
the filter coefficients necessary to minimize J, the cost function. Two significant
attributes of the algorithm are that it does not require measurements of the pertinent
correlation functions, nor does it require matrix inversion. Hailed for its simplicity, it
is the benchmark by which other adaptive filtering algorithms are rated.
The development of this algorithm begins with the gradient of the cost function
derived in chapter 2, presented here as a gradient vector. That is,

∇J(n) = [ ∂J(n)/∂a_0(n) + j ∂J(n)/∂b_0(n), ..., ∂J(n)/∂a_{M-1}(n) + j ∂J(n)/∂b_{M-1}(n) ]^T
      = -2p + 2Rf(n). (3.0.1)

Applying the method of steepest descent, the estimated value of the filter coefficients,
f̂, is found as

f̂(n+1) = f̂(n) - (1/2)μ∇J(n) = f̂(n) + μ[p - Rf̂(n)]. (3.0.2)
To avoid the impossible task of measuring the exact value of the gradient vector, an
instantaneous estimate of the correlation matrix R and the cross-correlation vector
p, based upon the sample values of the filter input vector v1(n) and the
desired response d(n), is used. The corresponding estimates are defined by

R̂(n) = v1(n) v1^H(n) (3.0.3)

and

p̂(n) = v1(n) d*(n), (3.0.4)

where the superscript H denotes Hermitian transposition (transposition combined with
complex conjugation). Substitution of these new estimates into the steepest descent
algorithm, Eq. (3.0.2), yields

f̂(n+1) = f̂(n) + μ[p̂(n) - R̂(n)f̂(n)] = f̂(n) + μ v1(n)[d*(n) - v1^H(n) f̂(n)]. (3.0.5)

This equation is the LMS algorithm. The FIR filter needed to implement this
algorithm is illustrated in figures 3.0, 3.1 and 3.2.
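The real-valued form of the LMS update of Eq. (3.0.5) can be sketched in a few lines. The thesis's programs are in Matlab; this illustrative Python/NumPy version (the true filter h, step size, and iteration count are made up) identifies a known FIR filter from a white reference input:

```python
import numpy as np

# Minimal real-valued sketch of the LMS update of Eq. (3.0.5):
#   f(n+1) = f(n) + mu * v1(n) * (d(n) - v1(n)^T f(n)).
rng = np.random.default_rng(3)

M = 4
h = np.array([0.9, -0.4, 0.2, 0.1])       # filter to be identified
mu = 0.05                                 # step-size parameter

f = np.zeros(M)                           # coefficient estimate f(n)
buf = np.zeros(M)                         # input vector [v1(n), ..., v1(n-M+1)]
for _ in range(20_000):
    x = rng.standard_normal()
    buf = np.concatenate(([x], buf[:-1]))  # shift in the newest sample
    d = h @ buf                            # desired response v0(n) = h^T v1(n)
    e = d - f @ buf                        # a priori error e(n)
    f = f + mu * buf * e                   # LMS coefficient update

print(np.round(f, 2))
```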
3.1 LMS Convergence
The convergence of the LMS algorithm depends upon the value of the constant
μ, also known as the step-size parameter. If its value is too small, convergence upon
the optimal filter coefficients will be a time-consuming process due to the extended
number of iterations. If, however, it is too large, convergence will not be obtained.
Therefore, the proper choice of the step-size parameter is paramount.

Applying the independence assumption, whereby successive input vectors are
required to be mutually uncorrelated, to the LMS algorithm yields

E[f̂(n+1)] = (I - μR)E[f̂(n)] + μp = (I - μR)E[f̂(n)] + μRf_o, (3.1.1)

where I is the identity matrix. Next, f_o is subtracted from both sides of the equation
to give

E[f̂(n+1)] - f_o = (I - μR)E[f̂(n)] - (I - μR)f_o. (3.1.2)

Substitution of the error vector u(n), defined as

u(n) = E[f̂(n)] - f_o, (3.1.3)

into Eq. (3.1.2) gives

u(n+1) = (I - μR)u(n). (3.1.4)
In words, this equation is a recursive approximation of the LMS coefficients. To
further the convergence proof, the matrix R is decomposed to

R = QΛQ^H (3.1.5)

given

Q = [q_1, q_2, ..., q_M] (3.1.6)

and

Λ = diag(λ_1, λ_2, ..., λ_M), (3.1.7)

where q_i is an eigenvector corresponding to the eigenvalue λ_i.
Substituting Eq. (3.1.5) into (3.1.4) gives

u(n+1) = (I - μQΛQ^H)u(n). (3.1.8)

Using Q to rotate the error vector, defined as

u'(n) = Q^H u(n), (3.1.9)

yields

u'(n+1) = (I - μΛ)u'(n) (3.1.10)

when applying the modal matrix property

Q^H Q = I. (3.1.11)
The diagonal nature of the spectral matrix Λ allows the decoupled equations to be
separated into an expression for each element, thereby becoming

u'_j(n+1) = (1 - μλ_j) u'_j(n). (3.1.12)

Substitution into the iterative expression gives

u'_j(n) = (1 - μλ_j)^n u'_j(0), (3.1.13)

where

lim_{n→∞} u'_j(n) = 0 (3.1.14)

if

|1 - μλ_j| < 1. (3.1.15)

The eigenvalues of R are all real and nonnegative since R is symmetric and positive
semidefinite, ensuring convergence when

0 < μ < 2/λ_j,  j = 0, 1, ..., M-1. (3.1.16)

The condition for convergence is then satisfied for all modes provided

0 < μ < 2/λ_max, (3.1.17)

λ_max being the maximum eigenvalue of the correlation matrix.
To compute the time constant, the length of time for the mean coefficient error to
decay to 1/e of its initial value, the decoupled equation for the jth element can be
expressed as

u'_j(n) = e^(-n/τ_j) u'_j(0), (3.1.18)

where τ_j is the time constant. Taking the logarithm of Eq. (3.1.13) and assuming
μλ_j << 1 gives

τ_j = -1/ln(1 - μλ_j), (3.1.19)

that is,

τ_j ≈ 1/(μλ_j) (3.1.20)

for the jth element, or

τ ≈ 1/(μλ_min), (3.1.21)

λ_min being the smallest eigenvalue, this being the time it takes for the entire iterative
process to decay to the standard of the time constant 1/e.
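The stability bound of Eq. (3.1.17) and the time-constant approximation are easy to check numerically. This illustrative Python/NumPy sketch (the correlation matrix R below is made up) computes the eigenvalues, the step-size bound, and the slowest mode's approximate time constant:

```python
import numpy as np

# Sketch of the LMS stability bound of Eq. (3.1.17): 0 < mu < 2/lambda_max,
# with lambda_max the largest eigenvalue of the input correlation matrix R.
R = np.array([[1.0, 0.5, 0.2],
              [0.5, 1.0, 0.5],
              [0.2, 0.5, 1.0]])

lam = np.linalg.eigvalsh(R)               # real eigenvalues, ascending (R symmetric)
lam_max, lam_min = lam[-1], lam[0]
mu_max = 2.0 / lam_max                    # upper bound on the step size

# For every mode j, the factor (1 - mu*lambda_j) must have magnitude < 1.
mu = 0.9 * mu_max
assert all(abs(1 - mu * lam) < 1)

# Slowest mode's time constant, tau ~ 1/(mu*lambda_min) per Eq. (3.1.21).
tau = 1.0 / (mu * lam_min)
print(round(mu_max, 3), round(tau, 1))
```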
3.2 LMS Robustness
During the 1980s, extensive studies in the field of robust control were undertaken
to determine the optimum criterion for convergence of the LMS algorithm when a
single optimal estimate of the filter coefficients was not realizable. From a new
convergence criterion originated at the time, the so-called minimax criterion,
numerous calculations served to prove that the LMS algorithm could not realize such
a single optimal state in the least mean squares sense.
3.3 Normalized Least Mean Squares Algorithm
The NLMS algorithm takes the same form as the LMS algorithm, differing only in
the value assigned to the step-size parameter μ, whose fixed value in the LMS
algorithm must respect an upper bound tied to the input power if stability is to be
guaranteed. Consequently, μ is normalized by those factors which have a tremendous
effect on the convergence and stability of the LMS algorithm: the filter length and the
power of the input signal. The step-size parameter for the NLMS algorithm can
therefore be expressed as

μ(n) = μ̃ / P̂(n), (3.3.1)

where the estimated power factor P̂(n) is obtained via a time-averaged measure of the
input data,

P̂(n) = Σ_{k=0}^{M-1} |v1(n-k)|² = v1^H(n) v1(n). (3.3.2)

With the new expression for the step-size parameter, the NLMS algorithm is given by

f̂(n+1) = f̂(n) + [μ̃ / (v1^H(n) v1(n))] v1(n)[d*(n) - v1^H(n) f̂(n)]. (3.3.3)

Should the estimated power factor equal zero, the above expression will deliver an
undesirable outcome, i.e., the division by zero will destabilize the update of the filter
coefficients. In order to avoid this instability, a small positive constant c is introduced
to update the above equation to

f̂(n+1) = f̂(n) + [μ̃ / (c + v1^H(n) v1(n))] v1(n)[d*(n) - v1^H(n) f̂(n)], (3.3.4)

where convergence is realized given 0 < μ̃ < 2.
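The regularized NLMS update of Eq. (3.3.4) differs from the plain LMS sketch only in the power normalization. This illustrative Python/NumPy version (the true filter h, μ̃ = 1, and the regularizer c are made up) shows the update in real-valued form:

```python
import numpy as np

# Sketch of the regularized NLMS update of Eq. (3.3.4):
#   f(n+1) = f(n) + mu/(c + v1^T v1) * v1(n) * e(n),   0 < mu < 2.
rng = np.random.default_rng(4)

M = 4
h = np.array([0.9, -0.4, 0.2, 0.1])       # filter to be identified
mu, c = 1.0, 1e-6                         # step size and small regularizer c

f = np.zeros(M)
buf = np.zeros(M)
for _ in range(5_000):
    x = rng.standard_normal()
    buf = np.concatenate(([x], buf[:-1]))  # input vector v1(n)
    e = h @ buf - f @ buf                  # a priori error e(n)
    f = f + (mu / (c + buf @ buf)) * buf * e  # power-normalized update

print(np.round(f, 2))
```

Because the step is normalized by the instantaneous input power, the same μ̃ works regardless of the input signal's scale, which is the practical advantage over plain LMS.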
4.0 Recursive Least Squares Algorithm
As with the LMS and NLMS algorithms, the RLS algorithm was also developed
with the intent of solving the Wiener-Hopf equations. However, the manner
in which these algorithms accomplish this task is fundamentally different. Where the
LMS algorithm updates the filter coefficients based upon instantaneous estimates of
the correlation matrix and cross-correlation vector, R and p respectively, the RLS
algorithm accomplishes this task utilizing input information extending back to the
initiation of the algorithm. The computational complexity of the RLS method is
obvious but is reduced by the use of the matrix inversion lemma.
Based upon the philosophy of the RLS model, the error signal e(n) is defined as

e(n) = d(n) - f̂(t)^T v(n), (4.0.1)

where f̂(t) represents the estimated filter coefficients at time t, given by

f̂(t) = [f̂_0(t), f̂_1(t), ..., f̂_{M-1}(t)]^T, (4.0.2)

and v(n) is the incoming noise signal

v(n) = [v(n), v(n-1), ..., v(n-M+1)]^T, (4.0.3)

previously defined as v1(n). Note that the filter coefficients remain constant during
the interval 1 ≤ n ≤ t. For simplicity's sake, the complex notation given in the
derivation of the LMS model is not employed here. This new expression for e(n) in
turn changes the cost function to
J = Σ_{n=0}^{t} e(n)² = Σ_{n=0}^{t} [d(n) - f̂(t)^T v(n)]². (4.0.4)

As before, the optimum filter coefficients are found by minimizing the cost function,
the outcome of which is

f̂(t) = [Σ_{n=0}^{t} v(n)v(n)^T]^{-1} [Σ_{n=0}^{t} v(n)d(n)], (4.0.5)

i.e., the Wiener-Hopf equations revisited. Upon inspection, the new estimates for R
and p at time t are

R(t) = Σ_{n=0}^{t} v(n)v(n)^T (4.0.6)

and

p(t) = Σ_{n=0}^{t} v(n)d(n). (4.0.7)
These equations can also be expressed as

R(t) = R(t-1) + v(t)v(t)^T (4.0.8)

and

p(t) = p(t-1) + v(t)d(t). (4.0.9)

Applying the matrix inversion lemma

(A + BCD)^{-1} = A^{-1} - A^{-1}B(DA^{-1}B + C^{-1})^{-1}DA^{-1}, (4.0.10)

where

A = R(t-1), B = v(t), C = I, D = v(t)^T, (4.0.11)

yields

R(t)^{-1} = R(t-1)^{-1} - [R(t-1)^{-1} v(t) v(t)^T R(t-1)^{-1}] / [1 + v(t)^T R(t-1)^{-1} v(t)]. (4.0.12)

Hence, the combination of Eqs. (4.0.12) and (4.0.9) gives the RLS algorithm,

f̂(t) = R(t)^{-1} p(t) = f̂(t-1) + R(t)^{-1} v(t) e(t). (4.0.13)
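The recursion of Eqs. (4.0.12)-(4.0.13) is usually coded by propagating P(t) = R(t)^{-1} directly. This illustrative Python/NumPy sketch (the true filter h and the large initial P are made up; the thesis's own programs are in Matlab) shows one standard form:

```python
import numpy as np

# Sketch of the RLS recursion of Eqs. (4.0.12)-(4.0.13): the inverse
# correlation matrix P(t) = R(t)^{-1} is propagated with the matrix
# inversion lemma instead of being recomputed each step.
rng = np.random.default_rng(5)

M = 4
h = np.array([0.9, -0.4, 0.2, 0.1])       # filter to be identified

f = np.zeros(M)
P = 1e3 * np.eye(M)                        # large initial P approximates R(0)^{-1}
buf = np.zeros(M)
for _ in range(500):
    x = rng.standard_normal()
    buf = np.concatenate(([x], buf[:-1]))  # input vector v(t)
    d = h @ buf
    e = d - f @ buf                        # a priori error e(t)
    Pv = P @ buf
    k = Pv / (1.0 + buf @ Pv)              # gain vector R(t)^{-1} v(t)
    P = P - np.outer(k, Pv)                # matrix inversion lemma, Eq. (4.0.12)
    f = f + k * e                          # f(t) = f(t-1) + R(t)^{-1} v(t) e(t)

print(np.round(f, 2))
```

The per-step cost is O(M²) rather than the O(M³) of a fresh matrix inversion, which is the point of the lemma.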
4.1 RLS Convergence
The parameter estimation error, defined as the difference between the optimum
and estimated parameter values,

f̃(t-1) = f̂(t-1) - f, (4.1.1)

sets the standard by which to prove whether the RLS algorithm will converge.
Substitution of the error estimate into the algorithm gives

f̃(t) = f̃(t-1) + R^{-1}(t) v(t) e(t). (4.1.2)

This expression can also be written as

R(t) f̃(t) = R(t) f̃(t-1) + v(t) e(t). (4.1.3)
Substituting Eq. (4.0.8) for the correlation matrix on the right side of Eq. (4.1.3) yields

R(t) f̃(t) = R(t-1) f̃(t-1) + v(t)v(t)^T f̃(t-1) + v(t)e(t), (4.1.4)

which can be rewritten as

R(t) f̃(t) = R(t-1) f̃(t-1) + v(t)v(t)^T f̃(t-1) + v(t)[s(t) - v(t)^T f̃(t-1)], (4.1.5)

where the time-dependent expression for the error signal was incorporated into Eq.
(4.1.4). Upon close inspection, this new expression can be reduced to

R(t) f̃(t) = R(t-1) f̃(t-1) + v(t)s(t). (4.1.6)

Summation from t = 1 to t = N produces

Σ_{t=1}^{N} R(t) f̃(t) = Σ_{t=1}^{N} R(t-1) f̃(t-1) + Σ_{t=1}^{N} v(t)s(t) (4.1.7)

or

R(N) f̃(N) = R(0) f̃(0) + Σ_{t=1}^{N} v(t)s(t). (4.1.8)
If

lim_{N→∞} R(N)/N ≥ ε₁I, where ε₁ > 0, (4.1.9)

then

lim_{N→∞} N R(N)^{-1} ≤ (1/ε₁)I. (4.1.10)

Rearranging Eq. (4.1.8) gives

f̃(N) = [N R(N)^{-1}](1/N) R(0) f̃(0) + [N R(N)^{-1}](1/N) Σ_{t=1}^{N} v(t)s(t). (4.1.11)

Noting that v(t) and s(t) are not correlated thereby results in

lim_{N→∞} f̃(N) = 0, (4.1.12)

which is the desired result. That is, the parameter estimation error f̃(N) goes to
zero as the estimated filter coefficients converge upon the true filter parameter values
f.
5.0 Experiments and Results
To test each of the algorithms for convergence, a variety of experiments have been
created involving both FIR and IIR filters. From these computational results, a
decision can be elicited as to which of the algorithms provides the highest degree of
convergence within the shortest period of time. Once the true filter parameters have
been obtained, it will be shown that all algorithms deliver convergence on a
continuum.
5.0.1 Experiment I
To conduct an experiment that would most likely mimic practical uses of these
algorithms, the FIR filters implemented in this experiment consisted of one with an
order equal to that of the incoming noise signal and one having fewer coefficients
than needed to achieve optimal filtering. The experimental setup is as follows:
s(n) = incoming signal = 5 × sin(2πn/…)

v1(n) = noise signal to be filtered = 2.2x(n) + 4.1x(n-1) - 1.5x(n-2) - 3.8x(n-3) + 7x(n-4)

where

x(n) = 2w(n) + 1.5w(n-1)

is a noise signal given that w(n) is a randomly generated number. From the
incoming noise signal, it is obvious that the FIR filter used to filter such a signal
requires a minimum of five coefficients. Using FIR filter (1), where

FIR filter (1) = a x(n) + b x(n-1) + c x(n-2) + d x(n-3) + e x(n-4),

figures 5.1 to 5.6 respectively illustrate that convergence upon the incoming signal,
shown in figure 5.0, is realized for such a filter employing any of the three
algorithms. The outcomes for the coefficients a, b, c, d, e after a sampling iteration of
one thousand points are
TABLE 5.0 (FIR filter coefficients and their respective algorithms)

                     a       b       c       d       e
  LMS Algorithm    2.226   4.342  -1.571  -3.542   6.970
  NLMS Algorithm   2.108   4.302  -1.868  -3.436   6.739
  RLS Algorithm    2.190   4.099  -1.503  -3.799   7.002
  True Parameters  2.2     4.1    -1.5    -3.8     7
Employing FIR filter (2), where

FIR filter (2) = a x(n) + b x(n-1) + c x(n-2),

figures 5.7 through 5.12 show convergence is not achieved. With two missing filter
parameters, the algorithms tend to approximate the true filter coefficients with the
three existing parameters. Subsequently, the incoming noise signal is not optimally
filtered and the resulting error signal is not ideally s(n) but rather some amalgam of
s(n) and v1(n).
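The spirit of Experiment I can be reproduced in a short sketch. The thesis's programs are in Matlab; this illustrative Python/NumPy version uses the five-tap noise model and x(n) = 2w(n) + 1.5w(n-1) given above, with an assumed sinusoid period of 30 samples (borrowed from Experiment III for illustration) and RLS identification:

```python
import numpy as np

# Sketch in the spirit of Experiment I: identify the five FIR coefficients
# of the noise path from the reference input x(n), with a sinusoidal source
# signal s(n) present in the primary channel.
rng = np.random.default_rng(6)

N = 4000
true_h = np.array([2.2, 4.1, -1.5, -3.8, 7.0])

w = rng.standard_normal(N + 1)
x = 2.0 * w[1:] + 1.5 * w[:-1]            # x(n) = 2 w(n) + 1.5 w(n-1)
v1 = np.convolve(x, true_h)[:N]           # noise reaching the primary sensor
s = 5.0 * np.sin(2 * np.pi * np.arange(N) / 30)  # period assumed for illustration
d = s + v1                                # primary signal d(n)

# RLS identification of the FIR coefficients a, b, c, d, e.
M = 5
f = np.zeros(M)
P = 1e3 * np.eye(M)
buf = np.zeros(M)
for n in range(N):
    buf = np.concatenate(([x[n]], buf[:-1]))
    e = d[n] - f @ buf
    Pv = P @ buf
    k = Pv / (1.0 + buf @ Pv)
    P = P - np.outer(k, Pv)
    f = f + k * e

print(np.round(f, 1))                     # estimated a, b, c, d, e
```

Because s(n) is uncorrelated with the reference x(n), it acts only as a disturbance and the estimates settle near the true parameters, as in Table 5.0.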
5.0.2 Experiment II
To incorporate the possibility of an incoming noise signal being generated in
some fashion as to display IIR qualities, an FIR filter of order fifteen was
used to filter out such a signal. As displayed in figures 5.13 through 5.18, the outcome
of such filtration is not ideal, at least not for an FIR filter with only
fifteen parameters. Increasing the number of coefficients will help to make the
convergence more ideal.
5.0.3 Experiment III
The final experiment entails an IIR adaptive filter with both modeled and
unmodeled noise. That is, the incoming noise signal is

v1(n) = [(2 + 4.6z^-1 + 2.4z^-2) / (1 + 0.4z^-1)] x(n) + g(n),

where g(n) describes the unmodeled noise given by

g(n) = γ x(n) / (1 + 0.1z^-1 - 0.56z^-2),

γ = 0, 0.001, 0.01, 0.1, 1, (3.2, 4 or 10), and x(n) is the noise signal

x(n) = 2w(n) + 1.5w(n-1),

where w(n) is a randomly generated number. The incoming signal also differs in this
experiment and is described by

s(n) = incoming signal = 5 × sin(2πn/30).

The adaptive IIR filter employed is therefore modeled after the input noise signal and
is represented by

Filter = (a + bz^-1 + cz^-2) / (1 + dz^-1).

Figures 5.19 through 5.54 illustrate the outcome of these results. Accordingly, tables
5.1 through 5.3 display the coefficients converged upon by their respective algorithms
for a given γ after one thousand sample iterations.
Table 5.1 (IIR filter using LMS algorithm for both modeled (gamma = 0)
and unmodeled input noise)

                  a       b       c       d
  gamma = 0     2.023   4.658   2.419   0.403
  gamma = .001  1.826   4.615   2.372   0.418
  gamma = .01   1.994   4.584   2.280   0.488
  gamma = .1    2.174   4.484   2.616   0.310
  gamma = 1     3.026   4.577   2.727   0.345
  gamma = 3.2   4.899   4.761   3.920   0.491
Table 5.2 (IIR filter using NLMS algorithm for both modeled (gamma = 0)
and unmodeled input noise)

                  a       b       c       d
  gamma = 0     1.766   4.257   2.200   0.432
  gamma = .001  1.693   4.330   2.075   0.422
  gamma = .01   1.719   4.288   2.168   0.436
  gamma = .1    1.763   4.436   2.111   0.422
  gamma = 1     2.651   4.188   2.631   0.462
  gamma = 4     5.284   4.204   3.714   0.501
Table 5.3 (IIR filter using RLS algorithm for both modeled (gamma = 0) and
unmodeled input noise)

                  a       b       c       d
  gamma = 0     2.039   4.558   2.518   0.384
  gamma = .001  1.960   4.595   2.413   0.394
  gamma = .01   2.102   4.551   2.634   0.369
  gamma = .1    2.080   4.587   2.501   0.390
  gamma = 1     2.980   4.636   2.879   0.390
  gamma = 5     6.733   4.990   5.078   0.340
[Figures 5.0 through 5.54 are magnitude-versus-sampling-iteration plots (0 to 1000 iterations); only their captions are reproduced here.]

Figure 5.0   Incoming signal s(n)
Figure 5.1   Adaptive FIR filter coefficients using "LMS" algorithm
Figure 5.2   Modeling error: filter missing 2 coefficients
Figure 5.3   Adaptive FIR filter coefficients using "LMS" algorithm
Figure 5.4   Error signal e(n)
Figure 5.5   Adaptive FIR filter coefficients using "NLMS" algorithm (modeling error: filter missing 2 coefficients)
Figure 5.6   Error signal e(n)
Figure 5.7   Adaptive FIR filter coefficients using "NLMS" algorithm
Figure 5.8   Error signal e(n)
Figure 5.9   Adaptive FIR filter coefficients using "RLS" algorithm (modeling error: filter missing 2 coefficients)
Figure 5.10  Error signal e(n)
Figure 5.11  Adaptive FIR filter coefficients using "RLS" algorithm
Figure 5.12  Error signal e(n)
Figure 5.13  Adaptive FIR filter coefficients using "LMS" algorithm
Figure 5.14  Error signal
Figure 5.15  Adaptive FIR filter coefficients using "NLMS" algorithm
Figure 5.16  Error signal
Figure 5.17  Adaptive FIR filter coefficients using "RLS" algorithm
Figure 5.18  Error signal
Figure 5.19  Adaptive IIR filter coefficients using "LMS" algorithm
Figure 5.20  Error signal e(n)
Figure 5.21  Adaptive IIR filter coefficients using "LMS" algorithm
Figure 5.22  Error signal e(n)
Figure 5.23  Adaptive IIR filter coefficients using "LMS" algorithm (unmodeled noise present: gamma = 0.01)
Figure 5.24  Error signal e(n)
Figure 5.25  Adaptive IIR filter coefficients using "LMS" algorithm
Figure 5.26  Error signal e(n)
Figure 5.27  Adaptive IIR filter coefficients using "LMS" algorithm (unmodeled noise present: gamma = 1)
Figure 5.28  Error signal e(n)
Figure 5.29  Adaptive IIR filter coefficients using "LMS" algorithm
Figure 5.30  Error signal e(n)
Figure 5.31  Adaptive IIR filter coefficients using "NLMS" algorithm
Figure 5.32  Error signal e(n)
Figure 5.33  Adaptive IIR filter coefficients using "NLMS" algorithm
Figure 5.34  Error signal e(n)
Figure 5.35  Adaptive IIR filter coefficients using "NLMS" algorithm
Figure 5.36  Error signal e(n)
Figure 5.37  Adaptive IIR filter coefficients using "NLMS" algorithm
Figure 5.38  Error signal e(n)
Figure 5.39  Adaptive IIR filter coefficients using "NLMS" algorithm
Figure 5.40  Error signal e(n)
Figure 5.41  Adaptive IIR filter coefficients using "NLMS" algorithm
Figure 5.42  Error signal e(n)
Figure 5.43  Adaptive IIR filter coefficients using "RLS" algorithm
Figure 5.44  Error signal e(n)
Figure 5.45  Adaptive IIR filter coefficients using "RLS" algorithm (unmodeled noise present: gamma = 0.001)
Figure 5.46  Error signal e(n)
Figure 5.47  Adaptive IIR filter coefficients using "RLS" algorithm (unmodeled noise present: gamma = 0.01)
Figure 5.48  Error signal e(n)
Figure 5.49  Adaptive IIR filter coefficients using "RLS" algorithm
Figure 5.50  Error signal e(n)
Figure 5.51  Adaptive IIR filter coefficients using "RLS" algorithm
Figure 5.52  Error signal e(n)
Figure 5.53  Adaptive IIR filter coefficients using "RLS" algorithm
Figure 5.54  Error signal e(n)
6. Conclusion
The experimental results show that the RLS algorithm converges to the optimal filter
coefficients in the fewest iterations of the three algorithms. This advantage can be
attributed to the vanishing gain of the algorithm: the parameter estimation error goes
to zero as the number of signal samples increases. Furthermore, because the RLS
algorithm has no normalization factor or step-size parameter to tune, as the LMS and
NLMS do, it is less prone to convergence error. A step-size parameter chosen too
large can prevent the LMS and NLMS from converging to the optimal filter
coefficients at all.
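This difference can be illustrated with a minimal sketch (not taken from the thesis): a one-coefficient identification problem written in pure Python, where the step size u, normalization constant c, and initial gain p are invented values. The RLS gain p shrinks on its own as samples accumulate, while LMS and NLMS depend on the chosen step size:

```python
import random

random.seed(1)

# True one-coefficient system: d(n) = 3*x(n). Each rule estimates the
# single coefficient from (x, d) pairs.
w_lms = w_nlms = w_rls = 0.0
u = 0.05      # LMS/NLMS step size (hypothetical value)
c = 0.001     # NLMS normalization constant (hypothetical value)
p = 10.0      # initial RLS gain; it decays automatically

for _ in range(500):
    x = random.gauss(0, 1)
    d = 3.0 * x
    # LMS: fixed step size u
    w_lms += u * x * (d - w_lms * x)
    # NLMS: step size normalized by the instantaneous input power
    w_nlms += (u / (c + x * x)) * x * (d - w_nlms * x)
    # RLS (scalar form): the gain p vanishes as samples accumulate
    p = p - (p * x * x * p) / (1.0 + x * p * x)
    w_rls += p * x * (d - w_rls * x)

print(w_lms, w_nlms, w_rls)
```

All three estimates approach the true coefficient 3, but only RLS does so without a user-supplied step size; with u chosen too large, the LMS and NLMS recursions above would diverge.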
A closer look at the results yields further observations. Given an FIR filter with at
least fifteen coefficients, all three algorithms can converge to parameter values that
filter out signals generated by either an autoregressive or a moving-average model. If,
however, the FIR filter is designed with fewer than the minimum number of
coefficients, modeling errors result. The adaptive algorithm is then unable to converge
to the optimal filter parameters, rendering the filter model ineffective.
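A minimal sketch of this failure mode (not from the thesis; a pure-Python LMS canceler run against a hypothetical three-tap noise model, with an invented step size) shows the residual error that under-modeling leaves behind:

```python
import random

random.seed(2)

def lms_residual(num_taps, iters=4000, u=0.01):
    # Noise v(n) is a 3-tap FIR function of the reference x(n); the
    # canceler adapts `num_taps` coefficients by LMS. Returns the mean
    # squared residual over the final quarter of the run.
    f = [0.0] * num_taps
    x_hist = [0.0, 0.0, 0.0]
    errs = []
    for _ in range(iters):
        x_hist = [random.gauss(0, 1)] + x_hist[:2]
        v = 2.2 * x_hist[0] + 4.1 * x_hist[1] - 1.5 * x_hist[2]
        phi = x_hist[:num_taps]
        y = sum(fi * pi for fi, pi in zip(f, phi))
        e = v - y                                   # residual after cancellation
        f = [fi + u * pi * e for fi, pi in zip(f, phi)]
        errs.append(e * e)
    tail = errs[3 * iters // 4:]
    return sum(tail) / len(tail)

full = lms_residual(3)    # enough coefficients: residual power near zero
short = lms_residual(2)   # under-modeled: one noise term cannot be canceled
print(full, short)
```

With the full tap count the residual power collapses; with one tap too few the best the canceler can do still leaves the uncancelable term behind.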
Another notable observation is that each algorithm continues to converge toward the
optimal filter coefficients in the presence of unmodeled noise. All three examples
illustrate a degree of robustness: although the noise may not be optimally filtered, the
resulting error signal consists primarily of the desired input signal, provided the
presence of the unmodeled noise is slight.
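That behavior can be sketched as follows (not from the thesis; a one-coefficient pure-Python analogue in which gamma scales an invented unmodeled disturbance). The mean squared deviation of the error signal e(n) from the desired signal s(n) stays small compared with the signal power when the disturbance is slight:

```python
import math
import random

random.seed(3)

def residual_power(gamma):
    # Desired signal s(n) buried in noise 3*x(n); a one-tap LMS canceler
    # adapts on the reference x(n). gamma scales an unmodeled disturbance
    # that the canceler cannot represent.
    f, u = 0.0, 0.005          # u is a hypothetical step size
    dev = []
    for n in range(4000):
        x = random.gauss(0, 1)
        s = 5.0 * math.sin(2 * math.pi * n / 50)
        g = gamma * random.gauss(0, 1)      # unmodeled noise term
        d = s + 3.0 * x + g
        e = d - f * x                        # error signal e(n)
        f += u * x * e                       # LMS update
        dev.append((e - s) ** 2)             # how far e(n) is from s(n)
    tail = dev[3000:]
    return sum(tail) / len(tail)

clean = residual_power(0.0)
noisy = residual_power(0.1)
print(clean, noisy)
```

In both cases the deviation power is far below the desired-signal power (about 12.5 for the sinusoid above), so e(n) is dominated by s(n), mirroring the robustness seen in the experiments.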
The most important result of the experiments is that the algorithms work. By the
nature of their design, each is able to filter stationary noise signals efficiently enough
to keep the cost of implementation low. Without having to know much about the
incoming noise signal beyond whether it is stationary, filtering becomes a matter of a
general system setup rather than a specific filter design.
APPENDIX
A.1 Matlab Programs
% MATLAB PROGRAM FOR THE GENERATION OF AN ADAPTIVE
% LEAST MEAN SQUARES FILTER. THIS PROGRAM IS
% MODELED FOR A FILTER HAVING ALL THE COEFFICIENTS
% AS THAT OF THE INCOMING NOISE SIGNAL AND ONE NOT
% HAVING ENOUGH PARAMETERS (MODELING ERROR)
clear all;
num = 1000;
f1 = zeros(5,num);
f = zeros(3,num);
x = zeros(1,num);
b = zeros(1,num);
x1 = zeros(1,num);
w = sqrt(2)*randn(1,num);
% INCOMING SIGNAL
for n = 1:num
S(n) = 5*sin(2*pi*n/50);
end
% GAIN FACTOR
u = input('Gain factor = ');
% FOR MODELING ERROR
F = eye(3);
for n = 5:num;
x(n) = 2*w(n) + 1.5*w(n-1);
phi = [x(n) x(n-1) x(n-2)]';
phi2 = phi*phi';
v(n) = 2.2*x(n) + 4.1*x(n-1) - 1.5*x(n-2) - 3.8*x(n-3) + 7*x(n-4);
d(n) = S(n) + v(n);
e(n) = d(n) - f(:,n)'*phi;
z(n) = sum(e.^2)/(n-4);
F = F + phi*phi';
minF(n) = min(eig(F))/(n-4);
f(:,n+1) = f(:,n) + u*phi*e(n);
end
% FOR NONMODELING ERROR
F1 = eye(5);
for n = 5:num;
x1(n) = 2*w(n) + 1.5*w(n-1);
phi1 = [x1(n) x1(n-1) x1(n-2) x1(n-3) x1(n-4)]';
phi12 = phi1*phi1';
v1(n) = 2.2*x1(n) + 4.1*x1(n-1) - 1.5*x1(n-2) - 3.8*x1(n-3) + 7*x1(n-4);
d1(n) = S(n) + v1(n);
e1(n) = d1(n) - f1(:,n)'*phi1;
z1(n) = sum(e1.^2)/(n-4);
F1 = F1 + phi1*phi1';
minF1(n) = min(eig(F1))/(n-4);
f1(:,n+1) = f1(:,n) + u*phi1*e1(n);
end
figure(1)
whitebg(figure(1))
plot(f(1,:))
axis([0 num -7 9]);
hold on
plot(f(2,:))
plot(f(3,:))
title('ADAPTIVE FIR FILTER COEFFICIENTS USING "LMS" ALGORITHM');
gtext('MODELING ERROR: FILTER MISSING 2 COEFFICIENTS');
xlabel('SAMPLING ITERATIONS')
ylabel('MAGNITUDE')
pause;
figure(2)
whitebg(figure(2))
plot(e)
title('ERROR SIGNAL e(n)');
ylabel('MAGNITUDE')
xlabel('SAMPLING ITERATIONS')
pause;
figure(3)
whitebg(figure(3))
plot(f1(1,:))
axis([0 num -7 9]);
hold on
plot(f1(2,:))
plot(f1(3,:))
plot(f1(4,:))
plot(f1(5,:))
title('ADAPTIVE FIR FILTER COEFFICIENTS USING "LMS" ALGORITHM');
xlabel('SAMPLING ITERATIONS')
ylabel('MAGNITUDE')
pause;
figure(4)
whitebg(figure(4))
plot(e1)
axis([0 num -30 30]);
title('ERROR SIGNAL e(n)');
ylabel('MAGNITUDE')
xlabel('SAMPLING ITERATIONS');
f(:,num)
f1(:,num)
% MATLAB PROGRAM FOR THE GENERATION OF AN ADAPTIVE
% NORMALIZED LEAST MEAN SQUARES FILTER. THIS PROGRAM IS
% MODELED FOR A FILTER HAVING ALL THE COEFFICIENTS
% AS THAT OF THE INCOMING NOISE SIGNAL AND ONE NOT
% HAVING ENOUGH PARAMETERS (MODELING ERROR)
clear all;
num = 1000;
f1 = zeros(5,num);
f = zeros(3,num);
x = zeros(1,num);
x1 = zeros(1,num);
w = sqrt(2)*randn(1,num);
% INCOMING SIGNAL
for n = 1:num;
S(n) = 5*sin(2*pi*n/50);
end
% GAIN AND NORMALIZATION FACTORS
u = input('Gain factor = ');
c = input('Normalization factor = ');
% FOR MODELING ERROR
F = eye(3);
for n = 5:num;
x(n) = 2*w(n) + 1.5*w(n-1);
phi = [x(n) x(n-1) x(n-2)]';
phi2 = phi*phi';
v(n) = 2.2*x(n) + 4.1*x(n-1) - 1.5*x(n-2) - 3.8*x(n-3) + 7*x(n-4);
d(n) = S(n) + v(n);
e(n) = d(n) - f(:,n)'*phi;
z(n) = sum(e.^2)/(n-3);
F = F + phi2;
minF(n) = min(eig(F))/(n-3);
r(n) = c + norm(phi);
f(:,n+1) = f(:,n) + (u/r(n))*phi*e(n);
end
% FOR NONMODELING ERROR
F1 = eye(5);
for n = 5:num;
x1(n) = 2*w(n) + 1.5*w(n-1);
phi1 = [x1(n) x1(n-1) x1(n-2) x1(n-3) x1(n-4)]';
phi12 = phi1*phi1';
v1(n) = 2.2*x1(n) + 4.1*x1(n-1) - 1.5*x1(n-2) - 3.8*x1(n-3) + 7*x1(n-4);
d1(n) = S(n) + v1(n);
e1(n) = d1(n) - f1(:,n)'*phi1;
z1(n) = sum(e1.^2)/(n-4);
F1 = F1 + phi12;
minF1(n) = min(eig(F1))/(n-4);
r1(n) = c + norm(phi1);
f1(:,n+1) = f1(:,n) + (u/r1(n))*phi1*e1(n);
end
figure(1)
whitebg(figure(1))
plot(f(1,:))
axis([0 num -7 9]);
hold on
plot(f(2,:))
plot(f(3,:))
title('ADAPTIVE FIR FILTER COEFFICIENTS USING "NLMS" ALGORITHM');
gtext('MODELING ERROR: FILTER MISSING 2 COEFFICIENTS');
xlabel('SAMPLING ITERATIONS')
ylabel('MAGNITUDE')
pause;
figure(2)
whitebg(figure(2))
plot(e)
title('ERROR SIGNAL e(n)');
ylabel('MAGNITUDE')
xlabel('SAMPLING ITERATIONS')
pause;
figure(3)
whitebg(figure(3))
plot(f1(1,:))
axis([0 num -7 9]);
hold on
plot(f1(2,:))
plot(f1(3,:))
plot(f1(4,:))
plot(f1(5,:))
title('ADAPTIVE FIR FILTER COEFFICIENTS USING "NLMS" ALGORITHM');
xlabel('SAMPLING ITERATIONS')
ylabel('MAGNITUDE')
pause;
figure(4)
whitebg(figure(4))
plot(e1)
axis([0 num -30 30]);
title('ERROR SIGNAL e(n)');
ylabel('MAGNITUDE')
xlabel('SAMPLING ITERATIONS');
f(:,num)
f1(:,num)
% MATLAB PROGRAM FOR THE GENERATION OF AN ADAPTIVE
% RECURSIVE LEAST SQUARES FILTER. THIS PROGRAM IS
% MODELED FOR A FILTER HAVING ALL THE COEFFICIENTS
% AS THAT OF THE INCOMING NOISE SIGNAL AND ONE NOT
% HAVING ENOUGH PARAMETERS (MODELING ERROR)
clear all;
num = 2000;
f1 = zeros(5,num);
f = zeros(3,num);
x = zeros(1,num);
x1 = zeros(1,num);
w = sqrt(2)*randn(1,num);
% INCOMING SIGNAL
for n = 1:num
S(n) = 5*sin(2*pi*n/50);
end
% FOR MODELING ERROR
F = eye(3);
p = eye(3);
for n = 5:num;
x(n) = 2*w(n) + 1.5*w(n-1);
phi = [x(n) x(n-1) x(n-2)]';
phi2 = phi*phi';
v(n) = 2.2*x(n) + 4.1*x(n-1) - 1.5*x(n-2) - 3.8*x(n-3) + 7*x(n-4);
d(n) = S(n) + v(n);
e(n) = d(n) - f(:,n)'*phi;
z(n) = sum(e.^2)/(n-4);
F = F + phi2;
minF(n) = min(eig(F))/(n-4);
p = p - (p*phi*phi'*p)/(1 + phi'*p*phi);
f(:,n+1) = f(:,n) + p*phi*e(n);
end
% FOR NONMODELING ERROR
F1 = eye(5);
p1 = eye(5);
for n = 5:num
x1(n) = 2*w(n) + 1.5*w(n-1);
phi1 = [x1(n) x1(n-1) x1(n-2) x1(n-3) x1(n-4)]';
phi12 = phi1*phi1';
v1(n) = 2.2*x1(n) + 4.1*x1(n-1) - 1.5*x1(n-2) - 3.8*x1(n-3) + 7*x1(n-4);
d1(n) = S(n) + v1(n);
e1(n) = d1(n) - f1(:,n)'*phi1;
z1(n) = sum(e1.^2)/(n-4);
F1 = F1 + phi12;
minF1(n) = min(eig(F1))/(n-4);
p1 = p1 - (p1*phi1*phi1'*p1)/(1 + phi1'*p1*phi1);
f1(:,n+1) = f1(:,n) + p1*phi1*e1(n);
end
figure(1)
whitebg(figure(1))
plot(f(1,:))
axis([0 num -7 9]);
hold on
plot(f(2,:))
plot(f(3,:))
title('ADAPTIVE FIR FILTER COEFFICIENTS USING "RLS" ALGORITHM');
gtext('MODELING ERROR: FILTER MISSING 2 COEFFICIENTS');
xlabel('SAMPLING ITERATIONS')
ylabel('MAGNITUDE')
pause
figure(2)
whitebg(figure(2))
plot(e)
title('ERROR SIGNAL e(n)');
ylabel('MAGNITUDE')
xlabel('SAMPLING ITERATIONS')
pause;
figure(3)
whitebg(figure(3))
%plot(f1(1,:))
axis([0 num -7 11]);
%hold on
%plot(f1(2,:))
plot(f1(3,:))
%plot(f1(4,:))
%plot(f1(5,:))
title('ADAPTIVE FIR FILTER COEFFICIENTS USING "RLS" ALGORITHM');
xlabel('SAMPLING ITERATIONS')
ylabel('MAGNITUDE')
pause
figure(4)
whitebg(figure(4))
plot(e1)
%axis([0 num -30 30]);
title('ERROR SIGNAL e(n)');
ylabel('MAGNITUDE')
xlabel('SAMPLING ITERATIONS');
pause;
figure(5)
whitebg(figure(5))
plot(S)
axis([0 num -10 10])
title('INCOMING SIGNAL s(n)')
ylabel('MAGNITUDE')
xlabel('SAMPLING ITERATIONS')
f(:,num)
f1(:,num)
% MATLAB PROGRAM FOR AN IIR NOISE SIGNAL FILTERED BY A
% LEAST MEAN SQUARES ADAPTIVE FIR FILTER HAVING
% 15 COEFFICIENTS.
clear all;
num = 1000;
f = zeros(15,num);
w = sqrt(2.5)*randn(1,num);
x = zeros(1,num);
y = zeros(1,num);
v = zeros(1,num);
% INCOMING SIGNAL
for n = 1:num
S(n) = 5*sin(2*pi*n/30);
end
u = input('Gain factor = '); % use .0001
F = eye(15);
p = 10*eye(15);
for n = 15:num;
x(n) = 1.9*w(n) + 0.4*w(n-1);
phi = [x(n) x(n-1) x(n-2) x(n-3) x(n-4),...
x(n-5) x(n-6) x(n-7) x(n-8) x(n-9) x(n-10) x(n-11) x(n-12) x(n-13) x(n-14)]';
y(n) = f(:,n)'*phi;
phi2 = phi*phi';
v(n) = 1.4*v(n-1) - 0.49*v(n-2) + x(n) + 1.7*x(n-1);
d(n) = S(n) + v(n);
e(n) = d(n) - y(n);
z(n) = sum(e.^2)/(n-14);
F = F + phi2;
minF(n) = min(eig(F))/(n-14);
f(:,n+1) = f(:,n) + u*phi*e(n);
end
figure(1)
whitebg(figure(1))
plot(f(1,:))
axis([0 num -4 5]);
hold on
plot(f(2,:))
plot(f(3,:))
plot(f(4,:))
plot(f(5,:))
plot(f(6,:))
plot(f(7,:))
plot(f(8,:))
plot(f(9,:))
plot(f(10,:))
plot(f(11,:))
plot(f(12,:))
plot(f(13,:))
plot(f(14,:))
plot(f(15,:))
title('ADAPTIVE FIR FILTER COEFFICIENTS USING "LMS" ALGORITHM');
gtext('INCOMING IIR NOISE SIGNAL FILTERED WITH FIR FILTER');
ylabel('MAGNITUDE')
xlabel('SAMPLING ITERATIONS')
pause
figure(2)
whitebg(figure(2))
plot(e)
title('ERROR SIGNAL');
ylabel('MAGNITUDE')
xlabel('SAMPLING ITERATIONS')
filter_coefficients = f(:,num)
% MATLAB PROGRAM FOR AN IIR NOISE SIGNAL FILTERED BY A
% NORMALIZED LEAST MEAN SQUARES ADAPTIVE FIR FILTER
% HAVING 15 COEFFICIENTS.
clear all;
num = 1000;
f = zeros(15,num);
w = sqrt(2.5)*randn(1,num);
x = zeros(1,num);
y = zeros(1,num);
v = zeros(1,num);
% INCOMING SIGNAL
for n = 1:num
S(n) = 5*sin(2*pi*n/30);
end
u = input('Gain factor = '); % use .001
c = input('Normalization factor = '); % use 10
F = eye(15);
for n = 15:num;
x(n) = 1.9*w(n) + 0.4*w(n-1);
phi = [x(n) x(n-1) x(n-2) x(n-3) x(n-4),...
x(n-5) x(n-6) x(n-7) x(n-8) x(n-9) x(n-10) x(n-11) x(n-12) x(n-13) x(n-14)]';
y(n) = f(:,n)'*phi;
phi2 = phi*phi';
v(n) = 1.4*v(n-1) - 0.49*v(n-2) + x(n) + 1.7*x(n-1);
d(n) = S(n) + v(n);
e(n) = d(n) - y(n);
z(n) = sum(e.^2)/(n-14);
F = F + phi2;
minF(n) = min(eig(F))/(n-14);
r(n) = c + norm(phi);
f(:,n+1) = f(:,n) + (u/r(n))*phi*e(n);
end
figure(1)
whitebg(figure(1))
plot(f(1,:))
axis([0 num -4 5]);
hold on
plot(f(2,:))
plot(f(3,:))
plot(f(4,:))
plot(f(5,:))
plot(f(6,:))
plot(f(7,:))
plot(f(8,:))
plot(f(9,:))
plot(f(10,:))
plot(f(11,:))
plot(f(12,:))
plot(f(13,:))
plot(f(14,:))
plot(f(15,:))
title('ADAPTIVE FIR FILTER COEFFICIENTS USING "NLMS" ALGORITHM');
gtext('INCOMING IIR NOISE SIGNAL FILTERED WITH FIR FILTER');
ylabel('MAGNITUDE')
xlabel('SAMPLING ITERATIONS')
pause
figure(2)
whitebg(figure(2))
plot(e)
title('ERROR SIGNAL');
ylabel('MAGNITUDE')
xlabel('SAMPLING ITERATIONS')
filter_coefficients = f(:,num)
% MATLAB PROGRAM FOR AN IIR NOISE SIGNAL FILTERED BY A
% RECURSIVE LEAST SQUARES ADAPTIVE FIR FILTER HAVING
% 15 COEFFICIENTS.
clear all;
num = 1000;
f = zeros(15,num);
w = sqrt(2.5)*randn(1,num);
x = zeros(1,num);
y = zeros(1,num);
v = zeros(1,num);
% INCOMING SIGNAL
for n = 1:num
S(n) = 5*sin(2*pi*n/30);
end
F = eye(15);
p = 10*eye(15);
for n = 15:num;
x(n) = w(n) + 0.9*w(n-1);
phi = [x(n) x(n-1) x(n-2) x(n-3) x(n-4),...
x(n-5) x(n-6) x(n-7) x(n-8) x(n-9) x(n-10) x(n-11) x(n-12) x(n-13) x(n-14)]';
y(n) = f(:,n)'*phi;
phi2 = phi*phi';
v(n) = 1.4*v(n-1) - 0.49*v(n-2) + x(n) + 1.7*x(n-1);
d(n) = S(n) + v(n);
e(n) = d(n) - y(n);
z(n) = sum(e.^2)/(n-14);
F = F + phi2;
minF(n) = min(eig(F))/(n-14);
p = p - (p*phi*phi'*p)/(1 + phi'*p*phi);
f(:,n+1) = f(:,n) + p*phi*e(n);
end
figure(1)
whitebg(figure(1))
plot(f(1,:))
axis([0 num -4 5]);
hold on
plot(f(2,:))
plot(f(3,:))
plot(f(4,:))
plot(f(5,:))
plot(f(6,:))
plot(f(7,:))
plot(f(8,:))
plot(f(9,:))
plot(f(10,:))
plot(f(11,:))
plot(f(12,:))
plot(f(13,:))
plot(f(14,:))
plot(f(15,:))
title('ADAPTIVE FIR FILTER COEFFICIENTS USING "RLS" ALGORITHM');
gtext('INCOMING IIR NOISE SIGNAL FILTERED WITH FIR FILTER');
ylabel('MAGNITUDE')
xlabel('SAMPLING ITERATIONS')
pause
figure(2)
whitebg(figure(2))
plot(e)
title('ERROR SIGNAL');
ylabel('MAGNITUDE')
xlabel('SAMPLING ITERATIONS')
filter_coefficients = f(:,num)
% MATLAB PROGRAM OF AN IIR ADAPTIVE FILTER NEEDED TO
% FILTER A NOISE SIGNAL WITH BOTH A CHANGING AND
% NONCHANGING COEFFICIENT. FILTERED USING THE
% LEAST MEAN SQUARES ALGORITHM.
clear all;
num = 1000;
f = zeros(4,num);
w = sqrt(2)*randn(1,num);
x = zeros(1,num);
v = zeros(1,num);
y = zeros(1,num);
g = zeros(1,num);
for n = 1:num
S(n) = 5*sin(2*pi*n/30);
end
gam = input('Please enter a value for gamma = ');
u = input('Gain factor = ');
F = eye(4);
for n = 4:num;
x(n) = 2*w(n) + 1.5*w(n-1);
phi = [y(n-1) x(n) x(n-1) x(n-2)]';
y(n) = f(:,n)'*phi;
g(n) = 0.1*g(n-1) + .56*g(n-2) + gam*x(n);
v(n) = 0.4*v(n-1) + 2*x(n) - 4.6*x(n-1) + 2.4*x(n-2) + g(n);
d(n) = S(n) + v(n);
e(n) = d(n) - f(:,n)'*phi;
z(n) = sum(e.^2)/(n-3);
F = F + phi*phi';
minF(n) = min(eig(F))/(n-3);
f(:,n+1) = f(:,n) + u*phi*e(n);
end
figure(1)
whitebg(figure(1))
plot(f(1,:))
axis([0 num -6 6]);
hold on
plot(f(2,:))
plot(f(3,:))
plot(f(4,:))
title('ADAPTIVE IIR FILTER COEFFICIENTS USING "LMS" ALGORITHM');
gtext('UNMODELED NOISE PRESENT: GAMMA = 3.2')
ylabel('MAGNITUDE')
xlabel('SAMPLING ITERATIONS')
pause;
figure(2)
whitebg(figure(2))
plot(e)
title('ERROR SIGNAL e(n)');
ylabel('MAGNITUDE')
xlabel('SAMPLING ITERATIONS')
f(:,num)
% MATLAB PROGRAM OF AN IIR ADAPTIVE FILTER NEEDED TO
% FILTER A NOISE SIGNAL WITH BOTH A CHANGING AND
% NONCHANGING COEFFICIENT. FILTERED USING THE
% NORMALIZED LEAST MEAN SQUARES ALGORITHM.
clear all;
num = 1000;
f = zeros(4,num);
w = sqrt(2)*randn(1,num);
x = zeros(1,num);
v = zeros(1,num);
y = zeros(1,num);
g = zeros(1,num);
for n = 1:num;
S(n) = 5*sin(2*pi*n/30);
end
gam = input('Please enter a value for gamma = ');
u = input('Gain factor = ');
c = input('Normalization factor = ');
F = eye(4);
for n = 4:num;
x(n) = 2*w(n) + 1.5*w(n-1);
phi = [y(n-1) x(n) x(n-1) x(n-2)]';
y(n) = f(:,n)'*phi;
phi2 = phi*phi';
g(n) = 0.1*g(n-1) + .56*g(n-2) + gam*x(n);
v(n) = 0.4*v(n-1) + 2*x(n) - 4.6*x(n-1) + 2.4*x(n-2) + g(n);
d(n) = S(n) + v(n);
e(n) = d(n) - f(:,n)'*phi;
z(n) = sum(e.^2)/(n-3);
F = F + phi2;
minF(n) = min(eig(F))/(n-3);
r(n) = c + norm(phi);
f(:,n+1) = f(:,n) + (u/r(n))*phi*e(n);
end
figure(1)
whitebg(figure(1))
plot(f(1,:))
axis([0 num -6 6]);
hold on
plot(f(2,:))
plot(f(3,:))
plot(f(4,:))
title('ADAPTIVE IIR FILTER COEFFICIENTS USING "NLMS" ALGORITHM');
gtext('UNMODELED NOISE PRESENT: GAMMA = 4')
ylabel('MAGNITUDE')
xlabel('SAMPLING ITERATIONS')
pause;
figure(2)
whitebg(figure(2))
plot(e)
title('ERROR SIGNAL e(n)');
ylabel('MAGNITUDE')
xlabel('SAMPLING ITERATIONS')
f(:,num)
% MATLAB PROGRAM OF AN IIR ADAPTIVE FILTER NEEDED TO
% FILTER A NOISE SIGNAL WITH BOTH A CHANGING AND
% NONCHANGING COEFFICIENT. FILTERED USING THE
% RECURSIVE LEAST SQUARES ALGORITHM.
clear all;
num = 1000;
f = zeros(4,num);
w = sqrt(2.5)*randn(1,num);
x = zeros(1,num);
v = zeros(1,num);
y = zeros(1,num);
g = zeros(1,num);
gam = input('Please enter a value for gamma = ');
for n = 1:num
S(n) = 5*sin(2*pi*n/30);
end
F = eye(4);
p = 10*eye(4);
for n = 5:num;
x(n) = 2*w(n) + 1.5*w(n-1);
phi = [y(n-1) x(n) x(n-1) x(n-2)]';
y(n) = f(:,n)'*phi;
phi2 = phi*phi';
g(n) = 0.1*g(n-1) + .56*g(n-2) + gam*x(n);
v(n) = 0.4*v(n-1) + 2*x(n) - 4.6*x(n-1) + 2.4*x(n-2) + g(n);
d(n) = S(n) + v(n);
e(n) = d(n) - y(n);
z(n) = sum(e.^2)/(n-4);
F = F + phi2;
minF(n) = min(eig(F))/(n-4);
p = p - (p*phi*phi'*p)/(1 + phi'*p*phi);
f(:,n+1) = f(:,n) + p*phi*e(n);
end
figure(1)
whitebg(figure(1))
plot(f(1,:))
axis([0 num -6 6]);
hold on
plot(f(2,:))
plot(f(3,:))
plot(f(4,:))
title('ADAPTIVE IIR FILTER COEFFICIENTS USING "RLS" ALGORITHM');
gtext('UNMODELED NOISE PRESENT: GAMMA = 5')
ylabel('MAGNITUDE')
xlabel('SAMPLING ITERATIONS')
pause;
figure(2)
whitebg(figure(2))
plot(e)
title('ERROR SIGNAL e(n)');
ylabel('MAGNITUDE')
xlabel('SAMPLING ITERATIONS')
f(:,num)
GLOSSARY
Stochastic Process: also referred to as a random process; describes a statistical event
via probabilistic laws. The exact evolution of such an event cannot be predicted.
Examples include speech, television, radar and noise signals.
Wide Sense Stationary: a signal whose mean is a constant regardless of the sample
index and whose autocorrelation is only a function of the difference in sample indices.
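Stated formally (a standard restatement of the two conditions above, with E denoting expectation; not an equation taken from the thesis):

```latex
\text{(i)}\quad E[x(n)] = m_x \quad \text{for all } n, \qquad
\text{(ii)}\quad E\big[x(n)\,x(n-k)\big] = r_x(k) \quad \text{depends only on the lag } k.
```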
Filtering: the extraction of data of particular interest.
RLS: Recursive Least Squares.
LMS: Least Mean Squares.
NLMS: Normalized Least Mean Squares.
Rate of Convergence: number of iterations required for the algorithm to converge
upon the ideal filter coefficients in response to stationary inputs.
Tracking: ability of the algorithm to adjust to statistical changes in a nonstationary
environment.
White noise: a zero mean stochastic process with a constant power spectral density for
all frequencies.
REFERENCES
Simon Haykin, Adaptive Filter Theory, Third Edition, Upper Saddle River, NJ: Prentice
Hall, 1996.
Peter M. Clarkson, Optimal and Adaptive Signal Processing, Boca Raton, Florida: CRC Press,
1993.
Oppenheim, A.V., and R.W. Schafer, Discrete-Time Signal Processing, Englewood Cliffs, NJ:
Prentice-Hall, 1989.
Leon-Garcia, Alberto, Probability and Random Processes for Electrical Engineering, Second
Edition, Reading, MA: Addison-Wesley Publishing Co., 1994.
Macchi, O., Adaptive Processing: The LMS Approach with Applications in Transmission, New
York: Wiley, 1995.
Jazwinski, A.H., Stochastic Processes and Filtering Theory, New York: Academic Press, 1970.
Huhta, J.C., and J.G. Webster, "60-Hz interference in electrocardiography," IEEE
Trans. Biomed. Eng., vol. BME-20, pp. 91-101, 1973.
Kang, G.S., and L.J. Fransen, "Experimentation with an adaptive noise-cancellation
filter," IEEE Trans. Circuits Syst., vol. CAS-34, pp. 753-758, 1987.
ENDNOTES
i. Simon Haykin, Adaptive Filter Theory, Third Edition (Upper Saddle River, NJ: Prentice Hall, 1996), pg. 75.
ii. Peter M. Clarkson, Optimal and Adaptive Signal Processing (Boca Raton, Florida: CRC Press, 1993), pg. 13.
iii. Simon Haykin, Adaptive Filter Theory, Third Edition (Upper Saddle River, NJ: Prentice Hall, 1996), pg. 207.
iv. Simon Haykin, Adaptive Filter Theory, Third Edition (Upper Saddle River, NJ: Prentice Hall, 1996), pg. 365.
v. Peter M. Clarkson, Optimal and Adaptive Signal Processing (Boca Raton, Florida: CRC Press, 1993), pg. 170.
