Citation
Adaptive noise cancellation using an extended least squares algorithm

Material Information

Title:
Adaptive noise cancellation using an extended least squares algorithm
Creator:
Anderson, Mark
Place of Publication:
Denver, Colo.
Publisher:
University of Colorado Denver
Publication Date:
Language:
English
Physical Description:
vii, 64 leaves : illustrations ; 29 cm

Thesis/Dissertation Information

Degree:
Master's ( Master of Science)
Degree Grantor:
University of Colorado Denver
Degree Divisions:
Department of Electrical Engineering, CU Denver
Degree Disciplines:
Electrical Engineering
Committee Chair:
Radenkovic, Miloje S.
Committee Members:
Bialasiewicz, Jan T.
Fardi, Hamid

Subjects

Subjects / Keywords:
Adaptive filters ( lcsh )
Adaptive filters ( fast )
Genre:
bibliography ( marcgt )
theses ( marcgt )
non-fiction ( marcgt )

Notes

Bibliography:
Includes bibliographical references (leaf 64).
General Note:
Submitted in partial fulfillment of the requirements for the degree, Master of Science, Electrical Engineering.
General Note:
Department of Electrical Engineering
Statement of Responsibility:
by Mark Anderson.

Record Information

Source Institution:
University of Colorado Denver
Holding Location:
Auraria Library
Rights Management:
All applicable rights reserved by the source institution and holding location.
Resource Identifier:
34415880 ( OCLC )
ocm34415880
Classification:
LD1190.E54 1995m .A63 ( lcc )

Full Text
ADAPTIVE NOISE CANCELLATION
USING AN
EXTENDED LEAST SQUARES ALGORITHM
by
Mark Anderson
B.S.E.E., Colorado State University, 1989
A thesis submitted to the
Faculty of the Graduate School of the
University of Colorado at Denver
in partial fulfillment
of the requirements for the degree of
Master of Science
Electrical Engineering
1995


This thesis for the Master of Science
degree by
Mark Anderson
has been approved
by
Miloje S. Radenkovic
Hamid Fardi
Date


Anderson, Mark (M.S., Electrical Engineering)
Adaptive Noise Cancellation Using an Extended Least Squares Algorithm
Thesis directed by Assistant Professor Miloje S. Radenkovic
ABSTRACT
This thesis discusses the problem of noise cancellation in the area of Signal
Processing. The focus is on the development and application of an Extended Least
Squares (ELS) adaptive algorithm to a specific system model. Many approaches
to the development of an adaptive filtering algorithm are based upon the Wiener
filter, the Kalman filter, or the Method of Least Squares with each using a tapped
delay line, or transversal filter, as the basis structure. Adaptive filters rely on their
self-adjusting property to deal with unknown system parameters and new, or time
varying, situations after having been trained on a finite number of training signals
or patterns through some form of recursive operations. An Extended Least
Squares algorithm is developed for a noise cancellation problem, and computer
simulations show the actual performance of the algorithm in this application.
This abstract accurately represents the content of the candidate's thesis. I
recommend its publication.
Signed:
Miloje S. Radenkovic


CONTENTS
Chapter
1. Introduction................................................1
1.1 Adaptive Automation................................. 1
1.2 Filtering Background................................ 2
1.3 Adaptive Noise Cancellation Applications.............3
1.3.1 Electrocardiographic Applications..............7
1.3.2 Communications Applications..................11
1.3.3 Periodic and Broadband Applications..........14
2. Noise Cancellation Problem............................... 19
2.1 ARMAX Model Definition..............................19
2.2 Extended Least Squares Algorithm Development........22
2.3 Algorithm Analysis..................................25
3. Noise Cancellation Simulations.............................27
3.1 Problem Definition..................................27
3.2 ELS Algorithm Set-up................................28
3.3 MATLAB Simulation Results...........................30
4. Conclusions and Future Work................................43
Appendix
A MATLAB Script Files for Computer Simulation................44
References........................................................64


FIGURES
Figure
1.1 Closed-Loop Adaptation........................................2
1.2 Adaptive Noise Canceler.......................................4
1.3 Adaptive Noise Canceler with Uncorrelated Inputs..............6
1.4 Cancelling 60-Hz Interference.................................8
1.5 Multichannel Adaptive Noise Canceler.........................10
1.6 Simplified Long Distance System..............................12
1.7 Long Distance System with Adaptive Echo Cancellation.........14
1.8 Cancelling Periodic Interference..........................15
1.9 Self-Tuning Filter...........................................16
1.10 Adaptive Line Enhancer.......................................17
1.11 Adaptive Line Enhancer as Strong Signal Eliminator...........18
2.1 System Definition with ARMAX Input...........................19
3.1 A priori Prediction Error....................................31
3.2 "Close-Up" A priori Prediction Error.........................31
3.3 Predicted System Output vs. Real System Output...............32
3.4 Estimation Parameters for A(q^-1).........................33
3.5 "Close-Up" for A(q^-1)....................................33
3.6 Estimation Parameters for B(q^-1).........................34
3.7 "Close-Up" for B(q^-1)....................................34
3.8 Estimation Parameters for C(q^-1).........................35
3.9 "Close-Up" for C(q^-1)....................................35
3.10 Composite of the Normalized Squared-Differences..............37
3.11 Average Squared-Difference...................................38
3.12 Trial #4 Squared-Difference..................................39
3.13 Trial #4 Estimates for A(q^-1)...........................39
3.14 Trial #4 Estimates for B(q^-1)...........................40
3.15 Trial #4 Estimates for C(q^-1)...........................40
3.16 Trial #6 Squared-Difference..............................41
3.17 Trial #6 Estimates for B(q^-1)...........................42


ACKNOWLEDGMENTS
Special appreciation is extended to Dr. Miloje Radenkovic for his guidance
and support during this project and other graduate work as well. Appreciation is
also extended to TRW Systems Services Company Inc. for supporting my
graduate work through their program for study under company auspices.
I am also very grateful for the support and encouragement given by
my wife and my parents throughout my graduate work.


CHAPTER 1
INTRODUCTION
1.1 Adaptive Automation
Interest in the area of adaptive automation continues to increase because of
the significant improvement in a system's performance which can be brought about
through contact with its environment. This improvement is achieved as the
adaptive system adjusts its structure through some type of "training" process.
Fields such as communications, radar, sonar, seismology, mechanical design,
navigation systems, and biomedical electronics are finding several applications for
adaptation.
Several adaptive schemes have been proposed and developed which can be
broken down into open-loop and closed-loop adaptation. The closed-loop, or
performance feedback, process monitors some measured system performance and
optimizes that performance by adjusting itself to better deal with the variable or
inaccurately known components of the physical system. The general scheme for
closed-loop adaptation is shown in Figure 1.1 [1].
These closed-loop models process an input signal u and a defined
"desired", or tracking, response signal d to find an error signal e. Typically, an
adaptive algorithm is used to minimize some measure of this error and then adjust
the structure of the adaptive system processor.


Figure 1.1 Closed-Loop Adaptation
1.2 Filtering Background
The general idea of enhancing the performance of a system while operating
in a changing environment and tracking time variations in input is the driving force
that changes the focus from a fixed filter design to that of an adaptive filter. The
fixed filtering methods, which were developed through the work of Wiener,
Kalman, Bucy, and others, require a priori information about certain statistical
parameters of the useful signal and any unwanted additive noise. In the noise
cancellation problem, a filter needs to suppress the noise while passing through the
useful signal. The Wiener filter minimizes the mean-square value of the error
between the desired response and the actual filter output and is said to be the
optimal filter in the mean-square sense.
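In concrete terms, for an M-tap transversal filter this optimum is given by the Wiener-Hopf (normal) equations, w = R^-1 p, where R is the autocorrelation matrix of the filter input and p is the cross-correlation between the input and the desired response. A minimal MATLAB sketch (the example input and desired response below are assumptions of this illustration, not signals from the thesis):

% Wiener filter sketch: estimate R and p from data, then solve w = R\p.
M = 8; N = 10000;
x = randn(N,1);                      % filter input (example)
d = filter([1 0.6 0.2], 1, x);       % desired response (example system)
X = zeros(N-M+1, M);
for k = 1:M
    X(:,k) = x(M-k+1 : N-k+1);       % delay-line data matrix
end
R = (X'*X)/(N-M+1);                  % sample autocorrelation matrix
p = (X'*d(M:N))/(N-M+1);             % sample cross-correlation vector
w = R\p;                             % approaches [1 0.6 0.2 0 ...]'

The point is only that the fixed design requires R and p, i.e., prior statistical knowledge of the signals.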
In many situations, prior knowledge of the input signal or noise may not be
known and/or they may vary with time. Thus the filter design must be able to
change its structure in order to continue meeting the response criteria. Adaptive
filters rely on a recursive algorithm which starts from a predetermined set of initial
conditions, proceeds to train itself on the system inputs, and converges to the
optimum Wiener filter in some statistical sense. Many algorithms have been
developed giving various performance factors to consider. These include rate of
convergence, robustness, computational requirements, along with other factors.
These measure how rapidly the algorithm moves towards the optimum solution in
some mean-square sense, how well it can adjust to a changing environment, and
how practical the implementation would be given a certain operating platform [2].
1.3 Adaptive Noise Cancellation Applications
The late 1950's saw a number of independent researchers working on
various adaptive algorithms, including the least-mean-square (LMS) algorithm,
which was developed in 1959 by Widrow and Hoff for use in adaptive
transversal filters for pattern recognition. Adaptive algorithms found interest
in both signal processing and controls applications. In the signal processing arena,
the cancellation of 60-Hz interference at the output of an electrocardiographic
amplifier and recorder was the focus of B. Widrow and others at Stanford
University in 1965. Other areas of electrocardiography along with the elimination
of periodic interferences and the elimination of transmission line echoes have also
applied adaptive noise cancellation algorithms successfully [1].
The adaptive noise canceler operates on a primary sensor and a reference
sensor as seen in Figure 1.2. The primary signal consists of input from the signal
source s and some uncorrelated noise n0, while the reference signal is a noise n1
which is correlated with the noise in the primary signal. The adaptive filter
operates on the reference signal to produce an output y that is an estimate of the
source noise. This output is subtracted from the primary input, giving the signal
alone as the output of the canceler. The adaptive canceler tends to minimize the
mean-square value, or total power, of the system's output ŝ by readjusting the
algorithm's structure. This produces the best estimate of the desired signal in the
minimum-mean-square sense.
Figure 1.2 Adaptive Noise Canceler
The following argument shows that little or no prior knowledge of s, n0,
or n1, or of their statistical or deterministic interrelationships, is required.
Assume that s, n0, n1, and y are statistically stationary and have zero means, and
that s is uncorrelated with n0 and n1, which are correlated with each other. The
output is

ŝ = s + n0 - y    (1.1)

Squaring both sides and taking expectations, realizing that s is uncorrelated with
n0 and with y:

E[ŝ^2] = E[s^2] + E[(n0 - y)^2] + 2E[s(n0 - y)]    (1.2)

E[ŝ^2] = E[s^2] + E[(n0 - y)^2]    (1.3)

Minimizing the output power will not affect the signal power. Hence, the output
noise power E[(n0 - y)^2] is also minimized:

min E[ŝ^2] = E[s^2] + min E[(n0 - y)^2]    (1.4)

This results in the filter output y being a best least-squares estimate of the primary
noise n0. E[(ŝ - s)^2] is also minimized since, from (1.1),

ŝ - s = n0 - y    (1.5)

This shows that minimizing the output power causes the output ŝ to be a best
least-squares estimate of the signal s. Since the signal power in the output remains
constant, minimizing the total output power maximizes the output signal-to-noise
ratio. The smallest possible output power is Emin[ŝ^2] = E[s^2]. This gives
E[(n0 - y)^2] = 0, so that y = n0 and ŝ = s, and the output signal is free of noise.

If, however, n0 and n1 are uncorrelated, the filter will "turn itself off"
and will not increase the output noise. It then follows that the filter output y will be
uncorrelated with the primary input, and the output power becomes

E[ŝ^2] = E[(s + n0)^2] + 2E[-y(s + n0)] + E[y^2]    (1.6)

E[ŝ^2] = E[(s + n0)^2] + E[y^2]    (1.7)

Minimizing the output power now requires E[y^2] to be minimized. This is
achieved by making all the filter coefficients zero, or turning off the adaptation,
so that the output power of the filter becomes zero. These arguments can be
extended to cases where the primary and reference inputs also contain additive
random noises uncorrelated with each other and with the previously mentioned inputs [1].
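As a purely illustrative sketch of this arrangement (Figure 1.2), the canceler can be simulated with an LMS-adapted FIR filter; the signal, noise-shaping filter, step size mu, and filter length M below are assumptions of the example, not values from the thesis:

% Minimal LMS noise-canceler sketch: primary d = s + n0, reference n1.
N = 5000;
s  = sin(2*pi*(1:N)'/50);            % desired signal
n1 = randn(N,1);                     % reference noise
n0 = filter([1 0.5 0.2], 1, n1);     % primary noise, correlated with n1
d  = s + n0;                         % primary input
M = 8; w = zeros(M,1); mu = 0.01; shat = zeros(N,1);
for k = M:N
    x = n1(k:-1:k-M+1);              % reference tap vector
    y = w'*x;                        % filter output = estimate of n0
    e = d(k) - y;                    % canceler output = estimate of s
    w = w + mu*e*x;                  % LMS weight update
    shat(k) = e;
end

Since e = s + (n0 - y), driving down E[e^2] can only shrink the noise term, exactly as in equations 1.3 and 1.4.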
A more detailed picture of the single-channel noise canceler is shown in Figure 1.3.



Figure 1.3 Adaptive Noise Canceler with Uncorrelated Inputs
Since a noncausal IIR filter is not physically realizable in a
real-time system, approximations are possible with finite-length FIR filters. The
impulse response of an FIR filter will approach that of the IIR filter as the
number of weights in the FIR filter is increased; however, this also increases the
implementation cost of the FIR filter. Also, using a delay Δ in the primary input
path will provide an acceptable delayed real-time response which approximates
that of the noncausal filter. Through the feedback, the optimum adaptive filter
impulse response will also develop this delay. The shape of the impulse response
generally changes with different values of Δ. It has been shown that a value for Δ
of approximately half the time delay of the adaptive filter will produce the least
minimum output noise power. Without a proper delay, the impulse response of the
filter will not be a delayed version of the optimum filter, and the noise canceler
output will be a poor approximation of the signal [1].
1.3.1 Electrocardiographic Applications
1.3.1.1 60-Hz Cancellation
The problem of 60-Hz interference plagues many systems including the
recording of electrocardiograms (ECGs). The causes of this interference include
magnetic induction, displacement currents in leads or in the body of the patient,
and equipment and set-up imperfections. Conventional methods for reducing this
interference include the use of twisted pair leads and properly grounding all
relating affected equipment and personnel. Using adaptive noise cancelling can
further reduce the interference. The cancellation system, shown in Figure 1.4,
takes the ECG preamplifier as the primary input, and the 60-Hz reference input is
taken from a properly attenuated wall outlet111.
The reference input is divided into dual reference inputs, with one
undergoing a 90° phase shift. The adaptive filter adjusts these dual reference
signals with two variable weights. The resulting summed reference input can then
be changed in magnitude and phase to achieve cancellation. Since two "degrees of
freedom" are required to identify and cancel a single sinusoidal signal, two weights
are chosen for the filter to cancel the 60-Hz signal.
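As a rough sketch (not taken from the thesis), the two-weight canceler can be written in a few lines; the sampling rate, step size, and signal models below are assumptions of the illustration:

% Hypothetical two-weight 60-Hz canceler with LMS adaptation.
fs = 1000; t = (0:1/fs:2)';               % 1-kHz sampling, 2-s record
ecg = sin(2*pi*1.2*t);                    % stand-in for the ECG signal
d   = ecg + 0.5*cos(2*pi*60*t + 0.3);     % primary input with 60-Hz hum
x1 = cos(2*pi*60*t); x2 = sin(2*pi*60*t); % reference and 90-degree shift
w = [0; 0]; mu = 0.01; out = zeros(size(t));
for k = 1:length(t)
    y = w(1)*x1(k) + w(2)*x2(k);          % adjustable magnitude and phase
    e = d(k) - y;                         % canceler output
    w = w + mu*e*[x1(k); x2(k)];          % two-weight LMS update
    out(k) = e;
end

The two weights scale the in-phase and quadrature references, which is exactly the magnitude-and-phase adjustment described above.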


Figure 1.4 Cancelling 60-Hz Interference
1.3.1.2 Donor Heart Cancellation
In cardiac transplant research, it is the patient's new heart that will interfere
with the study of the old heart rate which is still controlled by the central nervous
system. The central nervous system controls the frequency of the heartbeat by
controlling the rate of electrical depolarization in a group of specialized muscle
cells known as the sinoatrial (SA) node in the patient's atrium. Normally, this
initiates an electrical impulse transmitted by conduction through the atrial heart
muscle to another group of muscle cells known as the atrioventricular (AV) node.
The AV node is also capable of independent, asynchronous operation. This node
then triggers the electrical depolarization of the ventricles in the heart [3,4]. Norman
Shumway of the Stanford University Medical Center developed a transplant
technique involving the suturing of the "new" or donor heart atrium to a portion of
the atrium of the patient's "old" heart [5]. The SA node in the atrium of the old
heart is electrically isolated from the new heart atrium, which contains both the SA
and AV nodes. The nervous system continues to control electrical output from the
old SA node, but that output varies significantly from beat to beat in strength. The
new SA node generates a spontaneous pulse that causes the new heart to beat at a
separate self-pacing rate. In order to study these signals properly, adaptive noise
cancellation provides an answer. The reference input to the canceler (see Figure
1.2) can be obtained by a pair of ordinary chest leads and will mainly contain the
new heart beat. The primary input is received by the use of a catheter threaded
through the brachial vein in the left arm and the vena cava to a position in the
atrium in the old heart. This primary input will contain the beat signals of both
hearts. The new heart beat is cancelled allowing more detailed study of the old
heart patterns.
1.3.1.3 Maternal ECG Cancellation
The use of adaptive noise cancelling in fetal electrocardiography can
provide a way to clearly observe a fetal ECG. The details of the actual waveform
of the electrical output from the fetus are better monitored during labor and
delivery with the use of abdominal electrocardiograms [6-8]. While muscle activity
and fetal motion form some of the interference in an abdominal electrocardiogram,
the mother's heartbeat is the greatest interferer, as it is 2 to 10 times stronger in
amplitude than the fetal heartbeat [9-11]. For this application, the adaptive canceler is
provided with four reference inputs which are taken from chest leads to record the
mother's heartbeat. Multiple reference inputs are used to make the interference
filtering task easier. A single abdominal lead provides the primary input consisting
of the combined heartbeats. This canceler is shown in Figure 1.5. The output of
the canceler provides a cleaner fetal signal allowing for better analysis of the fetal
ECG.

Figure 1.5 Multichannel Adaptive Noise Canceler


1.3.2 Communications Applications
1.3.2.1 Noise Cancellation in Speech Signals
A common example of the need to suppress interference in speech signals
is the situation a pilot faces in the cockpit of an aircraft where high engine noise is
present. Because the frequency and intensity of the noise components within the
speech bandwidth vary due to the changing environment of the cockpit,
conventional filtering would be impractical. This example falls directly into the
basic noise cancellation problem shown in Figure 1.2. The primary input consists
of the pilot's voice and the strong periodic components and rich harmonics present
in the ambient noise. A second microphone is placed at a location in the cockpit
which will provide a reference signal of the ambient noise which is free of the
pilot's voice. The output of the noise canceler will significantly reduce the
interference producing a relatively undistorted signal of the voice transmission.
A typical experiment could consist of an adaptive filter using a Least Mean
Squares (LMS) algorithm with 16 weights and a triangular wave for the
interference which contains many harmonics that vary in amplitude and phase from
point to point in the testing room. The time delay caused by the different
transmission paths from the interference source to the two sensors can be ignored
because of the periodic nature of the triangular wave. This noise canceler can
reduce the output noise power by 20 to 25 dB while introducing no noticeable
distortion in the speech signal. This adaptive filter is able to converge after about
5000 adaptations or 1 second of real time and is easily adjusted to changes in the
interference signal resulting from moving the positions of the microphones or
changing the interference frequency from 100 to 2000 Hz [1].


1.3.2.2 Echo Cancellation
Long distance telephone circuit connections involve the use of a hybrid
device which converts a two-wire loop circuit of the customer to a four-wire
circuit of the long distance trunk lines. The trunk lines of the hybrid consist of two
lines for the incoming signal and two lines for the outgoing signals. A simplified
version of this long distance system is shown in Figure 1.6. Incoming signals are
passed through an ideal hybrid to the local two-wire customer loop with 3 dB
of attenuation with no energy passing to the outgoing trunk lines. This hybrid will
also pass the signal from the local loop to the outgoing lines with 3 dB of
attenuation without reflecting anything back to the local loop. In practice,
however, perfect separation of the incoming and outgoing signals is not obtained
due to variations in instrument characteristics, in loop length, and in impedance.
This results in a "leakage" signal, attenuated by about 15 dB relative to the
incoming signal, being directed along the outgoing trunk lines [1].
Figure 1.6 Simplified Long Distance System


Echo Suppressors developed at the Bell Telephone Laboratories have been
successful when used in continental calls where the round trip delay is 100 ms or
less. These suppressors detect when an incoming signal is present on the four-wire
lines of the hybrid device and cut a relay in the outgoing path. The path is
reestablished when another signal is detected coming from the microphone on the
two-wire port. With the relay open during any transmission, echoes may be passed
when both callers talk at the same time [12,13].
In satellite communications, the round trip delay generally exceeds 500 ms
and the relay switching causes choppiness and missing pieces of words. The
development of adaptive echo cancelers resulted from these switching problems.
Figure 1.7 shows a single end of a telephone system with an adaptive canceler
where the incoming signal is applied to both the hybrid and the adaptive filter as
the reference input. The outgoing signal from the hybrid is taken as the primary
input to the canceler and has a leakage component that is correlated with the
incoming signal. The filter output is the estimate of this leakage component and
is subtracted from the outgoing signal to minimize the "echo" in the transmitted
signal in some least-squares sense. This estimate of the echo component is also
used to update the filter weights. Practical considerations place limits on the
sampling time and on quantization in the adaptive filter realization. An example of
an adaptive echo canceler for speech could use an 8-kHz sampling rate and 128
weights [1].


Figure 1.7 Long Distance System with Adaptive Echo Cancellation
1.3.3 Periodic and Broadband Applications
1.3.3.1 Periodic Interference Cancellation
In several possible applications for interference cancellation, it appears that
adaptive noise cancelling cannot be applied because the interference is periodic
and no external reference input that is free of the desired broadband
signal is available. Some examples include the playback of speech or music in the
presence of tape hum or turntable rumble, or sensing seismic signals in the
presence of vehicle engine or powerline noise. In this situation, the composite
signal is split with one path forming the primary input and the other path being
delayed to form the reference input. The length of the delay is chosen to cause the
broadband signal components in the reference path to become decorrelated from
those in the primary path. This delay parameter is often called the decorrelation
parameter. The periodic components in each path will remain correlated due to
their periodic nature. The adaptive filter will produce a prediction of the
correlated component which will leave the unpredictable component, the
broadband signal, at the output of the system. As discussed in section 1.3, if a
delay less than the total delay of the adaptive filter is inserted in the
primary input path, the filter will converge to that primary input and cancel both
components.
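A minimal sketch of this delayed-reference arrangement (the delay, filter length, and LMS update below are assumptions of the illustration):

% Decorrelation-delay canceler sketch: the reference is a delayed copy of
% the primary, so only the periodic component stays correlated across the
% delay and is predicted (and cancelled).
N = 4000; D = 25; M = 16; mu = 0.005;
p = sin(2*pi*(1:N)'/20);           % periodic interference
b = randn(N,1);                    % broadband signal of interest
d = p + b;                         % composite (primary) input
x = [zeros(D,1); d(1:N-D)];        % reference = primary delayed by D
w = zeros(M,1); e = zeros(N,1);
for k = M:N
    xk = x(k:-1:k-M+1);
    y  = w'*xk;                    % prediction of the periodic component
    e(k) = d(k) - y;               % broadband output
    w  = w + mu*e(k)*xk;
end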
Figure 1.8 Cancelling Periodic Interference


1.3.3.2 Adaptive Self-Tuning Filter
Given similar circumstances as in the previous section except that the
periodic component is the signal of interest, the same configuration with delay may
be used. It is still the periodic component which the adaptive filter is predicting.
Since it is the desire to recover the periodic component and not the broadband
component, the system output will not be taken from the error signal of the
adaptive canceler. As shown in Figure 1.9, the output comes from the adaptive
filter. Thus, the adaptive filter is being used as a self-tuning filter. Further
experiments have shown this self-tuning configuration to be able to predict the
presence of multiple sinusoidal signals combined with the broadband component.
Broadband
Interference
Periodic
Signal
iKD
Primary
Input ^
Delay
-A
Reference
Input
Figure 1.9 Self-Tuning Filter


1.3.3.3 Adaptive Line Enhancer
A specific use of the self-tuning adaptive filter is to detect extremely low-
level sine waves embedded in broadband noise. This system enhances the presence
of sine waves and can compete with the fast Fourier transform algorithm as a
sensitive detector. Figure 1.10 shows the enhancer with a secondary output taken
from the adaptive weight values of the filter; these weights form the filter's
impulse response, whose discrete Fourier transform gives its transfer function [1].
Figure 1.10 Adaptive Line Enhancer
Experiments using an adaptive line enhancer and a discrete Fourier
transform (DFT) power spectral density measurement were conducted with similar
set-ups to ensure a fair comparison [1]. Both methods easily detected the sinusoidal
signal when the broadband interference was white noise. However, in cases where
the noise has some spectral coloration, the baseline of the DFT power spectral
density measurement can hide the signal peak if the noise bandwidth contains the
peak frequency. On the other hand, only the narrowband sinusoidal components
are contained in the transfer function magnitude plots of the adaptive filter, making
signal detection easier and more certain, with a baseline close to zero.
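A line enhancer along these lines is easy to prototype; in this sketch (the unit decorrelation delay, LMS adaptation, and tone level are assumptions of the illustration), the DFT of the converged weight vector shows the narrowband peak:

% Adaptive line enhancer sketch: weak sinusoid in white noise.
N = 20000; M = 32; mu = 0.002; D = 1;    % D = decorrelation delay
x = 0.1*sin(2*pi*0.1*(1:N)') + randn(N,1);
w = zeros(M,1);
for k = M+D:N
    u = x(k-D:-1:k-D-M+1);      % delayed reference tap vector
    y = w'*u;                   % prediction of the narrowband part
    e = x(k) - y;
    w = w + mu*e*u;             % LMS update
end
W = abs(fft(w,256));            % weight DFT: peak marks the tone frequency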
The adaptive line enhancer can also be useful in detecting and estimating
weak signals in noise accompanied by strong signals. With the adaptive process
minimizing the mean-square error, both the noise and the strong signals are
cancelled. The ability to detect a weak signal in the presence of a strong
interference could be used in a radio receiver application [1]. This application of an
adaptive line enhancer used as a strong-signal eliminator is shown in Figure 1.11.
Figure 1.11 Adaptive Line Enhancer as Strong Signal Eliminator


CHAPTER 2
NOISE CANCELLATION PROBLEM
2.1 ARMAX Model Definition
The focus of this thesis is to develop and apply an Extended Least Squares
(ELS) adaptive algorithm to a noise cancellation problem. The problem
considered here follows the basic system as defined in section 1.1 except for the
differences described below and shown in Figure 2.1. The reference input u(t) is
an interference signal. The primary input y(t) consists of the sum of an
interference v(t) and signal s(t). The interference v(t) is a filtered version of the
reference noise u(t), and the input signal s(t) is a filtered version of the original
signal w(t). The signals u(t) and w(t) are uncorrelated.
Figure 2.1 System Definition with ARMAX Input


v(t) = [B(q^-1)/A(q^-1)] u(t)    (2.1)

s(t) = [C(q^-1)/D(q^-1)] w(t)    (2.2)

where q^-1 is the unit delay operator defined as q^-1 y(t) = y(t-1), and

A(q^-1) = 1 + a1 q^-1 + a2 q^-2 + ... + a_nA q^-nA    (2.3)
B(q^-1) = 1 + b1 q^-1 + b2 q^-2 + ... + b_nB q^-nB    (2.4)
C(q^-1) = 1 + c1 q^-1 + c2 q^-2 + ... + c_nC q^-nC    (2.5)
D(q^-1) = 1 + d1 q^-1 + d2 q^-2 + ... + d_nD q^-nD    (2.6)

y(t) = v(t) + s(t)    (2.7)

y(t) = [B(q^-1)/A(q^-1)] u(t) + [C(q^-1)/D(q^-1)] w(t)    (2.8)
This input follows the form of an Auto-Regressive Moving Average with
"exogenous", or external, input (ARMAX) system model. To further simplify the
notation, combine the denominators of the two input filters. Also note that the
system delay is zero, as the output at time t depends on the input signals at
time t as well.
A(q^-1)D(q^-1) y(t) = B(q^-1)D(q^-1) u(t) + A(q^-1)C(q^-1) w(t)    (2.9)

Ā(q^-1) y(t) = B̄(q^-1) u(t) + C̄(q^-1) w(t)    (2.10)

where the composite polynomials are

Ā(q^-1) = 1 + a1 q^-1 + a2 q^-2 + ... + an q^-n = A(q^-1)D(q^-1)    (2.11)

B̄(q^-1) = b0 + b1 q^-1 + b2 q^-2 + ... + bm q^-m = B(q^-1)D(q^-1)    (2.12)

C̄(q^-1) = 1 + c1 q^-1 + c2 q^-2 + ... + c_l q^-l = A(q^-1)C(q^-1)    (2.13)
This simplified equation combines the system polynomials into composite
polynomials which will be used in the algorithm development. The primary signal
may also be written as a linear difference equation:

y(t+1) = -a1 y(t) - a2 y(t-1) - ... - an y(t-n+1)
         + b0 u(t+1) + b1 u(t) + ... + bm u(t-m+1)    (2.14)
         + c1 w(t) + ... + c_l w(t-l+1) + w(t+1)

or in vector form

y(t+1) = θo^T φo(t) + w(t+1)    (2.15)

where θo is the true system parameter vector and φo(t) is the true
measurement/signal vector:

θo^T = [a1, a2, ..., an, b0, b1, ..., bm, c1, ..., c_l]    (2.16)

φo(t)^T = [-y(t), -y(t-1), ..., -y(t-n+1),
           u(t+1), u(t), ..., u(t-m+1),    (2.17)
           w(t), w(t-1), ..., w(t-l+1)]

It is assumed that w(t) is a white noise interference that is nonmeasurable and
normally distributed with

mean value = E{w(t)} = μ = 0    (2.18)
variance = E{w^2(t)} = σ^2    (2.19)

where E{·} is the mathematical expectation.
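Because the composite polynomials are ordinary polynomial products, they can be formed (and checked) numerically; a small sketch using MATLAB's conv for polynomial multiplication, with the coefficient values of the Chapter 3 example:

% Composite ARMAX polynomials as products of the true ones
% (coefficient vectors in ascending powers of q^-1).
Atrue = [1 -1.4 0.49];          % A(q^-1)
Btrue = 1;                      % B(q^-1)
Ctrue = [1 -0.7];               % C(q^-1)
Dtrue = [1 0.8];                % D(q^-1)
Abar = conv(Atrue, Dtrue)       % -> [1 -0.6 -0.63 0.392]
Bbar = conv(Btrue, Dtrue)       % -> [1 0.8]
Cbar = conv(Atrue, Ctrue)       % -> [1 -2.1 1.47 -0.343]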


2.2 Extended Least Squares Algorithm Development
If it is assumed that the system parameters are known, equation 2.20 shows
a predictor that will give a white prediction error. Notice that the equation
parameters are the exact system parameters.

ŷ(t+1) = -a1 y(t) - a2 y(t-1) - ... - an y(t-n+1)
         + b0 u(t+1) + b1 u(t) + ... + bm u(t-m+1)    (2.20)
         + c1 w(t) + ... + c_l w(t-l+1)

This predictor will minimize the variance of the prediction error
E{[y(t+1) - ŷ(t+1)]^2}.
By using equation 2.14, the minimization criterion then consists of three terms, as
shown in equation 2.21. The third term is zero, as w(t+1) is independent of all the
other signals in that term. The second term does not depend upon the choice of
ŷ(t+1). The first term can only be positive or zero and will determine the
minimization criterion.

E{[y(t+1) - ŷ(t+1)]^2}
  = E{[-a1 y(t) - ... + b0 u(t+1) + ... + c1 w(t) + ... - ŷ(t+1)]^2}
  + E{[w(t+1)]^2}    (2.21)
  + 2E{[-a1 y(t) - ... + b0 u(t+1) + ... + c1 w(t) + ... - ŷ(t+1)] w(t+1)}

Thus, the minimum variance predictor given by equation 2.20 will satisfy the given
criterion. It follows that the prediction error also provides an estimate of the
nonmeasurable w(t+1):

e(t+1) = y(t+1) - ŷ(t+1) = w(t+1)    (2.22)


In the case of unknown system parameters, the a priori adjustable
predictor also uses this estimate for the nonmeasurable w(t) and becomes:

ŷ(t+1) = -â1(t) y(t) - â2(t) y(t-1) - ... - ân(t) y(t-n+1)
         + b̂0(t) u(t+1) + b̂1(t) u(t) + ... + b̂m(t) u(t-m+1)
         + ĉ1(t) ε(t) + ... + ĉ_l(t) ε(t-l+1)    (2.23)
       = θ̂(t)^T φ(t)

where θ̂(t) is the system parameter estimation vector and

θ̂(t)^T = [â1(t), â2(t), ..., ân(t),
           b̂0(t), b̂1(t), ..., b̂m(t),    (2.24)
           ĉ1(t), ..., ĉ_l(t)]

φ(t)^T = [-y(t), -y(t-1), ..., -y(t-n+1),
          u(t+1), u(t), ..., u(t-m+1),    (2.25)
          ε(t), ε(t-1), ..., ε(t-l+1)]

The a priori prediction error is e(t+1) = y(t+1) - ŷ(t+1). Equations 2.26
and 2.27 give the a posteriori adjustable predictor and the a posteriori
prediction error, respectively. The a posteriori predictor uses the next predicted
set of parameters with the currently known signal measurement vector:

ȳ(t+1) = -â1(t+1) y(t) - â2(t+1) y(t-1) - ... - ân(t+1) y(t-n+1)
         + b̂0(t+1) u(t+1) + b̂1(t+1) u(t) + ... + b̂m(t+1) u(t-m+1)
         + ĉ1(t+1) ε(t) + ... + ĉ_l(t+1) ε(t-l+1)    (2.26)
       = θ̂(t+1)^T φ(t)

ε(t) = y(t) - ȳ(t)    (2.27)


The least squares estimation algorithm given below in equations 2.28 to
2.33 minimizes the variance of the prediction error [14]. Since this error
asymptotically tends toward white noise, the parameter estimates will be unbiased.
Included in these equations are the covariance matrix F(t) and a variable
weighting/forgetting factor λ(t). The covariance matrix provides the adaptation
gain. Setting λ(t) below one gives less weight to older data. Equation 2.30 tends
the forgetting factor asymptotically toward 1, giving equal weighting to later
estimates. This can improve the convergence during the initial phases of
adaptation by overcoming the effects introduced by estimating the unknown
w(t) with ε(t). A high adaptation gain is maintained at the beginning, when the
parameter estimates are farther from the optimum values. The weighting value
λ0 determines the rate at which the forgetting factor approaches 1 [15].
θ̂(t+1) = θ̂(t) + F(t+1) φ(t) e(t+1)    (2.28)

F(t+1) = (1/λ(t)) [ F(t) - F(t)φ(t)φ(t)^T F(t) / (λ(t) + φ(t)^T F(t)φ(t)) ]    (2.29)

λ(t) = λ0 λ(t-1) + 1 - λ0    (2.30)

where λ0 = weighting value, 0 < λ0 ≤ 1    (2.31)
λ(0) = initial setting, 0.95 ≤ λ(0) ≤ 0.99    (2.32)
F(0) = ρ0 I, ρ0 > 0    (2.33)

Note: If λ0 = 1 and λ(0) = 1, λ(t) will remain at the constant value 1. This
removes any effect of the forgetting factor on the updating of the covariance
matrix, which leads to a decreasing adaptation gain.
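For reference, one step of the recursion in equations 2.28 - 2.30 can be written directly in MATLAB. This is a minimal sketch of the update (it mirrors the structure of WLSTSQR.M in Appendix A; the function packaging and names are assumptions of the illustration):

function [theta, F, lam] = els_step(theta, F, phi, e, lam, lam0)
% One ELS update: theta = parameter estimate, F = covariance matrix,
% phi = measurement vector, e = a priori prediction error,
% lam = forgetting factor, lam0 = weighting value.
lam = lam0*lam + 1 - lam0;                         % eq 2.30
Fphi = F*phi;
F = (F - (Fphi*Fphi')/(lam + phi'*Fphi)) / lam;    % eq 2.29
theta = theta + F*phi*e;                           % eq 2.28
end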
Convergence to an unbiased estimate of the system parameters is
obtained with the Extended Least Squares (ELS) algorithm as long as the inputs
are sufficiently rich (i.e., they contain enough frequencies to excite all the modes of
the system). Since the algorithm uses estimates for the nonmeasurable w(t), the
convergence of the parameters of Ĉ(q^-1) is slower than that of the parameters of
Â(q^-1) and B̂(q^-1). A condition for global parameter convergence is that the
following relation be strictly positive real [15]:

1/C̄(q^-1) - 1/2    (2.34)
2.3 Algorithm Analysis
This Extended Least Squares algorithm determines estimates for the
composite polynomials Ā(q^-1), B̄(q^-1), and C̄(q^-1) in the primary input
equation 2.10. Remember that these are just composites of the original system
polynomials given in equation 2.9. In the real system, the input w(t) is filtered by
C(q^-1)/D(q^-1) to give the input signal s(t). Now, since the a posteriori
prediction error ε(t) is available as an unbiased estimate of the real input w(t), the
estimates of the composite polynomials can be used to derive an estimate ŝ(t) of
the input signal s(t). Equation 2.35 shows how ŝ(t) will converge to s(t) in
equation 2.36 (also given in equation 2.2):

ŝ(t) = [Ĉ(q^-1)/Â(q^-1)] ε(t) → [C̄(q^-1)/Ā(q^-1)] w(t)
     = [A(q^-1)C(q^-1)] / [A(q^-1)D(q^-1)] w(t)    (2.35)

s(t) = [C(q^-1)/D(q^-1)] w(t)    (2.36)


Thus, a measure of system performance can be monitored by studying a
normalized squared difference of the estimated input signal ŝ(t) and the true input
signal s(t):

f(t) = (1/t) Σ_{i=1}^{t} [ŝ(i) - s(i)]^2,   t = 1, 2, 3, ..., N    (2.37)

If ŝ(t) converges to s(t), this relationship will asymptotically approach zero,
and the rate of convergence will be seen. This can be determined in simulations
because the original input signal w(t) is available for analysis.
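In the simulation scripts (SIGLOOK.M in Appendix A), this average is accumulated recursively rather than re-summed at every t; the running form below is equivalent to equation 2.37:

% Running form of equation 2.37, as used in SIGLOOK.M
% (assumes the length-N signals s and shat from the simulation).
f(1) = (shat(1) - s(1))^2;
for t = 2:N
    f(t) = (t-1)*f(t-1)/t + ((shat(t) - s(t))^2)/t;
end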


CHAPTER 3
NOISE CANCELLATION SIMULATIONS
This chapter will present the results of applying the Extended Least
Squares (ELS) adaptive algorithm to a system with known "real" parameters. The
ARMAX system model defined in section 2.1 will be used as the basis for these
simulations and analysis. The results include plots of the a priori prediction error
e(t), the system output y(t) versus the predicted output ŷ(t), the estimation
parameters θ̂(t), and the normalized squared-difference function f(t) described
in section 2.3.
3.1 Problem Definition
The input signals, u(t) and w(t), considered here are uncorrelated white
noise sequences with mean values of zero and variances of 2 and 1 respectively.
The real system parameter equations and composite parameter equations are listed
below.
Real:       A(q^-1) = 1 - 1.4q^-1 + 0.49q^-2    (3.1)
            B(q^-1) = 1    (3.2)
            C(q^-1) = 1 - 0.7q^-1    (3.3)
            D(q^-1) = 1 + 0.8q^-1    (3.4)

Composite:  Ā(q^-1) = 1 - 0.6q^-1 - 0.63q^-2 + 0.392q^-3
                    = A(q^-1)D(q^-1)    (3.5)
            B̄(q^-1) = 1 + 0.8q^-1
                    = B(q^-1)D(q^-1)    (3.6)
            C̄(q^-1) = 1 - 2.1q^-1 + 1.47q^-2 - 0.343q^-3
                    = A(q^-1)C(q^-1)    (3.7)
Equations 3.8 and 3.9 list the system equation in polynomial and vector forms:

Ā(q^-1) y(t) = B̄(q^-1) u(t) + C̄(q^-1) w(t)    (3.8)

y(t+1) = θo^T φo(t) + w(t+1)    (3.9)

The true system parameter vector θo becomes:

θo^T = [a1 a2 a3 b0 b1 c1 c2 c3]    (3.10)

where a1 = -0.6, a2 = -0.63, a3 = 0.392, b0 = 1.0, b1 = 0.8,
c1 = -2.1, c2 = 1.47, c3 = -0.343.

The true signal vector φo(t) becomes:

φo(t)^T = [-y(t), -y(t-1), -y(t-2),
           u(t+1), u(t),    (3.11)
           w(t), w(t-1), w(t-2)]

3.2 ELS Algorithm Set-up
With the orders of the composite system polynomials known, the
parameter estimation vector θ̂(t) and the ELS measurement vector φ(t) are
sized and their initial values chosen.
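In the appendix this set-up is performed directly in NCELSDEF.M; the key lines are:

% From NCELSDEF.M: orders, composite coefficients, and the true
% parameter vector of equation 3.10.
n = 3; m = 1; l = 3;
nsize = n + m + 1 + l;              % 8 parameters in all
A = [1 -0.6 -0.63 0.392];           % composite A coefficients
B = [1 0.8];                        % composite B coefficients
C = [1 -2.1 1.47 -0.343];           % composite C coefficients
thetao = [A(2:n+1) B C(2:l+1)]';    % true parameter vector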


θ̂(0)^T = [â1(0), â2(0), â3(0), b̂0(0), b̂1(0), ĉ1(0), ĉ2(0), ĉ3(0)]    (3.12)

where â1(0) = -1.0, â2(0) = 1.0, â3(0) = 2.0, b̂0(0) = 1.0, b̂1(0) = 2.0,
ĉ1(0) = -1.0, ĉ2(0) = 1.0, ĉ3(0) = 2.0

φ(t)^T = [-y(t), -y(t-1), -y(t-2),
          u(t+1), u(t),    (3.13)
          ε(t), ε(t-1), ε(t-2)]

Recall that the measurement vector used by the ELS algorithm contains the a
posteriori prediction error given in equation 2.27. This equation and the other
prediction equations are again listed here.
The a priori predictor:
ŷ(t+1) = θ̂(t)^T φ(t)    (3.14)

The a priori prediction error:
e(t+1) = y(t+1) - ŷ(t+1)    (3.15)

The a posteriori predictor:
ȳ(t+1) = θ̂(t+1)^T φ(t)    (3.16)

The a posteriori prediction error:
ε(t+1) = y(t+1) - ȳ(t+1)    (3.17)


The following set-up values were chosen from the initialization criteria
listed in equations 2.31 - 2.33:

weighting value λ0 = 0.99    (3.18)
initial setting λ(0) = 0.97    (3.19)
initial covariance ρ0 = 20    (3.20)

Recall that these settings will give less weighting to the error signals during
the starting phase of the algorithm. This should increase the rate of convergence.

3.3 MATLAB Simulation Results
The MATLAB script files are set up to run any desired number of
iterations of the adaptive identification loop. Also, any desired number of trials
with different random noise sequences generated for u(t) and w(t) may be chosen.
For the analysis here, 10 different trials were run with 1000 iterations each. The
detailed results of each trial will not be plotted. However, each squared-difference
function will be included to show the estimation and convergence of the ELS
algorithm for differing inputs, and selected trial results will be plotted. Also
included will be the final values of the estimates of the system parameters. All
initial values described in section 3.2 remained constant for each trial. The seed
values for the noise signals u(t) and w(t) were derived at run time as a function of
the clock, giving a random selection of input sequences.
The results of the first trial will be detailed below. Figure 3.1 shows that
the a priori prediction error e(t) asymptotically converges to white noise. A
closer look at the error is given in Figure 3.2.


Figure 3.1 A Priori Prediction Error

Figure 3.2 "Close-Up" A Priori Prediction Error


The fact that the algorithm is tracking the system is also shown in Figure
3.3, the plot of the a priori predictor ŷ(t) and the system output y(t).
Figure 3.3 Predicted System Output vs. Real System Output
In the following set of figures (3.4 - 3.9), the system parameter estimates
for trial #1 are plotted in both the full 1000-iteration view and the 60-iteration
"close-up" view. Recall that the real system parameter vector θo is listed in
equation 3.10 and is plotted as dotted lines. The estimated parameters are plotted
as continuous lines. After N = 1000 iterations they are:

â1(N) = -0.6197    b̂0(N) = 0.9878    ĉ1(N) = -1.7452
â2(N) = -0.6617    b̂1(N) = 0.7404    ĉ2(N) = 0.7188
â3(N) = 0.4356                        ĉ3(N) = 0.0504


Figure 3.4 Parameter Estimates for A(q^-1)

Figure 3.5 "Close-up" for A(q^-1)


Figure 3.6 Parameter Estimates for B(q^-1)

Figure 3.7 "Close-up" for B(q^-1)


Figure 3.8 Parameter Estimates for C(q^-1)

Figure 3.9 "Close-up" for C(q^-1)


These plots show the convergence of the estimation parameters of Â(q^-1)
and B̂(q^-1) to the actual system parameters in Ā(q^-1) and B̄(q^-1). The estimation
algorithm behaves in a stable manner after its initial learning phase. These
plots also show the typical response of the estimation parameters of Ĉ(q^-1).
These parameters do not converge to the actual system parameters of C̄(q^-1)
because of the uncertainty introduced by the measurement vector φ(t), which uses
the a posteriori prediction error as an estimate for the nonmeasurable input signal
w(t). They do, however, converge, and they converge to values that allow
proper estimation of the desired signal s(t). All the trials exhibited similar
convergence and stability in their responses. The following table lists the final
values of the estimation parameters for each of the 10 trials. The first row again
lists the real system vector θo, and subsequent rows contain the parameter
estimates for each trial.
        a1       a2       a3       b0       b1       c1       c2       c3
θo   -0.6000  -0.6300   0.3920   1.0000   0.8000  -2.1000   1.4700  -0.3430
T1   -0.6197  -0.6617   0.4356   0.9878   0.7404  -1.7452   0.7188   0.0504
T2   -0.6425  -0.5563   0.3555   1.0100   0.7376  -1.5383   0.9524  -0.2518
T3   -0.5573  -0.6583   0.3801   0.9761   0.8695  -1.3725   0.5541   0.0324
T4   -0.5710  -0.6800   0.4219   0.9776   0.8281  -1.7370   0.9226  -0.1799
T5   -0.6465  -0.5682   0.3743   0.9512   0.7959  -1.7220   0.7971   0.0037
T6   -0.6312  -0.6041   0.3901   1.0345   0.7018  -1.7171   0.9882  -0.2302
T7   -0.5796  -0.6486   0.3888   0.9727   0.8771  -1.5681   0.6673  -0.0786
T8   -0.5603  -0.6669   0.3923   0.9722   0.8591  -1.7062   0.9616  -0.2136
T9   -0.5539  -0.7497   0.4693   1.0289   0.7349  -1.4839   0.5575   0.2211
T10  -0.5807  -0.6540   0.3969   1.0419   0.7469  -1.6207   0.9755  -0.0784

Table 3.1 Estimation Parameters at 1000 Iterations


Part of the reason the parameter estimates do not converge exactly to the
real system values is the relative size of the input signal variances. Recall that these
were established in section 3.1 as 2 for u(t) and 1 for w(t). Smaller values
would decrease the uncertainty in the algorithm, allowing tighter convergence.
The other measure of performance of interest here is the rate of
convergence of the normalized squared-difference function from section 2.3. If the
algorithm is estimating the system parameters correctly, this function will tend
toward zero. Figure 3.10 is a composite plot of all 10 of the squared-difference
functions.
functions.
Figure 3.10 Composite of the Normalized Squared-Differences
Three of the ten dominate this plot; however, each trial shows the rapid
rate of convergence desired. The plots show the behavior of the algorithm in
estimating the input signal s(t). At the beginning of the estimation, the difference
grows to a peak while the algorithm is still "learning" the system parameters. Then
the function tends toward zero as the algorithm converges. Figure 3.11 presents
the average of these 10 trials.
Figure 3.11 Average Squared-Difference
The next few plots detail some unusual characteristics noticed in two of the
simulations.
The first set of plots was obtained from trial #4. Notice that the squared-
difference plot in Figure 3.12 is very jagged during approximately the first 40
adaptations. After this time frame, the curve continues to decrease toward zero.
This is understood by looking at the estimated system parameters in Figures 3.13
to 3.15. Each parameter appears to be either oscillating or overshooting the
desired value during this initial phase.


Figure 3.12 Trial #4 Squared-Difference
Figure 3.13 Trial #4 Estimates for A(q^-1)


Figure 3.14 Trial #4 Estimates for B(q^-1)
Figure 3.15 Trial #4 Estimates for C(q^-1)


Trial #6 provides the next characteristic of interest. Figure 3.16 shows a
jump in the decaying tendency of the squared-difference function between 200 and
250 iterations.
Figure 3.16 Trial #6 Squared-Difference
It can be seen in Figure 3.17 that the estimates for B(q^-1) appear to have
some oscillatory characteristics or uncertainty around the area of the jump in the
squared-difference function. This oscillation settled, and the parameters
converged.


Figure 3.17 Trial #6 Estimates for B(q^-1)


CHAPTER 4
CONCLUSIONS AND FUTURE WORK
In this thesis, the need for adaptive filtering techniques in various Signal
Processing applications of noise or interference cancellation was discussed. As
was shown, a wide variety of system models with different processing needs exists.
The model chosen for the specific development and application of an Extended
Least Squares adaptive algorithm was the ARMAX system model. The specific
system chosen also had no delay between input and output, which created the need
for further modifications to the ELS algorithm.
The development of the ELS algorithm followed conventional methods.
The system inputs provided the necessary frequency "richness" needed to produce
unbiased estimates of the system parameters. The simulations also showed this
result. The validity of using the a posteriori prediction error for a nonmeasurable
input proved to be correct as the simulations showed proper convergence and
stability. While the effects of the initial covariance matrix and the weighting factor
need to be understood better, the effects of different input signals and the
possibility of adding some delay into the input paths also need to be addressed.
Future considerations should include investigating two other algorithms:
the Output Error Method (OEM) and the Recursive Maximum Likelihood
(RML) method. The RML is a direct extension of the ELS algorithm which employs a
filtering technique with the parameter estimates of Ĉ(q^-1) to remove the positive
real condition discussed in section 2.2.


Appendix A
MATLAB Script Files for Computer Simulation


NCELSRUN.M

% NCELSRUN.M                                    Mark Anderson
%                                               7 November 1994
% System Identification
%   - using extended least squares prediction with variable
%     forgetting factor
%   - system definition and setup are in NCELSDEF.M
%   - Calls NCELSID.M  for the identification loop
%           SYPLOT.M   for system parameter plots
%           SIGLOOK.M  for output analysis
%           FPLOT.M    for run averaged analysis
%
% Global Variables:
%   Number of trials                       - trials
%   Number of iterations per trial         - N
%   Level of interactive effort required   - outlvl
%     (extra plots to the screen with        0 = none
%     pauses, and 'keyboard' commands        1 = trial
%     to halt the run)                       2 = all
%   Normalized Squared Difference of the
%     signals s(t) and shat(t)             - f(t)
%                                          - fl(trial,t)
%   Average Difference over the trials     - fave(t)
%
%*************************************************************
% Select initialization parameters:
%*************************************************************
%
clear
roinit = input(' Enter initial covariance setting: ');
%
%*************************************************************
% Initialization : set time span, number of trials,
% and output level
%*************************************************************
%
N      = input(' Enter number of iterations per trial: N = ');
trials = input(' Enter number of trials: ');
outlvl = input(' Enter output level (0=none/1=trial/2=all): ');
%


%***************************************************************
% Enter system and initial estimations of parameters.
% (from NCELSDEF.M file)
% Note: t is a time variable (columns)
%       i is a pointer to the parameters (rows)
%       phi_t is a vector containing signal elements at a time t
%***************************************************************
%
ncelsdef
%
% Save system setup
%
save initsys
%
%*************************************************************
% Begin trial loop:
%*************************************************************
%
il = 1;
while il <= trials
%
% Generate inputs u & w -
% dependent on current time for seed values
%
noisegen
%
% Keyboard command is used to pause the run to save, load, or
% check variables.
%
disp(' ')
disp('*** At Beginning of a trial loop ***')
trial = il
if outlvl == 2
  disp(' ')
  disp('NOTE: Type BREAK to continue if in WINDOWS or ^Z in DOS')
  keyboard
end
%
%*************************************************************
% Begin identification loop (process is in NCELSID.M):
% Initial state (time = tstart = Max(n, m+1, l))
%*************************************************************
%
for t = tstart : N-1
  ncelsid
end
%
%*************************************************************
% End of identification loop
%
% Display final values for parameter estimation vector thetahat
% and system vector thetao
%
disp(' ')
disp('*** At End of a trial loop - trial parameters ***')
disp(' ')
thetahat(:,N)
thetao
%
% Save parameters of interest
%
save ncrun s shat thetahat thetao N phi phio eo n m l roinit seed
%
% Call for system plots:
if outlvl == 2
  disp(' ')
  disp('*** Starting System Plotting ***')
  disp('    hit SPACE BAR to continue ')
  syplot
  disp(' ')
end
%
% Run analysis on s(t) and shat(t)
%
siglook
%
% Save f(t) from siglook in the trial array fl, along with trial
% variables, iteration count, and plotting level flag
% Clear memory and reload only these variables
%
fl(il,:) = f;


save trialrun fl il trials N outlvl
clear
load trialrun
%
if outlvl == 1
  disp(' ')
  disp('*** Rename ncrun.mat file if desired ***')
  disp(' ')
  disp('NOTE: Type BREAK to continue if in WINDOWS or ^Z in DOS')
  keyboard
end
%
% Increment trial number
%
il = il + 1;
end
%*************************************************************
% End of trial loops
%
% Run averaging routine on normalized differences f(t)
%
disp(' ')
disp('*** All trials are Finished ***')
disp(' ')
% flplot
fplot
disp(' ')
disp(' ')
%
% END OF NCELSRUN.M


NCELSDEF.M
% System Definition for the Noise Cancellation   Mark Anderson
% Problem using Extended Least Squares           26 October 1994
%
% A*y(t) = B*u(t) + C*w(t)       : A,B,C are composites of
%                                : Atrue, Btrue, Ctrue
%
% Where Dtrue*s(t) = Ctrue*w(t)  : s(t) is the signal
%
% System output            - y(t)
% System input             - u(t)
% Disturbance              - w(t)    : Unmeasurable Noise
% Degree of A              - n       : A is monic
% Degree of B              - m
% Degree of C              - l       : C is monic
% Degree of Ctrue          - nC      : Ctrue is monic
% Degree of Dtrue          - nD      : Dtrue is monic
% Size of system           - nsize   : n + (m+1) + l
% Start time               - tstart  : Max(n, m+1, l)
% System parameters        - thetao  : A, B, and C
% System signal vector     - phio(t) : Real system inputs;
%                            includes unmeasurable w(t) for True Output
% Initial estimates        - ahat(i), bhat(i), chat(i)
% Estimation parameters    - thetahat(t) : [ahat(i) bhat(i) chat(i)]'
% Predicted output         - yhat(t) : a'priori predictor
% Predicted output         - yp(t)   : a'posteriori predictor
% Prediction error         - eo(t)   : a'priori error
% Prediction error         - ep(t)   : a'posteriori error
% Measurement vector       - phi(t)  : [-y u ep]'
% Covariance Matrix        - F(t)    : nsize X nsize
% Initial weighting value  - lambda(1)
% Updating weighting value - lambdainit
%
% Note for MATLAB: initial time = 1 (not 0)
% Note: t is a time variable (columns)
%       i is a pointer to the parameters (rows)
%       phi_t is a vector containing signal elements at a time t
%
tstart = 3;
n = 3;
m = 1;
l = 3;
nsize = n + m + 1 + l;
%
A = [1 -0.6 -0.63 0.392];
B = [1 0.8];
C = [1 -2.1 1.47 -0.343];
thetao = [A(2:n+1) B C(2:l+1)]'
%
nC = 1;
Ctrue = [1 -0.7];
nD = 1;
Dtrue = [1 0.8];
%
% Build parameter vector thetahat(t) and measurement vector phi(t)
% Note: set the columns in thetahat and phi as the time base.
%
% ahat1 = -1; ahat2 = 1; ahat3 = 2; bhat1 = 1; bhat2 = 2;
% chat1 = -1; chat2 = 1; chat3 = 2;
% keep estimates equal for t = 1 to tstart.
%
thetahat = [-1 1 2 1 2 -1 1 2; -1 1 2 1 2 -1 1 2; -1 1 2 1 2 -1 1 2]'
%
% a'posteriori prediction error ep(t) is stored in phi
% best estimate for initial ep(t) = 0 because mean = 0
% Initialize variables
%
for i = 1:tstart
  s(i)    = 0;
  shat(i) = 0;
  u(i)    = 0;
  y(i)    = 0;
  yhat(i) = 0;
  ep(i)   = 0;
  w(i)    = 0;
end
%
phi = [-y(1) 0 0 u(1) 0 ep(1) 0 0; ...
       -y(2) -y(1) 0 u(2) u(1) ep(2) ep(1) 0; ...
       -y(3) -y(2) -y(1) u(3) u(2) ep(3) ep(2) ep(1)]';
%
% Fill True system signal vector for generating "measured" y(t)
% Note: phio(t) contains unmeasurable noise input w(t); this value
% is only used to generate y(t)
phio = [-y(1) 0 0 u(1) 0 w(1) 0 0; ...
        -y(2) -y(1) 0 u(2) u(1) w(2) w(1) 0; ...
        -y(3) -y(2) -y(1) u(3) u(2) w(3) w(2) w(1)]';
%
% Fill a'priori error eo(t)
%
for i = 1:tstart
  eo(i) = y(i) - yhat(i);
end
%
% Variable forgetting factor
%   lambda(t) = lambdainit*lambda(t-1) + 1 - lambdainit
% Note: set lambdainit = 1 for constant forgetting factor,
% which will remain at the lambda(1) value.
lambdainit = 0.99;
lambda(1) = 0.97;
%
% For Least Squares set-up, set covariance matrix
for i = 1:nsize
  oldF(i,i) = roinit;
end
%
if tstart >= 2
  for t = 2:tstart,   % See WLSTSQR.M
    phi_t = phi(:,t-1);
    lambda(t) = lambdainit*lambda(t-1) + 1 - lambdainit;
    tmp1 = oldF*phi_t*phi_t'*oldF;
    tmp2 = lambda(t) + phi_t'*oldF*phi_t;
    F = (oldF - tmp1/tmp2)/lambda(t);
    oldF = F;
  end
end
%
% END OF SYSTEM DEFINITION


NOISEGEN.M
% NOISEGEN.M                                    Mark Anderson
%                                               7 November 1994
%
% Input signal : u(t) White Noise with mean = 0 and variance = 2.0
% Disturbance  : w(t) White Noise with mean = 0 and variance = 1.0
% NOTE: u and w are uncorrelated; thus, each will be generated
% separately before the processing loop.
% Also, values for time = 1 to tstart must be set up prior to
% calling this routine, and the seed values are generated
% by using the 'clock' function to return the current seconds.
% This is not used for the first trial.
%
% Set variance
%
wsigma2 = sqrt(2.0);
usigma2 = sqrt(4.0);
%
% use older MATLAB generator 'rand('normal')'
% seeds are determined by the trial number il
% rand('normal');
% rand('seed',il*121);
% u(tstart+1:N) = rand(1,N-tstart)*usigma2;
% rand('seed',il*747);
% w(tstart+1:N) = rand(1,N-tstart)*wsigma2;
%
% or use MATLAB 4.0 generator 'randn()'
if il == 1
  seed(1) = 121;
  randn('seed',seed(1));
else
  temp = clock;
  seed(1) = temp(6);
  randn('seed',seed(1));
end
u(tstart+1:N) = randn(1,N-tstart)*usigma2;
if il == 1
  seed(2) = 747;
  randn('seed',seed(2));
else
  temp = clock;
  seed(2) = 2*temp(6);
  randn('seed',seed(2));
end
w(tstart+1:N) = randn(1,N-tstart)*wsigma2;
%
seed
%
% END OF NOISEGEN.M


NCELSID.M
% NCELSID.M                                     Mark Anderson
%                                               26 October 1994
%
% System Identification Loop called from NCELSRUN.M
%   - using extended least squares prediction with variable
%     forgetting factor
%   - Calls WLSTSQR.M
%
% Initial state (time = tstart = Max(n, m+1, l))
% *** Store u(t+1) in measurement vector phi(t)
%     and in true measurement vector phio(t)
%
phi(n+1,t) = u(t+1);
phio(n+1,t) = u(t+1);
%
% *** Estimate w(t) by calculating the a'posteriori error ep(t)
%     and store in measurement vector phi(t)
%     NOTE: a'posteriori prediction yp(t) uses current estimation
%     parameters and the previous signal vector
% *** Signal vector, phi(t), now contains signals up to time = t
%
% *** Calculate signals s(t) and shat(t)
%
if t > tstart
  s(t) = 0;
  for i = 1 : nC+1
    s(t) = s(t) + Ctrue(i)*w(t+1-i);
  end
  for i = 2 : nD+1
    s(t) = s(t) - Dtrue(i)*s(t+1-i);
  end
  %
  yp(t) = thetahat(:,t)'*phi(:,t-1);
  ep(t) = y(t) - yp(t);
  phi(n+m+2,t) = ep(t);
  %
  % shat uses the A and C coefficients of thetahat
  %
  shat(t) = ep(t);
  offset = nsize - l;
  for i = 1 : l
    shat(t) = shat(t) + thetahat(i+offset,t)*ep(t-i);
  end
  for i = 1 : n
    shat(t) = shat(t) - thetahat(i,t)*shat(t-i);
  end
end
%
% *** Measure True System Output (sensor) y(t+1)
%     (includes unmeasurable disturbance w(t+1) and true
%     system signal vector phio(t) with w(t))
%
y(t+1) = w(t+1) + thetao'*phio(:,t);
%
% *** Calculate a'priori prediction output yhat(t+1) using current
%     estimation parameters and current signal vector.
%
yhat(t+1) = thetahat(:,t)'*phi(:,t);
%
% *** Calculate Prediction Error (a'priori) eo(t+1) for use in
%     extended least squares prediction algorithm
%
eo(t+1) = y(t+1) - yhat(t+1);
%
% *** Estimate next parameters using weighted least squares estimator
%
wlstsqr
%
% *** Update measurement vector phi for time = t+1 (shift elements
%     and save locations for u(t+2) and ep(t+1))
%     NOTE: also update true system measurement vector using w(t+1)
for i = 1 : n
  phi(i,t+1) = -y(t+1-i+1);
  phio(i,t+1) = -y(t+1-i+1);
end
for i = 2 : m+1
  phi(n+i,t+1) = u(t+1-i+2);
  phio(n+i,t+1) = u(t+1-i+2);
end
for i = 2 : l
  phi(n+m+1+i,t+1) = ep(t+1-i+1);
  phio(n+m+1+i,t+1) = w(t+1-i+1);
end
phio(n+m+2,t+1) = w(t+1);
%
% END OF NCELSID.M


WLSTSQR.M
% WLSTSQR.M                                     Mark Anderson
%                                               9 May 1994
% Weighted least squares algorithm utilizing a
% variable forgetting factor
%
% lambdainit    = starting value for forgetting factor
% lambda(t)     = variable forgetting factor
% eo(t+1)       = a priori prediction error
% phi(t)        = measurement vector
% F(t+1)        = covariance matrix
% thetahat(t+1) = next prediction of system parameters
%
%************************************************************
%
% Extract phi vector at time = t:
%
phi_t = phi(:,t);
%
% Update forgetting factor:
%
if t == 1
  lambda(t) = lambda(t);
else
  lambda(t) = lambdainit*lambda(t-1) + 1 - lambdainit;
end
%
% Generate covariance matrix F(t+1) using F(t) & phi(t):
% (F(t) was saved in oldF)
%
tmp1 = oldF*phi_t*phi_t'*oldF;
tmp2 = lambda(t) + phi_t'*oldF*phi_t;
F = (oldF - tmp1/tmp2)/lambda(t);
%
% Generate next estimation parameters:
%
thetahat(:,t+1) = thetahat(:,t) + F*phi_t*eo(t+1);
%
% Save covariance matrix in "oldF" for next iteration:
%
oldF = F;
%
% END OF WLSTSQR.M


SYPLOT.M
% SYPLOT.M                                      Mark Anderson
%                                               24 October 1994
%
% Plotting routine for A'priori Prediction Error {eo(k)},
% Predicted and Actual System outputs {yhat(k) & y(k)},
% and evolution of the parameter estimates {thetahat(:,k)},
% where k is the time vector 1 to N. Use j = 1 to 60 for
% 'close-up' plots.
%
clg
nsize = n + m + 1 + l;
j = 1:60;
k = 1:N;
%
% Setup system parameters:
sys = ones(nsize,N);
for i = 1:nsize,
  sys(i,:) = thetao(i)*sys(i,:);
end
%
% Plot prediction error in semilog on the x axis
semilogx(k,eo,'-b')
title('A priori Prediction Error'),pause
%
% Plot prediction error on normal x axis
plot(k,eo,'-b'),grid
title('A priori Prediction Error'),pause
%
% Note: u(t) was calculated only up to time = N-1
%
plot(k,phi(n+1,k),'-b'),grid
title('Input Signal u(t)'),pause
% plot(j,phi(n+1,j),'-b'),grid
% title('Input Signal u(t)'),pause
%
for i = 1:N,
  yhat(i) = -phi(1,i) - eo(i);   % y(i) - eo(i)
end
plot(k,yhat,'g',k,phi(1,k)*(-1),'r'),grid
title('Predicted Output ...green   System Output ...red'),pause
% plot(j,yhat(j),'g',j,phi(1,j)*(-1),'r'),grid
% title('Predicted Output ...green   System Output ...red'),pause
%
plot(k,thetahat(:,k),k,sys(:,k))
title('Estimation Parameters'),pause
% plot(j,thetahat(:,j),j,sys(:,j))
% title('Estimation Parameters'),pause
%
% Plot each set of parameters:
% A
plot(k,thetahat(1:n,k),k,sys(1:n,k))
title('Estimation Parameters from A'),pause
% plot(j,thetahat(1:n,j),j,sys(1:n,j))
% title('Estimation Parameters from A'),pause
% B
plot(k,thetahat(n+1:n+m+1,k),k,sys(n+1:n+m+1,k))
title('Estimation Parameters from B'),pause
% plot(j,thetahat(n+1:n+m+1,j),j,sys(n+1:n+m+1,j))
% title('Estimation Parameters from B'),pause
% C
plot(k,thetahat(n+m+2:nsize,k),k,sys(n+m+2:nsize,k))
title('Estimation Parameters from C'),pause
% plot(j,thetahat(n+m+2:nsize,j),j,sys(n+m+2:nsize,j))
% title('Estimation Parameters from C'),pause
% end
%
% End Plotting


SIGLOOK.M
% SIGLOOK.M                                     Mark Anderson
%                                               7 November 1994
% Analysis of shat(t) and s(t)
%
k = 1:N-1;
%
% Generate running average of the squared difference of the
% predicted minus the real signal
%
disp('*** Running Signal Analysis ***')
%
f(1) = (shat(1) - s(1))^2;
for t = 2:N-1
  f(t) = (t-1)*f(t-1)/t + ((shat(t) - s(t))^2)/t;
end
%
if outlvl >= 1
  disp(' ')
  disp('*** Start Plotting Signal Analysis ***')
  disp('    hit SPACE BAR to continue ')
  clg
  plot(k,f),grid,
  title('f(t) Normalized Squared Difference'),pause
end
%
% End signal analysis


FPLOT.M
% FPLOT.M                                       Mark Anderson
%                                               26 October 1994
%
% Analysis of the normalized squared difference (fl(il,t)) of
% shat(t) and s(t) from various runs
%   f(t) = (t-1)*f(t-1)/t + ((shat(t) - s(t))^2)/t;
%
clg
k = 1:N-1;
%
% Generate average of squared difference from runs
%
disp('*** Averaging Signal Statistics ***')
%
fave = zeros(1,N-1);
for t = 1:N-1,
  for il = 1:trials,
    fave(t) = fave(t) + fl(il,t);
  end
  fave(t) = fave(t) / trials;
end
%
disp(' ')
disp('*** Start Plotting Average Statistics ***')
disp('    hit SPACE BAR to continue ')
%
% Plot
plot(k,fave),grid,
title('fave(t) Average Normalized Squared Difference'),pause
%
% End Plotting


FLPLOT.M
% FLPLOT.M                                      Mark Anderson
%                                               26 October 1994
% Plotting routine for individual f plots
%
k = 1:N-1;
%
disp(' ')
disp('*** Start Plotting Signal Analysis ***')
disp('    hit SPACE BAR to continue ')
clg
plot(k,fl(:,k)),grid,
title('fl(t) Normalized Squared Difference'),pause
%
% Plot each trial's output separately
%
for i = 1:trials
  plot(k,fl(i,:)),grid,
  title(['fl(',num2str(i),',t) Normalized Squared Difference']),
  pause
end
%
% End plotting routine


References
1. B. Widrow and S.D. Stearns, Adaptive Signal Processing, Prentice-Hall,
Englewood Cliffs, New Jersey, 1985.
2. S. Haykin, Adaptive Filter Theory, Prentice-Hall, Englewood Cliffs, New
Jersey, 1986.
3. W. Adams and P. Moulder, "Anatomy of heart," in Encycl. Britannica, vol. 11,
pp. 219-229, 1971.
4. G. von Anrep and L. Arey, "Circulation of blood," in Encycl. Britannica, vol.
5, pp. 783-797, 1971.
5. R. R. Lower, R. C. Stofer, and N. E. Shumway, "Homovital transplantation of
the heart," J. Thoracic Cardiovasc. Surg., vol. 41, p. 196, 1961.
6. T. Buxton, I. Hsu, and R. Barter, "Fetal electrocardiography," JAMA, vol. 185,
pp. 441-444, Aug. 10, 1963.
7. J. Roche and E. Hon, "The fetal electrocardiogram," Am. J. Obstet. Gynecol.,
vol. 92, pp. 1149-1159, Aug. 15, 1965.
8. S. Yeh, L. Betyar, and E. Hon, "Computer diagnosis of fetal heart rate
patterns," Am. J. Obstet. Gynecol., vol. 114, pp. 890-897, Dec. 1, 1972.
9. E. Hon and S. Lee, "Noise reduction in fetal electrocardiography," Am. J.
Obstet. Gynecol., vol. 87, pp. 1087-1096, Dec. 15, 1963.
10. J. Van Bemmel, "Detection of weak fetal electrocardiograms by
autocorrelation and crosscorrelation of envelopes," IEEE Trans. Biomed.
Eng., vol. BME-15, pp. 17-23, Jan. 1968.
11. J. R. Cox, Jr., and L. N. Medgyesi-Mitschang, "An algorithmic approach to
signal estimation useful in fetal electrocardiography," IEEE Trans. Biomed.
Eng., vol. BME-16, pp. 215-219, July 1969.
12. M. Sondhi, "An adaptive echo canceller," Bell Syst. Tech. J., vol. 46, pp. 497-
511, Mar. 1967.
13. J. Rosenberger and E. Thomas, "Performance of an adaptive echo canceller
operating in a noisy, linear, time-invariant environment," Bell Syst. Tech. J.,
vol. 50, pp. 785-813, Mar. 1971.
14. I. D. Landau, System Identification and Control Design, Prentice-Hall,
Englewood Cliffs, New Jersey, 1979.
15. R. Isermann, K.-H. Lachmann, and D. Matko, Adaptive Control Systems,
Prentice-Hall, Englewood Cliffs, New Jersey, 1992.