## Citation

- Permanent Link:
- http://digital.auraria.edu/AA00002980/00001
## Material Information

- Title:
- Adaptive noise cancellation using an extended least squares algorithm
- Creator:
- Anderson, Mark
- Place of Publication:
- Denver, Colo.
- Publisher:
- University of Colorado Denver
- Publication Date:
- 1995
- Language:
- English
- Physical Description:
- vii, 64 leaves : illustrations ; 29 cm
## Thesis/Dissertation Information

- Degree:
- Master's (Master of Science)
- Degree Grantor:
- University of Colorado Denver
- Degree Divisions:
- Department of Electrical Engineering, CU Denver
- Degree Disciplines:
- Electrical Engineering
- Committee Chair:
- Radenkovic, Miloje S.
- Committee Members:
- Bialasiewicz, Jan T.
- Fardi, Hamid
## Subjects

- Subjects / Keywords:
- Adaptive filters (lcsh)
- Adaptive filters (fast)
- Genre:
- bibliography (marcgt)
- theses (marcgt)
- non-fiction (marcgt)
## Notes

- Bibliography:
- Includes bibliographical references (leaf 64).
- General Note:
- Submitted in partial fulfillment of the requirements for the degree, Master of Science, Electrical Engineering.
- General Note:
- Department of Electrical Engineering
- Statement of Responsibility:
- by Mark Anderson.
## Record Information

- Source Institution:
- University of Colorado Denver
- Holding Location:
- Auraria Library
- Rights Management:
- All applicable rights reserved by the source institution and holding location.
- Resource Identifier:
- 34415880 (OCLC)
- ocm34415880
- Classification:
- LD1190.E54 1995m .A63 (lcc)
## Auraria Membership

## Full Text

ADAPTIVE NOISE CANCELLATION USING AN EXTENDED LEAST SQUARES ALGORITHM

by Mark Anderson
B.S.E.E., Colorado State University, 1989

A thesis submitted to the Faculty of the Graduate School of the University of Colorado at Denver in partial fulfillment of the requirements for the degree of Master of Science, Electrical Engineering, 1995

This thesis for the Master of Science degree by Mark Anderson has been approved by Miloje S. Radenkovic and Hamid Fardi.

Anderson, Mark (M.S., Electrical Engineering)
Adaptive Noise Cancellation Using an Extended Least Squares Algorithm
Thesis directed by Assistant Professor Miloje S. Radenkovic

ABSTRACT

This thesis discusses the problem of noise cancellation in the area of signal processing. The focus is on the development and application of an Extended Least Squares (ELS) adaptive algorithm to a specific system model. Many approaches to the development of an adaptive filtering algorithm are based upon the Wiener filter, the Kalman filter, or the method of least squares, each using a tapped delay line, or transversal filter, as the basic structure. Adaptive filters rely on their self-adjusting property to deal with unknown system parameters and new, or time-varying, situations after having been trained on a finite number of training signals or patterns through some form of recursive operation. An Extended Least Squares algorithm is developed for a noise cancellation problem, and computer simulations show the actual performance of the algorithm in this application.

This abstract accurately represents the content of the candidate's thesis. I recommend its publication.

Signed: Miloje S. Radenkovic

CONTENTS

1. Introduction
   1.1 Adaptive Automation
   1.2 Filtering Background
   1.3 Adaptive Noise Cancellation Applications
       1.3.1 Electrocardiographic Applications
       1.3.2 Communications Applications
       1.3.3 Periodic and Broadband Applications
2. Noise Cancellation Problem
   2.1 ARMAX Model Definition
   2.2 Extended Least Squares Algorithm Development
   2.3 Algorithm Analysis
3. Noise Cancellation Simulations
   3.1 Problem Definition
   3.2 ELS Algorithm Set-up
   3.3 MATLAB Simulation Results
4. Conclusions and Future Work
Appendix A. MATLAB Script Files for Computer Simulation
References

FIGURES

1.1 Closed-Loop Adaptation
1.2 Adaptive Noise Canceler
1.3 Adaptive Noise Canceler with Uncorrelated Inputs
1.4 Cancelling 60-Hz Interference
1.5 Multichannel Adaptive Noise Canceler
1.6 Simplified Long Distance System
1.7 Long Distance System with Adaptive Echo Cancellation
1.8 Cancelling Periodic Interference
1.9 Self-Tuning Filter
1.10 Adaptive Line Enhancer
1.11 Adaptive Line Enhancer as Strong Signal Eliminator
2.1 System Definition with ARMAX Input
3.1 A Priori Prediction Error
3.2 "Close-Up" A Priori Prediction Error
3.3 Predicted System Output vs. Real System Output
3.4 Estimation Parameters for A(q^-1)
3.5 "Close-Up" for A(q^-1)
3.6 Estimation Parameters for B(q^-1)
3.7 "Close-Up" for B(q^-1)
3.8 Estimation Parameters for C(q^-1)
3.9 "Close-Up" for C(q^-1)
3.10 Composite of the Normalized Squared-Differences
3.11 Average Squared-Difference
3.12 Trial #4 Squared-Difference
3.13 Trial #4 Estimates for A(q^-1)
3.14 Trial #4 Estimates for B(q^-1)
3.15 Trial #4 Estimates for C(q^-1)
3.16 Trial #6 Squared-Difference
3.17 Trial #6 Estimates for B(q^-1)

ACKNOWLEDGMENTS

Special appreciation is extended to Dr. Miloje Radenkovic for his guidance and support during this project and other graduate work as well. Appreciation is also extended to TRW Systems Services Company Inc. for supporting my graduate work through their program for study under company auspices. I am also very grateful for the support and encouragement given by my wife and my parents throughout my graduate work.

CHAPTER 1
INTRODUCTION

1.1 Adaptive Automation

Interest in the area of adaptive automation continues to increase because of the significant improvement in a system's performance that can be brought about through contact with its environment. This improvement is achieved as the adaptive system adjusts its structure through some type of "training" process. Fields such as communications, radar, sonar, seismology, mechanical design, navigation systems, and biomedical electronics are finding several applications for adaptation.
Several adaptive schemes have been proposed and developed, and they can be broken down into open-loop and closed-loop adaptation. The closed-loop, or performance feedback, process monitors some measured system performance and optimizes that performance by adjusting itself to better deal with the variable or inaccurately known components of the physical system. The general scheme for closed-loop adaptation is shown in Figure 1.1 [1]. These closed-loop models process an input signal u and a defined "desired", or tracking, response signal d to find an error signal e. Typically, an adaptive algorithm is used to minimize some measure of this error and then adjust the structure of the adaptive system processor.

Figure 1.1 Closed-Loop Adaptation

1.2 Filtering Background

The general idea of enhancing the performance of a system while operating in a changing environment and tracking time variations in its input is the driving force behind the shift from a fixed filter design to an adaptive one. The fixed filtering methods, developed through the work of Wiener, Kalman, Bucy, and others, require a priori information about certain statistical parameters of the useful signal and any unwanted additive noise. In the noise cancellation problem, a filter needs to suppress the noise while passing through the useful signal. The Wiener filter minimizes the mean-square value of the error between the desired response and the actual filter output and is said to be the optimal filter in the mean-square sense. In many situations, prior knowledge of the input signal or noise may not be available, and/or they may vary with time. Thus the filter design must be able to change its structure in order to continue meeting the response criteria. Adaptive filters rely on a recursive algorithm which starts from a predetermined set of initial conditions, proceeds to train itself on the system inputs, and converges to the optimum Wiener filter in some statistical sense.
Many algorithms have been developed, giving various performance factors to consider. These include rate of convergence, robustness, and computational requirements, along with other factors. These measure how rapidly the algorithm moves towards the optimum solution in some mean-square sense, how well it can adjust to a changing environment, and how practical the implementation would be on a given operating platform [2].

1.3 Adaptive Noise Cancellation Applications

The late 1950's saw a number of independent researchers working on various adaptive algorithms, including the least-mean-square (LMS) algorithm, which was developed in 1959 by Widrow and Hoff for use in adaptive transversal filters for pattern recognition. Adaptive algorithms found interest in both signal processing and controls applications. In the signal processing arena, the cancellation of 60-Hz interference at the output of an electrocardiographic amplifier and recorder was the focus of B. Widrow and others at Stanford University in 1965. Other areas of electrocardiography, along with the elimination of periodic interferences and transmission line echoes, have also applied adaptive noise cancellation algorithms successfully [1].

The adaptive noise canceler operates on a primary sensor and a reference sensor as seen in Figure 1.2. The primary signal consists of input from the signal source s and some uncorrelated noise n0, while the reference signal is noise n1 which is correlated with the noise in the primary signal. The adaptive filter operates on the reference signal to produce an output y that is an estimate of the primary noise. This estimate is subtracted from the primary input, leaving the signal alone as the output of the canceler. The adaptive canceler tends to minimize the mean-square value, or total power, of the system's output s_hat by readjusting the algorithm's structure. This produces the best estimate of the desired signal in the minimum-mean-square sense.
Figure 1.2 Adaptive Noise Canceler

The following argument shows that little or no prior knowledge of s, n0, or n1, or of their statistical or deterministic interrelationships, is required. Assume that s, n0, n1, and y are statistically stationary and have zero means, that s is uncorrelated with n0 and n1, and that n0 and n1 are correlated with each other. The output is

s_hat = s + n0 - y   (1.1)

Squaring both sides and taking expectations, noting that s is uncorrelated with n0 and with y:

E[s_hat^2] = E[s^2] + E[(n0 - y)^2] + 2E[s(n0 - y)]   (1.2)

E[s_hat^2] = E[s^2] + E[(n0 - y)^2]   (1.3)

Minimizing the output power will not affect the signal power; hence, the output noise power E[(n0 - y)^2] is also minimized:

E_min[s_hat^2] = E[s^2] + E_min[(n0 - y)^2]   (1.4)

This results in the filter output y being a best least-squares estimate of the primary noise n0. E[(s_hat - s)^2] is also minimized since, from (1.1),

s_hat - s = n0 - y   (1.5)

This shows that minimizing the output power causes the output s_hat to be a best least-squares estimate of the signal s. Since the signal power in the output remains constant, minimizing the total output power maximizes the output signal-to-noise ratio. The smallest possible output power is E_min[s_hat^2] = E[s^2]. This gives E[(n0 - y)^2] = 0, so that y = n0 and s_hat = s, and the output signal is free of noise. If, however, n0 and n1 are uncorrelated, the filter will "turn itself off" and will not increase the output noise. In this case the filter output y will be uncorrelated with the primary input, and the output power becomes

E[s_hat^2] = E[(s + n0)^2] + 2E[-y(s + n0)] + E[y^2]   (1.6)

E[s_hat^2] = E[(s + n0)^2] + E[y^2]   (1.7)

Minimizing the output power now requires E[y^2] to be minimized. This is achieved by making all the filter coefficients zero, or turning off the adaptation, so that the power contribution of the filter output becomes zero.
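The power-minimization argument above can be checked numerically. The thesis's simulations are in MATLAB; the following is a minimal Python sketch of an LMS-based canceler. The signal choices, filter length, and step size here are illustrative assumptions, not values from the thesis:

```python
import numpy as np

rng = np.random.default_rng(0)
N = 20000
s = np.sin(2 * np.pi * 0.01 * np.arange(N))   # signal source s (assumed sinusoid)
n1 = rng.standard_normal(N)                   # reference noise n1
n0 = np.convolve(n1, [0.8, 0.4])[:N]          # primary noise n0, correlated with n1
d = s + n0                                    # primary input s + n0

M, mu = 8, 0.005                              # filter length and step size (assumed)
w = np.zeros(M)
out = np.zeros(N)
for t in range(M - 1, N):
    x = n1[t - M + 1:t + 1][::-1]             # reference tap vector
    y = w @ x                                 # filter output: estimate of n0
    out[t] = d[t] - y                         # canceler output: estimate of s
    w += 2 * mu * out[t] * x                  # LMS weight update

early = np.mean(out[:500] ** 2)               # output power before convergence
late = np.mean(out[-5000:] ** 2)              # output power after convergence
```

Because s is uncorrelated with the reference, minimizing the output power drives y toward n0, and the output power settles near the signal power E[s^2] = 0.5, as equations (1.3)-(1.4) predict.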
These arguments can be extended to cases where the primary and reference inputs also contain additive random noises uncorrelated with each other and with the previously mentioned inputs [1]. A more detailed picture of the single-channel noise canceler is shown in Figure 1.3.

Figure 1.3 Adaptive Noise Canceler with Uncorrelated Inputs

Since a noncausal IIR filter cannot be physically realized in a real-time system, approximations are possible with finite-length FIR filters. The impulse responses of FIR filters will approach those of the IIR filters as the number of weights in the FIR filter is increased. However, this will increase the implementation cost of the FIR filter. Also, using a delay Δ in the primary input path will provide an acceptable delayed real-time response which approximates that of the noncausal filter. Through the feedback, the optimum adaptive filter impulse response will also develop this delay. The shape of the impulse response generally changes with different values of Δ. It has been shown that a value of Δ of approximately half the time delay of the adaptive filter will produce the least minimum output noise power. Without a proper delay, the impulse response of the filter will not be a delayed version of the optimum filter, and the noise canceler output will be a poor approximation of the signal [1].

1.3.1 Electrocardiographic Applications

1.3.1.1 60-Hz Cancellation

The problem of 60-Hz interference plagues many systems, including the recording of electrocardiograms (ECGs). The causes of this interference include magnetic induction, displacement currents in the leads or in the body of the patient, and equipment and set-up imperfections. Conventional methods for reducing this interference include the use of twisted-pair leads and properly grounding all affected equipment and personnel. Using adaptive noise cancelling can further reduce the interference.
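A canceler for a single sinusoid needs only two adaptive weights, one on the reference and one on a 90° shifted copy, a structure described in detail below. A minimal Python sketch (the sample rate, hum gain and phase, and step size are hypothetical values chosen for illustration):

```python
import numpy as np

fs, f0 = 1000.0, 60.0                      # assumed sample rate and mains frequency
t = np.arange(6000) / fs
ecg = 0.5 * np.sin(2 * np.pi * 1.2 * t)    # stand-in for the ECG waveform
hum = np.cos(2 * np.pi * f0 * t + 0.7)     # pickup with unknown gain and phase
d = ecg + hum                              # primary input from the preamplifier

x1 = np.cos(2 * np.pi * f0 * t)            # attenuated outlet reference
x2 = np.sin(2 * np.pi * f0 * t)            # same reference shifted 90 degrees

w1 = w2 = 0.0
mu = 0.01
out = np.zeros_like(d)
for k in range(t.size):
    y = w1 * x1[k] + w2 * x2[k]            # weighted sum matches hum gain and phase
    e = d[k] - y                           # canceler output: cleaned ECG
    w1 += 2 * mu * e * x1[k]               # LMS update of the two weights
    w2 += 2 * mu * e * x2[k]
    out[k] = e
```

Two weights suffice because any amplitude and phase at 60 Hz can be written as w1·cos + w2·sin; in this sketch the weights converge toward cos(0.7) and -sin(0.7), removing the hum while leaving the low-frequency waveform essentially untouched.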
The cancellation system, shown in Figure 1.4, takes the output of the ECG preamplifier as the primary input, and the 60-Hz reference input is taken from a properly attenuated wall outlet [1]. The reference input is divided into dual reference inputs, with one undergoing a 90° phase shift. The adaptive filter adjusts these dual reference signals with two variable weights. The resulting summed reference input can then be changed in magnitude and phase to achieve cancellation. Since two "degrees of freedom" are required to identify and cancel a single sinusoidal signal, two weights are chosen for the filter to cancel the 60-Hz signal.

Figure 1.4 Cancelling 60-Hz Interference

1.3.1.2 Donor Heart Cancellation

In cardiac transplant research, it is the patient's new heart that interferes with the study of the old heart rate, which is still controlled by the central nervous system. The central nervous system controls the frequency of the heartbeat by controlling the rate of electrical depolarization in a group of specialized muscle cells known as the sinoatrial (SA) node in the patient's atrium. Normally, this initiates an electrical impulse transmitted by conduction through the atrial heart muscle to another group of muscle cells known as the atrioventricular (AV) node. The AV node is also capable of independent, asynchronous operation. This node then triggers the electrical depolarization of the ventricles in the heart [3,4].

Norman Shumway of the Stanford University Medical Center developed a transplant technique involving the suturing of the "new" or donor heart atrium to a portion of the atrium of the patient's "old" heart [5]. The SA node in the atrium of the old heart is electrically isolated from the new heart atrium, which contains both the SA and AV nodes. The nervous system continues to control the electrical output of the old SA node, but that output varies significantly in strength from beat to beat.
The new SA node generates a spontaneous pulse that causes the new heart to beat at a separate self-pacing rate. In order to study these signals properly, adaptive noise cancellation provides an answer. The reference input to the canceler (see Figure 1.2) can be obtained from a pair of ordinary chest leads and will mainly contain the new heartbeat. The primary input is received by the use of a catheter threaded through the brachial vein in the left arm and the vena cava to a position in the atrium of the old heart. This primary input will contain the beat signals of both hearts. The new heartbeat is cancelled, allowing more detailed study of the old heart patterns.

1.3.1.3 Maternal ECG Cancellation

The use of adaptive noise cancelling in fetal electrocardiography can provide a way to clearly observe a fetal ECG. The details of the actual waveform of the electrical output from the fetus are better monitored during labor and delivery with the use of abdominal electrocardiograms [6-8]. While muscle activity and fetal motion form some of the interference in an abdominal electrocardiogram, the mother's heartbeat is the greatest interferer, as it is 2 to 10 times stronger in amplitude than the fetal heartbeat [9-11]. For this application, the adaptive canceler is provided with four reference inputs which are taken from chest leads to record the mother's heartbeat. Multiple reference inputs are used to make the interference filtering task easier. A single abdominal lead provides the primary input consisting of the combined heartbeats. This canceler is shown in Figure 1.5. The output of the canceler provides a cleaner fetal signal, allowing for better analysis of the fetal ECG.
Figure 1.5 Multichannel Adaptive Noise Canceler

1.3.2 Communications Applications

1.3.2.1 Noise Cancellation in Speech Signals

A common example of the need to suppress interference in speech signals is the situation a pilot faces in the cockpit of an aircraft where high engine noise is present. Because the frequency and intensity of the noise components within the speech bandwidth vary due to the changing environment of the cockpit, conventional filtering would be impractical. This example falls directly into the basic noise cancellation problem shown in Figure 1.2. The primary input consists of the pilot's voice and the strong periodic components and rich harmonics present in the ambient noise. A second microphone is placed at a location in the cockpit which will provide a reference signal of the ambient noise that is free of the pilot's voice. The output of the noise canceler will significantly reduce the interference, producing a relatively undistorted signal of the voice transmission.

A typical experiment could consist of an adaptive filter using a Least Mean Squares (LMS) algorithm with 16 weights and a triangular wave for the interference, which contains many harmonics that vary in amplitude and phase from point to point in the testing room. The time delay caused by the different transmission paths from the interference source to the two sensors can be ignored because of the periodic nature of the triangular wave. This noise canceler can reduce the output noise power by 20 to 25 dB while introducing no noticeable distortion in the speech signal. The adaptive filter is able to converge after about 5000 adaptations, or 1 second of real time, and is easily adjusted to changes in the interference signal resulting from moving the positions of the microphones or changing the interference frequency from 100 to 2000 Hz [1].
1.3.2.2 Echo Cancellation

Long distance telephone connections involve the use of a hybrid device which converts the two-wire loop circuit of the customer to the four-wire circuit of the long distance trunk lines. The trunk lines of the hybrid consist of two lines for the incoming signal and two lines for the outgoing signal. A simplified version of this long distance system is shown in Figure 1.6. Incoming signals are passed through an ideal hybrid to the local two-wire customer loop with 3 dB of attenuation, with no energy passing to the outgoing trunk lines. This hybrid will also pass the signal from the local loop to the outgoing lines with 3 dB of attenuation without reflecting anything back to the local loop. In practice, however, perfect separation of the incoming and outgoing signals is not obtained due to variations in instrument characteristics, in loop length, and in impedance matching. This results in a "leakage" signal, attenuated by about 15 dB relative to the incoming signal, being directed along the outgoing trunk lines [1].

Figure 1.6 Simplified Long Distance System

Echo suppressors developed at the Bell Telephone Laboratories have been successful when used in continental calls where the round trip delay is 100 ms or less. These suppressors detect when an incoming signal is present on the four-wire lines of the hybrid device and cut a relay in the outgoing path. The path is reestablished when another signal is detected coming from the microphone on the two-wire port. With the relay open during any transmission, echoes may be passed when both callers talk at the same time [12,13]. In satellite communications, the round trip delay generally exceeds 500 ms, and the relay switching causes choppiness and missing pieces of words. The development of adaptive echo cancelers resulted from these switching problems.
Figure 1.7 shows a single end of a telephone system with an adaptive canceler where the incoming signal is applied to both the hybrid and the adaptive filter as the reference input. The outgoing signal from the hybrid is taken as the primary input to the canceler and has a leakage component that is correlated with the incoming signal. The filter output is the estimate of this leakage component and is subtracted from the outgoing signal to minimize the "echo" in the transmitted signal in some least-squares sense. This estimate of the echo component is also used to update the filter weights. Practical considerations place limits on the sampling time and on quantization in the adaptive filter realization. An example of an adaptive echo canceler for speech could use an 8-kHz sampling rate and 128 weights [1].

Figure 1.7 Long Distance System with Adaptive Echo Cancellation

1.3.3 Periodic and Broadband Applications

1.3.3.1 Periodic Interference Cancellation

In several possible applications for interference cancellation, it appears that adaptive noise cancelling cannot be applied, because the interference is periodic and no external reference input free of the desired broadband signal is available. Some examples include the playback of speech or music in the presence of tape hum or turntable rumble, or sensing seismic signals in the presence of vehicle engine or powerline noise. In this situation, the composite signal is split, with one path forming the primary input and the other path being delayed to form the reference input. The length of the delay is chosen to cause the broadband signal components in the reference path to become decorrelated from those in the primary path. This delay parameter is often called the decorrelation parameter. The periodic components in each path will remain correlated due to their periodic nature.
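The split-and-delay arrangement can be illustrated with a rough Python simulation; the period, delay, filter length, and step size below are assumed values, not the thesis's set-up:

```python
import numpy as np

rng = np.random.default_rng(1)
N = 30000
broadband = rng.standard_normal(N)                  # white: decorrelates after one lag
periodic = np.sin(2 * np.pi * np.arange(N) / 25.0)  # hum with a period of 25 samples
x = broadband + periodic                            # composite signal, then split

D = 10           # decorrelation delay: longer than the broadband correlation length
M, mu = 50, 0.001
w = np.zeros(M)
y = np.zeros(N)  # adaptive filter output: prediction of the periodic part
e = np.zeros(N)  # canceler output: broadband part
for t in range(D + M, N):
    ref = x[t - D - M + 1:t - D + 1][::-1]          # delayed reference taps
    y[t] = w @ ref
    e[t] = x[t] - y[t]
    w += 2 * mu * e[t] * ref                        # LMS update
```

After convergence the error output e recovers the broadband component (the canceler of Figure 1.8), while the filter output y tracks the periodic component (the self-tuning configuration of Figure 1.9); only the delayed reference makes this separation possible.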
The adaptive filter will produce a prediction of the correlated component, which leaves the unpredictable component, the broadband signal, at the output of the system. As discussed in section 1.3, if a delay which is less than the total delay of the adaptive filter is inserted in the primary input path, the filter will converge to that primary input and cancel both components.

Figure 1.8 Cancelling Periodic Interference

1.3.3.2 Adaptive Self-Tuning Filter

Given circumstances similar to those of the previous section, except that the periodic component is the signal of interest, the same configuration with delay may be used. It is still the periodic component which the adaptive filter is predicting. Since the desire is to recover the periodic component and not the broadband component, the system output is not taken from the error signal of the adaptive canceler. As shown in Figure 1.9, the output comes from the adaptive filter itself. Thus, the adaptive filter is being used as a self-tuning filter. Further experiments have shown this self-tuning configuration to be able to predict the presence of multiple sinusoidal signals combined with the broadband component.

Figure 1.9 Self-Tuning Filter

1.3.3.3 Adaptive Line Enhancer

A specific use of the self-tuning adaptive filter is to detect extremely low-level sine waves embedded in broadband noise. This system enhances the presence of sine waves and can compete with the fast Fourier transform algorithm as a sensitive detector. Figure 1.10 shows the enhancer with a secondary output taken from the adaptive weight values of the filter, which are the discrete Fourier transform of the filter's impulse response [1].
Figure 1.10 Adaptive Line Enhancer

Experiments using an adaptive line enhancer and a discrete Fourier transform (DFT) power spectral density measurement were conducted with similar set-ups to ensure a fair comparison [1]. Both methods easily detected the sinusoidal signal when the broadband interference was white noise. However, in cases where the noise has some spectral coloration, the baseline of the DFT power spectral density measurement can hide the signal peak if the noise bandwidth contains the peak frequency. On the other hand, only the narrowband sinusoidal components are contained in the transfer function magnitude plots of the adaptive filter, making signal detection easier and more certain with a baseline close to zero.

The adaptive line enhancer can also be useful in detecting and estimating weak signals in noise accompanied by strong signals. With the adaptive process minimizing the mean-square error, both the noise and the strong signals are cancelled. The ability to detect a weak signal in the presence of a strong interference could be used in a radio receiver application [1]. This application of an adaptive enhancer being used as a strong signal eliminator is shown in Figure 1.11.

Figure 1.11 Adaptive Line Enhancer as Strong Signal Eliminator

CHAPTER 2
NOISE CANCELLATION PROBLEM

2.1 ARMAX Model Definition

The focus of this thesis is to develop and apply an Extended Least Squares (ELS) adaptive algorithm to a noise cancellation problem. The problem considered here follows the basic system as defined in section 1.1, except for the differences described below and shown in Figure 2.1. The reference input u(t) is an interference signal. The primary input y(t) consists of the sum of an interference v(t) and a signal s(t). The interference v(t) is a filtered version of the reference noise u(t), and the input signal s(t) is a filtered version of the original signal w(t). The signals u(t) and w(t) are uncorrelated.
Figure 2.1 System Definition with ARMAX Input

v(t) = [B(q^-1)/A(q^-1)] u(t)   (2.1)

s(t) = [C(q^-1)/D(q^-1)] w(t)   (2.2)

where q^-1 is the unit delay operator defined as q^-1 y(t) = y(t-1), and

A(q^-1) = 1 + a_1 q^-1 + a_2 q^-2 + ... + a_nA q^-nA   (2.3)

B(q^-1) = 1 + b_1 q^-1 + b_2 q^-2 + ... + b_nB q^-nB   (2.4)

C(q^-1) = 1 + c_1 q^-1 + c_2 q^-2 + ... + c_nC q^-nC   (2.5)

D(q^-1) = 1 + d_1 q^-1 + d_2 q^-2 + ... + d_nD q^-nD   (2.6)

y(t) = v(t) + s(t)   (2.7)

y(t) = [B(q^-1)/A(q^-1)] u(t) + [C(q^-1)/D(q^-1)] w(t)   (2.8)

This input follows the form of an Auto-Regressive Moving Average with "exogenous", or external, input (ARMAX) system model. To further simplify the notation, combine the denominators of the two input filters. Also note that the system delay is zero, as the output at time t depends on the input signals at time t as well.

A(q^-1) D(q^-1) y(t) = B(q^-1) D(q^-1) u(t) + A(q^-1) C(q^-1) w(t)   (2.9)

A(q^-1) y(t) = B(q^-1) u(t) + C(q^-1) w(t)   (2.10)

where the composite polynomials (written with the same letters) are

A(q^-1) = 1 + a_1 q^-1 + a_2 q^-2 + ... + a_n q^-n = A(q^-1) D(q^-1)   (2.11)

B(q^-1) = b_0 + b_1 q^-1 + b_2 q^-2 + ... + b_m q^-m = B(q^-1) D(q^-1)   (2.12)

C(q^-1) = 1 + c_1 q^-1 + c_2 q^-2 + ... + c_l q^-l = A(q^-1) C(q^-1)   (2.13)

This simplified equation combines the system polynomials into composite polynomials which will be used in the algorithm development. The primary signal may also be written as a linear difference equation:

y(t+1) = -a_1 y(t) - a_2 y(t-1) - ... - a_n y(t-n+1)
         + b_0 u(t+1) + b_1 u(t) + ... + b_m u(t-m+1)
         + c_1 w(t) + ... + c_l w(t-l+1)
         + w(t+1)   (2.14)

or in vector form

y(t+1) = theta_0' phi_0(t) + w(t+1)   (2.15)

where theta_0 is the true parameter vector and phi_0(t) is the true measurement/signal vector:

theta_0' = [a_1, a_2, ..., a_n, b_0, b_1, ..., b_m, c_1, ..., c_l]   (2.16)

phi_0(t)' = [-y(t), -y(t-1), ..., -y(t-n+1), u(t+1), u(t), ..., u(t-m+1), w(t), w(t-1), ..., w(t-l+1)]   (2.17)

It is assumed that w(t) is a white noise interference that is nonmeasurable and normally distributed with

mean value = E{w(t)} = mu = 0   (2.18)

variance = E{w^2(t)} = sigma^2   (2.19)

2.2 Extended Least Squares Algorithm Development

If it is assumed that the system parameters are known, equation 2.20 gives a predictor that produces a white prediction error. Notice that the predictor parameters are the exact system parameters.
y_hat(t+1) = -a_1 y(t) - a_2 y(t-1) - ... - a_n y(t-n+1)
             + b_0 u(t+1) + b_1 u(t) + ... + b_m u(t-m+1)
             + c_1 w(t) + ... + c_l w(t-l+1)   (2.20)

This predictor will minimize the variance of the prediction error, E{[y(t+1) - y_hat(t+1)]^2}. Using equation 2.14, the minimization criterion consists of three terms, as shown in equation 2.21. The third term is zero, as w(t+1) is independent of all the other signals in that term. The second term does not depend upon the choice of y_hat(t+1). The first term can only be positive or zero and determines the minimization criterion.

E{[y(t+1) - y_hat(t+1)]^2} = E{[-a_1 y(t) - ... + b_0 u(t+1) + ... + c_1 w(t) + ... - y_hat(t+1)]^2}
                             + E{[w(t+1)]^2}
                             + 2E{[-a_1 y(t) - ... + b_0 u(t+1) + ... + c_1 w(t) + ... - y_hat(t+1)] w(t+1)}   (2.21)

Thus, the minimum variance predictor given by equation 2.20 satisfies the given criterion. It follows that the prediction error also provides an estimate of the nonmeasurable w(t+1):

e(t+1) = y(t+1) - y_hat(t+1) = w(t+1)   (2.22)

In the case of unknown system parameters, the a priori adjustable predictor uses this error estimate in place of the nonmeasurable w(t) and becomes

y_hat(t+1) = -a_hat_1(t) y(t) - a_hat_2(t) y(t-1) - ... - a_hat_n(t) y(t-n+1)
             + b_hat_0(t) u(t+1) + b_hat_1(t) u(t) + ... + b_hat_m(t) u(t-m+1)
             + c_hat_1(t) e(t) + ... + c_hat_l(t) e(t-l+1)
           = theta_hat(t)' phi(t)

where theta_hat(t) is the system parameter estimation vector,

theta_hat(t)' = [a_hat_1(t), ..., a_hat_n(t), b_hat_0(t), b_hat_1(t), ..., b_hat_m(t), c_hat_1(t), ..., c_hat_l(t)]

and phi(t) is the measurement vector,

phi(t)' = [-y(t), -y(t-1), ..., -y(t-n+1), u(t+1), u(t), ..., u(t-m+1), e(t), e(t-1), ..., e(t-l+1)]
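A compact recursive implementation of this a priori predictor with a standard least-squares gain and covariance update can be sketched in Python. The model orders (n = 1, m = 0, l = 1), the true parameter values, and the initial covariance are assumptions chosen for illustration; the thesis's own set-up appears in Chapter 3:

```python
import numpy as np

# true composite parameters (assumed): y(t) = -a1 y(t-1) + b0 u(t) + w(t) + c1 w(t-1)
a1, b0, c1 = 0.5, 1.0, 0.3

rng = np.random.default_rng(3)
N = 4000
u = rng.standard_normal(N)          # measurable reference interference
w = 0.1 * rng.standard_normal(N)    # nonmeasurable zero-mean white noise
y = np.zeros(N)
for t in range(1, N):
    y[t] = -a1 * y[t - 1] + b0 * u[t] + w[t] + c1 * w[t - 1]

theta = np.zeros(3)                 # estimates of [a1, b0, c1]
P = 1000.0 * np.eye(3)              # inverse-correlation matrix, large initial value
e = np.zeros(N)                     # a priori prediction errors, standing in for w(t)
for t in range(1, N):
    phi = np.array([-y[t - 1], u[t], e[t - 1]])   # past error replaces w(t-1)
    e[t] = y[t] - theta @ phi                     # a priori prediction error
    K = P @ phi / (1.0 + phi @ P @ phi)           # gain vector
    theta += K * e[t]                             # parameter update
    P -= np.outer(K, phi) @ P                     # covariance update
```

Feeding past a priori errors into the regressor in place of the nonmeasurable noise is what distinguishes ELS from ordinary recursive least squares; in this sketch the estimates approach the true values (0.5, 1.0, 0.3).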
sized and initial values are chosen.
Recall that the measurement vector used by the ELS algorithm uses the a