Parallel model adaptive noise cancellation

Material Information

Parallel model adaptive noise cancellation
Creator:
Chen, Ai Qin
Publication Date:
1997
Physical Description:
vii, 134 leaves : illustrations ; 29 cm


Subjects / Keywords:
Electric noise ( lcsh )
Adaptive control systems ( lcsh )
Adaptive control systems ( fast )
Electric noise ( fast )
bibliography ( marcgt )
theses ( marcgt )
non-fiction ( marcgt )


Includes bibliographical references (leaf 134).
General Note:
Submitted in partial fulfillment of the requirements for the degree, Master of Science, Electrical Engineering.
Statement of Responsibility:
by Ai Qin Chen.

Record Information

Source Institution:
University of Colorado Denver
Holding Location:
Auraria Library
Rights Management:
All applicable rights reserved by the source institution and holding location.
Resource Identifier:
38324718 ( OCLC )
LD1190.E54 1997m .C44 ( lcc )




Parallel Model Adaptive Noise Cancellation

by Ai Qin Chen
B.S., Shanghai Jiao Tong University, 1982

A thesis submitted to the University of Colorado at Denver in partial fulfillment of the requirements for the degree of Master of Science, Electrical Engineering

1997


This thesis for the Master of Science degree by Ai Qin Chen has been approved

Miloje S. Radenkovic
Tamal Bose
Jan T. Bialasiewicz


Chen, Ai Qin (M.S., Electrical Engineering)
Parallel Model Adaptive Noise Cancellation
Thesis directed by Associate Professor Miloje S. Radenkovic

ABSTRACT

This thesis considers a general parallel structure consisting of several unknown linear time-invariant systems and a tunable or partially tunable system. The information-bearing signal is combined with the output of the main unknown system. If the source of the corruption, which serves as a common input for the parallel structure, is stochastically uncorrelated with the information-bearing signal, the tunable or partially tunable system produces a signal matching the behavior of the main unknown system; this signal is eventually used to extract the information-bearing signal from the corruption.

This abstract accurately represents the contents of the candidate's thesis. I recommend its publication.

Signed Miloje S. Radenkovic


I dedicate this thesis to my son Charles for his understanding and support.


I wish to express my gratitude to Dr. Miloje S. Radenkovic for his continuing support and direction during the course of the thesis work. Gratitude is also extended to Dr. Tamal Bose and Dr. Jan T. Bialasiewicz for serving on the graduate committee.


CONTENTS

Chapter
1. Introduction
1.1 Optimization problem
1.2 Concept of adaptive filter
1.3 Adaptive noise cancellation
1.4 Application of adaptive filters
2. Adaptive algorithm
2.1 Least Mean Square algorithm
2.2 Recursive Least Squares algorithm
3. Stability and convergence of LMS and RLS algorithms
3.1 Convergence of LMS algorithm
3.2 Convergence of RLS algorithm
3.3 Simulation on LMS and RLS algorithm comparison
4. Stochastic parallel model
4.1 Statement of problem A
4.2 Statement of problem B


4.3 Statement of problem C
5. Simulation
5.1 Simulation of problem A
5.2 Simulation of problem B
5.3 Simulation of problem C
Conclusion
Appendix
Reference


1. Introduction

The main topic of this thesis is parallel adaptive noise cancellation. The model can be outlined as follows: an unknown linear time-invariant system is connected in parallel with a wholly or partially tunable system. There are two stochastically uncorrelated signal inputs: one is considered a noise serving as a common input for the whole structure; the other is the desired signal, which combines with the output of the unknown system to form a "contaminated signal." The objective of the tunable or partially tunable system is to produce a signal matching the behavior of the unknown system, so that it can eventually be used to cancel the effect produced by the unknown system. Since the noise may arise from different sources, the system's physical structure varies from case to case, depending on the type of application and the physical conditions. For convenience of implementation, this thesis illustrates three choices of general structure models: the one-unknown-plant model, the three-unknown-plants model and the four-unknown-plants model. The following section focuses on the fundamental principle of the adaptive system.


1.1 Optimization problem

Let us begin with a simple example to explain how we usually solve an optimization problem. Consider a function f(x) that satisfies equation (1.1.1), that is

f(x) = a x^2 + b x + c   (1.1.1)

where a, b and c are non-zero constants, while x is a variable. To locate the minimum extreme point of f(x), we solve the equation

df(x)/dx = 0   (1.1.2)

which is

2 a x + b = 0   (1.1.3)

Since a is non-zero, we obtain


x_min = -b / (2a)   (1.1.4)

By substituting the variable x in equation (1.1.1) with equation (1.1.4), the minimum extreme point of f(x) can be obtained as follows

f(x_min) = c - b^2 / (4a)   (1.1.5)

In the above example the function is simple, the mathematical model is clear, and the function involved is differentiable everywhere. When such conditions cannot be met, a different approach is necessary. One method, known as approximation estimation, is found to be an effective tool. To explain how this method works, referring to the figure and equation (1.1.1), notice that the variable ε, which is the error, is related to f(x) and the minimum value in the following way


ε(x) = f(x) - f(x_min)   (1.1.6)

Next we assume an initial estimate x_0. Each new estimate is based on the previous estimate and a correction factor, which is of the following form:

x_{k+1} = x_k - μ dε/dx |_{x = x_k}   (1.1.7)

where k is the iteration number and μ is the update constant. To obtain better estimates we need to repeat the estimation procedure numerous times until reaching an acceptable point. In our example, after 5 iterations, x approaches 2.5 and the minimum value reaches very close to 0, as shown in Figure 1.2 and Figure 1.3. The simulation supports the previous analytic results in both equation (1.1.4) and equation (1.1.5). The non-zero constants used in the simulation are 2, 5 and 1/8. For Matlab code see the appendix.
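The iteration (1.1.7) is easy to try numerically. The thesis's Matlab code is in the appendix; what follows is a Python sketch under assumed constants — a quadratic f(x) = 2x^2 - 10x + 12.5 (minimum at x = 2.5 with f = 0) and an update constant of 1/8 — chosen only to reproduce the behavior described above, not recovered from the original listing.

```python
# Gradient descent on an assumed quadratic f(x) = a*x^2 + b*x + c.
def gradient_descent(a, b, c, x0=1.0, mu=1.0 / 8.0, iters=5):
    f = lambda x: a * x**2 + b * x + c
    grad = lambda x: 2 * a * x + b          # df/dx, cf. eq. (1.1.3)
    x = x0
    for _ in range(iters):
        x = x - mu * grad(x)                # update rule, cf. eq. (1.1.7)
    return x, f(x)

# After 5 iterations the estimate approaches 2.5 and f approaches 0.
x_min, f_min = gradient_descent(2.0, -10.0, 12.5)
print(round(x_min, 4), round(f_min, 4))
```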


Figure 1.2 Estimates versus iteration number


Figure 1.3 Minimum value versus iteration number


1.2 Concept of adaptive filter

An adaptive filter is a time-variant digital filter. The impulse response of the adaptive filter at each step of the iteration equals the previous filter impulse response plus a term proportional to the error. Unlike a fixed filter, an adaptive filter is capable of adjusting itself in a manner that allows it to produce the desired system output in the presence of any disturbance that is stochastically uncorrelated with the input.
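A single update step of this idea can be sketched as follows. This is an illustrative Python fragment, not code from the thesis; the step size, filter length and signal values are arbitrary assumptions.

```python
import numpy as np

# One iteration of a generic adaptive FIR filter: the new weight vector
# equals the old one plus a term proportional to the error.
def adapt_step(w, phi, d, mu=0.1):
    y = w @ phi                  # filter output from the tapped delay line
    e = d - y                    # error against the reference d
    w_new = w + mu * e * phi     # correction proportional to the error
    return w_new, e

w = np.zeros(3)                          # filter starts with zero weights
phi = np.array([1.0, 0.5, 0.25])         # current tap contents (assumed)
w, e = adapt_step(w, phi, d=1.0)
print(e)                                 # first error equals d, since y = 0
```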




Figure 1.5 shows the filter as a tapped delay line, where z^-1 is a delay operator. For a Finite Impulse Response (FIR) filter, at time t the instantaneous adaptive filter impulse response is

H_t(z^-1) = w_0(t) + w_1(t) z^-1 + ... + w_{L-1}(t) z^-(L-1)   (1.2.1)

The corresponding filter coefficient vector, or weight vector, can be written as

W(t) = [w_0(t), w_1(t), ..., w_{L-1}(t)]^T   (1.2.2)

For an Infinite Impulse Response (IIR) filter, referring to equation (1.2.3),

H_t(z^-1) = B_t(z^-1) / A_t(z^-1)   (1.2.3)

if the remainder resulting from the division of the polynomials in the numerator and denominator is other than zero, at time t the instantaneous filter impulse response contains infinitely many terms, that is


H_t(z^-1) = c_0(t) + c_1(t) z^-1 + c_2(t) z^-2 + ...   (1.2.4)

The corresponding filter coefficient vector, or weight vector, for equation (1.2.3) is

W(t) = [a_1(t), ..., a_n(t), b_0(t), ..., b_m(t)]^T   (1.2.5)

Notice that the coefficient vectors given in both equation (1.2.2) and equation (1.2.5) are updated at each iteration.

1.3 Adaptive noise cancellation

Noise cancellation is an important matter in signal processing. There are many different fixed digital filters performing extremely well in certain applications. However, in some circumstances we do not fully understand the properties of the system, due to uncertainty about the incoming signal, or we may find it difficult to apply a conventional digital filter. For example, in some cases the use of very precise sensors is required, while the sensors are connected to the monitors by long


cables. Cables are susceptible to 60 Hz power disturbance, and a fixed high-pass filter may or may not solve the problem, as the signal of interest could fall in the same frequency range and be cut off as well. An adaptive filter is capable of "learning" the system's characteristics and constantly self-adjusting its coefficients to adapt to any change occurring either in the signal of interest or in the environment noise. In other words, an adaptive filter is capable of extracting the signal from the unknown disturbance. The nice thing about the adaptive filter is its simplicity of implementation. The process is nothing more than computing the new parameters required, storing the parameters in memory and over-writing the previous parameter values. In Figure 1.5, we assume that we can derive a suitable reference signal, which is the signal of interest corrupted by a noise, together with a second noise signal, correlated with the first but collected from a different physical location, which is used as the input to the adaptive filter. The difference between the reference signal and the output of the filter is measured as the system output. It is not difficult to see that if the adaptive filter output matches the noise component of the reference, our cancelling objective will be achieved. Besides the noise cancellation application, adaptive filters can also be used as system identifiers, delay detectors, dynamic controllers, and so on. Though the overall principle is simple, the decision about choosing the proper


algorithm can sometimes be difficult. The most widely known adaptive algorithm is the LMS (Least Mean Square) algorithm. Once these algorithms become established, the corresponding computer architecture that implements them most efficiently is then developed.

Figure: adaptive filter block diagram (filter input, observation vectors, filter output, adaptation based on "learning")


1.4 Application of adaptive filters

Over the past few years there have been several successful implementations of digital adaptive noise cancellers that are able to cancel the noise within the cockpit of a car, helicopter or airplane. In the case of the car, the noise to be cancelled is originally caused by the engine and by the resonance set up in the body panels as a result of engine vibration. The noise cancellation system takes the engine speed as a reference and attempts to produce an "anti-noise" signal to cancel the cockpit noise. Microphones are placed in each headrest to determine the success of the attempt. Based on the changes detected by the microphones, the system changes the characteristics of the "anti-noise" signal until the best noise reduction is achieved. When the engine speed changes, the system adapts once again to the new engine speed. In long distance telephone communication, there has been a problem with echo in phone conversations, especially in the case of overseas calls. The echo appears so distinct that whenever the sender talks, the voice is not only sent to the receiver; it also echoes back to the sender. The primary cause of echo is the


impedance mismatch at the joint points where a two-wire local subscriber loop (the signal travels in both directions) is connected to a four-wire trunk (the signal travels in a single direction). The device that serves as the joint, as well as the translator between the two-wire and four-wire sides, is called a hybrid. To each hybrid, an adaptive echo canceller is added. The canceller obtains the information from the outgoing signal and keeps it in its memory. When the incoming signal arrives, it contains two parts of information: one is the signal that the remote station wants to send to the local station; the other is the same but delayed signal that the local canceller held in its memory. By comparison, the echo is identified and removed. The adaptive canceller does indeed improve the quality of long distance communication. Biomedical equipment applications are even more interesting. One example is the fetal ECG monitor. As we know, the waveform generated by a fetus's heart is strongly interfered with by the mother's heartbeat. By sampling the signal from the transducer near the mother's chest, the adaptive canceller receives the origin of the interference. A second set of transducers around the mother's abdomen picks up the combined signal that contains the fetal heart function information and the interference caused by the mother's heartbeat. The canceller then updates its filter parameters and matches the interference. As a result, the difference of the


signals sent to the monitor gives a better representation of the fetus's heart functionality.


2. Adaptive algorithm

2.1 Least Mean Square algorithm

Adaptive algorithms are designed in accordance with a single underlying principle: adapt to reduce the output error for the current training pattern, with minimal disturbance to responses already learned. The minimal disturbance principle was the motivation behind the idea that led to the discovery of the Least Mean Square, or LMS, algorithm. In fact, the LMS algorithm had existed for several months as an error reduction rule before it was discovered that the algorithm uses an instantaneous gradient to follow the path of steepest descent and minimize the mean square error of the training set. It was then given the name "LMS," which stands for Least Mean Square. The LMS algorithm is an implementation of steepest descent using a measured or estimated gradient ∇J; the update estimation is


W(t+1) = W(t) - α ∇J   (2.1.1)

where W(t) is the adaptive filter parameter vector and α is an adaptive constant. The error signal is obtained by taking the difference between the reference and the output of the filter, that is

e(t) = d(t) - Φ(t)^T W(t)   (2.1.2)

where d(t) is the current reference. The observation signal vector is

Φ(t) = [φ(t), φ(t-1), ..., φ(t-L+1)]^T   (2.1.3)

The expectation of the square error is

J = E[e^2(t)]   (2.1.4)

The gradient of J can be written as


∇J = ∂E[e^2(t)] / ∂W(t) = 2 E[e(t) ∂e(t)/∂W(t)]   (2.1.5)

Applying the modification step of replacing the expectation of the square error by an instantaneous value of the square error, we get

∇Ĵ = ∂e^2(t) / ∂W(t) = 2 e(t) ∂e(t)/∂W(t)   (2.1.6)

where

∂e(t)/∂W(t) = -Φ(t)   (2.1.7)

Obviously,

∇Ĵ = -2 e(t) Φ(t)   (2.1.8)

Equation (2.1.8) assumes W(t) and Φ(t) are not correlated. Substituting ∇J in equation (2.1.1) with equation (2.1.8), we have


W(t+1) = W(t) + 2 α e(t) Φ(t)   (2.1.9)

This is known as the Least Mean Square algorithm. The operation of the LMS algorithm can be classified into two processes: the filtering process and the adaptive process. In the filtering process, the LMS algorithm produces a "matching" output in accordance with the current estimated filter weights. In the adaptive process, by comparing the filter's "matching" output with the reference output, the LMS algorithm acknowledges its current performance. A new correction is then determined. The direction of the new correction is toward improving the performance criterion, that is, the minimization of the square error. The adaptive constant α represents the step size of the iteration: the larger α is, the faster the adaptation process. Since the gradient of the square error amplifies noise as well, the LMS algorithm suffers from so-called "misadjustment." Reducing the step size helps in reducing the misadjustment. There are many different variants of the LMS algorithm, such as the Normalized LMS algorithm, the Recursive LMS algorithm, the Exponentially Weighted RLS algorithm and so on.
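The filtering and adaptive processes together form a short loop. The sketch below is a Python illustration (the thesis's own simulations were written in Matlab); the FIR plant coefficients, step size and signal length are assumed values, not taken from the thesis.

```python
import numpy as np

# LMS identification of an unknown FIR plant driven by white noise.
rng = np.random.default_rng(0)
h_true = np.array([0.5, -0.3, 0.2])    # unknown plant (assumed for the demo)
L, mu, N = 3, 0.05, 2000

x = rng.standard_normal(N)             # common input signal
w = np.zeros(L)                        # adaptive filter weights
for t in range(L - 1, N):
    phi = x[t - L + 1:t + 1][::-1]     # observation vector [x(t), x(t-1), x(t-2)]
    d = h_true @ phi                   # reference produced by the unknown plant
    y = w @ phi                        # filtering process
    e = d - y                          # error measurement
    w = w + mu * e * phi               # adaptive process, cf. eq. (2.1.9)

print(np.round(w, 2))                  # weights approach the plant coefficients
```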


2.2 Recursive Least Squares algorithm

The Recursive Least Squares (RLS) algorithm implements recursively an exact least squares solution. The algorithm, however, is derived from quite a different standpoint, by considering the minimization of a deterministic averaged measure, which is

J_N = Σ_{i=0}^{N} e^2(i) = Σ_{i=0}^{N} [d(i) - Φ(i)^T W]^2   (2.2.1)

Differentiating and equating to zero, we have

∂J_N / ∂W = 0   (2.2.2)

From the previous equation, we obtain


Σ_{i=0}^{n} 2 Φ(i) [d(i) - Φ(i)^T W(n)] = 0   (2.2.3)

or

[Σ_{i=0}^{n} Φ(i) Φ(i)^T] W(n) = Σ_{i=0}^{n} Φ(i) d(i)   (2.2.4)

Solving equation (2.2.4) for W(n), we derive

W(n) = [Σ_{i=0}^{n} Φ(i) Φ(i)^T]^{-1} Σ_{i=0}^{n} Φ(i) d(i)   (2.2.5)

If we name R(n) such that

R(n) = Σ_{i=0}^{n} Φ(i) Φ(i)^T   (2.2.6)

similarly,

R(n-1) = Σ_{i=0}^{n-1} Φ(i) Φ(i)^T   (2.2.7)


we derive

R(n) = R(n-1) + Φ(n) Φ(n)^T   (2.2.8)

Further, from equation (2.2.5) we notice that

W(n) = R(n)^{-1} Σ_{i=0}^{n} Φ(i) d(i)   (2.2.9)

and

W(n-1) = R(n-1)^{-1} Σ_{i=0}^{n-1} Φ(i) d(i)   (2.2.10)

Rearranging equation (2.2.9) and equation (2.2.10), we obtain

R(n) W(n) = Σ_{i=0}^{n} Φ(i) d(i)   (2.2.11)

and

R(n-1) W(n-1) = Σ_{i=0}^{n-1} Φ(i) d(i)   (2.2.12)


Next, we subtract equation (2.2.12) from equation (2.2.11) and obtain

R(n) W(n) - R(n-1) W(n-1) = Φ(n) d(n)   (2.2.13)

Based on equation (2.2.13), add and subtract the term R(n) W(n-1) on both sides of the equation simultaneously,

R(n) W(n) - R(n) W(n-1) + R(n) W(n-1) - R(n-1) W(n-1) = Φ(n) d(n)   (2.2.14)

Grouping equation (2.2.14) as follows, we have

R(n) [W(n) - W(n-1)] + [R(n) - R(n-1)] W(n-1) = Φ(n) d(n)   (2.2.15)

According to equation (2.2.8), we rewrite equation (2.2.15), that is

R(n) [W(n) - W(n-1)] = Φ(n) [d(n) - Φ(n)^T W(n-1)]   (2.2.16)

Left multiplying R(n)^{-1} to both sides of equation (2.2.16), we obtain


W(n) = W(n-1) + R(n)^{-1} Φ(n) [d(n) - Φ(n)^T W(n-1)]   (2.2.17)

where R(n) is determined by equation (2.2.8). Notice that the computation of a matrix inverse is involved. To avoid the inconvenience, apply the Matrix Inversion Lemma (MIL), which is

[A + B C]^{-1} = A^{-1} - A^{-1} B [1 + C A^{-1} B]^{-1} C A^{-1}   (2.2.18)

where A is a matrix, and B and C are a column vector and a row vector. In contrast with equation (2.2.8), substitute A = R(n-1), B = Φ(n), C = Φ(n)^T into equation (2.2.18); writing P(n) = R(n)^{-1},

P(n) = P(n-1) - P(n-1) Φ(n) Φ(n)^T P(n-1) / [1 + Φ(n)^T P(n-1) Φ(n)]   (2.2.19)


The recursive least squares algorithm is a natural extension of the LMS algorithm. As the iteration number approaches infinity, the error converges to zero. The convergence rate of the RLS algorithm is faster than that of the LMS algorithm. The update weight factor, or step size, is no longer a constant as in the LMS algorithm; instead, the step size is replaced by the inverse correlation matrix of the observation vector.
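The same identification task can be run with the recursive update (2.2.17) and the inverse-correlation update (2.2.19). Again this is a hedged Python sketch, not the thesis's Matlab code: the plant coefficients and signal length are assumed, and P is initialized to a large multiple of the identity.

```python
import numpy as np

# RLS identification: the scalar step size of LMS is replaced by the
# inverse correlation matrix P(n) of the observation vector.
rng = np.random.default_rng(1)
h_true = np.array([0.5, -0.3, 0.2])    # unknown plant (assumed)
L, N = 3, 200

x = rng.standard_normal(N)
w = np.zeros(L)
P = 1e3 * np.eye(L)                    # large initial inverse correlation
for t in range(L - 1, N):
    phi = x[t - L + 1:t + 1][::-1]
    d = h_true @ phi
    e = d - w @ phi
    k = P @ phi / (1.0 + phi @ P @ phi)   # gain, cf. eq. (2.2.19)
    w = w + k * e                          # update, cf. eq. (2.2.17)
    P = P - np.outer(k, phi @ P)           # rank-one update of P

print(np.round(w, 3))
```

Note how few samples are needed compared with the LMS loop: with exact data the estimate is essentially exact once enough observation vectors have been seen.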


3. Stability and convergence of LMS and RLS algorithms

3.1 Convergence of LMS algorithm

That a system is stable implies that, given a bounded input signal, the system output must also be bounded. In the z-domain the determination criterion is the location of the system's closed-loop poles. If all poles are within the unit circle, the system is stable; otherwise the system is unstable. Since the adaptive process actually applies feedback to the original system, a closed loop is included in the system. Hence, we need to examine the stability and convergence of the adaptive algorithm. For the LMS algorithm, the error can be expressed as follows:

e(t) = d(t) - Φ(t)^T W(t)   (3.1.1)


Based on the current estimate and the required correction, the new update is

W(t+1) = W(t) + μ Φ(t) [d(t) - Φ(t)^T W(t)]   (3.1.2)

Within equation (3.1.2), combining the terms containing W(t) together,

W(t+1) = [I - μ Φ(t) Φ(t)^T] W(t) + μ Φ(t) d(t)   (3.1.3)

and taking the expectation on both sides of equation (3.1.3), we have

E[W(t+1)] = [I - μ R] E[W(t)] + μ P   (3.1.4)

where

R = E[Φ(t) Φ(t)^T],  P = E[Φ(t) d(t)]   (3.1.5)

Assuming the optimal estimate is W*, from equation (3.1.1), obviously E[Φ(t) e(t)] = 0,


or

P = R W*   (3.1.6)

Using equation (3.1.6), equation (3.1.4) becomes

E[W(t+1)] = [I - μ R] E[W(t)] + μ R W*   (3.1.7)

Substituting equation (3.1.5) into equation (3.1.7), we get

E[W(t+1)] - W* = E[W(t)] - W* - μ R [E[W(t)] - W*]   (3.1.8)

In the above equation (3.1.8), grouping the terms with μ together, which is

E[W(t+1)] - W* = [I - μ R] [E[W(t)] - W*]   (3.1.9)

If we name the difference between the optimum and the average as follows,

V(t) = E[W(t)] - W*   (3.1.10)


equation (3.1.9) becomes

V(t+1) = [I - μ R] V(t)   (3.1.11)

We can also write equation (3.1.11) as follows

V(t+1) = V(t) - μ R V(t)   (3.1.12)

Obviously, by applying equation (3.1.10), equation (3.1.12) is equivalent to

E[W(t+1)] = E[W(t)] - μ R [E[W(t)] - W*]   (3.1.13)

By the definitions of eigenvalue and eigenvector, the correlation matrix R can be decomposed as

R = Q Λ Q^T   (3.1.14)

where Λ is a diagonal matrix with the diagonal elements corresponding to the eigenvalues of R; Q is also a matrix, and each column of matrix


Q corresponds to an eigenvector of matrix R. Thus the matrices can be written as:

Λ = diag(λ_0, λ_1, ..., λ_{L-1})   (3.1.15)

Q = [q_0, q_1, ..., q_{L-1}]   (3.1.16)

where L is the length of the adaptive filter. Substituting R in equation (3.1.13) with equation (3.1.14),

V(t+1) = [I - μ Q Λ Q^T] V(t)   (3.1.17)

and applying a rotation to V(t) by matrix Q^T, that is

V'(t) = Q^T V(t)   (3.1.18)

equation (3.1.17) becomes


Q^T V(t+1) = Q^T [I - μ Q Λ Q^T] V(t)   (3.1.19)

after left multiplying Q^T to both sides of equation (3.1.17). Notice that matrix Q satisfies

Q^T Q = I   (3.1.20)

Obviously, equation (3.1.19) is then equivalent to

V'(t+1) = [I - μ Λ] V'(t)   (3.1.21)

Since Λ is diagonal, each element evolves independently:

v'_i(t+1) = (1 - μ λ_i) v'_i(t)   (3.1.22)

For each element we list the subanalysis in time sequence as follows

v'_i(t) = (1 - μ λ_i) v'_i(t-1)   (3.1.23)

v'_i(t-1) = (1 - μ λ_i) v'_i(t-2)   (3.1.24)

v'_i(t-2) = (1 - μ λ_i) v'_i(t-3)   (3.1.25)


v'_i(2) = (1 - μ λ_i) v'_i(1)   (3.1.26)

v'_i(1) = (1 - μ λ_i) v'_i(0)   (3.1.27)

Substituting equation (3.1.27) into equation (3.1.26), and then repeating the substitution steps upward, v'_i(t) is found, which is

v'_i(t) = (1 - μ λ_i)^t v'_i(0)   (3.1.28)

If the algorithm converges, we have

lim_{t→∞} v'_i(t) = 0   (3.1.29)

or

lim_{t→∞} (1 - μ λ_i)^t = 0   (3.1.30)

From equation (3.1.30), we derive

|1 - μ λ_i| < 1   (3.1.31)

or


0 < μ < 2 / λ_i   (3.1.32)

Hence, for element i the condition of LMS algorithm convergence is

lim_{t→∞} (1 - μ λ_i)^t = 0,  i = 0, 1, ..., L-1   (3.1.33)

To satisfy all elements, we reduce the selection range, selecting μ as follows

0 < μ < 2 / λ_max   (3.1.34)

where λ_max is the largest eigenvalue of the correlation matrix R, which guarantees

lim_{t→∞} V'(t) = 0   (3.1.35)

or, referring to equations (3.1.10) and (3.1.18), we have

lim_{t→∞} Q^T [E[W(t)] - W*] = 0   (3.1.36)


Since Q is a non-zero matrix, we get

lim_{t→∞} E[W(t)] - W* = 0   (3.1.37)

Equation (3.1.37) shows that the algorithm converges to the mean only. Further, the trace of matrix R is defined as

tr(R) = Σ_{i=0}^{L-1} λ_i   (3.1.38)

Since the correlation matrix of a physical system is positive semidefinite, its eigenvalues are real and satisfy

λ_i ≥ 0   (3.1.39)

thus

λ_max ≤ Σ_{i=0}^{L-1} λ_i   (3.1.40)

or

λ_max ≤ tr(R)   (3.1.41)


By combining equation (3.1.34) with equation (3.1.41), we refine the selection of μ as

0 < μ < 2 / tr(R)   (3.1.42)

The trace of matrix R is proportional to the product of the filter length and the autocorrelation of the input, which is

tr(R) = Σ_{i=0}^{L-1} E[φ^2(t-i)]   (3.1.43)

or

tr(R) = L E[φ^2(t)]   (3.1.44)

Finally, by substituting the summation in equation (3.1.42) with equation (3.1.44), we obtain the condition for the LMS algorithm converging in the mean, which is

0 < μ < 2 / (L E[φ^2(t)])   (3.1.45)
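The bound (3.1.45) is easy to check numerically. A small hedged sketch follows (the filter length and input statistics are arbitrary choices, not from the thesis): for a unit-power white input and an 8-tap filter the bound evaluates to roughly 0.25.

```python
import numpy as np

# Numerical check of the LMS step-size bound mu < 2 / (L * E[phi^2]).
rng = np.random.default_rng(2)
phi = rng.standard_normal(100_000)   # unit-power input samples (assumed)
L = 8                                # filter length (assumed)

power = np.mean(phi**2)              # estimate of E[phi^2(t)]
mu_max = 2.0 / (L * power)           # upper edge from eq. (3.1.45)
print(mu_max)
```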


From the above convergence analysis, for the stability of the LMS algorithm, we conclude: the bound on the adaptive constant depends on the filter length and the power of the observation vector. The algorithm converges to the mean rather than to the true system parameters. There is no guarantee for the error to converge to zero. The input of the algorithm is model-independent, and can be either stationary or non-stationary. The biggest advantage of the algorithm is its computational simplicity; therefore, it is easy to implement. The disadvantage is that the algorithm suffers from the problem known as noise "misadjustment." One solution is to select a small value for the adaptive constant, which also reduces the step size and slows the algorithm's convergence rate.

3.2 Convergence of RLS algorithm

From chapter 2, the filter update estimate of the RLS algorithm is determined by

W(n) = W(n-1) + P(n) Φ(n) [d(n) - Φ(n)^T W(n-1)]   (3.2.1)


Let W* be the optimal filter parameter, and subtract W* from both sides of equation (3.2.1), that is

(3.2.2)

The difference between the current estimates and the optimum is

W̃(n) = W(n) - W*   (3.2.3)

Combining equation (3.2.3) with equation (3.2.2), we have

(3.2.4)

Left multiplying P(n)^{-1} to both sides of equation (3.2.4), we get

(3.2.5)

Since P(n)^{-1} = R(n) is defined as


R(n) = R(n-1) + Φ(n) Φ(n)^T   (3.2.6)

substituting equation (3.2.6) into equation (3.2.5), we obtain (3.2.7). Referring to Figure 1.6, the reference d(n) is the corrupted signal, which is

(3.2.8)

The noise signals are correlated by the true unknown parameter, which is

(3.2.9)

Therefore, equation (3.2.8) can be written as

(3.2.10)

where


(3.2.11)

Next, we substitute d(n) in equation (3.2.7) with equation (3.2.10),

(3.2.12)

which yields

(3.2.13)

Based on equation (3.2.13), we list the subanalysis in sequence of time as follows

(3.2.14)

(3.2.15)

(3.2.16)


By substituting equation (3.2.16) into equation (3.2.15), and repeating the substitution steps upward to equation (3.2.13), we derive

(3.2.17)

or

(3.2.18)

When the iteration number becomes large enough, we have

(3.2.19)

(3.2.20)

Notice that the limit matrix is positive definite, that is

(3.2.21)


Since the initial conditions are bounded, we have

(3.2.22)

Further, assuming the two input signals are uncorrelated and referring to equation (3.2.11), whose terms make up the elements of the signal vector Φ(n), we have

(3.2.23)

or

(3.2.24)

By taking the limit of equation (3.2.19), we derive that

(3.2.25)

In summary, with equations (3.2.21), (3.2.22) and (3.2.24), we obtain


lim_{N→∞} [W(N) - W*] = 0, where W* is the optimal parameter.   (3.2.26)

From equation (3.2.26), we conclude that the RLS algorithm converges to the true parameter. In comparison with the LMS algorithm, the RLS algorithm converges much faster. The disadvantage of the RLS algorithm is that it is relatively expensive in computation. Moreover, the RLS algorithm suffers from the so-called problem of gain vanishing, or "eigenvalue disparity": the reduction in the correction gain causes the algorithm to be less sensitive to changes in the input after a certain number of iterations. To avoid the problem, we may use modified algorithms or reinitialize the process periodically. The Exponentially Weighted RLS algorithm is similar to the RLS algorithm; the added factor λ weighs the inputs differently, setting the highest priority for the current input and reducing the weights as the number of iterations increases. The following equations are the forms of the Exponentially Weighted RLS algorithm:

W(n) = W(n-1) + P(n) Φ(n) [d(n) - Φ(n)^T W(n-1)]   (3.2.27)

P(n) = [P(n-1) - P(n-1) Φ(n) Φ(n)^T P(n-1) / (λ + Φ(n)^T P(n-1) Φ(n))] / λ   (3.2.28)
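The exponentially weighted update can be sketched as below, a hedged Python illustration in which the plant, the forgetting factor λ = 0.95 and the initialization are assumed values; dividing P by λ at each step is what keeps the gain from vanishing.

```python
import numpy as np

# Exponentially weighted RLS: a forgetting factor lam < 1 gives the
# current input the highest priority and down-weights older samples.
def ew_rls(x, d, L=2, lam=0.95, delta=1e3):
    w = np.zeros(L)
    P = delta * np.eye(L)
    for t in range(L - 1, len(x)):
        phi = x[t - L + 1:t + 1][::-1]
        e = d[t] - w @ phi
        k = P @ phi / (lam + phi @ P @ phi)    # gain, cf. eq. (3.2.27) form
        w = w + k * e
        P = (P - np.outer(k, phi @ P)) / lam   # cf. eq. (3.2.28)
    return w

rng = np.random.default_rng(3)
x = rng.standard_normal(500)
d = np.convolve(x, [0.7, 0.3])[:len(x)]        # output of an assumed plant
print(np.round(ew_rls(x, d), 2))
```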


3.3 Simulation on LMS and RLS algorithm comparison

The following simulation example shows the characteristics of the LMS and RLS algorithms. The simulations use system identification as an example. The system to be identified has the behavior model as follows:

2.7 -1) 2) 3)

The exciting signal is:

= -1); = 0; = 1.

Figures 3.1 to 3.4 show the simulations based on the LMS algorithm. We select the adaptive constant as α = 0.05, and the iteration number of the simulation is 400. For Matlab code, see the appendix.


Figure 3.1 LMS simulation result


Figure 3.2 LMS simulation result


Figure 3.3 LMS simulation result


Figure 3.4 LMS simulation result


Figures 3.5 to 3.8 show the simulations based on the RLS algorithm, where the weighting factor is 0.8 and the iteration number is 102. For Matlab code, see the appendix.


Figure 3.5 RLS simulation result


Figure 3.6 RLS simulation result


Figure 3.7 RLS simulation result


Figure 3.8 RLS simulation result


4. Stochastic parallel model

Chapter 4 focuses on the discussion of the parallel model, beginning with the concept of linearity. A filter is said to be linear if the filtered, smoothed, or predicted quantity at the output of the device is a linear function of the observations applied to the filter input. Otherwise, the filter is non-linear. Recall that adaptive filter structures can be classified into two types: Finite Impulse Response (FIR) and Infinite Impulse Response (IIR). The former (FIR) is a linear filter. Let the transfer function of the adaptive FIR filter be

H_t(z^-1) = b_0(t) + b_1(t) z^-1 + ... + b_m(t) z^-m   (4.0.1)

By applying a signal u(t) as input to the FIR filter, we obtain the output, which is

y(t) = b_0(t) u(t) + b_1(t) u(t-1) + ... + b_m(t) u(t-m)   (4.0.2)

Expressed in vector form, we have


y(t) = Θ(t)^T Φ(t)   (4.0.3)

where Θ(t) and Φ(t) are both vectors, which are

Θ(t) = [b_0(t), b_1(t), ..., b_m(t)]^T   (4.0.4)

Φ(t) = [u(t), u(t-1), ..., u(t-m)]^T   (4.0.5)

At time t, the output of the adaptive FIR filter is solely determined by the history of the inputs and has a linear function expression. Therefore, the FIR filter is a linear filter. In the case of the adaptive IIR filter, the transfer function of the filter has the following form

H_t(z^-1) = B_t(z^-1) / A_t(z^-1)   (4.0.6)

Applying the input u(t), the output of the IIR filter is

(4.0.7)


or it can be written in vector form, that is

y(t) = Θ(t)^T Φ(t)   (4.0.8)

where the quantities involved are vectors, which are

(4.0.9)

(4.0.10)

Θ(t) = [a_1(t), ..., a_n(t), b_0(t), ..., b_m(t)]^T   (4.0.11)

Φ(t) = [y(t-1), y(t-2), ..., y(t-n), u(t), ..., u(t-m)]^T   (4.0.12)

At time t, the output of the adaptive IIR filter is determined not only by the history of the inputs, but also by the history of the filter outputs. There is no linear function that can describe the behavior of the IIR adaptive filter. The recursive behavior can, however, potentially produce comparable results with far fewer coefficients, and consequently with a much smaller computation burden. However,


by using the IIR filter structure, the filter becomes highly non-linear, which aggravates the stability and convergence problems. Simulation experience shows that IIR adaptive filters are prone to instability as a consequence of unbounded growth of the filter coefficients. Unlike the FIR filter, the IIR filter may converge to a local rather than a global minimum. Moreover, the IIR filter converges more slowly than the FIR filter. In spite of these disadvantages, the structure of the IIR filter still remains remarkable in certain respects, as can be shown in the following three problems.

4.1 Statement of problem A

The block diagram shown in Figure 4.1 is a parallel adaptive filter model, in which the unknown plant is stable and is driven by a noise source. The corrupted signal contains the useful signal and the noise; the corrupted signal and the noise are the measurable signals. The transfer function of the adaptive IIR filter is to be designed.
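The FIR/IIR distinction above is visible in a few lines of code. The following Python sketch (the coefficients are illustrative, not from the thesis) computes an IIR filter output directly from the recursion, showing that past outputs feed back into the current one:

```python
# Direct-form IIR filtering: y(t) depends on past outputs as well as
# past inputs, so the recursion cannot be written as a weight vector
# acting on a finite input history alone.
def iir_filter(x, b, a):
    """y(t) = sum_k b[k] x(t-k) - sum_j a[j-1] y(t-j)."""
    y = []
    for t in range(len(x)):
        acc = sum(b[k] * x[t - k] for k in range(len(b)) if t - k >= 0)
        acc -= sum(a[j - 1] * y[t - j] for j in range(1, len(a) + 1) if t - j >= 0)
        y.append(acc)
    return y

# Impulse response of H(z) = 1 / (1 - 0.5 z^-1): infinitely many terms,
# matching the non-zero-remainder argument around eq. (1.2.4).
print(iir_filter([1, 0, 0, 0], b=[1.0], a=[-0.5]))
```

With a = [-0.5] standing for A(z^-1) = 1 - 0.5 z^-1, a single-sample impulse produces the geometric response 1, 0.5, 0.25, 0.125, ..., which never terminates.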


The structure of the IIR filter is described as follows:

H_t(z^-1) = B_t(z^-1) / A_t(z^-1)   (4.1.1)

where A_t(z^-1) and B_t(z^-1) are polynomials, defined as

B_t(z^-1) = b_0(t) + b_1(t) z^-1 + b_2(t) z^-2 + ... + b_m(t) z^-m   (4.1.2)


A_t(z^-1) = 1 + a_1(t) z^-1 + ... + a_n(t) z^-n   (4.1.3)

Given the input u(t) to the filter, the output of the filter is

y(t) = [B_t(z^-1) / A_t(z^-1)] u(t)   (4.1.4)

Clearly, equation (4.1.4) can be written as

A_t(z^-1) y(t) = B_t(z^-1) u(t)   (4.1.5)

This equation falls into the form

y(t) = [1 - A_t(z^-1)] y(t) + B_t(z^-1) u(t)   (4.1.6)

Grouping the delay operator with the signal, we get

(4.1.7)

By expanding the polynomials A_t(z^-1) and B_t(z^-1), we have


(4.1.8)

Equation (4.1.8) can be written in vector form, that is

(4.1.9)

If we define Θ(t), Φ(t-1) and their components in the following way:

(4.1.10)

(4.1.11)

(4.1.12)

(4.1.13)


then equation (4.1.9) can be written as

y(t) = Θ(t)^T Φ(t-1)   (4.1.14)

Further, let Θ(t) and Φ(t-1) be defined as

(4.1.15)

(4.1.16)

Therefore, from equation (4.1.14), we obtain

(4.1.17)

From Figure 4.1, it follows that

(4.1.18)

Squaring both sides of equation (4.1.18) yields


(4.1.19)

We can find the gradient of the error square as

(4.1.20)

where the differentiation of the error is

(4.1.21)

Next, by separating the IIR filter parameters into two parts, the numerator and the denominator, we have

(4.1.22)

and for an individual denominator element and numerator element we have

(4.1.23)


(4.1.24)

In conjunction with equation (4.1.23) and equation (4.1.24), we derive

(4.1.25)

Method One: ignore the second term in equation (4.1.25) completely, that is

(4.1.26)

Thus equation (4.1.25) becomes

(4.1.27)

Combining equation (4.1.27) and equation (4.1.20), we get


(4.1.28)

Referring to equation (2.1.1), for the LMS algorithm, the update estimate at each iteration is

(4.1.29)

By substituting equation (4.1.28) into equation (4.1.29), we derive

(4.1.30)

Corresponding to the RLS algorithm, we have

(4.1.31)

where P(n) is defined as


(4.1.32)

and the initial condition is selected as P(0) = cI with c > 0.

Method Two: referring to equation (4.1.25), let Q(t) be defined as

(4.1.33)

Consequently, we obtain


(4.1.34)

Substituting equation (4.1.34) into equation (4.1.25), we get

(4.1.35)

The above equation can be written in vector form, that is

(4.1.36)

where the quantities involved are vectors. For the LMS algorithm, the update parameter follows, with the definition of the correction term as

(4.1.37)


W(t+1) = W(t) + Δ(t)   (4.1.38)

where Δ(t) is a vector as shown in equation (4.1.39), that is

(4.1.39)

and the denominator coefficient vector belongs to the polynomial A_t(z^-1) of the IIR adaptive filter. Corresponding to the RLS algorithm, we have the following basic form:

(4.1.40)

(4.1.41)


4.2 Statement of problem B

The model given in Figure 4.2 is an adaptive forward control model. Besides the unknown plant in the up channel, as in Problem A, two more unknown plants exist, both located in the down channel. The measurable signals are indicated in the figure, and an adaptive IIR filter is to be designed. One input is the noise; the other is the signal of interest. The objective of the adaptive filter is to obtain a system output signal that matches the signal of interest.


According to Figure 4.2, we have

$$\hat{y}(n) = P(z^{-1})\, K(z^{-1}, n)\, m(n) \qquad (4.2.1)$$

and

$$y(n) = G(z^{-1})\, s(n) \qquad (4.2.2)$$

$$m(n) = H(z^{-1})\, s(n) \qquad (4.2.3)$$

Substituting equation (4.2.3) into equation (4.2.1), we have


$$\hat{y}(n) = P(z^{-1})\, K(z^{-1}, n)\, H(z^{-1})\, s(n) \qquad (4.2.4)$$

or, requiring $\hat{y}(n)$ to match $y(n)$ of equation (4.2.2),

$$P(z^{-1})\, K(z^{-1}, n)\, H(z^{-1})\, s(n) = G(z^{-1})\, s(n) \qquad (4.2.5)$$

Equation (4.2.5) can be written as

$$K(z^{-1}) = \frac{G(z^{-1})}{P(z^{-1})\, H(z^{-1})} \qquad (4.2.6)$$

From equation (4.2.6), if there exist polynomials $A(z^{-1})$ and $B(z^{-1})$ that are the approximations of the mixed unknown transfer functions shown below,

$$1 + A(z^{-1}) \approx \operatorname{den}\!\left[ \frac{G(z^{-1})}{P(z^{-1})\, H(z^{-1})} \right] \qquad (4.2.7)$$

$$B(z^{-1}) \approx \operatorname{num}\!\left[ \frac{G(z^{-1})}{P(z^{-1})\, H(z^{-1})} \right] \qquad (4.2.8)$$


where the polynomials have the finite-length forms

$$A(z^{-1}) = a_1 z^{-1} + a_2 z^{-2} + \cdots + a_N z^{-N} \qquad (4.2.9)$$

$$B(z^{-1}) = b_0 + b_1 z^{-1} + b_2 z^{-2} + \cdots + b_M z^{-M} \qquad (4.2.10)$$

then we can derive equation (4.2.11), but bear in mind that the equal sign is actually an approximation sign.

$$\hat{y}(n) = -A(z^{-1})\, \hat{y}(n) + B(z^{-1})\, m(n) \qquad (4.2.11)$$

or in vector form,

$$\hat{y}(n) = \big[ a^T(n),\; b^T(n) \big] \begin{bmatrix} \hat{Y}(n-1) \\ M(n) \end{bmatrix} \qquad (4.2.12)$$

where $a(n)$, $b(n)$, $\hat{Y}(n-1)$ and $M(n)$ are all vectors:

$$a(n) = \big[ a_1(n), \ldots, a_N(n) \big]^T \qquad (4.2.13)$$


$$b(n) = \big[ b_0(n), \ldots, b_M(n) \big]^T \qquad (4.2.14)$$

$$\hat{Y}(n-1) = \big[ -\hat{y}(n-1), \ldots, -\hat{y}(n-N) \big]^T \qquad (4.2.15)$$

$$M(n) = \big[ m(n), \ldots, m(n-M) \big]^T \qquad (4.2.16)$$

Let $\theta(n)$ and $\varphi(n)$ be defined as follows

$$\theta(n) = \big[ a^T(n),\; b^T(n) \big]^T \qquad (4.2.17)$$

$$\varphi(n) = \big[ \hat{Y}^T(n-1),\; M^T(n) \big]^T \qquad (4.2.18)$$

Then from equation (4.2.11), we obtain

$$\hat{y}(n) = \theta^T(n)\, \varphi(n) \qquad (4.2.19)$$

Equation (4.2.19) establishes the basis for the implementation of the algorithms. In the case of the LMS algorithm we focus on the minimization of the squared error, while in the case of the RLS algorithm we focus on the minimization of the mean error


square. The bottom line is that it is possible for an adaptive IIR filter to adapt to all three unknown plants within one application. Meanwhile, recall that equations (4.2.7) and (4.2.8) are the results of approximation: the lengths of both polynomials $A(z^{-1})$ and $B(z^{-1})$ are limited. Notice that for the general case the poles and zeros of the three unknown plants $G(z^{-1})$, $H(z^{-1})$, and $P(z^{-1})$ are assumed not to cancel. Hence, the impulse responses of the approximated transfer functions actually have infinitely many terms. Therefore, the solution is always biased, no matter what kind of algorithm is applied. Figure 4.3 is an adaptive forward control with a feedback model. The structure is similar to Problem B, with one unknown plant in the up channel and two unknown plants in the down channel; yet in Problem C there is a fourth unknown plant, $F(z^{-1})$, which forms a feedback loop between signal $u(n)$ and signal $m(n)$. Measurable signals are $d(n)$ and $s(n)$; $K(z^{-1}, n)$ is


the adaptive IIR filter to be designed, $s(n)$ is the noise input, and $v(n)$ is the signal of interest. The objective of the adaptive IIR filter is to obtain a system output signal $\hat{y}(n)$ matched with the signal $y(n)$. Referring to Figure 4.3, it follows that

$$\hat{y}(n) = P(z^{-1})\, u(n) \qquad (4.3.1)$$


$$m(n) = H(z^{-1})\, s(n) \qquad (4.3.2)$$

$$u(n) = K(z^{-1}, n)\, \big[ m(n) + F(z^{-1})\, u(n) \big] \qquad (4.3.3)$$

Substituting for $u(n)$ in equation (4.3.1) with equation (4.3.3), we have

$$\hat{y}(n) = \frac{P(z^{-1})\, K(z^{-1}, n)\, H(z^{-1})}{1 - K(z^{-1}, n)\, F(z^{-1})}\; s(n) \qquad (4.3.4)$$

or, requiring $\hat{y}(n)$ to match $y(n) = G(z^{-1})\, s(n)$,

$$\frac{P(z^{-1})\, K(z^{-1})\, H(z^{-1})}{1 - K(z^{-1})\, F(z^{-1})}\; s(n) = G(z^{-1})\, s(n) \qquad (4.3.5)$$

Equation (4.3.5) is equivalent to

$$K(z^{-1}) \big[ P(z^{-1})\, H(z^{-1}) + G(z^{-1})\, F(z^{-1}) \big] = G(z^{-1}) \qquad (4.3.6)$$

Obviously, we have


$$K(z^{-1}) = \frac{G(z^{-1})}{P(z^{-1})\, H(z^{-1}) + G(z^{-1})\, F(z^{-1})} \qquad (4.3.7)$$

Again, let's consider polynomials $A(z^{-1})$ and $B(z^{-1})$ as the approximations of the following transfer functions:

$$1 + A(z^{-1}) \approx \operatorname{den}\!\left[ \frac{G(z^{-1})}{P(z^{-1}) H(z^{-1}) + G(z^{-1}) F(z^{-1})} \right] \qquad (4.3.8)$$

$$B(z^{-1}) \approx \operatorname{num}\!\left[ \frac{G(z^{-1})}{P(z^{-1}) H(z^{-1}) + G(z^{-1}) F(z^{-1})} \right] \qquad (4.3.9)$$

where $A(z^{-1})$ and $B(z^{-1})$ are polynomials of finite length:

$$A(z^{-1}) = a_1 z^{-1} + a_2 z^{-2} + \cdots + a_N z^{-N} \qquad (4.3.10)$$

$$B(z^{-1}) = b_0 + b_1 z^{-1} + b_2 z^{-2} + \cdots + b_M z^{-M} \qquad (4.3.11)$$

Therefore, we derive equation (4.3.12). Also bear in mind that the equal sign is actually an approximation sign.


$$\hat{y}(n) = -A(z^{-1})\, \hat{y}(n) + B(z^{-1})\, \bar{m}(n), \qquad \bar{m}(n) = m(n) + F(z^{-1})\, u(n) \qquad (4.3.12)$$

or in vector form,

$$\hat{y}(n) = a^T(n)\, \hat{Y}(n-1) + b^T(n)\, \bar{M}(n) \qquad (4.3.13)$$

where $a(n)$, $b(n)$, $\hat{Y}(n-1)$ and $\bar{M}(n)$ are all vectors:

$$a(n) = \big[ a_1(n), \ldots, a_N(n) \big]^T \qquad (4.3.14)$$

$$b(n) = \big[ b_0(n), \ldots, b_M(n) \big]^T \qquad (4.3.15)$$

$$\hat{Y}(n-1) = \big[ -\hat{y}(n-1), \ldots, -\hat{y}(n-N) \big]^T \qquad (4.3.16)$$

$$\bar{M}(n) = \big[ \bar{m}(n), \ldots, \bar{m}(n-M) \big]^T \qquad (4.3.17)$$

Let $\theta(n)$ and $\varphi(n)$ be defined as follows


$$\theta(n) = \big[ a^T(n),\; b^T(n) \big]^T \qquad (4.3.18)$$

$$\varphi(n) = \big[ \hat{Y}^T(n-1),\; \bar{M}^T(n) \big]^T \qquad (4.3.19)$$

Finally, we derive

$$\hat{y}(n) = \theta^T(n)\, \varphi(n) \qquad (4.3.20)$$

Recalling the result given in Problem B and comparing it with equation (4.3.20), we find that even though we add one more unknown plant to the system, the adaptation procedure remains unchanged. Our adaptive IIR filter is truly capable of adapting to four unknown plants simultaneously. Meanwhile, one should keep in mind that the solution to the problem is not the optimum solution: there is always a bias with respect to the true system, regardless of the type of algorithm applied. In spite of the accuracy issue, the method provides a simple means to approach a complicated situation, which leaves great room for potential implementation. For the LMS and the RLS algorithm implementations, refer to Problem B.
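The closed-loop algebra of equations (4.3.4) through (4.3.7) can be cross-checked numerically. Below is a small illustrative Python sketch (the thesis' own listings are in MATLAB; the helper names `tf_eval` and `closed_loop` are assumptions). It evaluates the four Chapter 5 example plants, equations (5.3.1) to (5.3.4), at a point on the unit circle and verifies that the ideal controller of (4.3.7) makes the closed-loop map from $s(n)$ to $\hat{y}(n)$ equal to $G(z^{-1})$:

```python
import numpy as np

# Example plants in (numerator, denominator) coefficient form, powers of
# z^-1; the numeric values are the Chapter 5 models, eqs. (5.3.1)-(5.3.4).
G = ([2.5, -1.7, 0.8], [1.0, -0.4])
H = ([1.5, -1.3, 0.4], [1.0, -0.5])
P = ([1.2, 0.5], [1.0, -1.6, 0.64])
F = ([1.0, -0.2], [1.0, 0.2])

def tf_eval(tf, z):
    """Evaluate num(z^-1)/den(z^-1) at a complex frequency point z."""
    num, den = tf
    return (np.dot(num, z ** -np.arange(len(num))) /
            np.dot(den, z ** -np.arange(len(den))))

def closed_loop(z):
    """Closed-loop map s -> y_hat of eq. (4.3.4), using the ideal
    controller K = G / (P H + G F) of eq. (4.3.7)."""
    g, h, p, fb = (tf_eval(t, z) for t in (G, H, P, F))
    k = g / (p * h + g * fb)           # eq. (4.3.7)
    return p * k * h / (1.0 - k * fb)  # eq. (4.3.4)
```

Algebraically, $1 - KF = PH/(PH+GF)$, so the ratio collapses to $G$ exactly; the sketch just confirms this numerically.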


The following computer simulations demonstrate the performance of the adaptive noise cancellers we designed for the models described in Chapter 4: Problem A, shown in Figure 4.1; Problem B, shown in Figure 4.2; and Problem C, shown in Figure 4.3. The conditions of the simulations are: the information-bearing signal is $v(n) = 10\cos(2\pi n/30)$ and the disturbance noise model is $s(n) = w(n) + 0.9\,w(n-1)$, where $w(n)$ is white noise and satisfies $E[w(n)] = 0$ and $E[w^2(n)] = 1$.
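The signal model above matches the appendix generator `usflsig`. The following is a small Python sketch of the same generator (the thesis listing is MATLAB; the name `usflsig_py` and the seeded generator are illustrative assumptions):

```python
import numpy as np

def usflsig_py(L, rng=np.random.default_rng(0)):
    """Python sketch of the thesis signal generator (appendix `usflsig`):
    v(n) = 10 cos(2 pi n / 30) is the information-bearing signal and
    s(n) = w(n) + 0.9 w(n-1) is the disturbance, with w(n) white noise
    satisfying E[w] = 0 and E[w^2] = 1."""
    n = np.arange(1, L + 1)
    v = 10.0 * np.cos(2.0 * np.pi * n / 30.0)
    w = rng.standard_normal(L)
    s = w + 0.9 * np.concatenate([[0.0], w[:-1]])  # MA(1) disturbance
    return v, s
```

Note the disturbance variance is $1 + 0.9^2 = 1.81$, which sets the scale of the corruption the canceller has to remove.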


The objective is to obtain a stable system output matching the signal $v(n)$, which is initially corrupted by the noise disturbance. Referring to Figure 4.1, we assume that the unknown plant behaves as the transfer function given in equation (5.1.1), that is

$$G(z^{-1}) = \frac{2.5 - 1.7\,z^{-1} + 0.8\,z^{-2}}{1 - 0.4\,z^{-1}} \qquad (5.1.1)$$

The transfer function of the adaptive IIR filter to be designed has the form

$$K(z^{-1}, n) = \frac{b_0 + b_1 z^{-1} + b_2 z^{-2}}{1 + a_1 z^{-1}} \qquad (5.1.2)$$

Notice that in equation (5.1.2) we assume that the pole order of $K(z^{-1}, n)$ is one and the zero order is two. Actually, since $G(z^{-1})$ is unknown, the


determination of the pole-zero order of $K(z^{-1}, n)$ can only be done by estimation. The common procedure is to assume low orders first, experiment with simulations, and adjust as needed. Figures 5.1 to 5.4 are the simulation results from the approximation based on Method One. We use the RLS algorithm as the adaptive algorithm. The total number of iterations in the simulation is 1020. We select $\lambda = 1$ and $p_0 = 0.1$. The simulation produces four figures, as follows: Adaptive IIR Filter Parameter Estimation; System Output; Mean Square Error; Mean Minimum Eigenvalue of the Inverse Correlation Matrix. At iteration 1019, the instantaneous parameter of the adaptive IIR filter is [ -0.2967 2.5414 -1.4753 0.7700 ]. The true system parameter applied is [ -0.4 2.5 -1.7 0.8 ].


Since we use the RLS algorithm, the adaptive filter numerator polynomial parameters $b_0$, $b_1$, $b_2$ are guaranteed to converge to the true system parameters. However, the denominator polynomial parameter $a_1$ can only converge to a biased estimate of the true value. The reason is that in the IIR filter structure, part of the signal vector (the past filter outputs) is itself a function of the controller parameters; this factor has been ignored in the approximation we made. The requirement for this approximation is that the change in the parameters must be small.
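The RLS recursion of equations (4.1.31) and (4.1.32) used in this simulation can be sketched as a standalone step. The following is an illustrative Python sketch, not the thesis' MATLAB listing; the function name `rls_update` and its signature are assumptions:

```python
import numpy as np

def rls_update(theta, P, phi, e, lam=1.0):
    """One RLS step per eqs. (4.1.31)-(4.1.32).

    theta(n+1) = theta(n) + P(n) phi(n) e(n), with the inverse-correlation
    matrix P(n) propagated by the matrix-inversion-lemma recursion and
    lam the forgetting factor (0 < lam <= 1)."""
    Pphi = P @ phi
    denom = lam + phi @ Pphi
    P_new = (P - np.outer(Pphi, Pphi) / denom) / lam   # eq. (4.1.32)
    theta_new = theta + P_new @ phi * e                # eq. (4.1.31)
    return theta_new, P_new
```

For a purely linear (FIR-type) regression the recursion drives the estimate to the true parameter vector; the bias discussed above arises only because the IIR regressor itself depends on the parameters, not from the RLS step.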


[Figure 5.1: Adaptive IIR filter parameter estimation]


[Figure 5.2: System output]


[Figure 5.3: Mean square error]


[Figure 5.4: Mean minimum eigenvalue of the inverse correlation matrix]


5.2 Referring to Figure 4.2, a more general case than Problem A is given in Problem B. Besides the unknown plant $G(z^{-1})$, there are two more unknown plants, $H(z^{-1})$ and $P(z^{-1})$. The noise canceller to be designed must adapt to all three unknown plants. The objective is to produce a matching signal that cancels the corrupting noise so that the desired signal is recovered at the system output. We use the following three transfer function models as the three unknown plants:

$$G(z^{-1}) = \frac{2.5 - 1.7\,z^{-1} + 0.8\,z^{-2}}{1 - 0.4\,z^{-1}} \qquad (5.2.1)$$

$$H(z^{-1}) = \frac{1.5 - 1.3\,z^{-1} + 0.4\,z^{-2}}{1 - 0.5\,z^{-1}} \qquad (5.2.2)$$

$$P(z^{-1}) = \frac{1.2 + 0.5\,z^{-1}}{1 - 1.6\,z^{-1} + 0.64\,z^{-2}} \qquad (5.2.3)$$
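Equation (4.2.6) says the adaptive filter is chasing the mixed transfer function $G/(PH)$. With the three Problem B plant models of equations (5.2.1) to (5.2.3), that ideal controller can be evaluated numerically. The following Python sketch is illustrative only (the thesis listings are MATLAB; `tf_eval` and `ideal_controller` are assumed names) and verifies the identity $K \cdot P \cdot H = G$ on the unit circle:

```python
import numpy as np

# The three Problem B plants of eqs. (5.2.1)-(5.2.3), written as
# (numerator, denominator) coefficient pairs in powers of z^-1.
G = ([2.5, -1.7, 0.8], [1.0, -0.4])
H = ([1.5, -1.3, 0.4], [1.0, -0.5])
P = ([1.2, 0.5], [1.0, -1.6, 0.64])

def tf_eval(tf, z):
    """Evaluate a transfer function num(z^-1)/den(z^-1) at a point z."""
    num, den = tf
    return (np.dot(num, z ** -np.arange(len(num))) /
            np.dot(den, z ** -np.arange(len(den))))

def ideal_controller(z):
    """Ideal (generally infinite-order) controller of eq. (4.2.6):
    K = G / (P H). The finite adaptive filter can only approximate it,
    which is the source of the bias discussed in Chapter 4."""
    return tf_eval(G, z) / (tf_eval(P, z) * tf_eval(H, z))
```

Because $K$ here is a ratio of rational functions whose poles and zeros do not cancel, any finite-order $K(z^{-1}, n)$ is a truncation of it, consistent with the biased-solution remark in Chapter 4.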


The adaptive IIR filter to be designed has the structure form

$$K(z^{-1}, n) = \frac{b_0 + b_1 z^{-1} + b_2 z^{-2} + b_3 z^{-3}}{1 + a_1 z^{-1} + a_2 z^{-2} + a_3 z^{-3} + a_4 z^{-4} + a_5 z^{-5}} \qquad (5.2.4)$$

As mentioned before, the pole-zero orders of the IIR filter are estimates; further adjustments may be applied if the result does not meet the specification. We use the RLS algorithm and Method One as the solution method to simulate the situation. The conditions of the simulation are based on selecting $\lambda = 1$ and $p_0 = 0.0012$. The total number of iterations in the simulation is 1500. The following lists the figures (Figure 5.5 to Figure 5.11) resulting from the simulations: Adaptive IIR Filter Denominator Polynomial Parameter Estimation; Adaptive IIR Filter Numerator Polynomial Parameter Estimation; System Output; Original Corrupted Signal; Control Output; Mean Square Error; Mean Minimum Eigenvalue of the Inverse Correlation Matrix.


At iteration 1498, the parameter vector of the adaptive IIR filter is [ 0.5413 0.5413 0.3354 0.3354 0.1349 1.0628 -0.3215 -0.3215 -0.4122 ], corresponding to the vector

$$\theta = \big[ a_1, a_2, a_3, a_4, a_5,\; b_0, b_1, b_2, b_3 \big]^T$$

where the elements of the vector are the parameters of the adaptive IIR filter.


[Figure 5.5: Adaptive IIR filter denominator polynomial parameter estimation]


[Figure 5.6: Adaptive IIR filter numerator polynomial parameter estimation]


[Figure 5.7: System output]


[Figure 5.8: Original corrupted signal]


[Figure 5.9: Control output]


[Figure 5.10: Mean square error]


[Figure 5.11: Mean minimum eigenvalue of the inverse correlation matrix]


Referring to Figure 4.3, and based on Problems A and B, a more general structure is given in Problem C. A fourth unknown plant, $F(z^{-1})$, is added to form a local feedback loop. We assume the unknown plants of the system have the following behavior models:

$$G(z^{-1}) = \frac{2.5 - 1.7\,z^{-1} + 0.8\,z^{-2}}{1 - 0.4\,z^{-1}} \qquad (5.3.1)$$

$$H(z^{-1}) = \frac{1.5 - 1.3\,z^{-1} + 0.4\,z^{-2}}{1 - 0.5\,z^{-1}} \qquad (5.3.2)$$

$$P(z^{-1}) = \frac{1.2 + 0.5\,z^{-1}}{1 - 1.6\,z^{-1} + 0.64\,z^{-2}} \qquad (5.3.3)$$

$$F(z^{-1}) = \frac{1 - 0.2\,z^{-1}}{1 + 0.2\,z^{-1}} \qquad (5.3.4)$$


The adaptive IIR filter to be designed has the structure form

$$K(z^{-1}, n) = \frac{b_0 + b_1 z^{-1} + b_2 z^{-2} + b_3 z^{-3}}{1 + a_1 z^{-1} + a_2 z^{-2} + a_3 z^{-3} + a_4 z^{-4}} \qquad (5.3.5)$$

For the adaptive algorithm, we use the RLS algorithm. The simulation conditions are $\lambda = 1$ and $p_0 = 0.001$. The total number of iterations in the simulation is 2000. Figures 5.12 to 5.18 show the simulation results on the following subjects: Adaptive IIR Filter Denominator Polynomial Parameter Estimation; Adaptive IIR Filter Numerator Polynomial Parameter Estimation; System Output; Original Corrupted Signal; Control Output; Mean Square Error; Mean Minimum Eigenvalue of the Inverse Correlation Matrix. At iteration 1998, the parameter vector of the adaptive IIR filter is [ 1.1322 1.1322 0.1105 0.1105 1.0338 -0.3162 -0.4477 -0.2433 ], corresponding to the vector


$$\theta = \big[ a_1, a_2, a_3, a_4,\; b_0, b_1, b_2, b_3 \big]^T$$

where the elements of the vector are the parameters of the adaptive IIR filter.


[Figure 5.12: Adaptive IIR filter denominator polynomial parameter estimation]


[Figure 5.13: Adaptive IIR filter numerator polynomial parameter estimation]


[Figure 5.14: System output]


[Figure 5.15: Original corrupted signal]


[Figure 5.16: Control output]


[Figure 5.17: Mean square error]


[Figure 5.18: Mean minimum eigenvalue of the inverse correlation matrix]


Simulations in Problems A, B and C are based on Method One described in Chapter 4, which is an approximate solution. The simulations based on Method Two turned out to be less successful: by taking into account the recursive structure of the gradient estimate, the simulation experience shows that the algorithm converges more slowly and tends to become unstable easily. To avoid such an undesirable result, certain measures must be applied so that the stability problem can be resolved during the adaptation process. The adaptive parallel models described in Problems A, B and C were originally presented in the article "Stochastic Parallel Model Adaptation: Theory and Applications to Active Noise Canceling, Feedforward Control, IIR Filtering, and Identification" by Dr. Wei Ren and Dr. P. R. Kumar, published in IEEE Transactions on Automatic Control, Vol. 37, No. 5, May 1992. The main algorithms used in the article are the Stochastic Gradient (SG) and the Modified Least Squares (MLS) algorithms. The article also provides convergence proofs for these two algorithms.


1.2, 1.3

function [] = opt(x, d, alfa)
% Example: solving an optimization problem with the estimation method
% Data used in simulation: x = 2; d = 5; alfa = 1/8
f = 0;
for i = 1:5
    e = d - f*x;          % estimation error
    J(i) = e*e;           % squared error
    f = f + alfa*e*x;     % parameter update
    F(i) = f;
end
figure; plot(J,'wo'); grid;
figure; plot(F,'wo'); grid;


3.1

function [] = firlms(alfa, A)
% Example of the LMS algorithm solving an identification problem
% Data used in simulation: A = [1.5 -2.7 -0.5 0.6]; alfa = 0.05
f = zeros(4,1); T = zeros(3,1); pp = zeros(4,4);
E = 0; V = 0; y = 0.0001;
w = randn(402,1);                    % noise generator
n = 1;
while 1 + n < 400
    xx(1,1) = w(n+1,1) + 0.9*w(n,1);
    xx(2:4,1) = T;
    T = xx(1:3,1);
    y = A*xx;                        % output of the unknown system
    s = 0;
    d = s + y;
    yhat = f'*xx;                    % adaptive identifier output
    e = d - yhat;                    % comparison error


    EE(n,1) = e;
    c1 = e*alfa;                     % LMS algorithm computation
    f = f + c1*xx;
    param(:,n) = f;
    E = E + e*e;
    MSE(n,1) = E/n;                  % mean square error
    pp = pp + xx*xx';
    L = eig(pp);
    LMIN = L(1,1);
    for k = 2:4
        if L(k,1) < LMIN
            LMIN = L(k,1);
        end
    end
    LM(n,1) = LMIN/n;                % mean minimum eigenvalue
    n = n + 1;
end

figure; plot(EE,'w'); grid; title('System output e(n)');
figure; plot(param','w'); grid; title('FIR identifier parameters');


3.5

function [] = firrls(p0, A)
% Example of the RLS algorithm solving an identification problem
% Data used in simulation: A = [1.5 -2.7 -0.5 0.6]; p0 = 0.8
f = zeros(4,1); T = zeros(3,1); pp = zeros(4);
E = 0; V = 0; y = 0.0001;
p = p0*eye(4);
w = randn(102,1);                    % noise generator
n = 1;
while 1 + n < 102
    xx(1,1) = w(n+1,1) + 0.9*w(n,1);
    xx(2:4,1) = T;
    T = xx(1:3,1);
    y = A*xx;                        % unknown system output
    s = 0;
    d = s + y;
    yhat = f'*xx;                    % adaptive identifier output
    TT = xx'*p;                      % RLS (non-weighted) computation
    Q = TT*xx;                       % denominator part
    L = xx*TT;


    q = p*L;
    p = p - (1/(1 + Q))*q;
    c = p*xx;
    e = d - yhat;
    EE(n,1) = e;
    c1 = e*c;
    f = f + c1;
    param(:,n) = f;
    E = E + e*e;
    MSE(n,1) = E/n;
    pp = pp + xx*xx';
    L = eig(pp);
    LMIN = L(1,1);
    for k = 2:4
        if L(k,1) < LMIN
            LMIN = L(k,1);
        end
    end
    LM(n,1) = LMIN/n;
    n = n + 1;

end
figure; plot(LM,'w'); grid; title('Mean minimum eigenvalue of the inverse correlation matrix');
figure; plot(MSE,'w'); grid; title('Mean square error');
figure; plot(EE,'w'); grid; title('System output');
figure; plot(param','w'); grid; title('FIR identifier parameters');


A 5.1

function [] = IIRnois(p0, B, A)
% Problem A
% Data used in simulation: A = -0.4; B = [2.5 -1.7 0.8]; p0 = 0.1
% where G(q^-1) = B/(1 + A*q^-1)
f = zeros(4,1); T = zeros(2,1); pp = zeros(4);
p = p0*eye(4);
E = 0; V = 0; y = 0.0001;
w = randn(1022,1);
yg = 0;                              % unknown-plant output state
n = 1;
while 1 + n < 1020
    xx(1,1) = w(n+1,1) + 0.9*w(n,1); % disturbance sample s(n)
    xx(2:3,1) = T;
    T = xx(1:2,1);
    yg = B*xx - A*yg;                % unknown plant output y(n)
    v = 10*cos(2*pi*n/30);           % information-bearing signal
    d = v + yg;                      % corrupted measurement d(n)

    Cxx(1,1) = (-1)*y;               % information signal vector (-y, x)
    Cxx(2:4,1) = xx;
    y = f'*Cxx;                      % adaptive IIR filter output
    TT = Cxx'*p;
    Q = TT*Cxx;                      % denominator part
    L = Cxx*TT;                      % numerator part
    q = p*L;
    p = p - (1/(1 + Q))*q;
    c = p*Cxx;                       % new update weight
    e = d - y;
    EE(n,1) = e;
    c1 = e*c;
    f = f + c1;                      % update filter parameters
    param(:,n) = f;
    E = E + e*e;                     % summation of error square
    MSE(n,1) = E/n;                  % mean square error
    pp = pp + Cxx*Cxx';              % signal matrix for eigenvalues
    L = eig(pp);
    LMIN = L(1,1);
    for k = 2:4


        if L(k,1) < LMIN
            LMIN = L(k,1);
        end
    end
    LM(n,1) = LMIN/n;                % mean minimum eigenvalue
    n = n + 1;
end

function [] = IIRfowd(B, A, HB, HA, PB, PA, PP0)
% Problem B
% Data used for simulation:
% B = [2.5 -1.7 0.8]; A = -0.4; PP0 = 0.0012; where G(q^-1) = B/(1 + A*q^-1)
% HB = [1.5 -1.3 0.4]; HA = -0.5; where H(q^-1) = HB/(1 + HA*q^-1)
% PB = [1.2 0.5]; PA = [-1.6 0.64]; where P(q^-1) = PB/(1 + PA*q^-1)
nb = length(B); na = length(A);
npb = length(PB); npa = length(PA);
nhb = length(HB); nha = length(HA);
nkb = nb + nha + npa - 2;            % controller numerator length
nka = na + nhb + npb - 1;            % controller denominator length
E = 0; r = 1; q = 0.0001;            % set initial condition
KB = ones(1,nkb)*q;
KA = ones(1,nka)*q;                  % controller: K(q^-1,t) = KB/(1 + KA*q^-1)
SigHi = ones(nhb,1)*q;               % signal s vector for H
capM = ones(nha,1)*q;                % signal m vector for update


SigGi = ones(nb,1)*q;                % signal s vector for G
capY = ones(na,1)*q;                 % signal y vector for update
SigKi = ones(nb,1)*q;                % signal m vector for K
capU = ones(nka,1)*q;                % control signal vector for update
SigPi = ones(npb,1)*q;               % control signal vector for P
capYhat = ones(npa,1)*q;             % signal yhat vector for update
mw = max(nha,nkb); M = ones(mw,1)*q;
uw = max(nka,npb); U = ones(uw,1)*q;
sw = max(nhb,nb); S = ones(sw,1)*q;
nn = nka + nkb;
sizk = nn(1,1);
XX = zeros(sizk);
PP = PP0*eye(sizk);
H(1,1:nha) = HA; H(1,(nha+1):(nha+nhb)) = HB;
ha(1) = 1; ha(2:(nha+1)) = HA;
P(1,1:npa) = PA; P(1,(npa+1):(npa+npb)) = PB;
pa(1) = 1; pa(2:(npa+1)) = PA;
G(1,1:na) = A; G(1,(na+1):(na+nb)) = B;
ga(1) = 1; ga(2:(na+1)) = A;
% Option for plotting pole-zero distribution of transfer functions H, P, G


figure;
subplot(2,2,1); zplane(HB,ha); grid; title('Poles and Zeros of H(q^-1)');
subplot(2,2,2); zplane(PB,pa); grid; title('Poles and Zeros of P(q^-1)');
subplot(2,2,3); zplane(B,ga); grid; title('Poles and Zeros of G(q^-1)');
K(1,1:nka) = KA; K(1,(nka+1):(nka+nkb)) = KB;
f = K';
[Si, Wd] = usflsig(1520);            % obtain sinusoid signal and white noise
n = 1;
while 1 + n < 1500
    v(n) = Si(n);                    % obtaining v(n)
    s(n) = Wd(n);                    % the down channel: noise sampling
    curr = s(n); pastvec = S;
    S = sigvec(curr, pastvec);       % update S buffer
    z1 = nha; z2 = nhb;
    capMi = adjsiz(M, z1);           % resizing capM
    SigHi = adjsiz(S, z2);           % resizing SigHi
    Xm(1:nha,1) = (-1)*capMi;
    Xm((nha+1):(nha+nhb),1) = SigHi;
    m(n) = H*Xm;                     % update signal m(n)
    curr = m(n); pastvec = M;


    M = sigvec(curr, pastvec);       % update M buffer
    z1 = nka; z2 = nkb;
    capUi = adjsiz(U, z1);           % resizing capU
    SigKi = adjsiz(M, z2);           % resizing SigKi
    Xu(1:nka,1) = (-1)*capUi;
    Xu((nka+1):(nka+nkb),1) = SigKi;
    u(n) = K*Xu;                     % update control signal u(n)
    curr = u(n); pastvec = U;
    U = sigvec(curr, pastvec);       % update U buffer
    z2 = npb;
    SigPi = adjsiz(U, z2);           % resizing SigPi
    Xhat(1:npa,1) = (-1)*capYhat;
    Xhat((npa+1):(npa+npb),1) = SigPi;
    yhat(n) = P*Xhat;                % update signal yhat(n)
    z2 = nb;                         % the up channel
    SigGi = adjsiz(S, z2);           % resizing SigGi
    Xy(1:na,1) = (-1)*capY;
    Xy((na+1):(na+nb),1) = SigGi;
    y(n) = G*Xy;                     % update signal y(n)
    d(n) = v(n) + y(n);              % measuring corrupted signal d(n)
    e(n) = d(n) - yhat(n);           % measuring error signal e(n)
    prsigv = Xu;
    Pin = PP;


    PP = rlsp(Pin, prsigv, r);
    f = f + PP*prsigv*e(n);
    param(n,:) = f;
    K = f';
    curr = m(n); pastvec = M;        % signal vector updates
    M = sigvec(curr, pastvec);
    curr = u(n); pastvec = U;
    U = sigvec(curr, pastvec);
    curr = yhat(n); pastvec = capYhat;
    capYhat = sigvec(curr, pastvec);
    curr = y(n); pastvec = capY;
    capY = sigvec(curr, pastvec);
    E = E + e(n)*e(n);
    MSE(n) = E/n;
    XX = XX + Xu*Xu';
    L = eig(XX);
    Lmin = L(1);
    for k = 2:nn
        if L(k) < Lmin
            Lmin = L(k);

        end
    end
    LM(n) = Lmin/n;                  % minimum eigenvalue of inverse correlation matrix
    n = n + 1;
end
pden = param(:,1:nka);
pnum = param(:,(nka+1):(nka+nkb));
ka(1) = 1; ka(2:(nka+1)) = KA;
subplot(2,2,4); zplane(KB,ka); grid; title('parameter');
figure; plot(LM,'w'); grid; title('Mean minimum eigenvalue of the inverse correlation matrix');
figure; plot(MSE,'w'); grid; title('Mean square error');
figure; plot(d,'w'); title('Original corrupted system output');
figure; plot(e,'w'); grid; title('System output e(n)');
figure; plot(u,'w'); grid; title('Control output');
figure; plot(pden,'w'); grid; title('Adaptive IIR filter denominator parameters');
figure; plot(pnum,'w'); grid; title('Adaptive IIR filter numerator parameters');
param(1498,1:(nka+nkb))


5.12

function [] = IIRfebc(B, A, HB, HA, PB, PA, FB, FA, PP0)
% Problem C
% Data used in simulation:
% B = [2.5 -1.7 0.8]; A = -0.4; PP0 = 0.001; where G(q^-1) = B/(1 + A*q^-1)
% HB = [1.5 -1.3 0.4]; HA = -0.5; where H(q^-1) = HB/(1 + HA*q^-1)
% PB = [1.2 0.5]; PA = [-1.6 0.64]; where P(q^-1) = PB/(1 + PA*q^-1)
% FB = [1 -0.2]; FA = 0.2; where F(q^-1) = FB/(1 + FA*q^-1)
nb = length(B); na = length(A);
npb = length(PB); npa = length(PA);
nhb = length(HB); nha = length(HA);
nfb = length(FB); nfa = length(FA);
g = max((nfa+nb), (na+nfa+nhb+npb-3));
nkb = g;                             % controller numerator length
nka = nb + nfb - 1;                  % controller denominator length
E = 0; r = 1; q = 0.0001; m1 = q;    % set initial condition


KB = ones(1,nkb)*q;
KA = ones(1,nka)*q;                  % controller: K(q^-1,t) = KB/(1 + KA*q^-1)
SigHi = ones(nhb,1)*q;               % signal s vector for H
capM = ones(nha,1)*q;                % signal m vector for update
SigGi = ones(nb,1)*q;                % signal s vector for G
capY = ones(na,1)*q;                 % signal y vector for update
SigKi = ones(nb,1)*q;                % signal m vector for K
capU = ones(nka,1)*q;                % control signal vector for update
SigPi = ones(npb,1)*q;               % control signal vector for P
capYhat = ones(npa,1)*q;             % signal yhat vector for update
SigFi = ones(nfb,1)*q;               % control signal vector for F
mw = max(nha,nkb); M = ones(mw,1)*q;
MM = ones(nkb,1)*q;
uw = max(nka,npb); U = ones(uw,1)*q;
sw = max(nhb,nb); S = ones(sw,1)*q;
M1 = ones(nfa,1);
nn = nka + nkb;
sizk = nn(1,1);
XX = zeros(sizk);
PP = PP0*eye(sizk);


% Obtaining combined parameter vectors
H(1,1:nha) = HA; H(1,(nha+1):(nha+nhb)) = HB;
K(1,1:nka) = KA; K(1,(nka+1):(nka+nkb)) = KB;
P(1,1:npa) = PA; P(1,(npa+1):(npa+npb)) = PB;
G(1,1:na) = A; G(1,(na+1):(na+nb)) = B;
F(1,1:nfa) = FA; F(1,(nfa+1):(nfa+nfb)) = FB;
f = K';
% Option to plot pole-zero distribution of transfer functions H, P, G, F
ha(1) = 1; ha(2:(nha+1)) = HA;
pa(1) = 1; pa(2:(npa+1)) = PA;
ga(1) = 1; ga(2:(na+1)) = A;
fa(1) = 1; fa(2:(nfa+1)) = FA;
figure;
subplot(2,2,1); zplane(HB,ha); grid; title('Poles and Zeros of H(q^-1)');
subplot(2,2,2); zplane(PB,pa); grid; title('Poles and Zeros of P(q^-1)');
subplot(2,2,3); zplane(B,ga); grid; title('Poles and Zeros of G(q^-1)');
subplot(2,2,4); zplane(FB,fa); grid; title('Poles and Zeros of F(q^-1)');
[Si, Wd] = usflsig(2020);            % obtaining sinusoid signal and white noise
n = 1;
while 1 + n < 2000
    v(n) = Si(n);                    % obtaining v(n)


    s(n) = Wd(n);                    % the down channel: measuring disturbance
    curr = s(n); pastvec = S;
    S = sigvec(curr, pastvec);       % update buffer S
    z1 = nha; z2 = nhb;
    capMi = adjsiz(M, z1);           % resizing capM
    SigHi = adjsiz(S, z2);           % resizing SigHi
    Xm(1:nha,1) = (-1)*capMi;
    Xm((nha+1):(nha+nhb),1) = SigHi;
    m(n) = H*Xm;                     % update signal m(n)
    mm(n) = m(n) + m1;               % add feedback contribution
    curr = mm(n); pastvec = MM;
    MM = sigvec(curr, pastvec);      % update buffer MM
    z1 = nka;
    capUi = adjsiz(U, z1);           % resizing capU
    Xu(1:nka,1) = (-1)*capUi;
    Xu((nka+1):(nka+nkb),1) = MM;
    u(n) = K*Xu;                     % update control signal u(n)
    curr = u(n); pastvec = U;
    U = sigvec(curr, pastvec);       % update buffer U
    z2 = npb;


    SigPi = adjsiz(U, z2);           % resizing SigPi
    Xhat(1:npa,1) = (-1)*capYhat;
    Xhat((npa+1):(npa+npb),1) = SigPi;
    yhat(n) = P*Xhat;                % update signal yhat(n)
    z2 = nfb;
    SigFi = adjsiz(U, z2);           % resizing SigFi
    Xf(1:nfa,1) = (-1)*M1;
    Xf((nfa+1):(nfa+nfb),1) = SigFi;
    m1 = F*Xf;                       % update feedback signal m1(n)
    z2 = nb;                         % the up channel
    SigGi = adjsiz(S, z2);           % resizing SigGi
    Xy(1:na,1) = (-1)*capY;
    Xy((na+1):(na+nb),1) = SigGi;
    y(n) = G*Xy;                     % update signal y(n)
    d(n) = v(n) + y(n);              % measuring corrupted signal sample
    e(n) = d(n) - yhat(n);           % measuring error e(n)
    prsigv = Xu;
    Pin = PP;
    PP = rlsp(Pin, prsigv, r);       % update PP(n)
    f = f + PP*prsigv*e(n);          % IIR filter parameter update
    param(n,:) = f;                  % storing parameter history
    K = f';
    curr = m(n); pastvec = M;        % signal vector updates


    M = sigvec(curr, pastvec);
    curr = m1; pastvec = M1;
    M1 = sigvec(curr, pastvec);
    curr = u(n); pastvec = U;
    U = sigvec(curr, pastvec);
    curr = yhat(n); pastvec = capYhat;
    capYhat = sigvec(curr, pastvec);
    curr = y(n); pastvec = capY;
    capY = sigvec(curr, pastvec);
    E = E + e(n)*e(n);
    MSE(n) = E/n;
    XX = XX + Xu*Xu';
    L = eig(XX);
    Lmin = L(1);
    for k = 2:nn
        if L(k) < Lmin
            Lmin = L(k);
        end
    end
    LM(n) = Lmin/n;                  % minimum eigenvalue of inverse correlation matrix

    n = n + 1;
end
pden = param(:,1:nka);
pnum = param(:,(nka+1):(nka+nkb));
figure; plot(LM,'w'); grid; title('Mean minimum eigenvalue of the inverse correlation matrix');
figure; plot(MSE,'w'); grid; title('Mean square error');
figure; plot(e,'w'); grid; title('System output');
figure; plot(d,'w'); grid; title('Original corrupted signal');
figure; plot(u,'w'); grid; title('Control output');
figure; plot(pden,'w'); grid; title('Adaptive IIR filter denominator parameters');
figure; plot(pnum,'w'); grid; title('Adaptive IIR filter numerator parameters');
ka(1) = 1; ka(2:(nka+1)) = KA;
figure; zplane(KB,ka); grid; title('Poles and Zeros of K(q^-1)');
param(1998,1:nka+nkb)


function [S, Wd] = usflsig(L)
% Signal generator: S(n) = 10*cos(2*pi*n/30); Wd(n) = w(n) + 0.9*w(n-1),
% where w(n) is white noise; L -- returned signal size, from n = 1 to n = L
n = 1:L;
S(n) = 10*cos((2*pi*n)/30);
W = randn(1,L);
Wdelay = 0;
for m = 1:L
    Wd(1,m) = W(1,m) + 0.9*Wdelay;
    Wdelay = W(1,m);
end
end


function [U] = sigvec(curr, pastvec)
% Given the current value and the past signal vector,
% return the updated (shifted) signal vector
L = size(pastvec, 1);
U(1,1) = curr;
U(2:L,1) = pastvec(1:(L-1),1);
end


function [SIGO] = adjsiz(CAP, d)
% Adjusting buffer size
a = size(CAP);
b = a(1,1);
if b > d
    SIGO = CAP(1:d,1);
else
    SIGO = CAP;
end
end


function [Pout] = rlsp(Pin, prsigv, r)
% Computing matrix P(n)
T = prsigv'*Pin;
Q = T*prsigv;                        % denominator part
L = prsigv*T;                        % numerator part
q = Pin*L;
Pout = (Pin - (1/(r + Q))*q)/r;      % update weight
end


1. Wei Ren and P. R. Kumar, "Stochastic Parallel Model Adaptation: Theory and Applications to Active Noise Canceling, Feedforward Control, IIR Filtering, and Identification," IEEE Transactions on Automatic Control, Vol. 37, No. 5, May 1992.
2. Peter M. Clarkson, Optimal and Adaptive Signal Processing, CRC Press, Inc., p. 163, 329, pp. 247-249, 1993.
3. Paul Glasserman, Gradient Estimation via Perturbation Analysis, AT&T Bell Laboratories, p. 10, 1991.
4. Bernard Widrow and Eugene Walach, Adaptive Inverse Control, Prentice Hall PTR, Inc., p. 59, 77, 428, 1996.
5. Craig Marven and Gillian Ewers, A Simple Approach to Digital Signal Processing, John Wiley & Sons, Inc., p. 19, 113, pp. 120-121, 1996.
6. Simon Haykin, Adaptive Filter Theory, Prentice Hall, Inc., A Simon & Schuster Company, p. 1, 4, 1996.