Citation
Recursive parametric system identification algorithms

Material Information

Title:
Recursive parametric system identification algorithms
Creator:
Fratino, Joseph Andrew
Publication Date:
Language:
English
Physical Description:
vi, 161 leaves : ill. ; 29 cm.

Subjects

Subjects / Keywords:
System identification ( lcsh )
Recursive functions ( lcsh )
Recursive functions ( fast )
System identification ( fast )
Genre:
bibliography ( marcgt )
theses ( marcgt )
non-fiction ( marcgt )

Notes

Thesis:
Thesis (M.S.)--University of Colorado at Denver, 1994.
Bibliography:
Includes bibliographical references (leaves 159-161).
General Note:
Submitted in partial fulfillment of the requirements for the degree, Master of Science, Electrical Engineering.
General Note:
Department of Electrical Engineering
Statement of Responsibility:
by Joseph Andrew Fratino.

Record Information

Source Institution:
University of Colorado Denver
Holding Location:
Auraria Library
Rights Management:
All applicable rights reserved by the source institution and holding location.
Resource Identifier:
31250653 ( OCLC )
ocm31250653



Full Text

PAGE 1

RECURSIVE PARAMETRIC SYSTEM IDENTIFICATION ALGORITHMS by Joseph Andrew Fratino B.S., The Ohio State University, 1987 A thesis submitted to the Faculty of the Graduate School of the University of Colorado at Denver in partial fulfillment of the requirements for the degree of Master of Science Electrical Engineering 1994

PAGE 2

This thesis for the Master of Science degree by Joseph Andrew Fratino has been approved for the Department of Electrical Engineering by Jan T. Bialasiewicz Marvin F. Anderson Date

PAGE 3

Fratino, Joseph Andrew (M.S., Electrical Engineering) Recursive Parametric System Identification Algorithms Thesis directed by Assistant Professor Miloje S. Radenkovic ABSTRACT Three recursive identification algorithms are analyzed: the recursive least squares algorithm, the extended least squares algorithm, and the output error method. The mathematical properties as well as the derivation of each algorithm are discussed. A method for parameter set estimation for systems with unmodeled dynamics is also presented. Computer simulations are used to substantiate the theory for each of the algorithms. The simulations show that the performance and convergence properties of the algorithms are satisfactory for practical applications. Two practical identification examples are also included. This abstract accurately represents the content of the candidate's thesis. I recommend its publication. Signed: Miloje S. Radenkovic

PAGE 4

ACKNOWLEDGMENTS I would like to thank the Martin Marietta Corporation for supporting my graduate work at the University of Colorado at Denver through their program for study under company auspices. Dr. Miloje Radenkovic, my thesis advisor, provided me with guidance and inspiration throughout my thesis research. I would also like to thank my parents, who supported and encouraged me.

PAGE 5

CONTENTS

CHAPTER
1. INTRODUCTION .......................................................... 1
   1.1 Overview ............................................................ 1
   1.2 Thesis Outline ...................................................... 4
   1.3 Notation and Terminology ............................................ 5
2. SYSTEM IDENTIFICATION .................................................. 6
   2.1 Recursive Least Squares Algorithm ................................... 6
   2.2 Extended Least Squares Algorithm ................................... 12
   2.3 Output Error Algorithm ............................................. 14
   2.4 Convergence Properties ............................................. 17
   2.5 Tracking Time-Varying Systems ...................................... 33
   2.6 Model Validation ................................................... 35
   2.7 Practical Aspects .................................................. 37
   2.8 Systems with Unmodeled Dynamics .................................... 40
   2.9 Applications ....................................................... 43
3. SIMULATION EXAMPLES ................................................... 45
   3.1 Practical Examples ................................................. 45
   3.2 Simulated Models: Examples ......................................... 79
   3.3 Tracking Time-Varying Systems ...................................... 91

PAGE 6

   3.4 Unmodeled Dynamics ................................................. 94
4. CONCLUSIONS ........................................................... 99
APPENDIX A. Matlab Input Files for Simulations .......................... 101
REFERENCES .............................................................. 159

PAGE 7

CHAPTER 1 INTRODUCTION 1.1 Overview System identification is a technique used to build mathematical models based on observed input-output data. Mathematical models are necessary to represent systems in a variety of applications, including control systems, signal processing and prediction. Modeling the steering dynamics of a ship to control its desired path is an example in the control systems area. A common problem in signal processing is to obtain a model for a communication channel in order to carry out channel equalization. The stock market is an example of a system for which predicting future values would be of interest. In all of the above cases it is necessary to have a mathematical model of the given system in order to design a controller/filter/predictor. Nonparametric and parametric models are the two types of dynamic models used to represent systems. Nonparametric models are based on frequency response or step response. Parametric models represent systems in transfer function or difference equation form, which makes them the most suitable for most control/signal processing applications. This thesis

PAGE 8

is concerned with the identification of sampled discrete-time parametric models used in the design and tuning of digital control systems. There are two basic approaches to building a parametric dynamic model of a given system. The first method is based on physical knowledge of the system and is developed using the laws of physics that govern the system's behavior. The models developed with this method are usually extremely complex and can only rarely be used directly for the design of control systems. Also, in many practical situations this type of direct modeling is not possible, with the parameter values being very difficult to determine. The second approach uses experimental data (input-output data) for direct identification of dynamic models. This method, which is known as system identification, can be used in the majority of practical situations and has proved to be most valuable. System identification can be carried out in two different ways. The first is commonly called off-line or batch identification. In this case, a batch of input-output data is collected from a system and, as a separate procedure, this batch of data is used to construct a model. The second method, which was developed from the off-line method, is called recursive identification. This method, which is the main topic of this thesis, processes the input-output data on-line (recursively). The main advantage of processing the data on-line is that this method can be used in conjunction with adaptive control, adaptive filtering, and adaptive signal processing problems.

PAGE 9

Three recursive identification algorithms will be presented and analyzed thoroughly in this thesis. They are the recursive least squares algorithm, the extended least squares algorithm, and the output error algorithm. The simplest of the algorithms is the least squares. The history of the least squares algorithm can be traced back to Plackett [18] in 1950. The first thorough treatment of applying the least squares algorithm to dynamic systems was given in 1968 by Astrom [1], with the basic results of this report summarized in 1971 by Astrom and Eykhoff [2]. The extended least squares algorithm is a natural extension of the least squares algorithm, with the capability to identify more complex systems. The extended least squares algorithm was independently derived by two authors, Young [20] and Panuska [17], in 1968. The output error method was developed from the adaptive control viewpoint and was first discussed by Landau [6][7] in 1974 and 1976. This method is also very similar to the least squares algorithm, with improved results for identifying systems with disturbances acting only on the output. The models identified by any of the three algorithms listed above are structured such that they fit in with well-known control theory problems. For systems with unmodeled dynamics, which are a concern in robust control, a method for determining the worst-case bounds for the parameters will be briefly discussed.

PAGE 10

This thesis presents a thorough analysis of the three recursive identification algorithms discussed above, using simulations to coincide with the theoretical aspects of the algorithms. A correlation between robust control and identification will also be made. 1.2 Thesis Outline Chapter 2 starts with a derivation of the recursive least squares algorithm. The derivations for the extended least squares and the output error recursive algorithms are then presented, followed by an in-depth study of the convergence properties of the algorithms. The topic of time-varying system identification is then briefly discussed, followed by the practical topics of model validation and practical aspects of system identification. The next subject covered is systems with unmodeled dynamics and the relation to robust control. Finally, some of the applications of recursive identification are presented to end chapter 2. Chapter 3 presents simulation experiments to demonstrate the important aspects of system identification that were presented theoretically in chapter 2. Some practical examples are also included, along with an example for unmodeled dynamics. In chapter 4, conclusions are made between the theory presented in chapter 2 and the simulations that were made in chapter 3.

PAGE 11

1.3 Notation and Terminology

Abbreviations

RLS - recursive least squares
ELS - extended least squares
OE - output error
w.p.1 - with probability one
P.E. - persistently exciting
PRBS - pseudo-random binary sequence

Notations

E[.] - expectation operator
θ - system parameter vector
θ̂ - estimated parameter vector


PAGE 12

CHAPTER 2 SYSTEM IDENTIFICATION 2.1 Recursive Least Squares Algorithm Consider a dynamic system with an output signal y(t) and an input signal u(t). Suppose that these signals are sampled in discrete time t = 1,2,3,... and that the following linear difference equation relates the signals:

y(t) + a1 y(t-1) + ... + an y(t-n) = b1 u(t-1-d) + ... + bm u(t-m-d) + e(t)   (2.1)

where e(t) is an unknown disturbance to the system and d is the time delay of the discrete-time system. Denoting q^{-1} as the backward shift (or delay) operator, defined as

q^{-1} y(t) = y(t-1),   (2.2)

equation (2.1) can be rewritten as

A(q^{-1}) y(t) = q^{-d} B(q^{-1}) u(t) + e(t),   (2.3)

where A(q^{-1}) and B(q^{-1}) are polynomials in the delay operator of order n and m, respectively:

A(q^{-1}) = 1 + a1 q^{-1} + ... + an q^{-n}
B(q^{-1}) = b1 q^{-1} + b2 q^{-2} + ... + bm q^{-m}.   (2.4)

The parameters of the system can be expressed in terms of the following parameter vector

PAGE 13

θ^T = [a1 ... an  b1 ... bm].   (2.5)

The lagged input-output data can also be expressed as a vector. This vector is known as the measurement or observation vector, and is given by

φ^T(t) = [-y(t-1) ... -y(t-n)  u(t-1-d) ... u(t-m-d)].   (2.6)

Using the relationships given in equations (2.5) and (2.6), equation (2.1) can now be written as

y(t) = θ^T φ(t) + e(t).   (2.7)

The output {y(t)} from equation (2.7) can be thought of as an unknown linear combination of the components of the observation vector φ(t) plus the disturbance noise term e(t). The purpose of the least squares method is to estimate the parameter vector θ from the measurements of the output y(t) and the input signal u(t). A common way to choose this estimate is to minimize the equation error term, which is the unknown disturbance term e(t). Define the following criterion function:

V_N(θ) = Σ_{t=1}^{N} α_t [y(t) - θ^T φ(t)]²   (2.8)

where α_t is a sequence of positive numbers included to make it possible to weight the observations differently. In most applications α_t is set equal to 1. Since V_N(θ) is quadratic in θ, it can be minimized by taking the derivative of (2.8) with respect to θ and setting it equal to zero, giving

dV_N/dθ = -2 Σ_{t=1}^{N} α_t φ(t)[y(t) - θ^T φ(t)] = 0;
PAGE 14

now solving for θ̂ gives

θ̂(N) = [Σ_{t=1}^{N} α_t φ(t)φ^T(t)]^{-1} [Σ_{t=1}^{N} α_t φ(t) y(t)],   (2.9)

provided the inverse exists. At this point it can be shown that equation (2.9) can be rewritten in a recursive fashion. Denote

R(t) = Σ_{k=1}^{t} α_k φ(k)φ^T(k),   (2.10)

so that

R(t) = R(t-1) + α_t φ(t)φ^T(t),   (2.11)

and (2.9) evaluated at time t becomes

θ̂(t) = R^{-1}(t) Σ_{k=1}^{t} α_k φ(k) y(k).   (2.12)

The sum in (2.12) can be rewritten as

Σ_{k=1}^{t} α_k φ(k) y(k) = R(t-1) θ̂(t-1) + α_t φ(t) y(t) = [R(t) - α_t φ(t)φ^T(t)] θ̂(t-1) + α_t φ(t) y(t),   (2.13)

and substituting (2.13) into (2.12) gives the recursive form

θ̂(t) = θ̂(t-1) + R^{-1}(t) α_t φ(t)[y(t) - φ^T(t) θ̂(t-1)]   (2.14a)
PAGE 15

= θ̂(t-1) + R^{-1}(t) α_t φ(t) ε(t),   (2.14b)

where ε(t) = y(t) - φ^T(t)θ̂(t-1) is the prediction error. Equations (2.14a) and (2.14b) have the desired form for a recursive algorithm; however, there is one major problem. The algorithm is not well suited for computation, since a matrix {R(t)} has to be inverted at each time step. In order to solve this problem, it is helpful to introduce F(t) = R^{-1}(t) in accordance with the result of the following lemma.

Lemma 2.1.1 Let A, B, C, and D be matrices of compatible dimensions, so that the product BCD and the sum A + BCD exist. Then

[A + BCD]^{-1} = A^{-1} - A^{-1} B [D A^{-1} B + C^{-1}]^{-1} D A^{-1}.   (2.15)

Proof: Multiply the right-hand side of equation (2.15) by A + BCD from the right to show that it equals the identity I. This gives

I + A^{-1}BCD - A^{-1}B[DA^{-1}B + C^{-1}]^{-1}D - A^{-1}B[DA^{-1}B + C^{-1}]^{-1}DA^{-1}BCD
= I + A^{-1}B[DA^{-1}B + C^{-1}]^{-1}{[DA^{-1}B + C^{-1}]CD - D - DA^{-1}BCD}.

The term in braces equals zero, so the right-hand side of equation (2.15) is reduced to

I + A^{-1}B[DA^{-1}B + C^{-1}]^{-1}{0} = I,

which proves (2.15). Applying the result of Lemma 2.1.1 to equation (2.14) with A = F^{-1}(t-1), B = φ(t), C = α_t, and D = φ^T(t) gives
PAGE 16

F(t) = F(t-1) - F(t-1)φ(t)φ^T(t)F(t-1) / (1/α_t + φ^T(t)F(t-1)φ(t)).   (2.16)

Equation (2.16) updates F(t) = R^{-1}(t) directly, so no matrix inversion is required at any time step; note also that the gain vector in (2.14b) becomes R^{-1}(t)α_t φ(t) = F(t-1)φ(t) / (1/α_t + φ^T(t)F(t-1)φ(t)).
PAGE 17

Setting α_t = 1 for all t and combining (2.14b) with (2.16), the recursive least squares (RLS) algorithm takes its final form:

ε(t) = y(t) - θ̂^T(t-1) φ(t)
θ̂(t) = θ̂(t-1) + F(t) φ(t) ε(t)   (2.17)
F(t) = F(t-1) - F(t-1)φ(t)φ^T(t)F(t-1) / (1 + φ^T(t)F(t-1)φ(t)).   (2.18)

The algorithm is initialized with θ̂(0) (typically zero) and F(0) = (1/δ)I, where δ is a small positive constant, so that F(0) is a large positive definite matrix.
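To make the structure of (2.17)-(2.18) concrete, a minimal MATLAB sketch of one RLS update step is given below. The function name rls_update is an illustrative choice (it is not one of the Appendix A input files) and would be saved as rls_update.m.

    % One step of the recursive least squares algorithm (2.17)-(2.18).
    % theta: parameter estimate, F: adaptation gain matrix,
    % phi: measurement vector, y: current output sample.
    function [theta, F, e] = rls_update(theta, F, phi, y)
    e = y - theta'*phi;               % prediction error
    Fphi = F*phi;
    den = 1 + phi'*Fphi;
    theta = theta + (Fphi/den)*e;     % parameter update (2.17)
    F = F - (Fphi*Fphi')/den;         % gain update (2.18)
    end

A typical initialization, matching the simulations of chapter 3, would be theta = zeros(p,1) and F = 500*eye(p), where p is the number of parameters.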
PAGE 18

2.2 Extended Least Squares Algorithm

The least squares model (2.7) assumes that the disturbance enters the difference equation as white noise. A more general model allows the disturbance to be colored, described by

y(t) + a1 y(t-1) + ... + an y(t-n) = b1 u(t-1-d) + ... + bm u(t-m-d) + e(t) + c1 e(t-1) + ... + cr e(t-r)   (2.20)

or, in operator form,

A(q^{-1}) y(t) = q^{-d} B(q^{-1}) u(t) + C(q^{-1}) e(t),   (2.21)

where

C(q^{-1}) = 1 + c1 q^{-1} + ... + cr q^{-r}.   (2.22)

Defining the parameter vector θ^T = [a1 ... an  b1 ... bm  c1 ... cr] and the measurement vector

φ0^T(t) = [-y(t-1) ... -y(t-n)  u(t-1-d) ... u(t-m-d)  e(t-1) ... e(t-r)],   (2.23)

the model (2.20) can be written as

y(t) = θ^T φ0(t) + e(t).   (2.24)

PAGE 19

Comparison of (2.24) with (2.7), which was used in the derivation of the RLS method, reveals that the models are alike, and the RLS algorithm (2.17 and 2.18) can be tried to estimate the parameters θ. The problem is that the variables e(i) included in the measurement vector φ0(t) are not measurable, so the algorithm given in (2.17 and 2.18) cannot be implemented without making adjustments. In order to solve this problem, the components e(i) must be replaced with estimates in the measurement vector φ0(t). Solving for e(t) in (2.20) gives

e(t) = y(t) + a1 y(t-1) + ... + an y(t-n) - b1 u(t-1-d) - ... - bm u(t-m-d) - c1 e(t-1) - ... - cr e(t-r).

The following sequence of estimates is available:

θ̂^T(t) = [â1(t) ... ân(t)  b̂1(t) ... b̂m(t)  ĉ1(t) ... ĉr(t)],

so e(t) can be estimated by the following relation:

ε(t) = y(t) + â1(t)y(t-1) + ... + ân(t)y(t-n) - b̂1(t)u(t-1-d) - ... - b̂m(t)u(t-m-d) - ĉ1(t)ε(t-1) - ... - ĉr(t)ε(t-r)   (2.25)

with

φ^T(t) = [-y(t-1) ... -y(t-n)  u(t-1-d) ... u(t-m-d)  ε(t-1) ... ε(t-r)],   (2.26)

and finally, (2.25) can be written as

ε(t) = y(t) - θ̂^T(t) φ(t).   (2.27)

The ELS algorithm can now be given by replacing φ0(t) of (2.23), which contains unmeasurable components, with the measurement vector φ(t) given in (2.26). The ELS algorithm is then given by

PAGE 20
ε(t) = y(t) - θ̂^T(t-1) φ(t)
θ̂(t) = θ̂(t-1) + F(t) φ(t) ε(t)   (2.28)
F(t) = F(t-1) - F(t-1)φ(t)φ^T(t)F(t-1) / (1 + φ^T(t)F(t-1)φ(t)),

with φ(t) given by (2.26). The algorithm has the same form as RLS; only the measurement vector is extended with the past prediction errors.
PAGE 21
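A sketch of how the ELS recursion (2.28) with the regression vector (2.26) could be organized in MATLAB is shown below; the buffers (newest value first), the orders na = nb = nc = 2 and the delay d = 1 are illustrative assumptions, and rls_update is the sketch from section 2.1.

    % Minimal ELS loop (sketch): past residuals stand in for the
    % unmeasurable e(t-i) entries of the measurement vector (2.26).
    na = 2; nb = 2; nc = 2; d = 1;
    N = length(y);                          % y, u: recorded data vectors
    theta = zeros(na+nb+nc,1); F = 1000*eye(na+nb+nc);
    yb = zeros(na,1); ub = zeros(nb+d,1); eb = zeros(nc,1);
    for t = 1:N
      phi = [-yb; ub(d+1:d+nb); eb];        % regression vector (2.26)
      [theta, F, e] = rls_update(theta, F, phi, y(t));
      yb = [y(t); yb(1:end-1)];             % shift data buffers
      ub = [u(t); ub(1:end-1)];
      eb = [e; eb(1:end-1)];                % store residual estimate of e(t)
    end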

2.3 Output Error Algorithm

The output error method was developed from the adaptive control viewpoint. The basic idea in model reference adaptive control is the use of a reference model, which defines the ideal output for the plant. Adjustments in the controller are made until the plant output coincides with the model output. For system identification, the recorded output from the system is compared to that of an adjustable model, and the model parameters are then updated until the difference cannot be further improved. This method was described in [7][10]. It was developed for plant + disturbance models of the form

y(t) = [q^{-d} B(q^{-1}) / A(q^{-1})] u(t) + e(t),   (2.29)

which can be written as

A(q^{-1}) y(t) = q^{-d} B(q^{-1}) u(t) + A(q^{-1}) e(t),   (2.30)

where the polynomials are defined as follows:

A(q^{-1}) = 1 + a1 q^{-1} + ... + an q^{-n}
B(q^{-1}) = b1 q^{-1} + ... + bm q^{-m}   (2.31)

and e(t) is disturbance noise acting upon the output. Denote the undisturbed output as

ym(t) = [q^{-d} B(q^{-1}) / A(q^{-1})] u(t)   (2.32)

or

ym(t) = -a1 ym(t-1) - ... - an ym(t-n) + b1 u(t-1-d) + ... + bm u(t-m-d).   (2.33)

PAGE 22

The model output may now be calculated from the current estimates of the system parameters at time t, from the following relation:

ŷm(t) = -â1(t) ŷm(t-1) - ... - ân(t) ŷm(t-n) + b̂1(t) u(t-1-d) + ... + b̂m(t) u(t-m-d).   (2.34)

The main feature of this equation is that {ŷm(t)} is based only upon the input and the output estimate sequences; it does not explicitly use the output {y(t)}. The parameter and measurement vectors are defined as

θ̂^T(t) = [â1(t) ... ân(t)  b̂1(t) ... b̂m(t)]

and

φ^T(t) = [-ŷm(t-1) ... -ŷm(t-n)  u(t-1-d) ... u(t-m-d)].

Equation (2.33) can now be written as

ym(t) = θ^T φ(t),

and at time t, before the estimate θ̂(t) is available, the model output will be

ŷm(t) = θ̂^T(t-1) φ(t).   (2.35)

The difference between the model output and the actual output can be minimized just as in the recursive least squares case, which leads to the same form for the algorithm with a different measurement vector. The output error recursive algorithm is given by

ŷm(t) = θ̂^T(t-1) φ(t)
θ̂(t) = θ̂(t-1) + F(t) φ(t) [y(t) - θ̂^T(t-1) φ(t)]   (2.36)
F(t) = F(t-1) - F(t-1)φ(t)φ^T(t)F(t-1) / (1 + φ^T(t)F(t-1)φ(t)).
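The only structural difference from RLS is that the regressor is built from past model outputs rather than past plant outputs. A minimal MATLAB sketch of the loop, under the same illustrative assumptions as before (na = nb = 2, d = 0, rls_update from section 2.1), is:

    % Minimal output error loop (sketch): the regressor uses past model
    % outputs ym, so the disturbance on y(t) does not enter phi(t).
    na = 2; nb = 2; d = 0;
    N = length(y);                          % y, u: recorded data vectors
    theta = zeros(na+nb,1); F = 500*eye(na+nb);
    ymb = zeros(na,1); ub = zeros(nb+d,1);
    for t = 1:N
      phi = [-ymb; ub(d+1:d+nb)];
      [theta, F] = rls_update(theta, F, phi, y(t));
      ym = theta'*phi;                      % a posteriori model output (2.34)
      ymb = [ym; ymb(1:end-1)];             % shift model output buffer
      ub = [u(t); ub(1:end-1)];
    end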

PAGE 23

As before, the algorithm is started with θ̂(0) = 0 and a large positive definite F(0).

2.4 Convergence Properties

The convergence of the least squares estimates will be examined first for the RLS algorithm. Assume that the data are actually generated by the system

y(t) = θ0^T φ(t) + e(t),   (2.37)

where θ0 is the vector of true system parameters. Substituting (2.37) into the least squares estimate (2.9) (with α_t = 1) gives

θ̂(N) = θ0 + [Σ_{t=1}^{N} φ(t)φ^T(t)]^{-1} [Σ_{t=1}^{N} φ(t) e(t)].   (2.38)
PAGE 24

In order for the estimates to converge to the exact parameter values θ0, the second term on the right-hand side of equation (2.38) must tend to zero as N approaches infinity. To examine this, it is noted that this term converges to its expected value (or mean value) as N approaches infinity. The following relation exhibits this point:

E{φ(t)e(t)} = lim_{N→∞} (1/N) Σ_{t=1}^{N} φ(t)e(t).

This expected value depends on the correlation between the disturbance term e(t) and the measurement vector φ(t). In order for this expected value to be equal to zero, the two terms e(t) and φ(t) must be uncorrelated. This will be true in the following two cases:

1) When the disturbance term is a sequence of independent random variables with zero mean values (white noise). Then e(t) does not depend on what has happened up to time (t-1), so it follows that E{e(t)φ(t)} = 0.

2) When the input u(t) is independent of the zero-mean noise sequence e(t), and the polynomial A(q^{-1}) in (2.4) is equal to 1. Then there are no output {y(t)} terms in the measurement vector φ(t), and E{e(t)φ(t)} = 0.

For n > 0, φ(t) will contain lagged output terms (y(t-1) ... y(t-n)). If this is the case and e(t) is not white noise, then E{e(t)φ(t)} is generally not equal to zero and the estimated parameters will not converge to the actual system parameters. This means that for the recursive least squares algorithm, the

PAGE 25

estimated parameters θ̂(N) will converge to the actual system parameters θ0, usually, only in the above two cases. It should be noted at this point that the estimates for the recursive least squares case converge to the only stationary point of the criterion (which is quadratic in θ), and even though in the above two cases the estimates converge to this global minimum, it is possible that the parameters it converges to will not be the exact parameters because of the input u(t) that was applied. To illustrate this point, the choice of input will now be discussed.

Choice of input

The proper choice of an input signal {u(t)} will be illustrated by an example. Let the discrete-time plant model be described by

y(t) = -a y(t-1) + b u(t-1)   (2.39)

and consider the estimation model

ŷ(t) = -â ŷ(t-1) + b̂ u(t-1).   (2.40)

Assume that the parameter estimates verify the following relation (equality of the steady-state gains):

b̂ / (1 + â) = b / (1 + a).   (2.41)

Let the input signal u(t) = constant. The prediction error is then given by

ε(t) = y(t) - ŷ(t) = (â - a) y(t-1) + (b - b̂) u(t-1),   (2.42)

and equation (2.39) can be expressed as

y(t) = [b q^{-1} / (1 + a q^{-1})] u(t).   (2.43)

Substituting (2.43) into (2.42) and setting the prediction error equal to zero gives

[(â - a) b q^{-1} + (b - b̂)(1 + a q^{-1})] u(t) = [(b - b̂) + q^{-1}(âb - ab̂)] u(t) = 0.   (2.44)

PAGE 26

The characteristics of u(t) can now be found so that equation (2.44) results in zero parametric errors. Denote

b - b̂ = α  and  âb - ab̂ = β,   (2.45)

so that equation (2.44) can be written as

(α + β q^{-1}) u(t) = 0,   (2.46)

which is a difference equation having an exponential type solution. Let

u(t) = z^t = e^{sTt},   (2.47)

where T = sampling period. Equation (2.46) can now be written as

(α + β z^{-1}) z^t = (zα + β) z^{t-1} = 0,   (2.48)

which is verified for z the solution of the characteristic equation

zα + β = 0,

so z = -β/α (a real number), and the non-periodic solution

u(t) = e^{σTt}  (σ real).   (2.49)

This leads to the verification of (2.46) and (2.44) without â = a and b̂ = b. In particular, u(t) = constant corresponds to σ = 0 (z = 1), for which (2.48) gives

α = -β.   (2.50)

However, α = -β is exactly relation (2.41), since

b - b̂ = ab̂ - âb  ⟺  b(1 + â) = b̂(1 + a)  ⟺  b̂/(1 + â) = b/(1 + a).

Thus, if u(t) = constant, only the steady-state gain is identified. The correct identification of the plant model will occur when u(t) is not a possible solution of (2.46). Let

PAGE 27

u(t) = sin(ωTt),  ω ≠ 0;   (2.51)

then equation (2.46) becomes

e^{jωT} α + β = 0.   (2.52)

Since α and β are real, e^{jωT} cannot be a root of the characteristic equation, so ε(t) = 0 will be obtained only if

α = β = 0,  that is,  b̂ = b and â = a.

Thus, a nonzero frequency sinusoid is required to identify the two parameters. This can also be applied to a general form of the system model

y(t) = -Σ_{i=1}^{n} ai y(t-i) + Σ_{i=1}^{m} bi u(t-i-d)   (2.53)

where the number of parameters to identify is (m+n). In this case u(t) can be chosen as a sum of p sinusoids,

u(t) = Σ_{i=1}^{p} sin(ωi T t),   (2.54)

with p chosen to allow for good identification using the following general rule:

m + n even:  p ≥ (n + m)/2
m + n odd:   p ≥ (n + m + 1)/2.

To summarize, as a condition on the input for good parameter identification, the input should be frequency rich in order to excite all the system parameters. A common term for such an input is a persistently exciting (P.E.) input.
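As an illustration, the following MATLAB fragment builds such an input. The particular frequencies are an arbitrary illustrative choice, subject only to being distinct, nonzero and below the Nyquist frequency; note that p = ceil((n+m)/2) covers both cases of the rule above.

    % Persistently exciting input: sum of p sinusoids, equation (2.54).
    n = 2; m = 2;                  % numbers of A and B parameters to identify
    p = ceil((n+m)/2);             % satisfies the rule for n+m even or odd
    Ts = 1; N = 1000; t = (0:N-1)';
    w = pi*(1:p)/(p+1)/Ts;         % p distinct nonzero frequencies
    u = sin(t*Ts*w)*ones(p,1);     % u(t) = sum_i sin(w_i*T*t)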

PAGE 28

There are two other general conditions for the convergence of the RLS algorithm to the actual system parameters. The first condition is trivial: it states that the system to be identified can have no hidden poles and zeros, which means there can be no pole/zero cancellations in the system (i.e. A(q^{-1}) and B(q^{-1}) are coprime). The second condition was imposed when formulating the RLS algorithm. Equation (2.9) assumed that the inverse existed for the following matrix, which will be denoted by P:

P(N) = Σ_{t=1}^{N} φ(t)φ^T(t).   (2.55)

The matrix P will have an inverse only if it is a nonsingular matrix, so the existence of an inverse matrix for P can be considered an additional condition for the RLS to converge properly. This condition on the measurement vector φ(t) also relates to the choice of input signal. If the input signal is not P.E., the matrix P may be close to singular, and the RLS algorithm will not give good results. Some examples of this property are given in chapter 3.

The rather informal discussion on convergence given above will next be carried out in a more formal fashion by looking at the convergence properties of the ELS and output error algorithms. A summary of the results of this analysis will be given at the end of this section. In order to analyze the convergence properties of the extended least squares and the output error algorithms, the following "Martingale convergence technique" will be used [12] [14] [19].

PAGE 29

Lemma 2.4.1 [16] Let {Tn} be a sequence of nonnegative random variables and {Fn} a sequence of increasing adapted σ-algebras (i.e. Tn ∈ Fn). Suppose

E(Tn | Fn-1) ≤ Tn-1 + αn   (2.56)

and

Σ_{n=1}^{∞} αn < ∞  w.p.1.

Then Tn converges w.p.1 to a finite nonnegative random variable T as n → ∞.

This will now be restated in a more suitable form.

Lemma 2.4.2 Let {Tn}, {αn+1} and {βn+1} be sequences of nonnegative random variables, adapted to a sequence of increasing σ-algebras {Fn}. Suppose that

E(Tn | Fn-1) ≤ Tn-1 - βn + αn   (2.57)

and that

Σ_{n=1}^{∞} αn < ∞  w.p.1.

Then Tn converges w.p.1 to a finite nonnegative random variable T, and

Σ_{n=1}^{∞} βn < ∞  w.p.1.   (2.58)
PAGE 30

Proof: Define T'n = Tn + Σ_{k=1}^{n} βk. Since the βk are nonnegative, so is T'n, and by (2.57)

E(T'n | Fn-1) ≤ Tn-1 + Σ_{k=1}^{n-1} βk + αn = T'n-1 + αn;

now applying Lemma 2.4.1 to the above equation proves Lemma 2.4.2.

Theorem 2.4.1 Let {ε(t)} and {φ(t)} be sequences of scalars and vectors such that the following sequences {θ̂(t)} and {R̄(t)} are well defined:

θ̂(t) = θ̂(t-1) + (1/t) R̄^{-1}(t) φ(t) ε(t)   (2.59)
R̄(t) = R̄(t-1) + (1/t)[φ(t)φ^T(t) - R̄(t-1)].   (2.60)

Let

ε̄(t) = ε(t)[1 + (1/t) φ^T(t) R̄^{-1}(t) φ(t)]^{-1}   (2.61)

and suppose, for some value θ0, that

H(q^{-1}) ε̄(t) = -φ^T(t)[θ̂(t) - θ0] + H(q^{-1}) e(t),

where {e(t)} is a sequence of random variables such that

E(e(t) | Ft-1) = 0,   E(e²(t) | Ft-1) = σ²,   ε̄(t) - e(t) ∈ Ft-1,   (2.62)

where Ft-1 is the σ-algebra generated by e(0),...,e(t-1), φ(0),...,φ(t), and H(q^{-1}) is a causal, strictly stable transfer function such that

Re[1/H(e^{jω}) - 1/2] > 0,  ∀ω ∈ [-π, π].   (2.63)

Then

[θ̂(t) - θ0]^T R̄(t) [θ̂(t) - θ0] → 0  w.p.1 as t → ∞   (2.64a)

and
PAGE 31

(1/N) Σ_{t=1}^{N} [ε̄(t) - e(t)]² → 0  w.p.1 as N → ∞.   (2.64b)

Proof: Let R̃(t) = t R̄(t) and θ̃(t) = θ̂(t) - θ0. Then the algorithm can be rewritten as

θ̃(t) = θ̃(t-1) + R̃^{-1}(t) φ(t) ε(t)
R̃(t) = R̃(t-1) + φ(t)φ^T(t).

Consider the nonnegative quantity V(t) = θ̃^T(t) R̃(t) θ̃(t). Substituting the update equations gives

V(t) = θ̃^T(t-1) R̃(t-1) θ̃(t-1) + [φ^T(t)θ̃(t-1)]² + 2 θ̃^T(t-1) φ(t) ε(t) + ε²(t) φ^T(t) R̃^{-1}(t) φ(t).
PAGE 32

The two middle terms on the right-hand side can be written as

2 θ̃^T(t-1) φ(t) ε(t) + [φ^T(t)θ̃(t-1)]² = [φ^T(t)θ̃(t)]² + [1 - κ(t)] ε(t) [2 φ^T(t)θ̃(t) - κ(t) ε(t)],

where κ(t) = φ^T(t) R̃^{-1}(t) φ(t). Introduce

α(t) = -φ^T(t) θ̃(t),   β(t) = Σ_{k=0}^{t} h(t-k) α(k),

where the coefficients h(k) are defined through [1/H(q^{-1}) - 1/2] = Σ_{k=0}^{∞} h(k) q^{-k}; by the hypothesis of the theorem,

ε̄(t) - e(t) = β(t) + (1/2) α(t).

Taking the conditional expectation of V(t) given Ft-1, using E(e(t) | Ft-1) = 0, E(e²(t) | Ft-1) = σ² and ε̄(t) - e(t) ∈ Ft-1, and collecting terms, one obtains an inequality of the form

E(V(t) + 2 Σ_{k=1}^{t} α(k)β(k) | Ft-1) ≤ V(t-1) + 2 Σ_{k=1}^{t-1} α(k)β(k) + σ² φ^T(t) R̃^{-1}(t) φ(t),   (2.68)
PAGE 33

or, dividing by t and letting T(t) = [V(t) + 2 Σ_{k=1}^{t} α(k)β(k)]/t,

E(T(t) | Ft-1) ≤ T(t-1) - (1/t) T(t-1) + (σ²/t) φ^T(t) R̃^{-1}(t) φ(t).   (2.69)

The next step is to apply Lemma 2.4.2 to (2.69); in order to do this, it is necessary to establish that T(t) is nonnegative and that the last term is summable. These assertions are proven in the following two lemmas.

Lemma 2.4.3 Let

β(t) = Σ_{k=0}^{t} h(t-k) α(k),

where the coefficients h(k) are defined by [1/H(q^{-1}) - 1/2] = Σ_{k=0}^{∞} h(k) q^{-k}. Then

Σ_{k=1}^{t} α(k)β(k) ≥ λ Σ_{k=1}^{t} α²(k),

where λ = min_{ω∈[-π,π]} Re[1/H(e^{jω}) - 1/2] > 0 by condition (2.63).

Proof: By Parseval's relation,

Σ_{k=1}^{t} α(k)β(k) = Σ_{k=1}^{t} α(k) Σ_{s=0}^{k} h(k-s) α(s)
= (1/2π) ∫_{-π}^{π} |Σ_{k=1}^{t} α(k) e^{jkω}|² Re[1/H(e^{jω}) - 1/2] dω
PAGE 34

≥ λ (1/2π) ∫_{-π}^{π} |Σ_{k=1}^{t} α(k) e^{jkω}|² dω,

which is finally reduced to

Σ_{k=1}^{t} α(k)β(k) ≥ λ Σ_{k=1}^{t} α²(k),

which proves the lemma. (In the second step, the exponentially decaying effects from s ≤ 0 were neglected.)

Lemma 2.4.4

Σ_{t=n}^{∞} (1/t) φ^T(t) R̃^{-1}(t) φ(t) < ∞  w.p.1,

where n is such that R̃(n-1) is invertible.

Proof: Note that

φ^T(t) R̃^{-1}(t) φ(t) ≤ |φ(t)|² |R̃^{-1}(t)|,
PAGE 35

where |R̃^{-1}(t)| denotes the operator norm of R̃^{-1}(t). It follows that

tr R̃(t) = tr R̃(n-1) + Σ_{k=n}^{t} |φ(k)|²,

so that, by condition A2 below, tr R̃(t) grows at most linearly in t. Since R̃(t) = R̃(t-1) + φ(t)φ^T(t),

φ^T(t) R̃^{-1}(t) φ(t) = 1 - det R̃(t-1)/det R̃(t) ≤ ln det R̃(t) - ln det R̃(t-1),

and consequently the partial sums Σ_{t=n}^{N} φ^T(t)R̃^{-1}(t)φ(t) are bounded by ln det R̃(N) - ln det R̃(n-1) = O(ln N). Summation by parts then shows that the series of the lemma, with the additional factor 1/t, converges w.p.1, which proves the lemma.

Lemmas 2.4.3 and 2.4.4 establish exactly what is needed to apply Lemma 2.4.2 to (2.69): T(t) ≥ 0, and the last term of (2.69) is summable. Lemma 2.4.2 then gives that T(t) converges w.p.1 and that

Σ_{t} (1/t) T(t-1) < ∞  w.p.1,   (2.70)

which together force T(t) → 0 w.p.1. Since V(t) ≥ 0 and, by Lemma 2.4.3, Σ_{k=1}^{t} α(k)β(k) ≥ λ Σ_{k=1}^{t} α²(k) ≥ 0, it follows both that

θ̃^T(t) R̄(t) θ̃(t) = V(t)/t → 0  w.p.1,

which proves (2.64a), and that

(1/N) Σ_{k=1}^{N} α²(k) → 0  w.p.1.   (2.71)

The same limit holds for

(1/N) Σ_{k=1}^{N} β²(k) → 0  w.p.1,   (2.73)
PAGE 36

because β(t) is obtained by exponentially stable filtering of {α(k)}. But ε̄(t) - e(t) = β(t) + (1/2)α(t), and therefore (2.64b) follows from (2.71) and (2.73). Theorem 2.4.1 is now proved.

Theorem 2.4.1 will now be applied to the extended least squares algorithm. For the ELS algorithm,

ε(t) = y(t) - θ̂^T(t-1) φ(t),

with φ(t) given by (2.26), and the a posteriori error is

ε̄(t) = ε(t)[1 + φ^T(t) R̃^{-1}(t) φ(t)]^{-1}.

Suppose the true data are generated by

A0(q^{-1}) y(t) = q^{-d} B0(q^{-1}) u(t) + C0(q^{-1}) e(t),   (2.76)

where {e(t)} satisfies the conditions of Theorem 2.4.1, and let θ0 denote the corresponding true parameter vector.
PAGE 37

Then

ε̄(t) = y(t) - θ̂^T(t) φ(t),

which is equal to

ε̄(t) - e(t) = θ0^T φ0(t) - θ̂^T(t) φ(t),

and a direct computation using (2.76) shows that

C0(q^{-1})[ε̄(t) - e(t)] = -φ^T(t)[θ̂(t) - θ0].   (2.77)

This is exactly the structure required by Theorem 2.4.1 with H(q^{-1}) = C0(q^{-1}). It therefore follows, provided

Re[1/C0(e^{jω}) - 1/2] > 0  ∀ω,   (2.78)

that

[θ̂(t) - θ0]^T R̄(t) [θ̂(t) - θ0] → 0  w.p.1 as t → ∞   (2.79)

and

(1/N) Σ_{t=1}^{N} [ε̄(t) - e(t)]² → 0  w.p.1 as N → ∞.   (2.80)

From (2.79) it also follows that if

PAGE 38

R̄0(t) = (1/t) Σ_{k=1}^{t} φ0(k) φ0^T(k)   (2.81)

is positive definite for all large t (a persistency of excitation condition on the data), then (2.79) implies θ̂(t) → θ0 w.p.1 as t → ∞. For the output error algorithm the same analysis applies, with the true data generated by (2.76) with

C0(q^{-1}) = A0(q^{-1}),   (2.82)

since the output error model (2.29) can be written as A0(q^{-1})y(t) = q^{-d}B0(q^{-1})u(t) + A0(q^{-1})e(t); the corresponding positive real condition is

Re[1/A0(e^{jω}) - 1/2] > 0  ∀ω.   (2.83)

To summarize the results of the preceding analysis for the ELS and output error algorithms, with the true data being generated by (2.76), or for the output error case by (2.76) with (2.82), the required conditions are:

Condition A1: Re[1/C0(e^{jω}) - 1/2] > 0  ∀ω  (ELS)   (2.84a)
Condition A1: Re[1/A0(e^{jω}) - 1/2] > 0  ∀ω  (output error)   (2.84b)

Condition A2: lim sup_{N→∞} (1/N) Σ_{t=1}^{N} [y²(t) + u²(t)] < ∞   (2.85)

PAGE 39

Condition A3: {e(t)} is a sequence of random variables such that (2.62) is true.

If these conditions are true, then

[θ̂(t) - θ0]^T R̄(t) [θ̂(t) - θ0] → 0  w.p.1 as t → ∞   (2.86)

and

(1/N) Σ_{t=1}^{N} [ε̄(t) - e(t)]² → 0  w.p.1 as N → ∞,   (2.87)

and if the model orders (the model of the actual system belongs to the model set) and the input are such that no two different models can give the same description of the system, meaning

Condition A4: A(q^{-1}), B(q^{-1}), and C(q^{-1}) are coprime
Condition A5: the input u(t) is P.E.,

then

θ̂(t) → θ0  w.p.1 as t → ∞   (2.88)

and

ε̄(t) - e(t) → 0  w.p.1,   (2.89)

so that

ε(t) → e(t)  w.p.1 as t → ∞.   (2.90)

So if A1, A2, A3, A4 and A5 are true, the ELS algorithm will converge to the actual system parameters. For the output error algorithm, convergence to the actual parameters will also take place if A1, A2, A3, A4 and A5 are true. The RLS algorithm (C(q^{-1}) = 1) converges to the actual system parameters if A2, A3, A4 and A5 are true.

2.5 Tracking Time-Varying Systems

Although the main purpose of this thesis is to discuss time-invariant systems, a brief discussion in this section will be devoted to systems with time-varying parameters. Minor changes will have to be made to the

PAGE 40

algorithms presented in the previous sections in order to track time-varying system parameters. The recursive formula for the inverse of the adaptation gain F(t) was given in section 2.1 to be

F(t) = [F^{-1}(t-1) + φ(t)φ^T(t)]^{-1},  with F(0) > 0.   (2.91)

This is the adaptation gain for the RLS in the time-invariant case. The gain F(t) will tend to zero as t → ∞, which will not allow the tracking of time-varying parameters. Rewriting (2.91) gives

F^{-1}(t) = F^{-1}(t-1) + φ(t)φ^T(t);   (2.92)

now a weighting sequence λ1(t) will be introduced into (2.92), in addition to a weighting sequence λ2(t), to give

F^{-1}(t) = λ1(t) F^{-1}(t-1) + λ2(t) φ(t)φ^T(t)   (2.93)

where 0 < λ1(t) ≤ 1; 0 ≤ λ2(t) ≤ 2; F(0) > 0. The weighting functions have opposite effects on the adaptation gain F(t): λ1(t) < 1 tends to increase the adaptation gain F(t), while λ2(t) > 0 tends to decrease the adaptation gain F(t). Applying the matrix inversion lemma (2.15) to (2.93), one obtains

F(t) = (1/λ1(t)) [ F(t-1) - F(t-1)φ(t)φ^T(t)F(t-1) / (λ1(t)/λ2(t) + φ^T(t)F(t-1)φ(t)) ].   (2.94)

Now λ1(t) and λ2(t) can be chosen in order to track time-varying systems. There are many choices for the values of λ1(t) and λ2(t) [11]; however, only the following case will be described here.

Constant forgetting factor: λ1(t) = λ1, 0 < λ1 < 1; λ2(t) = 1.

PAGE 41

Typical values for λ1 are 0.95 < λ1 < 0.99. This type of profile is suited to the identification of slowly time-varying systems. The RLS algorithm in the time-varying case, with a constant forgetting factor (λ2(t) = 1), is given by

θ̂(t) = θ̂(t-1) + F(t) φ(t) [y(t) - θ̂^T(t-1) φ(t)]
F(t) = (1/λ1) [ F(t-1) - F(t-1)φ(t)φ^T(t)F(t-1) / (λ1 + φ^T(t)F(t-1)φ(t)) ].   (2.95)

An example of this case is given in chapter 3.
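A sketch of the corresponding update step is given below; it differs from the rls_update sketch of section 2.1 only through the forgetting factor lambda1, and the function name is again an illustrative choice.

    % RLS update with constant forgetting factor lambda1 (lambda2(t) = 1),
    % implementing (2.95); 0.95 < lambda1 < 0.99 is the typical range.
    function [theta, F, e] = rls_ff_update(theta, F, phi, y, lambda1)
    e = y - theta'*phi;
    Fphi = F*phi;
    den = lambda1 + phi'*Fphi;
    theta = theta + (Fphi/den)*e;
    F = (F - (Fphi*Fphi')/den)/lambda1;
    end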

PAGE 42

2.6 Model Validation

After obtaining an estimated model for a given system from the three methods given in sections 2.1-2.3, it is necessary to validate this model. All of the models discussed herein were based on the minimization of the variance of the prediction error, which in all cases was essentially related to the unknown disturbance noise {e(t)} acting upon the system. This can also be thought of as the whitening of the prediction error. If the residual prediction error is found to be a white noise sequence, it follows that the identified model gives the best prediction for the plant output, in the sense that the variance of the prediction error is minimized. It also means that, since the residual prediction error is white, and white noise is not correlated with any other variable, all the correlation between the input and output of the plant is represented by the model, and whatever remains unmodeled does not depend on the input.

The validation procedure can be explained as follows. If all the items listed below were chosen properly:

1) model structure,
2) recursive identification algorithm (for the given model structure),
3) degrees of the polynomials A(q^{-1}), B(q^{-1}), C(q^{-1}) and the delay (d),

then the prediction error asymptotically tends towards a white noise, which implies

lim_{t→∞} E{ε(t)ε(t-i)} = 0,  i = 1,2,3,....   (2.96)

A validation method based on the above principles is now given in the following steps.

Step 1 - After identifying the model, create an input/output data set for the identified model using the same input that was used for the actual system.

Step 2 - Calculate the prediction errors ε(t) = y(t) - ŷ(t) using the input/output pairs found in step 1.

Step 3 - Test the prediction errors obtained in step 2 to see if they are uncorrelated (whiteness test).

Step 4 - Let ε(t) be the centered sequence of the residual errors, where centered means (measured value - mean value). Then compute the following normalized autocorrelations:

R(i) = Σ_{t=1}^{N} ε(t)ε(t-i),   R_N(i) = R(i)/R(0),  for i = 0,1,2,3,...,na,   (2.97)

where N is the number of data samples.

PAGE 43

If the prediction error sequence is perfectly white and N is large, then R_N(0) = 1 and R_N(i) = 0 for i ≥ 1. This is usually only true in theoretical situations. In practice, a common practical validation criterion is given by [11]

R_N(0) = 1,  |R_N(i)| ≤ 2.17/√N,  i ≥ 1.   (2.98)

This test was defined knowing that, for a white noise sequence, R_N(i) (i ≠ 0) has an asymptotically Gaussian (normal) distribution with mean 0 and a standard deviation given by σ = 1/√N. The validation criterion implies that if R_N(i) obeys the Gaussian distribution (0, σ = 1/√N), then there is only a probability of 1.5% that the criterion will not be met. If several models of the same complexity were identified and they all pass the criterion (2.98), one usually picks the model with the lowest autocorrelation values. Also, models that have |R_N(i)| well within the criterion may indicate a possible reduction in model complexity. One other point to be made is that after validating a model with the same input that was used in the identification process, a new validation should be carried out with a different input sequence.
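A sketch of this whiteness test in MATLAB follows; the function name and the choice to return all lags up to nlags are illustrative.

    % Whiteness validation test, equations (2.97)-(2.98).
    function [RN, ok] = whiteness_test(eps, nlags)
    eps = eps(:) - mean(eps);                  % center the residual sequence
    N = length(eps);
    R0 = sum(eps.^2);
    RN = zeros(nlags,1);
    for i = 1:nlags
      RN(i) = sum(eps(1+i:N).*eps(1:N-i))/R0;  % normalized autocorrelation
    end
    ok = all(abs(RN) <= 2.17/sqrt(N));         % practical criterion (2.98)
    end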

PAGE 44

2.7 Practical Aspects of System Identification

This section deals with some of the practical aspects of system identification. In section 2.4 it was found, as a condition for good identification, that the input must be P.E. In practical situations the input used to meet this criterion is most often a pseudo-random binary sequence (PRBS). These are sequences of rectangular pulses, modulated in width, that approximate discrete-time white noise and thus have a spectral content rich in frequencies. They are characterized by a sequence length within which the pulse widths vary randomly; over a large time horizon, however, they are periodic, with the period defined as the length of the sequence. PRBS are generated using shift registers with feedback, the maximum length of a sequence being 2^N - 1, where N is the number of cells of the shift register. Another property of a PRBS is that the maximum duration of a pulse is equal to N (the number of cells).

Signal conditioning is also a practical aspect of system identification. The models used for identification correspond to dynamic models which express output variations as a function of input variations around an operating point. In order for correct identification to occur, the DC component (i.e. the operating point) must be removed from the input-output data. This can be achieved by finding the mean value of both the input and output data, and then creating a new input-output data set by subtracting the corresponding mean value from each data point; this effectively removes the DC components. These techniques are usually used for off-line identification.
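The following sketch generates a PRBS of the kind described above, using a four-cell register; the feedback tap positions depend on the register length (the pair [4 3] gives a maximum-length sequence of 2^4 - 1 = 15), so they are an assumption tied to this particular example.

    % PRBS from a 4-cell shift register with XOR feedback (sketch).
    ncells = 4; taps = [4 3];          % taps valid for this register length
    reg = ones(1, ncells);             % any nonzero initial state
    N = 100; mag = 1; u = zeros(N,1);
    for k = 1:N
      u(k) = mag*(2*reg(end) - 1);     % map {0,1} to {-mag,+mag}
      newbit = xor(reg(taps(1)), reg(taps(2)));
      reg = [newbit, reg(1:end-1)];    % shift the register
    end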

PAGE 45

The determination of an initial choice for model structure is also a practical issue. The general model one starts with is

A(q^{-1}) y(t) = q^{-d} B(q^{-1}) u(t) + C(q^{-1}) e(t).   (2.99)

The model structures analyzed in sections 2.1-2.3 are (2.99) with

S1: C(q^{-1}) = 1  =>  recursive least squares algorithm.
S2: (2.99) as is  =>  extended least squares algorithm.
S3: C(q^{-1}) = A(q^{-1})  =>  output error algorithm.

In practice, without any prior knowledge of the system, the simplest model structure S1 (RLS) is usually chosen first. If poor results are obtained with this model structure, the more complicated S2 (ELS) and S3 (OE) are then tried to identify the system. After choosing a model structure, an initial choice must be made for the degrees of the unknown polynomials A(q^{-1}), B(q^{-1}) and C(q^{-1}), along with an initial choice for the delay (d). The initial choice for the degree of A(q^{-1}) can usually be taken as 2 for an industrial type plant (temperature, flow rate, etc.) or derived from a structural analysis for electromechanical type systems (robot arm, actuator, etc.). If no prior information is given for the time delay (d), an initial choice of d = 0 is usually best. A good initial choice for the degree of B(q^{-1}) is usually nb = 2. As a rule, the degree of C(q^{-1}) is usually chosen equal to the degree of A(q^{-1}). These are just some general guidelines; in practice the goal is usually to obtain the simplest possible identified model that meets the validation criteria given in section 2.6. A simple test for determining the degrees of both the A(q^{-1}) and B(q^{-1}) polynomials is now discussed. In a practical identification situation, the degrees of the A(q^{-1}) and B(q^{-1}) polynomials can be increased in steps of one until no further benefit is achieved. A good way to analyze this is to look at the evolution of the variance of the residual prediction errors:

PAGE 46

R(0) = E{[y(t) - ŷ(t)]²} = (1/N) Σ_{t=1}^{N} [y(t) - θ̂^T(t)φ(t)]².   (2.100)

As the degree of A(q^{-1}) is increased, the criterion R_{na+1}(0) ≥ 0.8 R_{na}(0) [11] can be checked; if this criterion is met, it is unwise to increase the degree of A(q^{-1}) any further. The same test should be applied to B(q^{-1}). The values obtained from this exercise may not meet the overall validation criteria; if this is the case, other structures must be tried. If a validated model is still not found, then the degrees of A(q^{-1}) and B(q^{-1}) must be increased.

Another practical issue to be concerned with is that of open-loop unstable systems. In order to identify an unstable system, the system first must be stabilized using feedback; identification can then be carried out in the closed-loop configuration. A practical example encompassing both model order and stability is given in chapter 3.
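A sketch of this order search is given below. Here ls_residual_variance is a hypothetical helper, assumed to fit structure S1 by batch least squares for the given orders and to return R(0) of (2.100); y and u are the recorded data vectors.

    % Increase deg A(q^-1) until R_{na+1}(0) >= 0.8*R_na(0), per [11].
    nb = 2; d = 0; prev = inf;
    for na = 1:6
      R0 = ls_residual_variance(y, u, na, nb, d);  % hypothetical helper
      if R0 >= 0.8*prev
        na = na - 1; break                         % previous order sufficed
      end
      prev = R0;
    end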

PAGE 47

2.8 Systems with Unmodeled Dynamics

In this section, the subject of identifying systems with parametric and non-parametric uncertainties will be briefly discussed. This is an issue related to robust control design. The problem can be stated as: given the input-output data

{y(t), u(t) : t = 1,2,...,N},   (2.101)

transform this data into a model suitable for robust control design. The system that produced (2.101) will be assumed to be disturbance free and given as

y(t) = G(q^{-1}) u(t),   (2.102)

where the following prior information is known about G(q^{-1}):

G(q^{-1}) = Gθ(q^{-1})[1 + Δ W(q^{-1})],  θ ∈ Θprior,   (2.103)

|Δ(e^{jω})| ≤ 1  ∀ω ∈ [-π, π],   (2.104)

and Gθ(q^{-1}) is a parametric transfer function with a dependence on the parameter vector

θ ∈ Θprior ⊂ R^p.   (2.105)

The transfer function W(q^{-1}) is also known and is stable. Since both the parameter set Θprior and W(q^{-1}) are assumed known, equations (2.101)-(2.105) describe prior information about the system. The goal is to use this information and the measured data from the system to reduce the uncertainty of the parameters θ ∈ Θprior. The model structure will be restricted to

Gθ = Gθ(q^{-1}) = Bθ(q^{-1})/Aθ(q^{-1}),
Bθ = Bθ(q^{-1}) = b1 q^{-1} + ... + bm q^{-m},
Aθ = Aθ(q^{-1}) = 1 + a1 q^{-1} + ... + an q^{-n},
θ^T = [a1 ... an  b1 ... bm].   (2.106)

A worst-case parameter set will now be described [5], which is guaranteed to contain the true parameters. Combining equations (2.102)-(2.106) gives

Aθ y(t) - Bθ u(t) = Δ W Bθ u(t),   (2.107)

where the left-hand side of equation (2.107) is the usual error term for the least squares algorithm. Now, noting that, by (2.104),

||Δ W Bθ u||²_N ≤ ||W Bθ u||²_N,  where ||x||²_N = Σ_{t=1}^{N} x(t)²,   (2.108)

PAGE 48

and applying (2.108) to (2.107) gives the worst-case equation error set [5]

Θwc = {θ ∈ R^p : ||Aθ y - Bθ u||²_N ≤ ||Bθ W u||²_N}.   (2.109)

The above inequality describes a set of parameters that depends on the data set (2.101) and on the prior information about the system (2.104), so it can be stated that all parameters that are consistent with the measured data (2.101) and the prior information (2.104) are included in the set

Θ = Θprior ∩ Θwc,   (2.110)

and because of the assumptions in (2.102)-(2.106), the true parameter values must also be contained in the set Θwc. The computation of Θwc will now be given. First observe that

Aθ(q^{-1}) y(t) - Bθ(q^{-1}) u(t) = y(t) - θ^T φ(t),   (2.111)

where

φ^T(t) = [-y(t-1) ... -y(t-n)  u(t-1) ... u(t-m)],   (2.112)

and that

Bθ(q^{-1}) W(q^{-1}) u(t) = θ^T ψ(t),   (2.113)

where

ψ^T(t) = [0 ... 0  (Wu)(t-1) ... (Wu)(t-m)].   (2.114)

Defining the mean-value operator

E_N{x} = (1/N) Σ_{t=1}^{N} x(t),   (2.115)

the set (2.109) can be written as

Θwc = {θ ∈ R^p : E_N{[y(t) - θ^T φ(t)]²} ≤ E_N{[θ^T ψ(t)]²}},   (2.116)

or, expanding the squares,

E_N{y²} - 2 θ^T E_N{φ y} + θ^T E_N{φφ^T} θ ≤ θ^T E_N{ψψ^T} θ;   (2.117)


PAGE 49

equation (2.109) can now be written in quadratic form as

Θwc = {θ ∈ R^p : θ^T Γ θ - 2 θ^T g + a ≤ 0}   (2.118)

where

Γ = E_N{φ(t)φ^T(t) - ψ(t)ψ^T(t)}   (2.119)
g = E_N{φ(t) y(t)}   (2.120)
a = E_N{y²(t)}.   (2.121)

Assuming Γ^{-1} exists, the above equations can be expressed further as

Θwc = {θ ∈ R^p : (θ - θc)^T Γ (θ - θc) ≤ V}   (2.122)

with

θc = Γ^{-1} g   (2.123)

and

V = g^T Γ^{-1} g - a.   (2.124)

If Γ > 0, then V > 0; this is true because of the prior assumptions about the data. Hence Γ > 0 implies that Θwc will be an ellipsoid in R^p with its center at θc. If Γ is not positive definite, Θwc is unbounded in some directions, and the prior set Θprior must supply the missing bounds.
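The computation of θc, Γ and V from data lends itself directly to MATLAB. In the sketch below, Phi is the N x p matrix whose rows are φ^T(t), Wu is the filtered input sequence W(q^{-1})u(t), and both are assumed to have been formed beforehand, together with the orders n and m of (2.106).

    % Worst-case parameter ellipsoid, equations (2.118)-(2.124) (sketch).
    p = n + m;
    Psi = zeros(N, p);                 % rows are psi(t)' of (2.114)
    for i = 1:m
      Psi(i+1:N, n+i) = Wu(1:N-i);     % (W u)(t-i) entries
    end
    Gam = (Phi'*Phi - Psi'*Psi)/N;     % Gamma, (2.119)
    g   = (Phi'*y)/N;                  % (2.120)
    a   = (y'*y)/N;                    % (2.121)
    thc = Gam\g;                       % ellipsoid center, (2.123)
    V   = g'*thc - a;                  % ellipsoid size, (2.124)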
PAGE 50

2.9 Applications

The identification algorithms presented in this chapter can be applied in two different ways. First, they can be used off-line to identify a model of a plant from a batch of recorded input-output data; from this model a controller can be designed to achieve a desired system performance. The algorithms presented herein can also be used for recursive identification (on-line), which is their main purpose. In this mode the algorithms can track the dynamics of a plant that is time varying. This is very useful in adaptive control applications [3]. In order to design a regulator to control a plant with time-varying parameters, recursive identification algorithms must be used in conjunction with control laws to obtain a desired system performance. Recursive identification is also used for adaptive filtering and adaptive prediction. Some other examples of where these algorithms can be applied are adaptive spectral estimation, adaptive equalization, and adaptive noise cancellation. So the recursive identification algorithms presented here have a variety of practical applications.

PAGE 51

CHAPTER 3 SIMULATION EXAMPLES 3.1 Practical Examples In this section two practical systems will be studied. The first system is a furnace that operates in two different modes. The RLS and ELS algorithms will be used to identify this system for both modes of operation. The second example is a machine tool. This system is open-loop marginally stable because of an integrator in the system. The system will be identified using the RLS algorithm with feedback introduced (closed loop). An attempt will be made to reduce the model order for the estimated system model. Comparisons will be made between the actual system and the reduced-order identified models. First a model description of the furnace will be given; this will be used in several of the simulations. Furnace System Description: The system to be modeled is a small furnace [4]; Figure 3.1 shows a cross-sectional view of the furnace. The chamber is approximately one cubic foot and is well insulated from the outside. An electric heater with a one kilowatt power output rating is in the center. A metallic tube 1/2 inch in diameter and 6 inches in length containing a sample is placed inside the chamber. The temperature of the

PAGE 52

Figure 3.1 Cross-sectional view of the furnace.

sample is measured by a thermocouple with a digital thermometer. Air can be pumped inside the chamber at a preset pressure through a 1/2 inch diameter pipe. The power input to the furnace is controlled by a solid-state relay. The heat exchange dynamics of the hot air can be approximately written as

Heat supplied by the heater to the air inside the chamber
= Heat to raise the temperature of the air
+ Heat transfer to the atmosphere
+ Heat transfer to the sample
+ Heat carried out by air circulation

PAGE 53

It is assumed that the heat transfer rate is proportional to the temperature difference. Define γa Q as the heat transfer rate from the heater to the hot air. Then

γa Q = Ca (dTa/dt) + k1(Ta - T0) + k2(Ta - Ts) + k3 ω (Ta - T0),   (3.1)

where Ta, Ts and T0 are the temperatures of the hot air, the sample, and the furnace wall, respectively; ω is the rate of airflow; Ca is the thermal capacitance of the air; k1 is the thermal conductance between the hot air and the furnace wall; k2 denotes the thermal conductance between the hot air and the sample; and k3 represents the convection heat-transfer coefficient. The dynamics of the sample holder are approximately

Heat to raise the sample temperature
= Heat supplied by hot air to sample
+ Heat loss by radiation to the wall
+ Heat supplied to the sample by radiation,

which can be expressed as

Cs (dTs/dt) = k2(Ta - Ts) - k4(Ts - T0) + γb Q,   (3.2)

where Cs is the thermal capacitance of the sample and k4 is the thermal conductance between the sample and the furnace wall. γb Q represents the heat transfer rate by radiation from the heater to the sample. The furnace can be represented by a bilinear control system. Defining x1 = (Ta - T0), x2 = (Ts - T0), u1 = Q and u2 = ω, the system is given by

PAGE 54

dx1/dt = k11 x1 + k12 x2 + b11 x1 u2 + c1 u1
dx2/dt = k21 x1 + k22 x2 + c2 u1   (3.3)

where k11 = -(k1 + k2)/Ca; k12 = k2/Ca; k21 = k2/Cs; k22 = -(k2 + k4)/Cs; b11 = -k3/Ca; c1 = γa/Ca; and c2 = γb/Cs. The furnace can operate with or without airflow. The furnace with airflow is used for temperature ramp-down control, and the furnace without airflow can be used for step-up, hold, and temperature ramp-up. The measured input signal to the heater is electrical power. The time constant of the heater is small compared with the furnace time constant; therefore, the heater dynamics are modeled by a pure time delay (τ). An input-output relation will now be found that will define the model structure and corresponding identification algorithm to be used to estimate the unknown parameters. Let ΔT be the discretizing interval. The system model (3.3) can be represented by the following discrete-time state variable model:

x(k+1) = [a11 a12; a21 a22] x(k) + [c11; c22] u(k-σ) + [c1; c2] w(k)
y(k) = [0 1] x(k) + v(k),  k = 1,2,...   (3.4)

where a11 = (1 + ΔT k11 + ΔT b11 ū2); a12 = ΔT k12; a21 = ΔT k21; a22 = (1 + ΔT k22); c11 = ΔT c1; c22 = ΔT c2; and ū2 is a constant airflow rate for u2(t). Let u(k) denote the power input to the furnace at time k. The time delay is assumed to be an integer multiple of the sampling time ΔT, with σ = τ/ΔT. The input-output model follows from (3.4) and is given by

y(k+2) = -a1 y(k+1) - a2 y(k) + b1 u(k - σ + 1) + b2 u(k - σ) + ξ(k+2)  for k = 1,2,3,...   (3.5)

PAGE 55

where a1 = -(a11 + a22); a2 = a11 a22 - a21 a12; b1 = c22; b2 = c11 a21 - a11 c22; and ξ(k+2) is a composite noise term. The system is now in the form of S1 (section 2.7), and can be identified using the RLS algorithm. The system was first identified with no airflow to the chamber (mode 1). The system was estimated in [4] by stabilizing the temperature at 350 degrees Celsius, then exciting the system with a PRBS input that varied the power level by %. Input-output measurements were then taken every 60 seconds. The following parameter values were found to represent the system [4], in the form of (3.5):

a1 = -1.5781; a2 = 0.5805; b1 = 8.68; b2 = 2.0; d = 1, for mode 1 (no airflow).

A similar experiment was conducted in [4] that identified the system with a constant airflow of 10 lb/in². The parameters identified in this case were

a1 = -1.3767; a2 = 0.4173; b1 = 11.52; b2 = 2.50; d = 0, for mode 2 (constant airflow).

The purpose of the first example is to take the models found in [4] for the furnace, and use these models to represent the actual system in a simulation to identify the system parameters using the RLS algorithm.

Simulation 3.1: The system for the furnace in mode 1 is given by

(1 - 1.5781 q^{-1} + 0.5805 q^{-2}) y(t) = q^{-1}(8.68 q^{-1} + 2.00 q^{-2}) u(t) + e(t).   (3.6)

The system was simulated with u(t) and e(t) being independent Gaussian white noise sequences, with variances 0.01 and 0.0001, respectively. The RLS algorithm given in equations (2.17) was applied, with N = 100 and the model set given by

PAGE 56

(1 + a1 q^{-1} + a2 q^{-2}) y(t) = q^{-1}(b1 q^{-1} + b2 q^{-2}) u(t) + e(t).   (3.7)

The parameter and measurement vectors are defined as

θ̂^T(t) = [â1(t) â2(t) b̂1(t) b̂2(t)],
φ^T(t) = [-y(t-1) -y(t-2) u(t-2) u(t-3)]  (delay: d = 1).   (3.8)

The initial values chosen for the algorithm were θ̂(0) = 0 and F(0) = 500 I. Figures 3.2 and 3.3 show the parameter estimates as a function of time for the A(q^{-1}) and B(q^{-1}) polynomial coefficients given in (3.7). The actual system parameter values are also plotted and are, from (3.6): b1 = 8.68, b2 = 2.00, a1 = -1.5781 and a2 = 0.5805.

Figure 3.2 Simulation 3.1: RLS algorithm, time history of parameter estimates â1(t) and â2(t).
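For reference, a compact sketch of how Simulation 3.1 could be reproduced is shown below (the actual input files are listed in Appendix A). It generates data from (3.6) with filter and reuses the illustrative rls_update sketch from section 2.1.

    % Simulation 3.1 (sketch): data from (3.6), then RLS with (3.8).
    N = 100;
    u = sqrt(0.01)*randn(N,1);  e = sqrt(0.0001)*randn(N,1);
    A = [1 -1.5781 0.5805];
    B = [0 0 8.68 2.00];              % q^-1*(8.68 q^-1 + 2.00 q^-2)
    y = filter(B, A, u) + filter(1, A, e);
    theta = zeros(4,1);  F = 500*eye(4);
    for t = 4:N
      phi = [-y(t-1); -y(t-2); u(t-2); u(t-3)];
      [theta, F] = rls_update(theta, F, phi, y(t));
    end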

PAGE 57

Figure 3.3 Simulation 3.1: RLS algorithm, time history of parameter estimates b̂1(t) and b̂2(t).

From these figures it is obvious that the parameter estimates converge to the actual system parameters very quickly. This is expected, because the conditions A2-A5 given in section 2.4 are all satisfied and the model set (3.7) is equivalent to the actual system (3.6). The residual prediction error y(t) - ŷ(t) is shown in figure 3.4. Theoretically, the residual prediction error should become a white noise sequence with a variance equal to the disturbance {e(t)} variance as time increases. Figure 3.4 shows this is true as time increases.

PAGE 58

Figure 3.4 Simulation 3.1: RLS algorithm, time history of residual prediction error ε(t).

Simulation 3.2: The system for the furnace in mode 2 is given by

(1 - 1.3767 q^{-1} + 0.4173 q^{-2}) y(t) = (11.52 q^{-1} + 2.50 q^{-2}) u(t) + e(t).   (3.9)

The system was simulated with u(t) and e(t) being independent Gaussian white noise sequences, with variances 0.01 and 0.0001, respectively. The RLS algorithm given in equations (2.17) was applied with N = 100 and the model set given by

(1 + a1 q^{-1} + a2 q^{-2}) y(t) = (b1 q^{-1} + b2 q^{-2}) u(t) + e(t).   (3.10)

The parameter and measurement vectors are defined as

PAGE 59
θ̂^T(t) = [â1(t) â2(t) b̂1(t) b̂2(t)],
φ^T(t) = [-y(t-1) -y(t-2) u(t-1) u(t-2)]  (delay: d = 0).   (3.11)

The initial values chosen for the algorithm were θ̂(0) = 0 and F(0) = 500 I. Figures 3.5 and 3.6 show the parameter estimates as a function of time for the A(q^{-1}) and B(q^{-1}) polynomial coefficients given in (3.10). The actual system parameter values are also plotted and are, from (3.9): b1 = 11.52, b2 = 2.50, a1 = -1.3767 and a2 = 0.4173.

Figure 3.5 Simulation 3.2: RLS algorithm, time history of parameter estimates â1(t) and â2(t).

PAGE 60

Figure 3.6 Simulation 3.2: RLS algorithm, time history of parameter estimates b̂1(t) and b̂2(t).

From these figures it is obvious that the parameter estimates converge to the actual system parameters very quickly. This is expected, because the conditions A2-A5 given in section 2.4 are all satisfied and the model set (3.10) is equivalent to the actual system (3.9). The residual prediction error y(t) - ŷ(t) is shown in figure 3.7. As was the case in the first example, the residual prediction error approaches a white noise sequence with a variance equal to the disturbance {e(t)} variance as time increases.

PAGE 61

Figure 3.7 Simulation 3.2: RLS algorithm, time history of residual prediction error ε(t).

Simulation 3.3: The system for the furnace in mode 1 is modified by assuming a more complex disturbance noise representation, given by

(1 - 1.5781 q^{-1} + 0.5805 q^{-2}) y(t) = q^{-1}(8.68 q^{-1} + 2.00 q^{-2}) u(t) + (1 - 0.60 q^{-1} + 0.20 q^{-2}) e(t).   (3.12)

The system now has the S2 model structure (section 2.7). The ELS algorithm corresponds to systems of this structure. The system was simulated with u(t) and e(t) being independent Gaussian white noise sequences, with variances 0.01 and 0.0001, respectively. The ELS

PAGE 62

algorithm given in equations (2.28) was applied with N = 1000 and the model set given by

(1 + a1 q^{-1} + a2 q^{-2}) y(t) = q^{-1}(b1 q^{-1} + b2 q^{-2}) u(t) + (1 + c1 q^{-1} + c2 q^{-2}) e(t).   (3.13)

The parameter and measurement vectors are defined as

θ̂^T(t) = [â1(t) â2(t) b̂1(t) b̂2(t) ĉ1(t) ĉ2(t)],
φ^T(t) = [-y(t-1) -y(t-2) u(t-2) u(t-3) ε(t-1) ε(t-2)]  (d = 1).   (3.14)

The initial values chosen for the algorithm were θ̂(0) = 0 and F(0) = 1000 I. Figures 3.8-3.10 show the parameter estimates as a function of time for the A(q^{-1}), B(q^{-1}) and C(q^{-1}) polynomial coefficients given in (3.13). The actual system parameter values are also plotted and are, from (3.12): b1 = 8.68, b2 = 2.00, a1 = -1.5781, a2 = 0.5805, c1 = -0.60 and c2 = 0.20.

Figure 3.8 Simulation 3.3: ELS algorithm, time history of parameter estimates â1(t) and â2(t).

PAGE 63

Figure 3.9 Simulation 3.3: ELS algorithm, time history of parameter estimates b̂1(t) and b̂2(t).

Figure 3.10 Simulation 3.3: ELS algorithm, time history of parameter estimates ĉ1(t) and ĉ2(t).

PAGE 64

From these figures it is obvious that the parameter estimates converge to the actual system parameters very quickly for the A(q^{-1}) and B(q^{-1}) polynomials, while the C(q^{-1}) parameters converge slowly to the actual parameters. In general the C(q^{-1}) parameters will converge at a slower rate than the A(q^{-1}) and B(q^{-1}) parameters. The parameters converged to the true system parameters as shown in figures 3.8-3.10; this was expected, because the conditions A1-A5 given in section 2.4 are all satisfied and the model set (3.13) is equivalent to the actual system (3.12). Figure 3.11 verifies that condition A1 is satisfied for the C(q^{-1}) polynomial of the actual system.

Figure 3.11 Simulation 3.3: plot of Re[1/C0(e^{jω}) - 1/2] for 0 ≤ ω ≤ 2π.

PAGE 65

The residual prediction error y(t) - ŷ(t) is shown in figure 3.12. As in the previous examples, the residual prediction error tends to a white noise sequence with a variance equal to the disturbance {e(t)} variance as time increases.

Figure 3.12 Simulation 3.3: ELS algorithm, time history of residual prediction error ε(t).

Simulation 3.4: The system for the furnace in mode 2 is modified by assuming a more complex disturbance noise representation, given by

(1 - 1.3767 q^{-1} + 0.4173 q^{-2}) y(t) = (11.52 q^{-1} + 2.50 q^{-2}) u(t) + (1 - 0.60 q^{-1} + 0.20 q^{-2}) e(t).   (3.15)

As in the previous example, the ELS algorithm corresponds to systems of this structure. The system was simulated with u(t) and e(t) being

PAGE 66

independent Gaussian white noise sequences, with variances 0.01 and 0.0001, respectively. The ELS algorithm given in equations (2.28) was applied with N = 1500 and the model set given by

(1 + a1 q^{-1} + a2 q^{-2}) y(t) = (b1 q^{-1} + b2 q^{-2}) u(t) + (1 + c1 q^{-1} + c2 q^{-2}) e(t).   (3.16)

The parameter and measurement vectors are defined as

θ̂^T(t) = [â1(t) â2(t) b̂1(t) b̂2(t) ĉ1(t) ĉ2(t)],
φ^T(t) = [-y(t-1) -y(t-2) u(t-1) u(t-2) ε(t-1) ε(t-2)]  (d = 0).   (3.17)

The initial values chosen for the algorithm were θ̂(0) = 0 and F(0) = 1000 I. Figures 3.13-3.15 show the parameter estimates as a function of time for the A(q^{-1}), B(q^{-1}) and C(q^{-1}) polynomial coefficients given in (3.16). The actual system parameter values are also plotted and are, from (3.15): b1 = 11.52, b2 = 2.50, a1 = -1.3767, a2 = 0.4173, c1 = -0.60 and c2 = 0.20.

Figure 3.13 Simulation 3.4: ELS algorithm, time history of parameter estimates â1(t) and â2(t).

PAGE 67

Figure 3.14 Simulation 3.4: ELS algorithm, time history of parameter estimates b̂1(t) and b̂2(t).

Figure 3.15 Simulation 3.4: ELS algorithm, time history of parameter estimates ĉ1(t) and ĉ2(t).

PAGE 68

Once again, these figures show that the parameter estimates converge to the actual system parameters, with the C(q^-1) parameters converging at a slower rate than the A(q^-1) and B(q^-1) parameters. The convergence was expected, because the conditions A1-A5, given in section 2.4, are all satisfied and the model set (3.16) is equivalent to the actual system (3.15). Condition A1 was shown to be satisfied in example 3.3. The residual prediction error (y(t) - ŷ(t)) is shown in figure 3.16, and it tends to a white noise sequence, with a variance equal to that of the disturbance {e(t)}, as time increases.

Figure 3.16  Simulation 3.4: ELS algorithm, time history of residual prediction error ε(t).

The next practical system to be analyzed is a machine tool.

Machine tool system description: The physical process to be analyzed is the positioning mechanism of a microcomputer-controlled machine tool. A proportional flow valve provides flow to a hydraulic cylinder in proportion to an applied voltage. The flow is integrated in the cylinder to position the tool mass. The control signal is the voltage applied to the valve, while the output is the position of the tool. The output position is measured electronically as a voltage. The analog transfer function of the plant is given by [15]

G(s) = (τ0·s + 1) / { s(τ1·s + 1)[(s/ωn)^2 + 2ζ(s/ωn) + 1] }    (3.18)

where τ0 = 0.31 sec, τ1 = 0.10 sec, ωn = 90 sec^-1 and the damping ratio ζ = 0.35. The open-loop system has a pole on the imaginary axis, which would cause problems if the identification process were to take place with the system in open loop. Therefore the following simulations will estimate the parameters for the system (3.18) with unitary negative feedback introduced (closed loop). In practice this can be accomplished by using an operational amplifier. The open- and closed-loop transfer functions are given by

G(s) = (25110s + 81000) / (s^4 + 73s^3 + 8730s^2 + 81000s)    open-loop plant    (3.19)

G(s) = (25110s + 81000) / (s^4 + 73s^3 + 8730s^2 + 106110s + 81000)    closed-loop plant.    (3.20)

The closed-loop system will now be identified using the RLS algorithm.

Simulation 3.5: The sampling time to be used for the identification process was chosen to be Ts = 0.050 sec. Since the input to the closed-loop system will have to have a digital-to-analog converter (with a zero-order hold) connected to it, the closed-loop system will be discretized for the purposes of the simulations. The discrete-time model of the plant, with the zero-order hold, for a sampling time of Ts = 0.050 sec is given (in z-transform notation) by

G(z) = (0.1162z^3 - 0.0689z^2 - 0.0128z - 0.0108) / (z^4 - 1.2724z^3 + 0.2311z^2 + 0.0390z + 0.0260)    closed-loop plant.    (3.21)

This system model will be used, with the addition of a disturbance noise {e(t)} acting on the measured output of the system, to give the following representation of the actual system:

A0(q^-1)y(t) = B0(q^-1)u(t) + e(t)    (d=0)    (3.22)

where

A0(q^-1) = 1 - 1.2724q^-1 + 0.2311q^-2 + 0.0390q^-3 + 0.0260q^-4
B0(q^-1) = 0.1162q^-1 - 0.0689q^-2 - 0.0128q^-3 - 0.0108q^-4.    (3.23)

The system (3.22, 3.23) was simulated with u(t) and e(t) being independent Gaussian white noise sequences, with variances (1.0) and (0.01), respectively. The RLS algorithm given in equations (2.17) was applied with N=2000 and the model set given by

(1 + â1q^-1 + â2q^-2 + â3q^-3 + â4q^-4)y(t) = (b̂1q^-1 + b̂2q^-2 + b̂3q^-3 + b̂4q^-4)u(t) + e(t)    (3.24)
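The discretization itself is a single call in MATLAB. A minimal sketch, assuming the Control System Toolbox routine c2dm (the appendix Bode-comparison files use its counterpart d2cm for the reverse direction):

% Discretize the closed-loop plant (3.20) with a zero-order hold, Ts = 0.05 sec.
num = [25110 81000];
den = [1 73 8730 106110 81000];
Ts  = 0.050;
[numd,dend] = c2dm(num,den,Ts,'zoh');  % yields the coefficients in (3.21)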

The parameter and measurement vectors are defined as

θ̂ᵀ(t) = [â1(t) â2(t) â3(t) â4(t) b̂1(t) b̂2(t) b̂3(t) b̂4(t)]
φᵀ(t) = [-y(t-1) -y(t-2) -y(t-3) -y(t-4) u(t-1) u(t-2) u(t-3) u(t-4)].    (3.25)

The initial values chosen for the algorithm were θ̂(0)=0 and F(0) = 10^8·I. Figures 3.17 and 3.18 show the parameter estimates as a function of time for the A(q^-1) and B(q^-1) polynomial coefficients given in (3.24). The actual system parameter values (3.23) are also plotted. From these figures it is shown that the parameter estimates converge to the actual system parameters. The convergence, however, is very slow. This is due to the number of parameters being estimated (8) and the very small values of some of the coefficients.

Figure 3.17  Simulation 3.5: RLS algorithm, time history of parameter estimates â1(t), â2(t), â3(t) and â4(t).
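The RLS recursion (2.17) applied here reduces to a few lines per sample. A minimal sketch with the fourth order regressor of (3.25); the variable names are illustrative, and th and F are assumed to be initialized as above.

% One RLS step at time t for the model set (3.24); y, u are given data.
phi = [-y(t-1); -y(t-2); -y(t-3); -y(t-4); ...
        u(t-1);  u(t-2);  u(t-3);  u(t-4)];
k   = F*phi/(1 + phi'*F*phi);   % adaptation gain
th  = th + k*(y(t) - th'*phi);  % correct along the prediction error
F   = F - k*(phi'*F);           % covariance update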

Figure 3.18  Simulation 3.5: RLS algorithm, time history of parameter estimates b̂1(t), b̂2(t), b̂3(t) and b̂4(t).

The convergence to the actual parameters in this case was expected because the conditions A2-A5, given in section 2.4, are all satisfied and the model set (3.24) is equivalent to the actual system (3.22, 3.23). The residual prediction error (y(t) - ŷ(t)) is shown in figure 3.19. Figures 3.17 and 3.18 show that, for both the B(q^-1) and A(q^-1) parameter estimates, two of the coefficients in each case are very small compared to the others. This is an indication that a lower-order model may be able to adequately represent the system. Therefore, an attempt will be made to use the RLS algorithm to estimate second and third order models.

Figure 3.19  Simulation 3.5: RLS algorithm, time history of the residual prediction error ε(t).

The guidelines given in section 2.7, regarding model order, and also the validation criteria (2.98) given in section 2.6, will be used to determine whether a second or third order model can adequately represent the system. Simulation 3.5 will now be re-run for 2nd and 3rd order model estimates. The 2nd and 3rd order simulations use the same values for the input u(t) and the disturbance noise {e(t)}. For the third order model the RLS algorithm given in equations (2.17) was applied with the model set given by

(1 + â1q^-1 + â2q^-2 + â3q^-3)y(t) = (b̂1q^-1 + b̂2q^-2 + b̂3q^-3)u(t) + e(t).    (3.26)

The parameter and measurement vectors are defined as

"'T .......... ,....... .......... ..,...... .......... ........... a (t) = [at(l) a2(t) a3(l) bt(t) h)(t)]' q>T(t) = [-y(t-1) -y(t-2) -y(t-3) u(t-1) u(t-2) u(t-3)]. (3.27) For the second order model the RLS algorithm given in equations (2. 17), was applied with the model set given by (1 +a1q-l + a 2 q-2)y(t)= (b1q-1 + +e(t). (3.28) The parameter and measurement vectors are defined as """T a (t) = [at(t) a2l, q>T(t) = [-y(t-1) -y(t-2) u(t-1) u(t-2)]. (3.29) Figures 3.20 and 3.21 the parameter estimates for the third order model parameter set, while figures 3.22 and 3.23 show the parameter estimates in the second order case. The corresponding residual P-rediction error (y(t) y(t)), is shown in figure 3.24 (third order model) and figure 3.25 (second order model). In each case, the RLS algorithm minimizes this error. The A(q-1) and B(q-1) polynomials found from the RLS algorithm for the !)econd and third order model estimates are listed below, Second order model: (1 -1.4031q-l + 0.4330q-2)y(t) = (0.1167q-l 0.0855q-2)u(t) + e(t) (3.30) Third order model: (1 -1.277q-l + 0.165q-2 + 0.136q-3)y(t) = (0.1167q-10.0705q-20.0225q-3)u(t) + e(t) (3.31) 68

PAGE 75

Figure 3.20  Simulation 3.5: RLS algorithm, time history of parameter estimates â1(t), â2(t) and â3(t).

Figure 3.21  Simulation 3.5: RLS algorithm, time history of parameter estimates b̂1(t), b̂2(t) and b̂3(t).

Figure 3.22  Simulation 3.5: RLS algorithm, time history of parameter estimates â1(t) and â2(t).

Figure 3.23  Simulation 3.5: RLS algorithm, time history of parameter estimates b̂1(t) and b̂2(t).

Figure 3.24  Simulation 3.5: RLS algorithm, time history of the residual prediction error ε(t), for the third order model.

Figure 3.25  Simulation 3.5: RLS algorithm, time history of the residual prediction error ε(t), for the second order model.

The validation procedure was performed with the three identified models off-line. The values of R(0) and RN(i) for all three estimated models were calculated by comparing the identified models to the actual system over 500 iterations; the values are listed below.

Second order model: R2nd(0) = 1.5200, RN(1) = 0.970, RN(2) = 0.942.
Third order model: R3rd(0) = 0.8400, RN(1) = 0.744, RN(2) = 0.525, RN(3) = 0.410.
Fourth order model: R4th(0) = 0.0098, RN(1) = 0.052, RN(2) = 0.010, RN(3) = 0.006, RN(4) = 0.003.

In section 2.7 a guideline for model order was given: if Rn+1(0) ≥ 0.8Rn(0), where R(0) is the variance of the prediction errors, then no further increase in model order is recommended. For this example, using the above guideline, the values of R(0) for all three models indicate that the fourth order model is needed to represent the system. The validation criterion is given in (2.98) as |RN(i)| ≤ 2.17/√N (≤ 0.0970 for N=500). The above values show only the fourth order model meets this criterion. It should be noted that the criterion is only a guideline; in some practical cases a less stringent criterion may be tolerated.
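A minimal sketch of this off-line computation, following the validation code in the appendix files; err is assumed to hold the N residuals obtained by comparing an identified model against the actual system.

% Normalized residual autocorrelations RN(i) = R(i)/R(0), and the
% validation threshold 2.17/sqrt(N) of (2.98).
N  = length(err);
R0 = sum(err.^2)/N;                      % residual variance R(0)
for i = 1:4
  Ri    = sum(err(1+i:N).*err(1:N-i))/N; % autocorrelation R(i)
  RN(i) = Ri/R0;
end
threshold = 2.17/sqrt(N);                % |RN(i)| must stay below this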

The discrete-time closed-loop models can be converted to continuous-time models assuming a zero-order hold on the input. The corresponding second and third order continuous-time models of the plant are

Third order:
G(s) = (2.8358s^2 + 88.0815s + 281.7023) / (s^3 + 39.9020s^2 + 346.9935s + 285.2681)    closed-loop plant    (3.32)

Second order:
G(s) = (2.9427s + 18.4906) / (s^2 + 16.7404s + 17.7202)    closed-loop plant    (3.33)

The open-loop transfer function of the plant can be found from the relation

Gopen-loop(s) = Gclosed-loop(s) / (1 - Gclosed-loop(s))

which gives the following open-loop plant transfer functions.

Third order:
G(s) = (2.8358s^2 + 88.0815s + 281.7023) / (s^3 + 37.0662s^2 + 258.9120s + 3.5659)    open-loop plant    (3.34)

Second order:
G(s) = (2.9427s + 18.4906) / (s^2 + 13.7976s - 0.7704)    open-loop plant    (3.35)

Bode plot comparisons of the second and third order models against the fourth order model (which is the actual system) are given in figures 3.26 and 3.27 (closed-loop) and figures 3.28 and 3.29 (open-loop).
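For transfer functions with a monic denominator this algebra is a single subtraction, which is how the appendix Bode-comparison files compute it. A minimal sketch using the second order model (3.33); numc is padded with a leading zero to the length of denc.

% Recover the open-loop plant from the closed-loop model:
% G_ol = G_cl/(1 - G_cl) for unity negative feedback.
numc = [0 2.9427 18.4906];   % closed-loop numerator, (3.33)
denc = [1 16.7404 17.7202];  % closed-loop denominator
numo = numc;                 % open-loop numerator is unchanged
deno = denc - numc;          % open-loop denominator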

These figures show the effects a reduced-order model can have on the frequency response of the plant. If the differences are large, problems will arise when trying to design a controller for the plant. It is interesting to note that the second order model contains an unstable pole at s = 0.0556 rad/sec. The original plant had a marginally stable pole at s = 0.

Figure 3.26  Simulation 3.5: Closed-loop Bode plot comparison, third order model versus actual system.

Figure 3.27  Simulation 3.5: Closed-loop Bode plot comparison, second order model versus actual system.

20 "' .0 0 -Q) "0 .a 0 "2 -20 __ actual system ----3rd order model -40 L__....__.......__.__._ ......................... __.____.__..._ ........................ 1 o1 1 o0 1 o 1 1 o2 --100 C) Q) "0 -Frequency (rad/sec) -----Q) __ actual system J:: a. _150 ----3rd order model 10-1 10 101 102 Frequency {rad/sec) Figure 3.28 Simulation 3.5: Open-loop Bode plot comparison, third order model versus actual system. 77

PAGE 84

Figure 3.29  Simulation 3.5: Open-loop Bode plot comparison, second order model versus actual system.

3.2 Simulated Models: Examples

In this section several of the convergence properties of the algorithms, which were discussed in section 2.4, will be analyzed by simulation.

Simulation 3.6: This simulation demonstrates the effect the choice of input has on parameter estimation. The system to be identified is given as

(1 - 1.5q^-1 + 0.7q^-2)y(t) = (1.0q^-1 + 0.5q^-2)u(t).    (3.36)

The system was simulated with the input u(t) = 0.1 sin(10t). The RLS algorithm given in equation (2.17) was applied with the model set given by

(1 + â1q^-1 + â2q^-2)y(t) = (b̂1q^-1 + b̂2q^-2)u(t).    (3.37)

The parameter and measurement vectors are defined as

θ̂ᵀ(t) = [â1(t) â2(t) b̂1(t) b̂2(t)]
φᵀ(t) = [-y(t-1) -y(t-2) u(t-1) u(t-2)].    (3.38)

The initial values chosen for the algorithm were θ̂(0)=0 and F(0) = 1000·I. Figures 3.30 and 3.31 show the parameter estimates as a function of time for the A(q^-1) and B(q^-1) polynomials. The actual system parameters, b1=1.0, b2=0.5, a1=-1.5 and a2=0.7 from (3.36), are also plotted. Figures 3.30 and 3.31 show the parameter estimates do not converge to the actual parameters; they converge to b1=0.867, b2=0.476, a1=-1.22 and a2=0.44. Convergence is not expected in this case because condition A5 is not satisfied: the input is not persistently exciting (P.E.). Another indicator of the non-convergence of the algorithm is shown in figure 3.32.
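A minimal sketch of this excitation check; phi is assumed to be the N-by-4 matrix whose rows are the regressors, and the running minimum eigenvalue of the sample information matrix in (2.55) is tracked.

% Minimum eigenvalue of (1/t) * sum_{k=1..t} phi(k)*phi(k)' as t grows.
% An eigenvalue tending to zero signals an input that is not P.E.
[N,np] = size(phi);
S = zeros(np);
for t = 1:N
  S = S + phi(t,:)'*phi(t,:);
  eigmin(t) = min(eig(S/t));
end
plot(1:N,eigmin), xlabel('t')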

Figure 3.30  Simulation 3.6: RLS algorithm, time history of parameter estimates â1(t) and â2(t).

Figure 3.31  Simulation 3.6: RLS algorithm, time history of parameter estimates b̂1(t) and b̂2(t).

Figure 3.32  Simulation 3.6: time history of minimum eigenvalues for (1/N) Σ(t=1..N) φ(t)φᵀ(t).

Figure 3.32 is a plot of the minimum eigenvalues of the matrix given in equation (2.55). In section 2.4 it was noted that in order for this matrix to have an inverse, it must be nonsingular. Figure 3.32 shows the minimum eigenvalues of this matrix tending to zero as time increases. This is a good indicator that the matrix is close to becoming singular, and an indication that the RLS algorithm will not provide good results. The above identification procedure was repeated for an input {u(t)} that was a Gaussian white noise sequence with a variance of (1.0). Figures 3.33 and 3.34 now show that the parameter estimates converge to the actual parameters given in (3.36). This is expected, because condition A5 is now satisfied: the input is persistently exciting.

Figure 3.33  Simulation 3.6: RLS algorithm, time history of parameter estimates â1(t) and â2(t).

Figure 3.34  Simulation 3.6: RLS algorithm, time history of parameter estimates b̂1(t) and b̂2(t).

A plot of the minimum eigenvalues of (2.55) is also given in figure 3.35. The minimum eigenvalues in this case are not tending to zero, indicating the algorithm is working properly.

Figure 3.35  Simulation 3.6: time history of minimum eigenvalues for (1/N) Σ(t=1..N) φ(t)φᵀ(t).

Simulation 3.7: This simulation demonstrates the behavior of the RLS algorithm when condition A4 of section 2.4 is violated. The system to be identified, given in (3.39), has the parameters b1 = 1.0, b2 = -0.6, a1 = -1.4 and a2 = 0.48. Here A0(q^-1) = (1 - 0.6q^-1)(1 - 0.8q^-1) and B0(q^-1) = q^-1(1.0 - 0.6q^-1) share the common factor (1 - 0.6q^-1), so the coprimeness required by condition A4 fails.

The system was simulated with the input {u(t)} being a Gaussian white noise sequence (variance = 1.0). The RLS algorithm given in equation (2.17) was applied with the model set given by

(1 + â1q^-1 + â2q^-2)y(t) = (b̂1q^-1 + b̂2q^-2)u(t).    (3.40)

The parameter and measurement vectors are defined as

θ̂ᵀ(t) = [â1(t) â2(t) b̂1(t) b̂2(t)]
φᵀ(t) = [-y(t-1) -y(t-2) u(t-1) u(t-2)].    (3.41)

The initial values chosen for the algorithm were θ̂(0)=0 and F(0) = 1000·I. Figures 3.36 and 3.37 show the parameter estimates as a function of time for the A(q^-1) and B(q^-1) polynomials given in (3.40). The actual system parameters from (3.39) are b1=1.0, b2=-0.6, a1=-1.4 and a2=0.48. From these figures it is shown that the parameter estimates are not converging to the actual parameters. The parameter values that the RLS algorithm converged to are b1=1.0, b2=0.303, a1=-0.497 and a2=-0.2424. The parameter estimates do not converge to the actual values in this case because condition A4 in section 2.4 is violated. Note that the estimated model is still input-output equivalent to the true system: both reduce to q^-1/(1 - 0.8q^-1) after cancelling a common factor, so the parameterization is simply not unique when A4 fails.
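Condition A4 can be checked before running the estimator by inspecting the roots of the system polynomials. A minimal sketch for the polynomials of (3.39); the rounding is only to make the numerical comparison of roots robust.

% Coprimeness check for A0 and B0 (coefficients in descending powers).
a0 = [1 -1.4 0.48];   % roots 0.8 and 0.6
b0 = [1.0 -0.6];      % root 0.6 (the delay q^-1 is factored out)
common = intersect(round(roots(a0)*1e6), round(roots(b0)*1e6))/1e6
% a nonempty result (here 0.6) means condition A4 is violated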

Figure 3.36  Simulation 3.7: RLS algorithm, time history of parameter estimates â1(t) and â2(t).

Figure 3.37  Simulation 3.7: RLS algorithm, time history of parameter estimates b̂1(t) and b̂2(t).

Simulation 3.8: This simulation demonstrates the behavior of the output error algorithm when condition A1 is not satisfied. The system to be analyzed is the furnace model (mode 2) from section 3.1, with the disturbance {e(t)} modified so the system is represented as

(1 - 1.3767q^-1 + 0.4173q^-2)y(t) = (11.52q^-1 + 2.50q^-2)u(t) + (1 - 1.3767q^-1 + 0.4173q^-2)e(t)    (3.42)

The system now has the S3 model structure (section 2.6). The output error algorithm corresponds to systems of this structure. The system was simulated with u(t) and e(t) being independent Gaussian white noise sequences, with variances (0.01) and (0.0001), respectively. The output error algorithm given in equations (2.36) was applied with the model set given by

(1 + â1q^-1 + â2q^-2)y(t) = (b̂1q^-1 + b̂2q^-2)u(t) + (1 + â1q^-1 + â2q^-2)e(t).    (3.43)

The parameter and measurement vectors are defined as

θ̂ᵀ(t) = [â1(t) â2(t) b̂1(t) b̂2(t)]
φᵀ(t) = [-ŷ(t-1) -ŷ(t-2) u(t-1) u(t-2)].    (3.44)

Figures 3.38 and 3.39 show the resulting parameter estimates, which do not converge to the actual system parameters.

Figure 3.38  Simulation 3.8: Output Error algorithm, time history of parameter estimates â1(t) and â2(t).

Figure 3.39  Simulation 3.8: Output Error algorithm, time history of parameter estimates b̂1(t) and b̂2(t).

The non-convergence of the output error algorithm is due to condition A1 in section 2.4 being violated. Figure 3.40 illustrates the violation of condition A1.

Figure 3.40  Simulation 3.8: plot of Re[1/C0(e^jω) - 1/2].

Simulation 3.9: This simulation demonstrates the convergence of the output error algorithm when all conditions in section 2.4 (A1-A5) are satisfied. The system to be analyzed is given by

(1 - 1.2431q^-1 + 0.490q^-2)y(t) = (0.211q^-1 - 0.1539q^-2 + 0.9308q^-3)u(t) + (1 - 1.2431q^-1 + 0.490q^-2)e(t)    (3.45)

The system is in the form of the S3 model structure (section 2.6), and the output error algorithm corresponds to systems of this structure. The system was simulated with u(t) and e(t) being independent Gaussian white noise sequences, with variances (10.0) and (0.01), respectively.

The output error algorithm given in equations (2.36) was applied with the model set given by

(1 + â1q^-1 + â2q^-2)y(t) = (b̂1q^-1 + b̂2q^-2 + b̂3q^-3)u(t) + (1 + â1q^-1 + â2q^-2)e(t).    (3.46)

The parameter and measurement vectors are defined as

θ̂ᵀ(t) = [â1(t) â2(t) b̂1(t) b̂2(t) b̂3(t)]
φᵀ(t) = [-ŷ(t-1) -ŷ(t-2) u(t-1) u(t-2) u(t-3)].    (3.47)

Figures 3.41 and 3.42 show the parameter estimates converging to the actual system parameters, and figure 3.43 verifies that condition A1 is satisfied.

Figure 3.41  Simulation 3.9: Output Error algorithm, time history of parameter estimates â1(t) and â2(t).

Figure 3.42  Simulation 3.9: Output Error algorithm, time history of parameter estimates b̂1(t), b̂2(t) and b̂3(t).

Figure 3.43  Simulation 3.9: plot of Re[1/C0(e^jω) - 1/2].

3.3 Tracking Time-Varying Systems

Simulation 3.10: This simulation will present the tracking of a time-varying system. The system is represented by the furnace model in mode 1 (section 3.1) with a time-varying parameter b2. The system is

(1 - 1.5781q^-1 + 0.5805q^-2)y(t) = q^-1(8.68q^-1 + (2.00 + cos(0.1t))q^-2)u(t) + e(t)    (3.48)

The system was simulated with u(t) and e(t) being independent Gaussian white noise sequences, with variances (0.01) and (0.0001), respectively. The RLS algorithm given in equations (2.95), with a constant forgetting factor, was applied with the model set given by

(1 + â1q^-1 + â2q^-2)y(t) = q^-1(b̂1q^-1 + b̂2q^-2)u(t) + e(t).    (3.49)

The parameter and measurement vectors are defined as

θ̂ᵀ(t) = [â1(t) â2(t) b̂1(t) b̂2(t)]
φᵀ(t) = [-y(t-1) -y(t-2) u(t-2) u(t-3)].    (delay: d=1)    (3.50)

The initial values chosen for the algorithm were θ̂(0)=0 and F(0) = 10000·I. The value of the forgetting factor α1 was chosen as α1=0.96. Figures 3.44 and 3.45 show the parameter estimates as a function of time for the A(q^-1) and B(q^-1) polynomial coefficients given in (3.49). The actual system parameter values are also plotted and are, from (3.48): b1=8.68, b2(t)=2.00 + cos(0.1t), a1=-1.5781 and a2=0.5805.

Figure 3.44  Simulation 3.10: RLS algorithm, time history of parameter estimates â1(t), â2(t) and b̂1(t).
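A minimal sketch of the forgetting-factor recursion used here, patterned after the appendix files (where the factor appears as the variable landa); discounting old data with α1 = 0.96 is what lets the estimator follow b2(t).

% One RLS step with a constant forgetting factor alpha1.
alpha1 = 0.96;
phi = [-y(t-1); -y(t-2); u(t-2); u(t-3)];   % regressor with delay d = 1
k   = F*phi/(alpha1 + phi'*F*phi);          % adaptation gain
th  = th + k*(y(t) - th'*phi);              % parameter update
F   = (F - k*(phi'*F))/alpha1;              % discounted covariance update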

Figure 3.45  Simulation 3.10: RLS algorithm, time history of parameter estimate b̂2(t).

From these figures it is obvious that the parameter estimates converge to the actual system parameters very quickly for the time-invariant parameters, while the time-varying parameter b2(t) is also tracked fairly well. This example was included to demonstrate the algorithm's capability for tracking time-varying parameters; it clearly shows that the algorithms can track time-varying systems once the weighting sequences α1(t) and α2(t) are introduced. There are other choices for the weighting sequences [17] that could improve the tracking in this simulation, but these are not discussed in this thesis. The residual prediction error (y(t) - ŷ(t)) is shown in figure 3.46; it reflects the time lag between the estimate and the time-varying parameter.

Figure 3.46  Simulation 3.10: RLS algorithm, time history of the residual prediction error ε(t).

3.4 Unmodeled Dynamics

This section presents an example of a system with unmodeled dynamics. The system has an additive form of unmodeled dynamics, G = G0 + μW. In section 2.8, systems with multiplicative forms were discussed.

Simulation 3.11: The system to be analyzed is given as

A0(q^-1)y(t) = B0(q^-1)u(t) + Ψ(q^-1)u(t)    (3.51)

where

A0(q^-1) = 1 - 1.3767q^-1 + 0.4173q^-2
B0(q^-1) = 11.52q^-1 + 2.50q^-2
Ψ(q^-1) = μ / (1 - 0.7q^-1),    0 ≤ μ ≤ 1.    (3.52)

The system was simulated with the input {u(t)} being a Gaussian white noise sequence (variance = 0.01). The RLS algorithm given in equation (2.17) was applied with the model set given by

(1 + â1q^-1 + â2q^-2)y(t) = (b̂1q^-1 + b̂2q^-2)u(t).    (3.53)

The parameter and measurement vectors are defined as

θ̂ᵀ(t) = [â1(t) â2(t) b̂1(t) b̂2(t)]
φᵀ(t) = [-y(t-1) -y(t-2) u(t-1) u(t-2)].    (3.54)

Figures 3.47 and 3.48 show the parameter estimates for μ=0.50, and figures 3.49 and 3.50 show the estimates for a larger gain on the unmodeled dynamics.
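A minimal sketch of how such a data set can be generated, assuming the perturbation filter μ/(1 - 0.7q^-1) read from (3.52); the filter state w carries the unmodeled part.

% Simulate (3.51)-(3.52): nominal furnace model plus additive
% unmodeled dynamics mu/(1 - 0.7q^-1) acting on the input.
N  = 300; mu = 0.5;
u  = sqrt(0.01)*randn(N,1);
y  = zeros(N,1); w = zeros(N,1);
for t = 3:N
  w(t) = 0.7*w(t-1) + mu*u(t);                 % w = Psi(q^-1)u
  y(t) = 1.3767*y(t-1) - 0.4173*y(t-2) ...
       + 11.52*u(t-1) + 2.50*u(t-2) + w(t);    % nominal part + perturbation
end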

Figure 3.47  Simulation 3.11: RLS algorithm, μ=0.50, time history of parameter estimates â1(t) and â2(t).

Figure 3.48  Simulation 3.11: RLS algorithm, μ=0.50, time history of parameter estimates b̂1(t) and b̂2(t).

Figure 3.49  Simulation 3.11: RLS algorithm, time history of parameter estimates â1(t) and â2(t).

Figure 3.50  Simulation 3.11: RLS algorithm, time history of parameter estimates b̂1(t) and b̂2(t).

For the larger gain the parameter estimates converge to the values b̂1=12.22, b̂2=2.30, â1=-1.42 and â2=0.45. The increase in gain for the unmodeled dynamics causes the parameter estimates to deviate further from the nominal values. The gain μ on the unmodeled dynamics can be varied from zero (nominal parameters) to 1, which gives the largest perturbation from the nominal parameters. A worst-case set of parameters can be developed from these runs.

CHAPTER 4

CONCLUSIONS

In this thesis, the overall aspects of system identification were discussed. Theoretical aspects of the algorithms were presented in chapter 2. The recursive least squares, extended least squares and output error algorithms were derived in sections 2.1-2.3, and the convergence properties of all three algorithms were analyzed in section 2.4. The topic of tracking time-varying system parameters was briefly discussed in section 2.5; it was noted that tracking time-varying parameters is a critical issue for adaptive control. Sections 2.6 and 2.7 discussed some of the practical issues of system identification, including the important topic of model validation, and several validation and model order criteria were presented to aid practical identification work. The topic of unmodeled system dynamics, and how it relates to robust control problems, was discussed in section 2.8. The last section of chapter 2 presented the practical applications of recursive identification; there it was shown that recursive identification has applications in many fields, including (adaptive) control and signal processing (adaptive filtering).

The purpose of the simulations in chapter 3 was to confirm by simulation some of the theoretical properties presented in chapter 2. In chapter 3 the theoretical convergence conditions discussed in section 2.4 were illustrated and confirmed by simulations. Two practical identification systems were also simulated. In the second case (machine tool), an attempt was made to reduce the system order, but the lower-order models did not meet the validation criterion presented in chapter 2. This example was included to illustrate how, in practice, many system models may be identified, and how the validation criterion can be used to compare different models and help choose the best one. The final example was included to show how unmodeled dynamics can be represented by a set of parameters which are deviations from the nominal system parameters. In summary, this thesis presented some of the important aspects of system identification. The importance and applicability of system identification to practical problems was shown through simulation examples.

APPENDIX A

MATLAB INPUT FILES FOR SIMULATION EXAMPLES

% RLS ALGORITHM - SIMULATION 3.1
%
% SYSTEM: FURNACE   delay: d=1
%
% A(q^-1)*y(t) = q^-d*B(q^-1)*u(t) + e(t)
%
% A(q^-1) = 1 + (-1.5781)*q^-1 + 0.5805*q^-2
% B(q^-1) = 8.6800*q^-1 + 2.0000*q^-2
%
% u(t) = input white noise with variance = 0.01
% e(t) = white noise with variance = 0.0001
%
% INITIAL CONDITIONS
clear
y(1)=0; y(2)=0; y(3)=0;
error(1)=0; error(2)=0; error(3)=0;
yhat(1)=0; yhat(2)=0; yhat(3)=0;
epsilon(1)=0; epsilon(2)=0; epsilon(3)=0;
thetahat(1,1:4)= [0 0 0 0];
thetahat(2,1:4)= [0 0 0 0];
thetahat(3,1:4)= [0 0 0 0];
Fold=500*eye(4);   % F(0) = 500*I
landa(1)=1.0; landa(2)=1.0; landa(3)=1.0;
n=103;
% INPUT NOISE GENERATION
randn('seed',5)
u(1)= 0; u(2)= 0;
for a=3:(n+1)
  u(a)=sqrt(.01)*randn;
end
% DISTURBANCE NOISE GENERATION
randn('seed',1)
for a=1:(n+2)
  e(a)=sqrt(0.0001)*randn;
end
% Least Squares Algorithm
for t=3:n
  phi(t,1:4)= [-y(t) -y(t-1) u(t-1) u(t-2)];
  y(t+1)= 1.5781*y(t) - 0.5805*y(t-1) + 8.68*u(t-1) + 2.0*u(t-2) + e(t+1);
  yhat(t+1)= thetahat(t,1:4)*phi(t,1:4)';
  epsilon(t+1)= y(t+1) - thetahat(t,1:4)*phi(t,1:4)';
  Fnew= (1/landa(t))*(Fold - (Fold*phi(t,1:4)'*phi(t,1:4)*Fold)/(landa(t) + phi(t,1:4)*Fold*phi(t,1:4)'));
  thetahat(t+1,1:4) = (thetahat(t,1:4)' + Fnew*phi(t,1:4)'*epsilon(t+1))';
  Fold= Fnew;
  error(t+1)= y(t+1) - yhat(t+1);
  landa(t+1)=1;
end
% PLOT
set(gcf,'PaperPosition',[2,2,5.9,3.1]);
c=n-3;
for t=1:c
  thetahatp(t,1:4) = thetahat(t+3,1:4);
  errorp(t) = error(t+3);
end
!rm sim3_1.ps
for p=1:c
  a10(p) = -1.5781;
  a20(p) = 0.5805;
  b10(p) = 8.6800;
  b20(p) = 2.0000;
end
x=[1:1:c];
a1=thetahatp(1:c,1);
a2=thetahatp(1:c,2);
b1=thetahatp(1:c,3);
b2=thetahatp(1:c,4);
plot(x,a1,'-',x,a2,'-',x,a10,'--',x,a20,'--')
grid
title('')
xlabel('t')
ylabel('A - parameter value')
axis([1 25 -4.0 3.0])
print -append sim3_1
plot(x,b1,'-',x,b2,'-',x,b10,'--',x,b20,'--')
grid
title('')
xlabel('t')
ylabel('B - parameter value')
axis([1 40 -1.5 10.5])
print -append sim3_1
b=[1:1:n+1];
plot(b,error)
title('')
xlabel('t')
ylabel('y(t) - y^(t)')
grid
axis([10 c -0.2 0.2])
print -append sim3_1

% RLS ALGORITHM - SIMULATION 3.2
%
% SYSTEM: FURNACE   d=0 (delay)
%
% A(q^-1)*y(t) = q^-d*B(q^-1)*u(t) + e(t)
%
% A(q^-1) = 1 + (-1.3767)*q^-1 + 0.4173*q^-2
% B(q^-1) = 11.52*q^-1 + 2.50*q^-2
%
% u(t) = input white noise with variance = 0.01
% e(t) = white noise with variance = 0.0001
%
% INITIAL CONDITIONS
clear
y(1)=0; y(2)=0;
yhat(1)=0; yhat(2)=0;
yhat2(1)=0; yhat2(2)=0;
error(1)=0; error(2)=0;
epsilon(1)=0; epsilon(2)=0;
thetahat(1,1:4)= [0 0 0 0];
thetahat(2,1:4)= [0 0 0 0];
Fold=500*eye(4);   % F(0) = 500*I
landa(1)=1.0; landa(2)=1.0;
n=102;
% INPUT NOISE GENERATION
randn('seed',5)
u(1)= 0;
for a=2:(n+1)
  u(a)=sqrt(0.01)*randn;
end
% DISTURBANCE NOISE GENERATION
randn('seed',1)
for a=1:(n+2)
  e(a)=sqrt(0.0001)*randn;
end
% Weighted Least Squares Algorithm
for t=2:n
  phi(t,1:4)= [-y(t) -y(t-1) u(t) u(t-1)];
  y(t+1)= 1.3767*y(t) - 0.4173*y(t-1) + 11.52*u(t) + 2.5*u(t-1) + e(t+1);
  yhat(t+1)= thetahat(t,1:4)*phi(t,1:4)';
  epsilon(t+1)= y(t+1) - thetahat(t,1:4)*phi(t,1:4)';
  Fnew= (1/landa(t))*(Fold - (Fold*phi(t,1:4)'*phi(t,1:4)*Fold)/(landa(t) + phi(t,1:4)*Fold*phi(t,1:4)'));
  thetahat(t+1,1:4) = (thetahat(t,1:4)' + Fnew*phi(t,1:4)'*epsilon(t+1))';
  Fold= Fnew;
  error(t+1)= y(t+1) - yhat(t+1);
  yhat2(t+1)= thetahat(t+1,1:4)*phi(t,1:4)';
  error2(t+1)= y(t+1) - yhat2(t+1);
  landa(t+1)=1;
end
% PLOT
!rm sim3_2.ps
set(gcf,'PaperPosition',[2,2,5.9,3.1]);
c=n-2;
for t=1:c
  thetahatp(t,1:4) = thetahat(t+2,1:4);
  errorp(t) = error2(t+3);
end
for p=1:c
  a10(p) = -1.3767;
  a20(p) = 0.4173;
  b10(p) = 11.5200;
  b20(p) = 2.5000;
end
x=[1:1:c];
a1=thetahatp(1:c,1);
a2=thetahatp(1:c,2);
b1=thetahatp(1:c,3);
b2=thetahatp(1:c,4);
plot(x,a1,'-',x,a2,'-',x,a10,'--',x,a20,'--')
grid
title('')
xlabel('t')
ylabel('A - parameter value')
axis([1 25 -4.5 5.5])
print -append sim3_2
plot(x,b1,'-',x,b2,'-',x,b10,'--',x,b20,'--')
grid
title('')
xlabel('t')
ylabel('B - parameter value')
axis([1 40 -1.0 13.5])
print -append sim3_2
b=[1:1:n+1];
plot(b,error2)
title('')
xlabel('t')
ylabel('y(t) - y^(t)')
grid
axis([10 c -0.20 0.20])
print -append sim3_2

% EXTENDED LEAST SQUARES ALGORITHM - SIMULATION 3.3
%
% SYSTEM: FURNACE   delay: d=1
%
% A(q^-1)*y(t) = q^-d*B(q^-1)*u(t) + C(q^-1)*e(t)
%
% A(q^-1) = 1 + (-1.5781)*q^-1 + 0.5805*q^-2
% B(q^-1) = 8.6800*q^-1 + 2.0000*q^-2
% C(q^-1) = 1 - 0.6*q^-1 + 0.20*q^-2
%
% u(t) = input white noise with variance = 0.01
% e(t) = white noise with variance = 0.0001
%
% INITIAL CONDITIONS
clear
y(1)=0; y(2)=0; y(3)=0;
error(1)=0; error(2)=0; error(3)=0;
error2(1)=0; error2(2)=0; error2(3)=0;
yhat(1)=0; yhat(2)=0; yhat(3)=0;
yhat2(1)=0; yhat2(2)=0; yhat2(3)=0;
epsilon(1)=0; epsilon(2)=0; epsilon(3)=0;
epsilonb(1)=0; epsilonb(2)=0; epsilonb(3)=0;
phi(1,1:6)= [0 0 0 0 0 0];
phi(2,1:6)= [0 0 0 0 0 0];
thetahat(1,1:6)= [0 0 0 0 0 0];
thetahat(2,1:6)= [0 0 0 0 0 0];
thetahat(3,1:6)= [0 0 0 0 0 0];
Fold=1000*eye(6);   % F(0) = 1000*I
landa0=1.0;
landa(1)=1.0; landa(2)=1.0; landa(3)=1.0;
landa2(1)=1.0; landa2(2)=1.0; landa2(3)=1.0;
n=1000;
% INPUT NOISE GENERATION
randn('seed',5)
u(1)= 0; u(2)= 0;
for a=3:(n+1)
  u(a)=sqrt(.01)*randn;
end
% DISTURBANCE NOISE GENERATION
randn('seed',1)
for a=1:(n+2)
  e(a)=sqrt(0.0001)*randn;
end
% Extended Least Squares Algorithm
for t=3:n
  landa(t)=1;
  phi(t,1:6)= [-y(t) -y(t-1) u(t-1) u(t-2) epsilonb(t) epsilonb(t-1)];
  y(t+1)= 1.5781*y(t) - 0.5805*y(t-1) + 8.68*u(t-1) + 2.0*u(t-2) + e(t+1) - 0.6*e(t) + 0.2*e(t-1);
  yhat(t+1)= thetahat(t,1:6)*phi(t,1:6)';
  epsilon(t+1)= (y(t+1) - thetahat(t,1:6)*phi(t,1:6)')/(1 + phi(t,1:6)*Fold*phi(t,1:6)');
  thetahat(t+1,1:6) = (thetahat(t,1:6)' + Fold*phi(t,1:6)'*epsilon(t+1))';
  Fnew= (1/landa(t))*(Fold - (Fold*phi(t,1:6)'*phi(t,1:6)*Fold)/(landa(t) + phi(t,1:6)*Fold*phi(t,1:6)'));
  Fold= Fnew;
  epsilonb(t+1)= y(t+1) - thetahat(t+1,1:6)*phi(t,1:6)';
  yhat2(t+1)= thetahat(t+1,1:6)*phi(t,1:6)';
  error(t+1)= y(t+1) - yhat(t+1);
  error2(t+1)= y(t+1) - yhat2(t+1);
end
% PLOT
!rm sim3_3.ps
set(gcf,'PaperPosition',[2,2,5.9,3.1]);
c=n-3;
for t=1:c
  thetahatp(t,1:6) = thetahat(t+3,1:6);
  errorp(t) = error2(t+3);
end
for p=1:c
  a10(p) = -1.5781;
  a20(p) = 0.5805;
  b10(p) = 8.6800;
  b20(p) = 2.0000;
  c10(p) = -0.6000;
  c20(p) = 0.2000;
end
x=[1:1:c];
a1=thetahatp(1:c,1);
a2=thetahatp(1:c,2);
b1=thetahatp(1:c,3);
b2=thetahatp(1:c,4);
c1=thetahatp(1:c,5);
c2=thetahatp(1:c,6);
plot(x,a1,'-',x,a2,'-',x,a10,'--',x,a20,'--')
grid
title('')
xlabel('t')
ylabel('A - parameter value')
axis([1 30 -2.5 2.5])
%pause
print -append sim3_3
plot(x,b1,'-',x,b2,'-',x,b10,'--',x,b20,'--')
grid
title('')
xlabel('t')
ylabel('B - parameter value')
axis([1 60 -1.0 11.0])
%pause
print -append sim3_3
plot(x,c1,'-',x,c2,'-',x,c10,'--',x,c20,'--')
grid
title('')
xlabel('t')
ylabel('C - parameter value')
axis([1 c -1.0 1.0])
print -append sim3_3
b=[1:1:n+1];
plot(b,error)
title('')
xlabel('t')
ylabel('y(t) - y^(t)')
grid
axis([20 c -0.20 0.20])
print -append sim3_3
plot(b,error2)
title('')
xlabel('t')
ylabel('y(t) - y^(t)')
grid
axis([20 c -0.20 0.20])
print -append sim3_3

% SIMULATION 3.3 - CONDITION A1
%
% C(q^-1) = 1 - 0.6*q^-1 + 0.2*q^-2
%
% CONDITION A1: Re[H(q^-1) = 1/C(q^-1) - 1/2] > 0
%
Ts=1;
c=[1 -0.6 0.2];
cx=[2 0 0];
num=cx - c;
den= 2*c;
[h,w]=freqz(num,den,100,'whole');
x=real(h);
plot(w,x)
xlabel('w (radians)')
ylabel('real part')
axis([0 2*pi 0 2])
!rm condition.ps
set(gcf,'PaperPosition',[2,4,5.9,3.4]);
print -append condition

% EXTENDED LEAST SQUARES ALGORITHM - SIMULATION 3.4
%
% SYSTEM: FURNACE   delay: d=0
%
% A(q^-1)*y(t) = q^-d*B(q^-1)*u(t) + C(q^-1)*e(t)
%
% A(q^-1) = 1 + (-1.3767)*q^-1 + 0.4173*q^-2
% B(q^-1) = 11.52*q^-1 + 2.5000*q^-2
% C(q^-1) = 1 - 0.6*q^-1 + 0.2*q^-2
%
% u(t) = input white noise with variance = 0.01
% e(t) = white noise with variance = 0.0001
%
% INITIAL CONDITIONS
clear
y(1)=0; y(2)=0; y(3)=0;
error(1)=0; error(2)=0; error(3)=0;
error2(1)=0; error2(2)=0; error2(3)=0;
yhat(1)=0; yhat(2)=0; yhat(3)=0;
yhat2(1)=0; yhat2(2)=0; yhat2(3)=0;
epsilon(1)=0; epsilon(2)=0; epsilon(3)=0;
epsilonb(1)=0; epsilonb(2)=0; epsilonb(3)=0;
phi(1,1:6)= [0 0 0 0 0 0];
phi(2,1:6)= [0 0 0 0 0 0];
thetahat(1,1:6)= [0 0 0 0 0 0];
thetahat(2,1:6)= [0 0 0 0 0 0];
thetahat(3,1:6)= [0 0 0 0 0 0];
Fold=1000*eye(6);   % F(0) = 1000*I
landa0=1.0;
landa(1)=1.0; landa(2)=1.0; landa(3)=1.0;
landa2(1)=1.0; landa2(2)=1.0; landa2(3)=1.0;
n=1502;
% INPUT NOISE GENERATION
randn('seed',5)
u(1)= 0; u(2)= 0;
for a=3:(n+1)
  u(a)=sqrt(.01)*randn;
end
% DISTURBANCE NOISE GENERATION
randn('seed',1)
for a=1:(n+2)
  e(a)=sqrt(0.0001)*randn;
end
% Extended Least Squares Algorithm
for t=3:n
  landa(t)=1;
  phi(t,1:6)= [-y(t) -y(t-1) u(t) u(t-1) epsilonb(t) epsilonb(t-1)];
  y(t+1)= 1.3767*y(t) - 0.4173*y(t-1) + 11.52*u(t) + 2.5*u(t-1) + e(t+1) - 0.6*e(t) + 0.20*e(t-1);
  yhat(t+1)= thetahat(t,1:6)*phi(t,1:6)';
  epsilon(t+1)= (y(t+1) - thetahat(t,1:6)*phi(t,1:6)')/(1 + phi(t,1:6)*Fold*phi(t,1:6)');
  thetahat(t+1,1:6) = (thetahat(t,1:6)' + Fold*phi(t,1:6)'*epsilon(t+1))';
  Fnew= (1/landa(t))*(Fold - (Fold*phi(t,1:6)'*phi(t,1:6)*Fold)/(landa(t) + phi(t,1:6)*Fold*phi(t,1:6)'));
  Fold= Fnew;
  epsilonb(t+1)= y(t+1) - thetahat(t+1,1:6)*phi(t,1:6)';
  yhat2(t+1)= thetahat(t+1,1:6)*phi(t,1:6)';
  error(t+1)= y(t+1) - yhat(t+1);
  error2(t+1)= y(t+1) - yhat2(t+1);
end
% PLOT
!rm sim3_4.ps
set(gcf,'PaperPosition',[2,2,5.9,3.1]);
c=n-2;
for t=1:c
  thetahatp(t,1:6) = thetahat(t+2,1:6);
  errorp(t) = error2(t+2);
end
for p=1:c
  a10(p) = -1.3767;
  a20(p) = 0.4173;
  b10(p) = 11.5200;
  b20(p) = 2.5000;
  c10(p) = -0.6000;
  c20(p) = 0.2000;
end
x=[1:1:c];
a1=thetahatp(1:c,1);
a2=thetahatp(1:c,2);
b1=thetahatp(1:c,3);
b2=thetahatp(1:c,4);
c1=thetahatp(1:c,5);
c2=thetahatp(1:c,6);
plot(x,a1,'-',x,a2,'-',x,a10,'--',x,a20,'--')
grid
title('')
xlabel('t')
ylabel('A - parameter value')
axis([1 25 -3.0 2.0])
pause
print -append sim3_4
plot(x,b1,'-',x,b2,'-',x,b10,'--',x,b20,'--')
grid
title('')
xlabel('t')
ylabel('B - parameter value')
axis([1 40 -1.0 13.5])
%pause
print -append sim3_4
plot(x,c1,'-',x,c2,'-',x,c10,'--',x,c20,'--')
grid
title('')
xlabel('t')
ylabel('C - parameter value')
axis([1 c -1.0 1.0])
print -append sim3_4
b=[1:1:n+1];
plot(b,error)
title('')
xlabel('t')
ylabel('y(t) - y^(t)')
grid
axis([20 c -0.20 0.20])
print -append sim3_4
plot(b,error2)
title('')
xlabel('t')
ylabel('y(t) - y^(t)')
grid
axis([20 c -0.20 0.20])
print -append sim3_4

% RLS ALGORITHM - SIMULATION 3.5 - FOURTH ORDER
%
% SYSTEM: Machine Tool   delay: d=0
%
% A(q^-1)*y(t) = q^-d*B(q^-1)*u(t) + e(t)
%
% A(q^-1) = 1 + a1*q^-1 + a2*q^-2 + a3*q^-3 + a4*q^-4
% B(q^-1) = b1*q^-1 + b2*q^-2 + b3*q^-3 + b4*q^-4
%
% u(t) = input white noise with variance = 1.0
% e(t) = white noise with variance = 0.01
%
% SYSTEM PARAMETERS
clear
a1= -1.2723512962860;
a2= 0.23107876152302;
a3= 0.03899362243312;
a4= 0.02599112877901;
b1= 0.11622182086040;
b2= -0.06887597302984;
b3= -0.01281646042233;
b4= -0.01081717095919;
% INITIAL CONDITIONS
y(1)=0; y(2)=0; y(3)=0; y(4)=0;
error(1)=0; error(2)=0; error(3)=0; error(4)=0;
yhat(1)=0; yhat(2)=0; yhat(3)=0; yhat(4)=0;
yhat1(1)=0; yhat1(2)=0; yhat1(3)=0; yhat1(4)=0;
yhat2(1)=0; yhat2(2)=0; yhat2(3)=0; yhat2(4)=0;
epsilon(1)=0; epsilon(2)=0; epsilon(3)=0; epsilon(4)=0;
thetahat(1,1:8)= [0 0 0 0 0 0 0 0];
thetahat(2,1:8)= [0 0 0 0 0 0 0 0];
thetahat(3,1:8)= [0 0 0 0 0 0 0 0];
thetahat(4,1:8)= [0 0 0 0 0 0 0 0];
Fold=100000000*eye(8);   % F(0) = 10^8 * I
landa(1)=1.0; landa(2)=1.0; landa(3)=1.0; landa(4)=1.0;
n=2000;
% INPUT NOISE GENERATION
randn('seed',5)
u(1)= 0; u(2)= 0; u(3)= 0;
for a=4:(n+1)
  u(a)=sqrt(1.0)*randn;
end
% DISTURBANCE NOISE GENERATION
randn('seed',1)
for a=1:(n+2)
  e(a)=sqrt(0.01)*randn;
end
% RLS Algorithm
for t=4:n
  phi(t,1:8)= [-y(t) -y(t-1) -y(t-2) -y(t-3) u(t) u(t-1) u(t-2) u(t-3)];
  y(t+1)= -a1*y(t) - a2*y(t-1) - a3*y(t-2) - a4*y(t-3) + b1*u(t) + b2*u(t-1) + b3*u(t-2) + b4*u(t-3) + e(t+1);
  yhat(t+1)= thetahat(t,1:8)*phi(t,1:8)';
  epsilon(t+1)= (y(t+1) - thetahat(t,1:8)*phi(t,1:8)')/(1 + phi(t,1:8)*Fold*phi(t,1:8)');
  thetahat(t+1,1:8) = (thetahat(t,1:8)' + Fold*phi(t,1:8)'*epsilon(t+1))';
  Fnew= (1/landa(t))*(Fold - (Fold*phi(t,1:8)'*phi(t,1:8)*Fold)/(landa(t) + phi(t,1:8)*Fold*phi(t,1:8)'));
  Fold= Fnew;
  yhat2(t+1)= thetahat(t+1,1:8)*phi(t,1:8)';
  error(t+1)= y(t+1) - yhat2(t+1);
  landa(t+1)=1;
end
% PLOT
!rm sim3_4d.ps
set(gcf,'PaperPosition',[2,2,5.8,3.1]);
c=n-4;
for t=1:c
  thetahatp(t,1:8) = thetahat(t+4,1:8);
  errorp(t) = error(t+4);
end
x=[1:1:c];
a11=thetahatp(1:c,1);
a22=thetahatp(1:c,2);
a33=thetahatp(1:c,3);
a44=thetahatp(1:c,4);
b11=thetahatp(1:c,5);
b22=thetahatp(1:c,6);
b33=thetahatp(1:c,7);
b44=thetahatp(1:c,8);
plot(x,a11,'-',x,a22,'-',x,a33,'-',x,a44,'-')
xlabel('t')
ylabel('A - parameter value')
grid
axis([0 c -1.5 0.5])
print -append sim3_4d
pause
plot(x,b11,'-',x,b22,'-',x,b33,'-',x,b44,'-')
xlabel('t')
ylabel('B - parameter value')
axis([0 c -.1 0.15])
grid
print -append sim3_4d
b=[1:1:n+1];
plot(b,error)
xlabel('t')
ylabel('y(t) - y^(t)')
axis([0 c -.5 .5])
print -append sim3_4d
% Model validation
for t=4:n
  phi(t,1:8)= [-yhat1(t) -yhat1(t-1) -yhat1(t-2) -yhat1(t-3) u(t) u(t-1) u(t-2) u(t-3)];
  y(t+1)= -a1*y(t) - a2*y(t-1) - a3*y(t-2) - a4*y(t-3) + b1*u(t) + b2*u(t-1) + b3*u(t-2) + b4*u(t-3);
  yhat1(t+1)= thetahat(n,1:8)*phi(t,1:8)';
  err(t) = y(t+1) - yhat1(t+1) + e(t+1);
end
sumh=0; sumh1=0;
sumj=0; sumj1=0;
sumk=0; sumk1=0;
suml=0; suml1=0;
summ=0; summ1=0;
for f=5:n
  h1=error(f)*error(f);
  j1=error(f)*error(f-1);
  k1=error(f)*error(f-2);
  l1=error(f)*error(f-3);
  m1=error(f)*error(f-4);
  sumh1= h1 + sumh1;
  sumj1= j1 + sumj1;
  sumk1= k1 + sumk1;
  suml1= l1 + suml1;
  summ1= m1 + summ1;
end
% Variance of the RESIDUALS ON-LINE
R0on=sumh1/n
% Model Validation off-line   g = # of iterations
g=505;
for d=5:g
  h=err(d)*err(d);
  j=err(d)*err(d-1);
  k=err(d)*err(d-2);
  l=err(d)*err(d-3);
  m=err(d)*err(d-4);
  sumh= h + sumh;
  sumj= j + sumj;
  sumk= k + sumk;
  suml= l + suml;
  summ= m + summ;
end
% RESIDUALS OFF-LINE
R0off = sumh/g
R01off = sumj/g;
R02off = sumk/g;
R03off = suml/g;
R04off = summ/g;
criteria= 2.17/22.36
RN1_4th= R01off/R0off
RN2_4th= R02off/R0off
RN3_4th= R03off/R0off
RN4_4th= R04off/R0off

% LEAST SQUARES ALGORITHM - SIMULATION 3.5 - THIRD ORDER
%
% SYSTEM: Machine Tool   delay: d=0
%
% A(q^-1)*y(t) = q^-d*B(q^-1)*u(t) + e(t)
%
% A(q^-1) = 1 + a1*q^-1 + a2*q^-2 + a3*q^-3
% B(q^-1) = b1*q^-1 + b2*q^-2 + b3*q^-3
%
% u(t) = input white noise with variance = 1.0
% e(t) = white noise with variance = 0.01
%
% SYSTEM PARAMETERS
clear
a1= -1.2723512962860;
a2= 0.23107876152302;
a3= 0.03899362243312;
a4= 0.02599112877901;
b1= 0.11622182086040;
b2= -0.06887597302984;
b3= -0.01281646042233;
b4= -0.01081717095919;
% INITIAL CONDITIONS
y(1)=0; y(2)=0; y(3)=0; y(4)=0;
error(1)=0; error(2)=0; error(3)=0; error(4)=0;
yhat(1)=0; yhat(2)=0; yhat(3)=0; yhat(4)=0;
yhat1(1)=0; yhat1(2)=0; yhat1(3)=0; yhat1(4)=0;
yhat2(1)=0; yhat2(2)=0; yhat2(3)=0; yhat2(4)=0;
epsilon(1)=0; epsilon(2)=0; epsilon(3)=0; epsilon(4)=0;
thetahat(1,1:6)= [0 0 0 0 0 0];
thetahat(2,1:6)= [0 0 0 0 0 0];
thetahat(3,1:6)= [0 0 0 0 0 0];
thetahat(4,1:6)= [0 0 0 0 0 0];
Fold=100000000*eye(6);   % F(0) = 10^8 * I
landa(1)=1.0; landa(2)=1.0; landa(3)=1.0; landa(4)=1.0;
n=2000;
% INPUT NOISE GENERATION
randn('seed',5)
u(1)= 0; u(2)= 0; u(3)= 0;
for a=4:(n+1)
  u(a)=sqrt(1.0)*randn;
end
% DISTURBANCE NOISE GENERATION
randn('seed',1)
for a=1:(n+2)
  e(a)=sqrt(0.01)*randn;
end
% Weighted Least Squares Algorithm
for t=4:n
  phi(t,1:6)= [-y(t) -y(t-1) -y(t-2) u(t) u(t-1) u(t-2)];
  y(t+1)= -a1*y(t) - a2*y(t-1) - a3*y(t-2) - a4*y(t-3) + b1*u(t) + b2*u(t-1) + b3*u(t-2) + b4*u(t-3) + e(t+1);
  yhat(t+1)= thetahat(t,1:6)*phi(t,1:6)';
  epsilon(t+1)= (y(t+1) - thetahat(t,1:6)*phi(t,1:6)')/(1 + phi(t,1:6)*Fold*phi(t,1:6)');
  thetahat(t+1,1:6) = (thetahat(t,1:6)' + Fold*phi(t,1:6)'*epsilon(t+1))';
  Fnew= (1/landa(t))*(Fold - (Fold*phi(t,1:6)'*phi(t,1:6)*Fold)/(landa(t) + phi(t,1:6)*Fold*phi(t,1:6)'));
  Fold= Fnew;
  yhat2(t+1)= thetahat(t+1,1:6)*phi(t,1:6)';
  error(t+1)= y(t+1) - yhat2(t+1);
  landa(t+1)= 1.0;
end
% PLOT
!rm sim3_3d.ps
set(gcf,'PaperPosition',[2,2,5.8,3.1]);
c=n-4;
for t=1:c
  thetahatp(t,1:6) = thetahat(t+4,1:6);
  errorp(t) = error(t+4);
end
x=[1:1:c];
a11=thetahatp(1:c,1);
a22=thetahatp(1:c,2);
a33=thetahatp(1:c,3);
b11=thetahatp(1:c,4);
b22=thetahatp(1:c,5);
b33=thetahatp(1:c,6);
plot(x,a11,'-',x,a22,'-',x,a33,'-')
xlabel('t')
ylabel('A - parameter value')
grid
axis([0 c -2.0 0.5])
print -append sim3_3d
pause
plot(x,b11,'-',x,b22,'-',x,b33,'-')
xlabel('t')
ylabel('B - parameter value')
axis([0 c -.1 0.15])
grid
print -append sim3_3d
b=[1:1:n+1];
plot(b,error)
xlabel('t')
ylabel('y(t) - y^(t)')
print -append sim3_3d
% Model validation
for t=4:n
  phi(t,1:6)= [-yhat1(t) -yhat1(t-1) -yhat1(t-2) u(t) u(t-1) u(t-2)];
  y(t+1)= -a1*y(t) - a2*y(t-1) - a3*y(t-2) - a4*y(t-3) + b1*u(t) + b2*u(t-1) + b3*u(t-2) + b4*u(t-3);
  yhat1(t+1)= thetahat(n,1:6)*phi(t,1:6)';
  err(t) = y(t+1) - yhat1(t+1) + e(t+1);
end
plot(err)
xlabel('t')
ylabel('y(t) - y^(t)')
axis([10 c -1.0 1.0])
print -append sim3_3d
sumh=0; sumh1=0;
sumj=0; sumj1=0;
sumk=0; sumk1=0;
suml=0; suml1=0;
for c=5:n
  h1=error(c)*error(c);
  j1=error(c)*error(c-1);
  k1=error(c)*error(c-2);
  l1=error(c)*error(c-3);
  sumh1= h1 + sumh1;
  sumj1= j1 + sumj1;
  sumk1= k1 + sumk1;
  suml1= l1 + suml1;
end
% Variance of the RESIDUALS ON-LINE
R0on=sumh1/n
% Model Validation off-line   g = # of iterations
g=505;
for d=5:g
  h=err(d)*err(d);
  j=err(d)*err(d-1);
  k=err(d)*err(d-2);
  l=err(d)*err(d-3);
  sumh= h + sumh;
  sumj= j + sumj;
  sumk= k + sumk;
  suml= l + suml;
end
!rm sim3_3d
diary sim3_3d
% RESIDUALS OFF-LINE
R0off = sumh/g
R01off = sumj/g;
R02off = sumk/g;
R03off = suml/g;
criteria= 2.17/22.36
RN1_3rd= R01off/R0off
RN2_3rd= R02off/R0off
RN3_3rd= R03off/R0off
thetahat(n,1:6)
diary off

% LEAST SQUARES ALGORITHM - SIMULATION 3.5 - SECOND ORDER
%
% SYSTEM: Machine Tool   delay: d=0
%
% A(q^-1)*y(t) = q^-d*B(q^-1)*u(t) + e(t)
%
% A(q^-1) = 1 + a1*q^-1 + a2*q^-2
% B(q^-1) = b1*q^-1 + b2*q^-2
%
% u(t) = input white noise with variance = 1.0
% e(t) = white noise with variance = 0.01
%
% SYSTEM PARAMETERS
clear
a1= -1.2723512962860;
a2= 0.23107876152302;
a3= 0.03899362243312;
a4= 0.02599112877901;
b1= 0.11622182086040;
b2= -0.06887597302984;
b3= -0.01281646042233;
b4= -0.01081717095919;
% INITIAL CONDITIONS
y(1)=0; y(2)=0; y(3)=0; y(4)=0;
error(1)=0; error(2)=0; error(3)=0; error(4)=0;
yhat(1)=0; yhat(2)=0; yhat(3)=0; yhat(4)=0;
yhat1(1)=0; yhat1(2)=0; yhat1(3)=0; yhat1(4)=0;
yhat2(1)=0; yhat2(2)=0; yhat2(3)=0; yhat2(4)=0;
epsilon(1)=0; epsilon(2)=0; epsilon(3)=0; epsilon(4)=0;
thetahat(1,1:4)= [0 0 0 0];
thetahat(2,1:4)= [0 0 0 0];
thetahat(3,1:4)= [0 0 0 0];
thetahat(4,1:4)= [0 0 0 0];
Fold=100000000*eye(4);   % F(0) = 10^8 * I
landa(1)=1.0; landa(2)=1.0; landa(3)=1.0; landa(4)=1.0;
n=2000;
% INPUT NOISE GENERATION
randn('seed',5)
u(1)= 0; u(2)= 0; u(3)= 0;
for a=4:(n+1)
  u(a)=sqrt(1.0)*randn;
end
% DISTURBANCE NOISE GENERATION
randn('seed',1)
for a=1:(n+2)
  e(a)=sqrt(0.01)*randn;
end
% Weighted Least Squares Algorithm
for t=4:n
  phi(t,1:4)= [-y(t) -y(t-1) u(t) u(t-1)];
  y(t+1)= -a1*y(t) - a2*y(t-1) - a3*y(t-2) - a4*y(t-3) + b1*u(t) + b2*u(t-1) + b3*u(t-2) + b4*u(t-3) + e(t+1);
  yhat(t+1)= thetahat(t,1:4)*phi(t,1:4)';
  epsilon(t+1)= (y(t+1) - thetahat(t,1:4)*phi(t,1:4)')/(1 + phi(t,1:4)*Fold*phi(t,1:4)');
  thetahat(t+1,1:4) = (thetahat(t,1:4)' + Fold*phi(t,1:4)'*epsilon(t+1))';
  Fnew= (1/landa(t))*(Fold - (Fold*phi(t,1:4)'*phi(t,1:4)*Fold)/(landa(t) + phi(t,1:4)*Fold*phi(t,1:4)'));
  Fold= Fnew;
  yhat2(t+1)= thetahat(t+1,1:4)*phi(t,1:4)';
  error(t+1)= y(t+1) - yhat2(t+1);
  landa(t+1)= 1;
end
% PLOT
!rm sim3_2d.ps
set(gcf,'PaperPosition',[2,2,5.8,3.1]);
c=n-4;
for t=1:c
  thetahatp(t,1:4) = thetahat(t+4,1:4);
  errorp(t) = error(t+4);
end
x=[1:1:c];
a11=thetahatp(1:c,1);
a22=thetahatp(1:c,2);
b11=thetahatp(1:c,3);
b22=thetahatp(1:c,4);
plot(x,a11,'-',x,a22,'-')
xlabel('t')
ylabel('A - parameter value')
grid
axis([0 c -2.5 1.5])
print -append sim3_2d
pause
plot(x,b11,'-',x,b22,'-')
xlabel('t')
ylabel('B - parameter value')
axis([0 c -.2 0.2])
grid
print -append sim3_2d
b=[1:1:n+1];
plot(b,error)
%title('Error = y(t) - y^(t)')
xlabel('t')
ylabel('y(t) - y^(t)')
print -append sim3_2d
% Model validation
for t=4:n
  phi(t,1:4)= [-yhat1(t) -yhat1(t-1) u(t) u(t-1)];
  y(t+1)= -a1*y(t) - a2*y(t-1) - a3*y(t-2) - a4*y(t-3) + b1*u(t) + b2*u(t-1) + b3*u(t-2) + b4*u(t-3);
  yhat1(t+1)= thetahat(n,1:4)*phi(t,1:4)';
  err(t) = y(t+1) - yhat1(t+1) + e(t+1);
end
plot(err)
xlabel('t')
ylabel('y(t) - y^(t)')
axis([10 c -10.0 10.0])
print -append sim3_2d
sumh=0; sumh1=0;
sumj=0; sumj1=0;
sumk=0; sumk1=0;
for c=5:n
  h1=error(c)*error(c);
  j1=error(c)*error(c-1);
  k1=error(c)*error(c-2);
  sumh1= h1 + sumh1;
  sumj1= j1 + sumj1;
  sumk1= k1 + sumk1;
end
% Variance of the RESIDUALS ON-LINE
R0on=sumh1/n
% Model Validation off-line   g = # of iterations
g=505;
for d=5:g
  h=err(d)*err(d);
  j=err(d)*err(d-1);
  k=err(d)*err(d-2);
  sumh= h + sumh;
  sumj= j + sumj;
  sumk= k + sumk;
end
!rm sim3_2d
diary sim3_2d
thetahat(n,1:4)
% RESIDUALS OFF-LINE
R0off = sumh/g
R01off = sumj/g;
R02off = sumk/g;
criteria= 2.17/22.36
RN1_2nd= R01off/R0off
RN2_2nd= R02off/R0off
diary off

% SIMULATION 3.5 THIRD ORDER BODE COMPARISON % % Machine tool % % Continuos time models % % G1 (s) = (2511 Os +81 OOO)/(s"4 +73s"3 + 8730s"2 + 81 OOOs + 0) % clear num1=[0 0 0 25110 81000]; den1 =[1 73 8730 81 000 0]; % %2nd order approximation from least sqaures method (digital) den=[1-1.2770.1650.136]; num=[O 0.1167 -.0705 -.0225]; %Because of integrator in denominator close the loop [numc1,denc1]= cloop(num1,den1 ); % Sampling rate Tsa= 0.050; %Calculate Continuos time model using zoh for closed loop system [numc,denc]=c:l2cm(num,den,Tsa,'zoh'); % Calculate the open loop plant numo=numc; deno=(denc-numc); % Compare models with Bode plot W= logspace(-1,2); % Generate data [magc1,phasec1] = bode(numc1,denc1, w); [magc,phasec] = bode(numc,denc,w); [m.ag1,phase1] = bode(num1,den1,w); [magco,phaseco] = bode(nu'mo,deno,w); % Plot Comparison Closed loop systems-actual plant vs 3rd order approximation set(gcf,'PaperPosition',[2,2,5.9,8.1]); subplot(211) semilogx(w,20*1og1 O(magc1 ),'-', w,20*1og1 O(magc),'--') %title(' 4th order plant vs 3rd order approximation (Closed-loop)') xlabei('Frequency (rad/sec)') ylabei('Magnitude (Db)') text(0.15,-25,' __ actual system','sc') text(0.15,-30,'--3rd order model','sc') 132

PAGE 139

subplot(212) semilogx(w,phasec1,'-', w,phasec,'--') xlabei('Frequency (rad/sec)') ylabei('Phase (deg)') text(0.15,-125,' __ actual system','sc') text(0.15,-150,'---3rd order model','sc') !rm comp3rd.ps print -append comp3rd subplot(111) % Plot Comparison Actual plant vs 3rd order approximation (open loop) subplot(211) semilogx(w,20*1og1 O(mag1 ),'-', w,20*1og1 O(magco),'--') %title(' 4th order plant vs 3rd order approximation (Open-loop)') xlabei('Frequency (rad/sec)') ylabei('Magriitude (Db)') text(0.15,-25,' __ actual system','sc') text(0.15,-35,'----3rd order model','sc') subplot(212) semilogx(w;phase1 ,'-', w,phaseco,'--') xlabei('Frequency (rad/sec)') ylabei('Phase (deg)') text(0.15,-155,'_. -_actual system','sc') 3rd order model','sc') print -append comp3rd subplot( 111 ) !rm comp3rd diary comp3rd numc denc numo dena roots(num1) roots(den1) roots(numo) roots( de no) diary off 133

% SIMULATION 3.5 - BODE COMPARISON 2ND ORDER
%
% Machine tool
%
% Continuous time model
%
% G1(s) = (25110s + 81000)/(s^4 + 73s^3 + 8730s^2 + 81000s + 0)
%
num1=[0 0 0 25110 81000];
den1=[1 73 8730 81000 0];
%
% 2nd order approximation from least squares method (digital)
num=[0 0.1167 -0.0855];
den=[1 -1.4031 0.433];
% Because of integrator in denominator close the loop
[numc1,denc1]= cloop(num1,den1);
% Sampling rate
Tsa= 0.050;
% Calculate continuous time model using zoh for closed loop system
[numc,denc]=d2cm(num,den,Tsa,'zoh');
% Calculate the open loop plant
numo=numc;
deno=(denc-numc);
% Compare models with Bode plot
w= logspace(-1,2);
% Generate data
[magc1,phasec1] = bode(numc1,denc1,w);
[magc,phasec] = bode(numc,denc,w);
[mag1,phase1] = bode(num1,den1,w);
[magco,phaseco] = bode(numo,deno,w);
% Plot comparison - closed loop systems: actual plant vs 2nd order approximation
set(gcf,'PaperPosition',[2,2,5.9,8.1]);
subplot(211)
semilogx(w,20*log10(magc1),'-',w,20*log10(magc),'--')
%title('4th order plant vs 2nd order approximation (Closed-loop)')
xlabel('Frequency (rad/sec)')
ylabel('Magnitude (dB)')
text(0.15,-27,'__ actual system','sc')
text(0.15,-35,'---- 2nd order model','sc')
subplot(212)

semilogx(w,phasec1,'-',w,phasec,'--')
xlabel('Frequency (rad/sec)')
ylabel('Phase (deg)')
text(0.15,-135,'__ actual system','sc')
text(0.15,-160,'-- 2nd order model','sc')
!rm comp2nd.ps
print -append comp2nd
subplot(111)
% Plot comparison - actual plant vs 2nd order approximation (open loop)
subplot(211)
semilogx(w,20*log10(mag1),'-',w,20*log10(magco),'--')
%title('4th order plant vs 2nd order approximation (Open-loop)')
xlabel('Frequency (rad/sec)')
ylabel('Magnitude (dB)')
text(0.15,-25,'__ actual system','sc')
text(0.15,-35,'---- 2nd order model','sc')
subplot(212)
semilogx(w,phase1,'-',w,phaseco,'--')
xlabel('Frequency (rad/sec)')
ylabel('Phase (deg)')
text(0.15,-160,'__ actual system','sc')
text(0.15,-175,'-- 2nd order model','sc')
print -append comp2nd
subplot(111)
!rm comp2nd
diary comp2nd
numc
denc
numo
deno
roots(num1)
roots(den1)
roots(numo)
roots(deno)
diary off

% RLS ALGORITHM - SIMULATION 3.6  u(t) = 0.1*sin(10t)
%
% SYSTEM: d=0 (delay)
%
% A(q^-1)*y(t) = q^-d*B(q^-1)*u(t) + e(t)
%
% A(q^-1) = 1 + (-1.5)*q^-1 + 0.70*q^-2
%
% B(q^-1) = 1.0*q^-1 + 0.5000*q^-2
%
% u(t) = input = 0.1*sin(10t)
% e(t) = 0
%
% INITIAL CONDITIONS
clear
y(1)=0;
y(2)=0;
yhat(1)=0;
yhat(2)=0;
error(1)=0;
error(2)=0;
epsilon(1)=0;
epsilon(2)=0;
thetahat(1,1:4)= [0 0 0 0];
thetahat(2,1:4)= [0 0 0 0];
Fold=[1000 0 0 0;0 1000 0 0;0 0 1000 0;0 0 0 1000];
landa(1)=1.0;
landa(2)=1.0;
n=302;
% INPUT GENERATION
randn('seed',5)
u(1)= 0;
for a=2:(n+1)
u(a)=0.1*sin(10*a);
end
% RLS Algorithm
for t=2:n
phi(t,1:4)= [-y(t) -y(t-1) u(t) u(t-1)];
y(t+1)= 1.5*y(t) - 0.70*y(t-1) + 1.0*u(t) + 0.5*u(t-1);
yhat(t+1)= thetahat(t,1:4)*phi(t,1:4)';
epsilon(t+1)= (y(t+1) - thetahat(t,1:4)*phi(t,1:4)')/(1 + phi(t,1:4)*Fold*phi(t,1:4)');

thetahat(t+1,1:4) = (thetahat(t,1:4)' + Fold*phi(t,1:4)'*epsilon(t+1))';
Fnew= (1/landa(t))*(Fold - (Fold*phi(t,1:4)'*phi(t,1:4)*Fold)/(landa(t) + phi(t,1:4)*Fold*phi(t,1:4)'));
Fold= Fnew;
error(t+1)= y(t+1) - yhat(t+1);
landa(t+1)=1;
end
% CALCULATE MIN EIGENVALUES
eigmin(1)=0;
peigmin=1000;
for d=10:n
old =[0 0 0 0;0 0 0 0;0 0 0 0;0 0 0 0];
new =[0 0 0 0;0 0 0 0;0 0 0 0;0 0 0 0];
for t=2:d
in = phi(t,1:4)'*phi(t,1:4);
new = in + old;
old = new;
end
A=(1/d)*eig(new);
eigmin(d)=min(A);
eigmax(d)=max(A);
% FIND SMALLEST VALUE FOR EIGMIN
if eigmin(d) < peigmin
mineig= eigmin(d);
peigmin = mineig;
end
end
% PLOT
plot(1:n,eigmin)
xlabel('t')
axis([10 n 0 3.5e-4])
!rm sim3_6.ps
print -append sim3_6
pause
plot(1:n,eigmax)
axis([10 n 0 0.01])
print -append sim3_6
set(gcf,'PaperPosition',[2,2,5.9,3.1]);
c=n-2;
for t=1:c

thetahatp(t,1:4) = thetahat(t+2,1:4);
errorp(t) = error(t+2);
end
for p=1:c
a10(p) = -1.500;
a20(p) = 0.700;
b10(p) = 1.000;
b20(p) = 0.500;
end
x=[1:1:c];
a1=thetahat(1:c,1);
a2=thetahat(1:c,2);
b1=thetahat(1:c,3);
b2=thetahat(1:c,4);
plot(x,a1,'-',x,a2,'-',x,a10,'--',x,a20,'--')
grid
title('')
xlabel('t')
ylabel('A-parameter value')
axis([1 n -2.0 1.5])
pause
print -append sim3_6
plot(x,b1,'-',x,b2,'-',x,b10,'--',x,b20,'--')
grid
title('')
xlabel('t')
ylabel('B-parameter value')
axis([1 n -0.25 1.25])
pause
print -append sim3_6
b=[1:1:n+1];
plot(b,error)
title('')
xlabel('t')
ylabel('y(t) - y^(t)')
grid
print -append sim3_6
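The same gain and parameter recursion is retyped in each of these scripts, so it can be isolated into a single routine. The sketch below is illustrative only (the name rls_update and the column-vector interface are assumptions, not thesis code); it reproduces the update equations exactly as they appear in the listings, including the (1 + phi'*F*phi) normalization of the prediction error:

function [theta, F] = rls_update(theta, F, phi, y, landa)
% RLS_UPDATE  One step of the recursive least squares update used above.
% theta: parameter estimate (column), F: adaptation gain matrix,
% phi: regressor (column), y: new output sample, landa: forgetting factor.
epsilon = (y - theta'*phi)/(1 + phi'*F*phi);               % a posteriori error
theta   = theta + F*phi*epsilon;                           % parameter update
F       = (F - (F*phi*phi'*F)/(landa + phi'*F*phi))/landa; % gain update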

% RLS ALGORITHM - SIMULATION 3.6  u(t): white noise
%
% SYSTEM: d=0 (delay)
%
% A(q^-1)*y(t) = q^-d*B(q^-1)*u(t) + e(t)
%
% A(q^-1) = 1 + (-1.5)*q^-1 + 0.70*q^-2
%
% B(q^-1) = 1.0*q^-1 + 0.5000*q^-2
%
% u(t) = input = white noise with variance = 1
% e(t) = 0
%
% INITIAL CONDITIONS
clear
y(1)=0;
y(2)=0;
yhat(1)=0;
yhat(2)=0;
error(1)=0;
error(2)=0;
epsilon(1)=0;
epsilon(2)=0;
thetahat(1,1:4)= [0 0 0 0];
thetahat(2,1:4)= [0 0 0 0];
Fold=[1000 0 0 0;0 1000 0 0;0 0 1000 0;0 0 0 1000];
landa(1)=1.0;
landa(2)=1.0;
n=102;
% INPUT NOISE GENERATION
randn('seed',5)
u(1)= 0;
for a=2:(n+1)
u(a)=sqrt(1)*randn;
end
% RLS Algorithm
for t=2:n
phi(t,1:4)= [-y(t) -y(t-1) u(t) u(t-1)];
y(t+1)= 1.5*y(t) - 0.70*y(t-1) + 1.0*u(t) + 0.5*u(t-1);
yhat(t+1)= thetahat(t,1:4)*phi(t,1:4)';
epsilon(t+1)= (y(t+1) - thetahat(t,1:4)*phi(t,1:4)')/(1 + phi(t,1:4)*Fold*phi(t,1:4)');

thetahat(t+1,1:4) = (thetahat(t,1:4)' + Fold*phi(t,1:4)'*epsilon(t+1))';
Fnew= (1/landa(t))*(Fold - (Fold*phi(t,1:4)'*phi(t,1:4)*Fold)/(landa(t) + phi(t,1:4)*Fold*phi(t,1:4)'));
Fold= Fnew;
error(t+1)= y(t+1) - yhat(t+1);
landa(t+1)=1;
end
% CALCULATE MIN EIGENVALUES
eigmin(1)=0;
peigmin=1000;
for d=10:n
old =[0 0 0 0;0 0 0 0;0 0 0 0;0 0 0 0];
new =[0 0 0 0;0 0 0 0;0 0 0 0;0 0 0 0];
for t=2:d
in = phi(t,1:4)'*phi(t,1:4);
new = in + old;
old = new;
end
A=(1/d)*eig(new);
eigmin(d)=min(A);
eigmax(d)=max(A);
% FIND SMALLEST VALUE FOR EIGMIN
if eigmin(d) < peigmin
mineig= eigmin(d);
peigmin = mineig;
end
end
% PLOT
plot(1:n,eigmin)
xlabel('t')
axis([10 n 0 1])
!rm sim3_6a.ps
print -append sim3_6a
plot(1:n,eigmax)
axis([10 n 0 80])
print -append sim3_6a
set(gcf,'PaperPosition',[2,2,5.9,3.1]);
c=n-2;
for t=1:c
thetahatp(t,1:4) = thetahat(t+2,1:4);

errorp(t) = error(t+2);
end
for p=1:c
a10(p) = -1.500;
a20(p) = 0.700;
b10(p) = 1.000;
b20(p) = 0.500;
end
x=[1:1:c];
a1=thetahat(1:c,1);
a2=thetahat(1:c,2);
b1=thetahat(1:c,3);
b2=thetahat(1:c,4);
plot(x,a1,'-',x,a2,'-',x,a10,'--',x,a20,'--')
grid
title('')
xlabel('t')
ylabel('A-parameter value')
axis([1 c -2.0 1.5])
%pause
print -append sim3_6a
plot(x,b1,'-',x,b2,'-',x,b10,'--',x,b20,'--')
grid
title('')
xlabel('t')
ylabel('B-parameter value')
axis([1 c -0.25 1.25])
%pause
print -append sim3_6a
b=[1:1:n+1];
plot(b,error)
title('')
xlabel('t')
ylabel('y(t) - y^(t)')
grid
axis([20 c -0.20 0.20])
print -append sim3_6a

% RLS ALGORITHM - SIMULATION 3.7  pole/zero cancellation
%
% SYSTEM: FURNACE d=0 (delay)
%
% A(q^-1)*y(t) = q^-d*B(q^-1)*u(t) + e(t)
%
% A(q^-1) = 1 + (-1.40)*q^-1 + 0.480*q^-2
%
% B(q^-1) = 1.0*q^-1 - 0.6*q^-2
%
% u(t) = input white noise with variance = 1
% e(t) = 0
%
% INITIAL CONDITIONS
clear
y(1)=0;
y(2)=0;
yhat(1)=0;
yhat(2)=0;
error(1)=0;
error(2)=0;
epsilon(1)=0;
epsilon(2)=0;
thetahat(1,1:4)= [0 0 0 0];
thetahat(2,1:4)= [0 0 0 0];
Fold=[1000 0 0 0;0 1000 0 0;0 0 1000 0;0 0 0 1000];
landa(1)=1.0;
landa(2)=1.0;
n=300;
% INPUT NOISE GENERATION
randn('seed',5)
u(1)= 0;
for a=2:(n+1)
u(a)=sqrt(1)*randn;
end
% Weighted Least Squares Algorithm
for t=2:n
phi(t,1:4)= [-y(t) -y(t-1) u(t) u(t-1)];
y(t+1)= 1.400*y(t) - 0.480*y(t-1) + 1.000*u(t) - 0.6*u(t-1);
yhat(t+1)= thetahat(t,1:4)*phi(t,1:4)';

epsilon(t+1)= y(t+1) - thetahat(t,1:4)*phi(t,1:4)';
Fnew= (1/landa(t))*(Fold - (Fold*phi(t,1:4)'*phi(t,1:4)*Fold)/(landa(t) + phi(t,1:4)*Fold*phi(t,1:4)'));
thetahat(t+1,1:4) = (thetahat(t,1:4)' + Fnew*phi(t,1:4)'*epsilon(t+1))';
Fold= Fnew;
error(t+1)= y(t+1) - yhat(t+1);
landa(t+1)=1;
end
% CALCULATE MIN EIGENVALUES
eigmin(1)=0;
peigmin=1000;
for d=10:n
old =[0 0 0 0;0 0 0 0;0 0 0 0;0 0 0 0];
new =[0 0 0 0;0 0 0 0;0 0 0 0;0 0 0 0];
for t=2:d
in = phi(t,1:4)'*phi(t,1:4);
new = in + old;
old = new;
end
A=(1/d)*eig(new);
eigmin(d)=min(A);
eigmax(d)=max(A);
% FIND SMALLEST VALUE FOR EIGMIN
if eigmin(d) < peigmin
mineig= eigmin(d);
peigmin = mineig;
end
end
% PLOT
plot(1:n,eigmin)
xlabel('t')
%axis([10 n 0 0.050])
!rm ls_cancel2.ps
print -append ls_cancel2
pause
plot(1:n,eigmax)
%axis([10 n 0 1100])
print -append ls_cancel2
pause
set(gcf,'PaperPosition',[2,2,5.9,3.1]);

c=n-2;
for t=1:c
thetahatp(t,1:4) = thetahat(t+2,1:4);
errorp(t) = error(t+2);
end
!rm sim3_7.ps
for p=1:c
a10(p) = -1.40;
a20(p) = 0.48;
b10(p) = 1.00;
b20(p) = -0.60;
end
x=[1:1:c];
a1=thetahatp(1:c,1);
a2=thetahatp(1:c,2);
b1=thetahatp(1:c,3);
b2=thetahatp(1:c,4);
plot(x,a1,'-',x,a2,'-',x,a10,'--',x,a20,'--')
grid
title('')
xlabel('t')
ylabel('A-parameter value')
axis([1 c -2.5 2.0])
print -append sim3_7
plot(x,b1,'-',x,b2,'-',x,b10,'--',x,b20,'--')
grid
title('')
xlabel('t')
ylabel('B-parameter value')
axis([1 c -1.5 2.0])
print -append sim3_7
b=[1:1:n+1];
plot(b,error)
title('')
xlabel('t')
ylabel('y(t) - y^(t)')
grid
axis([10 c -0.2 0.2])
print -append sim3_7
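In this simulation the factor (1 - 0.6*q^-1) is common to A(q^-1) = (1 - 0.8*q^-1)(1 - 0.6*q^-1) and B(q^-1) = q^-1*(1 - 0.6*q^-1), so the components of the regressor become linearly dependent and the sample covariance of phi(t) loses rank; the eigenvalue loop above makes that visible as a minimum eigenvalue that stays near zero. A compact, vectorized form of the same check (a sketch; the name excitation_check is an assumption, and Phi is assumed to hold the regressors phi(t,1:4) row by row):

function [lmin, lmax] = excitation_check(Phi)
% EXCITATION_CHECK  Smallest/largest eigenvalue of (1/N)*sum(phi*phi'),
% the persistency of excitation measure computed above with loops.
N    = size(Phi,1);
RN   = (Phi'*Phi)/N;      % sample covariance of the regressor vector
lmin = min(eig(RN));      % near zero => regressor nearly rank deficient
lmax = max(eig(RN));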

% OUTPUT ERROR ALGORITHM - SIMULATION 3.8
%
% SYSTEM: FURNACE d=0 (delay)
%
% A(q^-1)*y(t) = q^-d*B(q^-1)*u(t) + e(t)
%
% A(q^-1) = 1 + (-1.3767)*q^-1 + 0.4173*q^-2
%
% B(q^-1) = 11.52*q^-1 + 2.5000*q^-2
%
% u(t) = input white noise with variance = 0.01
% e(t) = white noise with variance = 0.0001
%
% INITIAL CONDITIONS
clear
y(1)=0;
y(2)=0;
yhat(1)=0;
yhat(2)=0;
error(1)=0;
error(2)=0;
epsilon(1)=0;
epsilon(2)=0;
phi(1,1:4)= [0 0 0 0];
phi(2,1:4)= [0 0 0 0];
thetahat(1,1:4)= [0 0 0 0];
thetahat(2,1:4)= [0 0 0 0];
Fold=[1000 0 0 0;0 1000 0 0;0 0 1000 0;0 0 0 1000];
landa0=1.0;
landa(1)=1.0;
landa(2)=1.0;
landa2(1)=1.0;
landa2(2)=1.0;
n=500;
% INPUT NOISE GENERATION
%rand('normal')
randn('seed',5)
u(1)= 0;
for a=2:(n+1)
u(a)=sqrt(.01)*randn;

end
% DISTURBANCE NOISE GENERATION
randn('seed',1)
for a=1:(n+2)
e(a)=sqrt(0.0001)*randn;
end
% Output Error Algorithm
for t=2:n
landa(t)=1;
phi(t,1:4)= [-yhat(t) -yhat(t-1) u(t) u(t-1)];
y(t+1)= 1.3767*y(t) - 0.4173*y(t-1) + 11.52*u(t) + 2.5*u(t-1) + e(t+1) - 1.3767*e(t) + 0.4173*e(t-1);
yhat(t+1)= thetahat(t,1:4)*phi(t,1:4)';
epsilon(t+1)= y(t+1) - thetahat(t,1:4)*phi(t,1:4)';
Fnew= (1/landa(t))*(Fold - (Fold*phi(t,1:4)'*phi(t,1:4)*Fold)/(landa(t) + phi(t,1:4)*Fold*phi(t,1:4)'));
thetahat(t+1,1:4) = (thetahat(t,1:4)' + Fnew*phi(t,1:4)'*epsilon(t+1))';
Fold= Fnew;
error(t+1)= y(t+1) - yhat(t+1);
end
% PLOT
!rm sim3_8.ps
set(gcf,'PaperPosition',[2,2,5.9,3.1]);
c=n-2;
for t=1:c
thetahatp(t,1:4) = thetahat(t+2,1:4);
errorp(t) = error(t+2);
end
for p=1:c
a10(p) = -1.3767;
a20(p) = 0.4173;
b10(p) = 11.5200;
b20(p) = 2.5000;
end
x=[1:1:c];
a1=thetahatp(1:c,1);
a2=thetahatp(1:c,2);
b1=thetahatp(1:c,3);
b2=thetahatp(1:c,4);
plot(x,a1,'-',x,a2,'-',x,a10,'--',x,a20,'--')

grid
xlabel('t')
ylabel('A-parameter value')
axis([1 c -5.0 3.0])
print -append sim3_8
plot(x,b1,'-',x,b2,'-',x,b10,'--',x,b20,'--')
grid
xlabel('t')
ylabel('B-parameter value')
axis([1 c -0.5 40])
print -append sim3_8
b=[1:1:n+1];
plot(b,error)
title('')
xlabel('t')
ylabel('y(t) - y^(t)')
grid
axis([10 c -0.20 0.20])
print -append sim3_8

% SIMULATION 3.8: CONDITION A1
%
% C(q^-1) = 1 - 1.3767*q^-1 + 0.4173*q^-2
%
% H(q^-1) = 1/C(q^-1) - 1/2
%
Ts=1;
c=[1 -1.3767 .4173];
cx=[2 0 0];
num= cx - c;
den= 2*c;
[h,w]=freqz(num,den,100,'whole');
x=real(h);
plot(w,x)
xlabel('w (radians)')
ylabel('real part')
axis([0 2*pi -2 2])
!rm coprime_frn2.ps
set(gcf,'PaperPosition',[2,4,5.9,3.4]);
print -append coprime_frn2
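The transfer function plotted here is H(q^-1) = 1/C(q^-1) - 1/2 = (2 - C(q^-1))/(2*C(q^-1)), which is exactly what the polynomials num = cx - c and den = 2*c implement; the positive realness condition A1 asks for the real part of H(e^jw) to stay strictly positive at every frequency. Reading that off the plot can be automated with a short check appended to the script (an illustrative sketch only; it reuses the vector x computed above):

% Numeric form of the plotted test (illustrative sketch)
if min(x) > 0
   disp('Re{1/C(exp(jw)) - 1/2} > 0 for all w: condition A1 satisfied')
else
   disp('condition A1 violated at some frequency')
end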

% OUTPUT ERROR ALGORITHM - SIMULATION 3.9
%
% SYSTEM
%
% y(t+1) = [B(q^-1)/A(q^-1)]*u(t) + e(t+1)
%
% B(q^-1) = .2111*q^-1 - .1539*q^-2 + .9308*q^-3
%
% A(q^-1) = 1 - 0.2431*q^-1 + .4900*q^-2
%
% u(t) = input white noise with variance = 10
% e(t) = white noise with variance = 0.01
%
% INITIAL CONDITIONS
y(1)=0;
y(2)=0;
y(3)=0;
yhat(1)=0;
yhat(2)=0;
yhat(3)=0;
epsilon(1)=0;
epsilon(2)=0;
epsilon(3)=0;
thetahat(1,1:5)= [0 0 0 0 0];
thetahat(2,1:5)= [0 0 0 0 0];
thetahat(3,1:5)= [0 0 0 0 0];
phi(1,1:5)=[0 0 0 0 0];
phi(2,1:5)=[0 0 0 0 0];
Fold=[10 0 0 0 0;0 10 0 0 0;0 0 10 0 0;0 0 0 10 0;0 0 0 0 10];
% NOISE GENERATION
n=500;
% INPUT NOISE GENERATION
randn('seed',5)
u(1)=0;
u(2)=0;
for a=3:n
u(a)=sqrt(10)*randn;
end
% DISTURBANCE NOISE GENERATION
randn('seed',1)

e(1)=0;
e(2)=0;
for a=3:(n+1)
e(a)=sqrt(0.01)*randn;
end
% OUTPUT ERROR METHOD
for t=3:n
landa(t)=1;
phi(t,1:5)= [-yhat(t) -yhat(t-1) u(t) u(t-1) u(t-2)];
y(t+1)= .2431*y(t) - .49*y(t-1) + .2111*u(t) - .1539*u(t-1) + .9308*u(t-2) + e(t+1) - .2431*e(t) + 0.49*e(t-1);
yhat(t+1)= thetahat(t,1:5)*phi(t,1:5)';
epsilon(t+1)= y(t+1) - thetahat(t,1:5)*phi(t,1:5)';
Fnew= (1/landa(t))*(Fold - (Fold*phi(t,1:5)'*phi(t,1:5)*Fold)/(landa(t) + phi(t,1:5)*Fold*phi(t,1:5)'));
thetahat(t+1,1:5) = (thetahat(t,1:5)' + Fnew*phi(t,1:5)'*epsilon(t+1))';
error(t+1)= y(t+1) - yhat(t+1);
Fold= Fnew;
end
% GENERATE PLOTS
!rm sim3_9.ps
set(gcf,'PaperPosition',[2,2,5.9,3.1]);
c=n-0;
for t=1:c
thetahatp(t,1:5) = thetahat(t,1:5);
errorp(t) = error(t);
end
for p=1:c
a10(p) = -0.2431;
a20(p) = 0.4900;
b10(p) = 0.2110;
b20(p) = -0.1539;
b30(p) = 0.9308;
end
%
x=[1:1:n];
a1=thetahatp(1:n,1);
a2=thetahatp(1:n,2);
b1=thetahatp(1:n,3);

b2=thetahatp(1:n,4);
b3=thetahatp(1:n,5);
plot(x,a1,'-',x,a2,'-',x,a10,'--',x,a20,'--')
grid
title('')
xlabel('t')
ylabel('A-parameter value')
axis([1 c -0.8 0.8])
print -append sim3_9
plot(x,b1,'-',x,b2,'-',x,b3,'-',x,b10,'--',x,b20,'--',x,b30,'--')
grid
title('')
xlabel('t')
ylabel('B-parameter value')
axis([1 c -0.5 1.5])
print -append sim3_9
b=[1:1:n+1];
plot(b,error)
title('')
xlabel('t')
ylabel('y(t) - y^(t)')
grid
axis([10 c -0.20 0.20])
print -append sim3_9

% SIMULATION 3.9: CONDITION A1
%
% C(q^-1) = 1 - 0.2431*q^-1 + 0.490*q^-2
%
% H(q^-1) = 1/C(q^-1) - 1/2
%
Ts=1;
c=[1 -0.2431 .490];
cx=[2 0 0];
num= cx - c;
den= 2*c;
[h,w]=freqz(num,den,100,'whole');
x=real(h);
plot(w,x)
%title('1/C(q^-1) - 1/2')
xlabel('w (radians)')
ylabel('real part')
axis([0 2*pi 0 2])
!rm coprime_oerr.ps
set(gcf,'PaperPosition',[2,4,5.9,3.4]);
print -append coprime_oerr

% RLS ALGORITHM - SIMULATION 3.10 - TIME-VARYING
%
% SYSTEM: FURNACE delay: d=1
%
% A(q^-1)*y(t) = q^-d*B(q^-1)*u(t) + e(t)
%
% A(q^-1) = 1 + (-1.5781)*q^-1 + 0.5805*q^-2
%
% B(q^-1) = 8.6800*q^-1 + (2.0000 + 0.5*cos(0.05t))*q^-2
%
% u(t) = input white noise with variance = 0.01
% e(t) = white noise with variance = 0.0001
%
% INITIAL CONDITIONS
clear
y(1)=0;
y(2)=0;
error(1)=0;
error(2)=0;
error(3)=0;
yhat(1)=0;
yhat(2)=0;
yhat(3)=0;
epsilon(1)=0;
epsilon(2)=0;
epsilon(3)=0;
thetahat(1,1:4)= [0 0 0 0];
thetahat(2,1:4)= [0 0 0 0];
thetahat(3,1:4)= [0 0 0 0];
Fold=[10000 0 0 0;0 10000 0 0;0 0 10000 0;0 0 0 10000];
landa(1)=0.96;
landa(2)=0.96;
landa(3)=0.96;
n=500;
% INPUT NOISE GENERATION
randn('seed',5)
u(1)= 0;
u(2)= 0;
for a=3:(n+1)

u(a)=sqrt(.01)*randn;
end
% DISTURBANCE NOISE GENERATION
randn('seed',1)
for a=1:(n+2)
e(a)=sqrt(0.0001)*randn;
end
% Weighted Least Squares Algorithm
for t=3:n
phi(t,1:4)= [-y(t) -y(t-1) u(t-1) u(t-2)];
y(t+1)= 1.5781*y(t) - 0.5805*y(t-1) + 8.68*u(t-1) + (2.0 + 0.5*cos(0.05*t))*u(t-2) + e(t+1);
yhat(t+1)= thetahat(t,1:4)*phi(t,1:4)';
epsilon(t+1)= (y(t+1) - thetahat(t,1:4)*phi(t,1:4)')/(1 + phi(t,1:4)*Fold*phi(t,1:4)');
thetahat(t+1,1:4) = (thetahat(t,1:4)' + Fold*phi(t,1:4)'*epsilon(t+1))';
Fnew= (1/landa(t))*(Fold - (Fold*phi(t,1:4)'*phi(t,1:4)*Fold)/(landa(t) + phi(t,1:4)*Fold*phi(t,1:4)'));
Fold= Fnew;
error(t+1)= y(t+1) - yhat(t+1);
landa(t+1)=0.96;
end
% PLOT
set(gcf,'PaperPosition',[2,2,5.9,3.1]);
for p=1:n
a10(p) = -1.5781;
a20(p) = 0.5805;
b10(p) = 8.6800;
b20(p) = 2.0000 + 0.5*cos(0.05*p);
end
x=[1:1:n];
a1=thetahat(1:n,1);
a2=thetahat(1:n,2);
b1=thetahat(1:n,3);
b2=thetahat(1:n,4);
plot(x,a1,'-',x,a2,'-',x,a10,'--',x,a20,'--',x,b1,'-',x,b10,'--')
grid
xlabel('t')
ylabel('parameter value')
axis([1 n -2.5 10.5])
pause
!rm sim3_10.ps

print -append sim3_10
plot(x,b2,'-',x,b20,'--')
grid
%title('Time-varying parameter')
xlabel('t')
ylabel('parameter value')
axis([1 n -1.0 4.0])
pause
print -append sim3_10
b=[1:1:n+1];
plot(b,error)
title('')
xlabel('t')
ylabel('y(t) - y^(t)')
grid
axis([20 n -0.20 0.20])
print -append sim3_10
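The constant forgetting factor landa = 0.96 used throughout this run is what gives the estimator its tracking ability: with exponential forgetting, data k steps old are weighted by landa^k, so the effective memory is roughly 1/(1 - landa) samples. A quick sketch of that rule of thumb for the value used here:

% Effective memory of the exponential forgetting factor (rule of thumb)
landa = 0.96;
Neff  = 1/(1 - landa)   % = 25 samples; older data are heavily discounted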

% LEAST SQUARES ALGORITHM - SIMULATION 3.11a
%
% SYSTEM: FURNACE d=0 (delay)
%
% WITH UNMODELLED DYNAMICS gamma(t)
%
% A(q^-1)*y(t) = q^-d*B(q^-1)*u(t) + e(t) + gamma(t)
%
% A(q^-1) = 1 + (-1.3767)*q^-1 + 0.4173*q^-2
%
% B(q^-1) = 11.52*q^-1 + 2.5000*q^-2
%
% gamma(t) = [alpha/(1 - 0.7*q^-1)]*u(t-d)
%
% alpha = 0.50 for first run; alpha = 1.00 for second run
%
% u(t) = input white noise with variance = .01
% e(t) = white noise with variance = 0.0001
%
% INITIAL CONDITIONS
clear
y(1)=0;
y(2)=0;
yhat(1)=0;
yhat(2)=0;
error(1)=0;
error(2)=0;
epsilon(1)=0;
epsilon(2)=0;
thetahat(1,1:4)= [0 0 0 0];
thetahat(2,1:4)= [0 0 0 0];
Fold=[1000 0 0 0;0 1000 0 0;0 0 1000 0;0 0 0 1000];
landa(1)=1.0;
landa(2)=1.0;
gamma(1)=0;
gamma(2)=0;
alpha=0.50;
n=302;
% INPUT NOISE GENERATION

randn('seed',5)
u(1)= 0;
for a=2:(n+1)
u(a)=sqrt(.01)*randn;
%fid = fopen('input2');
%u= fscanf(fid,'%g',inf);
end
% DISTURBANCE NOISE GENERATION
randn('seed',1)
for a=1:(n+2)
e(a)=sqrt(0.0001)*randn;
end
% Recursive Least Squares Algorithm
for t=2:n
phi(t,1:4)= [-y(t) -y(t-1) u(t) u(t-1)];
gamma(t+1)= 0.7*gamma(t) + alpha*u(t+1);
y(t+1)= 1.3767*y(t) - 0.4173*y(t-1) + 11.52*u(t) + 2.5*u(t-1) + e(t+1) + gamma(t+1);
yhat(t+1)= thetahat(t,1:4)*phi(t,1:4)';
epsilon(t+1)= y(t+1) - thetahat(t,1:4)*phi(t,1:4)';
Fnew= (1/landa(t))*(Fold - (Fold*phi(t,1:4)'*phi(t,1:4)*Fold)/(landa(t) + phi(t,1:4)*Fold*phi(t,1:4)'));
thetahat(t+1,1:4) = (thetahat(t,1:4)' + Fnew*phi(t,1:4)'*epsilon(t+1))';
Fold= Fnew;
error(t+1)= y(t+1) - yhat(t+1);
landa(t+1)=1;
end
% PLOT
set(gcf,'PaperPosition',[2,2,5.9,3.1]);
c=n-2;
for t=1:c
thetahatp(t,1:4) = thetahat(t+2,1:4);
%errorp(t) = error2(t+3);
end
for p=1:c
a10(p) = -1.3767;
a20(p) = 0.4173;
b10(p) = 11.5200;
b20(p) = 2.5000;
end
x=[1:1:c];

a1=thetahat(1:c,1);
a2=thetahat(1:c,2);
b1=thetahat(1:c,3);
b2=thetahat(1:c,4);
plot(x,a1,'-',x,a2,'-',x,a10,'--',x,a20,'--')
%title('LS Algorithm - Unmodelled dynamics gamma(t) (alpha=0.10)')
xlabel('t')
ylabel('A-parameter value')
%axis([0 c -4.0 12.0])
!rm sim3_11a.ps
print -append sim3_11a
plot(x,b1,'-',x,b2,'-',x,b10,'--',x,b20,'--')
%title('LS Algorithm - Unmodelled dynamics gamma(t) (alpha=0.10)')
xlabel('t')
ylabel('B-parameter value')
axis([0 c -0.5 13.0])
print -append sim3_11a
b=[1:1:n+1];
plot(b,error)
%title('Error = y(t) - y^(t)')
xlabel('t')
ylabel('y(t) - y^(t)')
axis([20 c -.5 .5])
print -append sim3_11a
!rm sim3_11a
diary sim3_11a
alpha
thetahat(n,1:4)
diary off
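Because gamma(t) is correlated with the input, the least squares estimates no longer converge to the nominal parameters; the header prescribes one run with alpha = 0.50 and one with alpha = 1.00 so the growth of this bias can be observed. A short addition that quantifies the bias at the end of each run (an illustrative sketch, not thesis code; theta0 simply restates the nominal parameters from the header):

% Distance of the final estimate from the nominal parameter vector
theta0 = [-1.3767 0.4173 11.5200 2.5000];  % nominal [a1 a2 b1 b2]
bias   = norm(thetahat(n,1:4) - theta0)    % expected to grow with alpha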
