
Citation 
 Permanent Link:
 http://digital.auraria.edu/AA00001968/00001
Material Information
 Title:
 Model reference adaptive control
 Creator:
 Gibbs, Alan Edwin
 Place of Publication:
 Denver, CO
 Publisher:
 University of Colorado Denver
 Publication Date:
 2000
 Language:
 English
 Physical Description:
ix, 134 leaves ; 28 cm
Subjects
 Subjects / Keywords:
 Adaptive control systems -- Mathematical models ( lcsh )
Adaptive control systems -- Mathematical models ( fast )
 Genre:
 bibliography ( marcgt )
theses ( marcgt ) nonfiction ( marcgt )
Notes
 Bibliography:
 Includes bibliographical references (leaf 134).
 Thesis:
 Electrical engineering
 General Note:
 Department of Electrical Engineering
 Statement of Responsibility:
 by Alan Edwin Gibbs.
Record Information
 Source Institution:
 University of Colorado Denver
 Holding Location:
 Auraria Library
 Rights Management:
 All applicable rights reserved by the source institution and holding location.
 Resource Identifier:
 47103486 ( OCLC )
ocm47103486
 Classification:
 LD1190.E54 2000m .G52 ( lcc )

MODEL REFERENCE ADAPTIVE CONTROL
by
Alan Edwin Gibbs
B.S., University of Colorado at Denver, 1997
A thesis submitted to the
University of Colorado at Denver
in partial fulfillment
of the requirements for the degree of
Master of Science
Electrical Engineering
This thesis for the Master of Science
degree by
Alan Edwin Gibbs
has been approved
by
Jan T. Bialasiewicz
Hamid Z. Fardi
Miloje S. Radenkovic
Date
Gibbs, Alan Edwin (M.S., Electrical Engineering)
Model Reference Adaptive Control
Thesis directed by Professor Miloje S. Radenkovic
ABSTRACT
This thesis explores Model Reference Adaptive Control theory. Adaptive control
systems are utilized when the parameters of the system being controlled are unknown
and/or vary in time. The theory for the Normalized Gradient algorithm, Least Squares
algorithm, Model Reference Adaptive Control algorithms based on the normalized
gradient and least squares algorithms, and the Pole Placement algorithms are developed.
The design procedures when disturbances are present and when the system is nonminimum
phase are discussed. The thesis also develops the methodology for robust
adaptive control systems as well as addressing practical issues and implementation of
robust controllers. Simulations utilizing the least square algorithm and the model
reference adaptive control algorithm based on least squares are performed with varying
noise levels injected into the controlled system. A robust model reference adaptive
controller is also developed and compared to the idealized model reference controller.
Using information from the simulations, the global stability, tracking performance, and
robustness of the least-squares-based controllers are investigated.
This abstract accurately represents the content of the candidate's thesis. I recommend its
publication.
Signed
Miloje S. Radenkovic
CONTENTS
Figures.................................................................... vi
Tables..................................................................... ix
Chapter
1. Introduction........................................................... 1
1.1. Model Reference Nonadaptive Control System............................ 4
1.2. Model Reference Adaptive Control Systems............................... 8
2. Model Reference Adaptive Control Algorithms........................... 11
2.1. Normalized Gradient Algorithm......................................... 12
2.2. Least Squares Algorithm............................................... 16
2.3. Model Reference Adaptive Controller................................... 19
2.4. Pole Placement Adaptive Controller.................................... 26
3. Algorithm Design Procedure for Disturbances........................... 31
4. Design Procedure When System is Nonminimum Phase..................... 34
5. Practical Issues and Robust Control Implementation.................... 37
5.1. Computational Delay................................................... 38
5.2. Sampling, Prefiltering, Postfiltering, and Data Filters............. 41
5.3. Parameter Tracking.................................................... 46
5.4. Covariance Resetting.................................................. 47
5.5. Estimator Windup and Methods to Combat Windup......................... 48
5.5.1. Conditional Updating.................................................. 49
5.5.2. Constant Trace Algorithms............................................. 49
5.5.3. Directional Forgetting................................................ 50
5.5.4. Leakage............................................................... 51
5.6. Robust Estimation..................................................... 52
6. Simulations.............................................................. 54
6.1. Simulation using Least Mean Square Algorithm............................. 56
6.2. Results of Least Mean Square Simulations................................. 76
6.3. Simulation using Model Reference Adaptive Control Algorithm.............. 78
6.4. Results of Model Reference Adaptive Controller Simulations............... 93
6.5. Simulations using Robust MRAC with Antialiasing Filter.................. 96
6.6. Results of Robust MRAC Antialiasing Filter Simulations................. 102
6.7. Robust Model Reference Adaptive Control Simulations..................... 103
6.8. Results of Robust Model Reference Control Simulations................... 117
7. Conclusion.............................................................. 118
Appendix
A Least Mean Square Algorithm............................................. 120
B Model Reference Adaptive Control Algorithm.............................. 123
C Robust Model Reference Adaptive Control Algorithm....................... 125
D Robust MRAC compared to Idealized MRAC Algorithm........................ 127
E Antialiasing Filter Algorithm.......................................... 130
References.................................................................... 134
FIGURES
Figure
1.1. Controller Design Process............................................. 2
1.2. Adaptive Control System............................................... 2
1.3. Nonadaptive Model Reference Control System........................... 3
1.4. Simplified Nonadaptive Model Reference Control System................ 3
1.1.1. Nonadaptive Closed Loop System....................................... 5
1.1.2. Nonadaptive Reference Model.......................................... 6
2.1. General Form of Adaptive Controller.................................. 11
2.4.1. Pole Placement Controller............................................ 27
5.1. Adaptive Robust Controller........................................... 38
5.1.1. Computational Delay.................................................. 39
5.2.1. Amplitude Curve for the Data Filter H_f(q)........................... 45
6.1.1. Least Square λ = 0.98, No Noise, and No Variance .................... 57
6.1.2. Least Square λ = 0.20, No Noise, and No Variance .................... 58
6.1.3. Least Square λ = 0.10, No Noise, and No Variance .................... 59
6.1.4. Least Square λ = 0.09, No Noise, and No Variance .................... 60
6.1.5. Least Square λ = 0.089, No Noise, and No Variance ................... 61
6.1.6. Least Square λ = 0.087, No Noise, and No Variance ................... 62
6.1.7. Least Square λ = 0.98, Noise = 2, and No Variance ................... 63
6.1.8. Least Square λ = 0.50, Noise = 2, and No Variance ................... 64
6.1.9. Least Square λ = 0.20, Noise = 2, and No Variance ................... 65
6.1.10. Least Square λ = 0.083, Noise = 2, and No Variance ................. 66
6.1.11. Least Square λ = 0.98, No Noise, and With Variance = 2 ............. 67
6.1.12. Least Square λ = 0.50, No Noise, and With Variance = 2 ............. 68
6.1.13. Least Square λ = 0.30, No Noise, and With Variance = 2 ............. 69
6.1.14. Least Square λ = 0.08, No Noise, and With Variance = 2 ............. 70
6.1.15. Least Square λ = 0.07, No Noise, and With Variance = 2 ............. 71
6.1.16. Least Square λ = 0.98, Noise = 2, and With Variance = 2 ............ 72
6.1.17. Least Square λ = 0.50, Noise = 2, and With Variance = 2 ............ 73
6.1.18. Least Square λ = 0.09, Noise = 2, and With Variance = 2 ............ 74
6.1.19. Least Square λ = 0.0044, Noise = 2, and With Variance = 2 .......... 75
6.3.1. MRAC Noise = 0, and λ = 1 ........................................... 81
6.3.2. MRAC Noise = 0.0001, and λ = 1 ...................................... 82
6.3.3. MRAC Noise = 0.001, and λ = 1 ....................................... 83
6.3.4. MRAC Noise = 0.005, and λ = 1 ....................................... 84
6.3.5. MRAC Noise = 0.01, and λ = 1 ........................................ 85
6.3.6. MRAC Noise = 0.1, and λ = 1 ......................................... 86
6.3.7. MRAC Noise = 0, and λ = 0.98 ........................................ 87
6.3.8. MRAC Noise = 0, and λ = 0.5 ......................................... 88
6.3.9. MRAC Noise = 0, and λ = 0.4 ......................................... 89
6.3.10. MRAC Noise = 0.01, and λ = 0.98 .................................... 90
6.3.11. MRAC Noise = 0.01, and λ = 0.9 ..................................... 91
6.3.12. MRAC Noise = 0.01, and λ = 0.5 ..................................... 92
6.5.1. Step Response of the Plant.................................................. 96
6.5.2. Step Response of the Closed Loop Plant with MRAC............................ 97
6.5.3. Step Response of the Closed Loop with Antialiasing Filter.................. 97
6.5.4. Gain Magnitude for the Plant................................................ 98
6.5.5. Phase Margin for the Plant.................................................. 98
6.5.6. Gain Magnitude for the Closed Loop Plant with MRAC................................ 99
6.5.7. Phase Margin for the Closed Loop Plant with MRAC.................................. 99
6.5.8. Gain Magnitude for the Closed Loop Plant with Antialiasing Filter............ 100
6.5.9. Phase Margin for the Closed Loop Plant with Antialiasing Filter................. 100
6.5.10. Gain Magnitude Comparison....................................................... 101
6.5.11. Phase Margin Comparison......................................................... 101
6.7.1. Robust MRAC Noise = 0, and λ = 0.98 ................................ 103
6.7.2. Robust MRAC Noise = 0.001, and λ = 0.98 ............................ 104
6.7.3. Robust MRAC Noise = 0.01, and λ = 0.98 ............................. 105
6.7.4. Robust MRAC Noise = 0.1, and λ = 0.98 .............................. 106
6.7.5. Robust MRAC Noise = 0, and λ = 0.5 ................................. 107
6.7.6. Robust MRAC Noise = 0, and λ = 0.3 ................................. 108
6.7.7. Robust MRAC Noise = 0.1, and λ = 0.9 ............................... 109
6.7.8. Robust MRAC Noise = 0.1, and λ = 0.9 ............................... 110
6.7.9. Robust MRAC Noise = 0.001, and λ = 0.98 ............................ 111
6.7.10. Robust MRAC Noise = 0.001, and λ = 0.98 ........................... 112
6.7.11. Robust MRAC Noise = 0.003, and λ = 0.9 ............................ 113
6.7.12. Robust MRAC Noise = 0.003, and λ = 0.9 ............................ 114
6.7.13. Robust MRAC Noise = 0.002, and λ = 0.5 ............................ 115
6.7.14. Robust MRAC Noise = 0.002, and λ = 0.5 ............................ 116
TABLES
Table
5.2.1 Properties of Fourth Order Bessel Filters................................ 42
5.3.1. Relation Between μ and λ .............................................. 47
1. Introduction
The objective of this thesis is to explore Model Reference Adaptive Control systems.
Theoretical background, algorithm development, and simulation examples will be
presented. Adaptive control systems provide a systematic technique for automatic
adjustment of controllers in real time, to maintain a desired level of performance when
the parameters of the dynamic model are unknown and/or change in time. In order to
design a good controller the designer needs to:
1. Specify the desired closed loop performance
2. Know the dynamic model of the plant to be controlled
3. Have a controller design method making it possible to meet the desired performance
for the corresponding plant model
An adaptive control system of a plant or process is one that automatically tunes itself in
real time. The dynamic model of the plant can be obtained from input/output
measurements obtained in open or closed loop. Or put simply, the design of the
controller is accomplished from data collected from the system to be controlled.
Adaptive control systems can be viewed as tuning the controller in real time from data
collected in real time on the system being controlled. The block diagrams on the
following page in Figures 1.1 and 1.2 illustrate the adaptive control design process
discussed above.
Figure 1.1 Controller Design Process
Figure 1.2 Adaptive Control System
Some systems that appear to be adaptive lack the structure for their controller to be
self-tuning. These systems use known fixed parameters to produce the desired results and are
not self-tuning to changes in the system being controlled. Nonadaptive control systems
may be thought of as a prefilter to the system being controlled and do not interactively
change the system input. Figures 1.3 and 1.4 below depict block diagrams of the
nonadaptive model reference control system.
Figure 1.3 Nonadaptive Model Reference Control System
Figure 1.4 Simplified Nonadaptive Model Reference Control System
1.1 Model Reference Nonadaptive Control System
For a system as shown in Figures 1.3 and 1.4 we consider a system in the following
form

A(q^{-1}) y(t) = q^{-d} B(q^{-1}) u(t)    (1.1.1)

where u(t) and y(t) are the system input and output respectively and d is the system
delay. We assume that A(q^{-1}) and B(q^{-1}) are coprime polynomials given in the
form

A(q^{-1}) = 1 + a_1 q^{-1} + ... + a_n q^{-n}    (1.1.2)

B(q^{-1}) = b_0 + b_1 q^{-1} + ... + b_m q^{-m}    (1.1.3)

where q^{-1} is the unit delay operator. We wish to design a controller where the closed
loop system gives the desired response to the reference signal r(t):

R(q^{-1}) u(t) = T(q^{-1}) r(t) - S(q^{-1}) y(t)    (1.1.4)

where R, S, and T are polynomials of the form

R_1(q^{-1}) = 1 + r_1 q^{-1} + ... + r_{d-1} q^{-d+1}    (1.1.5)

S(q^{-1}) = s_0 + s_1 q^{-1} + ... + s_{ns} q^{-ns}    (1.1.6)

and ns = max {deg A_m - d, deg A - 1}.
Given the design specifications ω_n (natural frequency) and ζ (damping coefficient),
we can synthesize the nonadaptive reference model in the form

A_m(q^{-1}) y_m(t) = q^{-d} B_m(q^{-1}) r(t)    (1.1.7)

where y_m(t) is the output of the reference model and

A_m(q^{-1}) = 1 + a_{1m} q^{-1} + ... + a_{nm} q^{-nm}    (1.1.8)

B_m(q^{-1}) = b_{0m} + b_{1m} q^{-1} + ... + b_{mm} q^{-mm}    (1.1.9)

We assume that A_m(q^{-1}) and B_m(q^{-1}) do not have common factors. The block
diagrams in Figures 1.1.1 and 1.1.2 depict the closed loop system.
Figure 1.1.1 Nonadaptive Closed Loop System

Figure 1.1.2 Nonadaptive Reference Model
If we solve equations (1.1.1) and (1.1.4) for u(t) and equate, we can solve for y(t):

y(t) = [q^{-d} B(q^{-1}) T(q^{-1})] / [A(q^{-1}) R(q^{-1}) + q^{-d} B(q^{-1}) S(q^{-1})] r(t)    (1.1.10)

Model reference design requires that the transfer function of the closed loop system
equal the transfer function of the reference model. From equations (1.1.7) and (1.1.10)
we are now in a position to derive

[q^{-d} B(q^{-1}) T(q^{-1})] / [A(q^{-1}) R(q^{-1}) + q^{-d} B(q^{-1}) S(q^{-1})] = q^{-d} B_m(q^{-1}) / A_m(q^{-1})    (1.1.11)

The polynomials T(q^{-1}) and R(q^{-1}) in equation (1.1.11) are given by

T(q^{-1}) = B_m(q^{-1})  and  R(q^{-1}) = B(q^{-1}) R_1(q^{-1})    (1.1.12)

and S(q^{-1}) is obtained as a solution to the Diophantine equation

A_m(q^{-1}) = A(q^{-1}) R_1(q^{-1}) + q^{-d} S(q^{-1})    (1.1.13)

Equation (1.1.13) has a causal solution if the polynomials R_1(q^{-1}) and S(q^{-1}) have
the form

R_1(q^{-1}) = 1 + r_1 q^{-1} + ... + r_{d-1} q^{-d+1}    (1.1.14)

S(q^{-1}) = s_0 + s_1 q^{-1} + ... + s_{ns} q^{-ns}    (1.1.15)

where ns = max {n - 1, deg A_m - d}, n is the degree of the polynomial A(q^{-1}),
and deg A_m denotes the degree of A_m(q^{-1}). The degree of R_1(q^{-1}) is equal to d - 1
and the degree of S(q^{-1}) is equal to max {deg A_m - d, deg A - 1}. The control
structure of equation (1.1.4) is determined by equations (1.1.12) and (1.1.13) in terms of
the system polynomials A(q^{-1}), B(q^{-1}) and the reference model polynomials
A_m(q^{-1}) and B_m(q^{-1}). We can easily check to see if the polynomials R(q^{-1}),
T(q^{-1}), and S(q^{-1}) satisfy the desired relation given in equation (1.1.11). If we
substitute R(q^{-1}) and T(q^{-1}) from equation (1.1.12) into (1.1.11) we obtain

q^{-d} B B_m / (A B R_1 + q^{-d} B S) = q^{-d} B_m / (A R_1 + q^{-d} S)

Also we know from equation (1.1.13) that A_m = A R_1 + q^{-d} S, so

q^{-d} B B_m / (A B R_1 + q^{-d} B S) = q^{-d} B_m / A_m

From the above relation we can conclude that the closed loop transfer function is equal
to the transfer function of the reference model.
1.2 Model Reference Adaptive Control Systems
Any system being controlled requires a controller that can measure the system's output
against the intended output. Adaptive control systems adjust the controller in real time
to maintain the intended output. Adaptive control systems can use the output error of
the controlled system to maintain the intended output. Output error of the system being
controlled is determined by taking the difference between the reference signal and the
system output. The error signal could be used directly without modification as feedback
to close the system loop, however, this case would not be an adaptive controller.
In a system that utilizes adaptive control, the adaptive controller closes the feedback
loop around the system to be controlled. All adaptive control systems perform three
main functions:
1. Identification of system dynamics
2. Control decision making
3. Modification of control parameters
The adaptive controller uses the plant's input and output in conjunction with desired
performance to identify the system dynamics. The adaptive controller then makes a
decision as to how the control parameters will be modified based on the system
dynamics identified in step one. Adaptive control systems can identify the plant
dynamics, compare the dynamics to the desired performance and then regulate the
dynamics of the system being controlled. Or the adaptive controller can identify the
system output and compare it to the reference signal before making a decision on the
adjustment to the controller. These two methods of control can be thought of as either
direct or indirect adaptive control.
The direct method of adaptive control calculates the controller parameters directly to
minimize the error signal. This method requires that the controller have enough
parameters to match the order of the plant being controlled.
The indirect method is used when the plant parameters are estimated to minimize the
error signal. The adaptive controller then calculates the controller parameters to produce
the desired performance. This necessitates the use of two separate sets of parameters.
The use of indirect adaptive control makes it possible for the control feedback signal to
be different than the error signal. The use of indirect control allows the controller the
possibility of recalculating the controller parameters at predetermined intervals instead of
every interval. The extra time allows the controller to improve plant identification and
the estimated plant parameters used to retune the controller may be more accurate. In a
higher order system this may be mandatory to allow the controller enough time to
process the plant controller parameter estimates in real time.
The heart of any adaptive controller is the Parameter Adaptation Algorithm (PAA). The
PAA is the core of the controller that adapts the estimates of the adjustable parameters
used to control the plant. A comprehensive plant model is mandatory in order to select
the appropriate PAA to control the plant. The algorithms discussed in this thesis assume
that the plant models have parameters that are linearly related to the input and output of
the plant. However, the parameters may be time varying and the plant model may be
nonlinear. The Model Reference Adaptive Control algorithm's goal is to estimate the
unknown parameters of the plant in real time by minimizing the error that is recursively
calculated from the plant output and the reference model while utilizing previously
calculated parameter values.
2. Model Reference Adaptive Control Algorithms
Adaptive control algorithms examined in this thesis differ in the way information is
processed in real time in order to tune the controllers for achieving the desired
performance. In an adaptive control system we would like the steady state value of the
error to be zero. Figure 2.1 below illustrates the general form of an adaptive controller.
Figure 2.1 General Form of Adaptive Controller
A system where the error goes to zero in a minimum number of sampling periods is
referred to as a deadbeat controller. This leads us to select the fastest method possible to
achieve zero error, namely a steepest descent algorithm. Let us examine the normalized
gradient method.
2.1 Normalized Gradient Algorithm
Consider the system of the form

A(q^{-1}) y(t) = q^{-d} B(q^{-1}) u(t)    (2.1.1)

where u(t) is the system input, y(t) is the system output, and the polynomials A(q^{-1})
and B(q^{-1}) have the form

A(q^{-1}) = 1 + a_1 q^{-1} + ... + a_n q^{-n}    (2.1.2)

B(q^{-1}) = b_0 + b_1 q^{-1} + ... + b_m q^{-m},  b_0 ≠ 0    (2.1.3)

For given design specifications ω_n (natural frequency) and ζ (damping coefficient) we
can synthesize the adaptive reference model in the form

A_m(q^{-1}) y_m(t) = q^{-d} B_m(q^{-1}) r(t)    (2.1.4)

where y_m(t) is the output of the reference model and

A_m(q^{-1}) = 1 + a_{1m} q^{-1} + ... + a_{nm} q^{-nm}    (2.1.5)

B_m(q^{-1}) = b_{0m} + b_{1m} q^{-1} + ... + b_{mm} q^{-mm}    (2.1.6)

We assume that A_m(q^{-1}) and B_m(q^{-1}) do not have common factors. Our objective
is to design an adaptive controller to achieve zero tracking error, where the output of the
plant y(t) minus the reference model output y_m(t) goes to zero as t → ∞, or

lim_{t→∞} [y(t) - y_m(t)] = 0    (2.1.7)

Equivalently, the following functional criterion should be minimized:

J_t = (1/2) [y(t) - y_m(t)]^2    (2.1.8)
If we write the adaptive control system model in the form

y(t+1) + a_1 y(t) + a_2 y(t-1) + ... + a_{nA} y(t-nA+1)
    = b_0 u(t) + b_1 u(t-1) + ... + b_{nB} u(t-nB)    (2.1.9)

solving for y(t+1) gives

y(t+1) = -a_1 y(t) - a_2 y(t-1) - ... - a_{nA} y(t-nA+1)
    + b_0 u(t) + b_1 u(t-1) + ... + b_{nB} u(t-nB)    (2.1.10)

We know from equation (2.1.10) that

y(t+1) = [-a_1, -a_2, ..., -a_{nA}, b_0, b_1, ..., b_{nB}]
         [y(t), y(t-1), ..., y(t-nA+1), u(t), u(t-1), ..., u(t-nB)]^T    (2.1.11)

Let

θ_0^T = [-a_1, -a_2, ..., -a_{nA}, b_0, b_1, ..., b_{nB}]    (2.1.12)

and

φ(t) = [y(t), y(t-1), ..., y(t-nA+1), u(t), u(t-1), ..., u(t-nB)]^T    (2.1.13)

We now have

y(t+1) = θ_0^T φ(t)    (2.1.14)
The vector θ_0 is referred to as the system parameter vector and φ(t) as the
measurement vector. From (2.1.14) it is easy to obtain

y(t+1) - y_m(t+1) = θ_0^T φ(t) - y_m(t+1)    (2.1.15)

From (2.1.7) we know lim_{t→∞} [y(t) - y_m(t)] = 0, and from equation (2.1.15) we can
conclude that the controller that is optimal in the sense of (2.1.7) is given by

y_m(t+1) = θ_0^T φ(t)    (2.1.16)

However, the system parameters are not always known, so the adaptive controller becomes

y_m(t+1) = θ̂(t)^T φ(t)    (2.1.17)

where θ̂(t) is an estimate of the real parameter vector θ_0. We can obtain θ̂(t) from the
gradient iteration

θ̂(t+1) = θ̂(t) - μ ∂J_{t+1}/∂θ̂(t)    (2.1.18)

where μ is the algorithm gain, 0 < μ < 2. We now take the derivative of J_{t+1} with
respect to θ̂(t):

∂J_{t+1}/∂θ̂(t) = [y(t+1) - y_m(t+1)] ∂[y(t+1) - y_m(t+1)]/∂θ̂(t)    (2.1.19)

And since by (2.1.17) y_m(t+1) = θ̂(t)^T φ(t), we can derive

∂[y(t+1) - y_m(t+1)]/∂θ̂(t) = -φ(t)    (2.1.20)

We are now in position to derive from equations (2.1.18) and (2.1.20)

θ̂(t+1) = θ̂(t) + μ φ(t) [y(t+1) - y_m(t+1)]    (2.1.21)

The equation derived in (2.1.21) will be unstable. We can normalize the update by the
Euclidean norm of the measurement vector φ(t), ||φ(t)||^2 = φ(t)^T φ(t), plus a constant c
chosen at the discretion of the designer; most often a small value is chosen to avoid
division by zero. Now (2.1.21) becomes

θ̂(t+1) = θ̂(t) + μ φ(t) [y(t+1) - y_m(t+1)] / (c + ||φ(t)||^2)    (2.1.22)

where

(i) 0 < μ < 2 is the algorithm gain
(ii) 0 < c < ∞ is a constant used at the designer's discretion
(iii) θ̂(0) = θ̂_0 is the initial condition of the parameter estimate vector

Equation (2.1.22) above is known as the Normalized Gradient Algorithm.
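As a minimal sketch, the update (2.1.22) can be run in identification form on a hypothetical first-order plant: the control loop is omitted, a random excitation is used, and all numeric values below are my own illustrative choices, not from the thesis.

```python
import numpy as np

rng = np.random.default_rng(0)

# True plant y(t+1) = -a1*y(t) + b0*u(t); theta_0 = [-a1, b0] is unknown to the estimator
a1, b0 = -0.8, 0.5
theta0 = np.array([-a1, b0])

mu, c = 1.0, 0.001          # algorithm gain 0 < mu < 2 and small constant c
theta = np.zeros(2)         # theta_hat(0) = 0

y = 0.0
for t in range(200):
    u = rng.standard_normal()            # persistently exciting input
    phi = np.array([y, u])               # measurement vector phi(t), as in (2.1.13)
    y_next = theta0 @ phi                # plant output y(t+1) = theta_0^T phi(t)
    y_pred = theta @ phi                 # prediction theta_hat(t)^T phi(t), as in (2.1.17)
    # normalized gradient update (2.1.22)
    theta = theta + mu * phi * (y_next - y_pred) / (c + phi @ phi)
    y = y_next

print(theta)   # approaches theta0 = [0.8, 0.5]
```

In the noise-free case the estimate contracts toward θ_0 at every step, which is why a fairly large gain μ = 1 is acceptable here.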
2.2 Least Squares Algorithm
In practical applications of adaptive control algorithms, better performance can be
obtained if we use the least squares algorithm. The normalized gradient algorithm
discussed above will only be globally stable if

Re { 1/A_m(e^{iω}) - μ/2 } > 0    (2.2.1)

for all 0 ≤ ω < 2π. This is called the strictly positive real condition. Condition (2.2.1)
is very restrictive and means that the algorithm will not be stable for all reference
models. We need to design an estimation algorithm such that the restrictive condition
above is avoided. Better performance can be obtained with the use of the least squares
algorithm. If we use instead of equation (2.1.22) the following algorithm
θ̂(t+1) = θ̂(t) + p(t) φ(t) [y(t+1) - y_m(t+1)]    (2.2.2)

where the matrix p(t) is defined by

p(t)^{-1} = p(t-1)^{-1} + φ(t) φ(t)^T,   p(0) = p_0 > 0    (2.2.3)

The form of the least squares algorithm shown in equation (2.2.2) is not practical to use in
a recursive situation, since it requires the algorithm to calculate the inverse of the matrix
p(t) at every iteration. The use of the Matrix Inversion Lemma (MIL) will enable us to
develop a form of the least squares algorithm that does not require the inversion, thus
reducing the number of calculations needed at each iteration.
Let A be an (n × n) dimensional matrix, B an (n × m) dimensional matrix of maximum
rank, C an (m × m) dimensional nonsingular matrix, and D an (m × n) dimensional
matrix of maximum rank. Then the following MIL identity holds:

(A + B C D)^{-1} = A^{-1} - A^{-1} B (C^{-1} + D A^{-1} B)^{-1} D A^{-1}    (2.2.4)
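The identity (2.2.4) is easy to check numerically on random matrices; the dimensions and seed below are arbitrary choices for illustration.

```python
import numpy as np

rng = np.random.default_rng(1)
n, m = 4, 2
A = np.eye(n) + 0.1 * rng.standard_normal((n, n))   # nonsingular n x n matrix
B = rng.standard_normal((n, m))                     # n x m, full rank (almost surely)
C = np.eye(m)                                       # nonsingular m x m matrix
D = rng.standard_normal((m, n))                     # m x n, full rank (almost surely)

Ainv = np.linalg.inv(A)
lhs = np.linalg.inv(A + B @ C @ D)
rhs = Ainv - Ainv @ B @ np.linalg.inv(np.linalg.inv(C) + D @ Ainv @ B) @ D @ Ainv
assert np.allclose(lhs, rhs)
print("matrix inversion lemma verified")
```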
Taking the inverse of (2.2.3) yields

p(t) = [p(t-1)^{-1} + φ(t) φ(t)^T]^{-1}    (2.2.5)

Taking the right side of equation (2.2.5) and matching it to the MIL with
A = p(t-1)^{-1}, B = φ(t), C = 1, and D = φ(t)^T, we can derive

p(t) = p(t-1) - p(t-1) φ(t) [1 + φ(t)^T p(t-1) φ(t)]^{-1} φ(t)^T p(t-1)    (2.2.6)

Rewriting equation (2.2.6) we now have

p(t) = p(t-1) - [p(t-1) φ(t) φ(t)^T p(t-1)] / [1 + φ(t)^T p(t-1) φ(t)]    (2.2.7)

with the condition on p(t) that

p(0) = p(0)^T > 0, i.e., p(0) is symmetric (Hermitian) positive definite.

Then the conditions of the MIL will be satisfied for all t.
Equation (2.2.7) above is known as the Recursive Least Squares Algorithm (RLS). It is a
self-tuning adaptive control algorithm that does not require any matrix inversions.
Implementation of the RLS algorithm requires that p(0) be chosen as a symmetric
positive definite matrix. This ensures that every matrix p(t) will also be symmetric
and positive definite.
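A sketch of the RLS update (2.2.2), (2.2.7) on the same kind of hypothetical first-order regression used earlier; all numeric values are illustrative, and the prediction θ̂(t)^T φ(t) plays the role of y_m(t+1) per (2.1.17).

```python
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical plant y(t+1) = 0.8*y(t) + 0.5*u(t), so theta_0 = [0.8, 0.5]
theta0 = np.array([0.8, 0.5])
theta = np.zeros(2)                  # theta_hat(0) = 0
P = 100.0 * np.eye(2)                # p(0) symmetric positive definite

y = 0.0
for t in range(100):
    u = rng.standard_normal()
    phi = np.array([y, u])           # measurement vector phi(t)
    y_next = theta0 @ phi            # plant output y(t+1)
    # RLS updates (2.2.2) and (2.2.7): no matrix inversion anywhere
    denom = 1.0 + phi @ P @ phi
    K = P @ phi / denom              # gain p(t) phi(t), computed from p(t-1)
    theta = theta + K * (y_next - theta @ phi)
    P = P - np.outer(P @ phi, phi @ P) / denom
    y = y_next

print(theta)   # approaches theta0 = [0.8, 0.5]
```

Note that `P` stays symmetric by construction, matching the requirement that p(0) be chosen symmetric positive definite.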
2.3 Model Reference Adaptive Controller
Next we consider the design of a Model Reference Adaptive Controller based on the
Normalized Gradient algorithm. We consider a system in the following form

A(q^{-1}) y(t) = q^{-d} B(q^{-1}) u(t)    (2.3.1)

where u(t) and y(t) are the system input and output respectively and d is the system
delay. We assume that A(q^{-1}) and B(q^{-1}) are coprime polynomials given in the
form

A(q^{-1}) = 1 + a_1 q^{-1} + ... + a_n q^{-n}    (2.3.2)

B(q^{-1}) = b_0 + b_1 q^{-1} + ... + b_m q^{-m}    (2.3.3)

where q^{-1} is the unit delay operator. We wish to design a controller where the closed
loop system gives the desired response to the reference signal r(t):

R(q^{-1}) u(t) = T(q^{-1}) r(t) - S(q^{-1}) y(t)    (2.3.4)

where R, S, and T are polynomials of the form

R_1(q^{-1}) = 1 + r_1 q^{-1} + ... + r_{d-1} q^{-d+1}    (2.3.5)

S(q^{-1}) = s_0 + s_1 q^{-1} + ... + s_{ns} q^{-ns}    (2.3.6)

and ns = max {deg A_m - d, deg A - 1}. Given the design specifications ω_n (natural
frequency) and ζ_n (damping coefficient), we can synthesize the nonadaptive reference
model in the form

A_m(q^{-1}) y_m(t) = q^{-d} B_m(q^{-1}) r(t)    (2.3.7)

where y_m(t) is the output of the reference model and

A_m(q^{-1}) = 1 + a_{1m} q^{-1} + ... + a_{nm} q^{-nm}    (2.3.8)

B_m(q^{-1}) = b_{0m} + b_{1m} q^{-1} + ... + b_{mm} q^{-mm}    (2.3.9)
We assume that A_m(q^{-1}) and B_m(q^{-1}) do not have common factors. If we solve
equations (2.3.1) and (2.3.4) for u(t) and equate, we can solve for y(t):

y(t) = [q^{-d} B(q^{-1}) T(q^{-1})] / [A(q^{-1}) R(q^{-1}) + q^{-d} B(q^{-1}) S(q^{-1})] r(t)    (2.3.10)

Model reference design requires that the transfer function of the closed loop system
equal the transfer function of the reference model. From equations (2.3.7) and (2.3.10)
we are now in a position to derive

[q^{-d} B(q^{-1}) T(q^{-1})] / [A(q^{-1}) R(q^{-1}) + q^{-d} B(q^{-1}) S(q^{-1})] = q^{-d} B_m(q^{-1}) / A_m(q^{-1})    (2.3.11)

If we drop the argument q^{-1} for convenience, equation (2.3.11) becomes

B T / (A R + q^{-d} B S) = B_m / A_m    (2.3.12)

The polynomials T and R in equation (2.3.12) are given by

T = B_m  and  R = B R_1    (2.3.13)

and S is obtained as a solution to the Diophantine equation

A_m = A R_1 + q^{-d} S    (2.3.14)

Equation (2.3.14) has a causal solution if the polynomials R_1 and S have the form

R_1(q^{-1}) = 1 + r_1 q^{-1} + ... + r_{d-1} q^{-d+1}    (2.3.15)

S(q^{-1}) = s_0 + s_1 q^{-1} + ... + s_{ns} q^{-ns}    (2.3.16)

where ns = max {n - 1, deg A_m - d}, n is the degree of the polynomial A(q^{-1}),
and deg A_m denotes the degree of A_m(q^{-1}). The degree of R_1(q^{-1}) is equal to d - 1
and the degree of S(q^{-1}) is equal to max {deg A_m - d, deg A - 1}. The control
structure of equation (2.3.4) is determined by equations (2.3.13) and (2.3.14) in terms of
the system polynomials A(q^{-1}), B(q^{-1}) and the reference model polynomials
A_m(q^{-1}) and B_m(q^{-1}).
Another way to obtain the solution for the controller polynomials is to take equation
(2.3.14) and multiply A_m by the output error. We now have

A_m [y(t+d) - y_m(t+d)] = A R_1 y(t+d) + S y(t) - A_m y_m(t+d)    (2.3.17)

We know from equations (2.3.1) and (2.3.7) that

A y(t+d) = B u(t)  and  A_m y_m(t+d) = B_m r(t)    (2.3.18)

Substituting (2.3.18) into (2.3.17) we can obtain

A_m [y(t+d) - y_m(t+d)] = B R_1 u(t) + S y(t) - B_m r(t)    (2.3.19)

If we set the output error to zero, [y(t+d) - y_m(t+d)] = 0, we can derive from
equation (2.3.19) the structure of the model reference controller

B R_1 u(t) = -S y(t) + B_m r(t)    (2.3.20)

where R_1(q^{-1}) and S(q^{-1}) are obtained from the polynomial equation given in
(2.3.14). In other words the controller is the same as the one defined in equations (2.3.4),
(2.3.13), and (2.3.14).

We wish to derive the regressive form of equation (2.3.19). Let

α(q^{-1}) = B(q^{-1}) R_1(q^{-1}) = α_0 + α_1 q^{-1} + ... + α_{nα} q^{-nα}    (2.3.21)

where nα = deg α(q^{-1}) = deg B(q^{-1}) + deg R_1(q^{-1}) = deg B(q^{-1}) + (d - 1).

Now we can write equation (2.3.19) as

A_m [y(t+d) - y_m(t+d)] = θ_0^T φ(t) - B_m r(t)    (2.3.22)

where the coefficients of the system parameter vector defined by equation (2.3.21) are

θ_0^T = [α_0, α_1, ..., α_{nα}, s_0, s_1, ..., s_{ns}]    (2.3.23)

The measurement vector φ(t) is given by

φ(t) = [u(t), u(t-1), ..., u(t-nα), y(t), y(t-1), ..., y(t-ns)]^T    (2.3.24)

If we set the output error to zero, [y(t+d) - y_m(t+d)] = 0, equation (2.3.22)
becomes the following vector recursive form
θ_0^T φ(t) = B_m(q^{-1}) r(t)    (2.3.25)

In the case where the real parameter vector θ_0 is unknown, adaptive control concepts can be
used to estimate the parameter vector:

θ̂(t+d-1)^T φ(t) = B_m(q^{-1}) r(t)    (2.3.26)

where θ̂(t+d-1) is obtained by the parameter estimator derived in the normalized
gradient algorithm discussed in section 2.1. The adaptive parameter estimator is

θ̂(t+d) = θ̂(t+d-1) + μ φ(t) [y(t+d) - y_m(t+d)] / (c + ||φ(t)||^2)    (2.3.27)

where

(i) 0 < μ < 2 is the algorithm gain
(ii) 0 < c < ∞ is a constant used at the designer's discretion
(iii) θ̂(0) = θ̂_0 is the initial condition of the parameter estimate vector

The recursive algorithm given above in equation (2.3.27) will only be globally stable if

Re { 1/A_m(e^{iω}) - μ/2 } > 0    (2.3.28)

for all 0 ≤ ω < 2π. This is called the strictly positive real condition. Condition (2.3.28)
is very restrictive and means that the algorithm will not be stable for all reference
models. We need to design an estimation algorithm such that the restrictive condition is
avoided. If we filter both sides of equation (2.3.22) with 1/A_m(q^{-1}), we derive

y(t+d) - y_m(t+d) = θ_0^T [φ(t) / A_m(q^{-1})] - [B_m(q^{-1}) / A_m(q^{-1})] r(t)    (2.3.29)

Let

ψ(t) = φ(t) / A_m(q^{-1})    (2.3.30)

Rewriting equation (2.3.29) using (2.3.30):

y(t+d) - y_m(t+d) = θ_0^T ψ(t) - [B_m(q^{-1}) / A_m(q^{-1})] r(t)    (2.3.31)

Using the fact that equation (2.3.7) implies

y_m(t+d) = [B_m(q^{-1}) / A_m(q^{-1})] r(t)    (2.3.32)

and substituting equation (2.3.32) into (2.3.31), we have

y(t+d) - y_m(t+d) = θ_0^T ψ(t) - y_m(t+d)    (2.3.33)

And if we set the output error y(t+d) - y_m(t+d) = 0, equation (2.3.33) becomes

θ_0^T ψ(t) = y_m(t+d)    (2.3.34)

The corresponding adaptive model reference controller becomes

θ̂(t+d-1)^T ψ(t) = y_m(t+d)    (2.3.35)

where θ̂(t+d-1) is the estimate obtained by the following estimation algorithm:

θ̂(t+d) = θ̂(t+d-1) + μ ψ(t) [y(t+d) - y_m(t+d)] / (c + ||ψ(t)||^2)    (2.3.36)
Where
(i) 0 < i < 2 Is the algorithm gain
(ii) 0 < c < oo Constant used at designers discretion
The Model Reference Adaptive Controller given in equations (2.3.35) and (2.3.36) is
based on the Normalized Gradient algorithm developed earlier in section 2.1. ψ(t) is
the filtered version of the measurement vector φ(t). The adaptive controller given in
equations (2.3.35) and (2.3.36) is globally stable and does not require the
strictly positive real condition given in (2.3.28). The Recursive Least Squares algorithm
developed in section 2.2 has the corresponding adaptive model reference form shown
below

θ̂(t+d) = θ̂(t+d-1) + P(t) ψ(t) [y(t+d) - y_m(t+d)]     (2.3.37)

where the gain matrix P(t) is defined by

P(t) = P(t-1) - P(t-1) ψ(t) ψᵀ(t) P(t-1) / (1 + ψᵀ(t) P(t-1) ψ(t))     (2.3.38)

with the conditions on P(t) that

P(0) = P₀,  P₀ > 0,  and  θ̂(0) = θ̂₀
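The update law in equations (2.3.35) and (2.3.36) can be exercised numerically. The following is a minimal sketch, not the thesis's Matlab code: it applies the normalized gradient update to the error model y(t+d) - y_m(t+d) = (θ₀ - θ̂)ᵀψ(t) that holds once the certainty-equivalence control (2.3.35) is applied; the "true" parameter vector `theta_true` and the random regressor sequence are hypothetical stand-ins.

```python
import numpy as np

def ng_update(theta, psi, y, ym, mu=1.0, c=1.0):
    """Normalized gradient step, equation (2.3.36):
    theta <- theta + mu * psi * [y(t+d) - y_m(t+d)] / (c + ||psi||^2)."""
    return theta + mu * psi * (y - ym) / (c + psi @ psi)

# Hypothetical demonstration: under the control law (2.3.35) the output
# error reduces to (theta_true - theta)' psi, so the estimate converges.
rng = np.random.default_rng(0)
theta_true = np.array([1.5, -0.7, 0.5])   # assumed plant parameter vector
theta = np.zeros(3)
for _ in range(300):
    psi = rng.standard_normal(3)          # persistently exciting filtered regressor
    err_output = (theta_true - theta) @ psi
    theta = ng_update(theta, psi, err_output, 0.0)
```

Note that μ = 1 satisfies the gain condition 0 < μ < 2, and the normalization by c + ||ψ(t)||² keeps each step bounded regardless of the regressor magnitude.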
2.4 Pole Placement Adaptive Controller
The performance of the system being controlled depends on the placement of the
closed-loop poles. In an adaptive control system it is possible to assign the closed-loop
poles to predefined positions. If we consider a system of the form

A(q⁻¹) y(t) = q⁻ᵈ B(q⁻¹) u(t)     (2.4.1)

where u(t) is the system input, y(t) is the system output, and the polynomials A(q⁻¹)
and B(q⁻¹) are defined as

A(q⁻¹) = 1 + a₁q⁻¹ + ... + a_{nA} q^(-nA)     (2.4.2)

B(q⁻¹) = b₀ + b₁q⁻¹ + ... + b_{nB} q^(-nB),  b₀ ≠ 0     (2.4.3)

it is well known that the control law has the following structure

R(q⁻¹) u(t) = -S(q⁻¹) y(t) + y*(t+d)     (2.4.4)

where d ≥ 1 is the system delay and y*(t) is the command signal. The polynomials
R(q⁻¹) and S(q⁻¹) are given by

R(q⁻¹) = 1 + r₁q⁻¹ + ... + r_{nR} q^(-nR)     (2.4.5)

S(q⁻¹) = s₀ + s₁q⁻¹ + ... + s_{nS} q^(-nS)     (2.4.6)

The locations of the desired poles are specified by the polynomial A*(q⁻¹)

A*(q⁻¹) = 1 + a₁* q⁻¹ + ... + a*_{nA*} q^(-nA*)     (2.4.7)

where n_A* = deg A*(q⁻¹) = 2n - 1

and n = max{deg A(q⁻¹), deg B(q⁻¹) + d}

The polynomials R(q⁻¹) and S(q⁻¹) from the control law in equation (2.4.4) can be
obtained by solving the following Diophantine equation

A(q⁻¹) R(q⁻¹) + q⁻ᵈ B(q⁻¹) S(q⁻¹) = A*(q⁻¹)     (2.4.8)

This polynomial equation has a unique solution only if

deg R(q⁻¹) = deg S(q⁻¹) = n - 1     (2.4.9)
The closed loop system is depicted in Figure 2.4.1 below.

Figure 2.4.1 Pole Placement Controller
If we solve for u(t) in equation (2.4.4) and substitute into (2.4.1) we obtain

y(t) = [q⁻ᵈ B(q⁻¹) / (A(q⁻¹) R(q⁻¹) + q⁻ᵈ B(q⁻¹) S(q⁻¹))] y*(t)     (2.4.10)

Next we substitute the denominator of equation (2.4.10) from (2.4.8) to derive

y(t) = [q⁻ᵈ B(q⁻¹) / A*(q⁻¹)] y*(t)     (2.4.11)
The poles of the closed loop system have the desired locations defined by the
polynomial A*(q⁻¹). In an adaptive control system, A(q⁻¹) and B(q⁻¹) are not always
known. The objective of the designer is to design a controller such that the closed loop
system has the desired poles defined by the polynomial A*(q⁻¹). In this case, since
A(q⁻¹) and B(q⁻¹) are unknown, we need to identify the system parameters, or in other
words the coefficients of the polynomials A(q⁻¹) and B(q⁻¹). The system given by
equation (2.4.1) can be written in the form

y(t) = -a₁y(t-1) - ... - a_{nA} y(t-nA) + b₀u(t-d) + b₁u(t-d-1) + ...
       + b_{nB} u(t-d-nB)     (2.4.12)

Using previously derived control theory we can rewrite (2.4.12) as

y(t) = θ₀ᵀ φ(t-1)     (2.4.15)

where

θ₀ᵀ = [-a₁, -a₂, ..., -a_{nA}, b₀, b₁, ..., b_{nB}]     (2.4.16)

φ(t-1)ᵀ = [y(t-1), ..., y(t-nA), u(t-d), ..., u(t-d-nB)]
The identification algorithm has the following form and is called the output error
prediction algorithm

θ̂(t) = θ̂(t-1) + μ φ(t-1) e(t) / (c + ||φ(t-1)||²)     (2.4.17)

where

(i) 0 < μ < 2 is the algorithm gain
(ii) 0 < c < ∞ is a constant used at the designer's discretion

This algorithm is driven by the output prediction error

e(t) = y(t) - θ̂(t-1)ᵀ φ(t-1)     (2.4.18)

where

ŷ(t) = θ̂(t-1)ᵀ φ(t-1)     (2.4.19)

represents the prediction of the output signal. Since at time t-1

θ̂(t-1)ᵀ = [â₁(t-1), ..., â_{nA}(t-1), b̂₀(t-1), ..., b̂_{nB}(t-1)]     (2.4.21)

we can find estimates of the polynomials A(q⁻¹) and B(q⁻¹), i.e.

Â(t-1, q⁻¹) = 1 + â₁(t-1)q⁻¹ + ... + â_{nA}(t-1) q^(-nA)     (2.4.22)

B̂(t-1, q⁻¹) = b̂₀(t-1) + b̂₁(t-1)q⁻¹ + ... + b̂_{nB}(t-1) q^(-nB),  b̂₀ ≠ 0     (2.4.23)
If we place these estimates into equation (2.4.8) we obtain

Â(t-1, q⁻¹) R̂(t-1, q⁻¹) + q⁻ᵈ B̂(t-1, q⁻¹) Ŝ(t-1, q⁻¹) = A*(q⁻¹)     (2.4.24)

from which we can obtain estimates of the polynomials R̂(t-1, q⁻¹) and Ŝ(t-1, q⁻¹).
The adaptive Pole Placement controller can now be derived in the form

R̂(t-1, q⁻¹) u(t-d) = -Ŝ(t-1, q⁻¹) y(t-d) + y*(t)     (2.4.25)

When

lim Â(t-1, q⁻¹) = A(q⁻¹)  and  lim B̂(t-1, q⁻¹) = B(q⁻¹)  as t → ∞     (2.4.26)

then we will have

lim R̂(t-1, q⁻¹) = R(q⁻¹)  and  lim Ŝ(t-1, q⁻¹) = S(q⁻¹)     (2.4.27)

The poles of the closed loop system will be at the desired locations defined by the
polynomial A*(q⁻¹).
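The Diophantine design step in equation (2.4.8) amounts to a linear system in the unknown coefficients of R and S. The following is a minimal sketch of that computation; the plant and desired polynomials chosen below are illustrative values, not taken from the thesis.

```python
import numpy as np

def solve_diophantine(A, B, d, Astar, nR, nS):
    """Solve A(q^-1)R(q^-1) + q^-d B(q^-1)S(q^-1) = A*(q^-1), equation
    (2.4.8), for a monic R of degree nR and an S of degree nS.
    Polynomials are coefficient lists in ascending powers of q^-1."""
    N = max(len(A) + nR, len(B) + d + nS, len(Astar))
    Astar = np.pad(np.asarray(Astar, float), (0, N - len(Astar)))

    def delay_matrix(P, ncols, shift=0):
        # column j holds the coefficients of q^-(j+shift) * P(q^-1)
        M = np.zeros((N, ncols))
        for j in range(ncols):
            for i, p in enumerate(P):
                if i + j + shift < N:
                    M[i + j + shift, j] = p
        return M

    MR = delay_matrix(A, nR + 1)            # A * R contributions
    MS = delay_matrix(B, nS + 1, shift=d)   # q^-d * B * S contributions
    M = np.hstack([MR[:, 1:], MS])          # r0 = 1 is fixed (R is monic)
    x = np.linalg.lstsq(M, Astar - MR[:, 0], rcond=None)[0]
    return np.concatenate(([1.0], x[:nR])), x[nR:]   # R, S

# illustrative second order plant with delay d = 1
A = [1.0, -1.67, 0.67]
B = [0.2, 0.18]
Astar = [1.0, -1.1, 0.3, 0.0]     # example desired closed loop polynomial
R, S = solve_diophantine(A, B, d=1, Astar=Astar, nR=1, nS=1)
```

Multiplying the solution back out, A·R + q⁻¹B·S reproduces A*, which is exactly the check that the pole placement design requires. A unique solution requires A and B to be coprime.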
3. Algorithm Design Procedure for Disturbances
The Model Reference Adaptive Controller in section 2.3 was developed from

B(q⁻¹) T(q⁻¹) / (A(q⁻¹) R(q⁻¹) + q⁻ᵈ B(q⁻¹) S(q⁻¹)) = B_m(q⁻¹) / A_m(q⁻¹)     (3.1)

without taking external disturbances into account. When external disturbances exist, it
is not sufficient to specify B_m/A_m alone. The output feedback in the system will pick up
additional dynamic disturbances that are not acted upon by the command signal. The
controller designer should strive to identify the physical properties of the disturbances.
We need to specify the observer dynamics A₀(q⁻¹) which will take the disturbances
into account. The observer polynomial A₀(q⁻¹) is designed to minimize the
disturbances to the controlled system.
The designer needs to select the polynomials R, S, and T in equation (3.1) so that
equality is achieved. Let us choose

T = B_m A₀     (3.2)

where A₀(q⁻¹) is the observer polynomial specified by the designer to take the
disturbances in the controlled system into account. We can determine the polynomials
R and S as a solution to the Diophantine equation

A R₁ + q⁻ᵈ S = A₀ A_m     (3.3)

The polynomials R₁ and S have the form

R₁(q⁻¹) = 1 + r₁q⁻¹ + ... + r_{d-1} q^(-(d-1))     (3.4)

S(q⁻¹) = s₀ + s₁q⁻¹ + ... + s_{nS} q^(-nS)     (3.5)

where

n_S = max{n - 1, deg A_m + deg A₀ - d}     (3.6)

n = deg A(q⁻¹)     (3.7)

and the polynomial R = B R₁     (3.8)
All of the controller polynomials have now been determined from equations (3.2), (3.3),
and (3.8). We can verify that the polynomials R, S, and T determined in equations
(3.2), (3.3), and (3.8) satisfy equation (3.1). Substituting T from equation (3.2) and R
from equation (3.8), we are able to derive

B T / (A R + q⁻ᵈ B S) = B B_m A₀ / (A B R₁ + q⁻ᵈ B S) = B_m A₀ / (A R₁ + q⁻ᵈ S)     (3.9)

We know from equation (3.3) that A R₁ + q⁻ᵈ S = A₀ A_m. If we substitute into (3.9) we
can derive

B T / (A R + q⁻ᵈ B S) = B_m A₀ / (A_m A₀) = B_m / A_m     (3.10)

We know from equation (3.10) that the closed loop system will act like the specified
reference model. The observer polynomial A₀(q⁻¹) must always be a stable
polynomial with all of its zeros located inside the unit circle in the z plane.
4. Design Procedure When System is
Nonminimum Phase
If the system being controlled is nonminimum phase, then the polynomial B(q⁻¹) has
unstable zeros located outside of the unit circle in the z plane. The system is
given as

A(q⁻¹) y(t) = q⁻ᵈ B(q⁻¹) u(t)     (4.1)

and the reference model is given by

A_m(q⁻¹) y_m(t) = q⁻ᵈ B_m(q⁻¹) r(t)     (4.2)

The reference model specifies the desired response to the command signal r(t). The
objective is to design a controller

R(q⁻¹) u(t) = -S(q⁻¹) y(t) + T(q⁻¹) r(t)     (4.3)

such that the system given in equation (4.1) behaves like the reference model given in
equation (4.2), or in other words, such that the error in the system goes to zero when the
controller is applied:

y(t) - y_m(t) = 0     (4.4)

In order to make the design possible we should specify the reference model so that

B_m = B⁻ B_m′     (4.5)
where B⁻ is the polynomial containing the unstable zeros of the system. We can write
the polynomial B(q⁻¹) in the form

B(q⁻¹) = B⁺(q⁻¹) B⁻(q⁻¹)     (4.6)

where B⁺ has all stable zeros. Our design is based on

B T / (A R + q⁻ᵈ B S) = B_m / A_m     (4.7)

which means that the closed loop system transfer function obtained from equation (4.1)
is equal to the transfer function of the reference model. If we substitute equations (4.5)
and (4.6) into equation (4.7) we derive

B⁺ B⁻ T / (A R + q⁻ᵈ B S) = B⁻ B_m′ / A_m     (4.8)

This equation will be satisfied if we choose the controller polynomials R, S, and T as
follows

T = B_m′  and  R = B⁺ R₁     (4.9)

where R₁ is obtained from the polynomial equation

A R₁ + q⁻ᵈ B⁻ S = A_m     (4.10)

We can also use equation (4.10) to calculate the polynomial S. The polynomials R₁ and
S have the following form

R₁(q⁻¹) = 1 + r₁q⁻¹ + ... + r_{nR} q^(-nR)     (4.11)

S(q⁻¹) = s₀ + s₁q⁻¹ + ... + s_{nS} q^(-nS)     (4.12)

where

n_R = deg B⁻ + d - 1     (4.13)

n_S = max{n - 1, deg A_m + deg A₀ - d}     (4.14)

Thus from equation (4.10) we can derive R₁ and S, and we can derive T and R from
equation (4.9).
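The factorization B = B⁺B⁻ in equation (4.6) can be carried out numerically by sorting the zeros of B by magnitude. A small sketch follows; the example polynomial is hypothetical, chosen to have one stable and one unstable zero.

```python
import numpy as np

def factor_B(b):
    """Split B(q^-1) = b0 + b1 q^-1 + ... into B+ (zeros strictly inside
    the unit circle) and B- (zeros on or outside it), equation (4.6).
    Coefficients are ascending in q^-1; the gain b0 is carried in B-."""
    zeros = np.roots(b)                         # zeros of b0 z^m + ... + bm
    bplus = np.real(np.poly(zeros[np.abs(zeros) < 1.0]))
    bminus = b[0] * np.real(np.poly(zeros[np.abs(zeros) >= 1.0]))
    return bplus, bminus

# example: zeros at z = 0.5 (stable) and z = 2.0 (unstable)
b = np.array([1.0, -2.5, 1.0])
bplus, bminus = factor_B(b)
```

By construction the product B⁺·B⁻ recovers B, and only B⁺ is safe to cancel in the controller, which is exactly why R = B⁺R₁ in equation (4.9).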
5. Practical Issues and Robust Control Implementation
The motivation behind the development of adaptive control theory is to take into
account the uncertainty in the physical parameters of the system being controlled.
During the 1970s it was generally assumed that physical systems could be described
precisely by linear system models. These models assumed that the system parameters
were unknown but that an upper bound on the system order was known. As these control
systems were implemented, it was noticed that even if the controller had been designed
to take into account the presence of external disturbances or small modeling errors, the
controller could become unstable. The result of this has been the development of robust
adaptive control theory.
The methodology for robust control presented in this thesis develops a unified
framework for systematic robust control. Global stability analysis drives the design of the
adaptive controller in the presence of unmodeled dynamics and external disturbances.
The use of robust adaptive control systems makes it possible for the adaptive controller
to stabilize the controlled system by generating the correct parameters in the presence of
unmodeled disturbances or noise. The general form of a robust adaptive controller is
depicted in Figure 5.1 on the following page.
Figure 5.1 Adaptive Robust Controller
5.1 Computational Delay

As seen above in Figure 5.1, the filters and the digital-to-analog / analog-to-digital
converters will introduce computational delay into the controlled system between the
system input and output. This delay is introduced into the system in one of two ways
depending on how the control system is implemented. In the first, the output of the
system is measured at some time t_k, and the measurement is then used to compute the
control signal, which is applied at time t_{k+1}. The second way is the same as the first
except that the control signal is applied as soon as it is computed, as
shown in Figure 5.1.1 on the following page.
Figure 5.1.1 Computational Delay
Both implementations have disadvantages. The first, as shown in Figure 5.1.1 (a), is
that the control signal will be delayed unnecessarily long; for the second, as shown in
Figure 5.1.1 (b), the time delay may change as the load on the computer changes.
For both it is necessary to take the delay into account when designing the
control system. When the second way is utilized, it is advantageous to make the delay as
small as possible by performing as few operations as possible between the digital-to-
analog / analog-to-digital conversions. Assuming that the regulator has the form

u(t) + r₁u(t-1) + ... + r_k u(t-k)
    = t₀u_c(t) + ... + t_m u_c(t-m) - s₀y(t) - ... - s_l y(t-l)     (5.1.1)

we can rewrite equation (5.1.1) as

u(t) = t₀u_c(t) - s₀y(t) + u′(t-1)     (5.1.2)

where

u′(t-1) = t₁u_c(t-1) + ... + t_m u_c(t-m) - s₁y(t-1) - ...
          - s_l y(t-l) - r₁u(t-1) - ... - r_k u(t-k)     (5.1.3)

The signal u′(t-1) contains information that is available at time t-1. In order to
make the delay as small as possible, the controller should perform the analog-to-digital
conversion of y(t) and u_c(t), compute equation (5.1.2), perform the digital-to-analog
conversion, and then compute equation (5.1.3). This will significantly reduce the delay
when implemented in the controller. Apart from two multiplications and two additions,
only the test of u(t) against its limits remains before the output. The computational delay
appears in the controlled system the same way as a time delay in the system dynamics,
and it is important to take it into account when designing the controller. The general rule
of thumb is that the delay can be ignored if it is less than 10% of the sampling period. In
high performance systems it should always be accounted for. Several iterations of the
design process may have to be performed in order to tune the controlled system, as the
computational delay will not be known until the control system is implemented.
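The split of the control computation into equation (5.1.2), executed between the A-D and D-A conversions, and equation (5.1.3), precomputed before the next sample, can be sketched as follows. The coefficient values and signal histories below are arbitrary illustrations.

```python
def precompute_u_prime(t_c, s_c, r_c, uc_hist, y_hist, u_hist):
    """Equation (5.1.3): the part of the control law that only uses data
    already available at time t-1. Histories are ordered so that
    xx_hist[-1] = xx(t-1), xx_hist[-2] = xx(t-2), and so on."""
    up = sum(t_c[i] * uc_hist[-i] for i in range(1, len(t_c)))
    up -= sum(s_c[i] * y_hist[-i] for i in range(1, len(s_c)))
    up -= sum(r_c[i] * u_hist[-i] for i in range(1, len(r_c)))
    return up

def control_output(t0, s0, uc_now, y_now, u_prime):
    """Equation (5.1.2): only two multiplications and two additions remain
    between sampling and actuation."""
    return t0 * uc_now - s0 * y_now + u_prime

# arbitrary illustrative coefficients (r0 = 1 is implicit for u(t))
r_c, s_c, t_c = [1.0, 0.3], [0.5, -0.2], [1.0, 0.4]
u_prime = precompute_u_prime(t_c, s_c, r_c,
                             uc_hist=[2.0], y_hist=[1.0], u_hist=[0.7])
u_t = control_output(t_c[0], s_c[0], uc_now=1.5, y_now=0.25, u_prime=u_prime)
```

The result is identical to evaluating equation (5.1.1) in one pass; the point is only that the heavy sum is moved outside the latency-critical window.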
5.2 Sampling, Prefiltering, Postfiltering, and Data Filters

The sampling rate used in an adaptive controller influences many properties of the
system being controlled, like the ability of the controlled system to follow the command
signal, rejection of load disturbances and measurement noise, and sensitivity to
unmodeled dynamics in the system. The general rule for deterministic systems is to let
the sampling interval h be chosen such that

ω₀ h ≈ 0.2 to 0.6     (5.2.1)

where ω₀ is the natural frequency of the dominating poles of the closed loop system.
This is equivalent to roughly 10 to 30 samples per undamped natural period. The
sampling frequency is ω_s = 2π/h. In digital control systems it is necessary that signals
be filtered before they are sampled, and all components with frequencies above the
Nyquist frequency, shown in equation (5.2.2), should be eliminated:

ω_N = ω_s/2 = π/h     (5.2.2)
If this is not done, signal components with frequencies ω > ω_N will appear as low
frequency components with the frequency

ω_a = |((ω + ω_N) mod ω_s) - ω_N|     (5.2.3)

The appearance of these frequencies is called aliasing, and the filters introduced before
the sampler are called prefilters or antialiasing filters. There are several choices for
antialiasing filters: second or fourth order Butterworth, Bessel, and ITAE (integral
time absolute error) filters. The antialiasing filter can consist of one or several of the
above filters and have the form
G_f(s) = ω² / (s² + 2ζωs + ω²)     (5.2.4)
The Bessel filter has the property that its phase curve is approximately linear, which
implies that the waveform is also approximately invariant. When the antialiasing filter is
implemented with a Bessel filter, the time delay can be approximated. Assume the
bandwidth of the filter is chosen so that

|G_aa(iω_N)| = β     (5.2.5)

where G_aa(s) is the transfer function of the filter and ω_N = π/h is the Nyquist
frequency. The parameter β is the attenuation of the filter at the Nyquist frequency.
Table 5.2.1 gives the approximate time delay T_d as a function of β and also gives ω_N
as a function of the filter bandwidth ω_B.

β      ω_N/ω_B    T_d/h
0.05   3.1        2.1
0.1    2.5        1.7
0.2    2.0        1.3
0.5    1.4        0.9
0.7    1.0        0.7

Table 5.2.1 Properties of Fourth Order Bessel Filters
It is easily seen that the relative delay increases with attenuation; for reasonable
attenuation the delay is more than one sampling period. It is obvious from the above
that the dynamics of the antialiasing filter must be taken into account during the design
process for the robust adaptive controller. If the antialiasing filter is implemented with a
Bessel filter, it is sufficient to approximate its dynamics by a time delay, since the
additional dynamics cause no additional problems for an adaptive controller, because all
parameters are estimated. The time delay simply adds one more parameter to the system
model. In control systems where the sampling rate has to be changed, the usual
implementation is dual rate sampling. A high fixed sampling rate is used in conjunction
with a fixed analog filter. Then a digital filter can be used to filter the signal at a slower
rate when needed. This implies that fewer parameters have to be estimated.
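The Nyquist and alias-frequency relations in equations (5.2.2) and (5.2.3) are easy to check numerically; the sketch below assumes the folding formula as reconstructed in (5.2.3).

```python
import math

def nyquist_freq(h):
    """Equation (5.2.2): omega_N = pi/h (half the sampling frequency)."""
    return math.pi / h

def alias_freq(omega, h):
    """Equation (5.2.3): the low frequency at which a component of
    frequency omega appears after sampling with period h."""
    w_s = 2.0 * math.pi / h
    w_n = nyquist_freq(h)
    return abs((omega + w_n) % w_s - w_n)

# a 5 rad/s component sampled with h = 1 s lies above the Nyquist
# frequency pi rad/s and therefore folds down below it
```

A frequency already below ω_N passes through the formula unchanged, which is the expected behavior of a folding map.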
The output of a digital-to-analog converter is a piecewise constant signal that changes
stepwise at the sampling instants. This is adequate for most systems. However, some
systems have poorly damped oscillatory modes, and the steps may excite these modes.
In such cases it is advantageous to use a filter that smoothes the signal from the D-A
converter. This filter is called a postsampling filter. These filters may be simple
continuous time filters with a response time that is short in comparison with the
sampling time; another solution is to use dual rate sampling. This leads us to the
conclusion that adaptive control systems should be designed so that the output is
piecewise linear between the sampling instants. Then, fast sampling can be used to
generate an approximation of the signal, possibly followed by an analog postsampling
filter.
If we assume that the process being controlled is given by the discrete time model

y(t) = G₀(q) u(t) + v(t)     (5.2.6)

then the antialiasing filter contributes as part of the process G₀(q). The noise or
disturbance v(t) can be the sum of deterministic, piecewise deterministic, and
stochastic disturbances. The signal has low frequency and high frequency components.
In stochastic control problems it is important that the controller be tuned to a particular
disturbance spectrum, for which an estimate of the disturbance characteristics is
generated. In a deterministic environment we are interested mainly in the term
G₀(q) u(t) and not particularly concerned with the term v(t). The presence of the term
v(t) will create difficulties in the parameter estimation. We can reduce the effect of v(t)
if we filter the input to the estimator with data filters having the transfer function
H_f(q). If we apply this filter to equation (5.2.6), then

y_f(t) = G₀(q) u_f(t) + v_f(t)     (5.2.7)

where

y_f(t) = H_f(q) y(t),   u_f(t) = H_f(q) u(t),   and   v_f(t) = H_f(q) v(t)

The proper choice of a data filter will make the relative influence of the term v(t)
smaller in equation (5.2.7) than in equation (5.2.6). The choice of the filter should also
emphasize the frequency ranges that are of primary importance in the control design.
The disturbance v(t) typically has significant low frequency components that should be
reduced. Very high frequencies should also be attenuated. The reason for this is that if
the model A(q) y_f(t) = B(q) u_f(t) is fitted by a least squares model, it is desirable that
A(q) v_f(t) be white noise. Since filtering with A implies that high frequencies are
amplified, it means that v_f(t) should not contain high frequencies. Therefore, the data
filter will have band pass characteristics as shown in Figure 5.2.1 below.

Figure 5.2.1 Amplitude Curve for the Data Filter H_f(q)

The center frequency is typically around the crossover frequency of the system being
controlled. A typical data filter is given by

H_f(q) = (q - 1) / (q - a)     (5.2.8)
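Assuming a data filter of the form H_f(q) = (q - 1)/(q - a) as in equation (5.2.8), it can be applied as a one-line recursion; the pole location a = 0.9 below is an arbitrary illustration.

```python
import numpy as np

def apply_data_filter(x, a=0.9):
    """x_f(t) = a*x_f(t-1) + x(t) - x(t-1): the zero at q = 1 removes the
    low frequency (DC) content that would otherwise bias the estimator."""
    xf = np.zeros(len(x))
    prev_in, prev_out = 0.0, 0.0
    for t, xt in enumerate(x):
        prev_out = a * prev_out + xt - prev_in
        prev_in = xt
        xf[t] = prev_out
    return xf

# a constant offset in the data is driven to zero by the filter
xf = apply_data_filter(np.ones(300), a=0.9)
```

Feeding y_f and u_f rather than y and u to the estimator is what reduces the relative influence of the low frequency part of v(t) in equation (5.2.7).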
5.3 Parameter Tracking
The ability of a robust adaptive controller to track variations in process dynamics is a key
property. In order to accomplish this it is necessary to discount old data. When the
process parameters are constant, it is desirable to have the parameter estimates based on
many measurements to minimize the effects of disturbances. When the process
parameters are changing, it can be misleading to base the estimates on many
measurements. The use of the exponential forgetting factor λ gives us the means to
discard stale data. The least squares algorithm with exponential forgetting is given as

θ̂(t+1) = θ̂(t) + P(t) φ(t) ε(t)     (5.3.1)

where

P(t+1) = [P(t) - P(t) φ(t) φᵀ(t) P(t) / (λ + φᵀ(t) P(t) φ(t))] / λ     (5.3.2)

and where the sampling period h is chosen as the time unit. The forgetting factor is
given by

λ = e^(-h/T_f)     (5.3.3)

where T_f is the time constant for λ. Table 5.3.1 on the following page gives different
values of λ for different values of T_f/h.
T_f/h    λ
1        0.37
2        0.61
5        0.82
10       0.90
20       0.95
50       0.98
100      0.99

Table 5.3.1 Relation Between T_f/h and λ
It should be noted that it is possible to generalize the method of the forgetting factor
and have different forgetting factors for different parameters. This requires information
about the nature of the changes in the different parameters.
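Equations (5.3.1) through (5.3.3) and Table 5.3.1 can be exercised with a short sketch. The two-parameter model below is hypothetical, and the gain form used is a common algebraically equivalent arrangement of the forgetting-factor update.

```python
import numpy as np

def rls_forget(theta, P, phi, y, lam):
    """One least squares step with exponential forgetting; a common gain
    form equivalent to equations (5.3.1)-(5.3.2)."""
    denom = lam + phi @ P @ phi
    theta = theta + P @ phi * (y - phi @ theta) / denom
    P = (P - np.outer(P @ phi, phi @ P) / denom) / lam
    return theta, P

rng = np.random.default_rng(1)
theta_true = np.array([1.0, -0.5])        # assumed constant parameters
theta, P = np.zeros(2), 100.0 * np.eye(2)
for _ in range(100):
    phi = rng.standard_normal(2)          # persistently exciting regressor
    theta, P = rls_forget(theta, P, phi, phi @ theta_true, lam=0.98)

# Table 5.3.1 check: lambda = exp(-h/Tf), e.g. Tf/h = 10 gives about 0.90
```

With constant parameters and λ = 0.98 the estimate settles on θ₀; smaller λ would track changes faster at the price of a noisier estimate, which is exactly the trade-off the section describes.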
5.4 Covariance Resetting

In some controlled systems the parameters are constant over long periods of time and
then change abruptly. In these cases the forgetting factor is less than suitable. It is more
appropriate to reset the covariance matrix when the changes occur. When large changes
occur, reducing λ to a very small value, on the order of λ = 0.0001, can reset the
covariance matrix when the parameters change. This is directly related to the excitation
of the controlled system by the command signal. Good excitation is obtained only when
the command signal continues to change.
5.5 Estimator Windup and Methods to Combat Windup

As mentioned in the previous section, exponential forgetting only works well when the
controlled system is properly excited all of the time. If the excitation is poor there are
problems with the controller. If there is no excitation in the system at all, i.e. φ = 0, the
equations for the estimator become

θ̂(t+1) = θ̂(t)
P(t+1) = P(t)/λ     (5.5.1)

It is obvious from the equations in (5.5.1) that the estimate θ̂ is unstable with all
eigenvalues equal to 1, and the equation for the P matrix is unstable with all eigenvalues
equal to 1/λ. The estimates from these equations will always remain constant, and the P
matrix will grow exponentially. The result of this is that whenever φ(t) becomes
different from zero, the estimates will change drastically.

The phenomenon discussed above is called estimator windup. A similar situation occurs
when the regression vector is different from zero, but restricted to a subspace. When the
vector is constant, we obtain information only about the component of the parameter
vector that is parallel to the regression vector. That component can be estimated reliably
with exponential forgetting. The projection of the P matrix in this direction converges to
1 - λ, and the orthogonal part of the P matrix goes to infinity as λ⁻ᵗ. Estimator
windup is thus obtained by exponential forgetting combined with poor excitation.
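The windup mechanism in equation (5.5.1) can be demonstrated directly: with φ = 0 the covariance update degenerates to P(t+1) = P(t)/λ and P grows geometrically. A minimal sketch:

```python
import numpy as np

lam = 0.95
P = np.eye(2)
for _ in range(50):
    phi = np.zeros(2)                        # no excitation at all
    denom = lam + phi @ P @ phi              # reduces to lam
    P = (P - np.outer(P @ phi, phi @ P) / denom) / lam   # reduces to P/lam

growth = P[0, 0]                             # (1/lam)**50, roughly 13
```

After only 50 unexcited samples the gain matrix has grown by an order of magnitude, so the first nonzero regressor would move the estimates violently; this is the motivation for the remedies in sections 5.5.1 through 5.5.4.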
5.5.1 Conditional Updating

The above leads us in the direction of designing robust controllers with good excitation,
or the use of algorithms that minimize estimator windup. The use of conditional
updating is one way to combat estimator windup. The estimates and the covariance
matrix are only updated when there is excitation present in the controlled system. The
algorithms obtained are called algorithms with conditional updating or dead zones.
Correct detection of excitation should be based on calculation of covariances. Simpler
conditions are often used in practice. Common tests are based on the magnitudes of the
variations in the process inputs and outputs or other signals such as ε and φ. The
selection of conditions for updating is critical: if the criterion is too stringent, the
estimates will be poor because updating is done too infrequently. If the criterion is too
liberal we get covariance windup.
5.5.2 Constant Trace Algorithms

Constant trace algorithms are an additional way to keep the P matrix bounded, by
scaling the matrix at each iteration. A popular method is to scale the matrix in such a
way that the trace of the matrix is constant. An additional refinement is to add a small
unit matrix, which gives the so-called regularized constant trace algorithm:

θ̂(t) = θ̂(t-1) + K(t) (y(t) - φ(t)ᵀ θ̂(t-1))

K(t) = P(t-1) φ(t) / (1 + φ(t)ᵀ P(t-1) φ(t))

P̄(t) = P(t-1) - P(t-1) φ(t) φ(t)ᵀ P(t-1) / (1 + φ(t)ᵀ P(t-1) φ(t))

P(t) = c₁ P̄(t)/tr(P̄(t)) + c₂ I     (5.5.2.1)

where c₁ > 0 and c₂ > 0. Typical values for the parameters are

c₂ = 10⁻⁴ c₁     (5.5.2.2)

The constant trace algorithm can also be combined with conditional updating.
5.5.3 Directional Forgetting

Directional forgetting is another means of accomplishing this. Exponential forgetting is
only done in the direction of the regression vector. The P matrix with exponential
forgetting is given by

P⁻¹(t+1) = λ P⁻¹(t) + φ(t) φ(t)ᵀ     (5.5.3.1)

In directional forgetting we start with the equation

P⁻¹(t+1) = P⁻¹(t) + φ(t) φ(t)ᵀ     (5.5.3.2)

The matrix P⁻¹(t) is decomposed as

P⁻¹(t) = P̄⁻¹(t) + γ(t) φ(t) φ(t)ᵀ     (5.5.3.3)

where P̄⁻¹(t) φ(t) = 0. This gives

γ(t) = φ(t)ᵀ P⁻¹(t) φ(t) / (φ(t)ᵀ φ(t))²     (5.5.3.4)

Exponential forgetting is then applied only to the second term of equation (5.5.3.3),
which corresponds to the direction where new information is obtained. This gives

P⁻¹(t+1) = P̄⁻¹(t) + λ γ(t) φ(t) φ(t)ᵀ + φ(t) φ(t)ᵀ     (5.5.3.5)

which can be written as

P⁻¹(t+1) = P⁻¹(t) + [1 + (λ - 1) φ(t)ᵀ P⁻¹(t) φ(t) / (φ(t)ᵀ φ(t))²] φ(t) φ(t)ᵀ     (5.5.3.6)

Several variations of this algorithm exist. The forgetting factor λ is sometimes made a
function of the data, and another method has the property that the P matrix is driven
toward a matrix proportional to the identity matrix when there is poor excitation.
5.5.4 Leakage

Leakage is another way to avoid estimator windup. In continuous time, the estimator is
modified by adding the term α(θ₀ - θ̂). This means that the parameters will converge to
θ₀ when no useful information is obtained, that is, when ε = 0. A similar modification
can also be made to discrete time estimators. When the least squares algorithm is used,
it is also common to add a similar term to the P equation to drive it toward a specified
matrix.
5.6 Robust Estimation

Least squares based algorithms tend to operate optimally if the disturbances are white
noise. However, in practice, least squares based adaptive controllers have drawbacks
because the assumptions that the controllers are based on are violated. A single large
error will have a large influence on the estimates because the error is squared in the
criterion. This is a direct consequence of the Gaussian assumption that the probability of
large errors is very small. Control algorithms with very different properties are obtained
if it is assumed that the probability of large errors is not negligible. These algorithms
will have equations such as

dθ̂(t)/dt = P(t) φ(t) f(ε(t))     (5.6.1)

where ε(t) is the prediction error and the function f(ε) is linear for small ε but increases
more slowly for large ε, such as

f(ε) = ε / (1 + a|ε|)     (5.6.2)

This has the effect of decreasing the influence of large errors.
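The error-shaping function in equation (5.6.2) is easy to visualize numerically; a small sketch:

```python
def f_robust(eps, a=1.0):
    """Equation (5.6.2): f(eps) = eps / (1 + a*|eps|).
    Approximately linear near zero, bounded in magnitude by 1/a."""
    return eps / (1.0 + a * abs(eps))
```

For a = 1, a small error of 0.01 passes through almost unchanged, while an outlier of 100 is limited to just under 1, so a single large disturbance cannot dominate the estimate update.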
In practical applications it can be shown that controllers need integral action to ensure
that calibration errors and load disturbances do not give steady state errors. This type of
situation can occur quite frequently. In order to check if a particular controller has this
ability, we investigate possible stationary solutions. Consider the following self-tuning
system based on least squares estimation and minimum variance

y(t+d) = R*(q⁻¹) u(t) + S*(q⁻¹) y(t) + ε(t+d)     (5.6.3)

and the regulator is

R*(q⁻¹) u(t) = -S*(q⁻¹) y(t)     (5.6.4)

The conditions for a stationary solution are

r̂_y(τ) = lim_{N→∞} (1/N) Σ_{k=1}^{N} y(k+τ) y(k) = 0,   τ = d, ..., d+l

r̂_yu(τ) = lim_{N→∞} (1/N) Σ_{k=1}^{N} y(k+τ) u(k) = 0,   τ = d, ..., d+k     (5.6.5)

where k and l are the degrees of the R* and S* polynomials, respectively. These
conditions are not satisfied unless the mean value of y is zero. When an offset is present
the parameter estimates will get values such that R*(1) = 0; that is, there is an integrator
in the controller.

The sections discussed above in Chapter 5 lead us to the following conclusion: the use
of antialiasing filters, post filters, data filters, forgetting factors, methods to combat
windup, and integral action furnishes the means to design robust adaptive controllers
that adapt well to noise and disturbances.
6. Simulations
The first simulation given in this chapter illustrates plant parameter estimation and
adaptive control. The examples used here for plant control utilize a servomotor.
The continuous time transfer function of the servomotor from voltage input to shaft
position output is given by
G(s) = K / (s(τs + 1))     (6.1)

where K is the plant gain and τ is the time constant. The servomotor parameters are

K = 30,   τ = 0.1 sec     (6.2)

The discrete time transfer function is

H(z) = K_d (z + β) / ((z - 1)(z - a))     (6.3)

The discrete plant gain is

K_d = K(τa - τ + T)     (6.4)

The constants are given by

a = e^(-T/τ)

β = (τ - τa - Ta) / (τa - τ + T)     (6.5)
where T is the sampling period, here chosen as 40 msec. The discrete time plant
parameters are:

a = e^(-0.04/0.1) = 0.6703

β = (0.1 - 0.1·0.6703 - 0.04·0.6703) / (0.1·0.6703 - 0.1 + 0.04) = 0.8753

K_d = 30(0.1·0.6703 - 0.1 + 0.04) = 0.211

With these parameters the discrete time transfer function becomes

H(z) = 0.211(z + 0.8753) / ((z - 1)(z - 0.6703))     (6.6)

From this transfer function the discrete time model obtained is

y(t) = H(q) u(t)     (6.7)

y(t+1) - 1.67 y(t) + 0.67 y(t-1) = 0.2 u(t) + 0.18 u(t-1)     (6.8)
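The numerical values leading to equation (6.6) can be reproduced directly from equations (6.4) and (6.5):

```python
import math

K, tau, T = 30.0, 0.1, 0.04        # plant gain, time constant, sampling period

a = math.exp(-T / tau)                                # equation (6.5)
beta = (tau - tau * a - T * a) / (tau * a - tau + T)  # equation (6.5)
Kd = K * (tau * a - tau + T)                          # equation (6.4)

# rounds to a = 0.6703, beta = 0.8753, Kd = 0.211 as in the text
```

Rounding K_d(z + β) to 0.2z + 0.18 gives the coefficients of the difference equation (6.8).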
6.1 Simulation using Least Mean Square Algorithm

The plant model previously derived is

y(t+1) - 1.67 y(t) + 0.67 y(t-1) = 0.2 u(t) + 0.18 u(t-1)     (6.1.1)

The least squares algorithm is

θ̂(t+1) = θ̂(t) + P(t) φ(t) ε(t)     (6.1.2)

where

P(t+1) = [P(t) - P(t) φ(t) φᵀ(t) P(t) / (λ + φᵀ(t) P(t) φ(t))] / λ     (6.1.3)

and where λ gives exponential data weighting. When the parameter vector θ₀ has
changed, the most recent data collected in the measurement vector φ(t) is more
informative than past data. The use of exponential data weighting discards old data and
gives higher weight to the most recent data in the matrix P(t). λ is defined as

λ(t) = λ₀ λ(t-1) + (1 - λ₀),   0 < λ(t) < 1     (6.1.4)

The reference model output is given as

y_m(t) = 15 sin(2πt/30)     (6.1.5)

The simulations were performed with Matlab version 5.1. The initial condition selected
for θ̂ in the simulations is

θ̂(0) = [1 1 1 1]ᵀ     (6.1.6)
Figure 6.1.1 Least Square λ = 0.98, No Noise, and No Variance
(panels: Reference Signal, System Output, Tracking Error, Theta Vectors)
Figure 6.1.2 Least Square λ = 0.20, No Noise, and No Variance
Figure 6.1.3 Least Square λ = 0.10, No Noise, and No Variance
Figure 6.1.4 Least Square λ = 0.09, No Noise, and No Variance
Figure 6.1.5 Least Square λ = 0.089, No Noise, and No Variance
Figure 6.1.6 Least Square λ = 0.087, No Noise, and No Variance
Figure 6.1.7 Least Square λ = 0.98, Noise = 2, and No Variance
Figure 6.1.8 Least Square λ = 0.50, Noise = 2, and No Variance
Figure 6.1.9 Least Square λ = 0.20, Noise = 2, and No Variance
Figure 6.1.10 Least Square λ = 0.083, Noise = 2, and No Variance
Figure 6.1.11 Least Square λ = 0.98, No Noise, and With Variance = 2
Figure 6.1.12 Least Square λ = 0.50, No Noise, and With Variance = 2
Figure 6.1.13 Least Square λ = 0.30, No Noise, and With Variance = 2
Figure 6.1.14 Least Square λ = 0.08, No Noise, and With Variance = 2
Figure 6.1.15 Least Square λ = 0.07, No Noise, and With Variance = 2
Figure 6.1.16 Least Square λ = 0.98, Noise = 2, and With Variance = 2
Figure 6.1.17 Least Square λ = 0.50, Noise = 2, and With Variance = 2
Figure 6.1.18 Least Square λ = 0.09, Noise = 2, and With Variance = 2
Figure 6.1.19 Least Square λ = 0.0044, Noise = 2, and With Variance = 2
6.2 Results of Least Mean Square Simulations
The simulations shown in Figures 6.1.1 through 6.1.19 depict the performance of the
least mean square algorithm in four different scenarios:
(i) Simulation with no noise and no variance
(ii) Simulation with noise and no variance
(iii) Simulation with no noise and with variance
(iv) Simulation with noise and with variance
During each set of simulations, λ was varied from 0.98 down to a value where the
algorithm completely broke down. The simulations show the degradation in the
performance of the algorithm as λ decreases in value. Each set includes a figure that
shows the first occurrence of bursting and a figure that shows the total breakdown of the
algorithm as the λ value decreases. The algorithm performed best in the idealized set of
simulations with no noise and no variance, with the first bursting observed at λ = 0.089
and total breakdown of the algorithm at λ = 0.087. This illustrates that the
parameter estimates being generated are still functional, even at low values of λ.
However, as noise and variances are injected into the controlled plant, bursting is
observed much earlier as λ is decreased in value. These burstings begin to
occur in the range 0.30 < λ < 0.50. It must be noted here that the magnitude of the noise
and variance was selected as 2, compared to the magnitude of the reference signal and
plant output, which is 15 for these simulations. Total breakdown in these situations was
observed in the same range as in the simulations where no noise and variance were
present, λ < 0.087. The least square algorithm in all cases converged in approximately
eight to ten iterations. Although results for the normalized gradient algorithm are not
shown here, this is much better performance than the normalized gradient algorithm
achieves.
The above leads us to conclude that for the least square algorithm, a higher value of the
forgetting factor λ should be selected, ideally λ = 0.98, for best performance in a
noisy environment or in an environment where disturbances are present.
The Matlab *.m file used for the least square simulations is given in the appendix.
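The λ-weighted recursive least squares update behind these simulations can be sketched as follows. This is a minimal Python/NumPy version of the standard forgetting-factor form of the recursive least squares algorithm (the thesis's own implementation is the Matlab file in the appendix); the first-order test plant, the initial covariance p(0) = 100·I, and the 200-step run length here are illustrative assumptions, not values from the thesis:

```python
import numpy as np

def rls_update(theta, P, phi, y, lam=0.98):
    """One recursive least squares step with forgetting factor lam."""
    phi = phi.reshape(-1, 1)
    den = lam + (phi.T @ P @ phi).item()
    K = P @ phi / den                        # estimator gain vector
    err = y - (phi.T @ theta).item()         # a priori prediction error
    theta = theta + K * err                  # correct the estimate
    P = (P - K @ phi.T @ P) / lam            # dividing by lam discounts old data
    return theta, P

# Illustrative first-order plant: y(t) = 0.7 y(t-1) + 1.5 u(t-1)
rng = np.random.default_rng(0)
theta = np.zeros((2, 1))       # parameter estimate [a, b]
P = 100.0 * np.eye(2)          # p(0) chosen symmetric positive definite
y_prev = u_prev = 0.0
for _ in range(200):
    u = rng.standard_normal()
    y = 0.7 * y_prev + 1.5 * u_prev
    theta, P = rls_update(theta, P, np.array([y_prev, u_prev]), y)
    y_prev, u_prev = y, u
print(theta.ravel())           # converges near [0.7, 1.5]
```

With λ = 1 this reduces to the plain recursive least squares algorithm; λ < 1 discounts old data so the estimator can track changing parameters, at the cost of the bursting behavior seen in the figures when λ is made too small.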
6.3 Simulation using Model Reference Adaptive Control Algorithm
The model reference adaptive controller developed in section 2.3 is utilized in the
following simulations. The plant to be controlled is given by
y(t) − 1.7 y(t−1) + 0.9 y(t−2) = 1.5 u(t−1) + 0.6 u(t−2) (6.3.1)
or, in delay operator form, the plant equation becomes
(1 − 1.7 q⁻¹ + 0.9 q⁻²) y(t) = q⁻¹ (1.5 + 0.6 q⁻¹) u(t) (6.3.2)
where the delay d = 1. We can also write the plant equation as
y(t)/u(t) = q⁻¹ B(q⁻¹)/A(q⁻¹) = q⁻¹ (1.5 + 0.6 q⁻¹)/(1 − 1.7 q⁻¹ + 0.9 q⁻²) (6.3.3)
The zero of the plant, found from 1.5 + 0.6 q⁻¹ = 0, is z = −0.6/1.5 = −0.4. This zero is
located inside the unit circle in the z-plane. The plant is stable and minimum phase. We
wish to design for a closed loop system response that has a natural frequency of
ω₀ = 0.9 rad/s and a damping coefficient of ζ = 0.8. We also design for a steady state
error of e_ss = 0 to a step input. A reference model with this form can be described by a
continuous time, second order system having the form
Y(s)/U(s) = ω₀² / (s² + 2 ζ ω₀ s + ω₀²) (6.3.4)
It is well known that continuous time systems reach steady state when s = 0. The
system given in equation (6.3.4) will have zero steady state error if the output signal
matches the input signal when s = 0, or e_ss = 0 if Y(s)/U(s) = 1 at s = 0. For this case
the model in equation (6.3.4) becomes
Y(s)/U(s) = ω₀²/ω₀² = 1, so e_ss = 0 (6.3.5)
We can rewrite the reference model in discrete time form
Y(t)/U(t) = q⁻¹ Bm/Am = q⁻¹ k₀ / (1 + a₁ q⁻¹ + a₂ q⁻²) (6.3.6)
where
a₁ = −2 e^(−ζω₀h) cos(√(1 − ζ²) ω₀h)
a₂ = e^(−2ζω₀h) (6.3.7)
and h is the sample period. For a zero steady state error the system gain must equal 1 at
q = 1, so k₀ = 1 + a₁ + a₂. Using the desired values ω₀ = 0.9 and ζ = 0.8, and the
sampling period h = 1, we can solve for a₁ and a₂:
a₁ = −2 e^(−0.8(0.9)) cos(√(1 − 0.8²) (0.9)) = −0.835
a₂ = e^(−2(0.8)(0.9)) = 0.237 (6.3.8)
and
k₀ = 1 + a₁ + a₂ = 1 − 0.835 + 0.237 = 0.402 (6.3.9)
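These coefficient values can be checked numerically; a quick sketch of the computation in equations (6.3.7) through (6.3.9):

```python
import math

omega0, zeta, h = 0.9, 0.8, 1.0   # natural frequency, damping coefficient, sample period

a1 = -2.0 * math.exp(-zeta * omega0 * h) * math.cos(math.sqrt(1.0 - zeta**2) * omega0 * h)
a2 = math.exp(-2.0 * zeta * omega0 * h)
k0 = 1.0 + a1 + a2                # unity system gain at q = 1

print(round(a1, 3), round(a2, 3), round(k0, 3))   # -0.835 0.237 0.402
```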
The polynomials R, S, and T for the model reference controller are found below:
T = Bm = k₀ = 0.402
R = B R₁ = (1.5 + 0.6 q⁻¹) R₁, with R₁ = 1
S = s₀ + s₁ q⁻¹ = 0.865 − 0.663 q⁻¹
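The S polynomial follows from the Diophantine equation Am = A R₁ + q⁻¹ S of section 2.3; with R₁ = 1 and d = 1 this reduces to S = q(Am − A), which a short sketch can confirm (coefficient lists are in ascending powers of q⁻¹):

```python
A  = [1.0, -1.7, 0.9]      # plant denominator A(q^-1) from (6.3.2)
Am = [1.0, -0.835, 0.237]  # reference model denominator from (6.3.8)

# Am - A has no constant term, so shifting up by one power of q gives S directly:
s0, s1 = (am - a for am, a in zip(Am[1:], A[1:]))
print(s0, s1)              # approximately 0.865 and -0.663
```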
Our equation for the model reference controller is now
(1.5) u(t) = −0.6 u(t−1) + 0.402 r(t) − 0.865 y(t) + 0.663 y(t−1) (6.3.10)
or
u(t) = −0.4 u(t−1) + 0.268 r(t) − 0.577 y(t) + 0.442 y(t−1) (6.3.11)
The adaptive model reference controller is
θ̂(t+d) = θ̂(t+d−1) + p(t) ψ(t) [y(t+d) − ym(t+d)] (6.3.12)
where θ̂(t+d) is the estimate obtained by the following estimation algorithm
p(t) = p(t−1) − [p(t−1) ψ(t) ψᵀ(t) p(t−1)] / [1 + ψᵀ(t) p(t−1) ψ(t)] (6.3.13)
with the conditions on p(t) that p(0) = p₀, p₀ > 0, and θ̂(0) = θ₀.
The Model Reference Adaptive Controller (MRAC) simulations were performed with
Matlab version 5.1. The initial condition selected for θ̂ in the simulations is
θ̂(0) = [1 1 1 1]ᵀ.
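Before turning to the adaptive runs, the fixed control law (6.3.11) can be sanity-checked in simulation: closing the loop around the plant (6.3.1) should reproduce the reference model (6.3.6) almost exactly, since R, S, and T were derived for exact model matching. A minimal Python sketch (the thesis's simulations used Matlab 5.1; the square-wave reference of amplitude 15 and the 400-step horizon are assumptions carried over from the least squares simulations):

```python
a1, a2, k0 = -0.835, 0.237, 0.402   # reference model coefficients from (6.3.8)-(6.3.9)

y1 = y2 = u1 = u2 = ym1 = ym2 = r1 = 0.0
worst = 0.0                          # largest tracking error seen
for t in range(400):
    r = 15.0 if (t // 50) % 2 == 0 else -15.0            # assumed square-wave reference
    y = 1.7 * y1 - 0.9 * y2 + 1.5 * u1 + 0.6 * u2        # plant (6.3.1)
    ym = -a1 * ym1 - a2 * ym2 + k0 * r1                  # reference model (6.3.6)
    u = -0.4 * u1 + 0.268 * r - 0.577 * y + 0.442 * y1   # controller (6.3.11)
    worst = max(worst, abs(y - ym))
    y2, y1, u2, u1 = y1, y, u1, u
    ym2, ym1, r1 = ym1, ym, r
print(worst)   # small relative to the signal amplitude of 15
```

The residual tracking error comes only from the rounding of the controller coefficients; with exact coefficients the closed loop and the reference model coincide.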
[Figures 6.3.1 through 6.3.11: MRAC system output, reference signal, tracking error, and theta vector plots; the plot images are not reproduced in this text. Captions:]
Figure 6.3.1 MRAC Noise = 0, and λ = 1
Figure 6.3.2 MRAC Noise = 0.0001, and λ = 1
Figure 6.3.3 MRAC Noise = 0.001, and λ = 1
Figure 6.3.4 MRAC Noise = 0.005, and λ = 1
Figure 6.3.5 MRAC Noise = 0.01, and λ = 1
Figure 6.3.6 MRAC Noise = 0.1, and λ = 1
Figure 6.3.7 MRAC Noise = 0, and λ = 0.98
Figure 6.3.8 MRAC Noise = 0, and λ = 0.5
Figure 6.3.9 MRAC Noise = 0, and λ = 0.4
Figure 6.3.10 MRAC Noise = 0.01, and λ = 0.98
Figure 6.3.11 MRAC Noise = 0.01, and λ = 0.9

Full Text 
PAGE 1
MODEL REFERENCE ADAPTIVE CONTROL by Alan Edwin Gibbs B.S., University of Colorado at Denver, 1997 A thesis submitted to the University of Colorado at Denver in partial fulfillment of the requirements for the degree of Master of Science Electrical Engineering 2000
PAGE 2
This thesis for the Master of Science degree by Alan Edwin Gibbs has been approved by Jan T. Bialasiewicz Hamid Z. Fardi Miloje S. Radenkovic 1/ Date
PAGE 3
Gibbs, Alan Edwin (M.S., Electrical Engineering) Model Reference Adaptive Control Thesis directed by Professor Miloje S. Radenkovic ABSTRACT This thesis explores Model Reference Adaptive Control theory. Adaptive control systems are utilized when the parameters of the system being controlled are unknown and/ or vary in time. The theory for the Normalized Gradient algorithm, Least Squares algorithm, Model Reference Adaptive Control algorithms based on the normalized gradient and least squares algorithms, and the Pole Placement algorithms are developed. The design procedure when disturbances are present and when the system is non minimum phase are discussed. The thesis also develops the methodology for robust adaptive control systems as well as addressing practical issues and implementation of robust controllers. Simulations utilizing the least square algorithm and the model reference adaptive control algorithm based on least squares are performed with varying noise levels injected into the controlled system. A robust model reference adaptive controller is also developed and compared to the idealized model reference controller. Using information from the simulations, global stability, tracking performance, and robustness of the least square based controllers is investigated. This abstract accurately represents the content of the candidate's thesis. I recommend its publication. Miloje S. Radenkovic l11
PAGE 4
CONTENTS Figures ....................................................................................... Tables ....................................................................................... Chapter 1. Introduction ........................................................................ 1.1. Model Reference Nonadaptive Control System ............................... 1.2. Model Reference Adaptive Control Systems ................................... 2. Model Reference Adaptive Control Algorithms ................................ 2.1. N onnalized Gradient Algorithm ................................................ V1 lX 1 4 8 11 12 2.2. Least Squares Algorithm . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 16 2.3. Model Reference Adaptive Controller........................................... 19 2.4. Pole Placement Adaptive Controller . . . . . . . . . . . . . . . . . . . . . ... 26 3. Algorithm Design Procedure for Disturbances . . . . . . . . . . . . . . . ... 31 4. Design Procedure When System is Nonminimum Phase..................... 34 5. Practical Issues and Robust Control Implementation.......................... 37 5.1. Computational Delay . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . ... 38 5.2. Sampling, Prefiltering, Postfiltering, and Data Filters . . . . . . . . . . . . 41 5.3. Parameter Tracking . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 46 5.4. Covariance Resetting............................................................... 47 5.5. Estimator Windup and Methods to Combat Windup . . . . . . . . . . . . .. 48 5.5.1. Conditional Updating . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .. 49 5.5.2. Constant Trace Algorithms . . . . . . . . . . . . . . . . . . . . . . . . . . . 49 5.5.3. Directional Forgetting . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 50 5.5.4. Leakage.............................................................................. 51 5.6. Robust Estimation . . . . 
. . . . . . . . . . . . . . . . . . . . . . . . . . . .. 52 lV
PAGE 5
6. Simulations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 54 6.1. Simulation using Least Mean Square Algorithm . . . . . . . . . . . . . . . .. 56 6.2. Results of Least Mean Square Simulations....................................... 76 6.3. Simulation using Model Reference Adaptive Control Algorithm . . . . . . 78 6.4. Results of Model Reference Adaptive Controller Simulations . . . . . . . .. 93 6.5. Simulations using Robust MRAC with AntiAliasing Filter . . . . . . . . ... 96 6.6. Results of Robust MRAC Antialiasing Filter Simulations . . . . . . . . . ... 102 6. 7. Robust Model Reference Adaptive Control Simulations . . . . . . . . . . .. 103 6.8. Results of Robust Model Reference Control Simulations . . . . . . . . . . 117 7. Conclusion . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .. 118 Appendix A Least Mean Square Algorithm . . . . . . . . . . . . . . . . . . . . . . . . . . 120 B ModelReference Adaptive Control Algorithm................................. 123 C Robust ModelReference Adaptive Control Algorithm........................ 125 D Robust MRAC compared to Idealized MRAC Algorithm..................... 127 E Antialiasing Filter Algorithm . . . . . . . . . . . . . . . . . . . . . . . . . ... 130 References . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . ... 134 v
PAGE 6
FIGURES Figure 1.1. Controller Design Process .. .. .. .. .. .. .. .. .. .. .. .. .. . .. .. . .. .. .. .. .. .. . .... 2 1.2. Adaptive Control System . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2 1.3. Nonadaptive Model Reference Control System . . . . . . . . . . . . . . ... 3 1.4. Simplified Nonadaptive Model Reference Control System . . . . . . . . .. 3 1.1.1. Nonadaptive Closed Loop System . . . . . . . . . . . . . . . . . . . . . . . 5 1.1.2. Nonadaptive Reference Model................................................... 6 2.1. General Form of Adaptive Controller . . . . . . . . . . . . . . . . . . . . . 11 2.4.1. Pole Placement Controller .. .. .. .. .. .. .. .. .. .. .. .. .. .. .. .. .. .. .. .. .. . .. .. .... 27 5.1. Adaptive Robust Controller . . . . . . . . . . . . . . . . . . . . . . . . . . . 38 5.1.1. Computational Delay . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . ... 39 5.2.1. Amplitude Curve for the Data Filter H1(q) ........ .. .. ...... .. ...... .. ... . 45 6.1.1. Least Square A =0.98, No Noise, and No Variance .. .. .. .. .. .... .. .... .. .... 57 6.1.2. Least Square A= 0.20, No Noise, and No Variance .. .. .. .. .. .. .. .. .. .. .. .. .. 58 6.1.3. Least Square A =0.10, No Noise, and No Variance .. .. .. .. .. .... .. .... .. .... 59 6.1.4. Least Square A =0.09, No Noise, and No Variance . .. . . .. .. .. .. .. ... 60 6.1.5. Least Square A = 0. 089, No Noise, and No Variance .. .. .. .. .. .. .. .. .. .. .... 61 6.1.6. Least Square A =0.087, No Noise, and No Variance .. .. .... .. .... .. .. .. .... 62 6.1.7. Least Square A =0.98, Noise= 2, and No Variance 6.1.8. Least Square A= 0.50, Noise= 2, and No Variance 6.1.9. Least Square A=0.20, Noise=2, and No Variance 63 64 65 6.1.10. LeastSquare A=0.083, Noise=2, andNoVariance..................... 66 6.1.11. Least Square A =0.98, No Noise, and With Variance= 2 .. .. .. .. .. .. .. .. 67 V1
PAGE 7
6.1.12. LeastSquare A.=0.50,NoNoise, andWithVariance=2 ................ 68 6.1.13. Least Square A. =0.30, No Noise, and With Variance= 2 . .. .. .. .. .. .. 69 6.1.14. Least Square A.= 0.08, No Noise, and With Variance= 2 . . . . . . . . 70 6.1.15. Least Square /...=0.07, No Noise, and With Variance= 2 .. . .. .. .. .. .. .. 71 6.1.16. Least Square A.=0.98, Noise=2, and With Variance=2 6.1.17. LeastSquare A.=0.50,Noise=2, andWithVariance=2 6.1.18. Least Square A. =0.09, Noise= 2, and With Variance= 2 72 73 74 6.1.19. Least Square A. =0.0044, Noise= 2, and With Variance= 2 . .. .. .. .. .. . 75 6.3.1. MRAC Noise= 0, and A.= 1 . . . . . . . .. . .. . . . . . . . . .. .. .. . . .. 81 6.3.2. MRAC Noise= 0.0001, and A.= 1 . . .. .. .. .. . . . . . . . . .. .. .. . . . 82 6.3.3. MRAC Noise= 0.001, and A.= 1 . . . .. .. . . . . . . . . . . . . .. . . . .. 83 6.3.4. MRAC Noise= 0.005, and A. = 1 . . . . .. .. .. . . . . . . . . . . . . .. . . .. 84 6.3.5. MRAC Noise= 0.01, and A.= 1 . .. . . .. . . .. .. .. . . . .. ... .. . . . .... 85 6.3.6. MRAC Noise= 0.1, and A.= 1 . . . . . .. .. .. .. . . . . . . . . .. .. .. .. . . .... 86 6.3.7. MRAC Noise= 0, and A.= 0.98 . . .. . . .. . . . . . . . . . . . . .. .. . . . 87 6.3.8. MRAC Noise= 0, and A. = 0.5 . . . . . .. .. . .. . . . . . . . . . .. . . . . .. 88 6.3.9. MRAC Noise= 0, and A.= 0.4 . . . . .. .. .. .. . . . . . . . . . . .. .. . .. .. 89 6.3.10. MRAC Noise= 0.01, and A.= 0.98 . .. .. .. .. .. . . . .. .. . .. .. .. .. . ... 90 6.3.11. MRAC Noise= 0.01, and A. = 0.9 . . .. .. .. .. .. . .. . . . . . . .. .. .. . . . . 91 6.3.12. MRAC Noise= 0.01, and A.= 0.5 . .. .. .. .. .. .. . . . . . . . . . . .. .. . . 92 6.5.1. Step Response of the Plant . . . . . . . . . . . . . . . . . . . . . . . . . . . .. 96 6.5.2. Step Response of the Closed Loop Plant with MRAC . . . . . . . . . . . . 97 6.5.3. Step Response of the Closed Loop with Antialiasing Filter.................. 97 6.5.4. Gain Magnitude for the Plant . . . . . . 
. . . . . . . . . . . . . . .. . .. .. . . 98 6.5.5. Phase Margin for the Plant . . . . . . . . . . . . . . . . . . . . . . . . . . . 98 Vl1
PAGE 8
6.5.6. Gain Magnitude for the Closed Loop Plant with MRAC . . . . . . . . . . .. 99 6.5.7. Phase Margin for the Closed Loop Plant with MRAC . . . . . . . . . . . . 99 6.5.8. Gain Magnitude for the Closed Loop Plant with Antialiasing Filter . . .... 100 6.5.9. Phase Margin for the Closed Loop Plant with Antialiasing Filter............ 100 6.5.10. Gain Magnitude Comparison..................................................... 101 6.5.11. Phase Margin Comparison....................................................... 101 6.7.1. 6.7.2. 6.7.3. 6.7.4. 6.7.5. 6.7.6. RobustMRAC RobustMRAC RobustMRAC RobustMRAC RobustMRAC RobustMRAC Noise= 0, and A= 0.98 .. .. .. .. .. .. .. .. .. .. .. .. .. .. .. .. ..... 103 Noise= 0.001, and A= 0.98 .. .. .. .. .. .. .. .. .. .. .. .. .. .. .. 104 Noise= 0.01, and A= 0.98 .. .. .. .. .. .. .... .. .. .. .. .... .. .... 105 Noise=0.1, andA=0.98 .................................... 106 Noise= 0 and A= 0.5 .. .. .. .. .. .. .. .. .. .. .. .. .. .. .. .. .. ..... 107 Noise= 0 and A= 0.3 . . . . . . . . . . . . . . . . . . . .... 108 6.7.7. RobustMRAC Noise=0.1 andA=0.9 109 110 6.7.8. Robust MRAC Noise= 0.1 and A= 0.9 6.7.9. Robust MRAC Noise= 0.001 and A= 0.98 .. .. .... .. ... .. ................... 111 6.7.10. Robust MRAC Noise= 0.001 and A= 0.98 .. .. .. .. .. .. .. .. .. .. .. .. .. .. .. ... 112 6.7.11. Robust MRAC Noise= 0.003 and A= 0.9 6.7.12. RobustMRAC Noise=0.003 andA=0.9 113 114 6.7.13. Robust MRAC Noise= 0.002 and A= 0.5 .. .. .... ... .. .... .. ................. 115 6.7.14. RobustMRAC Noise=0.002andA=0.5 .................................... 116 Vll1
PAGE 9
TABLES Table 5.2.1 Properties of Fourth Order Bessel Filters . . . . . . . . . . . . . . . . . . ... 42 5.3.1. Relation Between 1j j h and A . . . . . . . . . . . . . . . . . . . . . . . . . 4 7 lX
PAGE 10
1. Introduction The objective of this thesis is to explore Model Reference Adaptive Control systems. Theoretical background, algorithm development, and simulation examples will be presented. Adaptive control systems provide a systematic technique for automatic adjustment of controllers in real time, to maintain a desired level of performance when the parameters of the dynamic model are unknown and / or change in time. In order to design a good controller the designer needs to: 1. Specify the desired closed loop performance 2. Know the dynamic model of the plant to be controlled 3. Have a controller design method making it possible to meet the desired performance for the corresponding plant model An adaptive control system of a plant or process is one that automatically tunes itself in real time. The dynamic model of the plant can be obtained from input/ output measurements obtained in open or closed loop. Or put simply, the design of the controller is accomplished from data collected from the system to be controlled. Adaptive control systems can be viewed as tuning the controller in real time from data collected in real time on the system being controlled. The block diagrams on the following page in Figures 1.1 and 1.2 illustrate the adaptive control design process discussed above. 1
PAGE 11
Pe Desired Controller ..... Plant rformance"' Design Model Controller Parameters Reference ... u y Model "' ... Plant Controller "' ,... r+ Figure 1.1 Controller Design Process Desired Adaptation ..... .... Pe rformance ,.. Scheme ..... .... Controller Parameters J Reference yModel .. Adjustable u Plant Controller .. ,.. ,.. I Figure 1.2 Adaptive Control System 2
PAGE 12
Some systems that appear to be adaptive lack the structure for their controller to be selftuning. These systems use known fixed parameters to produce the desired results and are not selftuning to changes in the system being controlled. Nonadaptive control systems may be thought of as a prefilter to the system being controlled and do not interactively change the system input. Figures 1.3 and 1.4 below depict block diagrams of the nonadaptive model reference control system. u Reference Model Plant Figure 1.3 Nonadaptive Model Reference Control System u Prefilter Plant y Figure 1.4 Simplified Nonadaptive Model Reference Control System 3 y
PAGE 13
1.1 Model Reference Nonadaptive Control System For a system as shown in Figures 1.3 and 1.4 we consider a system in the following form (1.1.1) where u(t) and y(t) are the system input and output respectively and d is the system delay. We assume that A(q1 ) and B (q1 ) are coprime polynomials and given in the form A( 1) 1 1 n q = + a1 q + ... + an q (1.1.2) (1.1.3) where q1 is the unit delay operator. We wish to design a controller where the closed loop system gives the desired response to the reference signal r ( t ). (1.1.4) where R, S, and are polynomials of the form R ( 1) 1 1 d1 1 q = + r1 q + + rd1 q (1.1.5) (1.1.6) and ns =max {deg Am d, deg A1} Given the design specifications ron (natural frequency) and (damping coefficient). We can synthesize the nonadaptive reference model in the form 4
PAGE 14
(1.1. 7) where y m ( t) is the output of the reference model and A ( 1) 1 1 nm m q = +aim q + ... + anm q (1.1.8) B ( 1) b b 1 b mm m q = Om + lm q + .. + mm q (1.1.9) We assume that Am(q1 ) and Bm (q1 ) do not have common factors. The block diagrams in Figures 1.1.1 and 1.1.2 depict the closed loop system. Controller r, r(t)1 ... ... T (q1) R (.q4) + ... ... 1 u(l) ... q "d B (q1 ): ... A (q.cl) )l(t) ... ... S{ql) .., R (q;t) I I Figure 1.1.1 Nonadaptive Closed Loop System t(t) ... Bm.(q1) A.w. (q 'l) Y_111(1) _., II"" II"" Figure 1.1.2 Nonadaptive Reference Model 5
PAGE 15
If solve equations (1.1.1) and (1.1.4) for u(t) and equate we can solve for y(t). q1 B(q1) T(q1) y(t)= A(q1 )R(q1) + qd B(q1 )s(q1) (1.1.10) Model reference design requires that the transfer function of the closed loop system equal the transfer function of the reference model. From equations (1.1. 7) and (1.1.1 0) we are now in a position to derive (1.1.11) The polynomials T( q I) and R( q I) in equation (1.1.11) are given by (1.1.12) and S(q1 ) is obtained as a solution to the Diophantine equation (1.1.13) Equation (1.1.13) has a causal solution if the polynomials R 1(q1 ) and S(q1 ) have the form R (q1)1 +a q1 + + r qd+l I I d1 (1.1.14) (1.1.15) where ns = max{n 1, deg Am d} and n is the degree of the polynomial A(q1 ) and deg Am denotes the degree of A m(q1 ). The degree of R1(q1 ) is equal to d1 6
PAGE 16
and the degree of S( q I) is equal to max { deg Am d, deg A 1} The control structure of equation (1.1.4) is determined by equations (1.1.12) and (1.1.13) in terms of the system polynomials A( q I) B( q I) and the reference model polynomials A m(q1 ) and B m(q1). We can easily check to see if the polynomials R(q1), T(q1), and S(q1 ) satisfy the desired relation given in equation (1.1.11). If we substitute R(q1 ) and T(q1 ) from equation (1.1.12) into (1.1.11) we obtain (1.1.16) Also we know from equation (1.1.13) that A m(q1 ) = A(q1 ) R 1(q1 ) + qd S(q1 ) or (1.1.17) From the above relation we can conclude that the closed loop transfer function is equal to the transfer function of the reference model. 7
PAGE 17
1.2 Model Reference Adaptive Control Systems Any system being controlled requires a controller that can measure the systems output against the intended output. Adaptive control systems adjust the controller in real time to maintain the intended output. Adaptive control systems can use the output error of the controlled system to maintain the intended output. Output error of the system being controlled is determined by taking the difference between the reference signal and the system output. The error signal could be used directly without modification as feedback to close the system loop, however, this case would not be an adaptive controller. In a system that utilizes adaptive control, the adaptive controller closes the feedback loop around the system to be controlled. All adaptive control systems perform three main functions: 1. Identification of system dynamics 2. Control decision making 3. Modification of control parameters The adaptive controller uses the plants input and output in conjunction with desired performance to identify the system dynamics. The adaptive controller then makes a decision as to how the control parameters will be modified based on the system dynamics identified in step one. Adaptive control systems can identify the plant dynamics, compare the dynamics to the desired performance and then regulate the 8
PAGE 18
dynamics of the system being controlled. Or the adaptive controller can identify the system output and compare it to the reference signal before making a decision on the adjustment to the controller. These two methods of control can be thought of as either direct or indirect adaptive control. The direct method of adaptive control calculates the controller parameters direcdy to minimize the error signal. This method requires that the controller have enough parameters to match the order of the plant being controlled. The indirect method is used when the plant parameters are estimated to minimize the error signal. The adaptive controller then calculates the controller parameters to produce the desired performance. This necessitates the use of two separate sets of parameters. The use of indirect adaptive control makes it possible for the control feedback signal to be different than the error signal. The use of indirect control allows the controller the possibility of recalculating the controller parameters at predetermined intervals instead of every interval. The extra time allows the controller to improve plant identification and the estimated plant parameters used to retune the controller may be more accurate. In a higher order system this may be mandatory to allow the controller enough time to process the plant controller parameter estimates in real time. 9
PAGE 19
The heart of any adaptive controller is the Parameter Adaptation Algorithm (P AA). The P AA is the core of the controller that adapts the estimates of the adjustable parameters used to control the plant. A comprehensive plant model is mandatory in order to select the appropriate P AA to control the plant. The algorithms discussed in this thesis assume that the plant models have parameters that are linearly related to the input and output of the plant. However, the parameters may be time varying and the plant model may be nonlinear. The Model Reference Adaptive Control algorithms goal is to estimate the unknown parameters of the plant in real time by minimizing the error that is recursively calculated from the plant output and the reference model while utilizing previously calculated parameter values. 10
PAGE 20
2. Model Reference Adaptive Control Algorithms Adaptive control algorithms examined in this thesis differ in the way information is processed in real time in order to tune the controllers for achieving the desired performance. In an adaptive control system we would like the steady state value of the error to be zero. Figure 2.1 below illustrates the general form of an adaptive controller ., r ( t ) u (t) Unknown y(t+l) ... Regulator .,.: ... ... ... ... System ... .... .,, ., F+ Parameter .... I: Estimator ... J Model I II"' I I y*(t+l) Figure 2.1 General Form of Adaptive Controller A system where the error goes to zero in a minimum number of sampling periods is referred to as a deadbeat controller. This leads us to select the fastest method possible to achieve zero error, namely a steepest descent algorithm. Let us examine the normalized gradient method. 11
PAGE 21
2.1 Normalized Gradient Algorithm Consider the system of the form (2.1.1) Where u(t) is the system input, y(t) is the system output and the polynomials A(q1 ) and B ( q I) have the form (2.1.2) (2.1.3) For a given design specifications ro (natural frequency) and l; (damping coefficient) we can synthesize the adaptive reference model in the form (2.1.4) where y m ( t) is the output of the reference model and A ( I) 1 I nm m q = + aim q + ... + anm q (2.1.5) B ( I) b b I b mm m q = Om + lm q + + mm q (2.1.6) We assume that Am(q1 ) and Bm (q1 ) do not have common factors. Our objective is to design an adaptive controller to achieve zero tracking error where the output of the plant y( t) minus the reference model output / ( t) goes to zero as t 4 oo or lim [y(t) /(t)] = 0 I+ ctJ (2.1.7) Or the following functional criterion should be minimized 12
PAGE 22
10 = _!_ [y(t)/(t) r 2 If we write the adaptive control system model in the form y(t+1)+ a1y(t)+a2y(t1)+ ... +anAY(tnA) = ... = b0 u(t)+b1 u(t1)+ ... +bn8u(tn8 ) Solving for y( t + 1) y(t+ 1) =a1 y(t)a2 y(t 1). .. anAy(tnA)+ ... + b0 u(t)+b1 u(t1)+ ... +bn8u(tn8 ) We know from equation (2.1.10) that Let and Y(t + 1) = [a a a b b b ] I 2 nA 0 I n8 b ... b ] I nB y(t) y(t 1) = y(tnA) u(t) u(t1) 13 y(t) y(t1) y(tnA) u(t) u(t1) (2.1.8) (2.1.9) (2.1.10) (2.1.11) (2.1.12) (2.1.13)
PAGE 23
We now have y(t+1)=8/ (t) (2.1.14) The vector 8 0 is referred to as the system parameter vector and is called the measurement vector. From (2.1.14) it is easy to obtain y(t+I)Ym(t+l) =8/ qj(t)Ym(t+l) (2.1.15) From (2.1.7) we know lim [y(t)Ym(t) J = 0 and from equation (2.1.15) we can l4ct:J conclude the controller is optimal in the sense of (2.1. 7) is given by (2.1.16) However, the system parameters are not always known, the adaptive controller becomes (2.1.17) Where 8 (t) is an estimate of the real parameter vector 8 0 We can obtain 8 ( t) from ) of 8 t + 1) = B(t J1 =(0 ) ao t where 1.1 is the algorithm gain 0
PAGE 24
a[y(t)ym(t)]() () t /}() t (2.1.20) We are now in position to derive from equations (2.1.18) and (2.1.20) 8(t+I)= e(t) +.u(t)[y(t)ym(t)] (2.1.21) The equation derived in (2.1.21) will be unstable. We can normalize the equation by the use of the Euclidian norm of the measurement vector ( t) The constant c is up to the discretion of the designer. Most often a small value is chosen to avoid division by zero. 1 The Euclidian norm of .is 0 < c < co Now (2.1.21) becomes c + Where e(t+l) = B(t) + (t) 2 [y(t) ym(t)] c + \\(t)l\ (i) 0 < Jl < 2 Is the algorithm gain (ii) 0 < c < co Constant used at designers discretion (2.1.22) (iii) S ( 0) = B Initial condition of system parameter estimate vector Equation (2.1.22) above is known as the Normalized Gradient Algorithm. 15
PAGE 25
2.2 Least Squares Algorithm In practical applications of adaptive control algorithms, better performance can be obtained if we use the least squares algorithm. The normalized gradient algorithm discussed above will only be globally stable if (2.2.1) For all 0 ro 2". This is called the strictly positive real condition. Condition (2.1.23) is very restrictive and means that the algorithm will not be stable for all reference models. We need to design an estimation algorithm such that the restrictive condition above is avoided. Better performance can be obtained with the use of the least squares algorithm. If we use instead of equation (2.1.22) the following algorithm e (t+ 1) = B(t) + p(t) (t) [y(t)Ynr(t)] (2.2.2) Where the matrix p(t) is defined by p(t) 1 = p(t1) 1 + (t) r, p(O)= Po I, Po> 0 (2.2.3) The form of the least square algorithm shown in equation (2.2.2) is not practical to use in a recursive situation, since it requires the algorithm to calculate the inverse of the matrix p(t) at every iteration. The use of the Matrix Inversion Lemma (MIL) will enable us to develop a form of the least square algorithm that does not require the inversion, thus reducing the number of calculations needed at each iteration. 16
PAGE 26
Let A be a ( n x n) dimensional matrix, B a ( n x m) dimensional matrix of maximum rank, C a ( m x m) dimensional nonsingular matrix, and D a ( m x n) dimensional matrix of maximum rank, Then the following MIL identity holds ( A + B CD ) 1 = A 1 A 1 B ( C 1 + D A 1 B) 1 D A 1 (2.2.4) Taking the inverse of (2.2.3) yields [p(t) 1] 1 = [p(t1) 1 + (t) T] 1 (2.2.5) Taking the right side of equation (2.2.5) and matching it to the left side of the MIL and letting A= p(t1) 1 B = C = 1, and D = (t) r. We can derive [ (t)_1]_1 = (tl) (t) T p(tl) p p 1 + (t) T p(tl) (t) (2.2.6) Rewriting equation (2.2.6) we now have (t) = (t1)p(tI)Ht) (t) T p(t1) p p I+ (t) T p(tl) (t) (2.2.7) With the conditions on p(t) that p(O) = p(O) r > 0 or p(O) is Hermitian Then the conditions of the MIL will be satisfied for all t Equation (2.2.7) above is known as the Recursive Least Square Algorithm (RLS). It is a selftuning adaptive control algorithm that does not require any matrix inversions. Implementation of the RLS algorithm requires that that p (0) be chosen as a symmetric 17
positive definite matrix. This ensures that every matrix p(t) will also be symmetric and positive definite.
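The RLS recursion in equations (2.2.2) and (2.2.7) fits in a few lines of code. The thesis simulations use Matlab; the following is an illustrative pure-Python sketch, where the two-parameter system, the regressor sequence, and all numerical values are invented for the example:

```python
import math

def rls_step(theta, P, phi, y):
    """One RLS step: update P via (2.2.7), then the estimate via (2.2.2)."""
    n = len(theta)
    # P(t-1) * phi(t)
    Pphi = [sum(P[i][j] * phi[j] for j in range(n)) for i in range(n)]
    denom = 1.0 + sum(phi[i] * Pphi[i] for i in range(n))
    # P(t) = P(t-1) - P(t-1) phi phi^T P(t-1) / (1 + phi^T P(t-1) phi)
    P_new = [[P[i][j] - Pphi[i] * Pphi[j] / denom for j in range(n)]
             for i in range(n)]
    # prediction error, then theta(t+1) = theta(t) + P(t) phi * error
    err = y - sum(theta[i] * phi[i] for i in range(n))
    gain = [Pphi[i] / denom for i in range(n)]   # equals P(t) * phi(t)
    theta_new = [theta[i] + gain[i] * err for i in range(n)]
    return theta_new, P_new

# Example: recover the invented true parameters [2.0, -1.0] from noiseless data.
theta_true = [2.0, -1.0]
theta = [0.0, 0.0]
P = [[1000.0, 0.0], [0.0, 1000.0]]        # p(0) = p0 * I with p0 > 0
for t in range(50):
    phi = [math.sin(0.5 * t), 1.0]        # persistently exciting regressor
    y = theta_true[0] * phi[0] + theta_true[1] * phi[1]
    theta, P = rls_step(theta, P, phi, y)
```

Note that the update preserves the symmetry of P by construction, which is exactly the property the positive-definite choice of p(0) guarantees in the text.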
2.3 Model Reference Adaptive Controller

Next we consider the design of a Model Reference Adaptive Controller based on the normalized gradient algorithm. We consider a system of the form

A(q^{-1})\,y(t) = q^{-d}\,B(q^{-1})\,u(t)    (2.3.1)

where u(t) and y(t) are the system input and output respectively and d is the system delay. We assume that A(q^{-1}) and B(q^{-1}) are coprime polynomials given in the form

A(q^{-1}) = 1 + a_1 q^{-1} + \ldots + a_n q^{-n}    (2.3.2)

B(q^{-1}) = b_0 + b_1 q^{-1} + \ldots + b_m q^{-m}    (2.3.3)

where q^{-1} is the unit delay operator. We wish to design a controller such that the closed loop system gives the desired response to the reference signal r(t):

R(q^{-1})\,u(t) = -S(q^{-1})\,y(t) + T(q^{-1})\,r(t)    (2.3.4)

where R, S, and T are polynomials of the form

R(q^{-1}) = 1 + r_1 q^{-1} + \ldots + r_{d-1} q^{-d+1}    (2.3.5)

S(q^{-1}) = s_0 + s_1 q^{-1} + \ldots + s_{n_s} q^{-n_s}    (2.3.6)

and n_s = max{deg A_m - d, deg A - 1}. Given the design specifications \omega_n (natural frequency) and \zeta (damping coefficient), we can synthesize the nonadaptive reference model in the form
A_m(q^{-1})\,y_m(t) = q^{-d}\,B_m(q^{-1})\,r(t)    (2.3.7)

where y_m(t) is the output of the reference model and

A_m(q^{-1}) = 1 + a_{1m} q^{-1} + \ldots + a_{n_m m} q^{-n_m}    (2.3.8)

B_m(q^{-1}) = b_{0m} + b_{1m} q^{-1} + \ldots + b_{m_m m} q^{-m_m}    (2.3.9)

We assume that A_m(q^{-1}) and B_m(q^{-1}) do not have common factors. If we solve equations (2.3.1) and (2.3.4) for u(t) and equate, we can solve for y(t):

y(t) = \frac{q^{-d}\,B\,T}{A R + q^{-d} B S}\; r(t)    (2.3.10)

Model reference design requires that the transfer function of the closed loop system equal the transfer function of the reference model. From equations (2.3.7) and (2.3.10) we are now in a position to derive

\frac{q^{-d}\,B\,T}{A R + q^{-d} B S} = \frac{q^{-d}\,B_m}{A_m}    (2.3.11)

If we drop the unit delay operator for convenience, equation (2.3.11) becomes

\frac{B\,T}{A R + q^{-d} B S} = \frac{B_m}{A_m}    (2.3.12)

The polynomials T and R in equation (2.3.12) are given by

T = B_m \quad \text{and} \quad R = B\,R_1    (2.3.13)

and S is obtained as a solution to the Diophantine equation
A_m = A\,R_1 + q^{-d}\,S    (2.3.14)

Equation (2.3.14) has a causal solution if the polynomials R_1 and S have the form

R_1(q^{-1}) = 1 + r_1 q^{-1} + \ldots + r_{d-1} q^{-d+1}    (2.3.15)

S(q^{-1}) = s_0 + s_1 q^{-1} + \ldots + s_{n_s} q^{-n_s}    (2.3.16)

where n_s = max{n - 1, deg A_m - d}, n is the degree of the polynomial A(q^{-1}), and deg A_m denotes the degree of A_m(q^{-1}). The degree of R_1(q^{-1}) is equal to d - 1 and the degree of S(q^{-1}) is equal to max{deg A_m - d, deg A - 1}. The control structure of equation (2.3.4) is determined by equations (2.3.12) and (2.3.13) in terms of the system polynomials A(q^{-1}), B(q^{-1}) and the reference model polynomials A_m(q^{-1}) and B_m(q^{-1}). Another way to obtain the solution for the controller polynomials is to take equation (2.3.14) and multiply A_m by the output error. We now have

A_m(q^{-1})\,[y(t+d) - y_m(t+d)] = R_1 A\,y(t+d) + S\,y(t) - A_m\,y_m(t+d)    (2.3.17)

We know from equations (2.3.1) and (2.3.7) that

A\,y(t+d) = B\,u(t) \quad \text{and} \quad A_m\,y_m(t+d) = B_m\,r(t)    (2.3.18)

Substituting (2.3.18) into (2.3.17) we can obtain

A_m\,[y(t+d) - y_m(t+d)] = B R_1\,u(t) + S\,y(t) - B_m\,r(t)    (2.3.19)
If we set the output error to zero, [y(t+d) - y_m(t+d)] = 0, we can derive from equation (2.3.19) the structure of the model reference controller

B R_1\,u(t) = -S\,y(t) + B_m\,r(t)    (2.3.20)

where R_1(q^{-1}) and S(q^{-1}) are obtained from the polynomial equation given in (2.3.14). In other words, the controller is the same as the one defined in equations (2.3.4), (2.3.13), and (2.3.14). We now derive the regressor form of equation (2.3.19). Let

\alpha(q^{-1}) = B(q^{-1})\,R_1(q^{-1}) = \alpha_0 + \alpha_1 q^{-1} + \ldots + \alpha_{n_\alpha} q^{-n_\alpha}    (2.3.21)

where n_\alpha = deg \alpha(q^{-1}) = deg B(q^{-1}) + deg R_1(q^{-1}) = deg B(q^{-1}) + (d-1). Now we can write equation (2.3.19) as

A_m\,[y(t+d) - y_m(t+d)] = \theta_0^T\,\phi(t) - B_m\,r(t)    (2.3.22)

where the system parameter vector, whose coefficients are defined by equation (2.3.21), is

\theta_0^T = [\alpha_0, \alpha_1, \ldots, \alpha_{n_\alpha};\; s_0, s_1, \ldots, s_{n_s}]    (2.3.23)

The measurement vector \phi(t) is given by

\phi(t)^T = [u(t), u(t-1), \ldots, u(t-n_\alpha);\; y(t), y(t-1), \ldots, y(t-n_s)]    (2.3.24)

If we set the output error to zero, [y(t+d) - y_m(t+d)] = 0, equation (2.3.22) becomes the following vector form
\theta_0^T\,\phi(t) = B_m\,r(t)    (2.3.25)

In the case where the real parameter \theta_0 is unknown, adaptive control concepts can be used to estimate the parameter vector, and the control is computed from

\hat{\theta}(t+d-1)^T\,\phi(t) = B_m\,r(t)    (2.3.26)

where \hat{\theta}(t+d-1) is obtained by the parameter estimator derived for the normalized gradient algorithm discussed in Section 2.1. The adaptive parameter estimator is

\hat{\theta}(t+d) = \hat{\theta}(t+d-1) + \frac{\mu\,\phi(t)\,[y(t+d) - y_m(t+d)]}{c + \|\phi(t)\|^2}

where (i) 0 < \mu < 2 is the algorithm gain and (ii) 0 < c < \infty is a constant used at the designer's discretion. As in Section 2.1, this estimator is globally stable only under a strictly positive real condition, which is restrictive and means that the algorithm will not be stable for all reference models.
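The normalized gradient update above is a one-line computation per parameter. Here is a minimal Python sketch of the estimator alone (the two-dimensional parameter vector, the rotating regressor, and the true parameter values are all invented for illustration):

```python
import math

def ng_step(theta, phi, error, mu=1.0, c=1.0):
    """Normalized gradient update:
    theta <- theta + mu * phi * error / (c + ||phi||^2),
    with 0 < mu < 2 and c > 0 as required by the algorithm."""
    denom = c + sum(p * p for p in phi)
    return [th + mu * p * error / denom for th, p in zip(theta, phi)]

# Invented true parameters; noiseless data with an exciting regressor.
theta_true = [1.5, -0.5]
theta = [0.0, 0.0]
for t in range(500):
    phi = [math.sin(0.7 * t), math.cos(0.7 * t)]     # rotating regressor
    y = sum(a * p for a, p in zip(theta_true, phi))  # "plant" output
    y_hat = sum(a * p for a, p in zip(theta, phi))   # prediction
    theta = ng_step(theta, phi, y - y_hat)           # drive error to zero
```

The normalization by c + ||phi||^2 is what bounds each step regardless of the regressor magnitude; without it, large regressors could destabilize the update.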
We need to design an estimation algorithm such that the restrictive condition is avoided. If we filter both sides of equation (2.3.22) with 1/A_m(q^{-1}) we derive

y(t+d) - y_m(t+d) = \theta_0^T\,\frac{\phi(t)}{A_m(q^{-1})} - \frac{B_m}{A_m(q^{-1})}\,r(t)    (2.3.29)

Let

\psi(t) = \frac{1}{A_m(q^{-1})}\,\phi(t)    (2.3.30)

Rewriting equation (2.3.29) using (2.3.30),

y(t+d) - y_m(t+d) = \theta_0^T\,\psi(t) - \frac{B_m}{A_m}\,r(t)    (2.3.31)

Using the fact that equation (2.3.7) implies

\frac{B_m}{A_m}\,r(t) = y_m(t+d)    (2.3.32)

and substituting equation (2.3.32) into (2.3.31), we have

y(t+d) - y_m(t+d) = \theta_0^T\,\psi(t) - y_m(t+d)    (2.3.33)

If we set the output error y(t+d) - y_m(t+d) = 0, equation (2.3.33) becomes

\theta_0^T\,\psi(t) = y_m(t+d)    (2.3.34)

The corresponding adaptive model reference controller becomes

\hat{\theta}(t+d-1)^T\,\psi(t) = y_m(t+d)    (2.3.35)

where \hat{\theta}(t+d-1) is the estimate obtained by the following estimation algorithm:

\hat{\theta}(t+d) = \hat{\theta}(t+d-1) + \frac{\mu\,\psi(t)\,[y(t+d) - y_m(t+d)]}{c + \|\psi(t)\|^2}    (2.3.36)
where (i) 0 < \mu < 2 is the algorithm gain, (ii) 0 < c < \infty is a constant used at the designer's discretion, and \hat{\theta}(0) = \theta_1 is the initial parameter estimate.
2.4 Pole Placement Adaptive Controller

The performance of the system being controlled depends on the placement of the closed-loop poles. In an adaptive control system it is possible to assign the closed-loop poles to predefined positions. We consider a system of the form

A(q^{-1})\,y(t) = q^{-d}\,B(q^{-1})\,u(t)    (2.4.1)

where u(t) is the system input, y(t) is the system output, and the polynomials A(q^{-1}) and B(q^{-1}) are defined as

A(q^{-1}) = 1 + a_1 q^{-1} + \ldots + a_{n_A} q^{-n_A}    (2.4.2)

B(q^{-1}) = b_0 + b_1 q^{-1} + \ldots + b_{n_B} q^{-n_B}    (2.4.3)

It is well known that the control law has the following structure:

R(q^{-1})\,u(t) = -S(q^{-1})\,y(t) + y^*(t+d)    (2.4.4)

where d \ge 1 is the system delay and y^*(t) is the command signal. The controller polynomials are

R(q^{-1}) = 1 + r_1 q^{-1} + \ldots + r_{m_R} q^{-m_R}    (2.4.5)

S(q^{-1}) = s_0 + s_1 q^{-1} + \ldots + s_{m_S} q^{-m_S}    (2.4.6)

The locations of the desired poles are specified by the polynomial A^*(q^{-1}),

A^*(q^{-1}) = 1 + a_1^* q^{-1} + \ldots + a_{n^*}^* q^{-n^*}    (2.4.7)
The polynomials R(q^{-1}) and S(q^{-1}) from the control law in equation (2.4.4) can be obtained by solving the following Diophantine equation:

A(q^{-1})\,R(q^{-1}) + q^{-d}\,B(q^{-1})\,S(q^{-1}) = A^*(q^{-1})    (2.4.8)

This polynomial equation has a unique solution only if A(q^{-1}) and B(q^{-1}) are coprime and the degrees of the controller polynomials are chosen as

m_R = n_B + d - 1, \qquad m_S = n_A - 1    (2.4.9)

The closed loop system is depicted in Figure 2.4.1 below.

Figure 2.4.1 Pole Placement Controller (block diagram: the controller, built from the blocks 1/R and S, drives the plant q^{-d}B/A; a disturbance enters at the plant output)
If we solve for u(t) in equation (2.4.4) and substitute into (2.4.1), we obtain

y(t) = \frac{q^{-d}\,B(q^{-1})}{A R + q^{-d} B S}\; y^*(t+d)    (2.4.10)

Next we substitute (2.4.8) into the denominator of equation (2.4.10) to derive

y(t) = \frac{q^{-d}\,B(q^{-1})}{A^*(q^{-1})}\; y^*(t+d)    (2.4.11)

The poles of the closed loop system thus have the desired locations defined by the polynomial A^*(q^{-1}). In an adaptive control system, A(q^{-1}) and B(q^{-1}) are not always known. The objective of the designer is then a controller such that the closed loop system still has the desired poles defined by A^*(q^{-1}). In this case, since A(q^{-1}) and B(q^{-1}) are unknown, we need to identify the system parameters, in other words the coefficients of the polynomials A(q^{-1}) and B(q^{-1}). The system given by equation (2.4.1) can be written in the form

y(t) = -a_1 y(t-1) - \ldots - a_{n_A} y(t-n_A) + b_0 u(t-d) + b_1 u(t-d-1) + \ldots + b_{n_B} u(t-d-n_B)    (2.4.12)

Using previously derived control theory we can rewrite (2.4.12) as

y(t) = \theta_0^T\,\phi(t-1)    (2.4.15)

where

\theta_0^T = [a_1, \ldots, a_{n_A},\; b_0, \ldots, b_{n_B}]    (2.4.16)
\phi(t-1)^T = [-y(t-1), \ldots, -y(t-n_A),\; u(t-d), u(t-d-1), \ldots, u(t-d-n_B)]    (2.4.17)

The identification algorithm has the following form and is called the output error prediction algorithm:

\hat{\theta}(t) = \hat{\theta}(t-1) + \frac{\mu\,\phi(t-1)}{c + \|\phi(t-1)\|^2}\,[y(t) - \hat{\theta}(t-1)^T\,\phi(t-1)]    (2.4.18)

where (i) 0 < \mu < 2 is the algorithm gain and (ii) 0 < c < \infty is a constant used at the designer's discretion. This algorithm is driven by the output prediction error

e(t) = y(t) - \hat{\theta}(t-1)^T\,\phi(t-1)    (2.4.19)

where

\hat{y}(t) = \hat{\theta}(t-1)^T\,\phi(t-1)    (2.4.20)

represents the prediction of the output signal. Since at time t-1

\hat{\theta}(t-1)^T = [\hat{a}_1(t-1), \ldots, \hat{a}_{n_A}(t-1),\; \hat{b}_0(t-1), \ldots, \hat{b}_{n_B}(t-1)]    (2.4.21)

we can form estimates of the polynomials A(q^{-1}) and B(q^{-1}), i.e.

\hat{A}(t-1, q^{-1}) = 1 + \hat{a}_1(t-1)\,q^{-1} + \ldots + \hat{a}_{n_A}(t-1)\,q^{-n_A}    (2.4.22)

\hat{B}(t-1, q^{-1}) = \hat{b}_0(t-1) + \hat{b}_1(t-1)\,q^{-1} + \ldots + \hat{b}_{n_B}(t-1)\,q^{-n_B}    (2.4.23)

If we place these estimates into equation (2.4.8) we obtain
\hat{A}(t-1, q^{-1})\,\hat{R}(t-1, q^{-1}) + q^{-d}\,\hat{B}(t-1, q^{-1})\,\hat{S}(t-1, q^{-1}) = A^*(q^{-1})    (2.4.24)

from which we can obtain estimates of the polynomials \hat{R}(t-1, q^{-1}) and \hat{S}(t-1, q^{-1}). The adaptive Pole Placement controller can now be derived in the form

\hat{R}(t-1, q^{-1})\,u(t-d) = -\hat{S}(t-1, q^{-1})\,y(t-d) + y^*(t)    (2.4.25)

When the parameter estimates converge to the true values, \hat{\theta}(t) \to \theta_0, we have \hat{A}(t, q^{-1}) \to A(q^{-1}) and \hat{B}(t, q^{-1}) \to B(q^{-1}), and the poles of the closed loop system will be at the desired locations defined by the polynomial A^*(q^{-1}).
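To make the mechanics of (2.4.8) and (2.4.4) concrete, here is a Python sketch for a hypothetical first-order plant with d = 1 (all coefficient values invented). With A = 1 + a_1 q^{-1}, B = b_0, and desired A* = 1 + a_1* q^{-1}, the Diophantine equation with R = 1 and S = s_0 reduces to a_1 + b_0 s_0 = a_1*:

```python
# Hypothetical plant: y(t+1) = -a1*y(t) + b0*u(t), i.e. A = 1 + a1 q^-1, B = b0, d = 1.
a1, b0 = -0.9, 0.5       # open-loop pole at z = 0.9 (slow)
a1_star = -0.5           # desired closed-loop pole at z = 0.5 (faster)

# Diophantine equation (2.4.8) with R = 1, S = s0:  a1 + b0*s0 = a1_star
s0 = (a1_star - a1) / b0

# Simulate the closed loop with a constant command y*(t) = 1.
y = 0.0
history = []
for t in range(100):
    u = -s0 * y + 1.0        # control law (2.4.4): u(t) = -s0*y(t) + y*(t+1)
    y = -a1 * y + b0 * u     # plant update: closed loop becomes y(t+1) = -a1_star*y(t) + b0
    history.append(y)
```

With these invented numbers the closed-loop recursion is y(t+1) = 0.5 y(t) + 0.5, so the output converges to 1 with the designed pole at 0.5, illustrating (2.4.11).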
3. Algorithm Design Procedure for Disturbances

The Model Reference Adaptive Controller in Section 2.3 was developed from

\frac{B\,T}{A R + q^{-d} B S} = \frac{B_m}{A_m}    (3.1)

without taking external disturbances into account. When external disturbances exist, it is not sufficient to specify only the reference model B_m / A_m. The output feedback in the system will pick up additional dynamic disturbances that are not acted upon by the command signal. The controller designer should strive to identify the physical properties of the disturbances. We need to specify the observer dynamics A_0(q^{-1}), which take the disturbances into account. The observer polynomial A_0(q^{-1}) is designed to minimize the effect of the disturbances on the controlled system. The designer needs to select the polynomials R, S, and T in equation (3.1) so that equality is achieved. Let us choose

T = B_m\,A_0    (3.2)

where A_0(q^{-1}) is the observer polynomial specified by the designer to take the disturbances in the controlled system into account. We can determine the polynomials R and S as a solution to the Diophantine equation
A\,R_1 + q^{-d}\,S = A_0\,A_m    (3.3)

The polynomials R_1 and S have the form

R_1(q^{-1}) = 1 + r_1 q^{-1} + \ldots + r_{d-1} q^{-d+1}    (3.4)

S(q^{-1}) = s_0 + s_1 q^{-1} + \ldots + s_{n_s} q^{-n_s}    (3.5)

where

n_s = max\{n - 1,\; deg A_m + deg A_0 - d\}    (3.6)

n_{R_1} = d - 1    (3.7)

and the polynomial

R = B\,R_1    (3.8)

All of the controller polynomials have now been determined from equations (3.2), (3.3), and (3.8). We can verify that the polynomials R, S, and T determined in equations (3.2), (3.3), and (3.8) satisfy equation (3.1). Substituting T from equation (3.2) and R from equation (3.8) we are able to derive

\frac{B\,T}{A R + q^{-d} B S} = \frac{B\,B_m\,A_0}{B\,(A R_1 + q^{-d} S)} = \frac{B_m\,A_0}{A R_1 + q^{-d} S}    (3.9)

We know from equation (3.3) that A R_1 + q^{-d} S = A_0 A_m. If we substitute into (3.9) we can derive

\frac{B\,T}{A R + q^{-d} B S} = \frac{B_m\,A_0}{A_0\,A_m} = \frac{B_m}{A_m}    (3.10)
We know from equation (3.10) that the closed loop system will act like the specified reference model. The observer polynomial A_0(q^{-1}) must always be a stable polynomial, with all of its zeros located inside the unit circle in the z-plane.
4. Design Procedure When the System is Nonminimum Phase

If the system being controlled is nonminimum phase, then the polynomial B(q^{-1}) has unstable zeros located outside of the unit circle in the z-plane. The system is given as

A(q^{-1})\,y(t) = q^{-d}\,B(q^{-1})\,u(t)    (4.1)

and the reference model is given by

A_m(q^{-1})\,y_m(t) = q^{-d}\,B_m(q^{-1})\,r(t)    (4.2)

The reference model specifies the desired response to the command signal r(t). The objective is to design a controller

R(q^{-1})\,u(t) = -S(q^{-1})\,y(t) + T(q^{-1})\,r(t)    (4.3)

such that the system given in equation (4.1) behaves like the reference model given in equation (4.2), or in other words, the error in the system goes to zero when the controller is applied:

\lim_{t \to \infty}\,[y(t) - y_m(t)] = 0    (4.4)

In order to make the design possible we should specify the reference model so that

B_m(q^{-1}) = B^-(q^{-1})\,B_m'(q^{-1})    (4.5)
where B^- is the polynomial containing the unstable zeros of the system and B_m'(q^{-1}) is the part of the reference model numerator left free to the designer. We can write the polynomial B(q^{-1}) in the form

B(q^{-1}) = B^+(q^{-1})\,B^-(q^{-1})    (4.6)

where B^+ has all stable zeros. Our design is based on

\frac{B\,T}{A R + q^{-d} B S} = \frac{B_m}{A_m}    (4.7)

which means that the closed loop system transfer function obtained from equation (4.1) is equal to the transfer function of the reference model. If we substitute equations (4.5) and (4.6) into equation (4.7) we derive

\frac{B^+ B^-\,T}{A R + q^{-d} B^+ B^- S} = \frac{B^-\,B_m'}{A_m}    (4.8)

This equation will be satisfied if we choose the controller polynomials R, S, and T as follows:

T = B_m'\,A_0 \quad \text{and} \quad R = B^+\,R_1    (4.9)

where R_1 is obtained from the polynomial equation

A\,R_1 + q^{-d}\,B^-\,S = A_0\,A_m    (4.10)

We can also use equation (4.10) to calculate the polynomial S. The polynomials R_1 and S have the following form:

R_1(q^{-1}) = 1 + r_1 q^{-1} + \ldots + r_{n_R} q^{-n_R}    (4.11)

S(q^{-1}) = s_0 + s_1 q^{-1} + \ldots + s_{n_s} q^{-n_s}    (4.12)
where

n_R = deg B^- + d - 1    (4.13)

n_s = max\{n - 1,\; deg A_m + deg A_0 - d\}    (4.14)

Thus from equation (4.10) we can derive R_1 and S, and from equation (4.9) we can derive T and R.
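The factorization B = B^+ B^- in (4.6) amounts to sorting the zeros of B by magnitude. For a second-order numerator this can be done directly from the quadratic formula; the following Python sketch uses an invented plant polynomial for illustration:

```python
import cmath

def factor_quadratic(b0, b1, b2):
    """Split B(z) = b0*z^2 + b1*z + b2 into stable (|z| < 1) and
    unstable (|z| >= 1) zeros for the B = B+ B- factorization of (4.6)."""
    disc = cmath.sqrt(b1 * b1 - 4.0 * b0 * b2)   # handles complex zero pairs
    roots = [(-b1 + disc) / (2.0 * b0), (-b1 - disc) / (2.0 * b0)]
    stable = [z for z in roots if abs(z) < 1.0]
    unstable = [z for z in roots if abs(z) >= 1.0]
    return stable, unstable

# Invented example: B(z) = (z - 0.5)(z - 2) = z^2 - 2.5z + 2... expanded below.
# One zero inside and one outside the unit circle, so the plant is
# nonminimum phase: B+ = (z - 0.5) may be cancelled, B- = (z - 2) may not.
stable, unstable = factor_quadratic(1.0, -2.5, 1.0)
```

For higher-order B a general root finder would be used instead; the design rule is the same: only the zeros in the stable set may appear in R = B^+ R_1.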
5. Practical Issues and Robust Control Implementation

The motivation behind the development of adaptive control theory is to take into account the uncertainty in the physical parameters of the system being controlled. During the 1970s it was generally assumed that physical systems could be described precisely by linear system models with unknown parameters but a known upper bound on the system order. As these control systems were implemented, it was noticed that even if the controller had been designed to take into account the presence of external disturbances or small modeling errors, the closed loop could become unstable. The result of this has been the development of robust adaptive control theory. The methodology for robust control presented in this thesis develops a unified framework for systematic robust control design. Global stability analysis drives the design of the adaptive controller in the presence of unmodeled dynamics and external disturbances. The use of robust adaptive control systems makes it possible for the adaptive controller to stabilize the controlled system by generating the correct parameters in the presence of unmodeled disturbances or noise. The general form of a robust adaptive controller is depicted in Figure 5.1 on the following page.
Figure 5.1 Adaptive Robust Controller (block diagram: a design block supplies controller parameters, an estimator supplies process parameters, and the controller drives the plant)

5.1 Computational Delay

As seen in Figure 5.1, the filters and the digital-to-analog / analog-to-digital converters introduce computational delay into the controlled system between the system input and output. This delay is introduced in one of two ways, depending on how the control system is implemented. In the first, the output of the system is measured at some time t_k; the measurement is then used to compute the control signal, which is applied at time t_{k+1}. The second is the same as the first except that the control signal is applied as soon as it is computed, as shown in Figure 5.1.1 on the following page.
Figure 5.1.1 Computational Delay (two timing diagrams of the control signal u versus time: (a) u is applied one full sample after the measurement; (b) u is applied as soon as the computational delay has elapsed)

Both approaches have disadvantages. In the first, shown in Figure 5.1.1 (a), the control signal is delayed unnecessarily long; in the second, shown in Figure 5.1.1 (b), the time delay may change as the load on the computer changes. In both cases it is necessary to take the delay into account when designing the control system. When the second approach is used, it is advantageous to make the delay as small as possible by performing as few operations as possible between the analog-to-digital and digital-to-analog conversions. Assume that the regulator has the form

u(t) + r_1 u(t-1) + \ldots + r_k u(t-k) = t_0 u_c(t) + \ldots + t_m u_c(t-m) - s_0 y(t) - \ldots - s_l y(t-l)    (5.1.1)

We can rewrite equation (5.1.1) as
u(t) = t_0 u_c(t) - s_0 y(t) + u'(t-1)    (5.1.2)

where

u'(t-1) = t_1 u_c(t-1) + \ldots + t_m u_c(t-m) - s_1 y(t-1) - \ldots - s_l y(t-l) - r_1 u(t-1) - \ldots - r_k u(t-k)    (5.1.3)

The signal u'(t-1) contains only information that is available at time t-1. To make the delay as small as possible, the controller should perform the analog-to-digital conversion of y(t) and u_c(t), compute equation (5.1.2), perform the digital-to-analog conversion, and only then compute equation (5.1.3). This significantly reduces the delay in the controller: apart from testing u(t) against its limits, only two multiplications and two additions are performed before the control signal is output. The computational delay appears in the controlled system the same way as a time delay in the system dynamics, and it is important to take it into account when designing the controller. The general rule of thumb is that the delay can be ignored if it is less than 10% of the sampling period; in high performance systems it should always be accounted for. Several iterations of the design process may have to be performed in order to tune the controlled system, as the computational delay will not be known until the control system is implemented.
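The restructuring in (5.1.2)-(5.1.3) can be sketched in code. The following Python illustration (with invented first-order coefficients, k = m = l = 1, and invented signal sequences) verifies that the two-phase form produces exactly the same control sequence as evaluating the full regulator equation:

```python
# Regulator: u(t) + r1*u(t-1) = t0*uc(t) + t1*uc(t-1) - s0*y(t) - s1*y(t-1)
r1, t0, t1, s0, s1 = 0.4, 1.2, -0.3, 0.8, -0.2   # invented coefficients

def direct(uc, y):
    """Evaluate the full regulator equation (5.1.1) at each sample."""
    u_prev, out = 0.0, []
    for t in range(len(y)):
        uc1 = uc[t - 1] if t > 0 else 0.0
        y1 = y[t - 1] if t > 0 else 0.0
        u = t0 * uc[t] + t1 * uc1 - s0 * y[t] - s1 * y1 - r1 * u_prev
        out.append(u)
        u_prev = u
    return out

def split(uc, y):
    """Two-phase form: output u(t) per (5.1.2) immediately,
    then precompute u'(t) per (5.1.3) for the next sample."""
    u_pre, out = 0.0, []             # u'(t-1), ready before the sample arrives
    for t in range(len(y)):
        u = t0 * uc[t] - s0 * y[t] + u_pre   # 2 multiplications, 2 additions
        out.append(u)                        # D-A conversion would happen here
        u_pre = t1 * uc[t] - s1 * y[t] - r1 * u   # done after output, off the critical path
    return out

uc = [1.0, 0.5, -0.25, 0.125, 1.0, 0.0]   # invented command sequence
y = [0.0, 0.2, 0.5, 0.4, 0.1, -0.1]       # invented measurement sequence
```

The design point is that everything in `split` after `out.append(u)` runs off the critical path, so the input-to-output latency is only the two multiplications and two additions of (5.1.2).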
5.2 Sampling, Prefiltering, Postfiltering, and Data Filters

The sampling rate used in an adaptive controller influences many properties of the system being controlled, such as the ability of the controlled system to follow the command signal, rejection of load disturbances and measurement noise, and sensitivity to unmodeled dynamics in the system. The general rule for deterministic systems is to choose the sampling interval h such that

\omega_0 h \approx 0.2 \text{ to } 0.6    (5.2.1)

where \omega_0 is the natural frequency of the dominating poles of the closed loop system. This corresponds to roughly 10 to 30 samples per undamped natural period. The sampling frequency is \omega_s = 2\pi / h. In digital control systems it is necessary that signals be filtered before they are sampled, and all components with frequencies above the Nyquist frequency,

\omega_N = \frac{\pi}{h} = \frac{\omega_s}{2}    (5.2.2)

should be eliminated. If this is not done, a signal component with frequency \omega > \omega_N will appear as a low frequency component with the frequency

\omega_a = |((\omega + \omega_N) \bmod \omega_s) - \omega_N|    (5.2.3)

The appearance of these frequencies is called aliasing, and the filters introduced before the sampler are called prefilters or antialiasing filters. There are several choices for antialiasing filters: second- or fourth-order Butterworth, Bessel, and ITAE (integral
time absolute error) filters. The antialiasing filter can consist of one or several of the above filters in cascade,

G_{aa}(s) = G_1(s)\,G_2(s)\cdots G_k(s)    (5.2.4)

The Bessel filter has the property that its phase curve is approximately linear, which implies that the waveform of the filtered signal is approximately invariant. When the antialiasing filter is implemented with a Bessel filter, its effect can be approximated by a time delay. Assume the bandwidth \omega_B of the filter is chosen so that the attenuation at the Nyquist frequency is

|G_{aa}(i\omega_N)| = \beta    (5.2.5)

where G_{aa}(s) is the transfer function of the filter and \omega_N = \pi / h is the Nyquist frequency. The parameter \beta is the attenuation of the filter at the Nyquist frequency. Table 5.2.1 gives the approximate time delay T_d as a function of \beta and also gives \omega_N as a function of the filter bandwidth \omega_B.

    \beta    \omega_N/\omega_B    T_d/h
    0.05     3.1                  2.1
    0.1      2.5                  1.7
    0.2      2.0                  1.3
    0.5      1.4                  0.9
    0.7      1.0                  0.7

Table 5.2.1 Properties of Fourth Order Bessel Filters

It is easily seen that the relative delay increases with attenuation; for reasonable attenuation the delay is more than one sampling period. It is obvious from the above
that the dynamics of the antialiasing filter must be taken into account during the design process for the robust adaptive controller. If the antialiasing filter is implemented with a Bessel filter, it is sufficient to approximate it by a time delay; the additional dynamics cause no particular problems for an adaptive controller, because all parameters are estimated, and the time delay simply adds one more parameter to the system model. In control systems where the sampling rate has to be changed, the usual implementation is dual-rate sampling: a high fixed sampling rate is used in conjunction with a fixed analog filter, and a digital filter is then used to resample the signal at a slower rate when needed. This implies that fewer parameters have to be estimated. The output of a digital-to-analog converter is a piecewise constant signal that changes stepwise at the sampling instants. This is adequate for most systems. However, some systems have poorly damped oscillatory modes, and the steps may excite these modes. In such cases it is advantageous to use a filter that smoothes the signal from the D-A converter. This filter is called a postsampling filter. It may be a simple continuous time filter with a response time that is short in comparison with the sampling time; another solution is to use dual-rate sampling. This leads us to the conclusion that adaptive control systems should be designed so that the output is piecewise linear between the sampling instants. Fast sampling can then be used to generate an approximation of the signal, possibly followed by an analog postsampling filter.
If we assume that the process being controlled is given by the discrete time model

y(t) = G_0(q)\,u(t) + v(t)    (5.2.6)

then the antialiasing filter contributes as part of the process G_0(q). The noise or disturbance v(t) can be the sum of deterministic, piecewise deterministic, and stochastic disturbances, and it has both low frequency and high frequency components. In stochastic control problems it is important that the controller be tuned to a particular disturbance spectrum, for which an estimate of the disturbance characteristics is generated. In a deterministic environment we are interested mainly in the term G_0(q) u(t) and not particularly concerned with the term v(t). The presence of the term v(t) will, however, create difficulties in the parameter estimation. We can reduce the effect of v(t) if we filter the input to the estimator with data filters having the transfer function H_f(q). Applying this filter to equation (5.2.6) gives

y_f(t) = H_f(q)\,G_0(q)\,u(t) + H_f(q)\,v(t)    (5.2.7)

where y_f(t) = H_f(q)\,y(t). The proper choice of a data filter will make the relative influence of the disturbance term smaller in equation (5.2.7) than in equation (5.2.6). The choice of the filter should also emphasize the frequency ranges that are of primary importance in the control design.
The disturbance v(t) typically has significant low frequency components that should be reduced. Very high frequencies should also be attenuated. The reason for this is that if the model A(q)\,y_f(t) = B(q)\,u_f(t) is fitted by least squares, it is desirable that A(q)\,v_f(t) be white noise. Since filtering with A implies that high frequencies are amplified, it means that v_f(t) should not contain high frequencies. Therefore, the data filter will have band pass characteristics, as shown in Figure 5.2.1 below.

Figure 5.2.1 Amplitude Curve for the Data Filter H_f(q) (band pass amplitude characteristic versus frequency in rad/sec)

The center frequency is typically around the crossover frequency of the system being controlled. A typical data filter is given by

H_f(q) = \frac{(1-a)(q-1)}{q-a}    (5.2.8)
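The data filter (5.2.8) corresponds to the difference equation y_f(t) = a y_f(t-1) + (1-a)(y(t) - y(t-1)). A short Python sketch (the pole location a and the input signal are chosen arbitrarily):

```python
def data_filter(signal, a=0.9):
    """H_f(q) = (1-a)(q-1)/(q-a): the zero at q = 1 rejects DC, and the
    gain at q = -1 is 2(1-a)/(1+a) (about 0.105 for a = 0.9), so very
    high frequencies are also attenuated -- a band pass characteristic."""
    yf, y_prev, out = 0.0, 0.0, []
    for y in signal:
        yf = a * yf + (1.0 - a) * (y - y_prev)
        y_prev = y
        out.append(yf)
    return out

# A constant (pure DC) input is rejected asymptotically: the filtered
# signal decays geometrically as a**t after the initial transient.
const = data_filter([1.0] * 200)
```

Passing both the input u and the output y through the same H_f before the estimator, as in (5.2.7), leaves the fitted model A, B unchanged while shrinking the low-frequency disturbance content.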
5.3 Parameter Tracking

The ability of a robust adaptive controller to track variations in process dynamics is a key property. In order to accomplish this it is necessary to discount old data. When the process parameters are constant, it is desirable to base the parameter estimates on many measurements to minimize the effects of disturbances. When the process parameters are changing, it can be misleading to base the estimates on many measurements. The exponential forgetting factor \lambda gives us the means to discard stale data. The least squares algorithm with exponential forgetting is given as

\hat{\theta}(t+1) = \hat{\theta}(t) + p(t)\,\phi(t)\,[y(t) - \hat{\theta}(t)^T\,\phi(t)]    (5.3.1)

where

p(t) = \frac{1}{\lambda}\left[\,p(t-1) - \frac{p(t-1)\,\phi(t)\,\phi^T(t)\,p(t-1)}{\lambda + \phi^T(t)\,p(t-1)\,\phi(t)}\,\right]    (5.3.2)

and the sampling period h is chosen as the time unit. The forgetting factor is given by

\lambda = e^{-h/T_f}    (5.3.3)

where T_f is the time constant for the forgetting. Table 5.3.1 on the following page gives different values of \lambda for different values of T_f / h.
    T_f/h    \lambda
    1        0.37
    2        0.61
    5        0.82
    10       0.90
    20       0.95
    50       0.98
    100      0.99

Table 5.3.1 Relation Between T_f/h and \lambda

It should be noted that it is possible to generalize the method of the forgetting factor and have different forgetting factors for different parameters. This requires information about the nature of the changes in the different parameters.

5.4 Covariance Resetting

In some controlled systems the parameters are constant over long periods of time and then change abruptly. In these cases the forgetting factor is less than suitable, and it is more appropriate to reset the covariance matrix when the changes occur. When a large change is detected, the covariance matrix can be reset by reducing \lambda to a very small value, on the order of \lambda = 0.0001. This is directly related to the excitation of the controlled system by the command signal: good excitation is obtained only when the command signal continues to change.
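The entries of Table 5.3.1 follow directly from \lambda = e^{-h/T_f} in equation (5.3.3); a quick numeric check in Python:

```python
import math

def forgetting_factor(Tf_over_h):
    """lambda = exp(-h/Tf), with the sampling period h as the time unit."""
    return math.exp(-1.0 / Tf_over_h)

# Reproduce Table 5.3.1 to two decimal places.
table = {Tf: round(forgetting_factor(Tf), 2)
         for Tf in (1, 2, 5, 10, 20, 50, 100)}
```

The memory interpretation is direct: data older than about T_f samples carries weight below e^{-1}, so choosing T_f trades disturbance averaging against tracking speed.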
5.5 Estimator Windup and Methods to Combat Windup

As mentioned in the previous section, exponential forgetting works well only when the controlled system is properly excited all of the time. If the excitation is poor, the estimator runs into problems. If there is no excitation in the system at all, i.e. \phi = 0, the equations for the estimator become

\hat{\theta}(t+1) = \hat{\theta}(t), \qquad p(t+1) = \frac{1}{\lambda}\,p(t)    (5.5.1)

It is obvious from equations (5.5.1) that the update for the estimate \hat{\theta} has all eigenvalues equal to 1, and the equation for the p matrix is unstable with all eigenvalues equal to 1/\lambda. The estimates will therefore remain constant while the p matrix grows exponentially. The result of this is that whenever \phi again becomes different from zero, the estimates will change drastically. This phenomenon is called estimator windup. A similar situation occurs when the regression vector is different from zero but restricted to a subspace. When the vector is constant, we obtain information only about the component of the parameter vector that is parallel to the regression vector. That component can be estimated reliably with exponential forgetting; the projection of the p matrix in this direction remains bounded, while the orthogonal part of the p matrix goes to infinity as \lambda^{-t}. Estimator windup is thus the result of exponential forgetting combined with poor excitation.
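The windup mechanism in (5.5.1) is easy to reproduce with a scalar estimator (the numbers are invented): with \phi = 0 the scalar covariance is simply divided by \lambda each step, so it grows geometrically.

```python
def forgetting_step(p, phi, lam=0.95):
    """Scalar covariance update with exponential forgetting, cf. (5.3.2).
    With phi = 0 this reduces to p <- p / lam."""
    return (p - p * p * phi * phi / (lam + phi * phi * p)) / lam

p = 1.0
for _ in range(100):        # no excitation: phi = 0 for 100 samples
    p = forgetting_step(p, 0.0)
# p has grown to 1/lam**100 (about 170 here); the first nonzero regressor
# after this stretch meets a huge gain p*phi, producing a drastic
# parameter jump -- exactly the windup phenomenon described above.
```

This is why the remedies that follow (conditional updating, constant trace, directional forgetting, leakage) all act on the covariance update rather than on the parameter update.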
5.5.1 Conditional Updating

The above leads us in the direction of designing robust controllers with good excitation, or using algorithms that minimize estimator windup. Conditional updating is one way to combat estimator windup: the estimates and the covariance matrix are updated only when there is excitation present in the controlled system. The algorithms obtained are called algorithms with conditional updating, or dead zones. Correct detection of excitation should be based on calculation of covariances, but simpler conditions are often used in practice. Common tests are based on the magnitudes of the variations in the process inputs and outputs or of other signals such as \varepsilon and P\phi. The selection of conditions for updating is critical: if the criterion is too stringent, the estimates will be poor because updating is done too infrequently; if the criterion is too liberal, we get covariance windup.

5.5.2 Constant Trace Algorithms

Constant trace algorithms are an additional way to keep the P matrix bounded, by scaling the matrix at each iteration. A popular method is to scale the matrix in such a way that the trace of the matrix is constant. An additional refinement is to add a small unit matrix, which gives the so-called regularized constant trace algorithm:
\hat{\theta}(t) = \hat{\theta}(t-1) + K(t)\,(y(t) - \phi(t)^T\,\hat{\theta}(t-1))

K(t) = P(t-1)\,\phi(t)\,(\lambda + \phi(t)^T\,P(t-1)\,\phi(t))^{-1}

\bar{P}(t) = \frac{1}{\lambda}\left[\,P(t-1) - \frac{P(t-1)\,\phi(t)\,\phi(t)^T\,P(t-1)}{\lambda + \phi(t)^T\,P(t-1)\,\phi(t)}\,\right]    (5.5.2.1)

P(t) = c_1\,\frac{\bar{P}(t)}{\mathrm{tr}(\bar{P}(t))} + c_2\,I    (5.5.2.2)

where c_1 > 0 and c_2 \ge 0; typical choices make c_1 several orders of magnitude larger than c_2. The constant trace algorithm can also be combined with conditional updating.

5.5.3 Directional Forgetting

Directional forgetting is another means of combating windup: exponential forgetting is applied only in the direction of the regression vector. With exponential forgetting, the inverse of the P matrix evolves as

P^{-1}(t+1) = \lambda\,P^{-1}(t) + \phi(t)\,\phi(t)^T    (5.5.3.1)

In directional forgetting we start with the update without forgetting,

P^{-1}(t+1) = P^{-1}(t) + \phi(t)\,\phi(t)^T    (5.5.3.2)

The matrix P^{-1}(t) is decomposed as

P^{-1}(t) = \bar{P}^{-1}(t) + \gamma(t)\,\phi(t)\,\phi(t)^T    (5.5.3.3)
where \bar{P}^{-1}(t)\,\phi(t) = 0. This gives

\gamma(t) = \frac{\phi(t)^T\,P^{-1}(t)\,\phi(t)}{(\phi(t)^T\,\phi(t))^2}    (5.5.3.4)

Exponential forgetting is then applied only to the second term of equation (5.5.3.3), which corresponds to the direction in which new information is obtained. This gives

P^{-1}(t+1) = \bar{P}^{-1}(t) + \lambda\,\gamma(t)\,\phi(t)\,\phi(t)^T + \phi(t)\,\phi(t)^T    (5.5.3.5)

which can be written as

P^{-1}(t+1) = P^{-1}(t) - (1-\lambda)\,\gamma(t)\,\phi(t)\,\phi(t)^T + \phi(t)\,\phi(t)^T

Several variations of this algorithm exist. The forgetting factor \lambda is sometimes made a function of the data, and another method has the property that the P matrix is driven toward a matrix proportional to the identity matrix when there is poor excitation.

5.5.4 Leakage

Leakage is another way to avoid estimator windup. In continuous time, the estimator is modified by adding the term \alpha\,(\theta_0 - \hat{\theta}). This means that the parameters will converge to \theta_0 when no useful information is obtained, that is, when the prediction error \varepsilon = 0. A similar modification can also be made to discrete time estimators. When the least squares algorithm is used, it is also common to add a similar term to the P equation to drive it toward a specified matrix.
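Of the remedies above, the regularized constant trace algorithm of Section 5.5.2 is the most mechanical to implement; a pure-Python sketch follows (the dimensions, constants c_1, c_2, regressors, and true parameters are invented for illustration). After every step, tr(P) = c_1 + c_2 n by construction, so P can never wind up:

```python
import math

def ct_step(theta, P, phi, y, lam=0.98, c1=4.0, c2=0.05):
    """Regularized constant-trace RLS step, cf. (5.5.2.1)-(5.5.2.2)."""
    n = len(theta)
    Pphi = [sum(P[i][j] * phi[j] for j in range(n)) for i in range(n)]
    denom = lam + sum(phi[i] * Pphi[i] for i in range(n))
    K = [Pphi[i] / denom for i in range(n)]
    err = y - sum(phi[i] * theta[i] for i in range(n))
    theta = [theta[i] + K[i] * err for i in range(n)]
    # forgetting update, then rescale so that tr(P) = c1 + c2*n exactly
    Pbar = [[(P[i][j] - Pphi[i] * Pphi[j] / denom) / lam for j in range(n)]
            for i in range(n)]
    tr = sum(Pbar[i][i] for i in range(n))
    P = [[c1 * Pbar[i][j] / tr + (c2 if i == j else 0.0) for j in range(n)]
         for i in range(n)]
    return theta, P

theta, P = [0.0, 0.0], [[100.0, 0.0], [0.0, 100.0]]
for t in range(80):
    phi = [math.sin(0.3 * t), 1.0]          # exciting regressor
    y = 0.7 * phi[0] - 0.2 * phi[1]         # invented true parameters
    theta, P = ct_step(theta, P, phi, y)
```

The c_2 I term keeps the smallest eigenvalue of P away from zero, so the estimator retains some gain in every direction even after long stretches of one-directional excitation.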
5.6 Robust Estimation

Least squares based algorithms tend to operate optimally if the disturbances are white noise. However, in practice, least squares based adaptive controllers have drawbacks because the assumptions that the controllers are based on are violated. A single large error will have a large influence on the estimates because the error is squared in the criterion. This is a direct consequence of the Gaussian assumption that the probability of large errors is very small. Control algorithms with very different properties are obtained if it is assumed that the probability of large errors is not negligible. These algorithms will have update equations such as

\hat{\theta}(t) = \hat{\theta}(t-1) + p(t)\,\phi(t-1)\,f(\varepsilon(t))    (5.6.1)

where the function f(\varepsilon) is linear for small \varepsilon but increases more slowly for large \varepsilon, such as

f(\varepsilon) = \frac{\varepsilon}{1 + \alpha\,|\varepsilon|}    (5.6.2)

This has the effect of decreasing the influence of large errors. In practical applications it can be shown that controllers need integral action to ensure that calibration errors and load disturbances do not give steady state errors. This type of situation can occur quite frequently. In order to check whether a particular controller has this
ability, we investigate possible stationary solutions. Consider a self-tuning system based on least squares estimation and minimum variance control, where the regulator is

u(t) = -\frac{S^*(q)}{R^*(q)}\,y(t)    (5.6.3)

The conditions for a stationary solution are

r_y(\tau) = \lim_{N \to \infty} \frac{1}{N} \sum_{k=1}^{N} y(k+\tau)\,y(k) = 0, \qquad \tau = d, \ldots, d+l    (5.6.4)

r_{yu}(\tau) = \lim_{N \to \infty} \frac{1}{N} \sum_{k=1}^{N} y(k+\tau)\,u(k) = 0, \qquad \tau = d, \ldots, d+k    (5.6.5)

where k and l are the degrees of the R^* and S^* polynomials, respectively. These conditions are not satisfied unless the mean value of y is zero. When an offset is present, the parameter estimates will take values such that R^*(1) = 0; that is, there is an integrator in the controller. The sections discussed above in Chapter 5 lead us to the following conclusion: the use of antialiasing filters, postsampling filters, data filters, forgetting factors, methods to combat windup, and integral action furnishes the means to design robust adaptive controllers that adapt well to noise and disturbances.
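The saturating error function (5.6.2) used for robust estimation is one line of code; the sketch below (with \alpha chosen arbitrarily) shows that it passes small errors through essentially unchanged while bounding the influence of outliers by 1/\alpha:

```python
def f(eps, alpha=1.0):
    """f(eps) = eps / (1 + alpha*|eps|): linear near zero, bounded by 1/alpha."""
    return eps / (1.0 + alpha * abs(eps))

small = f(0.001)     # ~0.001: a small error is passed through almost unchanged
large = f(1000.0)    # <1.0:   a huge outlier barely moves the estimate
```

Substituting f(\varepsilon) for the raw error in the least squares update (5.6.1) is what turns a single bad measurement from a catastrophic jump into a bounded nudge.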
6. Simulations

The first simulation given in this chapter illustrates plant parameter estimation and adaptive control. The examples used here for plant control utilize a servomotor. The continuous time transfer function of the servomotor, from voltage input to shaft position output, is given by

G(s) = \frac{K}{s\,(\tau s + 1)}    (6.1)

where K is the plant gain and \tau is the time constant. The servomotor parameters are K = 30 and \tau = 0.1 sec. The discrete time transfer function is

H(z) = \frac{K_d\,(z + \beta)}{(z - 1)(z - \alpha)}    (6.2)

The discrete plant gain is

K_d = K\,(\tau\alpha - \tau + T)    (6.3)

The constants are given by

\alpha = e^{-T/\tau}    (6.4)

\beta = \frac{\tau - \tau\alpha - \alpha T}{\tau\alpha - \tau + T}    (6.5)
where T is the sampling period; the sampling rate is 40 msec. The discrete time plant parameters are:

\alpha = e^{-0.04/0.1} = 0.6703

\beta = \frac{0.1 - 0.1 \cdot 0.6703 - 0.6703 \cdot 0.04}{0.1 \cdot 0.6703 - 0.1 + 0.04} = 0.8753

K_d = 30\,(0.1 \cdot 0.6703 - 0.1 + 0.04) = 0.211

With these parameters the discrete time transfer function becomes

H(z) = \frac{0.2\,(z + 0.875)}{(z - 1)(z - 0.6703)}    (6.6)

From this transfer function the discrete time model obtained is

y(t) = H(q^{-1})\,u(t)    (6.7)

y(t+1) - 1.67\,y(t) + 0.67\,y(t-1) = 0.2\,u(t) + 0.18\,u(t-1)    (6.8)
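The numbers above follow directly from equations (6.3)-(6.5); a short Python check (the thesis simulations themselves were done in Matlab):

```python
import math

K, tau, T = 30.0, 0.1, 0.04   # plant gain, time constant, sampling period (sec)

alpha = math.exp(-T / tau)                                        # (6.4)
Kd = K * (tau * alpha - tau + T)                                  # (6.3)
beta = (tau - tau * alpha - alpha * T) / (tau * alpha - tau + T)  # (6.5)
```

Note that K_d evaluates to about 0.2110, which the thesis rounds to 0.2 in the transfer function (6.6).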
6.1 Simulation Using the Least Squares Algorithm

The plant model previously derived is

y(t+1) - 1.67\,y(t) + 0.67\,y(t-1) = 0.2\,u(t) + 0.18\,u(t-1)    (6.1.1)

The least squares algorithm is

\hat{\theta}(t+1) = \hat{\theta}(t) + p(t)\,\phi(t)\,[y(t) - \hat{\theta}(t)^T\,\phi(t)]    (6.1.2)

where

p(t) = \frac{1}{\lambda}\left[\,p(t-1) - \frac{p(t-1)\,\phi(t)\,\phi^T(t)\,p(t-1)}{\lambda + \phi^T(t)\,p(t-1)\,\phi(t)}\,\right]    (6.1.3)

and \lambda gives exponential data weighting. When the parameter vector \theta_0 has changed, the most recent data collected in the measurement vector \phi(t) is more informative than past data. The use of exponential data weighting discards old data and gives higher weight to the most recent data in p(t). \lambda is defined as

\lambda(t) = \lambda_0\,\lambda(t-1) + (1 - \lambda_0), \qquad 0 < \lambda(t) < 1    (6.1.4)

The reference model output is given as a sinusoid of amplitude 15,

y_m(t) = 15\,\sin(\omega_0\,t)    (6.1.5)

The simulations were performed with Matlab version 5.1. The initial condition selected for \hat{\theta} in the simulations is

\hat{\theta}(0) = [1\; 1\; 1\; 1]^T    (6.1.6)
[Plot omitted: four panels (Reference Signal, System Output, Tracking Error, Theta Vectors), magnitude versus time.]
Figure 6.1.1 Least Square λ = 0.98, No Noise, and No Variance
[Plot omitted: four panels (Reference Signal, System Output, Tracking Error, Theta Vectors), magnitude versus time.]
Figure 6.1.2 Least Square λ = 0.20, No Noise, and No Variance
[Plot omitted: four panels (Reference Signal, System Output, Tracking Error, Theta Vectors), magnitude versus time.]
Figure 6.1.3 Least Square λ = 0.10, No Noise, and No Variance
[Plot omitted: four panels (Reference Signal, System Output, Tracking Error, Theta Vectors), magnitude versus time.]
Figure 6.1.4 Least Square λ = 0.09, No Noise, and No Variance
[Plot omitted: four panels (Reference Signal, System Output, Tracking Error, Theta Vectors), magnitude versus time.]
Figure 6.1.5 Least Square λ = 0.089, No Noise, and No Variance
[Plot omitted: four panels (Reference Signal, System Output, Tracking Error, Theta Vectors), magnitude versus time.]
Figure 6.1.6 Least Square λ = 0.087, No Noise, and No Variance
[Plot omitted: four panels (Reference Signal, System Output, Tracking Error, Theta Vectors), magnitude versus time.]
Figure 6.1.7 Least Square (caption illegible in source)
[Plot omitted: four panels (Reference Signal, System Output, Tracking Error, Theta Vectors), magnitude versus time.]
Figure 6.1.8 Least Square λ = 0.50, Noise = 2, and No Variance
[Plot omitted: four panels (Reference Signal, System Output, Tracking Error, Theta Vectors), magnitude versus time.]
Figure 6.1.9 Least Square λ = 0.20, Noise = 2, and No Variance
[Plot omitted: four panels (Reference Signal, System Output, Tracking Error, Theta Vectors), magnitude versus time.]
Figure 6.1.10 Least Square λ = 0.083, Noise = 2, and No Variance
[Plot omitted: four panels (Reference Signal, System Output, Tracking Error, Theta Vectors), magnitude versus time.]
Figure 6.1.11 Least Square λ = 0.98, No Noise, and With Variance = 2
[Plot omitted: four panels (Reference Signal, System Output, Tracking Error, Theta Vectors), magnitude versus time.]
Figure 6.1.12 Least Square λ = 0.50, No Noise, and With Variance = 2
[Plot omitted: four panels (Reference Signal, System Output, Tracking Error, Theta Vectors), magnitude versus time.]
Figure 6.1.13 Least Square λ = 0.30, No Noise, and With Variance = 2
[Plot omitted: four panels (Reference Signal, System Output, Tracking Error, Theta Vectors), magnitude versus time.]
Figure 6.1.14 Least Square λ = 0.08, No Noise, and With Variance = 2
[Plot omitted: four panels (Reference Signal, System Output, Tracking Error, Theta Vectors), magnitude versus time.]
Figure 6.1.15 Least Square λ = 0.07, No Noise, and With Variance = 2
[Plot omitted: four panels (Reference Signal, System Output, Tracking Error, Theta Vectors), magnitude versus time.]
Figure 6.1.16 Least Square λ = 0.98, Noise = 2, and With Variance = 2
[Plot omitted: four panels (Reference Signal, System Output, Tracking Error, Theta Vectors), magnitude versus time.]
Figure 6.1.17 Least Square λ = 0.50, Noise = 2, and With Variance = 2
[Plot omitted: four panels (Reference Signal, System Output, Tracking Error, Theta Vectors), magnitude versus time.]
Figure 6.1.18 Least Square λ = 0.09, Noise = 2, and With Variance = 2
[Plot omitted: four panels (Reference Signal, System Output, Tracking Error, Theta Vectors), magnitude versus time.]
Figure 6.1.19 Least Square (parameter values illegible in source)
6.2 Results of Least Mean Square Simulations

The simulations shown in Figures 6.1.1 through 6.1.19 depict the performance of the least mean square algorithm in four different scenarios:

(i) Simulation with no noise and no variance
(ii) Simulation with noise and no variance
(iii) Simulation with no noise and with variance
(iv) Simulation with noise and with variance

During each set of simulations λ was varied from 0.98 down to a value where the algorithm completely broke down. The simulations show the degradation in performance of the algorithm as λ decreases in value. Each set includes a figure that shows the first occurrence of bursting and a figure that shows the total breakdown of the algorithm as the λ value decreases. The algorithm performed best in the idealized set of simulations with no noise and no variance, with the first bursting observed at λ = 0.089 and total breakdown of the algorithm at λ = 0.087. This illustrates that the parameter estimates still being generated are functional, even at low values of λ. However, as noise and variances are injected into the controlled plant, bursting is observed much earlier as λ is decreased in value. These burstings begin to occur in the range 0.30 ≤ λ ≤ 0.50. It must be noted here that the magnitude of the noise and variance was selected as 2, compared to the magnitude of the reference signal and plant output, which is 15 for these simulations. Total breakdown in these situations was
observed in the same range as in the simulations where no noise and variances were present, λ ≤ 0.087. The least square algorithm in all cases converged in approximately eight to ten iterations. Even though the normalized gradient algorithm is not shown here, this is much better performance than that of the normalized gradient algorithm. The above leads us to conclude that for the least square algorithm a higher value of the forgetting factor λ should be selected, ideally λ = 0.98, for best performance in a noisy environment or in an environment where disturbances are present. The Matlab *.m file used for the least square simulations is given in the appendix.
6.3 Simulation using Model Reference Adaptive Control Algorithm

The model reference adaptive controller developed in section 2.3 is utilized in the following simulations. The plant to be controlled is given by

y(t) − 1.7y(t−1) + 0.9y(t−2) = 1.5u(t−1) + 0.6u(t−2)        (6.3.1)

or, in delay operator form, the plant equation becomes

A(q⁻¹)y(t) = q⁻ᵈB(q⁻¹)u(t)        (6.3.2)

where the delay d = 1. We can also write the plant equation as

y(t) = [q⁻¹(1.5 + 0.6q⁻¹) / (1 − 1.7q⁻¹ + 0.9q⁻²)] u(t)        (6.3.3)

The zero of the plant is at q = −0.6/1.5 = −0.4. This zero is located inside the unit circle in the z plane, so the plant is minimum phase; its poles also lie inside the unit circle, so the plant is stable. We wish to design for a closed loop system response that has a natural frequency of ω₀ = 0.9 rad/sec and a damping ratio of ζ = 0.8. We also design for a steady state error of e_ss = 0 to a step input. A reference model with this form can be described by a continuous time, second order system having the form

Y(s)/U(s) = ω² / (s² + 2ζωs + ω²)        (6.3.4)

It is well known that continuous time systems reach steady state when s = 0. The system given in equation (6.3.4) will have zero steady state error if the output signal matches
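The pole and zero locations of the plant (6.3.3) are easy to confirm numerically. The short Python check below (illustrative, not from the thesis) evaluates the roots of the plant polynomials and their magnitudes.

```python
import numpy as np

poles = np.roots([1.0, -1.7, 0.9])   # roots of z^2 - 1.7z + 0.9 (denominator of 6.3.3)
zero = np.roots([1.5, 0.6])          # root of 1.5z + 0.6 (numerator of 6.3.3)

print(np.abs(poles))   # both moduli = sqrt(0.9) ≈ 0.949 < 1: stable plant
print(zero)            # [-0.4]: inside the unit circle, so minimum phase
```

The complex-conjugate pole pair has modulus √0.9 ≈ 0.949, and the single zero sits at −0.4, supporting the stability and minimum-phase claims above.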
the input signal when s = 0, that is, e_ss = 0 if Y(s)/U(s) = 1 at s = 0. For this case the model in equation (6.3.4) becomes

Y(s)/U(s) = ω²/ω² = 1, so e_ss = 0        (6.3.5)

We can rewrite the reference model in discrete time form

y_m(t) + a₁y_m(t−1) + a₂y_m(t−2) = k₀r(t−1)        (6.3.6)

where

a₁ = −2e^(−ζω₀h) cos(ω₀h√(1 − ζ²)),  a₂ = e^(−2ζω₀h)        (6.3.7)

and h is the sample period. For a zero steady state error the system gain must equal 1 at q = 1, which gives k₀ = 1 + a₁ + a₂. Using the desired values ω₀ = 0.9 and ζ = 0.8 and the sampling period h = 1, we can solve for a₁ and a₂:

a₁ = −2e^(−0.8·0.9) cos(0.9√(1 − 0.8²)) = −0.835,  a₂ = e^(−2·0.8·0.9) = 0.237        (6.3.8)

and

k₀ = 1 + a₁ + a₂ = 1 − 0.835 + 0.237 = 0.402        (6.3.9)

The polynomials R, S and T for the model reference controller are found below.
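A quick numeric check of (6.3.7)–(6.3.9), again as a Python sketch rather than the thesis's Matlab, reproduces the reference model coefficients from the design values:

```python
import math

w0, zeta, h = 0.9, 0.8, 1.0                # design values from the text

# Discrete reference model coefficients, equation (6.3.7)
a1 = -2.0 * math.exp(-zeta * w0 * h) * math.cos(w0 * h * math.sqrt(1.0 - zeta**2))
a2 = math.exp(-2.0 * zeta * w0 * h)
k0 = 1.0 + a1 + a2                         # unity DC gain at q = 1, equation (6.3.9)

print(round(a1, 3), round(a2, 3), round(k0, 3))   # -0.835 0.237 0.402
```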
T = B_m = k₀ = 0.402
R = B R₁ = (1.5 + 0.6q⁻¹)R₁, with R₁ = 1
S = s₀ + s₁q⁻¹ = 0.865 − 0.663q⁻¹        (6.3.10)

Our equation for the model reference controller is now

1.5u(t) = −0.6u(t−1) + 0.402r(t) − 0.865y(t) + 0.663y(t−1)

or

u(t) = −0.4u(t−1) + 0.268r(t) − 0.5767y(t) + 0.442y(t−1)        (6.3.11)

The adaptive model reference controller is

θ̂(t+d) = θ̂(t+d−1) + p(t)ψ(t)[y(t+d) − y_m(t+d)]        (6.3.12)

where θ̂(t+d) is the estimate obtained by the following estimation algorithm

p(t) = p(t−1) − (p(t−1)ψ(t)ψᵀ(t)p(t−1)) / (1 + ψᵀ(t)p(t−1)ψ(t))        (6.3.13)

with the conditions on p(t) that p(0) = p₀I, p₀ > 0, and θ̂(0) = θ̂₀.

The Model Reference Adaptive Controller (MRAC) simulations were performed with Matlab version 5.1. The initial condition selected for θ̂ in the simulations is θ̂(0) = [1 1 1 1].
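As a sanity check on the fixed (non-adaptive) control law (6.3.11), the Python sketch below (the thesis used Matlab; the square-wave reference here is an illustrative choice) simulates the plant (6.3.1) under that controller and compares the closed-loop output with the reference model (6.3.6). Because the design places the closed-loop poles exactly at those of the model, the two trajectories should coincide from zero initial conditions.

```python
import numpy as np

N = 120
r = np.where((np.arange(N) // 30) % 2 == 0, 1.0, -1.0)   # square-wave reference (illustrative)
y = np.zeros(N); u = np.zeros(N); ym = np.zeros(N)

def past(x, t, k):
    # x(t-k), taken as zero before the simulation starts
    return x[t - k] if t - k >= 0 else 0.0

for t in range(N):
    # Plant (6.3.1)
    y[t] = 1.7*past(y, t, 1) - 0.9*past(y, t, 2) + 1.5*past(u, t, 1) + 0.6*past(u, t, 2)
    # Reference model (6.3.6) with a1 = -0.835, a2 = 0.237, k0 = 0.402
    ym[t] = 0.835*past(ym, t, 1) - 0.237*past(ym, t, 2) + 0.402*past(r, t, 1)
    # Controller in its unscaled form 1.5u(t) = -0.6u(t-1) + 0.402r(t) - 0.865y(t) + 0.663y(t-1)
    u[t] = (-0.6*past(u, t, 1) + 0.402*r[t] - 0.865*y[t] + 0.663*past(y, t, 1)) / 1.5

print(np.max(np.abs(y - ym)))   # near machine precision: exact model following
```

Substituting the controller into the plant gives (1.7 − 0.865)y(t−1) + (0.663 − 0.9)y(t−2) + 0.402r(t−1), i.e. exactly the model recursion, so the residual is pure floating point rounding.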
[Plot omitted: four panels (MRAC System Output, Reference Signal, Tracking Error, Theta Vectors), magnitude versus time.]
Figure 6.3.1 MRAC Noise = 0, and λ = 1
[Plot omitted: four panels (MRAC System Output, Reference Signal, Tracking Error, Theta Vectors), magnitude versus time.]
Figure 6.3.2 MRAC Noise = 0.0001, and λ = 1
[Plot omitted: four panels (MRAC System Output, Reference Signal, Tracking Error, Theta Vectors), magnitude versus time.]
Figure 6.3.3 MRAC Noise = 0.001, and λ = 1
[Plot omitted: four panels (MRAC System Output, Reference Signal, Tracking Error, Theta Vectors), magnitude versus time.]
Figure 6.3.4 MRAC Noise = 0.005, and λ = 1
[Plot omitted: four panels (MRAC System Output, Reference Signal, Tracking Error, Theta Vectors), magnitude versus time.]
Figure 6.3.5 MRAC Noise = 0.01, and λ = 1
PAGE 95
[Figure 6.3.6 MRAC Noise = 0.1, and λ = 1]
[Figure 6.3.7 MRAC Noise = 0, and λ = 0.98]
[Figure 6.3.8 MRAC Noise = 0, and λ = 0.5]
[Figure 6.3.9 MRAC Noise = 0, and λ = 0.4]
[Figure 6.3.10 MRAC Noise = 0.01, and λ = 0.98]
[Figure 6.3.11 MRAC Noise = 0.01, and λ = 0.9]
[Figure 6.3.12 MRAC Noise = 0.01, and λ = 0.5]
6.4 Results of Model Reference Adaptive Controller Simulations

The simulations shown in Figures 6.3.1 through 6.3.12 depict the performance of the model reference adaptive controller. During these simulations, random white noise was injected into the output until the controller lost the ability to reliably track the reference signal. The forgetting factor λ was also varied from 1 down to 0.4, at which point the algorithm lost the ability to track the reference signal even with no noise injected into the system. The simulations show the degradation in performance of the controller as more and more noise is injected into the system. It is readily apparent that the MRAC controller does not track as well as the recursive least squares controller examined in Sections 6.1 and 6.1.1 when extraneous noise is injected into the system. The noise level the controller can tolerate is two orders of magnitude smaller than the reference signal or the system output: the controller is only able to track reliably when the noise level present at the output is less than 0.001 (see Figures 6.3.1 through 6.3.3). As the noise level is increased, we begin to see signs of windup in the system output, theta vectors, and error (see Figures 6.3.4 through 6.3.6). Further increases in the output noise level above 0.001 rapidly degrade the performance of the controller.

The forgetting factor λ was also utilized, in order to compare the performance of the MRAC against that of the recursive least squares controller, with the addition of the
forgetting factor. The MRAC controller was able to function down to a λ value of 0.5 with no noise injected into the system; however, we begin to see the first slight signs of bursting (see Figure 6.3.8). At values of λ less than 0.5, the bursting becomes more pronounced. The addition of noise to the system in conjunction with λ < 1 only degraded the tracking performance of the controller further. The output of the system takes on a more ragged appearance with the addition of some slight bursting (see Figures 6.3.10 through 6.3.12). The error continues to oscillate, and the output exhibits bursting and signs of windup. The simulations show that the estimated parameters (theta vectors) never return to their original values, hence the distorted output. The convergence of the estimated parameters with no noise in the system takes 17 to 18 iterations, twice as long as the least squares controller. The addition of noise to the system only makes it more difficult for the controller's estimated parameters to converge: when noise is injected into the output, convergence can take as long as 125 to 140 iterations. This compares to the least squares algorithm, which in all cases converged in approximately eight to ten iterations.

The above leads us to conclude that the MRAC algorithm does not handle noise in the system well. For the best performance in a noisy environment, or in an environment where disturbances are present, this noise should be accounted for in the design of the controller. This drives us in the direction of robust control.
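The covariance windup behind this bursting can be reproduced in a few lines. The following Python sketch (the thesis's simulations were done in MATLAB, and the parameters and signals here are illustrative assumptions, not the thesis's plant) runs recursive least squares with forgetting factor λ = 0.9 and shows the covariance trace exploding once the regressor stops exciting the estimator:

```python
import numpy as np

def rls_step(theta, P, phi, y, lam):
    """One recursive least-squares update with forgetting factor lam."""
    denom = lam + phi @ P @ phi
    k = P @ phi / denom                      # estimator gain
    theta = theta + k * (y - phi @ theta)    # prediction-error correction
    P = (P - np.outer(k, phi @ P)) / lam     # discounted covariance
    return theta, P

rng = np.random.default_rng(0)
theta_true = np.array([0.8, 0.5])            # assumed "plant" parameters
theta, P, lam = np.zeros(2), 100.0 * np.eye(2), 0.9

traces = []
for t in range(300):
    # Excitation dies after t = 100: the regressor freezes in one direction
    phi = rng.standard_normal(2) if t < 100 else np.array([1.0, 0.0])
    y = phi @ theta_true + 0.01 * rng.standard_normal()
    theta, P = rls_step(theta, P, phi, y, lam)
    traces.append(np.trace(P))

print(f"trace(P): t=100 -> {traces[99]:.2f}, t=300 -> {traces[-1]:.2e}")
```

With λ = 1 the covariance stays bounded; it is the combination of forgetting and poor excitation that inflates the estimator gain and produces the hypersensitivity to noise seen as bursting at small λ.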
The Matlab *.m file used for the MRAC simulations is given in the appendix.
6.5 Simulations using Robust MRAC with Antialiasing Filter

The model reference adaptive controller utilized in Section 6.3 is used for the following robust controller simulations, where the controller has been made considerably more robust by applying the principles of robust control discussed in Chapter 5. The addition of a prefilter, or as it is sometimes called, an antialiasing filter, contributes greatly to the stability of the system. Figures 6.5.1 through 6.5.11 depict the step response and Bode plots of the plant, the closed loop system without the antialiasing filter, and the closed loop system with the antialiasing filter.

[Figure 6.5.1 Step Response of the Plant]
[Figure 6.5.2 Step Response of the Closed Loop Plant with MRAC]

[Figure 6.5.3 Step Response of the Closed Loop with Antialiasing Filter]
[Figure 6.5.4 Gain Magnitude for the Plant]

[Figure 6.5.5 Phase Margin for the Plant]
[Figure 6.5.6 Gain Magnitude for the Closed Loop Plant with MRAC]

[Figure 6.5.7 Phase Margin for the Closed Loop Plant with MRAC]
[Figure 6.5.8 Gain Magnitude for the Closed Loop Plant with Antialiasing Filter]

[Figure 6.5.9 Phase Margin for the Closed Loop Plant with Antialiasing Filter]
[Figure 6.5.10 Gain Magnitude Comparison]

[Figure 6.5.11 Phase Margin Comparison]
6.6 Results of Robust MRAC Antialiasing Filter Simulations

It is easily seen in the Bode plots in Figures 6.5.8 through 6.5.11 that the addition of the antialiasing filter to the closed loop robust adaptive controlled system effectively removes all of the higher frequency elements beyond the Nyquist frequency at ω_N = π/h = π/50 = 6.28 × 10⁻² rad/sec. A fourth order Bessel lowpass filter was chosen for the antialiasing filter. The filter adds an additional 180° of phase shift and 75 dB of gain attenuation at the Nyquist frequency ω_N and above. The slight rise in phase and gain of the closed loop system is also eliminated with the use of the antialiasing filter.

The step response with the filter added to the system, shown in Figure 6.5.3, shows the elimination of all overshoot, from 113% down to zero. However, the rise time of the system increases significantly, from 0.54 seconds to 52.91 seconds. This graphically shows the dominant poles of the system moving to the right, closer to the jω axis in the complex frequency plane. The addition of the antialiasing filter to the controlled system has the overall effect of removing all of the higher frequency components above the Nyquist frequency, which will eliminate aliasing problems in the controlled system.

The following simulations of the robust adaptive control system assume that the antialiasing filter is being utilized. As was previously discussed in Chapter 5, integral action in the control algorithm, in conjunction with the forgetting factor λ, has been added to the adaptive controller for a more robust controlled system.
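The attenuation at the Nyquist frequency can be checked numerically. Below is a hedged Python sketch using scipy (the thesis used MATLAB, and the exact cutoff of its Bessel filter is not restated in this section, so placing the cutoff one decade below ω_N is an assumption made only for illustration):

```python
import numpy as np
from scipy import signal

# Sketch only: the cutoff one decade below the Nyquist frequency is an
# assumption, not the thesis's documented filter design.
h = 50.0                       # sampling period (sec), per Section 6.6
w_nyq = np.pi / h              # Nyquist frequency, about 6.28e-2 rad/sec
w_cut = w_nyq / 10.0           # assumed cutoff, one decade below w_N

# Fourth-order analog Bessel lowpass; norm='mag' puts the -3 dB point at w_cut
b, a = signal.bessel(4, w_cut, btype='low', analog=True, norm='mag')

# Evaluate the gain near DC and at the Nyquist frequency
w, hresp = signal.freqs(b, a, worN=[1e-6, w_nyq])
atten_db = -20.0 * np.log10(np.abs(hresp[1]))
print(f"attenuation at w_N: {atten_db:.1f} dB")
```

With this assumed cutoff the sketch reports on the order of 60 to 70 dB of attenuation at ω_N, in the same ballpark as the 75 dB quoted above; a slightly lower cutoff closes the gap.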
6.7 Robust Model Reference Adaptive Control Simulations

[Figure 6.7.1 Robust MRAC Noise = 0, and λ = 0.98. Panels: Robust MRAC System Output, Reference Signal, Tracking Error, Theta Vectors (magnitude vs. time).]
[Figure 6.7.2 Robust MRAC Noise = 0.001, and λ = 0.98]
[Figure 6.7.3 Robust MRAC Noise = 0.01, and λ = 0.98]
[Figure 6.7.4 Robust MRAC Noise = 0.1, and λ = 0.98]
[Figure 6.7.5 Robust MRAC Noise = 0, and λ = 0.5]
[Figure 6.7.6 Robust MRAC Noise = 0, and λ = 0.3]
[Figure 6.7.7 Robust MRAC Noise = 0.1, and λ = 0.9]
[Figure 6.7.8 Robust MRAC Noise = 0.1, and λ = 0.9]
[Figure 6.7.9 Robust MRAC Noise = 0.001, and λ = 0.98. Panels: System Outputs, Robust Theta Vectors, MRAC Theta Vectors.]
[Figure 6.7.10 Robust MRAC Noise = 0.001, and λ = 0.98. Panels: Robust Tracking Error, MRAC Tracking Error, Robust MRAC Adaptive Gain, MRAC Adaptive Gain.]
[Figure 6.7.11 Robust MRAC Noise = 0.003, and λ = 0.9. Panels: System Outputs, Robust Theta Vectors, MRAC Theta Vectors.]
[Figure 6.7.12 Robust MRAC Noise = 0.003, and λ = 0.9. Panels: Robust Tracking Error, MRAC Tracking Error, Robust MRAC Adaptive Gain, MRAC Adaptive Gain.]
[Figure 6.7.13 Robust MRAC Noise = 0.002, and λ = 0.5. Panels: System Outputs, Robust Theta Vectors, MRAC Theta Vectors.]
[Figure 6.7.14 Robust MRAC Noise = 0.002, and λ = 0.5. Panels: Robust Tracking Error, MRAC Tracking Error, Robust MRAC Adaptive Gain, MRAC Adaptive Gain.]
6.8 Results of Robust Model Reference Control Simulations

It is clearly seen in Figures 6.7.1 through 6.7.14 that the robust model reference adaptive controller outperforms the idealized model reference adaptive controller. This is especially evident in Figures 6.7.9 through 6.7.14, where the robust MRAC was simulated simultaneously with the idealized MRAC, utilizing the same plant, reference signal, and noise input. The robust controller can be seen maintaining a clean system output while the idealized MRAC exhibits extreme bursting or even total breakdown. The robust controller's theta vectors maintain smooth parameter estimates in the same environment that drives the idealized MRAC to breakdown. This illustrates the effect of the integral action in the robust controller in conjunction with the forgetting factor λ. Even though the robust controller exhibits slight signs of windup, this would have been taken care of if the antialiasing filter had been incorporated into the actual controlled system to eliminate aliasing. The robust controller is able to handle noise levels that are two orders of magnitude greater than the idealized controller can tolerate.

The above illustrates the point that robust design methods must be used when designing a control system, in order to account for noise and unexpected disturbances in the controlled system. The Matlab *.m files used for the robust MRAC simulations are given in the appendix.
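One of the robustness devices that produces this behavior can be isolated in a small experiment. The sketch below is in Python rather than the thesis's MATLAB, and the adaptation gain, leakage coefficient, and disturbance level are illustrative assumptions. It applies a normalized-gradient update with and without leakage (the σ-modification discussed in Chapter 5) while the "error" signal is a pure persistent disturbance; without leakage the estimate drifts without bound, which is exactly the parameter drift that precedes bursting:

```python
import numpy as np

def ngrad_step(theta, phi, e, gamma=0.5, sigma=0.0):
    """Normalized-gradient update; sigma > 0 adds leakage toward zero."""
    norm = 1.0 + phi @ phi               # normalization bounds the step size
    return theta + gamma * e * phi / norm - gamma * sigma * theta

phi = np.array([1.0, 1.0])               # assumed, poorly exciting regressor
e = 0.1                                  # persistent disturbance posing as error
theta_plain = np.zeros(2)
theta_leak = np.zeros(2)
for _ in range(5000):
    theta_plain = ngrad_step(theta_plain, phi, e, sigma=0.0)
    theta_leak = ngrad_step(theta_leak, phi, e, sigma=0.05)

print(f"no leakage:   |theta| = {np.linalg.norm(theta_plain):.1f}")
print(f"with leakage: |theta| = {np.linalg.norm(theta_leak):.2f}")
```

Leakage trades a small bias (the estimate is pulled toward zero) for a hard bound on drift; the conditional-updating, constant-trace, and directional-forgetting schemes discussed in Chapter 5 attack the same failure mode from the covariance side.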
7. Conclusion

The design of a control system involves many steps. The system being controlled needs to be studied, and a decision made as to what control model will be used as the controller. To accomplish this, a model of the system to be controlled needs to be developed and then analyzed to determine the system properties. Then, in conjunction with the performance specifications for the system, a decision can be made as to what type of controller can be designed to achieve the desired performance. The model reference adaptive controllers considered in this thesis represent a natural complement to optimal linear control systems. These controllers allow the designer to cope efficiently with practical problems. The general procedure for the design of single-input single-output, globally stable, robust controllers has been given for the least squares and model reference adaptive controllers. The theoretical development for the normalized gradient and least mean square controllers has been given, as has the model reference adaptive controller based on either the least squares or the normalized gradient algorithm. The pole placement controller has also been discussed. The general design procedure when disturbances are present has been considered, as well as the procedure when the controlled system is nonminimum phase. The importance of robustness has been investigated for satisfactory performance of the controller in the presence of unmodeled dynamics and
noise in the system. Practical issues and implementation of robustness have also been covered. Sampling, prefilters, postfilters, and data filters have been shown to be important pieces of the robust design. The use of forgetting factors has been discussed as a means to ensure good parameter tracking by resetting the covariance matrix. The two different types of windup have been covered, as well as the means to combat windup through conditional updating, constant trace algorithms, directional forgetting, and leakage. Finally, robust estimation has been explored to show the importance of using integral action in the denominator of the adaptive least squares algorithm to achieve robust performance. The simulations in this thesis utilize the recursive least squares algorithm and the model reference adaptive controller based on the least squares model. In each set of simulations, noise of varying variance is injected into the controlled system at the output. The forgetting factor λ has also been utilized in each set of simulations. The phenomenon of bursting (sudden spikes in the plant output) has been investigated, along with the complete breakdown of the controller in the presence of high noise levels or disturbances. A robust model reference adaptive controller was also developed utilizing the concepts discussed above and compared to the idealized model reference adaptive controller. The simulation results are discussed for tracking performance and stability in light of the above-mentioned factors.
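Two of the anti-windup devices named above, conditional updating and a trace bound on the covariance matrix, can be sketched as follows. This is a hedged Python illustration, not the thesis's MATLAB; the function name `robust_rls_step` and the thresholds `dead_zone` and `trace_max` are assumed values chosen for the example:

```python
import numpy as np

def robust_rls_step(theta, P, phi, y, lam=0.98, dead_zone=0.05, trace_max=1e4):
    """RLS step with two simple anti-windup safeguards.

    Conditional updating: if the residual lies inside the dead zone,
    noise alone is assumed to explain it and no update is made.
    Trace guard: the covariance update is skipped whenever it would
    push trace(P) past trace_max, limiting estimator windup.
    """
    residual = y - phi @ theta
    if abs(residual) <= dead_zone:      # conditional updating
        return theta, P
    denom = lam + phi @ P @ phi
    theta = theta + (P @ phi) * residual / denom
    P_new = (P - np.outer(P @ phi, phi @ P) / denom) / lam
    if np.trace(P_new) <= trace_max:    # trace guard
        P = P_new
    return theta, P

# A residual inside the dead zone leaves the estimate untouched.
theta0, P0 = np.array([1.0, 1.0]), np.eye(2)
theta1, P1 = robust_rls_step(theta0, P0, np.array([0.1, 0.0]), 0.12)
print(np.array_equal(theta1, theta0))  # True: residual 0.02 is below 0.05
```

Constant trace algorithms, directional forgetting, and leakage follow the same pattern: each modifies either the gain or the covariance update so that long stretches of uninformative data cannot inflate the estimator.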
Appendix A: Least Mean Square Algorithm

% #############################################
%
% Least Mean Square Algorithm
% based on Least Squares
%
% Filename "Least"
%
% Alan Gibbs
%
% #############################################

clear all
format short e

% Initial Conditions
y(1)=0.0; y(2)=0.0; y(3)=0.0;
u(1)=0.0; u(2)=0.0; u(3)=0.0;
pp=eye(4);
t_hat(3,1:4)=[1.0 1.0 1.0 1.0];

% Number of Iterations
n=1000;
% Scaling Factors
alpha=2; lamda=.98;

% Least Square Algorithm for the Self-Tuning Controller
for t=3:n
   w(t)=alpha*randn*t/t;                      % Random White Noise Signal
   w(t+1)=alpha*randn*((t+1)/(t+1));
   yr(t)=15*square(2*pi*t/30);                % Reference Signals
   yr(t+1)=15*square(2*pi*(t+1)/30);          % Square wave
   %yr(t)=15*sin(2*pi/50*t);                  % Sin wave
   %yr(t+1)=15*sin(2*pi/50*(t+1));

   if t<=200;
      d1(t)=0; d2(t)=0;
   elseif t>200 & t<250;                      % Used for disturbance
      d1(t)=alpha*randn*((t+1)/(t+1));        % injected into the
      d2(t)=alpha*sin(2*pi/4*(t+1));          % controlled system
   else                                       % at either the input
      d1(t)=0; d2(t)=0;                       % or the output.
   end

   u(t)=(1/t_hat(t,3))*(-t_hat(t,1)*y(t)-t_hat(t,2)*y(t-1)-t_hat(t,4)*u(t-1)+yr(t+1));
   y(t+1)=1.67*y(t)-0.67*y(t-1)+0.2*u(t)+0.18*u(t-1)+w(t+1)+d1(t); %+w(t+1)+d1(t)
   phi(t,1:4)=[y(t) y(t-1) u(t) u(t-1)];
   p=(1/lamda)*(pp-((pp*phi(t,1:4)'*phi(t,1:4)*pp)/(lamda+phi(t,1:4)*pp*phi(t,1:4)')));
   pp=p;
   t_hat(t+1,1:4)=t_hat(t,1:4)+(p*phi(t,1:4)')'*(y(t+1)-yr(t+1));

   % Tracking Error
   e(t)=y(t)-yr(t);
end

% Plots
T=1:n+1; T2=1:n;
figure(1)
subplot(211),plot(T,yr,'k'),title('Reference Signal');
ylabel('Magnitude'); grid; axis([1 600 -25 25]);
subplot(212),plot(T,y,'k'),title('System Output');
xlabel('Time'); ylabel('Magnitude'); grid; axis([1 600 -50 50]);
figure(2)
zoom on
subplot(211),plot(T2,e,'k'),title('Tracking Error');
ylabel('Magnitude'); grid; axis([1 600 -75 75]);
subplot(212),plot(T,t_hat(:,1),'k',T,t_hat(:,3),'k',T,t_hat(:,4),'k');
title('Theta Vectors');
xlabel('Time'); ylabel('Magnitude'); grid; axis([0 600 -0.5 1])
clc
Appendix B: Model-Reference Adaptive Control Algorithm

% #############################################
%
% Model-Reference Adaptive Controllers
% based on Least Squares
%
% Filename "Mod Ref"
%
% Alan Gibbs
%
% #############################################

clear all
close all
format short e

% Define
n=1000;
Yref=15*square(2*pi*[1:n]/50);             % Command signal
W=0.0000*randn*(1:n);
P=eye(4);
THETA=[1 1 1 1]';
PLANT=[1.5; 0.6; 1.7; -0.9];               % Plant
REF_m=[0.834983; -0.236927; 0.401945];     % Reference Model
Y=zeros(1,n); Y_m=zeros(1,n);
U=zeros(1,n); uf=zeros(1,n); E=zeros(1,n);
lamda=0.4;

for k=3:n-1
   PHI=[U(k) U(k-1) Y(k) Y(k-1)]';
   U(k)=(1/THETA(1))*(-THETA(2:4)'*PHI(2:4)+0.4*Yref(k))+W(k);
   PHI=[U(k) U(k-1) Y(k) Y(k-1)]';
   PHI_m=[Y_m(k) Y_m(k-1) Yref(k)]';
   Y_m(k+1)=REF_m'*PHI_m;
   Y(k+1)=(PLANT'*PHI);
   E(k+1)=Y(k+1)-Y_m(k+1);
   P=(1/lamda)*(P-P*PHI*PHI'*P/(lamda+PHI'*P*PHI));
   THETA=THETA+P*PHI*E(k+1);
   THET(k+1,1:4)=THETA';
end

T=1:n; T2=1:n;
figure(1)
subplot(211),plot(T,Y,'k');
title('MRAC System Output')
ylabel('Magnitude'); grid; axis([1 600 -25 25]);
subplot(212),plot(T,Yref,'k');
title('Reference Signal')
xlabel('Time'); ylabel('Magnitude'); grid; axis([1 600 -20 20]);
figure(2)
subplot(211),plot(T,E,'k');
title('Tracking Error')
ylabel('Magnitude'); grid; axis([1 600 -50 50]);
subplot(212),plot(T2,THET(:,1),'k',T2,THET(:,2),'k',T2,THET(:,3),'k',T2,THET(:,4),'k');
title('Theta Vectors')
xlabel('Time'); ylabel('Magnitude'); grid; axis([1 600 -2 2]);
Appendix C: Robust Model-Reference Adaptive Control Algorithm

% #############################################
%
% Model-Reference Adaptive Controllers
% Robust MRAC
% based on Least Squares
%
% Filename "Mod Ref robust"
%
% Alan Gibbs
%
% #############################################

clear all
close all
format short e

% Define
n=1000;
yref=15*square(2*pi*[1:n]/50);             % Command signal
w=0.01*randn*(1:n);
p=eye(4);
theta=[1 1 1 1]';
plant=[1.5; 0.6; 1.7; -0.9];               % Plant
ref_m=[0.834983; -0.236927; 0.401945];     % Reference Model
y=zeros(1,n); y_m=zeros(1,n);
u=zeros(1,n); e=zeros(1,n);
lamda=0.98;

for t=3:n-1
   phi=[u(t) u(t-1) y(t) y(t-1)]';
   u(t)=(1/theta(1))*(-theta(2:4)'*phi(2:4)+0.4*yref(t))+w(t);
   phi=[u(t) u(t-1) y(t) y(t-1)]';
   phi_m=[y_m(t) y_m(t-1) yref(t)]';
   y_m(t+1)=ref_m'*phi_m;
   y(t+1)=(plant'*phi);
   res=y(t+1)-phi'*theta;                  % Compute residue
   p=(1/lamda)*(p-p*phi*phi'*p/(1+phi'*p*phi));
   est=p*phi;                              %
   den=lamda+phi'*est;                     % Update estimate
   gain=est/den;                           %
   e(t+1)=y(t+1)-y_m(t+1);                 % Error
   theta=theta+gain*res;                   % Update parameter estimates
   p=(p-est*est'/den)/lamda;               % Update P matrix
   Theta(t+1,1:4)=theta';                  % For theta plots
end

T=1:n; T2=1:n; T3=1:n-1;

% Plots
figure(1)
subplot(211),plot(T,y,'k');
title('Robust MRAC System Output')
ylabel('Magnitude'); grid; axis([1 600 -20 20]);
subplot(212),plot(T,yref,'k');
title('Reference Signal')
xlabel('Time'); ylabel('Magnitude'); grid; axis([1 600 -20 20]);
zoom
figure(2)
subplot(211),plot(T,e,'k');
title('Tracking Error')
ylabel('Magnitude'); grid; axis([1 600 -20 20]);
subplot(212),plot(T2,Theta(:,1),'k',T2,Theta(:,2),'k',T2,Theta(:,3),'k',T2,Theta(:,4),'k');
title('Theta Vectors')
xlabel('Time'); ylabel('Magnitude'); grid; axis([1 600 -2 2]);
Appendix D: Robust MRAC Compared to Idealized MRAC Algorithm

% #############################################
%
% Model-Reference Adaptive Controllers
% Comparison of MRAC & Robust MRAC
% based on Least Squares
%
% Filename "Mod Ref 1"
%
% Alan Gibbs
%
% #############################################

clear all
close all
format short e

% Define
n=1000;
yref=15*square(2*pi*[1:n]/50); Yref=yref;  % Command signal
w=0.001*randn*(1:n); W=w;
p=eye(4); P=p;
theta=[1 1 1 1]'; THETA=theta;
plant=[1.5; 0.6; 1.7; -0.9]; PLANT=plant;  % Plant
ref_m=[0.834983; -0.236927; 0.401945];     % Reference Model
REF_m=ref_m;
y=zeros(1,n); Y=y;
y_m=zeros(1,n); Y_m=y_m;
u=zeros(1,n); U=u;
e=zeros(1,n); E=e;
lamda=0.98;

% Robust Model Reference Adaptive Controller
for t=3:n-1
   phi=[u(t) u(t-1) y(t) y(t-1)]';
   u(t)=(1/theta(1))*(-theta(2:4)'*phi(2:4)+0.4*yref(t))+w(t);
   phi=[u(t) u(t-1) y(t) y(t-1)]';
   phi_m=[y_m(t) y_m(t-1) yref(t)]';
   y_m(t+1)=ref_m'*phi_m;
   y(t+1)=(plant'*phi);                    %+w(t+1)
   res=y(t+1)-phi'*theta;                  % Compute residue
   p=(1/lamda)*(p-p*phi*phi'*p/(1+phi'*p*phi));
   est=p*phi;                              %
   den=lamda+phi'*est;                     % Update estimate
   gain=est/den;                           %
   e(t+1)=y(t+1)-y_m(t+1);                 % Error
   theta=theta+gain*res;                   % Update parameter estimates
   p=(p-est*est'/den)/lamda;               % Update P matrix
   Theta(t+1,1:4)=theta';                  % For theta plots
   Gain(t+1)=(theta(1,:)+theta(2,:))/(1+theta(3,:)+theta(4,:));
end

% Model Reference Adaptive Controller
for k=3:n-1
   PHI=[U(k) U(k-1) Y(k) Y(k-1)]';
   U(k)=(1/THETA(1))*(-THETA(2:4)'*PHI(2:4)+0.4*Yref(k))+W(k);
   PHI=[U(k) U(k-1) Y(k) Y(k-1)]';
   PHI_m=[Y_m(k) Y_m(k-1) Yref(k)]';
   Y_m(k+1)=REF_m'*PHI_m;
   Y(k+1)=(PLANT'*PHI);
   E(k+1)=Y(k+1)-Y_m(k+1);
   P=(1/lamda)*(P-P*PHI*PHI'*P/(lamda+PHI'*P*PHI));
   THETA=THETA+P*PHI*E(k+1);
   THET(k+1,1:4)=THETA';
   GAIN(k+1)=(THETA(1,:)+THETA(2,:))/(1+THETA(3,:)+THETA(4,:));
end

T=1:n; T2=1:n; T3=1:n-1;

% Plots
figure(1)
plot(T,y,'k',T,Y,'k');
title('System Outputs')
xlabel('Time'); ylabel('Magnitude'); grid; axis([1 600 -20 20]);
gtext('MRAC Robust Controller');
gtext('MRAC Controller');
figure(2)
subplot(211),plot(T2,Theta(:,1),'k',T2,Theta(:,2),'k',T2,Theta(:,3),'k',T2,Theta(:,4),'k');
title('Robust Theta Vectors')
ylabel('Magnitude'); grid; axis([1 600 -2 2]);
subplot(212),plot(T2,THET(:,1),'k',T2,THET(:,2),'k',T2,THET(:,3),'k',T2,THET(:,4),'k');
title('MRAC Theta Vectors')
xlabel('Time'); ylabel('Magnitude'); grid; axis([1 600 -2 2]);
figure(3)
subplot(211),plot(T,e,'k');
title('Robust Tracking Error')
ylabel('Magnitude'); grid; axis([1 600 -25 25]);
subplot(212),plot(T,E,'k');
title('MRAC Tracking Error')
ylabel('Magnitude'); grid; axis([1 600 -50 50]);
figure(4)
subplot(211),plot(T,Gain,'k');
title('Robust MRAC Adaptive Gain');
ylabel('Magnitude'); grid; axis([1 600 0 1.5]);
subplot(212),plot(T,GAIN,'k');
title('MRAC Adaptive Gain');
xlabel('Time'); ylabel('Magnitude'); grid; axis([1 600 0 10]);
Appendix E: Antialiasing Filter Algorithm

% #############################################
%
% Step Responses and Bode Plots of Robust
% Model Reference Adaptive Controller
% based on Least Squares with
% Antialiasing Filter
%
% Filename "aa Filter"
%
% Alan Gibbs
%
% #############################################

clear all
close all
clf
format short e

% Define Plant
b=[1.5 0.6];
a=[1 -1.7 0.9];

% Define MRAC
b_m=[1 0.4];
a_m=[0.268 0.5767 0.442];

% Plant with MRAC
num=conv(b,b_m);
den=conv(a,a_m);

% Antialiasing Filter
[B,A]=besself(4,(pi/50));

% Adjust gain of antialiasing filter
B=B*1.65;

% Closed loop system
NUM=conv(B,num);
DEN=conv(A,den);

w = logspace(-5, 1, 75);
[mag, phase] = bode(b,a,w);
[mag1, phase1] = bode(num,den,w);
[mag2, phase2] = bode(NUM,DEN,w);
magDB = 20*log10(mag);
magDB1 = 20*log10(mag1);
magDB2 = 20*log10(mag2);

% Plots

% Plot of Step Response for Plant
figure(1)
sys1 = step(b,a,0:200);
plot(sys1,'k')
grid; axis([0 200 0 1.4])
title('Step Response of Plant')
xlabel('Time')
ylabel('Amplitude')

% Plot of Step Response for Plant with MRAC
figure(2);
sys2 = step(num,den,0:200);
plot(sys2,'k')
grid
axis([0 200 0 1.4])
title('Step Response of Plant with MRAC')
xlabel('Time')
ylabel('Amplitude')

% Plot of Step Response for Closed Loop with Filter
figure(3);
sys3 = step(NUM,DEN,0:200);
plot(sys3,'k')
grid
axis([0 200 0 1.4])
title('Step Response for Closed Loop with Antialiasing Filter')
xlabel('Time')
ylabel('Amplitude')

% Plot for Magnitude of Plant in dB
figure(4)
semilogx(w, magDB, 'k')
grid
title('Magnitude for Plant')
xlabel('Frequency (rad/sec)')
ylabel('Gain (dB)')

% Plot for Phase Margin of Plant in rad/sec
figure(5)
semilogx(w, phase, 'k')
grid
title('Phase Margin for Plant')
xlabel('Frequency (rad/sec)')
ylabel('Phase degree')

% Plot for Magnitude of Plant with MRAC in dB
figure(6)
semilogx(w, magDB1, 'k')
grid
title('Magnitude for Plant with MRAC')
xlabel('Frequency (rad/sec)')
ylabel('Gain (dB)')

% Plot for Phase Margin of Plant with MRAC in rad/sec
figure(7)
semilogx(w, phase1, 'k')
grid
title('Phase Margin for Plant with MRAC')
xlabel('Frequency (rad/sec)')
ylabel('Phase degree')

% Plot for Magnitude of Closed Loop and Filter in dB
figure(8)
semilogx(w, magDB2, 'k')
grid
title('Magnitude for Closed Loop with Filter')
xlabel('Frequency (rad/sec)')
ylabel('Gain (dB)')

% Plot for Phase Margin of Closed Loop and Filter in rad/sec
figure(9)
semilogx(w, phase2, 'k')
grid
title('Phase Margin for Closed Loop with Filter')
xlabel('Frequency (rad/sec)')
ylabel('Phase degree')

% Plot all of the above Magnitudes in rad/sec
figure(10)
semilogx(w, magDB, 'k', w, magDB1, 'k', w, magDB2, 'k')
grid
title('All Magnitudes')
xlabel('Frequency (rad/sec)')
ylabel('Gain (dB)')
%gtext('Plant')
%gtext('Plant and Controller')
%gtext('Plant, Controller & Filter')

% Plot all of the above Phase Margins in rad/sec
figure(11)
semilogx(w, phase, 'k', w, phase1, 'k', w, phase2, 'k')
grid
title('All Phase Margins')
xlabel('Frequency (rad/sec)')
ylabel('Phase degree')
%gtext('Plant')
%gtext('Plant and Controller')
%gtext('Plant, Controller & Filter')
References

Astrom, K. J. & Wittenmark, B.: Adaptive Control, Addison-Wesley, Reading, Mass., 1995

Biran, A. & Breiner, M.: MATLAB for Engineers, Addison-Wesley, Reading, Mass., 1995

Brogan, W.: Modern Control Theory, Prentice Hall, Upper Saddle River, New Jersey, 1974

Chong, E. K. P. & Zak, S. H.: An Introduction to Optimization, John Wiley & Sons, New York, New York, 1996

Doyle, J. C., Francis, B. A. & Tannenbaum, A. R.: Feedback Control Theory, Macmillan Publishing Co., New York, New York, 1992

Feng, G. & Lozano, R.: Adaptive Control Systems, Newnes, Oxford, Great Britain, 1999

Landau, I. D., Lozano, R. & M'Saad, M.: Adaptive Control, Springer, London, Great Britain, 1997

Landau, Y. D.: Adaptive Control, Marcel Dekker, New York, New York, 1979

Rohrs, C. E., Melsa, J. L. & Schultz, D. G.: Linear Control Systems, McGraw-Hill, New York, New York, 1969

