Adaptive control based on discrete-time model-reference technique

Material Information

Adaptive control based on discrete-time model-reference technique
Tamer, Molalegn Tegegne
Place of Publication:
Denver, Colo.
University of Colorado Denver
Publication Date:
Physical Description:
v, 100 leaves : illustrations ; 29 cm

Thesis/Dissertation Information

Master's ( Master of Science)
Degree Grantor:
University of Colorado Denver
Degree Divisions:
Department of Electrical Engineering, CU Denver
Degree Disciplines:
Electrical Engineering
Committee Chair:
Radenkovic, Miloje S.
Committee Members:
Bialasiewicz, Jan T.
Bose, Tamal


Subjects / Keywords:
Adaptive control systems ( lcsh )
Discrete-time systems ( lcsh )
Adaptive control systems ( fast )
Discrete-time systems ( fast )
bibliography ( marcgt )
theses ( marcgt )
non-fiction ( marcgt )


Includes bibliographical references.
General Note:
Submitted in partial fulfillment of the requirements for the degree, Master of Science, Electrical Engineering.
General Note:
Department of Electrical Engineering
Statement of Responsibility:
by Molalegn Tegegne Tamer.

Record Information

Source Institution:
University of Colorado Denver
Holding Location:
Auraria Library
Rights Management:
All applicable rights reserved by the source institution and holding location.
Resource Identifier:
28985699 ( OCLC )
LD1190.E54 1993m .T36 ( lcc )

Full Text
Molalegn Tegegne Tamer
B.S., University of Colorado at Denver, 1991
A thesis submitted to the
Faculty of the Graduate School of the
University of Colorado at Denver
in partial fulfillment
of the requirements for the degree of
Master of Science
Electrical Engineering

This thesis for the Master of Science
degree by
Molalegn Tegegne Tamer
has been approved for the
Department of
Electrical Engineering
Jan T. Bialasiewicz

Tamer, Molalegn Tegegne (M.S., Electrical Engineering)
Adaptive Control Based on Discrete-Time
Model-Reference Technique
Thesis directed by Associate Professor Miloje S. Radenkovic
This thesis presents discrete-time model-reference adaptive systems (MRAS)
as one of the main approaches to adaptive control. The design procedure when
disturbances are present and when the system is non-minimum phase is discussed. It
also examines the robustness properties of existing adaptive control algorithms to
unmodeled plant high-frequency dynamics and unmeasurable output disturbances. The
thesis also gives a review of pole placement design and relations between MRAS and
the self-tuning regulator (STR). Simulations of the behavior of the algorithms are also
presented.
This abstract accurately represents the content of the candidate's thesis. I recommend
its publication.
Miloje S. Radenkovic

I wish to express my gratitude to Dr. Miloje S. Radenkovic for his continuing
support and direction during the course of the thesis work. Gratitude is also extended
to Dr. Jan T. Bialasiewicz and Dr. Tamal Bose for serving in the graduate committee.

1. Introduction......................................................1
2. The Model-Reference Adaptive Systems Problem......................3
2.1. Design Procedure When Disturbances Are Present................ 11
2.2. Design Procedure When the System Is Non-Minimum Phase .........13
3. Pole Placement Design............................................15
4. Robust Adaptive Control ........................................ 20
5. Relations Between MRAS and STR ................................. 33
6. Simulations......................................................36
7. Conclusion...................................................... 91
Normalized Gradient Algorithm..................................92
Least Square Algorithm.........................................95

1. Introduction
The design of high-performance control systems generally requires the use of
adaptive control techniques when the parameters of the controlled process either are
unknown or vary with time. Among various alternative methods, the use of the
technique known as the model-reference adaptive systems (MRASs) is today one of
the most feasible approaches for the practical implementation of adaptive control
systems [1], [3], [9], [18].
The basic idea of the method is illustrated in Figure 1.1, where a reference
model is used to specify the desired performances of the basic loop, consisting of the
controlled process and a classical controller. In Figure 1.1 the classical feedback
closed-loop controller is used in the basic loop, but in some cases feedback controllers
and prefilters can be used as well [13], [25]. The actual output of the controlled
process is compared with the reference model output and the parameters of the
controller are modified in such a way that the response of the controlled process
follows that of the reference model. There are two loops in Figure 1.1: an inner loop,
which provides the ordinary control feedback, and an outer loop which adjusts the
parameters in the inner loop. The inner loop is assumed to be faster than the outer loop.
Model-reference adaptive systems (MRAS) were originally derived for the
servo problem in deterministic continuous systems. The idea and the theory have been
extended to cover discrete-time systems and systems with stochastic disturbances [1],
[3]. In this thesis a single input, single output (SISO), discrete-time certainty

equivalent direct MRAS (i.e. in which the controller parameters are updated directly)
with parallel configuration as in Figure 1.1 is considered.
Figure 1.1 The model-reference adaptive system.

2. The Model-Reference Adaptive Systems Problem
For a system with adjustable parameters (as in Figure 1.1) the model-reference
adaptive method gives a general approach for adjusting the parameters so that the
closed-loop transfer function will be close to a prescribed model. This is called the
model following problem. One important question is how small we can make the error
e. This depends on the model, the system, and the command signal. If it is possible to
make the error equal to zero for all command signals, then perfect model following is
achieved which can only be obtained in idealized situations [3], [13],[18].
Consider the system given in the form
A(q^-1)y(t) = q^-d B(q^-1)u(t)    (2.1)
where y(t) and u(t) are system output and input respectively, d is system delay. It is
assumed that A(q^-1) and B(q^-1) are coprime polynomials. Polynomials A(q^-1) and
B(q^-1) are given in the form
A(q^-1) = 1 + a1 q^-1 + ... + a_nA q^-nA
and B(q^-1) = b0 + b1 q^-1 + ... + b_nB q^-nB,  b0 ≠ 0    (2.2)
where q^-1 is the unit delay operator. Our objective is to design the controller
R(q^-1)u(t) = T(q^-1)r(t) - S(q^-1)y(t)    (2.3)
where R, S, and T are polynomials so that the closed loop system gives the desired
response to the command signal r(t).
For a given design specification (natural frequency ωn and damping
coefficient ζn) we can synthesize our reference model in the form
Am(q^-1)ym(t) = q^-d Bm(q^-1)r(t)    (2.4)

where ym(t) is the output of the reference model and
Am(q^-1) = 1 + a1m q^-1 + ... + a_nAm q^-nAm
Bm(q^-1) = b0m + b1m q^-1 + ... + b_nBm q^-nBm.
We assume that Bm(q^-1) and Am(q^-1) do not have any common factors. Block
diagrams of the closed-loop system and the reference model are illustrated in Figures
2.1 and 2.1.1.
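For a concrete illustration, the reference-model polynomials can be synthesized from the specification (ωn, ζn) by mapping the poles of the continuous-time prototype s^2 + 2 ζn ωn s + ωn^2 into the Z plane. The sketch below assumes a sampling period Ts and a pure-gain Bm chosen for unit static gain; neither choice is prescribed by the text.

```python
import math

def reference_model(wn, zeta, Ts):
    """Sketch: place the poles of s^2 + 2*zeta*wn*s + wn^2 in the Z plane
    via z = exp(s*Ts), returning coefficient lists for Am(q^-1) and Bm(q^-1)
    in ascending powers of q^-1.  Bm is a pure gain giving unit static gain."""
    wd = wn * math.sqrt(1.0 - zeta ** 2)    # damped natural frequency
    r = math.exp(-zeta * wn * Ts)           # radius of the discrete poles
    a1 = -2.0 * r * math.cos(wd * Ts)       # Am = 1 + a1 q^-1 + a2 q^-2
    a2 = r ** 2
    b0m = 1.0 + a1 + a2                     # Bm(1) = Am(1), i.e. DC gain 1
    return [1.0, a1, a2], [b0m]

Am, Bm = reference_model(wn=1.0, zeta=0.7, Ts=0.5)
```

For any ζn > 0 the mapped poles lie inside the unit circle, so the resulting Am is a stable polynomial.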
Figure 2.1 Block diagram of the closed-loop system.

Figure 2.1.1 Block diagram of the reference model.

Next we explain non-adaptive model-reference design.
Substituting (2.3) into (2.1) it is not difficult to see that the closed-loop system is given by
y(t) = q^-d B(q^-1)T(q^-1)r(t) / [A(q^-1)R(q^-1) + q^-d B(q^-1)S(q^-1)].    (2.5)
Model reference design requires that the transfer function of the closed-loop system is
equal to the transfer function of the reference model. Consequently from (2.4) and
(2.5) it follows
B(q^-1)T(q^-1) / [A(q^-1)R(q^-1) + q^-d B(q^-1)S(q^-1)] = Bm(q^-1) / Am(q^-1).    (2.6)
Then the polynomials T(q^-1), R(q^-1) and S(q^-1) from the controller equation (2.3) are
given by
T(q^-1) = Bm(q^-1),  R(q^-1) = B(q^-1)R1(q^-1)    (2.7)
and S(q^-1) is obtained as a solution of the Diophantine equation (also called the Bezout equation)
Am(q^-1) = A(q^-1)R1(q^-1) + q^-d S(q^-1).    (2.8)
This equation has a causal solution if the polynomials R1(q^-1) and S(q^-1) have the form
R1(q^-1) = 1 + r1 q^-1 + ... + r_{d-1} q^-(d-1)    (2.9)
and S(q^-1) = s0 + s1 q^-1 + ... + s_nS q^-nS    (2.10)
where nS = max{deg Am - d, deg A - 1} and deg Am denotes the degree of the
polynomial Am(q^-1).

The degree of the polynomial R1(q^-1) is equal to d - 1 and the degree of the polynomial
S(q^-1) is equal to max{deg Am - d, deg A - 1}. Thus the control structure (2.3) is specified
by (2.7) and (2.8), in terms of the system polynomials A(q^-1) and B(q^-1) and the
reference model polynomials Am(q^-1) and Bm(q^-1).
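Numerically, (2.8) is a linear system in the unknown coefficients r1, ..., r_{d-1}, s0, ..., s_nS, obtained by matching the coefficients of each power of q^-1. A minimal sketch (polynomials as coefficient lists in ascending powers of q^-1; the use of NumPy is an assumption, the text prescribes no software):

```python
import numpy as np

def solve_diophantine(A, Am, d, nS):
    """Solve Am(q^-1) = A(q^-1)R1(q^-1) + q^-d S(q^-1)   (2.8)
    for the monic R1 of degree d-1 and S of degree nS, by matching
    the coefficients of q^-k (a linear system in the unknowns)."""
    n_eq = max(len(A) + d - 1, d + nS + 1, len(Am))  # number of q^-k equations
    def shifted(poly, k):                            # coefficients of q^-k * poly
        v = np.zeros(n_eq)
        v[k:k + len(poly)] = poly
        return v
    rhs = shifted(Am, 0) - shifted(A, 0)             # R1 is monic: move A*1 over
    cols = [shifted(A, i) for i in range(1, d)]      # columns for r1 ... r_{d-1}
    cols += [shifted([1.0], d + j) for j in range(nS + 1)]  # columns for s0 ... s_nS
    x, *_ = np.linalg.lstsq(np.column_stack(cols), rhs, rcond=None)
    return np.concatenate(([1.0], x[:d - 1])), x[d - 1:]    # R1, S

# d = 2 example: A = 1 - 1.5q^-1 + 0.7q^-2, desired Am = 1 - 1.0q^-1 + 0.25q^-2
R1, S = solve_diophantine([1.0, -1.5, 0.7], [1.0, -1.0, 0.25], d=2, nS=1)
```

For this example the solver returns R1 = 1 + 0.5q^-1 and S = 0.3 - 0.35q^-1, and A R1 + q^-2 S reproduces Am.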
The above solution can also be obtained in a slightly different way. Namely,
starting from equation (2.8) we can obtain
Am(q^-1)[y(t + d) - ym(t + d)] = Am y(t + d) - Am ym(t + d).    (2.11)
After substituting Am(q^-1) from (2.8) into the first term on the right-hand side of the
above equation, we have
Am[y(t + d) - ym(t + d)] = R1 A y(t + d) + S y(t) - Am ym(t + d).    (2.12)
Since from (2.1)
Ay(t+d) = Bu(t) (2.13)
and from (2.4)
Amym(t + d) = Bmr(t), (2.14)
from (2.12) we derive
Am[y(t + d) - ym(t + d)] = B R1 u(t) + S y(t) - Bm r(t).    (2.15)
In (2.12)-(2.15) we omitted the unit delay operator q^-1 in the corresponding polynomials.
If the context is clear this will also be done in the future.
If we set y(t + d) - ym(t + d) = 0, from the above equation it follows that the model-
reference controller is given by
B R1 u(t) = -S y(t) + Bm r(t)    (2.16)
where R1(q^-1) and S(q^-1) are obtained from the polynomial equation (2.8), i.e., the

controller is the same as the one defined by (2.3) and eqs. (2.7) and (2.8).
In the following the adaptive model-reference controller is considered.
Let us denote
R(q^-1) = B(q^-1)R1(q^-1)    (2.17)
deg R(q^-1) = deg B(q^-1) + deg R1(q^-1) = deg B(q^-1) + d - 1.
Then (2.15) can be written in the form
Am[y(t + d) - ym(t + d)] = θ0^T φ(t) - Bm(q^-1)r(t)    (2.18)
where
θ0^T = [r0, r1, ..., r_nR; s0, s1, ..., s_nS]    (2.19)
while (ri, i = 0, 1, ..., nR) are the coefficients of the polynomial R(q^-1) defined by
(2.17). The measurement vector φ(t) is given by the following equation
φ(t)^T = [u(t), ..., u(t - nR); y(t), ..., y(t - nS)]    (2.20)
where
nR = deg R(q^-1)
nS = deg S(q^-1).
Setting y(t + d) = ym(t + d), from (2.18) we obtain the model-reference controller in
the form
θ0^T φ(t) = Bm(q^-1)r(t)    (2.21)
which represents the vector form of the controller (2.16).
In the case when we do not know the real parameter θ0 we will apply the adaptive

control concept. This means that instead of (2.21) we will use the following adaptive
control law
θ̂(t + d - 1)^T φ(t) = Bm(q^-1)r(t)    (2.22)
where θ̂(t + d - 1) is obtained by the estimation scheme
θ̂(t + d) = θ̂(t + d - 1) + μ φ(t)[y(t + d) - ym(t + d)] / (c + ||φ(t)||^2)    (2.23)
(i) 0 < μ < 2
(ii) 0 < c < ∞
(iii) μ/(c + ||φ(t)||^2) is the algorithm gain and c + ||φ(t)||^2 the normalizing factor
(iv) θ̂(0) = θ1 is the initial condition.
Estimator (2.23) is known as the normalized gradient algorithm. It will be globally
stable if
Re{Am(e^jω) - μ/2} > 0,  j^2 = -1    (2.24)
for all 0 ≤ ω ≤ 2π [7]. This is the so-called strictly positive real (SPR) condition.
Condition (2.24) is very restrictive and means that the algorithm will not be
globally stable for all reference models (for all desired polynomials Am(q^-1)).
Therefore we need to design the estimation algorithm so that condition (2.24) will be

avoided. This can be done in the following way.
Let us filter both sides of (2.18) by Am(q^-1). Then we obtain
y(t + d) - ym(t + d) = θ0^T ψ(t) - (Bm(q^-1) / Am(q^-1)) r(t)    (2.25)
where
ψ(t) = (1 / Am(q^-1)) φ(t)    (2.26)
and φ(t) is given by (2.20). Using the fact that (2.4) implies
ym(t + d) = (Bm(q^-1) / Am(q^-1)) r(t)    (2.27)
from (2.25) it follows that
y(t + d) - ym(t + d) = θ0^T ψ(t) - ym(t + d).    (2.28)
If we put y(t + d) - ym(t + d) = 0 in (2.28), the model reference controller becomes
θ0^T ψ(t) = ym(t + d).    (2.29)
The corresponding adaptive model reference controller is given by
θ̂(t + d - 1)^T ψ(t) = ym(t + d)    (2.30)
where θ̂(t + d - 1) is the estimate obtained by the following estimation algorithm
θ̂(t + d) = θ̂(t + d - 1) + μ ψ(t)[y(t + d) - ym(t + d)] / (c + ||ψ(t)||^2)    (2.31)
0 < μ < 2,  0 < c < ∞,
and ψ(t) is the filtered version of φ(t) given by (2.26). Algorithm (2.31) does not
require condition (2.24) in order to be globally stable.
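One step of (2.31) is a single normalized correction. The sketch below also runs the update on a noiseless toy regression, purely to show that with 0 < μ < 2 the estimate converges; the random signals are an illustrative assumption and not the closed-loop setup of the thesis.

```python
import numpy as np

def normalized_gradient_step(theta, psi, tracking_error, mu=0.5, c=1.0):
    """One update of (2.31): theta <- theta + mu * psi * e / (c + ||psi||^2),
    where e = y(t+d) - ym(t+d), 0 < mu < 2, 0 < c < infinity."""
    return theta + mu * psi * tracking_error / (c + psi @ psi)

# toy check: recover a fixed parameter vector from noiseless data
rng = np.random.default_rng(0)
theta_true = np.array([1.0, -0.5, 0.25])
theta = np.zeros(3)
for _ in range(2000):
    psi = rng.normal(size=3)
    e = theta_true @ psi - theta @ psi   # plays the role of y(t+d) - ym(t+d)
    theta = normalized_gradient_step(theta, psi, e)
```

The normalization by c + ||ψ||^2 keeps the step size bounded regardless of signal amplitude, which is what makes the fixed gain μ safe.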

In practical applications better performance can be obtained if we use the least-
squares version of algorithm (2.31), which has the form
θ̂(t + d) = θ̂(t + d - 1) + P(t)ψ(t)[y(t + d) - ym(t + d)]    (2.32)
where the matrix P(t) is defined by
P(t) = P(t - 1) - P(t - 1)ψ(t)ψ(t)^T P(t - 1) / (1 + ψ(t)^T P(t - 1)ψ(t))    (2.33)
P(0) = p0 I,  p0 > 0;  θ̂(0) = θ1.
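A sketch of one least-squares update, with P(t) formed from (2.33) first and then used in (2.32); the toy identification run that follows is again only an illustrative assumption.

```python
import numpy as np

def ls_step(theta, P, psi, error):
    """One least-squares update: P(t) from (2.33), then theta from (2.32),
    with error = y(t+d) - ym(t+d)."""
    Ppsi = P @ psi
    P = P - np.outer(Ppsi, Ppsi) / (1.0 + psi @ Ppsi)   # (2.33)
    theta = theta + P @ psi * error                     # (2.32)
    return theta, P

# toy check: the estimates converge to the parameters generating the data
rng = np.random.default_rng(1)
theta_true = np.array([2.0, -1.0])
theta, P = np.zeros(2), 100.0 * np.eye(2)               # P(0) = p0*I, p0 > 0
for _ in range(200):
    psi = rng.normal(size=2)
    theta, P = ls_step(theta, P, psi, theta_true @ psi - theta @ psi)
```

The matrix P(t) acts as a per-direction adaptive gain, which is why the least-squares variant typically converges faster than the scalar-gain algorithm (2.31).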
In general when external disturbances exist, it is not sufficient to specify
(Bm / Am). With output feedback, there will be additional dynamics that are not excited
by the command signal. In other words we should also specify the observer dynamics
A0(q^-1) which will take care of the disturbances. The observer A0(q^-1) is designed so that the
influence of the disturbances is minimized, i.e., perfect disturbance rejection is
achieved. For that reason we need to know the physical properties of the disturbances [1],
[3], [21], [22]. This will be examined in the next section.

2.1 Design Procedure When Disturbances Are Present
As in the previous case (without disturbances), (2.6) is the basis for the design,
i.e., the polynomials R, S, and T in the controller (2.3) should be chosen so that (2.6) is
satisfied. In this case we will choose
T = Bm A0    (2.34)
where A0(q^-1) is the observer polynomial specified by the designer. We are
emphasizing that in order to specify A0(q^-1), the designer should know the properties of the
disturbances.
Let us now determine the polynomials R and S as a solution of the Diophantine equation
A R1 + q^-d S = A0 Am.    (2.35)
Substituting T from (2.34) and R from (2.17) into the left-hand side of (2.6) we derive
B T / (A R + q^-d B S) = B Bm A0 / (A B R1 + q^-d B S) = Bm A0 / (A R1 + q^-d S).    (2.36)
Using (2.35), from (2.36) it follows
B T / (A R + q^-d B S) = Bm A0 / (Am A0) = Bm / Am    (2.37)
i.e., (2.6) is satisfied, which means that the closed-loop system will behave like the

specified reference model.
Remark: The observer polynomial A0(q^-1) is always a stable polynomial, i.e., its zeros are
inside the unit circle in the Z plane.
If the system (2.1) is non-minimum phase then the polynomial B(q^-1) has unstable
zeros, i.e., zeros outside the unit circle in the Z plane. In this case a different design
procedure is used [1], [3], [22], [23] which will be discussed in the next section.

2.2 Design Procedure When the System Is Non-Minimum Phase
In this case, to make the design possible, we should specify the reference model so that
Bm(q^-1) = B^-(q^-1) B'm(q^-1)    (2.38)
where B^- is the polynomial containing the unstable zeros of the system (2.1). Note that
B(q^-1) can be written in the form
B(q^-1) = B^+(q^-1) B^-(q^-1)    (2.39)
where B+ has all its zeros stable. Design is based on (2.6), which means that the
closed-loop system transfer function obtained from (2.1) and (2.3), is identical to the
transfer function of the reference model. After substituting (2.38) and (2.39) into (2.6)
we derive
B^+ B^- T / (A R + q^-d B^+ B^- S) = (B^- B'm) / Am.    (2.40)
This equation will be satisfied if we choose the controller polynomials R, S and T as follows:
T = A0(q^-1) B'm(q^-1)    (2.41)
where A0 is the observer specified by the designer and
R = B^+ R1    (2.42)
while R1 and S are obtained from the polynomial equation
A R1 + q^-d B^- S = A0 Am.    (2.43)
Thus, from (2.43) we find R1 and S, and from (2.41) and (2.42) we determine
T and R, by which the desired controller is determined. Note that in this case the

polynomial R1 has the form
R1(q^-1) = 1 + r1 q^-1 + ... + r_nR q^-nR
where nR = deg B^- + d - 1, while the polynomial S(q^-1) has the form given by (2.10).
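The factorization (2.39) can be carried out numerically from the zeros of B. In the sketch below the gain b0 is assigned to B^+ and zeros on the unit circle are grouped with B^-; both are conventions assumed here, not fixed by the text.

```python
import numpy as np

def factor_B(B):
    """Split B(q^-1) = B+(q^-1) B-(q^-1): B+ carries the gain b0 and the
    zeros inside the unit circle, B- is monic and carries the unstable zeros.
    B is a coefficient list [b0, b1, ..., bnB] in ascending powers of q^-1."""
    B = np.asarray(B, dtype=float)
    zeros = np.roots(B)                        # zeros of B in the Z plane
    stable = zeros[np.abs(zeros) < 1.0]
    unstable = zeros[np.abs(zeros) >= 1.0]
    B_plus = B[0] * np.real(np.poly(stable))   # B+ (stable part, with gain b0)
    B_minus = np.real(np.poly(unstable))       # B- (unstable part, monic)
    return B_plus, B_minus

# example: B = (1 - 0.5 q^-1)(1 - 2 q^-1) has one stable and one unstable zero
B_plus, B_minus = factor_B(np.convolve([1.0, -0.5], [1.0, -2.0]))
```

Multiplying the two factors back together (polynomial convolution) recovers the original B, which is a convenient self-check before using B^- in (2.43).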
Note that the adaptive control algorithm can be derived in a similar way as in
chapter 2.
Model-following problem can also be solved using pole placement design. An
overview of pole placement design is given in the next chapter.

3. Pole Placement Design
Model-following generally means the case in which the closed-loop output is
made to follow a model specified by (2.4), i.e. both poles and zeros of the model are
specified in the formulation. Pole placement, on the other hand, specifies only the
closed-loop poles. It is one of the simpler direct design procedures. The key idea is to
find a feedback law such that the closed-loop poles have the desired locations. To
make a satisfactory control system it is necessary to consider the zeros of the open-loop
system when specifying the closed-loop poles [1], [3], [13], [22].
Let the system be given as in (2.1).
The control law has the following structure:
R(q^-1)u(t) = -S(q^-1)y(t) + r*(t + d)    (3.1)
where r*(t) is the command signal, while
R(q^-1) = 1 + r1 q^-1 + ... + r_nR q^-nR
and S(q^-1) = s0 + s1 q^-1 + ... + s_nS q^-nS.    (3.2)
The location of the desired poles is specified by the polynomial
A*(q^-1) = 1 + a*1 q^-1 + ... + a*_nA* q^-nA*    (3.3)
nA* = deg A*(q^-1) = 2n - 1    (3.4)
n = max{deg A(q^-1), deg B(q^-1)}.    (3.5)
The polynomials R(q^-1) and S(q^-1) from the controller (3.1) can be obtained by solving the
following Diophantine equation

A(q^-1)R(q^-1) + q^-d B(q^-1)S(q^-1) = A*(q^-1).    (3.6)
This polynomial equation has a unique solution if and only if
deg R(q^-1) = n - 1 and deg S(q^-1) = n - 1    (3.7)
where n is defined by the relation (3.5).
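Like (2.8), equation (3.6) with the degree constraints (3.7) is a linear (Sylvester-type) system in the coefficients of R and S. A minimal sketch, with polynomials as coefficient lists in ascending powers of q^-1, R monic as in (3.2), and NumPy assumed:

```python
import numpy as np

def pole_placement(A, B, d, A_star):
    """Solve A(q^-1)R(q^-1) + q^-d B(q^-1)S(q^-1) = A*(q^-1)   (3.6)
    for monic R and for S, both of degree n - 1 with
    n = max(deg A, deg B) as in (3.7)."""
    n = max(len(A), len(B)) - 1
    n_eq = max(len(A) + n - 1, d + len(B) + n - 1, len(A_star))
    def col(poly, k):                       # coefficients of q^-k * poly
        v = np.zeros(n_eq)
        v[k:k + len(poly)] = poly
        return v
    rhs = col(A_star, 0) - col(A, 0)        # R monic: its leading 1 is known
    cols = [col(A, i) for i in range(1, n)]           # r1 ... r_{n-1}
    cols += [col(B, d + j) for j in range(n)]         # s0 ... s_{n-1}
    x, *_ = np.linalg.lstsq(np.column_stack(cols), rhs, rcond=None)
    return np.concatenate(([1.0], x[:n - 1])), x[n - 1:]   # R, S

# example: move the double pole of A = (1 - 0.9 q^-1)^2 to the zeros of A*
R, S = pole_placement([1.0, -1.8, 0.81], [0.5], d=1,
                      A_star=[1.0, -1.0, 0.25, 0.0])
```

Here the solver returns R = 1 and S = 1.6 - 1.12q^-1, and A R + q^-1 B S equals A*.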
The block diagram of the closed-loop system is illustrated in Figure 3.1.
Figure 3.1 Block diagram of the closed-loop system.

Next we verify the performance of the resulting closed-loop system.
Substituting u(t) from (3.1) into system equation (2.1), it is not difficult to obtain
y(t) = B(q^-1)r*(t) / [A(q^-1)R(q^-1) + q^-d B(q^-1)S(q^-1)]    (3.8)
where from (3.6) we obtain
y(t) = B(q^-1)r*(t) / A*(q^-1)    (3.9)
i.e., the poles of the closed-loop system have the desired locations defined by the
polynomial A*(q^-1).
When the system is unknown, our objective is to design adaptive controller so
that closed-loop system has desired poles defined by polynomial A*(q-1). In this case
we cannot solve polynomial equation (3.6), since the polynomials A(q_1) and B(q-1) are
not known. Therefore we need to identify system parameters (coefficients of the
polynomials A(q_1) and B(q_1)).
Let us design the identification algorithm. Note that the system can be written
in the form
y(t) = -a1 y(t - 1) - ... - a_nA y(t - nA) + b0 u(t - d) + b1 u(t - d - 1)
+ ... + b_nB u(t - d - nB)    (3.10)
or, equivalently,
y(t) = θ0^T φ(t - 1)    (3.11)
where θ0^T = [a1, ..., a_nA; b0, b1, ..., b_nB]
and φ(t - 1)^T = [-y(t - 1), ..., -y(t - nA); u(t - d), ..., u(t - d - nB)].

The identification algorithm has the following form
θ̂(t) = θ̂(t - 1) + μ φ(t - 1)[y(t) - θ̂(t - 1)^T φ(t - 1)] / (c + ||φ(t - 1)||^2)
0 < μ < 2,  0 < c < ∞.
This is the so-called output error algorithm. The reason for this name is that the
algorithm is driven by the output error
e(t) = y(t) - θ̂(t - 1)^T φ(t - 1)    (3.15)
where
ŷ(t) = θ̂(t - 1)^T φ(t - 1)    (3.16)
represents the estimate of the output signal.
Given the parameter estimate
θ̂(t - 1)^T = [â1(t - 1), ..., â_nA(t - 1); b̂0(t - 1), ..., b̂_nB(t - 1)]    (3.17)
at time (t - 1), we can find the estimates of the polynomials A(q^-1) and B(q^-1),
i.e., Â(t - 1, q^-1) = 1 + â1(t - 1)q^-1 + ... + â_nA(t - 1)q^-nA
B̂(t - 1, q^-1) = b̂0(t - 1) + b̂1(t - 1)q^-1 + ... + b̂_nB(t - 1)q^-nB.    (3.18)
If we replace these estimates into (3.6), we obtain
Â(t - 1, q^-1)R̂(t - 1, q^-1) + q^-d B̂(t - 1, q^-1)Ŝ(t - 1, q^-1) = A*(q^-1)    (3.19)
from which we can calculate the estimates of the polynomials
R̂(t - 1, q^-1) and Ŝ(t - 1, q^-1).

Finally we can derive the adaptive pole placement controller in the form
R̂(t - 1, q^-1)u(t - d) = -Ŝ(t - 1, q^-1)y(t - d) + r*(t).    (3.20)
If
lim_{t→∞} Â(t - 1, q^-1) = A(q^-1)
lim_{t→∞} B̂(t - 1, q^-1) = B(q^-1)    (3.21)
we have
lim_{t→∞} R̂(t - 1, q^-1) = R(q^-1)
lim_{t→∞} Ŝ(t - 1, q^-1) = S(q^-1)    (3.22)
and the poles of the closed-loop system will be at the desired locations defined by the
polynomial A*(q^-1).
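The identification recursion that drives this indirect scheme can be sketched on a toy first-order plant; the plant coefficients, white-noise input, and gains μ, c below are illustrative assumptions. With θ = [a1, b0] and φ(t - 1) = [-y(t - 1), u(t - 1)] as in (3.10)-(3.11):

```python
import numpy as np

def identify_step(theta, phi, y, mu=1.0, c=1.0):
    """Gradient identification update driven by the output error
    e(t) = y(t) - theta(t-1)^T phi(t-1), cf. (3.15)-(3.16)."""
    e = y - theta @ phi
    return theta + mu * phi * e / (c + phi @ phi)

# toy plant y(t) = 0.8 y(t-1) + 0.5 u(t-1), i.e. a1 = -0.8, b0 = 0.5
rng = np.random.default_rng(2)
theta = np.zeros(2)                  # estimates of [a1, b0]
y_prev = u_prev = 0.0
for _ in range(5000):
    y = 0.8 * y_prev + 0.5 * u_prev
    phi = np.array([-y_prev, u_prev])
    theta = identify_step(theta, phi, y)
    y_prev, u_prev = y, rng.normal()
```

Once θ̂ has converged, the estimated polynomials Â and B̂ of (3.18) can be formed from it and (3.19) solved for the controller polynomials.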

4. Robust Adaptive Control
In this chapter we will discuss robustness aspects of the model reference
adaptive control systems.
The initial motivation behind the development of adaptive control theory was the
need to account for uncertainty in the parameters and structure of physical systems. In
the 1970s significant progress was made in adaptive control theory, under the
assumption that physical systems are described precisely by linear system models [7],
[18]. In these works, it was assumed that the system parameters are unknown and that
the relative degree and an upper bound on the order of the system are known. At the
beginning of the 1980s a disturbing fact was discovered, namely, that an adaptive
controller designed for the case of a perfect system model in the presence of external
disturbances or small modeling errors, can become unstable. As a result of this
discovery, the issue of robustness of adaptive systems has received a great deal of
attention within the adaptive systems community [16], [24], [26], [27],[30].
In order to guarantee stability, a variety of modifications of the algorithms
originally designed for the ideal case have been proposed, such as the σ- and ε1-
modification, relative dead zone, signal normalization, projection of the parameter
estimates and the like. In the context of deterministic adaptive control, many interesting
results were obtained. Some of these results are concerned with the local stability of
the adaptive system and with the explanation of the stability and instability mechanism
in adaptive algorithms. Important results which address the global stability of the
adaptive system in the presence of unmodeled dynamics were also established. An

excellent unification of the existing robust deterministic adaptive control methods is
developed in [12], where most of the significant works in the area of deterministic
adaptive control are cited.
In the case of stochastic robust adaptive control, there is no satisfactory theory
explaining the behavior of the adaptive system in the presence of unmodeled dynamics.
Here a unified framework for a systematic and quantitative theory of deterministic and
stochastic robust adaptive control is presented. A new methodology for the global
stability analysis and consequently for the design of adaptive systems in the presence of
unmodeled dynamics and external disturbances is developed [26]. The present
approach is quite natural and is based on the construction of corresponding Lyapunov
functions for the different periods of adaptation. The proposed method makes it
possible to treat robust deterministic and stochastic adaptive systems in a unique way,
under formally identical assumptions. The usefulness of the developed methodology is
demonstrated on the robust deterministic and stochastic adaptive control problems.
Upper bounds of the tracking errors, input and output signals are precisely established
in terms of the algorithm design parameters and corresponding H∞ norms. Also it is
shown that whenever, as a consequence of incorrect parameter estimates, an adaptive
system becomes unstable, the adaptive algorithm will stabilize itself by generating
correct parameter estimates. This self stabilization mechanism and the possible
occurrence of the burstings are characterized analytically. The proposed method
provides a precise characterization of the class and size of the admissible unmodeled
dynamics. Specifically, it is shown that the intensity of the tolerated unmodeled

dynamics, expressed in terms of the corresponding H∞ norm, does not depend on the
algorithm gain and is identical to the case of nonadaptive robust control.
The results presented herein show that projection of the parameter estimates in
a certain compact convex set is, in fact, sufficient to guarantee global stability in the
presence of unmodeled dynamics and external disturbances [26], [29]. In the case of
deterministic adaptive control, a normalization sequence will be employed [12], [26].
In the case of stochastic adaptive control, stochastic gradient algorithms with vanishing
gain sequences are utilized. The obtained results do not require a persistent excitation
condition to be satisfied [26].
Let us consider the following discrete-time SISO system with unmodeled dynamics:
A(q^-1)y(t + 1) = B(q^-1)[1 + Δ1(q^-1)]u(t) + A(q^-1)Δ2(q^-1)u(t)
+ [1 + Δ3(q^-1)]w(t + 1)    (4.1)
where {y(t)}, {u(t)}, and {w(t)} are the output, input, and disturbance sequences,
respectively, while q^-1 represents the unit delay operator. The polynomials A(q^-1) and
B(q^-1) describe the nominal system model and are given by
A(q^-1) = 1 + a1 q^-1 + ... + a_nA q^-nA
B(q^-1) = b0 + b1 q^-1 + ... + b_nB q^-nB,  (b0 ≠ 0).    (4.2)
In (4.1), Δi(q^-1), i = 1, 2 denote multiplicative and additive system perturbations. The
dynamics of complex and unstructured external disturbances are represented by the
operator Δ3(q^-1). The transfer functions Δi(z^-1), i = 1, 2, 3 are causal and stable. The

system itself need not be stably invertible. Unstable zeros of the system may be
"hidden" in the unmodeled multiplicative and additive system perturbations [30]. In
(4.1), {w(t)} is the external disturbance and it is assumed to be a white noise sequence
satisfying the following assumption:
S1) Deterministic adaptive control (DAC): |w(t)| ≤ kw < ∞
Stochastic adaptive control (SAC): E{w(t + 1) | Ft} = 0,  E{w(t + 1)^2 | Ft} = σw^2.
We will stabilize system (4.1) by designing the adaptive controller so that for a given
reference signal y*(t) the following criterion is minimized:
DAC: J = (y(t) - y*(t))^2    (4.3)
SAC: J = lim_{N→∞} (1/N) Σ_{t=1}^{N} (y(t) - y*(t))^2
where we assume the following assumption.
S2) {y*(t)} is assumed to be a bounded deterministic sequence, i.e., |y*(t)| ≤ m1 < ∞.
Note that the system model (4.1) can be written in the form
η(t) = θ0^T φ(t) - y*(t + 1) + ȳ(t)    (4.4)
θ0^T = [-a1, ..., -a_nA; b0, b1, ..., b_nB]    (4.5)
φ(t)^T = [y(t), ..., y(t - nA + 1); u(t), ..., u(t - nB)]    (4.6)
η(t) = y(t + 1) - y*(t + 1) - IΔ w(t + 1)    (4.7)
ȳ(t) = Δ0(q^-1)u(t) + [Δ3(q^-1) + (1 - IΔ)]w(t + 1)    (4.8)
Δ0(q^-1) = B(q^-1)Δ1(q^-1) + A(q^-1)Δ2(q^-1).    (4.9)
In (4.7) and (4.8), IΔ is the indicator function defined as follows:
IΔ = 0 for DAC,  IΔ = 1 for SAC.    (4.10)
From (4.4) it is obvious that when ȳ(t) = 0, the control law optimal in the sense of (4.3)
is given by
θ0^T φ(t) = y*(t + 1)    (4.11)
assuming a minimum phase nominal system model. Since θ0 is unknown, we use as
the adaptive control law the "certainty equivalence variant" of (4.11), i.e.,
θ̂(t)^T φ(t) = y*(t + 1)    (4.12)
where θ̂(t) is an estimate of θ0.
From (4.4) and (4.12) it follows that the closed-loop adaptive system is given by
η(t) = -z(t) + ȳ(t)    (4.13)
where
z(t) = δ̃(t)^T φ(t)    (4.14)
and
δ̃(t) = θ̂(t) - θ0.    (4.15)
From (4.12) it is not difficult to obtain
B(q^-1)u(t) - q[A(q^-1) - 1]y(t) = y*(t + 1) - z(t).    (4.16)
Combining (4.7), (4.13), and (4.16) yields

B(q^-1)u(t) = A(q^-1)[-z(t) + y*(t + 1)] + [A(q^-1) - 1][IΔ w(t + 1) + ȳ(t)].    (4.17)
Substituting u(t) from (4.17) into (4.8), we obtain
{B(q^-1) - Δ0(q^-1)[A(q^-1) - 1]}ȳ(t) = -Δ0(q^-1)A(q^-1)[z(t) - y*(t + 1)]
+ {Δ0(q^-1)[A(q^-1) - 1]IΔ
+ [Δ3(q^-1) + 1 - IΔ]B(q^-1)}w(t + 1).    (4.18)
Concerning system model (4.1) the following is assumed:
S3) A lower bound λ0, 0 < λ0 < 1, on the location of the zeros of B(z^-1) and of the poles of the
transfer functions
D1(z^-1) = Δ0(z^-1)A(z^-1) / {B(z^-1) - Δ0(z^-1)[A(z^-1) - 1]}
D2(z^-1) = {Δ0(z^-1)[A(z^-1) - 1]IΔ + [Δ3(z^-1) + 1 - IΔ]B(z^-1)} / {B(z^-1) - Δ0(z^-1)[A(z^-1) - 1]}
is known.
λ is a fixed number satisfying
0 < λ < 1 for DAC,
λ = 1 for SAC.    (4.19)
Based on the available prior information related to the nominal system model, we
introduce the following assumption.
S4) The compact convex set Θ which contains θ0, the sign of b0, and a lower
bound b0,min of |b0| are known. Without loss of generality, we assume that b0 > 0 and

b0 ≥ b0,min > 0.
For the estimation of θ0 the following algorithms are proposed:
θ̂(t + 1) = P{θ̂(t) + (μ / r(t)) φ(t)[y(t + 1) - y*(t + 1)]},  0 < μ ≤ 1    (4.20)
where P{·} projects orthogonally onto Θ, so that there exists a finite constant d0 with
||θ̂(t) - θ0||^2 ≤ d0 < ∞, and b̂0(t) ≥ b0,min > 0 for all t ≥ 1. The algorithm gain
sequence is given by
DAC: r(t) = n0 + n1 φ̄(t)^2,  0 < n0, n1 < ∞    (4.21)
where
φ̄(t)^2 = λ φ̄(t - 1)^2 + ||φ(t)||^2    (4.22)
and λ is a fixed number satisfying
λ0 < λ < 1    (4.23)
where λ0 is defined by assumption S3).
SAC: r(t) = 2 r̄(t)    (4.24)
r̄(t) = max{ 2 max_{1≤τ≤t} ||φ(τ)||^2, r̃(t)^(1-ε) },  0 < ε < 1    (4.25)
r̃(t) = r̃(t - 1) + ||φ(t)||^2 + 1,  r̃(0) ≥ 1.    (4.26)
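A sketch of one step of (4.20). The projection P{·} depends on the chosen set Θ; here, purely as an illustrative assumption, Θ is a Euclidean ball intersected with the half-space b0 ≥ b0,min, and the projection onto the intersection is approximated by a few alternating projections (the text only requires Θ to be compact, convex, to contain θ0, and to keep b̂0(t) ≥ b0,min):

```python
import numpy as np

def project_onto_theta_set(theta, radius=5.0, b0_index=1, b0_min=0.1, n_iter=50):
    """Approximate orthogonal projection onto
    {||theta|| <= radius} intersected with {theta[b0_index] >= b0_min},
    using alternating projections onto the two convex sets."""
    theta = np.asarray(theta, dtype=float).copy()
    for _ in range(n_iter):
        norm = np.linalg.norm(theta)
        if norm > radius:
            theta *= radius / norm                      # onto the ball
        theta[b0_index] = max(theta[b0_index], b0_min)  # onto the half-space
    return theta

def projected_gradient_step(theta, phi, y, y_star, r, mu=1.0, **set_kwargs):
    """One step of (4.20): normalized gradient move, then projection onto Theta."""
    step = theta + (mu / r) * phi * (y - y_star)
    return project_onto_theta_set(step, **set_kwargs)

theta = projected_gradient_step(np.array([9.0, -3.0]), np.array([1.0, 1.0]),
                                y=1.0, y_star=0.0, r=2.0)
```

For DAC the gain r(t) would come from (4.21)-(4.22) and for SAC from (4.24)-(4.26); the projection is what keeps the estimates inside Θ and the b0 estimate bounded away from zero.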
Let us define the following H∞ norms:
Cy(λ) = || Δ0(z)A(z) / {B(z) - Δ0(z)[A(z) - 1]} ||_∞^λ
Cw(λ) = || {Δ0(z)[A(z) - 1]IΔ + [Δ3(z) + 1 - IΔ]B(z)} / {B(z) - Δ0(z)[A(z) - 1]} ||_∞^λ    (4.27)
where λ is given by (4.23).
Let us determine the admissible unmodeled dynamics quantified by the constant Cy(λ)
which is defined by (4.27). We assume that
S5) β1 = 1/2 - (1 - μ)Cy(λ) - 2Cy(λ)^2 > 0.
Let us now determine Lyapunov type difference inequalities which describe the
behavior of the parameter estimation errors for the proposed adaptive control schemes.
A. Deterministic Adaptive Control
V(t + 1) ≤ V(t) - 2μ(1 - μ) z(t)^2 / r(t)
+ [2μ(1 - μ)|z(t)| |ȳ(t)| + μ^2 ȳ(t)^2] / r(t)    (4.28)
where z(t) is given by (4.14) and
V(t) = ||δ̃(t)||^2    (4.29)
where δ̃(t) is defined by (4.15).
B. Stochastic Adaptive Control
V(t + 1) ≤ V(t) - 2μ(1 - μ) z(t)^2 / r(t)
+ [2μ(1 - μ)|z(t)| |ȳ(t)| + μ^2 ȳ(t)^2] / r(t)
+ 2μ z(t)w(t + 1) / r(t)    (4.30)

where z(t) is given by (4.14).
The recursive inequalities (4.28) and (4.30) provide the motivation for investigating the
global stability of the following recursion for t ≥ 1:
V(t + 1) + S(t + 1)/r(t) ≤ V(t) + S(t)/r(t - 1) - 2μ(1 - μ) z(t)^2 / r(t)
+ [2μ(1 - μ)|z(t)| |ȳ(t)| + μ^2 ȳ(t)^2] / r(t) + β(t)^2 / r(t)^2    (4.31)
where 0 < μ < 1 and all variables are real valued with finite initial conditions.
In (4.31)
S(t + 1) = 2μ { d(t) - Σ_{j=1}^{t} δ̃(j)^T φ(j) w(j + 1) } ≥ 0,  t ≥ 1    (4.32)
d(t) = Cd r(t)^a,  0 < Cd < ∞    (4.33)
a = a1, 1/2 < a1 < 1, if SAC uses r(t) = 2 r̄(t) from (4.24);
a = 1 - ε, 0 < ε < 1, if SAC uses r̄(t) = max{ 2 max_{1≤τ≤t} ||φ(τ)||^2, r̃(t)^(1-ε) } from (4.25),
while r̃(t) is given by (4.26), and
β(t)^2 = 2μ^2 ||φ(t)||^2 w(t + 1)^2.    (4.34)

In adaptive control, filtering and prediction problems, V(t) in (4.29) usually
represents the squared norm of the parameter estimation error, while z(t) represents the
state of the adaptive system, and contains information related to the performance of the
adaptive system (tracking error, prediction error, etc.). Similarly, ȳ(t) describes the
effects of the unmodeled dynamics and external disturbances. Also r(t) represents an
algorithm gain sequence (normalization sequence) which assumes a different form for
different algorithms.
In the presence of unmodeled dynamics and external disturbances, the adaptive
control algorithm possesses not only self-tuning but also a self-stabilization property
[26]. The latter means the following: whenever the adaptive system becomes unstable,
as a consequence of the incorrect parameter estimates, the adaptive algorithm will
stabilize itself by generating correct parameter estimates. During its operation, the
adaptive controller passes through two phases characterized by the time intervals Ωk
and Tk defined by
Ωk = [τk, σk),  Tk = [σk, τk+1),  k ≥ 1.    (4.35)
The sequences τk and σk, k ≥ 1, are defined as follows
1 ≤ τ1 < σ1 < τ2 < σ2 < ... < τk < σk < τk+1 < ...    (4.36)
so that
W(t + 1) ≤ 0 for t ∈ Ωk and W(t + 1) > 0 for t ∈ Tk    (4.37)
where the sequence W(t) will be referred to as a bursting function and defined as
W(t + 1) = μ Σ_{j=1}^{t} { [1 - μ + (1 - μ)Cy(λ) + μ Cy(λ)^2] z(j)^2
- 2(1 - μ)|z(j)| |ȳ(j)| - μ ȳ(j)^2 - 2[d(j) - d(j - 1)] }    (4.38)
where the constant Cy(λ) quantifies the admissible unmodeled dynamics and is defined by
(4.27). It turns out that the behavior of the sequence W(t) is crucial for the
convergence of the recursive scheme (4.31) and consequently for the stability of the
sequence z(t). A detailed mathematical treatment of the global stability analysis of the
difference inequality is given in [26], where it is shown that despite the presence of
unmodeled dynamics all signals in the adaptive loop are bounded.
It is not difficult to see that assumption S5) can be written in the form
β1 = [1 - Cy(λ)]{1 - [1 - Cy(λ)]}    (4.39)
which implies that the restriction on the tolerated unmodeled dynamics is given by the
following relation
Cy(λ) < 1.    (4.40)
In the literature on adaptive control, it has thus far been proved that robustness is
ensured provided that Cy(λ) is sufficiently small. Relation (4.40) shows that Cy(λ)
does not have to be small and as far as global boundedness is concerned, adaptive
control has robustness comparable to a controller which uses the true system
parameters. Since in the case of SAC, λ = 1, condition (4.40) becomes Cy(1) < 1,
where Cy(1) is defined by (4.27). Obviously, in this case the intensity of the

admissible unmodeled dynamics does not depend on any algorithm parameters. In the
case of DAC, Cy(λ) depends on λ, which is used in the algorithm design (4.21). In
fact we need to know λ satisfying (4.23). In contrast to many existing results, it is
obvious from (4.40) that the size of the tolerated unmodeled dynamics does not depend
on the algorithm gain μ.
Bursting manifests itself as a sudden spike in the output, which may follow a
periodic pattern. For a while the system might show unstable behavior and later show
absolute normalcy. It is quite surprising because a diverging system suddenly gets
back into the converging regime. The need to study bursting stems from the biological
point of view. It is obvious that the nervous system closely resembles an adaptive
system. The nervous system adapts to the stimulus exactly the same way as an
adaptive system. Each neuron in the nervous system can be treated as an adaptive
element. It basically makes or breaks the contact with the other neuron so as to pass the
neuron signal. When one of the neurons suddenly shows deviation from the usual
behavior (similar to bursting), the entire system will go berserk for a while. A good
example is an epileptic person. Bursting is analogous to the appearance of convulsions
in a person. For some time, the system shows discrepancy but then comes to the
original state. If the cause of bursting can be identified and modeled, it can be
extrapolated to the human system and various kinds of epileptic disorders may be
resolved [28].
In the time interval Qk, the bursting function W(t + 1) ≤ 0, which implies the stability
of the input and output signals for t ∈ Qk. From (4.37) it is clear that in these time
intervals no characterization of the function V(t + 1) + S(t + 1) / r(t) can be made. This
function may diverge, thereby generating drifts of the parameter estimates. As a
consequence of this, the controller parameters may escape from the set of stabilizing
controllers and the adaptive system will become unstable. Accordingly, the time
intervals Qk correspond to the drift phase of the adaptive algorithm. As time
progresses, the residual z(t) becomes larger than η(t) and the
bursting function W(t + 1) given by (4.38) becomes positive. From relation (4.37)
it is obvious that these periods of operation of the adaptive system correspond to the
time intervals Tk, k ≥ 1. Therefore, drift of the parameter estimates in the time
intervals Qk gives rise to the bursting phenomenon. The Lyapunov function V(t + 1) +
[S(t + 1) + W(t + 1)] / r(t) decreases for t ∈ Tk, where fast adaptation takes place. As a
consequence of this, the parameter estimates re-enter the set of stabilizing controllers.
It is obvious that the time intervals Tk, k ≥ 1, correspond to the self-stabilization phase
of the adaptive system. In order to stabilize the system faster over the time intervals Tk,
the algorithm gain μ does not have to be small. Specifically, a large μ allows for a
stronger influence of large tracking errors on the estimation of the controller
parameters. On the other hand, in order to slow down the parameter drift in the time
intervals Qk, a small μ is required. The problem is that the bursting function W(t + 1)
is not measurable, and consequently the designer cannot change the coefficient μ during
the operation of the adaptive system.
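The interval bookkeeping behind (4.36)-(4.37) can be sketched numerically: given the sign sequence of W(t + 1), the drift intervals Qk (where W ≤ 0) and the self-stabilization intervals Tk (where W > 0) are simply the maximal runs of each sign. The sketch below is illustrative only; the sequence W is synthetic, not computed from (4.38).

```python
def partition_intervals(W):
    """Split time indices into maximal runs with W[t] <= 0 (Q-type, drift)
    and W[t] > 0 (T-type, self-stabilization), per (4.36)-(4.37)."""
    runs = []
    start = 0
    for t in range(1, len(W)):
        if (W[t] > 0) != (W[t - 1] > 0):  # sign class changed
            runs.append(("T" if W[t - 1] > 0 else "Q", start, t - 1))
            start = t
    runs.append(("T" if W[-1] > 0 else "Q", start, len(W) - 1))
    return runs

# synthetic bursting function: long quiet (drift) phases, one short burst
W = [-1.0] * 5 + [2.0] * 2 + [-0.5] * 4
print(partition_intervals(W))  # [('Q', 0, 4), ('T', 5, 6), ('Q', 7, 10)]
```

The alternation of Q- and T-type runs mirrors the ordering τ1 < σ1 < τ2 < σ2 < ... in (4.36).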

5. Relations Between MRAS and STR
The relation between model-following and pole placement establishes the
connections between model reference adaptive systems and self-tuning regulators
(STR) [2], [3], [20]. This will be explained in this chapter.
The model-reference adaptive systems originated from a deterministic servo
problem and the self-tuning regulator from a stochastic regulation problem. In spite of
their different origins, it is clear from Figure 5.1 and Figure 5.2 that they are closely
related.
Figure 5.1 Block diagram of a model-reference adaptive system (MRAS).

Figure 5.2 Block diagram of a self-tuning regulator (STR).
Both systems have two feedback loops. The inner loop is an ordinary feedback
loop with a process and a regulator. The regulator has adjustable parameters which are
set by the outer loop. The adjustments are based on feedback from the process inputs
and outputs. However, the methods for design of the inner loop and the techniques
used to adjust the parameters in the outer loop are different. The regulator parameters
are updated directly in the MRAS in Figure 5.1. In the STR in Figure 5.2 they are
updated indirectly via parameter estimation and design calculations. This difference is,
however, not fundamental, because the STR may be modified so that the regulator

parameters are updated directly [1], [3], [6], [19], [20].
The indirect self-tuning algorithms are straightforward applications of the idea
of self-tuning. They can be applied to a wide range of control design and parameter
estimation methods. There are three main difficulties with the method. The stability
analysis is complicated because the regulator parameters depend on the estimated
parameters in a complicated way. Usually a set of linear equations in the controller
parameters has to be solved. The map from process parameters to regulator parameters
may have singular points. This happens in design methods based on pole placement,
for example, if the estimated process models have poles and zeros that coincide.
Common poles and zeros have to be eliminated before the pole placement problem can
be solved. Stability analysis has therefore been carried out in only a few cases. To
ensure that the parameters converge to the correct values, it is necessary that the model
structure be correct and that the process input be persistently exciting.
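The design-calculation step and its singular points can be illustrated with a minimal pole-placement solve (an illustrative sketch, not the thesis's own design equations (2.7)-(2.10)): for an estimated model A(z) = z² + a1 z + a2, B(z) = b0 z + b1 and desired characteristic polynomial z³ + am1 z² + am2 z + am3, the controller polynomials R(z) = z + r1 and S(z) = s0 z + s1 solve A R + B S = Am, a linear (Sylvester-type) system that becomes singular exactly when A and B share a root, i.e., when an estimated pole and zero coincide.

```python
import numpy as np

def sylvester(a1, a2, b0, b1):
    """Sylvester-type matrix of A(z) = z^2 + a1 z + a2 and B(z) = b0 z + b1
    arising from the pole-placement equation A R + B S = Am."""
    return np.array([[1.0, b0, 0.0],
                     [a1,  b1, b0],
                     [a2, 0.0, b1]])

def pole_placement(a1, a2, b0, b1, am1, am2, am3):
    """Solve for R(z) = z + r1, S(z) = s0 z + s1 so that
    A R + B S = z^3 + am1 z^2 + am2 z + am3."""
    M = sylvester(a1, a2, b0, b1)
    return np.linalg.solve(M, [am1 - a1, am2 - a2, am3])

# generic parameter estimates: the design calculation succeeds
r1, s0, s1 = pole_placement(-0.7, 0.1, 1.0, -0.3, -0.4, 0.05, 0.0)

# A = (z - 0.5)(z - 0.2), B = z - 0.5: the estimated pole and zero coincide,
# so the map from process to regulator parameters is singular
print(abs(np.linalg.det(sylvester(-0.7, 0.1, 1.0, -0.5))) < 1e-9)  # True
```

This is why common poles and zeros of the estimated model must be eliminated before the pole placement equations can be solved.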

6. Simulations
Some of the properties of the algorithms are illustrated through simulations in
this chapter. The simulations have been done using the simulation program MATLAB.
The program listings are in the Appendix.
Example 6.1:
A continuous-time system with the transfer function
G(s) = 0.15e^(−0.45s) / (s + 0.15),
sampled with a sampling time of T = 0.45 sec., gives the discrete-time system
y(t) = 0.8607y(t − 1) + 0.0792u(t − 1) + 0.0601u(t − 2)     (6.1.1)
or, in delay operator form,
y(t)[1 − 0.8607q⁻¹] = q⁻¹[0.0792 + 0.0601q⁻¹]u(t).
The reference signal is a square wave with amplitude 1 and a period of 100 samples.
Also chosen are
Am(q⁻¹) = 1 − 1.5q⁻¹ + 0.6q⁻²
Ao(q⁻¹) = 1
B⁻ = 1
Bm = Am(1) = 0.1.
The sampled model has a zero z = −0.759.
From (2.7), (2.8), (2.9) and (2.10)
S = −0.6393 + 0.6q⁻¹
R = 0.0792 + 0.0601q⁻¹
and T = 0.1.
Substituting the values of R, S and T in controller (2.3) we obtain
u(t) = 8.072y(t) − 7.576y(t − 1) − 0.759u(t − 1) + 1.263r(t).     (6.1.2)
This is the non-adaptive model-reference controller design. In case the parameters are
not known, the model-reference adaptive design is used. From the simulations we can see
that algorithm (2.32) is much superior in terms of convergence rate compared with
(2.31). The control signal has an oscillatory tendency because of the cancellation of
the zero at −0.759.
Simulations have also been done
(i) in the presence of unmodeled dynamics
γ(1 / (1 − 0.9q⁻¹))u(t), where 0.001 ≤ γ ≤ 0.03,
and external disturbance
(1 − 0.8q⁻¹)w(t),
where w(t) is a white noise sequence satisfying E{w(t)} = 0, E{w(t)²} = 1;
(ii) for the time-varying parameter
Am(q⁻¹) = 1 − (1.5 + 0.7cos((π/25)t))q⁻¹ + 0.6q⁻²;
(iii) for a constant reference signal
r(t) = 50.
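The non-adaptive design (6.1.2) can be checked quickly in closed loop: driving plant (6.1.1) with controller (6.1.2) reproduces the reference-model dynamics y/r = 0.1q⁻¹/Am(q⁻¹), whose static gain is one. A minimal sketch (in Python rather than the thesis's MATLAB), with all coefficients taken from (6.1.1)-(6.1.2):

```python
import numpy as np

N = 300
y = np.zeros(N + 1)  # plant output
u = np.zeros(N)      # control signal
r = np.ones(N)       # unit-step reference

for t in range(1, N - 1):
    # controller (6.1.2)
    u[t] = 8.072*y[t] - 7.576*y[t-1] - 0.759*u[t-1] + 1.263*r[t]
    # plant (6.1.1)
    y[t+1] = 0.8607*y[t] + 0.0792*u[t] + 0.0601*u[t-1]

print(round(y[N-1], 3))  # -> 1.0 (unit static gain, as designed)
```

After the Am-governed transient decays, the output settles exactly at the reference value.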

Example 6.2:
Let the system to be controlled be described by the transfer function
G(s) = 1 / [s(s + 1)].
This can be regarded as a normalized transfer function of a DC motor, i.e., the DC
motor can be described by a second-order model with one integrator and one time
constant. The system input is the voltage to the motor and the output is the shaft
position. The time constant is due to the mechanical parts of the system, while the
dynamics due to the electrical parts are neglected. The pulse transfer operator for the
sampling period T = 0.5 sec. is
q⁻¹B(q⁻¹) / A(q⁻¹) = q⁻¹(0.107 + 0.09q⁻¹) / (1 − 1.61q⁻¹ + 0.61q⁻²).     (6.2.1)
It is not difficult to see that the sampled-data system has a zero z = −0.84, which is
inside the unit circle but poorly damped. At the same time the system has two poles,
z1 = 1 and z2 = 0.61. We would like to design controller (2.3) so that the closed-loop
transfer function has a response which corresponds to a natural frequency ωn = 1 rad/sec.
and a damping coefficient ζ = 0.7.
These specifications correspond to the following reference model denominator after
discretization
Am(q⁻¹) = 1 − 1.32q⁻¹ + 0.5q⁻².     (6.2.2)
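The coefficients in (6.2.2) follow from mapping the specified continuous-time pole pair s = −ζωn ± jωn√(1 − ζ²) through z = e^(sT). A quick check (a sketch using only this standard pole mapping):

```python
import numpy as np

wn, zeta, T = 1.0, 0.7, 0.5
# desired continuous-time pole mapped to the z-plane
p = np.exp((-zeta*wn + 1j*wn*np.sqrt(1 - zeta**2)) * T)
a1 = -2 * p.real   # coefficient of q^-1 in Am
a2 = abs(p)**2     # coefficient of q^-2 in Am
print(round(a1, 2), round(a2, 2))  # -> -1.32 0.5
```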
From (2.7), (2.8), (2.9) and (2.10)
R = 0.107 + 0.09q⁻¹
S = 0.29 − 0.11q⁻¹
T = 0.18.
Substituting the values of R, S and T in controller (2.3) we obtain
(0.107 + 0.09q⁻¹)u(t) = 0.18r(t) − (0.29 − 0.11q⁻¹)y(t).     (6.2.3)
As in the previous example, the model-reference adaptive design is used in the
simulations in the presence of unmodeled dynamics. Burstings are observed, which
generate unacceptable or unstable control behavior.

Example 6.3:
This example aims at illustrating the design on a real system (an air heater). The
diagram of the system and of the digital control loop is represented in Figure 6.3.1.
Figure 6.3.1 Digital control of an air heater.
The air is heated by means of a resistor supplied by a computer-controlled thyristor
power amplifier. The variable controlled is the air temperature at the output, which is

measured by a thermocouple. A sampling period of T = 4 sec. and a process model
dead time d = 1 were chosen. The sampled-data system is described by a third-order
model
(1 − 1.384q⁻¹ + 0.589q⁻² − 0.094q⁻³)y(t) = q⁻¹(0.237 + 0.123q⁻¹ − 0.074q⁻²)u(t).
It has three poles, z1 = 0.7878, z2 = 0.2981 + 0.1745j and z3 = 0.2981 − 0.1745j,
and two zeros, z1 = 0.3566 and z2 = −0.8756.
A reference model of the following structure is chosen
q⁻¹Bm(q⁻¹) / Am(q⁻¹) = q⁻¹(0.9973 + 0.0353q⁻¹ + 0.0229q⁻²) / (1 + 0.0471q⁻¹ + 0.0092q⁻² − 0.0009q⁻³).
From (2.7), (2.8), (2.9) and (2.10)
R = 0.237 + 0.123q⁻¹ − 0.074q⁻²
S = 1.4311 − 0.5798q⁻¹ + 0.0931q⁻²
T = Bm = 0.9973 + 0.0353q⁻¹ + 0.0229q⁻².
Substituting the values of R, S and T in controller (2.3) we obtain
(0.237 + 0.123q⁻¹ − 0.074q⁻²)u(t) = (0.9973 + 0.0353q⁻¹ + 0.0229q⁻²)r(t)
− (1.4311 − 0.5798q⁻¹ + 0.0931q⁻²)y(t).
Burstings are also observed in the simulations due to the presence of unmodeled
dynamics.
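The stated poles and zeros of the air-heater model can be verified directly from the polynomial coefficients (the middle numerator coefficient, 0.123, is reconstructed here from the listed zeros):

```python
import numpy as np

# A(z) = z^3 - 1.384 z^2 + 0.589 z - 0.094, B(z) = 0.237 z^2 + 0.123 z - 0.074
poles = np.roots([1.0, -1.384, 0.589, -0.094])
zeros = np.roots([0.237, 0.123, -0.074])
print(np.sort_complex(poles))  # ~ 0.2981 -/+ 0.1745j, 0.7878
print(np.sort_complex(zeros))  # ~ -0.8756, 0.3566
```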

Figures 6.1-6.49 (simulation plots; the horizontal axis of every figure is time in seconds):
Fig. 6.1 Parameter estimate for the 1st order using NGA.
Fig. 6.2 Parameter estimate for the 1st order using NGA.
Fig. 6.3 Parameter estimate for the 1st order using NGA.
Fig. 6.4 Parameter estimate for the 1st order using NGA.
Fig. 6.5 Output and reference signals for the 1st order.
Fig. 6.6 Control signal for the 1st order.
Fig. 6.7 Tracking error for the 1st order.
Fig. 6.8 Model output for the 1st order.
Fig. 6.9 Parameter estimate for the 1st order using LSA.
Fig. 6.10 Parameter estimate for the 1st order using LSA.
Fig. 6.11 Parameter estimate for the 1st order using LSA.
Fig. 6.12 Parameter estimate for the 1st order using LSA.
Fig. 6.13 Output and reference signals for the 1st order.
Fig. 6.14 Control signal for the 1st order.
Fig. 6.15 Tracking error for the 1st order.
Fig. 6.16 Model output for the 1st order.
Fig. 6.17 For the 2nd order with unmodeled dynamics.
Fig. 6.18 For the 2nd order with unmodeled dynamics.
Fig. 6.19 For the 2nd order with unmodeled dynamics.
Fig. 6.20 For the 2nd order with unmodeled dynamics.
Fig. 6.21 Output and reference signals for the 2nd order.
Fig. 6.22 Control signal for the 2nd order.
Fig. 6.23 Tracking error for the 2nd order.
Fig. 6.24 Model output for the 2nd order.
Fig. 6.25 For the 3rd order with unmodeled dynamics.
Fig. 6.26 For the 3rd order with unmodeled dynamics.
Fig. 6.27 For the 3rd order with unmodeled dynamics.
Fig. 6.28 For the 3rd order with unmodeled dynamics.
Fig. 6.29 For the 3rd order with unmodeled dynamics.
Fig. 6.30 For the 3rd order with unmodeled dynamics.
Fig. 6.31 Output and reference signals for the 3rd order.
Fig. 6.32 Control signal for the 3rd order.
Fig. 6.33 Tracking error for the 3rd order.
Fig. 6.34 Model output for the 3rd order.
Fig. 6.35 For ym(t+1) = r(t) = 50 with unmodeled dynamics.
Fig. 6.36 For ym(t+1) = r(t) = 50 with unmodeled dynamics.
Fig. 6.37 For ym(t+1) = r(t) = 50 with unmodeled dynamics.
Fig. 6.38 For ym(t+1) = r(t) = 50 with unmodeled dynamics.
Fig. 6.39 Output and reference signals for ym(t+1) = r(t) = 50.
Fig. 6.40 Control signal for ym(t+1) = r(t) = 50.
Fig. 6.41 Tracking error for ym(t+1) = r(t) = 50.
Fig. 6.42 For the time-varying case with unmodeled dynamics and white noise.
Fig. 6.43 For the time-varying case with unmodeled dynamics and white noise.
Fig. 6.44 For the time-varying case with unmodeled dynamics and white noise.
Fig. 6.45 For the time-varying case with unmodeled dynamics and white noise.
Fig. 6.46 Output and reference signals for the time-varying case.
Fig. 6.47 Control signal for the time-varying case.
Fig. 6.48 Tracking error for the time-varying case.
Fig. 6.49 Model output for the time-varying case.

7. Conclusion
One concludes that model-reference adaptive systems represent a natural
complement to optimal linear control systems with an index of performance,
allowing one to cope more efficiently with practical problems.
Even though the model-reference technique may be used for other purposes
(identification) and with other types of models (series, series-parallel), in this thesis
only the parallel model direct control action is discussed. A general procedure for the
design of globally stable model-reference adaptive systems with only input-output
measurements was given. The advantage of implicit algorithms (direct MRAC) over
the explicit algorithms is established. It has been shown that the design computations
are eliminated in implicit algorithms, due to the direct estimation of the controller
parameters. Since the implicit algorithms usually have the disadvantage that all process
zeros are cancelled, other design strategies such as the pole placement design and STR
design methods are also explored. The importance of the SPR condition in the analysis
and design of model-reference adaptive systems was also emphasized.
This thesis investigates why robustness against unmodeled dynamics is essential for a
good adaptive algorithm. The importance of globally stable adaptive schemes which
perform satisfactorily under real conditions involving noise, nonlinearities,
time-varying plant parameters and, perhaps most important of all, truncation errors
resulting from the mismatch between plant and model (i.e., unmodeled dynamics) is
discussed. The phenomenon of bursting (sudden spikes) is investigated and simulation
results are shown for the proposed robust algorithms.

Normalized Gradient Algorithm
delete diary
diary on
% algorithm gain
mu = 1.99;
% normalizing constant
c = 1;
% given value of the polynomial
Bm = 0.1; T = Bm;
% initial conditions
ym(2) = 0; ym(3) = 0; uf(1) = 0; uf(2) = 0; yf(1) = 0; yf(2) = 0; yf(3) = 0;
u(2) = 0; y(3) = 0;
s0(3) = 0; s1(3) = 0; r0(3) = 0.1; r1(3) = 0;   % r0 must be nonzero: it is divided by below
for t = 3 : 997
% reference signal
r(t) = square((pi / 50) * t);
% model output signal
ym(t+1) = 1.5*ym(t) - 0.6*ym(t-1) + Bm*r(t);
% filtered control signal
uf(t) = (-r1(t)*uf(t-1) + s0(t)*yf(t) + s1(t)*yf(t-1) + ym(t+1)) / r0(t);
% control signal
u(t) = uf(t) - 1.5*uf(t-1) + 0.6*uf(t-2);
% plant output
y(t+1) = 0.8607*y(t) + 0.0792*u(t) + 0.0601*u(t-1);
% filtered plant output
yf(t+1) = 1.5*yf(t) - 0.6*yf(t-1) + y(t+1);
% tracking error
e(t) = y(t+1) - ym(t+1);
% estimation scheme
w = mu*e(t) / (c + yf(t)^2 + yf(t-1)^2 + uf(t)^2 + uf(t-1)^2);
% parameter estimates
s0(t+1) = s0(t) + w*(-yf(t));
s1(t+1) = s1(t) + w*(-yf(t-1));
r0(t+1) = r0(t) + w*uf(t);
r1(t+1) = r1(t) + w*uf(t-1);
end
plot(s0), xlabel('time (sec)'), ylabel('s0'), title('parameter estimate')
plot(s1), xlabel('time (sec)'), ylabel('s1'), title('parameter estimate')
plot(r0), xlabel('time (sec)'), ylabel('r0'), title('parameter estimate')
plot(r1), xlabel('time (sec)'), ylabel('r1'), title('parameter estimate')
plot(y), xlabel('time (sec)'), ylabel('y(t), r(t)'), title('system output and reference signal')
hold on
plot(r)
hold off
plot(u), xlabel('time (sec)'), ylabel('u(t)'), title('control signal')
plot(e), xlabel('time (sec)'), ylabel('e(t)'), title('tracking error')
plot(ym), xlabel('time (sec)'), ylabel('ym(t)'), title('model output signal')
diary off

Least Square Algorithm
delete diary
diary on
% initial conditions
ym(2) = 0; ym(3) = 0; uf(1) = 0; uf(2) = 0; yf(1) = 0; yf(2) = 0; yf(3) = 0;
u(2) = 0; y(3) = 0;
s0(3) = 0; s1(3) = 0; r0(3) = 0.1; r1(3) = 0;   % r0 must be nonzero: it is divided by below
% identity matrix
I = [1 0 0 0; 0 1 0 0; 0 0 1 0; 0 0 0 1];
% initial covariance matrix
P = 100 * I;
for t = 3 : 997
% reference signal
r(t) = square((pi / 50) * t);
% model output signal
ym(t+1) = 1.5*ym(t) - 0.6*ym(t-1) + 0.1*r(t);
% filtered control signal
uf(t) = (-r1(t)*uf(t-1) + s0(t)*yf(t) + s1(t)*yf(t-1) + ym(t+1)) / r0(t);
% control signal
u(t) = uf(t) - 1.5*uf(t-1) + 0.6*uf(t-2);