Robustness of extended least squares parameter estimation in the presence of unmodeled dynamics

Material Information

Robustness of extended least squares parameter estimation in the presence of unmodeled dynamics
Sierzchula, David Raymond
Publication Date:
Physical Description:
v, 100 leaves : illustrations ; 29 cm


Subjects / Keywords:
Least squares ( lcsh )
Parameter estimation ( lcsh )
Least squares ( fast )
Parameter estimation ( fast )
bibliography ( marcgt )
theses ( marcgt )
non-fiction ( marcgt )


Includes bibliographical references (leaves 99-100).
General Note:
Submitted in partial fulfillment of the requirements for the degree, Master of Science, Electrical Engineering.
General Note:
Department of Electrical Engineering
Statement of Responsibility:
by David Raymond Sierzchula.

Record Information

Source Institution:
University of Colorado Denver
Holding Location:
Auraria Library
Rights Management:
All applicable rights reserved by the source institution and holding location.
Resource Identifier:
30674322 ( OCLC )
LD1190.E54 1993m .S54 ( lcc )

Full Text
David Raymond Sierzchula
B.S., Rensselaer Polytechnic Institute, 1985
A thesis submitted to the
Faculty of the Graduate School of the
University of Colorado at Denver
in partial fulfillment
of the requirements for the degree of
Master of Science
Electrical Engineering

This thesis for the Master of Science
degree by
David Raymond Sierzchula
has been approved for the
Department of
Electrical Engineering

Sierzchula, David Raymond (M.S., Electrical Engineering)
Robustness of Extended Least Squares Parameter Estimation in the Presence of
Unmodeled Dynamics
Thesis directed by Assistant Professor Miloje S. Radenkovic
Robustness of least squares parameter estimation to unmodeled dynamics
without using parameter projection is an unsolved problem. It is examined here, in the
context of a discrete-time SISO self-tuning regulator, using computer simulation
experiments to supplement ongoing research. Convergence behavior of least squares
estimates is first examined by simulating an optimal predictor applied to a simple
ARMA system. Numerical examples demonstrate stability of the self-tuning controller
when the intensities of unmodeled dynamics and unstructured external disturbances are
bounded appropriately. In these examples, parameter estimates converge without using
parameter projection. Behavior of the scalar quantity p(t), the time-varying term in the
denominator of the covariance matrix calculation, demonstrates the idea of a self-
excitation mechanism in the adaptive loop.
This abstract accurately represents the content of the candidate's thesis. The form and
content of this abstract are approved. I recommend its publication.
Miloje S. Radenkovic

First and foremost, the author wishes to express deep appreciation for the help,
patience and understanding of Tracy L. Godfrey, who provided incalculable emotional
support, technical help, and a fine personal computing system.
Professor Miloje S. Radenkovic provided the topic and served as the thesis
advisor, and his research results served as a point-of-departure for investigation. He
also provided positive feedback to the author at a time when it was sorely needed.
Martin Marietta Astronautics financially supported the author's studies through
its Study Under Company Auspices program. Special gratitude is also due to the
author's supervisor, Thomas C. Redfield, as well as the author's department and
coworkers, for their patience and encouragement when time away from work was
needed to perform research.
Professors Jan T. Bialasiewicz and Douglas A. Ross were kind enough to serve
as committee members, and the author greatly appreciates their efforts in reading and
commenting on this work and attending its defense.
Finally, the author thanks God for providing the strength to complete this work.

1. Introduction
1.1. Early Background of Adaptive Control
1.2. Robustness
1.3. Least Squares and the Self-Tuning Controller
1.4. Overview of Analysis
2. Parameter Estimation Using Extended Least Squares
2.1. The Recursive Extended Least Squares Algorithm
2.2. The Optimal Predictor
2.2.1. Algorithm Development
2.2.2. Simulations
3. The Self-Tuning Controller
3.1. Development of Algorithm
3.2. Review of Stability Results
4. Adaptive Control Simulations
4.1. Simple Numerical Examples
4.1.1. Case I
4.1.2. Case II
4.2. ATS-6 Satellite
5. Conclusions
APPENDIX A: MATLAB SCRIPTS

1. Introduction
Robustness of an important class of parameter estimation and adaptive control
algorithms is examined here; the primary tool used in this inquiry is simulation by
digital computer.
Adaptive control algorithms have great potential for practical application, but the
current immaturity of adaptive control theory requires safety nets and modifications to
standard algorithms in order to assure stable and safe performance for systems in use.
These safety nets and modifications serve to complicate the realization of adaptive
control systems by imposing hardware requirements that increase cost; robustness
without such modifications would ease implementation. Adaptive control theory is still
maturing; global stability and robustness results are widely scattered in the literature,
and unification is not yet comprehensive. Thus, research continues whose objective is
to verify the feasibility of a variety of adaptive control algorithms in realistic situations.
The following sections in this chapter present a historical perspective of
adaptive control from early attempts to the present investigation in robustness, illustrate
the motivation for robustness research, and review results for one particularly important
combination of adaptive control algorithm (direct self-tuning) and parameter estimator
(extended least squares). This lays the groundwork for computer simulations of the
considered system, which will build upon analytical stability results recently generated
and demonstrate simplification of the conditions necessary for its stability.

1.1. Early Background of Adaptive Control
The definition of adaptive control has long been a matter of debate, but there
seems to be at least partial agreement [1]. Generally, it encompasses any regulator that
can modify its own behavior or adjust its own parameters in response to learning of, or
changes in, the dynamics of the process it controls and the disturbances associated with
that process; it excludes classical constant-gain feedback systems. Descriptions often
applied to adaptive control systems are learning and self-organizing. Some in the
adaptive control community consider such systems to have two categories of states,
which change at two different rates; the slower rate governs the parameters of the
regulator.
An adaptive approach to control was first researched in the 1940s and 1950s in
conjunction with high-performance aircraft [1], [2]. Such aircraft were intended to
perform in a wide range of changing operating conditions requiring different control
dynamics. Researchers investigated adaptive control strategies as a way of adjusting
the aircraft controller mechanisms as the operating conditions varied. Flight test
disasters occurred, and analog hardware and poor theoretical insight combined to
restrict design solutions, consequently reducing adaptive control research until the burst
of control theory contributions of the 1960s. The development at that time of such
theoretical notions as state space, stochastic control, dynamic programming, recursive
schemes, system identification, and parameter estimation, coupled with advances in
stability theory, provided new insight into adaptive control.

1.2. Robustness
An ideal system model for the purpose of studying adaptive control
algorithms is characterized by linearity, time-invariance, known relative degree, and an
upper bound on its order; it may also have well-modeled disturbances [3], [4]. A
sizable body of theory exists to describe the behavior of such systems; many important
stability results and proofs, usually based on highly restrictive assumptions, appeared
in the late 1970s and early 1980s. However, the realistic situation of actual physical
systems found in industrial applications introduces perturbations affecting the ideal
model:
- all physical systems possess dynamic behavior which cannot be modeled
accurately, resulting in reduced-order modeling of the system being controlled,
which can lead to problems such as systematic bias errors [5];
- external disturbances influence many physical systems;
- noise is present in measurements of the states and outputs of physical systems.
Adaptive control researchers noted that stable behavior in the ideal situation of
algorithms being proposed and studied did not guarantee such behavior in the presence
of influences found in realistic situations. They also found drift of parameter estimates
to occur. For example, [6] questioned robustness of adaptive control algorithms to
unmodeled dynamics and external disturbances; in this instance, maintaining stability
requires application of dominantly rich reference inputs providing persistent excitation
of appropriate order [7].
In order to realize adaptive control in real-world applications, it is clearly
necessary to show robustness: how adaptive algorithms can achieve control

objectives and maintain stability in the presence of realistic conditions. The focus on
robustness in adaptive control research began in the early 1980s. Meanwhile, by the
same time frame, digital computer technology had developed so far that adaptive
controllers began to appear on the commercial market; these could realize adaptive
control algorithms, often much more complex than non-adaptive controllers, for
practical applications. From this sequence of events, one concludes that the maturity of
the theory lags the hardware technology required for physical realization in many
applications; indeed, adaptive controllers in use today require safety nets, often
elaborate, for safe operation [1]. This motivates the proliferation of robustness
research today. Many of the issues and strategies identified in research to date are
reviewed in [8], [4].
There are several classes of adaptive control algorithms: self-tuners, model
reference, pole placement, dual control, and variations on each. Likewise, different
methods exist of estimating parameters: least squares, gradient, stochastic averaging,
and variations of these. A wide range of algorithm possibilities results, and specific
examples of robustness research within this range include:
- investigation of estimated plant model admissibility through restriction of parameter
space and sufficient excitation for adaptive pole placement using several classes of
least squares algorithms [4];
- use of set membership and a priori knowledge of the plant model to provide
robustness [9], [10];
- development of a Robust Ultimate Boundedness Theorem for global stability
analysis and design of perturbed adaptive systems with gradient and normalization
estimation algorithms [11]; the results show parameter projection is required for
global stability, but persistent excitation is not;
- examination of least squares estimation robustness as applied to self-tuning
regulators, which is addressed in the following.
1.3. Least Squares and the Self-Tuning Controller
One adaptive control scheme that has received significant attention over the
years is the self-tuning controller proposed by Astrom and Wittenmark in 1973 [12].
This control scheme incorporates the certainty equivalence principle to formulate its
control signal directly from the estimated parameters of the process. Other self-tuning
regulators are often described as indirect adaptive control algorithms, in the sense that
the controller design equation is recursively solved using the estimated system
parameters; once the controller parameters are determined, the controller generates the
signal which causes the desired dynamic response from the plant [1]. The parameter
estimator of the Astrom and Wittenmark self-tuning controller, an extended least
squares algorithm, is an attractive choice because it has a convergence rate superior to
other algorithms, such as the normalized gradient.

Figure 1.1: Simple block diagram of direct self-tuning controller
Researchers have studied least squares parameter estimation in self-tuning
control for the case where unmodeled dynamics are not present in the system. For
example, convergence and consistency of least squares estimates have been shown in
[13], assuming additive noise in the system is Gaussian and white. The author of [13]
also obtained system stability and optimality, which applied specifically to the Astrom
and Wittenmark self-tuning regulator, as well as a wide variety of other adaptive control
schemes. Convergence of self-tuning algorithms using least squares was also shown
in [14].

In the presence of unmodeled dynamics and unstructured external disturbances,
it is useful to weaken as much as possible the assumptions for robustness, in order to
simplify the algorithms. Research has recently obtained global stability of the Astrom
and Wittenmark self-tuning controller, provided a parameter projection mechanism
assures bounding of the parameter estimates [3]. It is of interest here to investigate
elimination of the parameter projection mechanism while maintaining stability of the
controller. The paper [3] also identifies and formalizes a so-called self-stabilization
mechanism in the adaptation loop that causes the covariance matrix of the least squares
process to become bounded. A deeper review of these results of global stability and
self-excitation later in this thesis provides a point of departure for the computer
simulations that follow. These simulations demonstrate, using example systems, that a
self-tuning regulator exhibits stable behavior under realistic perturbation conditions,
and that its self-excitation mechanism operates as postulated, without using parameter
projection.
1.4. Overview of Analysis
Chapter 2 examines the estimation aspect of the problem. A review of the least
squares method precedes an extension of the algorithm for use in an optimal predictor,
to study the convergence behavior of its estimates. Chapter 3 derives a self-tuning
controller algorithm that uses least squares, and reviews important stability results for
this controller when it uses parameter projection. Finally, Chapter 4 summarizes
simulations of the controller without parameter projection in a perturbed environment.
Two simple numerical examples and one based on a real plant incorporate unstructured

external disturbances and unmodeled dynamics; stability of the output, behavior of the
parameter estimates, and a measure of the self-stabilization mechanism are examined.

2. Parameter Estimation Using Extended Least Squares
2.1. The Recursive Extended Least Squares Algorithm
According to [1], the principle of least squares was developed by Gauss to
determine planetary orbits; he argued that the unknown parameters of a mathematical
model should be chosen such that the sum of the squares of the differences between the
observed values and computed values, times their respective precision factors, is
minimized. As developed in [1], for the regression model

y(t) = φ1(t)θ1 + φ2(t)θ2 + ... + φn(t)θn = φ^T(t)θ   (2.1)

with y(t) the observed variable, θ^T = [θ1 θ2 ... θn] the vector of unknown parameters,
and φ^T(t) = [φ1 φ2 ... φn] the vector of regressors, each of which is paired with an
observation from an experiment, it is desired to determine the parameters such that the
outputs calculated from the model agree in the least squares sense as closely as possible
with the observations. The loss function which expresses this is

V(θ, t) = (1/2) Σ_{i=1}^{t} (y(i) - φ^T(i)θ)^2 = (1/2) ||Y - Φθ||^2   (2.2)

where Y(t) = [y(1) y(2) ... y(t)]^T and Φ(t) is the matrix whose rows are
φ^T(1), ..., φ^T(t).
The loss function is quadratic in θ; provided Φ^T(t)Φ(t) is nonsingular, (2.2) can be
rewritten and minimized with respect to θ. The result for the estimate θ̂ of the
unknown parameter vector is

θ̂ = (Φ^T Φ)^-1 Φ^T Y   (2.5)

If we define the covariance matrix

P(t) = (Φ^T(t)Φ(t))^-1 = ( Σ_{i=1}^{t} φ(i)φ^T(i) )^-1   (2.6)

then (2.5) can be expressed as

θ̂(t) = P(t) Σ_{i=1}^{t} φ(i)y(i)   (2.7)

Φ^T(t)Φ(t) nonsingular is called an excitation condition. If the estimate were to
converge to its true value as the number of observations tended to infinity, the estimate
would be said to possess the property of consistency. This estimation method
obviously assumes all the data has been collected prior to calculating the estimate. It
would be useful to calculate updated estimates recursively for use in real-time
applications such as adaptive control algorithms.
Development of the recursive computation begins with (2.6), from which

P^-1(t) = P^-1(t-1) + φ(t)φ^T(t)   (2.8)

(2.7) can alternately be expressed as

θ̂(t) = P(t) [ Σ_{i=1}^{t-1} φ(i)y(i) + φ(t)y(t) ]   (2.9)

Manipulation of the previous two equations gives

Σ_{i=1}^{t-1} φ(i)y(i) = P^-1(t-1)θ̂(t-1) = P^-1(t)θ̂(t-1) - φ(t)φ^T(t)θ̂(t-1)   (2.10)

and using this result in (2.9) yields the following recursive relationship:

θ̂(t) = θ̂(t-1) - P(t)φ(t)φ^T(t)θ̂(t-1) + P(t)φ(t)y(t)
     = θ̂(t-1) + P(t)φ(t)(y(t) - φ^T(t)θ̂(t-1))   (2.11)
     = θ̂(t-1) + P(t)φ(t)e(t)

To avoid repeated matrix inversions in calculating the covariance matrix P(t)
recursively, the Matrix Inversion Lemma

(A + BCD)^-1 = A^-1 - A^-1 B (C^-1 + D A^-1 B)^-1 D A^-1   (2.12)

is applied to (2.8), giving

P(t) = (Φ^T(t)Φ(t))^-1 = (Φ^T(t-1)Φ(t-1) + φ(t)φ^T(t))^-1
     = (P^-1(t-1) + φ(t)φ^T(t))^-1   (2.13)
     = P(t-1) - P(t-1)φ(t)(I + φ^T(t)P(t-1)φ(t))^-1 φ^T(t)P(t-1)

This can be written for an SISO system as

P(t) = P(t-1) - P(t-1)φ(t)φ^T(t)P(t-1) / (1 + φ^T(t)P(t-1)φ(t))   (2.14)

because φ^T(t)P(t-1)φ(t) is a scalar. (2.11) and (2.14) summarize the algorithm. In
words, these say that a correction proportional to the prediction error
e(t) = y(t) - φ^T(t)θ̂(t-1) is added to the previous
estimate; the prediction error is based on the previous parameter estimate. The product
P(t)φ(t) is a vector containing weighting factors which mix the correction term and the
previous estimate.
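To make the update concrete, the following is a minimal pure-Python sketch of (2.11) and (2.14) for a hypothetical two-parameter regression. It is not taken from the thesis; the data-generating model, noise level, and initialization are assumptions chosen purely for illustration.

```python
import random

def rls_step(theta, P, phi, y):
    """One recursive least squares update, equations (2.11) and (2.14)."""
    # P(t-1)*phi(t): the unnormalized weighting vector
    Pphi = [P[0][0] * phi[0] + P[0][1] * phi[1],
            P[1][0] * phi[0] + P[1][1] * phi[1]]
    denom = 1.0 + phi[0] * Pphi[0] + phi[1] * Pphi[1]  # 1 + phi^T P phi, a scalar
    # Covariance update (2.14)
    P_new = [[P[i][j] - Pphi[i] * Pphi[j] / denom for j in range(2)]
             for i in range(2)]
    # Prediction error and parameter update (2.11); note P(t)*phi = Pphi/denom
    e = y - (phi[0] * theta[0] + phi[1] * theta[1])
    theta_new = [theta[k] + (Pphi[k] / denom) * e for k in range(2)]
    return theta_new, P_new

random.seed(0)
theta = [0.0, 0.0]
P = [[1e7, 0.0], [0.0, 1e7]]  # large initial covariance, in the spirit of (2.17)
for _ in range(500):
    phi = [random.gauss(0.0, 1.0), random.gauss(0.0, 1.0)]    # exciting regressors
    y = 2.0 * phi[0] - 1.0 * phi[1] + random.gauss(0.0, 0.1)  # true theta = [2, -1]
    theta, P = rls_step(theta, P, phi, y)
print(theta)  # settles close to the true parameters [2.0, -1.0]
```

With persistently exciting regressors and white measurement noise, the recursion recovers the true parameters; the large initial covariance simply makes the first few corrections aggressive.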
The algorithm and notation to be utilized in the remainder of this thesis
incorporate slight modifications to the preceding algorithm, but the concept is the same.
One change is that the time indices of the new and previous parameter estimates and the
error are advanced by one. Note that the time index of the weighting factors vector
p(t)φ(t) remains the same. The error has been redefined as the a priori prediction error
of the output. Compare the following to (2.11):

θ̂(t+1) = θ̂(t) + p(t)φ(t)e(t+1) = θ̂(t) + p(t)φ(t)(y(t+1) - θ̂^T(t)φ(t))   (2.15)

The time index of the left-hand side indicates a predictive estimate of the parameters.
Another difference is that the covariance matrix is now represented by p(t) instead of
P(t):

p(t) = p(t-1) - p(t-1)φ(t)φ^T(t)p(t-1) / (1 + φ^T(t)p(t-1)φ(t))   (2.16)

As a final note, for recursive calculations, the covariance matrix must be
initialized; this is done as follows:

p(0) = p0 I,  p0 > 0   (2.17)
2.2. The Optimal Predictor
The least squares algorithm just developed will now be used in an optimal
predictor, or optimal estimator, to study the behavior of the parameters it estimates
through time. Normally distributed white noise is input to an ARMA system, whose

output is predicted. Least squares estimates of the system parameters are obtained
without first filtering the output measurements of the system. Unmodeled dynamics are
also simulated; good convergence of parameter estimates in this situation will indicate
that a similar parameter estimation approach may be useful for the self-tuning control
2.2.1. Algorithm Development
To develop a variation of the optimal predictor algorithm developed in [15], first
consider the following discrete-time linear time-invariant ARMA system model

A(q^-1)y(t+1) = C(q^-1)ω(t+1)   (2.18)

with output sequence {y(t)} and Gaussian noise input sequence {ω(t)} with zero
mean and variance σω^2. The operator q^-1 is the unit delay. The polynomials contain
coefficients whose values are assumed unknown; they are defined as

A(q^-1) = 1 + a1 q^-1 + ... + a_{nA} q^-nA
C(q^-1) = 1 + c1 q^-1 + ... + c_{nC} q^-nC   (2.19)

Behavior of the coefficient estimates is the subject of this phase of study; the
Spectral Factorization Theorem and associated Representation Theorem [16] will be
used to build an optimal predictor that provides the desired estimates. The
Representation Theorem says:
Given a rational spectral density function Φ(ω), there exists an
asymptotically stable linear dynamical system such that the output of the system
is a stationary process with spectral density Φ(ω) if the input is discrete-time
white noise.

As applied to the considered problem, this theorem is interpreted to mean that one can
construct any signal y(t) by passing white noise ω(t) through the appropriate stable
linear filter C(q^-1)/A(q^-1); conversely, if one knows y(t), one can reconstruct the
coefficients of the filter.
Figure 2.1: Dynamic system with parameters to be estimated
The desired polynomial coefficients, or system parameters, will be estimated by
recursively predicting the system output one step ahead using knowledge of the system
output up to the present; this information is then used in a recursive least squares
algorithm to estimate the parameter values, which are the filter coefficients previously
referred to. The a priori predicted system output is represented by ŷ(t+1). Start
developing the estimator by examining the quantity C[y(t+1) - ŷ(t+1) - ω(t+1)],
which involves the a priori prediction, the actual system output and the noise input:

C[y(t+1) - ŷ(t+1) - ω(t+1)] = C[y(t+1) - ŷ(t+1)] - Cω(t+1)   (2.20)

Expanding the first term on the right gives

C[y(t+1) - ŷ(t+1) - ω(t+1)] = [y(t+1) - ŷ(t+1)]
  + c1[y(t) - ŷ(t)] + ... + c_{nC}[y(t+1-nC) - ŷ(t+1-nC)]   (2.21)
  - Cω(t+1)

Using (2.18) to substitute for Cω(t+1), we have

C[y(t+1) - ŷ(t+1) - ω(t+1)] =
  y(t+1) - ŷ(t+1) + c1[y(t) - ŷ(t)] + ... + c_{nC}[y(t+1-nC) - ŷ(t+1-nC)]   (2.22)
  - Ay(t+1)

and expanding Ay(t+1) gives

C[y(t+1) - ŷ(t+1) - ω(t+1)] =
  y(t+1) - ŷ(t+1) + c1[y(t) - ŷ(t)] + ... + c_{nC}[y(t+1-nC) - ŷ(t+1-nC)]   (2.23)
  - y(t+1) - a1 y(t) - ... - a_{nA} y(t-nA+1)

Cancellation of y(t+1) leaves the following expression for the predictor:

C[y(t+1) - ŷ(t+1) - ω(t+1)] = -ŷ(t+1)
  + c1[y(t) - ŷ(t)] + ... + c_{nC}[y(t+1-nC) - ŷ(t+1-nC)]   (2.24)
  - a1 y(t) - ... - a_{nA} y(t-nA+1)

Optimality in the predictor is achieved by setting the input sequence ω(t+1) equal to
the a priori prediction error

ω(t+1) = y(t+1) - ŷ(t+1) = ε(t+1)   (2.25)

making the left side of (2.24) equal to zero. (2.25) represents an optimal predictor,
which means the only difference between the real and estimated outputs is the white
noise. Two modifications remain to be made to (2.24) to use it as an optimal predictor.
First, since the true parameter values are unknown, they are replaced with their present
estimates as in [15]. The vector of true system parameters is

θ0^T = [a1 ... a_{nA}  c1 ... c_{nC}]   (2.26)
so the corresponding vector of estimates is

θ̂^T(t) = [â1(t) ... â_{nA}(t)  ĉ1(t) ... ĉ_{nC}(t)]   (2.27)

An extended version of the least squares method discussed in the previous section of
this thesis is used to estimate the parameters; two sets of coefficients corresponding to
two variables (here, a to y(t) and c to the noise estimate ε̄(t)) are estimated, so the
regressor and parameter vectors are extended and partitioned accordingly. Second, in a
departure from [15], the a priori prediction error is replaced by the a posteriori
prediction error

ε̄(t) = y(t) - ȳ(t)   (2.28)

in the regressor, where the a posteriori prediction is

ȳ(t) = θ̂^T(t)φ(t-1)   (2.29)

The regressor for the algorithm to be used here is then

φ(t) = [-y(t) ... -y(t-nA+1)  ε̄(t) ... ε̄(t-nC+1)]^T   (2.30)

The correction term for the parameter estimate, however, still uses the a priori
prediction error, where the a priori prediction is

ŷ(t+1) = θ̂^T(t)φ(t)   (2.31)
Note the other key differences between the optimal predictor algorithm which
will be used here and that developed in [15]. One is that the model in [15] accounts for
an exogenous input u(t), which the system model (2.18) does not require;
consequently, the corresponding terms disappear in the algorithm developed here.
Also, the parameter estimator to be used here incorporates an unfiltered version of the
regressor; the regressor in [15] is inversely filtered by the system noise dynamics prior
to calculation of the covariance matrix and parameter estimate.
In summary, the following equations must be implemented to simulate the
optimal predictor:
System model: (2.18)
Predictions and prediction errors: (2.25), (2.28), (2.29), (2.31)
Extended least squares parameter estimator: (2.15), (2.16), (2.17), (2.27), (2.30)
Initial conditions must be supplied for the parameter estimates, system output, and a
posteriori prediction error.

2.2.2. Simulations
The optimal predictor algorithm developed above was simulated using MATLAB
on a digital computer. The objective was to examine the convergence behavior of the
parameter estimates for signs of drift or divergence in the presence of unmodeled
dynamics. The nominal model corresponding to equation (2.18) was chosen to be a
second-order system with stable polynomials having no common zeros:

(1 + 0.7 q^-1 + 0.12 q^-2) y(t+1) = (1 + c1 q^-1 + 0.25 q^-2) ω(t+1)   (2.32)

All initial conditions were chosen as zero, and the initial covariance matrix was
p(0) = p0 I with p0 = 10^7. The system with unmodeled dynamics added was

A(q^-1) y(t+1) = C(q^-1) ω(t+1) + (γ / (1 - 0.95 q^-1)) ω(t+1)   (2.33)

Parameter γ controlled the intensity of the unmodeled dynamics. The variance of the
noise input ω(t) was E{ω^2(t)} = σω^2 = 1. To give the parameter estimates opportunity
to settle, one thousand recursions were performed. A sample period of one second was
implicitly assumed.
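The experiment can be sketched as follows in pure Python (the thesis's own scripts are MATLAB, in Appendix A). This is an illustrative reconstruction, not the actual experiment: the middle C-coefficient, which is not recoverable from (2.32) here, is taken as 0.5 purely for demonstration, and the random seed is arbitrary.

```python
import random

def els_update(theta, P, phi, err):
    """Extended least squares update, equations (2.15)-(2.16), pure Python."""
    n = len(theta)
    Pphi = [sum(P[i][j] * phi[j] for j in range(n)) for i in range(n)]
    denom = 1.0 + sum(phi[i] * Pphi[i] for i in range(n))
    P_new = [[P[i][j] - Pphi[i] * Pphi[j] / denom for j in range(n)]
             for i in range(n)]
    theta_new = [theta[i] + (Pphi[i] / denom) * err for i in range(n)]
    return theta_new, P_new

def run_predictor(gamma, steps=1000, seed=1):
    """Simulate (2.33) and estimate [a1, a2, c1, c2] with the optimal predictor."""
    random.seed(seed)
    a1, a2 = 0.7, 0.12       # A(q^-1) from (2.32)
    c1, c2 = 0.5, 0.25       # c1 = 0.5 is an assumed value for illustration
    y1 = y2 = w1 = w2 = 0.0  # y(t), y(t-1), omega(t), omega(t-1)
    e1 = e2 = v = 0.0        # a posteriori errors, unmodeled-dynamics state
    theta = [0.0] * 4
    P = [[1e7 if i == j else 0.0 for j in range(4)] for i in range(4)]  # (2.17)
    for _ in range(steps):
        w = random.gauss(0.0, 1.0)
        v = 0.95 * v + gamma * w                   # gamma/(1 - 0.95 q^-1) term
        y_new = -a1 * y1 - a2 * y2 + w + c1 * w1 + c2 * w2 + v
        phi = [-y1, -y2, e1, e2]                   # regressor (2.30)
        y_pri = sum(th * p for th, p in zip(theta, phi))   # a priori (2.31)
        theta, P = els_update(theta, P, phi, y_new - y_pri)  # (2.15)-(2.16), (2.25)
        y_post = sum(th * p for th, p in zip(theta, phi))  # a posteriori (2.29)
        e1, e2 = y_new - y_post, e1                # (2.28)
        y1, y2 = y_new, y1
        w1, w2 = w, w1
    return theta

print(run_predictor(0.0))  # no unmodeled dynamics (gamma = 0)
print(run_predictor(1.0))  # full-intensity unmodeled dynamics (gamma = 1)
```

Consistent with the observations reported below, one should expect the estimates to settle into a bounded neighborhood in both runs rather than necessarily converging to the true coefficients.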
The first experiment established the drift behavior of the parameter estimate
θ̂^T(t) = [â1(t) â2(t) ĉ1(t) ĉ2(t)] when no unmodeled dynamics were present, which
corresponded to setting γ = 0. The following four plots illustrate typical trends in the
parameter estimates.

Figure 2.2: Optimal estimator with no unmodeled dynamics: parameter estimate a1hat
Figure 2.3: Optimal estimator with no unmodeled dynamics: parameter estimate a2hat
Figure 2.4: Optimal estimator with no unmodeled dynamics: parameter estimate c1hat
Figure 2.5: Optimal estimator with no unmodeled dynamics: parameter estimate c2hat
None of the estimates exhibited divergent behavior or a tendency to wander away from
a neighborhood established after the transient died out. Also notice that, though
estimates seemed to converge, they were not consistent (did not converge to the true
parameter value). The parameter estimates were also initialized to the true parameter
values for another experimental case, and consistency was still not obtained.
The next experiment invoked the unmodeled dynamics at full intensity, that is,
with γ = 1. Plots of the parameter estimates follow for a typical case under this
condition.

Figure 2.6: Optimal estimator with unmodeled dynamics: parameter estimate a1hat
Figure 2.7: Optimal estimator with unmodeled dynamics: parameter estimate a2hat
Figure 2.8: Optimal estimator with unmodeled dynamics: parameter estimate c1hat
Figure 2.9: Optimal estimator with unmodeled dynamics: parameter estimate c2hat
The plots indicate convergence behavior similar to, if not better than, the case of no
unmodeled dynamics. The estimates ĉ1(t) and ĉ2(t) actually seem to show signs of
improved convergence.
One can conclude from the observations made on these simulation experiments
that, at least for this low-order system, this optimal estimation algorithm using extended
least squares provides convergence of parameter estimates whether unmodeled
dynamics are present in the system or not.

3. The Self-Tuning Controller
This chapter develops a model of the direct self-tuning controller or regulator,
first proposed in [12] and examined further in [3], then introduces models for the
considered perturbations. These models provide the groundwork for a review of
pertinent stability and robustness results which, in turn, set the stage for simulation
experiments in Chapter 4.
3.1. Development of Algorithm
Consider the following discrete-time SISO system, which is to be regulated in
closed loop:

y(t+1) + a1 y(t) + ... + an y(t-n+1) = b0 u(t) + b1 u(t-1) + ... + bm u(t-m)   (3.1)

Sequences {y(t)} and {u(t)} are output and input sequences, respectively. The
system's parameters are constant but unknown; a self-tuning regulation algorithm shall
be developed. The system model can be expressed in two alternative ways which will
be useful later. The first and most obvious is

A(q^-1) y(t+1) = B(q^-1) u(t)   (3.2)

for which the two polynomials are defined as

A(q^-1) = 1 + a1 q^-1 + ... + an q^-n
B(q^-1) = b0 + b1 q^-1 + ... + bm q^-m   (3.3)

The other alternate expression is formed by first solving for y(t+1):

y(t+1) = -a1 y(t) - ... - an y(t-n+1) + b0 u(t) + b1 u(t-1) + ... + bm u(t-m)   (3.4)

and then defining the regressor vector

φ(t) = [-y(t) ... -y(t-n+1)  u(t) ... u(t-m)]^T   (3.5)

and parameter vector

θ0^T = [a1 ... an  b0 ... bm]   (3.6)

resulting in

y(t+1) = θ0^T φ(t)   (3.7)
The system output should track a reference input y*(t), t ≥ 1, defined by a bounded
deterministic sequence {y*(t)} where |y*(t)| ≤ m_y < ∞. The control performance
objective for a given reference input is to asymptotically achieve zero tracking error:

lim_{t→∞} [y(t) - y*(t)] = 0   (3.8)

Another way of stating the performance objective is to minimize the functional criterion

J = lim sup_{N→∞} (1/N) Σ_{t=1}^{N} [y(t) - y*(t)]^2   (3.9)

thereby obtaining the least mean-square error between reference input and system
output. To achieve this control objective, the reference signal is incorporated in the
system model (3.7):

y(t+1) - y*(t+1) = θ0^T φ(t) - y*(t+1)   (3.10)

Optimality in the sense of (3.8) or (3.9) dictates that the left-hand side of (3.10) shall
equal zero, leaving the following control law:

θ0^T φ(t) = y*(t+1)   (3.11)

In terms of the control signal, the previous equation is

u(t) = (1/b0)[-b1 u(t-1) - ... - bm u(t-m) + a1 y(t) + ... + an y(t-n+1) + y*(t+1)]   (3.12)
Since the parameters are unknown, we will formulate the control using the certainty
equivalence principle [1]; that is, we will treat the estimates of the parameters at any
given point in time as if they are the true parameter values, and substitute the estimates
for their respective parameters into the control law. This gives the following
expressions for the control law:

θ̂^T(t) φ(t) = y*(t+1)   (3.13)

u(t) = (1/b̂0(t))[-b̂1(t) u(t-1) - ... - b̂m(t) u(t-m)
  + â1(t) y(t) + ... + ân(t) y(t-n+1) + y*(t+1)]   (3.14)

To avoid division by zero, care must be taken with the value of parameter estimate
b̂0(t); it is essential at least to choose a nonzero initial condition for this estimate, and
other steps may also be taken. For instance, it is often a good strategy to set a lower
bound on b̂0(t), making it essential to know some information ahead of time about b0;
this is not always easy, and a detailed discussion of this approach is outside the scope
of this thesis.
An extended least squares algorithm accomplishes the parameter estimation in
nearly the same way as explained and utilized in Chapter 2. The only difference is the
replacement of the a priori prediction in the error term with the reference input here in
the regulator. For reference, the equations are repeated here with the appropriate
modifications for the considered control application:

θ̂(t+1) = θ̂(t) + p(t)φ(t)(y(t+1) - y*(t+1))   (3.15)

p(t) = p(t-1) - p(t-1)φ(t)φ^T(t)p(t-1) / (1 + φ^T(t)p(t-1)φ(t))   (3.16)

p(0) = p0 I,  p0 > 0   (3.17)

This is a self-tuning controller because it converges to the optimal control strategy that
would be arrived at if the system characteristics were known [12].
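The pieces above can be assembled into a short illustrative loop. This is a sketch, not the thesis's simulation: the first-order plant (n = 1, m = 0), its coefficients, the noise and dither levels, and the lower bound applied to b̂0(t) are all assumptions chosen for demonstration.

```python
import random

random.seed(2)
a1, b0 = -0.5, 1.0            # assumed "true" plant: y(t+1) = -a1*y(t) + b0*u(t)
theta = [0.0, 0.3]            # [a1hat, b0hat]; b0hat(0) nonzero, per the text
P = [[1e4, 0.0], [0.0, 1e4]]  # p(0) = p0*I, as in (3.17)
y, r = 0.0, 1.0               # output and constant reference level
errs = []
for t in range(400):
    ystar = r + random.gauss(0.0, 0.1)  # dithered reference for excitation
    # Certainty-equivalence control law (3.14) with n = 1, m = 0:
    u = (theta[0] * y + ystar) / theta[1]
    phi = [-y, u]                       # regressor (3.5)
    y_next = -a1 * y + b0 * u + random.gauss(0.0, 0.05)
    # Least squares update (3.15)-(3.16) with error y(t+1) - y*(t+1):
    Pphi = [P[0][0] * phi[0] + P[0][1] * phi[1],
            P[1][0] * phi[0] + P[1][1] * phi[1]]
    denom = 1.0 + phi[0] * Pphi[0] + phi[1] * Pphi[1]
    P = [[P[i][j] - Pphi[i] * Pphi[j] / denom for j in range(2)] for i in range(2)]
    e = y_next - ystar
    theta = [theta[k] + (Pphi[k] / denom) * e for k in range(2)]
    theta[1] = max(theta[1], 0.05)  # lower bound on b0hat, as discussed above
    errs.append(abs(y_next - r))
    y = y_next
print(sum(errs[-50:]) / 50)  # late-run tracking error settles near the noise floor
```

Because the control law forces θ̂^T(t)φ(t) = y*(t+1) at every step, the estimator's correction term is exactly the tracking error, which is the sense in which the regulator tunes itself toward the optimal strategy.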
Now, assume the order of the nominal model (3.1) is known (its parameter
values remain unknown), and furthermore, the real system contains unmodeled
dynamics and a stochastic term. The new model with perturbations is expressed as

A(q^-1) y(t+1) = B(q^-1)[1 + Δ1(q^-1)] u(t) + A(q^-1) Δ2(q^-1) u(t)
  + [1 + q^-1 Δ3(q^-1)] ω(t+1)   (3.18)

Δ1(q^-1) and Δ2(q^-1), respectively, represent multiplicative and additive system
perturbations. These are causal transfer functions, and Δ3(q^-1) is stable. The structure
of the perturbations is

Δ1,2(q^-1) = γ q^-d Δ̄1,2(q^-1)   (3.19)
where γ is a constant gain representing the intensity of the unmodeled dynamics. The
control objective as stated in [3] for this perturbed model is similar to those for the
deterministic nominal model, that is, to minimize the functional criterion

J = lim sup_{N→∞} (1/N) Σ_{t=1}^{N} E{[y(t) - y*(t)]^2}   (3.20)
Stochastic inputs affect the system in two ways: through an additive disturbance, and in the reference input. The disturbance term contains the unstructured external disturbance sequence {ω(t)} and its causal and stable dynamics Δ₃(q⁻¹). The statistical properties of the disturbance noise sequence are

E{ω(t+1)} = 0
E{ω(t+1)²} = σ_ω²   (3.21)
This noise sequence is not filtered for the simple examples that will be simulated in the following chapter, so Δ₃(q⁻¹) = 0 here. A different stochastic sequence {ε(t)} will be incorporated in the reference input such that

y*(t+1) = r + ε(t+1)   (3.22)

to assure persistent excitation (PE) of the system. Bypassing a detailed discussion of PE conditions, it is noted in [1] that certain random signals are PE of any order, enabling identification of systems of any order. The statistical properties of the input noise sequence are

E{ε(t)} = 0
E{ε(t)²} = σ_ε²

and {ε(t)} is uncorrelated with {ω(t)}. PE will be maintained here by ensuring σ_ε² > 0.
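The reference construction (3.22) is simple to realize. The sketch below (illustrative names, seed, and lengths) generates y*(t) = r + ε(t) and confirms the sample statistics stay near the design values, which is what maintains PE:

```python
import numpy as np

# Reference input of (3.22): constant set point r plus a zero-mean white
# excitation sequence {eps(t)} with variance sigma_eps^2 > 0.
rng = np.random.default_rng(1)
r, sigma_eps2, N = 50.0, 2.0, 2000
eps = np.sqrt(sigma_eps2) * rng.standard_normal(N)
ystar = r + eps

# The sample mean sits near r and the sample variance near sigma_eps^2;
# the nonzero variance is what excites the closed loop persistently.
print(ystar.mean(), ystar.var())
```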
Formal proof of global stability and robustness for this self-tuning controller
with the considered perturbations presently exists, but is based in part on a mechanism
called parameter projection being used to bound the parameter estimates. Looking
ahead to Chapter 4, simulations provide information about convergence and stability
when no such mechanism is used. The next section reviews the existing results as an
important stepping stone to the desired simplification of assumptions for robustness.
3.2. Review of Stability Results
The expected behavior of the self-tuning controller is defined here prior to
examining the results of simulation experiments. The results in [3] proved global
stability and formally defined a self-excitation mechanism for the perturbed system
model (3.18), under the condition that parameter projection is used to bound the
estimates. Certain assumptions and preliminary theorems were first established.
First, the disturbance sequence {ω(t)} is assumed to be a martingale difference sequence with first and second moments as defined in the previous section of this thesis; moments greater than the second are finite. Also, all initial conditions are finite.
Next, the system is assumed to be minimum phase (the zeros of B(q⁻¹) are inside the unit circle). A quantity describing the perturbed system polynomials is defined

A₀(q⁻¹) = B(q⁻¹)Δ₁(q⁻¹) + A(q⁻¹)Δ₂(q⁻¹)   (3.24)

System transfer functions whose characteristic equation is the polynomial defined above are assumed stable. Finally, the parameter projection algorithm is established as part of the parameter estimation algorithm; it is based on the assumption that the compact convex set Θ containing θ₀, the sign of b0, and a lower bound b0,min of |b0| are known; also, b0 > 0 and b0,min > 0. A transformation projects the parameter estimates onto another compact convex set defined to assure bounding of the estimates.
Bounds are established on various transfer functions of the system using H-infinity norms, and results regarding the signals in the adaptive loop are proven by reference to a paper yet to be published at the time of publication of [3]. Specifically, these signals cannot grow faster than exponentially due to the parameter projection.
An additional assumption is established regarding the satisfaction of the PE condition; specifically, it concerns the excitation provided by the vector of delayed reference inputs y*(t+1), ..., y*(t−n+1). Also, {ω(t)} and {y*(t)} are independent. Using the assumptions established thus far, PE conditions in the adaptive loop are proven to be satisfied, provided several conditions are met. These conditions call for the intensities of the unmodeled dynamics and unstructured external disturbances, as well as the upper bound on the quantity (3.27), to be sufficiently small; (3.27) is a measure of the tracking error.
A quantity important to the derivation of global stability for this controller is now defined:

β(t) = φᵀ(t)P(t−1)φ(t)

Note that this is the second term in the denominator of the covariance matrix recursion equation (3.16). Results later in [3] show that if β(t) is bounded for all t > 1, the controller is globally stable and robust to the considered perturbations. A relationship is developed among β(t), the ratio of the maximal eigenvalue of the inverse of the present covariance matrix to the minimal eigenvalue of the inverse of the previous covariance matrix, and the dimension of the measurement vector. This relationship shows that β(t) is bounded if the numerator and denominator of the eigenvalue ratio are growing at the same rate; this relates the bounded behavior of β(t) to the PE condition in the adaptive loop because the eigenvalue ratio is shown to be related to (3.27).
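Bypassing the formal argument, this boundedness is easy to observe numerically. The following sketch (illustrative regressors and seed, not the thesis simulations) runs the covariance recursion and tracks β(t) alongside the eigenvalue ratio:

```python
import numpy as np

# Illustrative check: with a persistently exciting regressor stream,
# beta(t) = phi'(t) P(t-1) phi(t) stays bounded and in fact decays.
rng = np.random.default_rng(2)
n = 4                                   # dimension of the measurement vector
P = 10.0 * np.eye(n)                    # P(0) = p0 * I
betas, ratios = [], []
for t in range(200):
    phi = rng.standard_normal(n)        # white regressor, PE for illustration
    beta = float(phi @ P @ phi)
    P_prev_inv = np.linalg.inv(P)
    P = P - np.outer(P @ phi, phi @ P) / (1.0 + beta)
    # max eigenvalue of P(t)^-1 over min eigenvalue of P(t-1)^-1
    ratios.append(np.linalg.eigvalsh(np.linalg.inv(P)).max()
                  / np.linalg.eigvalsh(P_prev_inv).min())
    betas.append(beta)
```

Under PE both eigenvalues grow at the same rate, so the ratio stays finite and β(t) decays toward zero, matching the relationship described above.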


The self-excitation mechanism is formalized next. When the tracking error is small, PE conditions are satisfied in the adaptive loop and β(t) is bounded. Unmodeled dynamics and unstructured external disturbances always try to destabilize the system; as a result, the self-excitation mechanism is initiated when all signals in the adaptive loop grow without bound. This, in turn, results in growth of the tracking error and of β(t). The Theorem of Ultimate Persistent Excitation developed in [3] proves that the growth rate of the minimum eigenvalue of the covariance matrix inverse is the same as that of the sum of all previous squared tracking errors; this results in bounding of β(t). This proof requires development of one additional assumption that provides for several specific relationships to be satisfied if the level of excitation is sufficiently large and the intensities of unmodeled dynamics and external disturbances are sufficiently small. In other words, in this situation, temporarily unstable signals in the adaptive loop provide sufficient excitation to bound the condition number of the covariance matrix used in the parameter estimation. This behavior is inherent to the least squares method.
Finally, global stability of the controller is proven with the aid of one additional assumption on the relative magnitudes of the bound on β(t) and the intensity of unmodeled dynamics. Robustness of the controller to unmodeled dynamics and external disturbances is a byproduct of the stability result; as the intensities of these perturbations tend to zero, the mean-square tracking error tends to its minimum possible value.
The logical follow-up of the results in [3] is research to eliminate the parameter
projection mechanism from the least squares parameter estimation algorithm, while still

maintaining global stability and robustness. This task is much more complex than
obtaining equivalent results for the simpler normalized gradient algorithm. The benefit
of such results is simpler hardware realization of the self-tuning controller; a design
without the projection mechanism would require less computational time, power, and
memory than a design with it. So far, [3] shows parameter projection is crucial to the
results achieved to date. The following examples constitute experiments to show
stability without parameter projection; such has yet to be shown analytically.

This chapter reviews the most important results of this body of research. The self-tuning controller scheme developed in the previous chapter was simulated by digital computer using MATLAB [17] to determine the algorithm's robustness to the following perturbations:
following perturbations:
unmodeled dynamics in the form of multiplicative and additive perturbations, and
stochastic external disturbances such as measurement noise.
The MATLAB simulation scripts are given in Appendix A.
4.1. Simple Numerical Examples
How strong must a perturbation be to cause instability in the system? Perturbations were varied in intensity here to check the sensitivity of the system's asymptotic stability. Convergence of parameter estimates and behavior of the critical parameter β(t) were examined also. For simplicity, the sampling period is implicitly one second.
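Before turning to the cases, the simulation loop itself can be sketched compactly. This Python sketch mirrors the structure of the MATLAB scripts in Appendix A, but uses illustrative stable plant parameters (not the Case I values) and no parameter projection:

```python
import numpy as np

# Direct self-tuning regulator sketch: second-order ARMA plant, RLS
# estimation, certainty-equivalence control toward ystar = r + dither.
rng = np.random.default_rng(3)
theta0 = np.array([0.5, -0.2, 1.0, 0.6])   # illustrative a1, a2, b0, b1
N, r = 200, 50.0
y = np.zeros(N + 1)
u = np.zeros(N + 1)
theta = np.array([0.0, 0.0, 1.0, 0.0])     # b0hat(0) = 1 avoids division by zero
P = 10.0 * np.eye(4)
for t in range(1, N):
    ystar = r + np.sqrt(2.0) * rng.standard_normal()   # excited reference
    phi = np.array([-y[t], -y[t - 1], 0.0, u[t - 1]])
    u[t] = (ystar - phi @ theta) / theta[2]            # predicted output = ystar
    phi[2] = u[t]
    y[t + 1] = theta0 @ phi + rng.standard_normal()    # nominal plant + noise
    beta = float(phi @ P @ phi)
    P = P - np.outer(P @ phi, phi @ P) / (1.0 + beta)
    theta = theta + P @ phi * (y[t + 1] - phi @ theta)
```

With these benign parameters the output settles near the set point within a few recursions, the pattern reported for the small-γ runs below.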
4.1.1. Case I
For this simulation, the nominal model polynomials were

A(q⁻¹) = 1 − 0.5q⁻¹ − 1.5q⁻²
B(q⁻¹) = 1 + 0.6q⁻¹

This means that four parameters are to be estimated, two from A(q⁻¹) and two from B(q⁻¹). The only nonzero perturbation term from the model was the multiplicative perturbation.
The non-stochastic portion of the reference input was a step of r = 50. Variances of the disturbance and reference excitation sequences were, respectively,

σ_ω² = 1   (4.4)
σ_ε² = 2   (4.5)

All initial conditions for the states and parameter estimates were set to zero, with the exception of b0(0) = 1. The covariance matrix was initialized to P(0) = 10I. Plots were made of:
the system output y(t), in order to assess stability;
the parameter estimates a1hat(t), a2hat(t), b0hat(t), and b1hat(t), to assess their convergence;
β(t), to demonstrate its boundedness and to assess for the self-excitation mechanism.
The first group of runs was with γ = 0.001. The output typically stabilized in about five recursions.

Figure 4.1. Self-Tuning Controller, Case I, gamma=0.001: y(t)
All parameter estimates usually converged in five recursions or fewer, and remained very steady thereafter. Excellent consistency was also observed for all estimates.


Figure 4.3. Self-Tuning Controller, Case I, gamma=0.001: a2hat(t)


Figure 4.5
The transient in β(t) was always very large, peaking in the neighborhood of 5×10⁴, but not divergent. Once transients died, β(t) remained below 10⁻² consistently, well beneath the resolution of the plot shown. β(t) is therefore bounded, consistent with the stability of the output.

Figure 4.6
Next, γ = 0.01 was used. Typical results were nearly identical to those for γ = 0.001.
Behavior changed notably for the case of γ = 0.1. The output typically stabilized more slowly than for the smaller intensities of unmodeled dynamics used in previous runs. Similarly, the parameter estimates tended to converge a little more slowly, though steady-state convergence behavior remained quite good. Also, a distinct reduction in consistency was noted.



Figure 4.10

Figure 4.11
β(t) behaved much the same as for the other two types of runs, except that its variance was slightly higher in the steady state.

Figure 4.12. Self-Tuning Controller, Case I, gamma=0.1: beta(t)
Experiments were done with even larger values of γ, and all evaluated categories of behavior worsened, except that β(t) always seemed to behave similarly. Its steady-state variance did increase slightly with increasing γ. The highest intensity the system could withstand without instability was approximately γ = 0.5.
An unstable case was run using γ = 0.6. Interestingly, β(t) still behaved approximately as it did in the stable cases, except for a further increase in its steady-state variance. Also, parameter estimates did not diverge, though they oscillated. Plots of the output, a typical parameter estimate (notice the increased duration of the plot, to better illustrate oscillations), and β(t) are included for comparison to stable cases.


Figure 4.14

Figure 4.15
4.1.2. Case II
For this simulation, the nominal model polynomials were the same as for Case I; again, four parameters are to be estimated, two from A(q⁻¹) and two from B(q⁻¹). An additive perturbation A(q⁻¹)Δ₂(q⁻¹) was simulated for this case, as given in (4.6). All other simulation parameters were set as for Case I, including initial conditions, noise variances, and reference input; the same variables were plotted.

The first run was with γ = 1; the output blows up at this value. It is interesting to note that β(t) displays bursts after its initial transient dies; the one plotted here is several orders of magnitude larger than the initial transient, and quickly subsides. This may illustrate the case of an unbounded β(t), which was not seen with the multiplicative perturbation simulated previously. Shown for this type of run are typical plots of the output and of β(t); time is scaled on the β(t) plot to illustrate the burst.
Figure 4.16

Figure 4.17. Self-Tuning Controller, Case II, gamma=1: beta(t) (scale ×10⁶)
The intensity of the unmodeled dynamics was scaled back to γ = 0.1 for the next run, and stable behavior was obtained. The output response was much like the run with γ = 0.1 for Case I.

Also as in Case I, parameter estimates converged, but not necessarily with excellent consistency.

Figure 4.20


The now-familiar behavior of β(t) from Case I appears again here. In the steady-state time frame, the variance of β(t) was slightly larger than for the runs in Case I.

4.2. ATS-6 Satellite
This numerical example is more complicated than the previous two and is based on a real-life case; it provides a basis to compare the self-tuning controller's robustness to that of a classical scheme. The attitude control loop for the ATS-6 satellite was chosen because of its unmodeled dynamics. This satellite was a successful science and communications experiment launched in 1974; [18] explains the design steps appropriate for assuring robustness to unknown but bounded high-frequency dynamics and a well-defined periodic disturbance. The result is a continuous-time controller designed primarily through frequency response techniques. Here, the analog controller is replaced with the discrete-time self-tuning controller, requiring discretization and sampling of the plant.
The plant model is defined by the cascade of transfer functions

θp(s)/T(s) = 1 / [10³ s(s + 0.01)]   (4.7)
θs(s)/θp(s) = 10 / (s + 10)   (4.8)

resulting in

Gp(s) = 10 / [10³ s(s + 0.01)(s + 10)] = 0.01 / [s(s + 0.01)(s + 10)]   (4.9)
Selection of the sampling interval was based on the bandwidth of the open-loop plant, which was found to be approximately 0.01 radians/second. This agrees with the dominant dynamic component of the plant, which is the pole at s = −0.01. The rule of thumb used for sampling interval selection, (4.10), relates the sampling interval to this bandwidth. A value of Ts = 50 seconds is a nice round number which satisfies the constraint. MATLAB's c2d function was invoked to discretize the plant model with the appropriate zero-order hold at its input. This results in a nominal model with six parameters to estimate; A(q⁻¹) is fourth-order and contains three parameters (because it is monic), and B(q⁻¹) is third-order with three parameters.
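The pole mapping underlying this discretization can be checked independently of MATLAB: under zero-order-hold sampling, each continuous pole sᵢ of Gp(s) maps to a discrete pole exp(sᵢ·Ts). A NumPy-only sketch (c2d also computes the B(q⁻¹) numerator, which is omitted here):

```python
import numpy as np

# Poles of Gp(s) = 0.01 / (s (s + 0.01)(s + 10)), sampled at Ts = 50 s.
Ts = 50.0
s_poles = np.array([0.0, -0.01, -10.0])
z_poles = np.exp(s_poles * Ts)   # ZOH pole mapping z_i = exp(s_i * Ts)
A = np.poly(z_poles)             # monic discrete A(q^-1): four coefficients,
                                 # hence three free parameters
```

The integrator pole maps to z = 1, the dominant pole to exp(−0.5) ≈ 0.607, and the fast pole to essentially zero, so the slow mode dominates the sampled dynamics.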
A disturbance torque of solar origin acts on the satellite; this torque is therefore periodic in the Earth's rotation frequency and can be described by

d(t) = 10⁻⁵ sin(7×10⁻⁵ t) = 10⁻⁵ sin(ω_e t)

The Laplace transform of this expression [19] is

D(s) = 10⁻⁵ ω_e / (s² + ω_e²)   (4.12)

If the model is manipulated to treat this disturbance as an output disturbance, it passes through a transfer function

G_d(s) = −0.01 / [s²(s + 10)]   (4.13)

The disturbance torque is therefore discretized by taking the z-transform of the product of (4.12) and (4.13). No zero-order hold is involved, so a partial-fractions solution for the inverse z-transform is calculated by brute force using MATLAB, giving a disturbance torque sequence.
The unmodeled dynamics take the form of a multiplicative perturbation as expressed by the equations

G(s) = Gp(s)[1 + Δ₁(s)]   (4.14)

with the magnitude of Δ₁(s) bounded as given in (4.15). The magnitude of the perturbation has an upper bound, but as with the vast majority of unmodeled high-frequency dynamics, nothing is known about the phase of the perturbation. The worst-case approach of defining the unmodeled dynamics is to simply model the transfer function located inside the magnitude brackets of (4.15).

Measurement noise is generated by the attitude control sensors and also must be simulated. The ATS-6 uses two types of attitude sensing simultaneously: an infrared horizon sensor, and its own communication system coupled with wave interference techniques. It is assumed here that the horizon sensor generates noise typical for the two sensors, and it is therefore sufficient to simulate only this sensor. The horizon sensor is an infrared scanner; at geosynchronous altitude, the full disk image of the Earth fits into its view. The Nonspinning Earth Sensor Assembly (NESA), an infrared horizon scanner built by TRW for the ATS-6, has a specified accuracy of 0.05 degrees over a ±2.82 degree linear range [20]. Interpreting this accuracy as a three-sigma distribution of measurement noise on the sensor output provides a typical variance for the simulation.
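Concretely, the three-sigma interpretation gives σ = 0.05/3 degrees, i.e. a variance of roughly 2.8×10⁻⁴ deg²:

```python
# NESA accuracy spec of 0.05 degrees read as a three-sigma bound on the
# sensor's measurement noise; the resulting variance feeds the simulation.
accuracy_deg = 0.05
sigma = accuracy_deg / 3.0       # one-sigma noise level in degrees
noise_var = sigma ** 2           # variance, about 2.78e-4 deg^2
print(noise_var)
```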
In an application such as fine pointing of a satellite attitude, where a high degree of accuracy is required, to purposely add white noise to the reference input would be counterproductive. PE conditions must, however, be met. It is therefore hoped that the measurement noise is of such intensity and frequency content that the PE condition will be satisfied. The reference input used is simply a unit step, chosen to provide a basis of comparison to the original analog controller design.
The first experiment for this simulation is simply to check the unit step response, and to evaluate the parameter estimates and β(t). The steady-state response of the system output, pointing angle θp(t), is quite good, and the transient dies after less than ten recursions. However, it should be noted that the transient response is highly oscillatory with large overshoots and undershoots; this would be unacceptable in a satellite pointing application.



Figure 4.26


Figure 4.28

Figure 4.29

Figure 4.30
The signature response of β(t) noted in Cases I and II is seen again for the ATS-6 simulation, though the transient is much smaller in proportion to the reference input of the system.

Figure 4.31. Self-Tuning Controller, ATS-6: beta(t)
Another experiment was done to check the effect of estimating more parameters on the system output response. This is done by simply increasing the dimension of the regressor utilized in the least squares algorithm, with a corresponding increase in the dimension of the parameter estimate vector. The resulting regressor and estimate are also used for calculating the control signal. This approach implies that a higher-order model than the nominal is assumed, to account for perturbations. Heuristically, one would expect higher-order modeling to smooth transients and quicken responses. In this case, however, that heuristic notion does not hold. The speed of transient damping and, in one case, the size of the overshoot peaks can be seen to degrade compared to the response for estimating the nominal quantity of parameters.
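Mechanically, estimating extra parameters only widens the regressor. The helper below is an illustrative construction (not from the Appendix A scripts) contrasting the nominal and extended regressors; the RLS recursion itself is unchanged:

```python
import numpy as np

def make_regressor(y_hist, u_hist, na, nb):
    """Build [-y(t), ..., -y(t-na+1), u(t), ..., u(t-nb+1)] from
    newest-first history lists (illustrative helper)."""
    return np.concatenate([-np.asarray(y_hist[:na]), np.asarray(u_hist[:nb])])

y_hist = [1.0, 2.0, 3.0, 4.0]       # newest first (illustrative data)
u_hist = [0.5, 0.6, 0.7, 0.8]
phi_nominal = make_regressor(y_hist, u_hist, 3, 3)    # six parameters
phi_extended = make_regressor(y_hist, u_hist, 4, 4)   # eight parameters
```

Widening the regressor grows the covariance matrix correspondingly, so each extra parameter adds estimation burden without, as the experiment shows, improving the transient.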

Figure 4.32


Optimal estimation using the unmodified recursive extended least squares
method produces parameter estimates which do not tend to diverge. Convergence
occurs even in the presence of unmodeled dynamics. This bodes well for the use of
unmodified versions of least squares in adaptive control applications, because
modifications complicate the estimation algorithm, and these complications translate
into realizability issues and demands on hardware.
Experimentation with self-tuning controller simulations shows that the
objectives of parameter convergence and mean-square boundedness of the output can
be achieved as long as the intensity of unmodeled dynamics is sufficiently small. In
fact, these objectives and robustness to unmodeled dynamics were even achieved for a
system not excited by a stochastic reference input sequence. However, satisfaction of
these objectives does not always justify the use of adaptive algorithms; the transient
response of the least squares self-tuning controller contains several large oscillations
before it settles down.
The parameter β(t) behaves in a quite predictable fashion when the system response is stable. Large bursts occurring after the initial transient are indicative of a system with a strong tendency to become unstable, and continuation of the unstable behavior after the burst subsides seems to conflict with the notion of a self-excitation mechanism.
Estimation of more parameters than the nominal system model calls for, though
attractive from the perspective of accounting for unmodeled dynamics, does not help

appreciably when using the direct self-tuning regulator with least squares. Indeed, it
degrades performance somewhat.


% David R. Sierzchula
% University of Colorado at Denver
% EE6950, Master's Thesis
% Estimates the coefficients of a system defined by the polynomial
% ratio C(l/q)/A(l/q), given white (Gaussian random) noise as the input
% Unmodeled noise dynamics are included here.
% System coefficients chosen to give second-order stable polynomials in 1/q
% with the leading coefficient unity.
a1=0.7;
% Unmodeled noise dynamics coefficient
% Simulation length
% Initial conditions
% System output y(-2) required to calculate omega_hat(0), since it is used in
% phi(-1).
% Noise prediction omega_hat(-2) required to calculate omega_hat(0), since it is
% used in phi(-1).
omega_hat_minus1=0;
omega_hat_minus2=0;
% Unmodeled noise dynamics
% Parameter estimates

% Covariance matrix diagonal
% Variance of noise input to system
% Sample period in seconds
% System coefficients
theta0=[a1;a2;c1;c2];
% Create vectors to use as time histories for values of interest; initialize each.
% Time history of system output y; initialize time point t=0.
y_pres=zeros(1,N+1);
y_last1=zeros(1,N+1);
y_last1(1)=y_minus1;
% Time history of noise prediction omega_hat; initialize timepoint t=0.
omega_hat_pres=zeros(1,N+1);
omega_hat_last1=zeros(1,N+1);
omega_hat_last1(1)=omega_hat_minus1;
% Time history of parameter estimates; initialize timepoint t=0.
a1hat=zeros(1,N+1);
a2hat=zeros(1,N+1);
c1hat=zeros(1,N+1);
c2hat=zeros(1,N+1);
% Parameter estimate at time t=0; to be used in first parameter estimate calculation.
% Covariance matrix initialization.
% Vector for time history of beta=phi'*p*phi.
% Time history of each covariance matrix element
p11_history=zeros(1,N+1);
p21_history=zeros(1,N+1);
p31_history=zeros(1,N+1);
p41_history=zeros(1,N+1);
p12_history=zeros(1,N+1);
p22_history=zeros(1,N+1);
p32_history=zeros(1,N+1);
p42_history=zeros(1,N+1);
p13_history=zeros(1,N+1);
p23_history=zeros(1,N+1);
p33_history=zeros(1,N+1);
p43_history=zeros(1,N+1);
p14_history=zeros(1,N+1);
p24_history=zeros(1,N+1);
p34_history=zeros(1,N+1);
p44_history=zeros(1,N+1);
% Initialize covariance matrix history for t=0
p11_history(1)=p_transpose(1);
p21_history(1)=p_transpose(2);
p31_history(1)=p_transpose(3);
p41_history(1)=p_transpose(4);
p12_history(1)=p_transpose(5);
p22_history(1)=p_transpose(6);
p32_history(1)=p_transpose(7);
p42_history(1)=p_transpose(8);
p13_history(1)=p_transpose(9);
p23_history(1)=p_transpose(10);
p33_history(1)=p_transpose(11);
p43_history(1)=p_transpose(12);
p14_history(1)=p_transpose(13);
p24_history(1)=p_transpose(14);
p34_history(1)=p_transpose(15);
p44_history(1)=p_transpose(16);
% Error time history difference between output y(t+l) and predicted output
% y_hat(t+l)
% Time reference for plotting
time=0:T:N*T;
% Time history of parameter estimate errors.
a1hat_err(1)=a1-a1hat(1);
a2hat_err(1)=a2-a2hat(1);
c1hat_err(1)=c1-c1hat(1);
c2hat_err(1)=c2-c2hat(1);
% Create white (Gaussian) noise sequence used for system input.
omega_pres=sqrt(omega_var)*randn(1,N+1);
omega_last1=[0 omega_pres(1:N)];
% Create unmodeled noise dynamics.
und=zeros(1,N+1);
for i=1:N;
    und(i+1)=0.95*und(i)+gamma*(omega_pres(i+1)+1.7*omega_pres(i));
end
% Measurement vector to be used for calculating omega_hat(0).
phi=[-y_minus1;-y_minus2;omega_hat_minus1;omega_hat_minus2];
% Perform simulation, iteratively executing system equation and estimating
% parameters.
% System output
% Calculate system output for system A*y(t+l)=G*omega(t+l)
y_pres(i+1)=theta0'*phi+omega_pres(i+1)+und(i);
% Update system output history for phi
y_last1(i+1)=y_pres(i);
% System output
% Calculate a posteriori prediction error
% Update a posteriori prediction error history for phi
% Update phi(t)
phi=[-y_pres(i);-y_last1(i);omega_hat_pres(i);omega_hat_last1(i)];
% Calculate covariance matrix p(t) to use in estimating parameters
% Calculate beta(t)=phi(t)'*p(t-1)*phi(t) to use in calculating p(t)
beta(i+1)=phi'*p*phi;
% Calculate denominator and numerator of update term for p(t)
den_p=1+beta(i+1);
num_p=p*phi*phi'*p;
% Calculate new covariance matrix p(t) as p(t-1) minus update term
p=p-(num_p/den_p);

% Record each element of covariance matrix
p11_history(i)=p_transpose(1);
p21_history(i)=p_transpose(2);
p31_history(i)=p_transpose(3);
p41_history(i)=p_transpose(4);
p12_history(i)=p_transpose(5);
p22_history(i)=p_transpose(6);
p32_history(i)=p_transpose(7);
p42_history(i)=p_transpose(8);
p13_history(i)=p_transpose(9);
p23_history(i)=p_transpose(10);
p33_history(i)=p_transpose(11);
p43_history(i)=p_transpose(12);
p14_history(i)=p_transpose(13);
p24_history(i)=p_transpose(14);
p34_history(i)=p_transpose(15);
p44_history(i)=p_transpose(16);
% Parameter estimate
% Calculate a priori prediction error for use in parameter estimation
e(i+1)=y_pres(i+1)-thetahat'*phi;
% Estimate parameters based on previous estimate plus update term
thetahat=thetahat+p*phi*e(i+1);
% Save estimated parameters in time history
a1hat(i+1)=thetahat(1);
a2hat(i+1)=thetahat(2);
c1hat(i+1)=thetahat(3);
c2hat(i+1)=thetahat(4);
% Calculate errors in parameter estimates (real minus estimate)
a1hat_err(i+1)=a1-a1hat(i+1);
a2hat_err(i+1)=a2-a2hat(i+1);
c1hat_err(i+1)=c1-c1hat(i+1);
c2hat_err(i+1)=c2-c2hat(i+1);

% David R. Sierzchula
% University of Colorado at Denver
% EE6950, Master's Thesis
% This MATLAB script simulates the numerical example known as Case I for the
% self-tuning controller algorithm. Closed-loop control of the system
% Ay(t+l)=Bu(t) is performed. Model is discrete-time second-order SISO ARMA
% system.
% Perturbations are added:
% Unmodeled dynamics multiplicative perturbation
% Stochastic disturbance sequence Gaussian white noise
% Estimates 4 parameters (ahat: 2, bhat: 2).
% Clear all variables in workspace
% Identify simulation for offline postprocessing routines (not used)
% Nominal system parameters
b1=0.6;
% Number of recursions
% Initial conditions
% Output
u_minus2=0;
% Unmodeled dynamics
% Parameter estimates

% Initialize covariance matrix
% Non-stochastic portion of reference input
% Variances of stochastic variables
% Reference input
% Intensity of unmodeled dynamics
% Sample period
% Vector of true system parameters
theta0=[a1;a2;b0;b1];
% Allocate, initialize history vectors and matrices
% Output
% Control signal
% Unmodeled dynamics
% Parameter estimates
thetahat=[ahat(:,1);bhat(:,1)];
% Initial covariance matrix

% History of beta, which is second term in denominator of update term of
% covariance matrix update equation
% History of covariance matrix
% Allocate vectors
p11_history=zeros(1,N+1);
p21_history=zeros(1,N+1);
p31_history=zeros(1,N+1);
p41_history=zeros(1,N+1);
p12_history=zeros(1,N+1);
p22_history=zeros(1,N+1);
p32_history=zeros(1,N+1);
p42_history=zeros(1,N+1);
p13_history=zeros(1,N+1);
p23_history=zeros(1,N+1);
p33_history=zeros(1,N+1);
p43_history=zeros(1,N+1);
p14_history=zeros(1,N+1);
p24_history=zeros(1,N+1);
p34_history=zeros(1,N+1);
p44_history=zeros(1,N+1);
% Initialize
p11_history(1)=p_transpose(1);
p21_history(1)=p_transpose(2);
p31_history(1)=p_transpose(3);
p41_history(1)=p_transpose(4);
p12_history(1)=p_transpose(5);
p22_history(1)=p_transpose(6);
p32_history(1)=p_transpose(7);
p42_history(1)=p_transpose(8);
p13_history(1)=p_transpose(9);
p23_history(1)=p_transpose(10);
p33_history(1)=p_transpose(11);
p43_history(1)=p_transpose(12);
p14_history(1)=p_transpose(13);
p24_history(1)=p_transpose(14);
p34_history(1)=p_transpose(15);
p44_history(1)=p_transpose(16);
% Allocate error signal history vector

% Time vector for plotting
% Create random (Gaussian, white) sequences for stochastic variables
% Gaussian distribution
% Stochastic part of reference input
ystar_epsilon=sqrt(epsilon_var)*randn(1,N+1);
% Disturbance sequence
omega=sqrt(omega_var)*randn(1,N+1);
% Create reference input sequence
% Begin iterations for self tuning controller
% Number of iterations is N
for i=l:N;
% Calculate control signal
% Term attributed to previous u(t)
% Term attributed to previous y(t)
% Reference input
% Final control signal calculation
u1(i)=(u_backterm+y_backterm+ref_inp)/bhat(1,i);
% Set regressor
% Calculate system output y(t+l); includes nominal response (parameters
% times regressor) plus unmodeled dynamics plus stochastic disturbance
y(1,i+1)=theta0'*phi+v(i)+omega(i+1);
% Update output and control histories for time t+1
% Output
% Control
% Calculate error
e(i+1)=y(1,i+1)-ystar(i+1);
% Calculate unmodeled dynamics term at time t+1

% Covariance matrix p(t)
% Beta (second term in denominator of update term)
beta(i+1)=phi'*p*phi;
% Denominator of update term
den_p=1+beta(i+1);
% Numerator of update term
num_p=p*phi*phi'*p;
% Final calculation
p=p-(num_p/den_p);
Update history
p11_history(i+1)=p_transpose(1);
p21_history(i+1)=p_transpose(2);
p31_history(i+1)=p_transpose(3);
p41_history(i+1)=p_transpose(4);
p12_history(i+1)=p_transpose(5);
p22_history(i+1)=p_transpose(6);
p32_history(i+1)=p_transpose(7);
p42_history(i+1)=p_transpose(8);
p13_history(i+1)=p_transpose(9);
p23_history(i+1)=p_transpose(10);
p33_history(i+1)=p_transpose(11);
p43_history(i+1)=p_transpose(12);
p14_history(i+1)=p_transpose(13);
p24_history(i+1)=p_transpose(14);
p34_history(i+1)=p_transpose(15);
p44_history(i+1)=p_transpose(16);
% Calculate new parameter estimates for time t+1 by adding update term to
% estimate
thetahat(:,2)=thetahat(:,1)+p*phi*e(i+1);
thetahat(:,1)=thetahat(:,2);
ahat(:,i+1)=thetahat(1:2,2);
bhat(:,i+1)=thetahat(3:4,2);
% End of iteration
end

% casell.m
% David R. Sierzchula
% University of Colorado at Denver
% EE6950, Master's Thesis
% This MATLAB script simulates the numerical example known as Case II for the
% self-tuning controller algorithm. Closed-loop control of the system
% Ay(t+l)=Bu(t) is performed. Model is discrete-time second-order SISO ARMA
% system.
% Perturbations are added:
% Unmodeled dynamics additive perturbation
% Stochastic disturbance sequence Gaussian white noise
% Estimates 4 parameters (ahat: 2, bhat: 2).
% Clear all variables in workspace
% Identify simulation for offline postprocessing routines (not used)
% Nominal system parameters
% Parameter in unmodeled dynamics (pole)
% Number of recursions
% Initial conditions
% Output
% Control signal
% Unmodeled dynamics
% Parameter estimates

% Initialize covariance matrix
% Non-stochastic portion of reference input
% Variances of stochastic variables
% Reference input
% Disturbance
% Intensity of unmodeled dynamics
% Sample period
% Vector of true system parameters
theta0=[a1;a2;b0;b1];
% Allocate, initialize history vectors and matrices
% Output
% Control signal
% Unmodeled dynamics
% Parameter estimates

thetahat=[ahat(:,1);bhat(:,1)];
% Initial covariance matrix
% History of beta, which is second term in denominator of update term of
% covariance matrix update equation
% History of covariance matrix
% Allocate vectors
p11_history=zeros(1,N+1);
p21_history=zeros(1,N+1);
p31_history=zeros(1,N+1);
p41_history=zeros(1,N+1);
p12_history=zeros(1,N+1);
p22_history=zeros(1,N+1);
p32_history=zeros(1,N+1);
p42_history=zeros(1,N+1);
p13_history=zeros(1,N+1);
p23_history=zeros(1,N+1);
p33_history=zeros(1,N+1);
p43_history=zeros(1,N+1);
p14_history=zeros(1,N+1);
p24_history=zeros(1,N+1);
p34_history=zeros(1,N+1);
p44_history=zeros(1,N+1);
% Initialize
p11_history(1)=p_transpose(1);
p21_history(1)=p_transpose(2);
p31_history(1)=p_transpose(3);
p41_history(1)=p_transpose(4);
p12_history(1)=p_transpose(5);
p22_history(1)=p_transpose(6);
p32_history(1)=p_transpose(7);
p42_history(1)=p_transpose(8);
p13_history(1)=p_transpose(9);
p23_history(1)=p_transpose(10);
p33_history(1)=p_transpose(11);
p43_history(1)=p_transpose(12);
p14_history(1)=p_transpose(13);
p24_history(1)=p_transpose(14);
p34_history(1)=p_transpose(15);
p44_history(1)=p_transpose(16);

% Allocate error signal history vector
% Time vector for plotting
% Create random (Gaussian, white) sequences for stochastic variables
% Gaussian distribution
% Stochastic part of reference input
ystar_epsilon=sqrt(epsilon_var)*randn(1,N+1);
% Disturbance sequence
% Create reference input sequence
ystar=ref_amplitude*ones(1,N+1)+ystar_epsilon;
% Begin iterations for self tuning controller
% Number of iterations is N
for i=l:N;
% Calculate control signal
% Term attributed to previous u(t)
% Term attributed to previous y(t)
% Reference input
ref_inp=ystar(i+1);
% Final control signal calculation
u1(i)=(u_backterm+y_backterm+ref_inp)/bhat(1,i);
% Set regressor
% Calculate system output y(t+l); includes nominal response (parameters
% times regressor) plus unmodeled dynamics plus stochastic disturbance
y(1,i+1)=theta0'*phi+v(i)+omega(i+1);
% Update output and control histories for time t+1
% Calculate error prediction

% Calculate unmodeled dynamics term at time t+1
v(i+1)=-ud1*v(i)+gamma*u1(i);
% Covariance matrix p(t)
% Beta (second term in denominator of update term)
% Denominator of update term
% Numerator of update term
% Final calculation
p=p-(num_p/den_p);
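The comments above describe the standard recursive least squares (RLS) update: a prediction error, a gain built from the covariance matrix, and the rank-one covariance correction divided by the denominator 1 + phi'*P*phi (whose second term is the "beta" noted earlier). As background, one full RLS step can be sketched in NumPy; the names here are generic illustrations, not the thesis's MATLAB identifiers:

```python
import numpy as np

def rls_step(theta_hat, P, phi, y):
    # One step of standard recursive least squares (illustrative sketch).
    phi = phi.reshape(-1, 1)
    den = 1.0 + float(phi.T @ P @ phi)          # 1 + beta, the update-term denominator
    e = y - float(phi.T @ theta_hat)            # prediction error
    theta_hat = theta_hat + (P @ phi) * e / den # parameter estimate update
    P = P - (P @ phi) @ (phi.T @ P) / den       # covariance matrix update
    return theta_hat, P
```

With noise-free data the estimates converge to the true parameters as the information accumulated in P grows.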
% Update history
p11_history(i+1)=p_transpose(1);
p21_history(i+1)=p_transpose(2);
p31_history(i+1)=p_transpose(3);
p41_history(i+1)=p_transpose(4);
p12_history(i+1)=p_transpose(5);
p22_history(i+1)=p_transpose(6);
p32_history(i+1)=p_transpose(7);
p42_history(i+1)=p_transpose(8);
p13_history(i+1)=p_transpose(9);
p23_history(i+1)=p_transpose(10);
p33_history(i+1)=p_transpose(11);
p43_history(i+1)=p_transpose(12);
p14_history(i+1)=p_transpose(13);
p24_history(i+1)=p_transpose(14);
p34_history(i+1)=p_transpose(15);
p44_history(i+1)=p_transpose(16);
% Calculate new parameter estimates for time t+1 by adding update term to
% estimate
ahat(:,i+1)=thetahat(1:2,2);
bhat(:,i+1)=thetahat(3:4,2);
% End of iteration
end

% ats600.m
% David R. Sierzchula
% University of Colorado at Denver
% EE6950, Master's Thesis
% This MATLAB script simulates the numerical example for the ATS-6 satellite.
% The nominal system is transformed from continuous-time transfer functions to
% discrete-time polynomials describing the system as Ay=Bu.
% Perturbations are added:
% Unmodeled dynamics discretized from the given continuous-time
% transfer function
% Disturbance torque discretized using partial fractions from given
% time-domain description
% Direct self-tuning regulator replaces the continuous-time robust controller in the
% original problem
% Basic version estimates 6 parameters (ahat: 3, bhat: 3); other versions,
% estimating more parameters, are based on this one.
% Sample period
% Discretize nominal plant from continuous-time transfer functions Gs, Gd, Gp
[Gsdpnumd,Gsdpdend]=ss2tf(Gsdpad,Gsdpbd,Gsdpc,Gsdpd,1);
% Discretize unmodeled dynamics from continuous-time transfer function Gud
Gudz=[0.03;0.03];
Gudp=[0;-0.01;-10];
[Gudad,Gudbd]=c2d(Guda,Gudb,Ts);
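As background for the `c2d` calls above: under a zero-order hold with sample period Ts, a continuous-time state-space pair (A, B) maps to A_d = e^(A*Ts) and B_d = A^(-1)(A_d - I)B. A minimal scalar sketch of this rule (hypothetical example values, not MATLAB's `c2d` or the thesis's satellite model):

```python
import math

def c2d_scalar(a, b, ts):
    # Zero-order-hold discretization of the scalar system x' = a*x + b*u
    # (illustrative sketch; assumes a != 0).
    ad = math.exp(a * ts)
    bd = (ad - 1.0) / a * b
    return ad, bd
```

For a stable pole a = -1 with Ts = 0.1, this gives ad = e^(-0.1) and bd = 1 - e^(-0.1).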
% Number of iterations of controller algorithm
% Coefficients of discrete-time polynomial model of plant

% A
% B
% Unmodeled dynamics - will be called C here
% Initial conditions
% Output y
% Control u
% Parameter estimates for A coefficients
% Parameter estimates for B coefficients
% Covariance matrix diagonal
% Variance of measurement noise generated at NESA attitude sensor
% Reference input to system
% Nominal plant model parameter vector
theta0=[a1;a2;a3;b0;b1;b2];
% Vectors for retaining output sequence data
% Present at time t

% Last at time t
% Two ago at time t
% Initialize
% Vectors for retaining control input sequence data
% Present at time t
% Last at time t
% Two ago at time t
% Initialize
% Vector for retaining disturbance sequence data
% Vectors for retaining parameter estimate history data, A polynomial
a1hat=zeros(1,N+1);
a2hat=zeros(1,N+1);
a3hat=zeros(1,N+1);
% Vectors for retaining parameter estimate history data, B polynomial
b0hat=zeros(1,N+1);
b1hat=zeros(1,N+1);
b2hat=zeros(1,N+1);
% Initial parameter estimate vector
% Initial covariance matrix