Citation
Predictive control

Material Information

Title:
Predictive control a unified approach and identification
Creator:
Jeon, Hongjun
Publication Date:
Language:
English
Physical Description:
ix, 153 leaves : illustrations ; 28 cm

Subjects

Subjects / Keywords:
Predictive control ( lcsh )
Control theory ( lcsh )
Control theory ( fast )
Predictive control ( fast )
Genre:
bibliography ( marcgt )
theses ( marcgt )
non-fiction ( marcgt )

Notes

Bibliography:
Includes bibliographical references (leaves 152-153).
General Note:
Department of Electrical Engineering
Statement of Responsibility:
by Hongjun Jeon.

Record Information

Source Institution:
University of Colorado Denver
Holding Location:
Auraria Library
Rights Management:
All applicable rights reserved by the source institution and holding location.
Resource Identifier:
42612017 ( OCLC )
ocm42612017
Classification:
LD1190.E54 1999m .J46 ( lcc )

Full Text
PREDICTIVE CONTROL:
A UNIFIED APPROACH AND IDENTIFICATION
by
Hongjun Jeon
B.E., Soonchunyang University, 1996
A thesis submitted to the
University of Colorado at Denver
in partial fulfillment
of the requirements for the degree of
Master of Science
Electrical Engineering
1999


This thesis for the Master of Science
degree by
Hongjun Jeon
has been approved
by
Jan T. Bialasiewicz
Miloje Radenkovic
Date


Hongjun Jeon (M.S., Electrical Engineering)
Predictive Control: A Unified Approach and Identification
Thesis directed by Associate Professor Jan T. Bialasiewicz
ABSTRACT
A unified approach to predictive controller design is presented in this thesis.
This is realized by using a process and disturbance model that can be used to describe
a large class of processes and disturbances. Starting from a simple criterion function a
unified criterion function is developed which is suitable for use in a unified predictive
controller. While developing this criterion function many others are considered and
analyzed with respect to their properties. In particular, this thesis provides the
theoretical background of predictive controllers.
A new approach to the long-range predictive control, named Identified
Predictive Control (IPC), is proposed. The process predictor model, used to determine
the control sequence, is derived from a state space formulation of the ARIMAX
model and is directly identified using current input-output sequences. Since IPC uses
a properly defined observer model of the minimum-variance i-step-ahead predictor,
its implicit identification has dead-beat properties that make IPC particularly suitable
for control of flexible structures.
This abstract accurately represents the content of the candidate's thesis. I
recommend its publication.
Signed
Jan T. Bialasiewicz


CONTENTS
Chapter
1. Introduction................................................................ 1
1.1 What is predictive control?................................................ 4
1.2 The predictive control concept............................................ 6
1.3 Relationship to other methods............................................. 12
1.3.1 Linear-quadratic (LQ) control........................................... 12
1.3.2 Pole-placement design................................................... 16
1.4 Summary and organization of the thesis.................................... 18
2. Predictive controller design............................................... 19
2.1 Process models and prediction............................................. 19
2.1.1 Process models and prediction when there are no disturbances............ 20
2.1.1.1 Prediction of the output of a process described by a transfer function. 23
2.1.2 Disturbance models...................................................... 28
2.1.2.1 Deterministic disturbance.............................................. 28
2.1.2.2 Stochastic disturbances................................................ 33
2.1.3 Properties of i-step-ahead predictors based on stochastic disturbance
models.................................................................. 37
2.1.4 Using predictions for feedback for processes with delay................. 39
2.1.5 A unified approach to process modeling and prediction................... 43


2.2 Criterion functions....................................................... 46
2.2.1 Controller output weighting............................................. 47
2.2.2 Structuring the controller output....................................... 49
2.2.3 Extensions to the criterion function.................................... 57
2.2.3.1 A unified criterion function........................................... 59
2.3 The predictive control law.............................................. 60
2.3.1 Derivation of the unified predictive control law....................... 60
2.3.1.1 Relationship between u and u....................................... 62
2.3.1.2 Relationship between y* and u ...................................... 66
2.3.1.3 The predictive control law............................................. 67
2.3.2 The polynomial approach................................................. 70
2.3.3 Some implementation aspects............................................. 72
2.4 The reference trajectory................................................ 75
2.5 Overview of design parameters........................................... 82
2.6 Summary................................................................. 83
3. Identified predictive control............................................ 87
3.1 Process model structure................................................... 88
3.1.1 Auto regressive (AR) process............................................ 88
3.1.2 Linear models........................................................... 89
3.1.3 Prediction and predictor model.......................................... 90
3.1.4 Equation error model structure.......................................... 92
3.1.5 Linear regressions...................................................... 94


3.1.6 ARMAX model structure................................................... 95
3.1.7 Pseudolinear regressions............................................ 96
3.1.8 Other equation-error-type model structures.......................... 98
3.1.9 Output error model structure..................................... 100
3.1.10 Box-Jenkins (BJ) model structure................................. 102
3.1.11 A general family of model structures............................. 103
3.2 Model of the process................................................. 104
3.3 Standard approach to predictive control.............................. 107
3.4 Identified predictive control (IPC) algorithm........................ 113
3.5 Summary.............................................................. 121
4. Illustrative example.................................................. 123
Example 4.1 .............................................................. 124
Example 4.2............................................................... 128
Example 4.3............................................................... 131
5. Conclusions........................................................... 134
Appendix
A. General solution of the Diophantine equation.......................... 136
B. MATLAB programs of UPC................................................ 140
C. MATLAB programs of examples........................................... 146
References.................................................................... 152


FIGURES
Figure
1.1 Model-based control.......................................................... 2
1.2 Receding horizon predictive control. Parts b and d denote the situation
at time t while parts a and c denote the situation at time t + 1 ............ 7
1.3 Finite horizon LQ control in a receding horizon framework. Parts b
and d denote the situation at time t, whereas parts a and c denote
the situation at time t + 1 ............................................... 16
1.4 The closed-loop system as it is considered in pole-placement design...... 17
2.1 The structure of i-step-ahead predictor (Equation 2.4)..................... 25
2.2 Block diagram of i-step-ahead predictor (Equation 2.12)................... 27
2.3 Controller structure using y(k) for feedback............................... 40
2.4 Controller structure using y(k + d + 1) for feedback........................ 41
2.5 Controller structure using ŷ(k + d + 1) for feedback........................ 42
2.6 The closed-loop system..................................................... 72
2.7 Set point changes known a priori........................................... 76
2.8 Set point changes not known a priori....................................... 77
2.9 Two ways of generating a first-order reference trajectory.................. 79
2.10 Block diagram illustrating predictive controller design for a linear
system without constraints................................................. 85
3.1 Block diagram of ARARMAX structure........................................ 100


3.2 Block diagram of an output error (OE) model structure........... 101
4.1 Unit-step response obtained when using the UPC with Hp = 4, Hm = 2,
Hc = 3 and φ = Δ ............................................... 126
4.2 Dead-beat response obtained when using the UPC with Hp = 4, Hm = 2,
Hc = 3 and

4.3 Unit-step response obtained when using the UPC with Hp = 4, Hm = 2,
Hc = 3, φ = Δ and noise....................................... 129
4.4 Dead-beat response obtained when using the UPC with Hp = 4, Hm = 2,
Hc = 3, φ = Δ and noise........................................ 130
4.5 The response of a nonminimum-phase process with Hp = 4, Hm = 2
and Hc = 3 ...................................................... 132
4.6 The response of a nonminimum-phase process with Hp = 4 (Hp = 40),
Hm = 1 and Hc = 3 .............................................. 133


TABLES
Tables
2.1 Some deterministic disturbances characterized by = 0 ......... 29
2.2 Some well-known process models and the settings for C and D in the
disturbance model (Equation 2.24)........................................ 33
2.3 Unified predictive control (UPC) design parameters...................... 82
3.1 The common special cases of (3.15)...................................... 104


1. Introduction
Predictive control belongs to the class of model-based controller design concepts.
That is, a model of the process is explicitly used to design the controller, as is
illustrated in Figure 1.1. In this figure u denotes the controller output, y denotes the
process output and w denotes the desired process output. Predictive control is not the
only model-based controller design method. Others are, for example, pole-placement
control and linear-quadratic (LQ) control. Note that in Figure 1.1 it is assumed that
there is a separation between controller design and the controller itself, the control
law. For pole-placement control and LQ control it is well known that such a
separation exists. For example, in LQ control the controller is usually a state feedback
controller. The controller parameters are obtained by solving an optimization problem.
In predictive control the above-mentioned separation can, in general, also be obtained.
One of the attractive features of predictive controllers is that they are relatively
easy to tune. This makes predictive controllers attractive to a wider class of control
engineers and even for people who are not control engineers. Other features of
predictive controllers are:
1) The concept of predictive control is not restricted to single-input, single-output
(SISO) processes. Predictive controllers can be (and have been) derived for and
applied to multi-input, multi-output (MIMO) processes. Extending predictive
controllers for SISO processes to MIMO processes is straightforward [11].
2) In contrast to LQ and pole-placement controllers, predictive controllers can also


be derived for nonlinear processes. A nonlinear model of the process is then used
explicitly to design the controller.
Figure 1.1. Model-based control.
3) Predictive control is the only methodology that can handle process constraints in a
systematic way during the design of the controller. Since in real life process
constraints are common practice, this issue is rather important. Although it is well
known that predictive controllers have this property, little attention is paid in the
literature to predictive controller design when there are constraints. This feature in
particular is believed to be one of the most attractive aspects of predictive
controller design.
4) Predictive control is an open methodology. That is, within the framework of


predictive control there are many ways to design a predictive controller. As a
result, over ten different predictive controllers, each with different properties,
have been proposed in the literature over the last decade. Some well-known
predictive controllers are generalized predictive control (GPC, [8, 9]), dynamic
matrix control (DMC), extended prediction self-adaptive control (EPSAC),
predictive functional control (PFC), extended horizon adaptive control (EHAC),
and unified predictive control (UPC).
5) The concept of predictive control can be used to control a wide variety of
processes without the designer having to take special precautions. It can be used
to control 'simple' processes as well as 'difficult' processes, such as processes with
a large time delay, processes that are nonminimum phase and processes that are
open-loop unstable.
6) In a natural way feed-forward action can be introduced for compensation of
measurable disturbances and for tracking reference trajectories.
7) Because predictive controllers make use of predictions, pre-scheduled reference
trajectories (for example, used in robot control) or set points can be dealt with.
Unavoidably, predictive controller design has some drawbacks. Since predictive
controllers belong to the class of model-based controller design methods a model of
the process must be available. In general, in designing a control system two phases
can be distinguished: modeling and controller design. Predictive control provides
only a solution for the controller design part. A model of the process must be obtained


by other methods.
A second drawback is due to the fact that the predictive control concept is an
open methodology. As a result of this, many different predictive controllers can be
derived, each having different properties. Although, at first glance, the differences
between these controllers seem rather small, these 'small' differences can yield very
different behavior of the closed-loop system. As a result, it can be quite difficult to
select which predictive controller should be used to solve a particular control problem.
Sometimes, especially in the process industry, one cannot afford the expense of
designing a control system that one knows will not work for another process and
whose cost cannot therefore be spread over a large number of applications. Therefore,
a unified approach to predictive controller design is needed which allows treatment
of each problem within the same framework and results in a significant reduction in
design costs.
1.1 What is Predictive Control?
Over the past few years, the predictive control scheme has been developed mostly
in the process control area for application to realistic problems. Real control
problems involve many constraints arising from various causes, such as physical
limits like valve capacity, but classical linear-quadratic (LQ) control does not handle
these constraints efficiently. Hence, when future inputs are known a priori, the
predictive control scheme is widely applied in industry because it provides the
optimal solution which satisfies the system performance specification and guarantees
stability of the system under the constraints inherently included in the system. This
control scheme is called receding horizon predictive control because it computes the
optimal control input at each instant.
Generally speaking, optimal controllers are classified, according to the performance
index they use, into finite fixed-horizon controllers, infinite-horizon controllers and
finite receding-horizon controllers. Although finite fixed-horizon controllers are a
basis for solving many optimal control problems, a controller defined over a finite
fixed horizon cannot be applied directly in a feedback control system, so infinite-
horizon controllers are mostly used in applications. Steady-state LQ controllers
belong to this infinite-horizon control scheme. However, LQ control has inherent
limitations as an optimal control because it cannot provide a solution to the tracking
problem, which is necessary for applying optimal control to real plant control
systems.
To overcome this limitation, finite receding horizon control has been introduced.
First, a modified performance index, i.e., a receding quadratic cost function, was
introduced to obtain an optimal feedback controller for continuous-time, time-
invariant systems that is able to stabilize the system and is fast in computing control
inputs. This scheme has been extensively applied to time-varying discrete-time
systems, as well as to the regulation problem in continuous-time systems and the
discrete-time tracking problem (receding horizon tracking control).
Receding horizon tracking control, as a predictive control, has been verified to be
equivalent to generalized predictive control (GPC) in the case of a noise-free system,
so it is utilized to investigate the characteristics of GPC, which is widely adopted in
applications such as chemical processes. In addition, it is known to be applicable to
nonminimum-phase systems, open-loop unstable systems and inaccurately modeled
systems of erroneous order, and it has the advantage of a simple structure for
implementation. However, despite many known successful application examples,
this technique must be investigated further to fully verify its characteristics,
especially its robustness in the case of modeling errors.
1.2 The Predictive Control Concept
The way predictive controllers operate for a single-input, single-output system is
illustrated by Figure 1.2. The time scales in parts a, b, c and d are time scales relative
to the sample k, which denotes the present. The time scales shown at the foot of
Figure 1.2 are absolute time scales. Consider first Figure 1.2b and d and suppose that
the current time is denoted by sample k which corresponds to the absolute time t.
Further, u(k), y(k) and w(k) denote the controller output, the process output and the
desired process output at sample k, respectively. Now, define:
u = [u(k), ..., u(k + Hp − 1)]ᵀ
ŷ = [ŷ(k + 1), ..., ŷ(k + Hp)]ᵀ
w = [w(k + 1), ..., w(k + Hp)]ᵀ
where Hp is the prediction horizon and the symbol ˆ denotes estimation. Then, a


predictive controller calculates such a future controller output sequence u (shown in
Figure 1.2d) that the predicted output of the process ŷ is 'close' to the desired
process output w (both ŷ and w are shown in Figure 1.2b). This desired process
output is often called the reference trajectory and it can be an arbitrary sequence of
points. However, the response of a first-order or second-order reference model is
generally used.
Figure 1.2. Receding horizon predictive control. Parts b and d denote the situation at
time t while parts a and c denote situation at time t+1.


Rather than using the controller output sequence determined in the above way to
control the process in the next Hp samples, only the first element of this controller
output sequence (= u(k)) is used to control the process. At the next sample (hence, at t
+ 1), the whole procedure is repeated using the latest measured information. This is
called the receding horizon principle and is illustrated by Figures 1.2a and 1.2c,
which show what happens at time t + 1. Assuming that there are no disturbances and
no modeling errors, the process output ŷ(k + 1) predicted at time t is exactly
equal to the process output y(k) measured at t + 1. Now, again, a future controller output
sequence is calculated such that the predicted process output is 'close' to the reference
trajectory. In general, this controller output sequence is different from the one
obtained at the previous sample, as is illustrated by Figure 1.2c. The reason for using
the receding horizon approach is that this allows us to compensate for future
disturbances or modeling errors. For example, due to a disturbance or modeling error
the process output ŷ(k + 1) predicted at time t is not equal to the process
output y(k) measured at t + 1. Then, it is intuitively clear that at time t + 1 it is better
to start the predictions from the measured process output rather than from the process
output predicted at the previous sample. The predicted process output is now
corrected for disturbances and modeling errors. A feedback mechanism is activated.
As a result of the receding horizon approach the horizon over which the process
output is predicted shifts one sample into the future at every sample instant.
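The loop just described, which predicts over Hp, minimizes, applies only the first controller output and then restarts from the new measurement, can be sketched compactly. The thesis's own programs are in MATLAB (Appendices B and C); the sketch below uses Python instead, and the first-order model, its parameters and the disturbance injected at k = 3 are illustrative assumptions, not the thesis's example. For this simple model every term of the tracking-error criterion can be driven to zero, so the per-step choice of u is the exact minimizer.

```python
# Receding-horizon sketch for a hypothetical first-order process
# y(k+1) = a*y(k) + b*u(k). At every sample a whole controller output
# sequence over the prediction horizon Hp is (re)computed, only its
# first element is applied, and the procedure restarts from the
# newly measured output.

a, b = 0.8, 0.5          # assumed model parameters (illustrative)
Hp = 4                   # prediction horizon
w = 1.0                  # constant reference trajectory

def plan_sequence(y_now):
    """Minimize J = sum (yhat(k+i) - w)^2 exactly for this model:
    each u(k+i-1) is chosen so the predicted output equals w."""
    u_seq, y_hat = [], y_now
    for _ in range(Hp):
        u = (w - a * y_hat) / b      # drives yhat(k+i) to w
        u_seq.append(u)
        y_hat = a * y_hat + b * u
    return u_seq

y = 0.0
outputs = []
for k in range(10):
    u_seq = plan_sequence(y)          # full sequence over Hp ...
    u = u_seq[0]                      # ... but only u(k) is used
    disturbance = 0.1 if k == 3 else 0.0
    y = a * y + b * u + disturbance   # the real process, not the model
    outputs.append(y)
```

The recomputation at every sample is what provides the feedback mechanism: the plan made before k = 3 is discarded, and a new plan starting from the measured (disturbed) output restores tracking one sample later.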
The process output is predicted by using a model of the process to be controlled.
Any model that describes the relationship between the input and the output of the


process can be used. Further, if the process is subject to disturbances, a disturbance or
noise model can be added to the process model, thus allowing the effect of dis-
turbances on the predicted process output to be taken into account.
In order to define how well the predicted process output tracks the reference
trajectory, a criterion function is used. Typically, such a criterion function is a
function of y, w and u. For example, a simple criterion function is:
J = ]£(K* + 0 ~w{k + i))2
;=1
Now the controller output sequence u over the prediction horizon is obtained by
minimization of J with respect to u:

u_opt = arg min_u J
Then, u is optimal with respect to the criterion function that is minimized. As a
result, the future tracking error is minimized. If the model is identical to the process
and there are no disturbances and constraints, the process will track the reference
trajectory exactly on the sampling instants. When the process has a time delay of 2
samples, the approach is no different, except that predicting ŷ(k + 1) and ŷ(k + 2)
does not make sense because these values cannot be influenced by the control actions
at t = k and t = k + 1. Then, the cost function can be changed into:


J = Σ_{i=3}^{Hp} (ŷ(k + i) − w(k + i))²    (1.1)
Thus when there is time delay the optimal controller output sequence is obtained by
the minimization of (1.1) with respect to u.
Clearly, calculating the controller output sequence is an optimization problem or,
more specifically, a minimization problem. Usually, solving a minimization problem
requires an iterative procedure. However, an analytical solution is available when the
criterion is quadratic, the model is linear and there are no constraints. Moreover, in
this case the controller is linear and time invariant. Ideally, a criterion function is
based on design specifications such as a desired settling time, overshoot, gain margin,
etc. However, by using such a criterion function, the resulting optimization problem
is difficult to solve. This is the reason why a quadratic criterion function is used in all
predictive controllers. Now the design specifications must be translated into the
parameters of the quadratic criterion function such that when this function is
minimized, the original design specifications are satisfied. In order to obtain easy
tuning the parameters used in the quadratic criterion function must be closely related
to the traditional design specifications.
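The analytical solution in the quadratic, linear, unconstrained case is a least-squares solution. A minimal sketch under illustrative assumptions (Python rather than the thesis's MATLAB; the first-order model, Hp = 2 and all numbers are invented for illustration): stacking the predictions gives ŷ = Gu + f, and the normal equations yield the minimizer in closed form.

```python
# Unconstrained quadratic criterion with a linear model, Hp = 2.
# Stacked predictions: yhat = G u + f, so minimizing
# J = ||yhat - w||^2 is an ordinary least-squares problem.
# Model and numbers are illustrative assumptions.

a, b, y0 = 0.8, 0.5, 0.0
w = [1.0, 1.0]                       # reference over the horizon

# yhat(k+1) = a*y0      + b*u0
# yhat(k+2) = a^2*y0    + a*b*u0 + b*u1
G = [[b, 0.0],
     [a * b, b]]
f = [a * y0, a * a * y0]

# Normal equations (G^T G) u = G^T (w - f), solved by Cramer's rule
# for this 2x2 case (using G[0][1] = 0).
r = [w[0] - f[0], w[1] - f[1]]
GtG = [[G[0][0]**2 + G[1][0]**2, G[1][0] * G[1][1]],
       [G[1][0] * G[1][1], G[1][1]**2]]
Gtr = [G[0][0] * r[0] + G[1][0] * r[1], G[1][1] * r[1]]
det = GtG[0][0] * GtG[1][1] - GtG[0][1] * GtG[1][0]
u = [(GtG[1][1] * Gtr[0] - GtG[0][1] * Gtr[1]) / det,
     (GtG[0][0] * Gtr[1] - GtG[1][0] * Gtr[0]) / det]

yhat1 = a * y0 + b * u[0]
yhat2 = a * a * y0 + a * b * u[0] + b * u[1]
```

Because G is lower triangular and invertible here, the least-squares optimum achieves ŷ = w exactly, matching the statement that with a perfect model and no disturbances or constraints the reference is tracked exactly at the sampling instants.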
Note that the controller outputs calculated by minimization of a criterion
function with respect to u are not structured. That is, when minimizing the criterion
function the controller outputs are not assumed to be generated by a control law, in
contrast to most other control strategies. For example, in LQ controllers, the
controller output is usually calculated by using linear state feedback:


u(k) = −kᵀx
where k is a vector containing the controller parameters and x represents the state
vector of the process. The parameter vector k is obtained by minimization of a
quadratic criterion function with respect to k. Because in predictive controller design
it is not a priori assumed that the controller outputs are generated by a control law,
constraints on the controller output can be taken into account in a systematic way. For
example, if the output of the controller is limited between two values, the
optimization problem can simply be formulated as:
u_opt = arg min_u J

with u_opt subject to

u₁ ≤ u(k + i − 1) ≤ u₂
where u₁ and u₂ represent the lower and upper bound, respectively, on the
controller output. Now, the optimization problem is a constrained optimization
problem for which an analytical solution is no longer available and, therefore, an
iterative optimization method must be used.
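One simple iterative method for such box-constrained quadratic problems is projected gradient descent: take a gradient step on the least-squares cost, then clip the result back into [u₁, u₂]. This is only one possible choice, not a method prescribed by the thesis, and the matrix G, the vector r (= w − f) and the bounds below are illustrative assumptions.

```python
# Projected-gradient sketch for min ||G u - r||^2 subject to
# lo <= u_i <= hi (box constraints on the controller output).
# All numbers are illustrative assumptions.

G = [[0.5, 0.0],
     [0.4, 0.5]]
r = [1.0, 1.0]                 # plays the role of w - f
lo, hi = -1.0, 1.0             # the bounds u1 and u2

u = [0.0, 0.0]
step = 0.5                     # small enough for this problem
for _ in range(200):
    # residual e = G u - r and gradient g = 2 G^T e
    e = [G[0][0] * u[0] + G[0][1] * u[1] - r[0],
         G[1][0] * u[0] + G[1][1] * u[1] - r[1]]
    g = [2 * (G[0][0] * e[0] + G[1][0] * e[1]),
         2 * (G[0][1] * e[0] + G[1][1] * e[1])]
    # gradient step followed by projection onto the box
    u = [min(hi, max(lo, u[i] - step * g[i])) for i in range(2)]
```

The unconstrained optimum of this problem is (2.0, 0.4), which violates the upper bound; the iteration instead converges to the constrained optimum on the boundary of the box, illustrating why no analytical solution is available once constraints are active.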
Up to now, the predictive control concept for SISO systems has been discussed.


However, the predictive control concept is also applicable to multi-input, multi-output
(MIMO) processes. Although the computations become more complex, an analytical
solution to the optimization problem is still available if the criterion is quadratic, the
model is linear and there are no constraints [11].
1.3 Relationship to Other Methods
LQ controllers and predictive controllers are based on the minimization of a cri-
terion function. Therefore, one can expect that LQ control is very closely related to
predictive control.
1.3.1 Linear-quadratic (LQ) control
Because predictive controllers are in general discrete-time controllers, discrete
LQ controllers only are considered. Further, only the single-input, single-output case
is considered. In order to show the similarities and differences between predictive
control and LQ control a brief discussion of discrete-time LQ control is in order.
Some LQ methods are based on linear state feedback while others are based on output
feedback. Here, the state-space approach is considered.
In discrete-time LQ control the process is given by:
x(k + 1) = Ax(k) + Bu(k)
y(k) = Cᵀx(k)


where x(k) denotes the state of the process and A, B and C are the process parameters.
In order to calculate the controller output the following criterion function is
minimized:
J = Σ_{k=0}^{N} xᵀ(k)Qx(k) + Ru²(k)    (1.2)
where N is the terminal sampling instant, Q is a weighting matrix and R is a
weighting factor. Hence, similar to predictive controller design, a criterion function is
minimized over a horizon. However, in contrast to predictive controllers the receding
horizon approach is not employed. This is a fundamental difference. The criterion
function is minimized only once (at k = 0), resulting in an optimal controller output
sequence from which the controller output used to control the process is taken at
every sample. Hence, future disturbances and modeling errors cannot be taken into
account. However, the controller output sequence determined in this way can also be
generated by the following linear state feedback:
u(k) = −kᵀ(k)x(k)
The feedback vector k is time varying despite the fact that the process is time
invariant. However, as k increases, k becomes constant. This is one of the reasons why
in practice the solution for N → ∞ is used. A time-invariant state feedback
controller is then obtained which, because of the feedback, attenuates the effect of


disturbances and modeling errors on the behavior of the control system. Disturbances
can also be taken into account explicitly by introducing disturbances on the states and
on the output of the system. Then, a stochastic optimization problem must be solved.
This problem is usually solved by applying the separation theorem which states that
the stochastic optimization problem can be separated into two parts: find a state
estimator which gives the best estimates of the states from the observed outputs, and
find the optimal state feedback law which is now a feedback from the estimated states.
The latter problem is solved by minimization of (1.2) in which the real states are
replaced by their estimates.
For finite N, the optimal controller output sequence can be found by using
dynamic programming. First, the optimal controller output u(N) is determined, then
u(N − 1), and so on. Once the controller output sequence u(0), ..., u(N) minimizing

J = Σ_{k=0}^{N} xᵀ(k)Qx(k) + Ru²(k)    (1.3)

has been found, minimization of

J = Σ_{k=j}^{N} xᵀ(k)Qx(k) + Ru²(k)

under the assumption that u(0), ..., u(j − 1) have been supplied to the process and
that there are neither disturbances nor modeling errors, yields a controller output
sequence u(j), ..., u(N) which is equal to that obtained when minimizing (1.3).
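The backward dynamic-programming sweep can be illustrated with a scalar system. The sketch below (Python, not thesis code; all numbers are illustrative) uses the standard scalar Riccati recursion for x(k+1) = a x(k) + b u(k) with cost J = Σ q x² + R u², which yields the time-varying feedback u(k) = −K[k] x(k).

```python
# Finite-horizon LQ by dynamic programming, scalar sketch with
# illustrative numbers. Working backwards from the terminal
# instant k = N gives time-varying gains K[k].

a, b, q, R, N = 0.9, 1.0, 1.0, 1.0, 30

P = q                      # cost-to-go weight at the terminal instant
K = [0.0] * N
for k in range(N - 1, -1, -1):
    K[k] = a * b * P / (R + b * b * P)   # optimal gain at sample k
    P = q + a * a * P - a * b * P * K[k] # Riccati update of cost-to-go
```

Near the terminal instant the gains differ from sample to sample; far from it they settle to a constant value, which illustrates why in practice the steady-state (N → ∞) time-invariant state feedback is used.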
Knowing this, finite horizon LQ control can also be obtained by using a receding
horizon framework where the prediction horizon is decreased by one at every
sampling instant. This situation is illustrated in Figure 1.3, where Q = CᵀC. Using
this, we can rewrite (1.2):

J = Σ_{i=1}^{Hp} y²(k + i) + Ru²(k + i − 1)

where Hp is used instead of N and denotes the prediction horizon. Figures 1.3b and
d denote the situation at time t. Similar to predictive controllers, an optimal
controller output sequence is calculated over the prediction horizon and only the first
controller output in this sequence (= u(k)) is used to control the process. At the next
sampling instant, hence at time t + 1, the situation changes. Again, an optimal
controller output sequence is calculated but now over an interval with length Hp − 1,
keeping the end of the interval constant (namely at time t + Hp).
In conclusion it can be stated that for a finite prediction horizon the fundamental
difference between predictive control and finite horizon LQ control is the
receding horizon approach with a fixed-length prediction horizon employed by
predictive controllers, in contrast to a decreasing-length prediction horizon employed
by LQ controllers.


Figure 1.3. Finite horizon LQ control in a receding horizon framework. Parts b and d
denote the situation at time t, whereas parts a and c denote the situation
at time t + 1.
1.3.2 Pole-placement design
Another model-based controller design method that is frequently used to design
a discrete controller is the pole-placement design method. Pole-placement design is
usually applied to processes described by:
y(k) = q⁻ᵈ (B(q⁻¹)/A(q⁻¹)) u(k) + ξ(k)    (1.4)


Figure 1.4. The closed-loop system as it is considered in pole-placement
design.
where y(k) is the output of the process, u(k) denotes the input of the process, ξ(k) is
a disturbance acting on the process output and d is the time delay of the process in
samples. The controller output is generated by the linear control law:
Ru(k) = −Sy(k) + TSp    (1.5)

where R, S and T are the controller polynomials in the backward shift operator q⁻¹
and Sp denotes the set point. Equations (1.4) and (1.5) result in the closed-loop
system as depicted in Figure 1.4. The controller polynomials R and S are selected
such that the closed-loop poles appear at the desired locations and T is selected such
that the closed-loop system behaves as desired to set point changes. The major
difficulty with the pole-placement design method is to decide where to place the
poles of the closed-loop system, that is, how to translate the design specifications
(which are usually defined in the time domain) into the location of the poles and


zeros of the closed-loop system. Only in the case of low-order processes without
zeros can this be done quite easily. Further, owing to the fact that the controller
outputs are generated by (1.5), constraints on the controller output and an arbitrary
reference trajectory cannot be included in the design. Moreover, in contrast to
predictive control the pole-placement approach can only be applied to linear
processes. Despite this, many issues in pole-placement design also appear in
predictive controller design.
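As a brief numerical illustration of the design step itself, the sketch below (illustrative only, not from the thesis; the helper name and the first-order example numbers are made up) solves the closed-loop identity A·R + q^{-d}B·S = A_cl for the controller polynomials R and S by writing it as a Sylvester-type linear system:

```python
import numpy as np

def solve_diophantine(a, b, a_cl):
    """Solve A*R + B*S = A_cl for polynomials in q^-1 (ascending coeffs).

    The process delay is assumed to be absorbed into b as leading zeros.
    Minimal-degree choice: deg R = len(b) - 2, deg S = len(a) - 2, which
    makes the Sylvester system below square.
    """
    na, nb = len(a) - 1, len(b) - 1
    nr, ns = nb - 1, na - 1
    n = na + nb                        # number of coefficient equations
    M = np.zeros((n, nr + ns + 2))
    for j in range(nr + 1):            # columns multiplying R coefficients
        M[j:j + na + 1, j] = a
    for j in range(ns + 1):            # columns multiplying S coefficients
        M[j:j + nb + 1, nr + 1 + j] = b
    rhs = np.zeros(n)
    rhs[:len(a_cl)] = a_cl
    x = np.linalg.solve(M, rhs)
    return x[:nr + 1], x[nr + 1:]

# Hypothetical first-order example: A = 1 - 0.8 q^-1, B = 0.5 q^-1
# (one sample of delay), desired closed-loop polynomial A_cl = 1 - 0.4 q^-1.
R, S = solve_diophantine([1.0, -0.8], [0.0, 0.5], [1.0, -0.4])
```

For this example R = [1] and S = [0.8]; higher-order processes only enlarge the linear system, which is precisely where choosing good closed-loop pole locations becomes difficult.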
1.4 Summary and organization of the thesis
The design of a unified predictive controller (UPC) for linear SISO processes is
discussed in Chapter 2, together with theoretical results concerning its design
parameters. Relationships with other controller design methods such as minimum-
variance and pole-placement control are also discussed in this chapter. Because of
this unified approach, many well-known predictive controllers and other controllers
can be obtained simply by selecting the parameters of the UPC controller.
A new approach to the long-range predictive control, named Identified
Predictive Control (IPC), is proposed in Chapter 3. The process predictor model,
used to determine the control sequence, is derived from a state space formulation for
the ARIMAX model. Since IPC uses a properly defined observer model of the
minimum-variance /-step-ahead predictor, its implicit identification has dead beat
properties that makes IPC particularly suitable for control of flexible structures. A
numerical example is used to demonstrate the algorithm in Chapter 4.


2. Predictive Controller Design
Any predictive controller is based on the predictive control concept depicted in
Figure 1.2. Therefore, any predictive controller has four major features in common:
1) A model of the process to be controlled. This model is used to predict the process
output over the prediction horizon.
2) The criterion function that is minimized in order to obtain the optimal controller
output sequence over the prediction horizon. Usually, a quadratic criterion which
weights tracking error and controller output is used.
3) The reference trajectory for the process output.
4) The minimization procedure itself.
In the following sections each of the above-mentioned items is discussed
separately.
2.1 Process Models and Prediction
In order to predict the process output over the prediction horizon an i-step-
ahead predictor is required. An i-step-ahead prediction of the process output must be
a function of all data up to t = k (defined as the vector φ), the future controller
output sequence û and a model of the process Ĥ. Such an i-step-ahead predictor
can thus be described by:


ŷ(k + i) = f(û, φ, Ĥ)

where f is a function. Clearly, i-step-ahead predictors depend heavily on the model of
the process. In the following sections it is shown how i-step-ahead predictors can be
obtained for various process and disturbance models.
2.1.1 Process models and prediction when there are no disturbances
One of the simplest models that can be used to predict the output of a process is
the impulse response model:

y(k) = Σ_{j=0}^{∞} h_j u(k − j − 1)    (2.1)

where h_j is the jth element of the impulse response of the process.
Note that an infinite number of impulse response elements are required in this
model. Assuming, however, that the impulse response goes to zero asymptotically,
the impulse response may be truncated to some finite number of terms. Then, (2.1)
becomes:

y(k) = Σ_{j=0}^{n_H − 1} h_j u(k − j − 1)    (2.2)


where n_H is the number of impulse response elements h_j that are taken into
account. All other elements are assumed to be equal to zero. The model
(2.2) is called the Finite Impulse Response model (FIR) [12]. The main
disadvantages of FIR models are:
1) Unstable processes (among which are processes with one or more integrators)
cannot be modeled.
2) The FIR model requires many parameters to be known or estimated.
The main advantages of using FIR models are:
1) Prediction of the process output at t = k + i with i ≥ 1 is simple:

ŷ(k + i) = Σ_{j=0}^{n_H − 1} h_j u(k − j + i − 1)

No complex calculations are required.
2) No assumption has to be made about the order of the process.
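To illustrate this simplicity, here is a minimal sketch (illustrative only; the function name and numbers are made up) of the i-step-ahead FIR prediction above, assuming the future controller outputs are known:

```python
def fir_predict(h, u_past, u_future, i):
    """i-step-ahead FIR prediction: yhat(k+i) = sum_j h[j] * u(k - j + i - 1).

    h        : impulse response coefficients h_0 .. h_{nH-1}
    u_past   : [u(k-1), u(k-2), ...], most recent first
    u_future : [u(k), u(k+1), ...], planned future controller outputs
    """
    yhat = 0.0
    for j, hj in enumerate(h):
        t = i - 1 - j                  # index relative to k, i.e. u(k + t)
        if t >= 0:
            yhat += hj * u_future[t]   # known future input
        else:
            yhat += hj * u_past[-t - 1]  # already applied past input
    return yhat
```

For example, with h = [0.5, 0.3], a past input of 1 and a planned input u(k) = 2, the one-step-ahead prediction is 0.5·2 + 0.3·1 = 1.3; no filtering or polynomial manipulation is needed.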
Another process model that is often used in predictive controllers is the Finite
Step Response model (FSR):

y(k) = Σ_{j=0}^{n_S − 1} s_j Δu(k − j − 1)

where Δ is the differencing operator (Δ = 1 − q^{-1} with q^{-1} the backward shift
operator) and n_S is the number of step response elements s_j that are taken into
account. All other elements are assumed to be constant and equal to s_{n_S}. In the
case where the process is not disturbed by noise, the coefficients of the FSR model
can be determined simply by applying a unit step to the input of the process. The FSR
model has the same advantages and disadvantages as the FIR model.
A process model that does not have the disadvantages of the above-mentioned
models is the transfer-function model:

y(k) = [q^{-d} B(q^{-1}) / A(q^{-1})] u(k − 1)    (2.3)

where d is the time delay of the process in samples (d ≥ 0) and the polynomials A
and B are given by:

A(q^{-1}) = 1 + a_1 q^{-1} + ... + a_{n_A} q^{-n_A}
B(q^{-1}) = b_0 + b_1 q^{-1} + ... + b_{n_B} q^{-n_B}

where n_A and n_B are the degrees of the polynomials A and B, respectively. In
contrast to FIR and FSR models, (2.3) can also be used to model unstable processes.
Note that the FIR and FSR models can be regarded as subsets of (2.3). In the FIR model,


A = 1 and the coefficients of the B polynomial are the impulse response elements:
b_j = h_j. In the FSR model, A = 1 and the coefficients of the B polynomial are given by
b_0 = s_0 and b_j = s_j − s_{j−1}, ∀j ≥ 1. Transfer-function models have the following
advantages:
1) A minimal number of parameters is required to describe a linear process.
2) Stable and unstable processes can be described by using transfer-function models.
The main drawbacks are:
1) An assumption about the order of the process must be made.
2) Prediction of the output of a process described by a transfer-function model is
more complex than that of a process described by FIR or FSR model.
2.1.1.1 Prediction of the output of a process described by a transfer function
The process output at t = k + i based on the model (2.3) can be obtained by
substituting k + i for k in (2.3):

y(k + i) = [q^{-d} B(q^{-1}) / A(q^{-1})] u(k + i − 1)

Using the certainty equivalence principle [3], hence replacing the true d, A and B by


their estimates d̂, Â and B̂ yields:

ŷ(k + i) = [q^{-d̂} B̂(q^{-1}) / Â(q^{-1})] u(k + i − 1)    (2.4)

where the symbol ˆ denotes estimation. Equation (2.4) can also be written as:

ŷ(k + i) = q^{-d̂} B̂ u(k + i − 1) − q(Â − 1) ŷ(k + i − 1)    (2.5)

Note that q(Â − 1) = â_1 + â_2 q^{-1} + ... + â_{n_A} q^{-n_A + 1} since Â is assumed
to be monic. Now, ŷ(k + i) for i > 1 can be computed recursively using (2.5),
starting with the following equation for i = 1:

ŷ(k + 1) = q^{-d̂} B̂ u(k) − q(Â − 1) ŷ(k)    (2.6)

The i-step-ahead predictor (2.4) runs independently of the process, as is shown in
Figure 2.1. Because in practice differences between the model and the process are
inevitable, this i-step-ahead predictor is not suitable for practical purposes. Hence, a
small mismatch between the true parameters of the process and the estimated
parameters or a disturbance on the output of the process will result in a prediction
error. One way to improve the predictions is to compute the predictions using (2.5)


and (2.6) with ŷ(k) in the right-hand side of (2.6) replaced by the measured process
output y(k). In order to show the implication of this, (2.4) is rewritten as:

Â ŷ(k + i) = q^{-d̂} B̂ u(k + i − 1)    (2.7)
Figure 2.1. The structure of the i-step-ahead predictor (Equation 2.4)
Now introduce the following identity:

1/Â = E_i + q^{-i} F_i/Â ,   i.e.,   E_i Â = 1 − q^{-i} F_i    (2.8)

where E_i has a degree less than or equal to i − 1 and F_i is of degree n_A − 1.
Equation (2.8) is called a Diophantine equation [1] whose solution can be computed


manually using long division, or by a computer using a recursive algorithm (see
Appendix A).
Multiplying (2.7) by E_i yields, using (2.8):

ŷ(k + i) = q^{-d̂} E_i B̂ u(k + i − 1) + F_i ŷ(k)    (2.9)
Instead of computing ŷ(k + i) recursively using (2.5), ŷ(k + i) can be calculated
directly using (2.9). Using (2.6) with ŷ(k) replaced by y(k), as suggested above to
correct the model for differences between the model and the process, is now
equivalent to using (2.9) with ŷ(k) replaced by y(k), yielding:

ŷ(k + i) = q^{-d̂} E_i B̂ u(k + i − 1) + F_i y(k)    (2.10)
In order to show the structure of this i-step-ahead predictor, (2.8) is multiplied by
B̂:

E_i B̂ = B̂/Â − q^{-i} (F_i B̂)/Â    (2.11)

Substituting the factor E_i B̂ in (2.10) by the right-hand side of (2.11) yields the
following predictor-corrector model:


ŷ(k + i) = [q^{-d̂} B̂ / Â] u(k + i − 1) + F_i [y(k) − ŷ(k)]    (2.12)
           \_______ prediction _______/   \____ correction ____/
The first part of (2.12) is ŷ(k + i) without replacing ŷ(k) in (2.9) by y(k) and
hence fully relies on the model, while the second part of (2.12) corrects for modeling
errors or disturbances present on the output of the process. Obviously, if there are
neither modeling errors nor disturbances, y(k) − ŷ(k) is equal to zero and the i-step-
ahead predictor (2.12) coincides with (2.4). That (2.12) is in fact an observer is
clearly shown by Figure 2.2.
Figure 2.2. Block diagram of the i-step-ahead predictor (Equation 2.12)
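The Diophantine equation (2.8) and the resulting predictor (2.10) are easy to exercise numerically. The sketch below is illustrative only (the helper name is made up): E_i and F_i are obtained by i steps of long division of 1 by Â, which is the "manual" method mentioned above:

```python
import numpy as np

def diophantine(a, i):
    """Solve 1 = E_i(q^-1) * A(q^-1) + q^-i * F_i(q^-1) by long division.

    a : coefficients of the monic polynomial A, ascending powers of q^-1.
    Returns (E_i, F_i) as coefficient arrays.
    """
    rem = np.zeros(i + len(a) - 1)
    rem[0] = 1.0                           # the numerator "1"
    e = np.zeros(i)
    for j in range(i):                     # i steps of long division by A
        e[j] = rem[j]                      # valid because a[0] == 1 (monic)
        rem[j:j + len(a)] -= e[j] * np.asarray(a)
    f = rem[i:]                            # remainder = q^-i * F_i
    return e, f

# Example with A = 1 - 0.5 q^-1 and i = 2:
# E_2 = 1 + 0.5 q^-1 and F_2 = 0.25, since E_2*A = 1 - 0.25 q^-2.
e, f = diophantine([1.0, -0.5], 2)
```

Given E_i and F_i, the direct predictor (2.10) is just a weighted sum of past/future inputs (through E_i B̂) and the measured output (through F_i), which is what makes it attractive compared with the recursion (2.5).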


2.1.2 Disturbance models
Thus far, disturbances have not been explicitly taken into account. In order to
take these disturbances into account when predicting the output of the process the
disturbances must also be modeled. For this purpose, the model (2.3) is extended
with a disturbance term ξ(k) which represents the totality of all disturbances and,
without loss of generality, is assumed to be located at the output of the process:

y(k) = [q^{-d} B(q^{-1}) / A(q^{-1})] u(k − 1) + ξ(k)    (2.13)

The disturbance ξ(k) may in general be a sum of deterministic and stochastic
disturbances. Prediction of the process output at t = k + i is realized by substitution
of k + i for k in (2.13):

ŷ(k + i) = [q^{-d̂} B̂(q^{-1}) / Â(q^{-1})] u(k + i − 1) + ξ̂(k + i)    (2.14)

Clearly a prediction of the disturbance at t = k + i is required.

2.1.2.1 Deterministic disturbances

Two kinds of deterministic disturbances can be distinguished: disturbances
which can be predicted exactly and disturbances which cannot be predicted exactly.


Deterministic disturbances which can be predicted exactly

Disturbances which can be predicted exactly are, for example, disturbances
characterized by:

φ_ξ(q^{-1}) ξ(k) = 0    (2.15)

where examples of φ_ξ(q^{-1}) are shown in Table 2.1. For example, for a sinusoid
only the frequency ω and the sampling period T_s need to be known; the phase and
amplitude need not be known. The i-step-ahead prediction of a disturbance
characterized by (2.15) is given by:

φ_ξ(q^{-1}) ξ̂(k + i) = 0    (2.16)

Table 2.1. Some deterministic disturbances characterized by φ_ξ(q^{-1}) ξ(k) = 0.

Disturbance | φ_ξ(q^{-1})
Constant    | 1 − q^{-1}
Ramp        | (1 − q^{-1})²
Parabola    | (1 − q^{-1})³
Sinusoid    | 1 − 2 cos(ω T_s) q^{-1} + q^{-2}


The following Diophantine equation is used in order to write ξ̂(k + i) as a function
of data up to time t = k:

1/φ_ξ = E_i + q^{-i} F_i/φ_ξ    (2.17)

Note that the E_i and F_i satisfying (2.17) are different from the E_i and F_i obtained
when solving (2.8). Multiplication of (2.16) by E_i yields, using (2.17):

ξ̂(k + i) = F_i ξ(k)    (2.18)

ξ(k) in (2.18) can be computed from (2.13):

ξ(k) = y(k) − [q^{-d̂} B̂(q^{-1}) / Â(q^{-1})] u(k − 1)    (2.19)

The i-step-ahead predictor (2.14), for the process (2.13) and the disturbances de-
scribed by (2.15), becomes using (2.18) and (2.19):

ŷ(k + i) = [q^{-d̂} B̂(q^{-1}) / Â(q^{-1})] u(k + i − 1) + F_i [y(k) − ŷ(k)]

with ŷ(k) = [q^{-d̂} B̂(q^{-1}) / Â(q^{-1})] u(k − 1). Note that the above equation is an
observer with a structure similar to that shown in Figure 2.2.
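For the disturbances of Table 2.1 this predictor is easy to exercise numerically. The sketch below (illustrative only; the function name is made up) computes F_i by long division of 1 by φ_ξ, as in (2.17), and then applies (2.18) to a short disturbance history:

```python
import numpy as np

def predict_disturbance(phi, xi_hist, i):
    """Return the i-step-ahead prediction F_i * xi(k) of Equation (2.18).

    phi     : coefficients of the monic annihilator phi_xi, ascending in q^-1
    xi_hist : [xi(k), xi(k-1), ...], most recent first
    F_i comes from the long division 1 = E_i*phi_xi + q^-i * F_i.
    """
    rem = np.zeros(i + len(phi) - 1)
    rem[0] = 1.0
    for j in range(i):
        rem[j:j + len(phi)] -= rem[j] * np.asarray(phi)  # divide out phi_xi
    f = rem[i:]                                          # F_i coefficients
    return sum(fj * xi_hist[j] for j, fj in enumerate(f))

# Constant disturbance: phi_xi = 1 - q^-1 gives F_i = [1], so xi_hat(k+i) = xi(k).
const_pred = predict_disturbance([1.0, -1.0], [3.0, 2.5], 4)   # -> 3.0
# Ramp with values ..., 1, 2: phi_xi = (1 - q^-1)^2 extrapolates the slope.
ramp_pred = predict_disturbance([1.0, -2.0, 1.0], [2.0, 1.0], 3)  # -> 5.0
```

The constant case reproduces the intuition that the best prediction of a constant offset is its last measured value, while the ramp case extrapolates linearly, exactly as (2.15) promises.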
Deterministic disturbances which cannot be predicted exactly

Disturbances which cannot be predicted exactly are disturbances which, for ex-
ample, can be modeled by:

ξ(k) = [C(q^{-1}) / D(q^{-1})] v(k)    (2.20)

where v(k) is a signal that can be measured but not predicted and C and D are monic
polynomials. Prediction of ξ(k) at t = k + i is realized by substituting k + i for k in
(2.20):

ξ(k + i) = [C(q^{-1}) / D(q^{-1})] v(k + i)    (2.21)

Because, by assumption, v(k) cannot be predicted it is not possible to calculate
ξ(k + i) exactly. However, because of the filter C/D, ξ(k + i) not only depends on
the unknown v(k + 1), ..., v(k + i) but also on v(k), v(k − 1), ..., which are assumed
to be known. In order to separate (2.21) into future and past signals, the following
Diophantine equation is used:

C/D = E_i + q^{-i} F_i/D    (2.22)

where E_i and F_i are polynomials. Note that a negative degree of a polynomial
implies that the polynomial does not exist and may be replaced by zero. Using (2.22),
(2.21) can be rewritten as:

ξ(k + i) = E_i v(k + i) + (F_i / D) v(k)    (2.23)
           \__ unknown __/  \___ known ___/

The first term of (2.23) involves future values of v(k) and is thus unknown. The
second term of (2.23) is known because, by assumption, v(k) can be measured. Note
that this second term is the future response of ξ(k) if v(k + i) = 0 for i ≥ 1. In
order to be able to calculate ξ̂(k + i) an assumption for v(k + i) must be made for
i ≥ 1. Hence, some a priori knowledge of v(k) must be available. If one assumes that
v(k + i) is constant it can be calculated, for example, by using:

v̂(k + i) = [1/(N + 1)] Σ_{j=0}^{N} v(k − j)

Now, v̂(k + i) is equal to the mean value of v(k) calculated over N + 1 samples.


2.1.2.2 Stochastic disturbances
A stochastic disturbance appearing on the output of the process is assumed to be
described by:
ξ(k) = [C(q^{-1}) / D(q^{-1})] e(k)    (2.24)

where e(k) is a discrete white noise sequence with zero mean and variance σ_e².
If the model (2.13) is used with the disturbance ξ(k) described by (2.24), some
well-known stochastic process models can be obtained for particular settings of C
and D.

Table 2.2. Some well-known process models and the settings for C and D in the
disturbance model (Equation 2.24)

Name of model | C | D
ARX           | 1 | A
ARMAX         | C | A
ARIX          | 1 | ΔA
ARIMAX        | C | ΔA
The prediction of the disturbance at t = k + i is given by:


ξ(k + i) = [C(q^{-1}) / D(q^{-1})] e(k + i)

Separation of future and past terms is again realized by using the Diophantine
equation (2.22):

ξ(k + i) = E_i e(k + i) + (F_i / D) e(k)    (2.25)
           \__ future __/  \___ past ___/

The first term involves future noise only and is thus unknown. Although the second
term of (2.25) involves the unknown e(k), it can be calculated using data available at
t = k. Recall the process (2.13) with ξ(k) given by (2.24):

y(k) = [q^{-d} B(q^{-1}) / A(q^{-1})] u(k − 1) + (C / D) e(k)

Multiplying the above equation by F_i/C and rearranging the result yields:

(F_i / D) e(k) = (F_i / C) [y(k) − (q^{-d} B / A) u(k − 1)]    (2.26)

Equation (2.26) shows that the second term in (2.25) can be computed from data
available at t = k and is thus known. The i-step-ahead predictor for the process
output (2.13) now becomes, using (2.14), (2.25) and (2.26):


y(k + i) = [q^{-d} B / A] u(k + i − 1) + (F_i / C) [y(k) − (q^{-d} B / A) u(k − 1)] + E_i e(k + i)

This predictor still, however, contains a term which is unknown, namely E_i e(k + i).
The best i-step-ahead predictor can be obtained by taking the conditional expectation
of y(k + i) given u and φ:

ŷ(k + i) = E[y(k + i) | u, φ]
         = [q^{-d} B / A] u(k + i − 1) + (F_i / C) [y(k) − (q^{-d} B / A) u(k − 1)]    (2.27)

where E is the expectation operator. Clearly, the prediction error ε(k + i) for the
ith predictor is given by:

ε(k + i) = y(k + i) − ŷ(k + i) = E_i e(k + i)

Since this prediction error consists of future noise only, and because e(k) is
assumed to be white noise with zero mean, the variance of the prediction error is
minimal. For this reason (2.27) will be called the minimum-variance (MV) i-step-
ahead predictor. (2.27) finally yields:


ŷ(k + i) = [q^{-d̂} B̂ / Â] u(k + i − 1) + (F_i / T) [y(k) − ŷ(k)]    (2.28)

with ŷ(k) = [q^{-d̂} B̂ / Â] u(k − 1). Note that F_i in (2.28) is now computed by using
(2.22) with C and D replaced by T and D̂. Note also that (2.28) has a structure identical
to that of (2.12) and hence can be regarded as an observer. This is illustrated by
Figure 2.2 in which F_i is replaced by F_i/T. In order to split up the i-step-ahead
predictor (2.28) into parts that are known at time t = k and future signals, another
Diophantine equation is introduced:

B̂/Â = G_i + q^{-i+d̂} H_i/Â ,   i ≥ d̂ + 1    (2.29)

where G_i and H_i are polynomials. G_i can be written as:

G_i(q^{-1}) = g_0 + g_1 q^{-1} + ... + g_{i−d̂−1} q^{-i+d̂+1}

Using (2.29) the i-step-ahead predictor (2.28) becomes, with i ≥ d̂ + 1:

ŷ(k + i) = G_i u(k + i − d̂ − 1) + (H_i / Â) u(k − 1) + (F_i / T) [y(k) − ŷ(k)]    (2.30)
           \______ future ______/  \________________ past ________________/

The term G_i u(k + i − d̂ − 1) involves future controller outputs only. The other terms


in (2.30) do not depend on future controller outputs and hence are fully determined
at t = k.
2.1.3 Properties of i-step-ahead predictors based on stochastic
disturbance models

This section shows some of the properties of minimum-variance (MV) i-step-
ahead predictors. First, the effect of the stochastic disturbance model (2.24) on the
predictions is discussed.
Influence of the disturbance model on the predictions
The MV i-step-ahead predictor (2.28) gives an insight into the influence of the
T and D polynomials on the predictions and on the predictive controller which
relies on the predictions. For example, (2.28) shows that the differences in i-step-
ahead predictors based on different disturbance models can be expressed by a simple
term which depends on the filtered difference between the measured process output
and the estimated process output. The filter F_i/T depends on the way disturbances are
modeled. Hence, depending on the choice of T and D (remember these polynomials
are selected by the designer), modeling errors turn up filtered on the predictions, and
because the controller output depends on the predictions the modeling errors turn up
filtered on the controller output too.
Another way of showing the influence of the disturbance model on the
predictions is to examine the prediction error as a function of the disturbance model.
Recall for this purpose the process model (2.13):

y(k) = [q^{-d} B / A] u(k − 1) + ξ(k)    (2.31)

and the MV i-step-ahead predictor (2.28) for this model, assuming that ξ(k) in
(2.31) is generated by (2.24):

ŷ(k + i) = [q^{-d̂} B̂ / Â] u(k + i − 1) + (F_i / T) [y(k) − ŷ(k)]    (2.32)
A more compact form of (2.32) can be obtained by using (2.22) (with C replaced by
T and D replaced by D̂) multiplied by B̂D̂/(ÂT):

B̂/Â = B̂D̂E_i/(ÂT) + q^{-i} B̂F_i/(ÂT)

so that

B̂F_i/(ÂT) = q^{i} [B̂/Â − B̂D̂E_i/(ÂT)]

Using this relation, (2.32) can be rewritten as:

ŷ(k + i) = [B̂D̂E_i/(ÂT)] u(k + i − d̂ − 1) + (F_i/T) y(k)    (2.33)

Now, using (2.22) and (2.31), (2.33) can be rewritten as [7]:


ŷ(k + i) = [B̂D̂E_i/(ÂT)] u(k + i − d̂ − 1) + (F_i/T) [(q^{-d} B / A) u(k − 1) + ξ(k)]

The prediction error ε(k + i) is now given by:

ε(k + i) = y(k + i) − ŷ(k + i)

From the above equation it follows that the effect of modeling errors and disturbances
on the prediction error can be influenced by tuning the polynomials D and T.

2.1.4 Using predictions for feedback for processes with delay

This section shows that if the process has a time delay, using predictions of the
process output for feedback rather than using the delayed output of the process
directly potentially yields a control system that is more robust and has a better
performance.

Suppose we have the closed-loop system depicted in Figure 2.3. The rela-
tionship between input and output is given by:

y(k) = [q^{-d-1} B R / (A + q^{-d-1} B R)] w(k)    (2.34)


Figure 2.3. Controller structure using y(k) for feedback.
in which R is the transfer function of the controller.
Because the time delay of the process is present in the closed-loop characteristic
equation, the closed-loop system easily becomes unstable (because of the large phase
shifts caused by the time delay). In order to avoid such problems, the closed-loop
structure depicted in Figure 2.4 is preferable. Note that an algebraic loop is present in
Figure 2.4 if the transfer function R̄ does not contain at least a unit delay. Hence,
calculation of u(k) using the block diagram depicted in Figure 2.4 is not
straightforward. Now the relationship between input and output is given by:
y(k) = [q^{-d-1} B R̄ / (A + B R̄)] w(k)    (2.35)

in which R̄ is a transfer function different from R. With this closed-loop system,
the time delay is no longer present in the closed-loop characteristic equation and
hence phase shifts caused by the time delay no longer influence the stability of the
closed-loop system. The signal used for feedback is y(k+d+1). This signal is not


measurable. Therefore, y(k+d+1) has to be estimated.

Figure 2.4. Controller structure using y(k+d+1) for feedback
Making the input-output relations (2.34) and (2.35) identical yields for R:

R = R̄ / [1 + R̄ (B̂/Â)(1 − q^{-d̂-1})]    (2.36)

Using (2.36) the closed-loop system depicted in Figure 2.3 becomes as shown in
Figure 2.5. The hat symbols indicate that a model of the process is now used to
calculate ŷ(k + d + 1). The signal used for feedback is now an estimate of y(k+d+1).
If the process is correctly estimated and there are no disturbances (ξ(k) = 0), then
the correction signal c(k) is equal to zero. Thus, in this case y(k) and ŷ(k) are
identical. If, on the other hand, disturbances are present or the process is not
correctly estimated, c(k) ≠ 0 and an extra feedback loop is activated.


Figure 2.5. Controller structure using ŷ(k + d + 1) for feedback.

From Figure 2.5 it is easy to calculate ŷ(k + d + 1):

ŷ(k + d + 1) = (B̂/Â) u(k) + [y(k) − q^{-d̂} (B̂/Â) u(k − 1)]    (2.37)

The predictor (2.37) is called the Smith predictor. Note that the Smith predictor
allows correct predictions in the steady state (if ξ(k) and u(k) are constant), even if
the process is not correctly estimated.

The Smith predictor (2.37) is the best d+1-step-ahead predictor for a process
described by (2.13) with a constant disturbance or with ξ(k) generated by:

ξ(k) = e(k)/Δ


Such a disturbance is called random walk or Brownian motion and can be regarded
as a sequence of random steps occurring at random instants. Because this type of
disturbance is found in many industrial processes, the Smith predictor is a good
choice for predicting y(k + d + 1). A drawback is that in that case there is an
algebraic loop (see Figure 2.5) and hence the calculation of u(k) is not
straightforward. This problem can be avoided by predicting ŷ(k + d) and using
ŷ(k + d) for feedback.
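The Smith predictor (2.37) can be sketched as a small simulation. The following is an illustration only (class name, first-order model and numbers are all made up): a delay-free model output (B̂/Â)u(k) is computed, a delayed copy of it is compared with the measured output, and the difference serves as the correction:

```python
class SmithPredictor:
    """Illustrative Smith predictor (2.37) for a first-order model
    A = 1 + a1 q^-1, B = b0, delay d:

        yhat(k+d+1) = (B/A) u(k) + [ y(k) - (B/A) u(k-d-1) ]
    """
    def __init__(self, a1, b0, d):
        self.a1, self.b0 = a1, b0
        self.m = 0.0                   # delay-free model output (B/A) u(k)
        self.pipe = [0.0] * (d + 1)    # delays the model output by d+1 steps

    def step(self, u, y_meas):
        self.m = -self.a1 * self.m + self.b0 * u   # m(k) = (B/A) u(k)
        delayed = self.pipe.pop(0)                 # m(k-d-1)
        self.pipe.append(self.m)
        return self.m + (y_meas - delayed)         # yhat(k+d+1)

# Simulate the true process y(k) = 0.5 y(k-1) + u(k-2) under a unit step;
# with a perfect model, the prediction made at time k equals y(k+2).
sp = SmithPredictor(a1=-0.5, b0=1.0, d=1)
y = [0.0, 0.0]     # y(-2), y(-1)
u = [0.0, 0.0]     # u(-2), u(-1)
preds = []
for k in range(8):
    y.append(0.5 * y[-1] + u[-2])         # true process output y(k)
    u.append(1.0)                         # unit-step controller output u(k)
    preds.append(sp.step(u[-1], y[-1]))   # prediction of y(k+2)
```

When the model is exact and there is no disturbance, the bracketed correction vanishes and the predictor reduces to the fast, delay-free model; with mismatch or disturbances, the correction loop pulls the prediction back toward the measured output.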
2.1.5 A unified approach to process modeling and prediction
Matrix notation
For convenience, the i-step-ahead predictors (2.30):

ŷ(k + i) = G_i u(k + i − d̂ − 1) + (H_i/Â) u(k − 1) + (F_i/T) [y(k) − ŷ(k)]

for i = d̂ + 1, ..., H_P can be written in matrix notation, yielding:

y = G u + H ū + F c    (2.38)

where:

y = [ŷ(k + d + 1), ..., ŷ(k + H_P)]^T
u = [u(k), ..., u(k + H_P − d − 1)]^T
ū = [ū(k − 1), ū(k − 2), ...]^T
c = [c(k), c(k − 1), ...]^T

with

ū(k) = u(k)/Â ,   c(k) = [y(k) − ŷ(k)]/T

and the dimensions of y, u, ū and c are given by:

[y] = (H_P − d) × 1
[u] = (H_P − d) × 1
[ū] = (max_i(n_{H_i}) + 1) × 1
[c] = (max_i(n_{F_i}) + 1) × 1

The matrices G, H and F are built up of the elements of the polynomials G_i, H_i
and F_i, respectively:


G = [ g_0           0     ...  0
      g_1           g_0   ...  0
      ...
      g_{H_P−d−1}   ...        g_0 ] ,   [G] = (H_P − d) × (H_P − d)

H = [H_{d+1}, ..., H_{H_P}]^T ,   F = [F_{d+1}, ..., F_{H_P}]^T

where g_j denotes the jth element of G_i. Note that i ≥ d + 1 since prediction of
y(k + 1), ..., y(k + d) does not make sense because these values cannot be influenced
by the future controller output sequence u. Note also that G is lower triangular.
As already mentioned, the disturbance model is assumed to be described by
ξ(k) = (C/D) e(k) (2.24). Hence, hereafter deterministic disturbances other than those
characterized by φ_ξ(q^{-1}) ξ(k) = 0 (2.15) are no longer considered. They can,
however, be taken into account simply by adding predictions of the deterministic
disturbance to (2.38):

y = G u + H ū + F c + ξ_det    (2.39)

where:

ξ_det = [ξ̂_det(k + d + 1), ..., ξ̂_det(k + H_P)]^T

and ξ̂_det(k + i) is the effect of a deterministic disturbance on y(k + i). The model
(2.39) is used to derive the predictive controller.
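The assembly of the lower-triangular matrix G can be sketched as follows (illustrative only; the helper name is made up). Its entries g_0, g_1, ... are the leading impulse-response coefficients of B̂/Â obtained by long division, shifted down column by column, exactly as in the layout above:

```python
import numpy as np

def build_G(b, a, d, Hp):
    """Build the (Hp - d) x (Hp - d) lower-triangular matrix G of (2.38).

    b, a : coefficients of B and A (A monic), ascending powers of q^-1
    The g_j are the first impulse-response coefficients of B/A.
    """
    n = Hp - d
    g = np.zeros(n)
    rem = np.zeros(n + len(a))
    rem[:len(b)] = b
    for j in range(n):                     # long division of B by A
        g[j] = rem[j]
        rem[j:j + len(a)] -= g[j] * np.asarray(a)
    G = np.zeros((n, n))
    for j in range(n):
        G[j:, j] = g[:n - j]               # Toeplitz, lower triangular
    return G

# Example: B = 1, A = 1 - 0.5 q^-1, d = 0, Hp = 3 gives g = [1, 0.5, 0.25].
G = build_G([1.0], [1.0, -0.5], 0, 3)
```

The Toeplitz structure reflects time invariance: each column is the same truncated impulse response, shifted by one sample for each later controller output.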
2.2 Criterion Functions
Criterion functions for use in predictive controllers are discussed in this section.
Because in predictive controllers, minimization of a criterion function yields the
predictive control law, the choice of the criterion function is of paramount impor-
tance.
Design objectives such as overshoot, rise time, settling time and damping ratio
can easily be understood and specified. However, it is difficult to minimize criterion
functions based on such objectives, because the relationship between the controller
parameters and these criteria is in general highly nonlinear. Analytical solutions are
seldom available. This is why mathematically convenient criterion functions are
often used.
Probably the simplest criterion function that can be used for predictive
controller design is:

J = Σ_{i=1}^{H_P} [ŷ(k + i) − w(k + i)]²    (2.40)
Minimization of the above criterion yields a controller which minimizes the tracking
error between the predicted process output and the reference trajectory.
Minimum-variance control applied to a process with badly situated zeros results
in a badly damped or even unstable control system. Since almost any discretized
process has badly situated zeros, minimum-variance control can hardly ever be used
in practice; it is included for completeness only. As a result, criterion function (2.40)
is not suited for predictive controller design [2]. Basically there are two ways to
overcome the problems of badly situated zeros with minimum-variance control:
1) Include the controller output in the criterion function.
2) Structure the future controller outputs u(k), ..., u(k + H_P − 1) and modify the
criterion function (2.40) in such a way that the badly situated zeros of the process
are not included in the closed-loop characteristic equation.
2.2.1 Controller output weighting
The first way of overcoming the problems with minimum-variance control
results, for example, in the use of the following criterion function:
J = Σ_{i=1}^{H_P} [ŷ(k + i) − w(k + i)]² + ρ u(k + i − 1)²    (2.41)
where ρ is a weighting factor (ρ > 0). Now, two conflicting objectives arise: the
minimization of the tracking error and the minimization of the controller output. The
weighting factor ρ is introduced to make a trade-off between these objectives.
Increasing the weighting factor makes the controller output variance more important
in the criterion function. Minimization of the criterion function then results in a less
active controller output. However, at the same time the tracking of the trajectory by
the process output becomes less important, resulting in a slower process output.

The use of the weighting factor ρ as proposed in (2.41) has two major dis-
advantages:

1) Although the effect of ρ on the closed-loop system is clear, it is hard to choose
ρ such that the system behaves as desired, because ρ depends on the process
and must usually be determined by simulations in combination with the well-
known and often-used trial-and-error method.

2) The use of ρ yields a steady-state error which is a function of ρ for type 0
processes. This is caused by the fact that for a type 0 process u(k) is constant and
nonzero in the steady state if the set point and disturbances are constant.
Consequently, ρ affects the criterion function in the steady state and hence the


steady-state controller output that is obtained by minimization of the criterion
function. Clearly, this results in a steady-state error.

The effect of ρ on the steady-state error is the reason why in many predictive
controllers the controller increments are weighted instead of the controller outputs:

J = Σ_{i=1}^{H_P} [ŷ(k + i) − w(k + i)]² + ρ Δu(k + i − 1)²

In the steady state the controller increments are equal to zero if the reference
trajectory and the disturbances are constant. Hence, in the steady state, ρ has no
effect on the criterion function and thus none on the controller output. Consequently,
ρ does not affect the steady-state error of the closed-loop system. However,
weighting the increments rather than weighting the controller outputs directly does
not make the choice of ρ easier and can have a disastrous effect on the stability of
the closed-loop system.
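In the matrix notation of (2.38), a quadratic criterion of this kind has a closed-form unconstrained minimizer. The sketch below is illustrative only (the function name is made up, and the free response f — everything in the prediction not depending on future controller outputs — is treated as a given vector); it weights the controller outputs as in (2.41) and applies only the first element, receding-horizon style:

```python
import numpy as np

def predictive_control_law(G, f, w, rho):
    """Minimize J = ||G u + f - w||^2 + rho * ||u||^2 over the future
    controller output sequence u (unconstrained least squares).

    G   : prediction matrix (as in (2.38))
    f   : free response over the horizon
    w   : reference trajectory over the horizon
    rho : controller-output weighting factor
    """
    n = G.shape[1]
    u = np.linalg.solve(G.T @ G + rho * np.eye(n), G.T @ (w - f))
    return u[0]            # receding horizon: only u(k) is applied

# Tiny illustrative example with a 2-step horizon and rho = 0:
G = np.array([[1.0, 0.0],
              [0.5, 1.0]])
u0 = predictive_control_law(G, np.zeros(2), np.array([1.0, 1.0]), rho=0.0)
```

With ρ = 0 and G square this reproduces minimum-variance-like behavior (exact tracking of the predicted trajectory); increasing ρ trades tracking accuracy for a calmer controller output, which is exactly the tension described above.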
2.2.2 Structuring the controller output
The second way to overcome the problems with minimum-variance control is
by using a priori information about the structure of the controller output required to
drive the process output to the reference trajectory. For example, the steady-state
relationship between the controller output and the output of the process can be used


to obtain an appropriate structure for the future controller outputs.
Structuring the controller output using steady-state arguments

If the system is driven by a disturbance ξ(k) which in the steady state is
characterized by φ_ξ(q^{-1}) ξ(k) = 0, then the controller output in the steady state
satisfies φ_ξ(q^{-1}) u(k) = 0. This result can easily be extended to the case where the
system is driven by a disturbance ξ(k) and a reference trajectory w(k).

Using this a priori information with respect to the controller output in the steady
state, one can minimize (2.40) under the constraint that the future controller outputs
satisfy φ_ξ(q^{-1}) u(k) = 0. Hence, the predictive controller is now obtained by
minimization of

J = Σ_{i=1}^{H_P} [ŷ(k + i) − w(k + i)]²    (2.42)

taking into account the following equality constraint:

φ_ξ(q^{-1}) u(k + i − 1) = 0 ,   1 < i ≤ H_P    (2.43)

Now, the dimension of the optimization problem is reduced from H_P to 1. Once
u(k) is known, u(k + i − 1) for i > 1 can be computed using (2.43). Minimization of
(2.42), taking into account (2.43), ensures that φ_ξ(q^{-1}) u(k) = 0 is satisfied.
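For the common case φ_ξ(q^{-1}) = Δ (constant disturbances and set points), the constraint forces all future controller outputs to equal u(k), and the minimization collapses to a scalar least-squares problem. A small illustrative sketch (the function name and the step-response parameterization are assumptions, not from the thesis):

```python
def scalar_predictive_control(s, f, w):
    """Minimize J = sum_i (s[i]*u + f[i] - w[i])^2 over the scalar u.

    Under the constraint Δu(k+i-1) = 0 (constant future input), the
    predicted output is yhat(k+i) = s_i * u + f_i, where s_i is the i-th
    step-response coefficient and f_i the free response.
    """
    num = sum(si * (wi - fi) for si, fi, wi in zip(s, f, w))
    den = sum(si * si for si in s)
    return num / den       # dJ/du = 0 has a closed-form solution
```

For example, step-response coefficients s = [1, 2], zero free response and reference w = [1, 2] give u(k) = (1·1 + 2·2)/(1 + 4) = 1, and the single optimized scalar then fixes the whole future input sequence through the constraint.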
Structuring the controller output using transient-behavior arguments

For dead-beat control, the output of the process

y(k) = [q^{-d} B / A] u(k − 1) + (C / D) e(k)

in the absence of disturbances can be driven to a constant reference trajectory in
n_B + 1 samples; n_A + 1 different controller outputs are required to do so. Further,
for i > n_A + 1, u(k + i − 1) is constant. Hence, the output of a dead-beat controller
satisfies:

Δu(k + i − 1) = 0 ,   i > n_A + 1

and the process output satisfies:

y(k + i) = w(k + i) ,   i ≥ n_B + 1

A predictive controller with the same behavior can hence be realized by mini-
mization of:

J = Σ_{i=n_B+1}^{H_P} [ŷ(k + i) − w(k + i)]²    (2.44)


under the following constraint:

Δu(k + i − 1) = 0 ,   n_A + 1 < i ≤ H_P    (2.45)

and H_P → ∞. Note that the tracking error for i = 1, ..., n_B is not included in (2.44)
and Δu(k + i − 1) is not assumed to be zero for i = 2, ..., n_A + 1. In order to have both
possibilities, two extra parameters can be introduced, yielding for (2.44) and (2.45):

J = Σ_{i=H_m}^{H_P} [ŷ(k + i) − w(k + i)]²    (2.46)

and

Δu(k + i − 1) = 0 ,   H_c < i ≤ H_P    (2.47)

where H_m is the minimum-cost horizon and H_c is the control horizon. Hence, we
can state that if H_P → ∞, H_m = n_B + 1 and H_c = n_A + 1, the predictive controller
minimizing (2.46), taking (2.47) into account, causes the process output to settle to a
constant reference trajectory in n_B + 1 samples using n_A + 1 distinct controller
outputs. Hence, for these settings the predictive controller acts as a dead-beat
controller. In order to generalize the above-mentioned result, consider the following
criterion function:


J = Σ_{i=H_m}^{H_P} [ŷ(k + i) − w(k + i)]²    (2.48)

which is minimized under the following constraint:

φ_ξ(q^{-1}) u(k + i − 1) = 0 ,   H_c < i ≤ H_P    (2.49)

The following theorem can now be stated:

Theorem 2.1 If H_P ≥ n_A + n_B + d + n_φ, H_m = n_B + d + 1, H_c = n_A + n_φ,
disturbances are absent and the input/output behavior of the process is correctly
estimated, then minimization of (2.48), taking (2.49) into account, yields a controller
that drives the process output y(k) in n_B + d + 1 samples to a reference trajectory
specified by φ_ξ(q^{-1}) w(k) = 0.

Hence, when using Theorem 2.1 one can now select H_P, H_m and H_c such
that the process output settles in n_B + d + 1 samples at a reference trajectory
described by φ_ξ(q^{-1}) w(k) = 0. If knowledge of the process orders and delay
is available, H_P, H_m and H_c can easily be selected if a dead-beat response [1] is
desired. A disadvantage is that the response time can only be influenced by changing
the sample time. However, the servo behavior can easily be tuned by introducing two


auxiliary signals, y'(k) and u'(k), defined by:

y(k) = [1 / P(q^{-1})] y'(k)
u(k) = [1 / P(q^{-1})] u'(k)

where P(q^{-1}) is a monic polynomial in q^{-1}. If criterion function (2.48) is mini-
mized, taking (2.49) into account, with y(k) and u(k) replaced by y'(k) and
u'(k) respectively, then dead-beat control of y'(k) can be realized by choosing
H_P, H_m and H_c in accordance with Theorem 2.1. Clearly, the response of the
true process output y(k) can be influenced by P.

Because the use of P is essential for tuning the servo behavior of the control
system, P is incorporated directly in criterion function (2.48) and in (2.49), yielding:

J = Σ_{i=H_m}^{H_P} [P ŷ(k + i) − P(1) w(k + i)]²    (2.50)

and

φ_ξ(q^{-1}) P u(k + i − 1) = 0 ,   H_c < i ≤ H_P    (2.51)

As a result, if the reference trajectory is constant, minimizing (2.50) and taking


(2.51) into account, yields, if the parameters in the criterion function are chosen
according to Theorem 2.1 and if the assumptions mentioned in the theorem are
satisfied, a predictive controller.
Criterion function (2.50) shows that Py(k + i) must now be predicted instead
of y(k + i). In a way similar to the one used to derive (2.28) it can be shown that the
MV /-step-ahead predictor predicting Py(k + i) is given by:
a~d BP F
Py(k + i) = -a u(k + i-1) + [y(&) t(£)]
A T
(2.52)
-djj
where y(k) is given by y{k) = u(k -1) and Ft is solved from
A
PT zr Fi
-7T- = Ej +q -A-
D D
where n_{F_i} = max(n_P + n_T − i, n_D − 1). Separation of future and past can be realized
by using:

B P / A = G_i + q^{−i+d} (H_i / A),   i ≥ d + 1
where n_{G_i} = i − d − 1. Using this separation, (2.52) can be rewritten as:
P ŷ(k+i) = G_i u(k+i−d−1) + (H_i / A) u(k−1) + (F_i / T) [y(k) − ỹ(k)]   (2.53)

where the first term contains the future controller outputs and the remaining terms
depend only on past data.
Note that (2.52) is similar to (2.30). However, the polynomials G_i, H_i and F_i in (2.53) differ from
those in (2.30). Finally, collecting the i-step-ahead predictors (2.53) for i = Hm, ..., HP in matrix
notation similar to (2.39) yields:
y* = G ũ + H ū + F c̄ + ℓ_det

where:

y* = [P ŷ(k+Hm), ..., P ŷ(k+HP)]ᵀ
ũ = [u(k), ..., u(k+HP−d−1)]ᵀ
ū = [u(k−1)/A, u(k−2)/A, ...]ᵀ
c̄ = [c(k), c(k−1), ...]ᵀ

and

c(k) = [y(k) − ỹ(k)] / T
The dimensions of y*, ũ, ū and c̄ are given by:

[y*] = (HP − Hm + 1) × 1
[ũ] = (HP − d) × 1
[ū] = (max(n_{H_i}) + 1) × 1
[c̄] = (max(n_{F_i}) + 1) × 1
The matrices G, H and F are still built up of the elements of the polynomials G_i,
H_i and F_i. Note that G is not square if Hm > d + 1.
2.2.3 Extensions to the criterion function
In the preceding sections it has been shown that by structuring the future
controller outputs, problems arising when controlling processes with badly situated
zeros can be avoided. Moreover, by structuring the controller outputs the dimension
of the optimization problem can be reduced. This reduces the numerical complexity
of the optimization problem considerably. Therefore, (2.50) and (2.51) are quite
useful for predictive controller design. To add further design freedom, weighting factors on the
tracking error can be introduced in the criterion function. This yields for (2.50):
J = Σ_{i=Hm}^{HP} γ_i [P ŷ(k+i) − P(1) w(k+i)]²   (2.54)

where γ_i is the weighting factor for the tracking error at t = k + i. A single
parameter λ > 0 is often used to determine the weighting factors γ_i:

γ_i = λ^{HP − i},   Hm ≤ i ≤ HP

Note that if λ = 1, the tracking error is equally weighted over the prediction horizon.
Then, criterion function (2.54) coincides with (2.50).
By choosing λ < 1, the tracking error in the near future is less important than
the tracking error in the remote future. For P = 1 this choice for λ is motivated by
the fact that a real process cannot track a set point change in, for example, one
sample. As a result, the tracking error can be quite large in the first few samples. By not
including these tracking errors in the criterion function that is minimized, the
controller will not attempt to make them smaller. Hence, choosing λ < 1 in
combination with P = 1 usually yields smooth control. However, for nonminimum-
phase processes, choosing λ < 1 and P = 1 yields opposite results. This can be
explained by the fact that choosing λ < 1 is similar to choosing Hm > d + 1. If
λ → 0, then the same results are obtained as by choosing Hm = HP.
Clearly, in the case of a nonminimum-phase process, tracking errors in the near
future must be taken into account and hence λ must not be chosen too small.
By choosing λ > 1, the tracking error in the near future is more important than
that in the far future, yielding more active control actions. However, in contrast with
choosing λ < 1, there seems to be no motivation to choose λ > 1. Moreover, simulations
have shown that in controlling a nonminimum-phase process, choosing λ > 1 can
result in an unstable closed-loop system.
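As a small numerical illustration of the weighting just described, the sketch below (a hypothetical helper, not from the thesis) builds the sequence γ_i = λ^{HP−i} for Hm ≤ i ≤ HP:

```python
# Hedged sketch: tracking-error weights gamma_i = lambda**(Hp - i) from (2.54).
# With lam < 1 the weights grow toward the end of the horizon (near-future
# errors matter less); with lam > 1 they shrink (near-future errors matter more).
def tracking_weights(lam, Hm, Hp):
    return [lam ** (Hp - i) for i in range(Hm, Hp + 1)]
```

For λ = 1 all weights equal 1, recovering (2.50).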
2.2.3.1 A unified criterion function
In order to be able to select and examine different criterion functions, the
following unified criterion function is used:
J = Σ_{i=Hm}^{HP} [P ŷ(k+i) − P(1) w(k+i)]² + ρ Σ_{i=1}^{HP−d} [(Qn/Qd) u(k+i−1)]²   (2.55)

where Qn and Qd are monic polynomials with no common factors. This criterion
function is minimized under the following constraint:

φ P u(k+i−1) = 0,   Hc < i ≤ HP − d   (2.56)

The reason for introducing controller output weighting is that it is in some cases
quite useful despite the fact that ρ is difficult to choose. Moreover, because many
predictive controllers consider controller output weighting, in a unified approach [2]
to predictive controller design such a feature should not be lacking. Note that Qn
and Qd can be used to obtain weighting of u(k) as in (2.41) and weighting of
Δu(k+i−1) as in:

J = Σ_{i=1}^{HP} [ŷ(k+i) − w(k+i)]² + ρ Σ_{i=1}^{HP} [Δu(k+i−1)]²
Note that if the process to be controlled contains time delay,
u(k), ..., u(k+HP−1) do not affect ŷ(k+1), ..., ŷ(k+d) and hence have no effect
on the optimization problem. Thus there is no need to include these predicted process
outputs in the criterion to be minimized. Hence, the minimum cost horizon Hm can
be selected greater than or equal to d + 1.
2.3 The Predictive Control Law
The optimal control law can be derived by the minimization of criterion
function (2.55) subject to (2.56) with respect to the controller output sequence over
the control horizon Hc: u(k), ..., u(k+Hc−1). In some predictive controllers the
minimization of the criterion function is performed by using an iterative method.
However, when the process model is linear, the criterion function is quadratic and
there are no constraints on the controller output, the criterion function can be
minimized analytically. In this section, it is assumed that there are no constraints.
2.3.1 Derivation of the unified predictive control law
If the criterion function J is optimized with respect to the vector u*, then any
local optimum u* satisfies:

g(u*) = ∂J/∂u* = 0

where g denotes the gradient and the symbol ∂ denotes the partial derivative.
Moreover, if the Hessian is positive definite for all u*, then any local optimum is the
global minimum. The Hessian of J is given by:

H(J) = ∂²J / ∂u* ∂u*ᵀ
In order to calculate the gradient of (2.55) with respect to u*, the criterion function
(2.55) is rewritten in matrix notation:

J = (y* − w*)ᵀ(y* − w*) + ρ ŭᵀ ŭ   (2.57)

where:

w* = [P(1) w(k+Hm), ..., P(1) w(k+HP)]ᵀ
y* = [P ŷ(k+Hm), ..., P ŷ(k+HP)]ᵀ

and ŭ = [ŭ(k), ..., ŭ(k+HP−d−1)]ᵀ with ŭ(k+i−1) = (Qn/Qd) u(k+i−1).
Further, introduce the vector u*:

u* = [u(k), ..., u(k+Hc−1)]ᵀ

Note that u* contains only those elements of the controller output sequence that
must be calculated. The other elements over the prediction horizon must satisfy
(2.56).
The gradient of (2.57) with respect to u* becomes:

∂J/∂u* = 2 (∂y*/∂u*) (y* − w*) + 2ρ (∂ŭ/∂u*) ŭ   (2.58)

Equation (2.58) shows that the partial derivatives ∂y*/∂u* and ∂ŭ/∂u* are required.
2.3.1.1 Relationship between ũ and u*
The relationship between ũ and u* can be derived by solving u(k+Hc), ...,
u(k+HP−d−1) from (2.56):

φ P u(k+i−1) = 0,   Hc < i ≤ HP − d   (2.59)
The required relationship can be obtained by using the following Diophantine
equation:

E_{i−Hc} φ P + q^{−(i−Hc)} F_{i−Hc} = 1   (2.60)

where the degree of F_{i−Hc} is given by n_φ + n_P − 1. From (2.60), using
(2.59), it follows that:

u(k+i−1) = F_{i−Hc} u(k+Hc−1),   Hc < i ≤ HP − d   (2.61)

Separation of future and past terms is realized by using:
F_{i−Hc} = G_{i−Hc} + q^{−Hc} H_{i−Hc}   (2.62)

in which the degrees of G_{i−Hc} and H_{i−Hc} are given by n_G = min(Hc, n_φ + n_P) − 1
and n_H = n_φ + n_P − Hc − 1, respectively. Using (2.62), (2.61) becomes:

u(k+i−1) = G_{i−Hc} u(k+Hc−1) + H_{i−Hc} u(k−1)
with Hc < i ≤ HP − d. Note that if φ = Δ = 1 − q^{−1} and P = 1 (hence, controller outputs
are assumed to be constant for i = Hc + 1, ..., HP − d), then G_{i−Hc} = 1 and
H_{i−Hc} = 0.
Now, the relationship between ũ and u* becomes, in matrix notation:

ũ = M u* + N ū′   (2.63)

in which M is a matrix of dimension (HP − d) × Hc whose upper Hc × Hc block is
the identity matrix and whose lower HP − Hc − d rows contain the coefficients
g_{j,i} of the polynomials G_{i−Hc}:

M = [ 1        0        ...   0
      0        1        ...   0
      ...
      0        0        ...   1
      g_{1,0}  g_{1,1}  ...   g_{1,Hc−1}
      ...
      g_{j,0}  g_{j,1}  ...   g_{j,Hc−1} ]

N is a matrix of dimension (HP − d) × (n_φ + n_P − Hc) whose upper Hc rows are
zero and whose lower HP − Hc − d rows contain the coefficients h_{j,i} of the
polynomials H_{i−Hc}:

N = [ 0        ...   0
      ...
      0        ...   0
      h_{1,0}  ...   h_{1,nH}
      ...
      h_{j,0}  ...   h_{j,nH} ]

where j = HP − Hc − d and ū′ is given by:

ū′ = [u(k−1), ..., u(k+Hc−n_φ−n_P)]ᵀ
Note that if Hc = Hp -d, then M=I and N =0.
Further, the relationship between ŭ and ũ is required:

ŭ(k+i−1) = (Qn/Qd) u(k+i−1),   1 ≤ i ≤ HP − d   (2.64)

Separation of future and past elements is realized by using:

Qn/Qd = Φ_i + q^{−i} (Ω_i/Qd)   (2.65)

where Φ_i and Ω_i are polynomials. Note that, because Qn and Qd are monic, Φ_i
is also monic. Using (2.65), (2.64) becomes:

ŭ(k+i−1) = Φ_i u(k+i−1) + (Ω_i/Qd) u(k−1)

Collecting ŭ(k+i−1) for i = 1, ..., HP − d in matrix notation yields:
ŭ = Φ ũ + Ω ū_Q   (2.66)

where Φ is a lower triangular matrix of dimension (HP − d) × (HP − d) built from
the coefficients of the polynomials Φ_i, and Ω is a matrix of dimension
(HP − d) × n_Ω, with n_Ω = max(n_{Ω_i}) + 1, built from the coefficients of the
polynomials Ω_i. The vector ū_Q is given by:

ū_Q = [u(k−1)/Qd, ..., u(k−n_Ω)/Qd]ᵀ

Using (2.63) and (2.66), the relationship between ŭ and u* is given by:

ŭ = Φ M u* + Ω ū_Q + Φ N ū′

The partial derivative ∂ŭ/∂u* now becomes:

∂ŭ/∂u* = Mᵀ Φᵀ
2.3.1.2 Relationship between y* and u*
The partial derivative ∂y*/∂u* can be calculated by using the unified prediction
model:

y* = G ũ + H ū + F c̄ + ℓ_det   (2.67)

in which:

y* = [P ŷ(k+Hm), ..., P ŷ(k+HP)]ᵀ
ũ = [u(k), ..., u(k+HP−d−1)]ᵀ
ū = [u(k−1)/A, u(k−2)/A, ...]ᵀ
c̄ = [c(k), c(k−1), ...]ᵀ

Utilizing (2.67) and (2.63), the relationship between y* and u* is given by:

y* = G M u* + H ū + F c̄ + ℓ_det + G N ū′

and hence:

∂y*/∂u* = Mᵀ Gᵀ
2.3.1.3 The predictive control law
The gradient (2.58) becomes:

∂J/∂u* = 2Mᵀ(GᵀG + ρΦᵀΦ) M u* + 2Mᵀ[Gᵀ(H ū + F c̄ + ℓ_det + G N ū′ − w*) + ρΦᵀ(Ω ū_Q + Φ N ū′)]   (2.68)

and the Hessian of J is:

H(J) = 2Mᵀ(GᵀG + ρΦᵀΦ)M

Assuming that the Hessian is nonsingular (hence positive definite), a global minimum of J with respect to u* can
be obtained by setting the gradient (2.68) equal to zero and solving for u*:

u* = [Mᵀ(GᵀG + ρΦᵀΦ)M]⁻¹ Mᵀ[Gᵀ(w* − H ū − F c̄ − ℓ_det − G N ū′) − ρΦᵀ(Ω ū_Q + Φ N ū′)]   (2.69)
Because this control law is based on the general process model

y(k) = (q^{−d} B / A) u(k−1) + (T / D) e(k)

and on a unified criterion function, it is called the unified predictive control law.
Note that the matrix to be inverted is of dimension Hc x Hc. Hence, a small
control horizon is preferable for numerical reasons and in order to save computation
time. In many practical situations the control horizon can be chosen small and hence
the inverse can be calculated quite easily for low-order processes. The first element
of u* (= u(k)) is used to control the process. All other elements are not used and
need not be calculated. Then, the unified predictive control law (2.69) can be
reduced to:
u(k) = vᵀ w* − hᵀ ū − fᵀ c̄ − vᵀ ℓ_det − v₂ᵀ ū′ − ρ zᵀ ū_Q − ρ z₂ᵀ ū′   (2.70)

where:

vᵀ = xᵀ Gᵀ   1 × (HP − Hm + 1)
xᵀ = e₁ᵀ R_v⁻¹ Mᵀ   1 × (HP − d)
e₁ᵀ = [1, 0, ..., 0]   1 × Hc
R_v = Mᵀ(GᵀG + ρΦᵀΦ)M   Hc × Hc
hᵀ = vᵀ H
fᵀ = vᵀ F
v₂ᵀ = vᵀ G N   1 × (n_φ + n_P − Hc)
zᵀ = xᵀ Φᵀ Ω   1 × n_Ω
z₂ᵀ = xᵀ Φᵀ Φ N


The control law (2.70) can easily be implemented on a computer. Once the controller
vectors v, h, f, v₂, z and z₂ are calculated, the calculation of u(k) is simple and fast.
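To make the structure of (2.69)-(2.70) concrete, the following sketch works out the analytic minimization for a hypothetical first-order process y(k+1) = a·y(k) + b·u(k), with P = 1, Qn/Qd = 1, no disturbances, Hm = 1, HP = 2 and Hc = 1 (u held constant beyond the control horizon). All names and numbers are illustrative, not from the thesis:

```python
# Minimal sketch of the analytic minimization behind (2.69)-(2.70) for a
# hypothetical first-order process, Hp = 2, Hc = 1, quadratic criterion.
def predictive_control_move(a, b, y, w1, w2, rho):
    """Return u(k) minimizing J = (yhat1-w1)^2 + (yhat2-w2)^2 + rho*u^2."""
    g1 = b              # effect of u(k) on yhat(k+1)
    g2 = a * b + b      # effect of u(k) on yhat(k+2), u frozen after Hc = 1
    f1 = a * y          # free response at k+1
    f2 = a * a * y      # free response at k+2
    # least-squares solution: u = G^T (w - f) / (G^T G + rho)
    return (g1 * (w1 - f1) + g2 * (w2 - f2)) / (g1 * g1 + g2 * g2 + rho)

def cost(a, b, y, w1, w2, rho, u):
    """Evaluate the same quadratic criterion for a candidate u."""
    yhat1 = a * y + b * u
    yhat2 = a * yhat1 + b * u
    return (yhat1 - w1) ** 2 + (yhat2 - w2) ** 2 + rho * u ** 2
```

Because the criterion is quadratic in u, the closed-form move is the exact global minimizer, mirroring the matrix inverse in (2.69).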
2.3.2 The polynomial approach
In this section it is shown that if ℓ_det = 0, the unified predictive control law
(2.70) can be written as:

ℛ u(k) = −𝒮 y(k) + 𝒯 w(k+HP)   (2.71)

The control law (2.71) is the one used in the pole-placement controller design
method. In this case w(k+HP) is equal to the set point. The purpose of writing the
predictive control law in polynomial form is for analysis purposes.
Rewriting (2.70) in polynomial form yields:

u(k) = V w*(k+HP) − (H/A) u(k−1) − F c(k) − V₂ u(k−1) − ρ (Z/Qd) u(k−1) − ρ Z₂ u(k−1)   (2.72)

in which:

V = v_{HP−Hm+1} + v_{HP−Hm} q^{−1} + ... + v₁ q^{−HP+Hm}
H = 𝒫{hᵀ}
F = 𝒫{fᵀ}
V₂ = 𝒫{v₂ᵀ}
Z = 𝒫{zᵀ}
Z₂ = 𝒫{z₂ᵀ}

where 𝒫 is an operator that translates a vector into a polynomial. Rewriting (2.72)
using ỹ(k) = (q^{−d} B / A) u(k−1) and c(k) = [y(k) − ỹ(k)]/T yields:
[A T Qd + q^{−1}(H T Qd + A T Qd V₂ + ρ A T Z + ρ A T Qd Z₂ − q^{−d} B F Qd)] u(k)
= A T Qd V w*(k+HP) − A F Qd y(k)

From the above equation it directly follows that:

ℛ = A T Qd + q^{−1}(H T Qd + A T Qd V₂ + ρ A T Z + ρ A T Qd Z₂ − q^{−d} B F Qd)
𝒮 = A F Qd
𝒯 = A(q^{−1}) P(1) T(q^{−1}) Qd(q^{−1}) V(q^{−1})
Figure 2.6 shows the control law (2.71) in a block diagram. From Figure 2.6 the
following transfer functions can be calculated:

y(k) = [q^{−d−1} B 𝒯 w(k+HP) + A ℛ ξ(k)] / [A ℛ + q^{−d−1} B 𝒮]   (2.73)

u(k) = [A 𝒯 w(k+HP) − A 𝒮 ξ(k)] / [A ℛ + q^{−d−1} B 𝒮]

where ξ(k) denotes the disturbance acting on the process output.

Figure 2.6. The closed-loop system
2.3.3 Some implementation aspects
The i-step-ahead predictors used in the unified predictive control (UPC, [2]) are
given by (2.52):

P ŷ(k+i) = (q^{−d} B P / A) u(k+i−1) + (F_i / T) [y(k) − ỹ(k)]   (2.74)

where ỹ(k) is given by ỹ(k) = (q^{−d} B / A) u(k−1) and F_i is solved from:

P T / D = E_i + q^{−i} (F_i / D)   (2.75)
Multiplication of the above equation by BD/(AT) and rewriting the result yields:

B P / A = (B D E_i) / (A T) + q^{−i} (B F_i) / (A T)   (2.76)

Substitution of (2.76) in (2.74), using ỹ(k) = (q^{−d} B / A) u(k−1), yields:

P ŷ(k+i) = (q^{−d} B D E_i / (A T)) u(k+i−1) + (F_i / T) y(k)   (2.77)

Separation of future and past is realized by using:

B D E_i / (A T) = G_i + q^{−i+d} (H_i / (A T))   (2.78)

Using (2.78) yields, for (2.77):

P ŷ(k+i) = G_i u(k+i−d−1) + (H_i / (A T)) u(k−1) + (F_i / T) y(k)   (2.79)
Collecting the i-step-ahead predictors (2.79) for i = Hm, ..., HP in a matrix notation
yields:

y* = G ũ + H ū + F ȳ + ℓ_det   (2.80)

where ℓ_det denotes the effect of deterministic disturbances on P ŷ(k+i), and ū
and ȳ are given by:

ū = [u(k−1)/(AT), u(k−2)/(AT), ...]ᵀ
ȳ = [y(k)/T, y(k−1)/T, ...]ᵀ
By using the same derivations as those used in Section 2.3.1 to derive the UPC
control law, it can be shown that (2.70) becomes:

u(k) = vᵀ w* − hᵀ ū − fᵀ ȳ − vᵀ ℓ_det − ρ zᵀ ū_Q − (v₂ + ρ z₂)ᵀ ū′   (2.81)

where ū′ = [u(k−1), ..., u(k+Hc−n_φ−n_P)]ᵀ and ū_Q = [u(k−1)/Qd, ..., u(k−n_Ω)/Qd]ᵀ.
If A is a factor of D, one simply removes A from (2.78) and solves G_i and
H_i from:
(B D₁ E_i) / T = G_i + q^{−i+d} (H_i / T)   (2.82)

where D = A D₁. Further, in this particular case, (2.79) becomes:

P ŷ(k+i) = G_i u(k+i−d−1) + [H_i u(k−1) + F_i y(k)] / T

and u(k−1) is no longer filtered by A. The UPC control law is still given by
(2.81) except that ū is now given by:

ū = [u(k−1)/T, u(k−2)/T, ...]ᵀ
Rewriting (2.81) in the polynomial form (2.71) shows that when A is a factor of D,
then A does not appear as a factor of 𝒮 and 𝒯, and consequently A is no longer
a factor of the closed-loop characteristic equation. As a result, A need not be
eliminated from ℛ and the closed-loop characteristic equation, thus avoiding
numerical problems. Finally, the control law (2.81) is used to control the process.
2.4 The Reference Trajectory
The reference trajectory is discussed in this section. For the sake of convenience,
75


time delay is assumed to be absent: d = 0.
Figure 2.7. Set point changes known a priori.
Often, a reference trajectory is used to define the behavior of the process output
from one set point to another. In generating a reference trajectory there are two
distinct approaches:
First, set point changes are known a priori. Then a set point change at
t = k + Hp is already known at t = k and the controller can respond to it by
calculating the correct u(k). Figure 2.7a and b illustrates this approach when the
reference trajectory is equal to the set point. Figure 2.7a shows the situation at
t = k − 1. Now the set point change at t = k + HP is beyond the prediction horizon
and hence is not included in the reference trajectory:

w = [1, ..., 1]ᵀ   (2.83)
Figure 2.7b shows the situation at t = k. Now, the set point change is detected and w
becomes:
w = [l,---,l,Sp]r
Obviously, by generating w as described above, the relation
w(k+i−1) = q^{−1} w(k+i) is valid for all i.
Figure 2.8. Set point changes not known a priori.
Second, set point changes are not known a priori. If, for example, an operator
changes the set point at t = k, this set point can be fed to the controller by making w
equal to:
w = [Sp, ..., Sp]ᵀ
Note that at t = k -1, the reference trajectory w is still given by (2.83). Figure 2.8a
and b illustrates this approach. Again, Figure 2.8a shows the reference trajectory at
t = k-1, while Figure 2.8b shows the reference trajectory at t = k.
Apart from whether changes in the set point are known a priori or not, the way
the trajectory is generated plays an important role. The trajectory may be an arbitrary
sequence of points describing a path that a robot must track, for example, or a
parabola which is well suited to a number of servo systems. However, for many
processes, a simple first-order trajectory can be used. Such a trajectory is generated
by:
w(k+i) = (1 − a) Sp + a w(k+i−1),   1 ≤ i ≤ HP   (2.84)

One question has not yet been answered: how is the reference trajectory initiated (i.e.
how to choose w(k) in (2.84))? Two possibilities are:
1. Use w(k) = y(k), as in many predictive controllers [8].
2. Use w(k) as the initial value of the reference trajectory.
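A minimal sketch of the first-order trajectory (2.84), with the initial value chosen according to either possibility above (the function name is illustrative, not from the thesis):

```python
# Sketch of the first-order reference trajectory (2.84):
#   w(k+i) = (1 - a)*Sp + a*w(k+i-1),  1 <= i <= Hp
# initialized either at the current process output y(k) (possibility 1)
# or at the current reference value w(k) (possibility 2).
def reference_trajectory(Sp, w0, a, Hp):
    """Generate [w(k+1), ..., w(k+Hp)] from the initial value w0."""
    traj = []
    w = w0
    for _ in range(Hp):
        w = (1.0 - a) * Sp + a * w
        traj.append(w)
    return traj
```

With a = 0 the trajectory jumps to Sp immediately; as a approaches 1 the approach to the set point becomes slower.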
Both possibilities are shown in Figure 2.9, where part a shows a first-order trajectory
initiated at y(k) and part b shows a first-order trajectory initiated at w(k). By
using the first alternative, an extra feedback loop is activated. Rewriting (2.84)
yields:
w(k+1) = (1 − a) Sp + a y(k)
Figure 2.9. Two ways of generating a first-order reference trajectory.
In vector notation the above equation becomes:

w = π Sp + ν y(k)   (2.85)

in which π and ν are vectors of dimension HP × 1:

π = [(1−a), (1−a²), ..., (1−a^{HP})]ᵀ
ν = [a, a², ..., a^{HP}]ᵀ

Substituting (2.85) in (2.70) and using P = 1 yields (now w* = w):
u(k) = vᵀπ Sp + vᵀν y(k) − hᵀ ū − fᵀ c̄ − vᵀ ℓ_det − v₂ᵀ ū′ − ρ zᵀ ū_Q − ρ z₂ᵀ ū′

From the above equation it follows directly that 𝒮 in (2.71) becomes:

𝒮 := 𝒮 − vᵀν A T Qd

Note that if T = 1 and D = 1 then feedback is realized only by the way the reference
trajectory is generated:

𝒮 = −vᵀν A Qd
Choosing a reference trajectory is a way of defining the desired closed-loop
response. However, the same goal can be achieved by defining P in the criterion
function that is minimized. Hence, there is a strong relationship between generating
the reference trajectory as in (2.84) and choosing P = 1 − a q^{−1}. If HP = Hm = 1, the
two ways of defining the closed-loop behavior are identical. Then (2.84) becomes:

w(k+1) = (1 − a) Sp + a y(k)
Substituting the above equation in criterion function (2.55) with P = 1 yields:

J = [(1 − a q^{−1}) ŷ(k+1) − (1 − a) Sp]² + ρ [(Qn/Qd) u(k)]²
From the above equation it follows directly that the same result could be achieved by
using w(k+1) = Sp and choosing P in (2.55) as:

P = 1 − a q^{−1}
If HP > 1, the criterion function (2.55) with P = 1 − a q^{−1} can be rewritten as:

J = Σ_{i=Hm}^{HP} [ŷ(k+i) − ((1−a) Sp + a ŷ(k+i−1))]² + ρ Σ_{i=1}^{HP−d} [(Qn/Qd) u(k+i−1)]²

Hence, minimization of (2.55) with P = 1 − a q^{−1} is equivalent to minimization of
(2.55) with P = 1, with the reference trajectory generated by:

w(k+i) = (1 − a) Sp + a ŷ(k+i−1)   (2.86)
If ŷ(k+i−1) = w(k+i−1), then (2.84) and (2.86) are identical. In general this will
not be the case. However, if the predicted process output tracks the reference signal
closely, then both methods yield approximately the same results. If, for example,
Hc = HP, Hm = 1 and ρ = 0, then both ways of generating the reference trajectory
are identical because then the controller minimizing (2.55) does not depend on Hp.
2.5 Overview of Design Parameters
An overview of all design parameters of the UPC controller is given in this
section. In Table 2.3 the parameters are shown, together with their range and type
(polynomial, integer, real). If a parameter is a polynomial, no range is specified, as
indicated by the symbol '−'. The table shows that many parameters have to be
selected by the designer of the control system, which is a direct result of the unified
approach that has been used.
Table 2.3. Unified predictive control (UPC) design parameters.

Parameter | Description | Type | Range
HP | Prediction horizon | Integer | ≥ Hm
Hm | Minimum cost horizon | Integer | ≥ d + 1
Hc | Control horizon | Integer | 1 ≤ Hc ≤ HP − d
P | Filter on tracking error | Polynomial | −
φ | Minimal polynomial of P | Polynomial | −
ρ | Weighting factor | Real | ≥ 0
Qn | With Qd: filter on controller output | Polynomial | −
Qd | With Qn: filter on controller output | Polynomial | −
T | With D: noise model | Polynomial | −
D | With T: noise model | Polynomial | −
2.6 Summary
The four basic elements of predictive controllers have been reviewed in this
chapter. First, many aspects of the prediction of the process output have been
discussed. It has been shown how i-step-ahead predictors can be derived for different
process and disturbance models, including FIR and transfer-function models and
deterministic and stochastic disturbances. It has also been shown that a transfer-
function model with arbitrary degrees for the numerator and denominator
polynomials, in combination with a stochastic disturbance model, is well suited to
the design of a unified predictive controller. This is due to the fact that such a model
can be used to derive i-step-ahead predictors for a wide variety of processes and
disturbances. Moreover, the process model unifies those predictors that are used in
many well-known predictive controllers. Further, it has been shown that i-step-ahead
predictors have the structure of an observer: predictions based on the model are
corrected by using the deviation between the process and model output. The
stochastic disturbance model has been shown to play a key role in determining the
robustness and regulator behavior of the closed-loop system. Because of this and the
difficulty of estimating the disturbance model it is assumed that the parameters of the
disturbance model are selected by the designer of the control system. As a result, the
stochastic disturbance model can also be used to obtain i-step-ahead predictors for a
large class of deterministic disturbances.
The second basic element of predictive controllers is the criterion function that
is minimized in order to obtain the controller outputs. It has been shown that the
choice of the criterion function is of paramount importance. Further, it has been
shown that in order to obtain a criterion function suitable for predictive controller
design it is essential to structure the future controller outputs. This can be achieved
by using a priori knowledge about the relationship between the controller output and
the desired process output. In this chapter, the relationships between u(k) and
y(k) have been used for this purpose. As a result, for particular settings of the
parameters in the unified criterion function, the predictive controller minimizing this
criterion function becomes a pole-placement controller where the closed-loop pole
locations are defined by the design polynomial P in the criterion function. Hence, in
general, pole-placement control can be regarded as a subset of predictive control.
Moreover, the criterion function used in most existing predictive controllers can be
obtained for particular settings of the parameters in the unified criterion function.
Using this unified criterion function and the unified i-step-ahead predictors, it has
been possible to derive conditions that, if satisfied, ensure that steady-state errors do
not occur for reference trajectories satisfying φ_w(q^{−1}) w(k) = 0 in the steady state
and for disturbances satisfying φ_f(q^{−1}) ξ(k) = 0 in the steady state, irrespective of
modeling errors. Moreover, conditions have also been derived that, if satisfied,
ensure that the process output settles to a reference trajectory described by
φ_w(q^{−1}) w(k) = 0 in n_B + d + 1 samples. Here, however, it is assumed that the
process is correctly modeled and that there are no disturbances.
Also, it was shown that the unified criterion function can be minimized
analytically, yielding a linear time-invariant control law. This control law can also be
written in the standard form ℛ u(k) = −𝒮 y(k) + 𝒯 w(k+HP), where 𝒮 and 𝒯 are
functions of the criterion parameters and of the model, as is illustrated in Figure 2.10.
The diagram shows that if the process is linear and constraints are absent then,
theoretically, the same results can be obtained with a pole-placement controller as
can be obtained with a predictive controller. With a pole-placement controller P and
T are selected such that the desired result is obtained whereas in a predictive
controller Hp,Hm and Hc are selected such that the control system behaves as
desired.
Figure 2.10. Block diagram illustrating predictive controller design for a linear
system without constraints
The predictive controller minimizing the unified criterion function using the
unified i-step-ahead predictors for the prediction of the process output has been
called the Unified Predictive Controller (UPC) and is a unification and extension of a
number of well-known predictive controllers and of pole-placement controllers.
Finally, it has been shown that generating the reference trajectory by filtering
the set point by using a first-order filter initiated at y(k) is closely related to
choosing the P polynomial in the criterion function according to this filter and using
a step as the reference trajectory.
A new approach to long-range predictive control, named Identified
Predictive Control (IPC), is proposed in the next chapter. The IPC control law is
constantly corrected for any changes in process parameters by extracting the
relevant information from the current input-output sequence. Since IPC uses a
properly defined observer model of the i-step-ahead predictor, its identification has
dead-beat properties, which makes IPC particularly suitable for control of flexible
structures.
3. Identified Predictive Control
Self-tuning control algorithms are potential successors to manually tuned PID
controllers traditionally used in process control applications. A very attractive design
method for self-tuning controllers is the long-range predictive control (LRPC [4]).
The success of LRPC is due to its effectiveness with plants of unknown order and
dead-time which may be simultaneously nonminimum phase and unstable or may
have multiple lightly damped poles (as in the case of flexible structures or flexible
robot arms [13]).
LRPC is a receding horizon strategy and can be, in general terms, summarized
as follows. Assuming a long-range (or multi-step) cost function, the optimal control
law is found in terms of:
(1) Unknown parameters of the predictor model of the process.
(2) Current input-output sequences.
(3) Future reference signal sequences.
A common approach is to assume that the input-output process model is known or
separately identified and then used to find the predictor model parameters. Once
these are known, the optimal control law determines a control signal at the current
time t which is applied at the process input and the whole procedure is repeated at
the next time instant.
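The receding-horizon procedure summarized above can be sketched as follows. The first-order plant y(k+1) = a·y(k) + b·u(k), the constant reference, and all tuning values below are hypothetical, chosen only to make the mechanics concrete:

```python
# Schematic receding-horizon (LRPC) loop following steps (1)-(3) above:
# form the predictor, minimize the multi-step cost, apply only the first
# control move, and repeat at the next time instant.
def receding_horizon_loop(a, b, setpoint, steps, Hp=5, rho=0.01):
    y, history = 0.0, []
    for _ in range(steps):
        # (1)+(2): predictor parameters and current data give the free response
        free = [y * a ** i for i in range(1, Hp + 1)]
        # effect of a constant future input u on each predicted output
        g = [b * sum(a ** j for j in range(i)) for i in range(1, Hp + 1)]
        # (3): future reference sequence (constant set point here)
        w = [setpoint] * Hp
        # minimize sum_i (free_i + g_i*u - w_i)^2 + rho*u^2 analytically
        u = sum(gi * (wi - fi) for gi, fi, wi in zip(g, free, w)) / (
            sum(gi * gi for gi in g) + rho)
        y = a * y + b * u   # apply only the first move; the horizon recedes
        history.append(y)
    return history
```

In a self-tuning implementation the predictor parameters would be re-identified at every sample instead of being fixed, which is the step IPC addresses.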
Most of the recent research in this field is apparently centered around the LRPC
formulation [8, 9], known as Generalized Predictive Control (GPC). GPC uses the
ARIMAX/CARIMA model of the process in its input-output formulation. The use of
this model has the effect of augmenting the original process with an integrator or
considering increments of the past, current, and future control signals as its inputs.
GPC assumes a finite control horizon, i.e., the penalty for control increments beyond
this horizon is infinite.
In this chapter, GPC formulation is used but the process predictor model is
derived from state space formulation of the ARIMAX model and is directly
identified over the receding horizon, i.e., using current input-output sequences. The
underlying technique in the design of Identified Predictive Control (IPC) algorithm
is the identification algorithm of observer/Kalman filter Markov parameters [10].
3.1 Process model structure
3.1.1. Auto regressive (AR) process
Consider the first-order process:

y(t) = −a₁ y(t−1) + e(t) = −a₁ q^{−1} y(t) + e(t)

so that

y(t) = e(t) / (1 + a₁ q^{−1}),   |a₁| < 1 (the stability condition)

which corresponds to the passing of a white noise e(t) through a stable filter
1/(1 + a₁ q^{−1}). An auto-regressive process has the following general form:

y(t) = −Σ_{τ=1}^{n} a_τ y(t−τ) + e(t)   or   A(q^{−1}) y(t) = e(t)

with A(q^{−1}) = 1 + Σ_{τ=1}^{n} a_τ q^{−τ} being a polynomial with all its roots within the unit
circle (A(z^{−1}) = 0 ⇒ |z| < 1).
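A quick simulation sketch of the AR(1) case above (the function name and tuning values are illustrative): for |a₁| < 1 the filter is stable, so the sample variance settles near the theoretical value σ²/(1 − a₁²) instead of growing without bound.

```python
# Simulation sketch of the AR(1) process y(t) = -a1*y(t-1) + e(t) driven by
# unit-variance Gaussian white noise; stable for |a1| < 1.
import random

def simulate_ar1(a1, n, seed=0):
    rng = random.Random(seed)
    y, out = 0.0, []
    for _ in range(n):
        e = rng.gauss(0.0, 1.0)   # white noise source e(t)
        y = -a1 * y + e
        out.append(y)
    return out
```

For a₁ = 0.5 the stationary variance is 1/(1 − 0.25) ≈ 1.33, which a long sample run should approach.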
3.1.2. Linear models
A complete linear model of a linear time-invariant system (LTIS) is given by:

y(t) = G(q) u(t) + H(q) e(t)

with G(q) = Σ_{k=1}^{∞} g(k) q^{−k} and H(q) = 1 + Σ_{k=1}^{∞} h(k) q^{−k}. G denotes the dynamic properties
of the system (it tells how the system output is formed from the input). For linear
systems G is called the transfer function. H represents the noise properties and is called the
noise model. It describes how the disturbances are formed from the same standardized
noise source e(t).

As we can see, G and H are defined by the infinite sequences {g(k)} and {h(k)}.
In most cases this is an impractical way of specification. However, using such
structures as rational transfer functions or a finite-dimensional state-space
representation, G and H can be specified in terms of a finite number of numerical parameters.
Also, if we assume that e(t) is Gaussian, then the probability density function
(PDF) is entirely specified by the first and the second moment:

E e(t) = ∫_{−∞}^{∞} x f_e(x) dx = 0

E e²(t) = ∫_{−∞}^{∞} x² f_e(x) dx = σ²

where f_e(x) is the probability density function.
Let θ be a vector of model parameters to be determined. Then, we have the
following model description:

y(t) = G(q, θ) u(t) + H(q, θ) e(t)
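As a minimal parametrized example, suppose (purely hypothetically) that G(q, θ) = b q^{−1}/(1 + a q^{−1}) and H(q, θ) = (1 + c q^{−1})/(1 + a q^{−1}), so θ = (a, b, c); the model then reduces to a first-order ARMAX difference equation:

```python
# Hedged sketch of y(t) = G(q, theta)*u(t) + H(q, theta)*e(t) for the
# hypothetical first-order case above, i.e. the ARMAX recursion
#   y(t) = -a*y(t-1) + b*u(t-1) + e(t) + c*e(t-1),  theta = (a, b, c).
def armax_step(a, b, c, y_prev, u_prev, e, e_prev):
    return -a * y_prev + b * u_prev + e + c * e_prev
```

With e ≡ 0 and a constant input u, the recursion settles at y = b·u/(1 + a) for |a| < 1, i.e. the static gain of G.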
3.1.3 Prediction and predictor model
Let us define:
v(t) = H(q) e(t) = Σ_{k=0}^{∞} h(k) e(t−k)

and assume that H is stable, that is, Σ_{k=0}^{∞} |h(k)| < ∞. Since h(0) = 1, we can represent v(t) as:

v(t) = e(t) + Σ_{k=1}^{∞} h(k) e(t−k)
Suppose that we have observed v(s) for s ≤ t − 1 and that we want to predict the
value v(t) based on these observations. Let us denote this predictor by v̂(t|t−1).
Since the variable e(t) has zero mean, we have:

v̂(t|t−1) = Σ_{k=1}^{∞} h(k) e(t−k) = [H(q) − 1] e(t) = [1 − H^{−1}(q)] v(t)
since e(t) = H^{−1}(q) v(t). We would like to predict y(t):

ŷ(t|t−1) = G(q) u(t) + v̂(t|t−1)
         = G(q) u(t) + [1 − H^{−1}(q)] v(t)
         = G(q) u(t) + [1 − H^{−1}(q)] [y(t) − G(q) u(t)]     (using y(t) = G(q) u(t) + v(t))
         = H^{−1}(q) G(q) u(t) + [1 − H^{−1}(q)] y(t)

or, equivalently:

H(q) ŷ(t|t−1) = G(q) u(t) + [H(q) − 1] y(t)
This is the equation of the one-step-ahead predictor. The prediction error is:

y(t) − ŷ(t|t−1) = −H^{−1}(q) G(q) u(t) + H^{−1}(q) y(t) = e(t)
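The predictor equation above can be checked numerically. The sketch below uses the hypothetical first-order case G = b q^{−1}, H = 1 + c q^{−1} (names and data are illustrative); the recursion then reads ŷ(t) = −c ŷ(t−1) + b u(t−1) + c y(t−1), and the prediction error should reproduce the driving noise e(t):

```python
# Numerical sketch of the one-step-ahead predictor
#   H(q)*yhat(t|t-1) = G(q)*u(t) + [H(q) - 1]*y(t)
# for the hypothetical case G = b*q^-1, H = 1 + c*q^-1 (|c| < 1).
def predictor_run(b, c, u_seq, e_seq):
    """Simulate y(t) = b*u(t-1) + e(t) + c*e(t-1) and its one-step predictions."""
    y_prev = u_prev = e_prev = yhat_prev = 0.0
    errors = []
    for u, e in zip(u_seq, e_seq):
        y = b * u_prev + e + c * e_prev                   # process output
        yhat = -c * yhat_prev + b * u_prev + c * y_prev   # one-step predictor
        errors.append(y - yhat)
        y_prev, u_prev, e_prev, yhat_prev = y, u, e, yhat
    return errors
```

Starting from zero initial conditions, the returned errors coincide (up to rounding) with the white-noise sequence e(t), as the derivation predicts.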
91