Citation
Identification and control of a flexible robotic arm model

Material Information

Title:
Identification and control of a flexible robotic arm model
Creator:
Kebede, Anteneh
Place of Publication:
Denver, CO
Publisher:
University of Colorado Denver
Publication Date:
Language:
English
Physical Description:
xi, 186 leaves : illustrations ; 29 cm

Subjects

Subjects / Keywords:
Robots -- Control systems ( lcsh )
Vibration ( lcsh )
Robots -- Control systems ( fast )
Vibration ( fast )
Genre:
bibliography ( marcgt )
theses ( marcgt )
non-fiction ( marcgt )

Notes

Bibliography:
Includes bibliographical references (leaves 185-186).
Thesis:
Submitted in partial fulfillment of the requirements for the degree, Master of Science, Electrical Engineering
General Note:
Department of Electrical Engineering
Statement of Responsibility:
by Anteneh Kebede.

Record Information

Source Institution:
University of Colorado Denver
Holding Location:
Auraria Library
Rights Management:
All applicable rights reserved by the source institution and holding location.
Resource Identifier:
34233878 ( OCLC )
ocm34233878
Classification:
LD1190.E54 1995m .K43 ( lcc )

Full Text
IDENTIFICATION AND CONTROL OF A FLEXIBLE
ROBOTIC ARM MODEL
by
Anteneh Kebede
B.S., University of Colorado, 1992
A thesis submitted to the
Faculty of the Graduate School of the
University of Colorado at Denver
in partial fulfillment
of the requirements for the degree of
Master of Science
Electrical Engineering
1995


This thesis for the Master of Science
degree by
Anteneh Kebede
has been approved
by
Miloje S. Radenkovic
Hamid Z. Fardi
Date


Anteneh Kebede (M.S., Electrical Engineering)
Identification and Control of a Flexible Robotic Arm Model
Thesis directed by Professor Jan T. Bialasiewicz
ABSTRACT
The purpose of this thesis is to design a controller which can suppress
vibrations of a physical model of a flexible robotic arm. As part of the
design, the identification of the flexible robotic arm model was carried out
using the observer/Kalman filter identification and the recursive least squares
algorithms. An adaptive pole assignment controller is used to control the
flexible robotic arm model. A gradient estimator is used to eliminate the
possible bursting phenomena in the adaptive control system.
This abstract accurately represents the content of the candidate's thesis. I
recommend its publication.
Signed
Jan T. Bialasiewicz
iii


CONTENTS
CHAPTER
1. INTRODUCTION .................................................. 1
2. SYSTEM DESCRIPTION ............................................ 4
2.1 DC Motor ..................................................... 6
2.2 Flexible Robotic Arm Model ................................... 8
2.3 Tip Position Sensor .......................................... 13
3. SYSTEM IDENTIFICATION ......................................... 14
3.1 Identification Principle ..................................... 14
3.2 Observer/Kalman Filter Identification ........................ 15
3.2.1 System and Observer Markov Parameters ...................... 16
3.2.2 Computation of System Markov Parameters .................... 23
3.2.3 Computation of Observer Gain Markov Parameters ............. 28
3.2.4 Computational Steps of OKID ................................ 29
3.3 Recursive Parameter Identification ........................... 30
3.3.1 Recursive Least Squares Algorithm .......................... 31
3.4 PRBS Signal for Process Identification ....................... 36
4. CONTROL DESIGN FOR THE FLEXIBLE ROBOTIC
ARM MODEL ........................................................ 40
4.1 Pole Placement Design in State Space ......................... 41


4.2 Transfer Function-Based Design of Linear Regulator ........... 60
5. ADAPTIVE CONTROLLER ........................................... 71
5.1 Parameter Estimation ......................................... 72
5.1.1 Recursive Least Squares Algorithm .......................... 73
5.1.2 Gradient Algorithm ......................................... 76
5.2 Adaptive Algorithms for Closed Loop Pole Placement ........... 79
5.3 Simulations .................................................. 83
6. IDENTIFICATION OF THE FLEXIBLE ROBOTIC ARM MODEL .............. 92
6.1 Observer/Kalman Filter Identification ........................ 92
6.2 Recursive Least Squares Algorithm ............................ 106
7. EXPERIMENTAL RESULTS .......................................... 110
7.1 Choice of Sampling Frequency and Order of
the System Model ................................................. 110
7.2 Non-Adaptive Controller ...................................... 112
7.3 Adaptive Pole Placement ...................................... 117
7.3.1 Deterministic Case ......................................... 117
7.3.2 Stochastic Case ............................................ 122
8. CONCLUSION .................................................... 128
APPENDIX ......................................................... 130
REFERENCES ....................................................... 185
v


FIGURES
Figure
1.1 Basic structure of an adaptive controller 2
2.1 Experimental setup of the flexible robotic arm model ............ 5
2.2 The experimental flexible robotic arm model ..................... 5
2.3 Flexible robotic arm model ...................................... 11
3.1 Structure of recursive least squares estimation. ... 32
3.2 PRBS signal...........................................................39
4.1 Servo system with observed state feedback.................. . 42
4.2 System response and command signal for the servo system
with observed state feedback and integral control. ... .53
4.3 Observer output and command signal for servo system with
observed state feedback and integral control .... . . 54
4.4 Control signal for servo system with observed state feedback
and integral control..................................................55
4.5 System output and command signal for the servo system
with observer state feedback and integral control designed
in Example 4.2...................................................... .57
4.6 Observer output and command signal for servo system with
Observer state feedback and integral control designed in
Example 4.2...........................................................58
vi


4.7 Control signal for servo system with observer state feedback
and integral control designed in Example 4.2.................. .59
4.8 Block diagram of the closed-loop system ... . . 61
4.9 System output and reference signal when no process zeros
are cancelled 69
4.10 Control signal when no process zeros are cancelled. . .70
5.1 Schematic diagram of an adaptive control system......................72
5.2 System output and reference signal when a recursive least
squares algorithm is used............................................85
5.3 Control signal when a recursive least squares algorithm is used 86
5.4 System output when a recursive least squares algorithm is used
in the presence of unmodeled dynamics......................... .88
5.5 Control signal when a recursive least squares algorithm is used
in the presence of unmodeled dynamics................................89
5.6 System output and reference signal when a gradient algorithm is
used in the presence of unmodeled dynamics..........................90
5.7 Control Signal when a gradient algorithm is used in the presence
of unmodeled dynamics................................................91
6.1 Computational steps of OKID ... .... .... 93
6.2 Input excitation............ .... .... 94
6.3 Output measurement of the flexible robotic arm model.................95
6.4 Observer pulse response sequence. .... ... 96
6.5 System Markov pulse response. ... 97
vii


6.6 Observer gain pulse response . . ... .98
6.7 Hankel matrix singular values.................... . .100
6.8 Estimated output of the system................. . . .102
6.9 Predicted output of the identified model and the real output . . .103
6.10 Normalized prediction error ... 104
6.11 Frequency response of the identified model............................105
6.12 Parameter estimates of the flexible arm when a recursive least
squares algorithm is used.............................................107
6.13 Output measurement when a recursive least squares algorithm
is used...............................................................108
6.14 Predicted output when a recursive least squares algorithm
is used...............................................................109
7.1 System output when a non-adaptive controller is used . . .113
7.2 Control signal when a non-adaptive controller is used........114
7.3 System output when a non-adaptive controller is used
with payload added at the tip of the flexible arm....................115
7.4 Control signal when a non-adaptive controller is used
with payload added at the tip of the flexible arm.....................116
7.5 Tip position response when a gradient estimator is used in the
adaptive control system...............................................119
7.6 Control signal when a gradient estimator is used in the
adaptive control system...............................................120
viii


7.7 Tip position response when a gradient estimator is used
in the adaptive control system with a payload added at
the tip of the arm................................... . .121
7.8 Control signal when a gradient estimator is used in the
adaptive control system with a payload added at the tip
of the arm........................ ... .... .122
7.9 System output when a stochastic gradient estimator is used in
the adaptive control system ......................................... 124
7.10 Control signal when a stochastic gradient estimator is used in
the adaptive control system ......................................... 125
7.11 Tip position response when a stochastic gradient estimator
is used in the adaptive control system with payload added
at the tip of the arm ............................................... 126
7.12 Control signal when a stochastic gradient estimator is used
in the adaptive control system with a payload added at the
tip of the arm ...................................................... 127
IX


TABLES
Table
2.1 DC motor/tachometer Specification.....................................9
2.2 Properties of the Flexible Arm.......................................10
3.1 Generation of Maximum Length PRBS....................................38
6.1 Modal Parameters....................................................101
x


ACKNOWLEDGEMENTS
I wish to thank Professor Bialasiewicz for his help in motivating my work
on this thesis, in providing me with meaningful direction in this research effort
and funding my research work.
I would also like to thank Professors Radenkovic and Fardi for serving on
my thesis committee.
xi


CHAPTER 1
INTRODUCTION
In control systems design, complete knowledge of the system model is
essential. A classical control design method can be used when the system
model is completely specified [4]. However, in practice system models are
unknown or only partially known. Therefore, identification of the system
model parameters is necessary for the optimal design of a control system.
System identification is the determination of the system model from the
input/output measurements [10]. Depending on the stability of the system,
either an open-loop or a closed-loop identification can be implemented. For
unstable systems it is desirable to use a closed-loop identification.
For time-varying or slowly time-varying systems, identification of the
system model alone is not enough. For optimum system performance, it is
necessary to design a self-learning controller which can account for
variations of the system parameters. For unknown system models, a
control law can be combined with on-line parameter estimation, so that the
controller is designed using the estimated parameters instead of the true parameters [1,4].
1


This type of controller is known as a certainty equivalence adaptive controller
[4]. The basic structure of an adaptive controller is shown in Fig. 1.1. There
exists a wide range of adaptive controllers. The choice of an adaptive
controller depends on the system performance and specifications. Some adaptive
controllers may be more suitable than others for a particular system.
Fig. 1.1 Basic structure of an adaptive controller.
An adaptive control loop may exhibit bursting phenomena due to
unmodelled dynamics and disturbances present in a system. One such system
is a flexible manipulator, which is characterized by an infinite set of modes.
2


However, to design a controller for a flexible manipulator a model with only
a finite number of modes can be used. Thus, the unmodelled dynamics may
give rise to possible bursting phenomena and the adaptive control system may
become unstable [14]. In this case, a proper choice of an estimator will
stabilize the adaptive control system.
3


CHAPTER 2
SYSTEM DESCRIPTION
The system to be identified and controlled consists of a power
amplifier driving a DC motor and a physical model of a flexible robotic
arm. The flexible robotic arm model is mounted directly on the shaft of a
DC motor. The tip position of the arm to be controlled is sensed with an
optical detector.
The flexible robotic arm model is mounted on top of a D.C. motor which
is connected to a computer via an input/output circuit board. The tip position
of the arm is measured with a position indicator system which includes the
LED, the detector and the lens. The LED is placed at the tip of the robot
arm. A lateral effect detector is centered above the center of the arm tip. The
detector with a lens covers up to 4 degrees in the horizontal plane. The arm
bends horizontally freely but is stiff in vertical bending. The mechanical
configuration of the system is shown in Fig 2.1 and Fig. 2.2 is the picture
of the system.
4


Fig. 2.1 Experimental setup of the flexible
robotic arm model
Fig. 2.2 The experimental flexible robotic arm model.
5


2.1 DC Motor
The robotic arm model is mounted on the top of the motor. This is an
armature-controlled D.C. servomotor with a permanent magnet field and a
tachometer connected to the load via a rigid shaft coupler which has a
moment of inertia of 0.0011 oz-in-s2/rad about the rotational axis. The
complete model of the motor is described in [9]. The load moment of inertia
is 0.62 oz-in-s²/rad about the rotational axis. The robotic arm model is
mounted on the top of the load. The power amplifier provides current through
the motor armature which is proportional to the input voltage generated by the
amplifier. Peak output current is limited to 6.0 A and the output current step
rise time is limited to a minimum of 0.7 msec. The combination of the static
Coulomb friction and the viscous damping forms the friction torque in the
system, which is given by the following equation

F = β ω + K_c f_c(ω) + K_s f_s(ω, T_m)    (2.1.1)

where β is the coefficient of viscous friction, f_c(ω) is the Coulomb friction,
f_s(ω, T_m) is the static friction, K_s is the static friction coefficient with a value of
3.0 oz-in, and K_c is the Coulomb friction coefficient with a value of 3.4 oz-in.
The coefficients Kc and Ks are obtained from the DC motor system
measurements. The differential equations describing the dynamics of the DC
motor and load have the following form:

(2.1.2)

and

V_a(t) = R_a i_a(t) + L_a (d i_a(t)/dt) + K_e θ̇(t)    (2.1.3)
The time constant of the RC filter at the output of the tachometer circuit
is 100 times greater than the time constant 1/λ₂ due to the tachometer armature
inductance. The tachometer gain is given by the transfer function

V_t(s)/ω(s) = K_v λ₂ / (s + λ₂)    (2.1.4)

where K_v = 10 K_g and K_g is the generator constant of the tachometer.
The potentiometer gain is given by the transfer function

V_p(s)/θ(s) = K_p λ₁ / (s + λ₁)    (2.1.5)

where K_p is the DC gain of the potentiometer circuit and 1/λ₁ is the time
constant of an RC filter which filters the output of the potentiometer.
The motor characteristics are given in Table 2.1 [9]. The power amplifier
and motor armature circuit provide the necessary power gain required for
driving the DC motor through direct control of the motor armature current. The
7


dynamic equations of the power amplifier are coupled to the non-linear motor
dynamics given by Eq. (2.1.2) and Eq. (2.1.3). Assuming the term K_e θ̇(t) to
be a constant voltage source, the closed-loop transfer function of the power
amplifier is given by

-2.06×10⁸ (s + 667) / [(s + 63448)(s + 3379)(s + 646)]    (2.1.6)

Since the power amplifier allows direct torque control, the effects due to non-
linear armature circuit resistance are not considered in this system.
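To illustrate Eq. (2.1.6), the following is a minimal Python sketch (not part of the thesis) that enters the quoted gain, zero and poles into SciPy and evaluates the DC gain together with a Bode data set; the variable names and overall structure are illustrative assumptions.

import numpy as np
from scipy import signal

# Eq. (2.1.6): -2.06e8 (s + 667) / [(s + 63448)(s + 3379)(s + 646)]
amp = signal.ZerosPolesGain([-667.0], [-63448.0, -3379.0, -646.0], -2.06e8)

dc_gain = -2.06e8 * 667.0 / (63448.0 * 3379.0 * 646.0)   # value of the TF at s = 0
w = np.logspace(1, 6, 500)                               # frequency grid, rad/s
w, mag_db, phase_deg = signal.bode(amp, w)               # Bode data for inspection
print(f"DC gain approx {dc_gain:.3f}")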
2.2 Flexible Robotic Arm Model
A flexible robotic arm model is a distributed parameter system. Therefore,
its analytical model is characterized by an infinite set of natural modes [3]. The
flexible robotic arm model is a uniform, slender beam connected to the
armature of a D.C. motor via a rigid hub. The physical model of a flexible
robotic arm is implemented as a beam which can bend freely in the horizontal
plane but is stiff in vertical bending. The geometric and mass properties of
the system are summarized in Table 2.2 and a sketch of the flexible robotic
arm model is given in Fig. 2.3.
8


Table 2.1 DC motor/tachometer specifications.
Peak torque 250 oz-in
Continuous torque 30 oz-in
Friction torque 4 oz-in
Damping factor 0.4 oz-in/Krpm
Rotor inertia 0.0058 oz-in-s2
Weight 52 oz
Winding temperature 155 C
Thermal resistance 5 C/Watt
Torque sensitivity 10 oz-in/A
Back E.M.F 7.4 V/Kprm
Terminal resistance 1.6 Ohms
Terminal inductance 2.2 mH
Peak Current 25 A
Voltage gradient 7 V/Kprm
Armature resistance 36 Ohms
9


Define the parameters of the arm as follows:
l = length of the beam
x = distance along the length of the beam
r = hub radius
M_p = payload mass
I_p = payload inertia
m = mass per unit length
EI = flexural rigidity of the beam
w(x,t) = flexural deflection of the beam
Table 2.2 Properties of the flexible arm.
Beam length, cm 72
Beam thickness, mm 1.5
Beam width, cm 7.6
10


Fig. 2.3 Flexible robotic arm model
As shown in [3], the tip position y is given by

y(x,t) = w(x,t) + (x + r) θ_h(t)    (2.2.1)

The kinetic energy of the system is given by

T = (1/2) I_h θ̇_h² + (1/2) ∫₀ˡ m ẏ(x,t)² dx + (1/2) M_p ẏ(l,t)² + (1/2) I_p ẏ′(l,t)²    (2.2.2)
11


where an overdot denotes the derivative with respect to t and a prime denotes the derivative
with respect to x. The first term is due to hub rotation whereas the integral
term arises from the motion of the flexible link. The potential energy of the
system is given by

V_e = (1/2) ∫₀ˡ EI w″(x,t)² dx    (2.2.3)

and the external work done on the system by the motor torque, T_m, is given
by

W = T_m θ(t)    (2.2.4)

where θ(t) is the total angle of rotation of the arm.
It is impractical to design a controller based on the full-order flexible arm model
because the flexible arm is characterized by an infinite set of natural modes.
Therefore, it is necessary to use a reduced-order lumped-parameter model to
design a controller. One way to model a flexible arm is to use the assumed-
modes method together with Lagrange's equation [3]. However, in this thesis,
to design a controller for the flexible arm we use identified finite-order models
12


obtained using input/output data.
2.3 Tip Position Sensor
An optical sensor, a UDT SC10/D lateral effect detector, along with a camera
lens was used to measure the tip position of the flexible arm. The distance
between the optical sensor and the lens is about 1.0 inch [5]. The detector
has a 1 cm × 1 cm active area and will detect light from 350-1100 nm, i.e.,
the maximum resolution is about 1 μm. Its peak responsivity is 0.5 A/W at
900 nm. The detector proportions the photocurrent among its 4 electrodes
irrespective of image detail.
The detector is incorporated in a computer-based control scheme using a
UDT 431 position indicator which has a resolution of 1 part in 4000 [5]. The
431 position indicator has a bandwidth of 1 kHz. The 431 can accommodate
a 100000:1 range of light levels. The light source is provided by a high
powered LED. At 100 mA the LED emits 12 mW at 880 nm into a 15°
beam. The LED is connected to a 100 mA LED supply connection on the 431
position indicator. The complete description of the UDT 431 x-y position
indicator along with the UDT SC10/D lateral effect detector is given in [5].
13


CHAPTER 3
IDENTIFICATION SCHEMES
3.1 Identification Principles
The dynamical behavior of systems can be described with various model
structures, each one dedicated to a particular application. In some cases, the
model of a system can be obtained from the laws of chemistry, physics,
etc. [4,10]. These types of models give a fairly complete system description.
However, these models, in general, are complex and can rarely be used for
the design of control systems [10]. In other cases, it might not be feasible
to determine the model of a system precisely from the laws of physics,
chemistry, etc. In these instances it may be possible to determine the model
of a system by observing the nature of the system's response under
appropriate experimental conditions. This procedure is known as system
identification meaning the experimental determination of the model of a
dynamic process from input/output measurements [10].
Two types of identification methodologies will be used to identify the
parameters of a flexible robotic arm model: observer/Kalman filter
14


identification algorithm and recursive least squares algorithm. The
observer/Kalman filter identification is based on identification of observer
Markov parameters from which the state space model is obtained. The
recursive least squares algorithm is based on minimizing a quadratic cost function.
3.2 Observer/Kalman Filter Identification
Many system identification techniques have been developed and applied to
identify a state-space model for modal parameter identification of flexible
structures. The modal parameters include frequencies, damping and mode
shapes. Most techniques are based on sampled pulse response histories,
which are known as system Markov parameters [6]. One approach solves
directly for the system Markov parameters from system input and output data, but
a drawback of this approach is the need to invert an input matrix which
becomes particularly large for lightly damped systems. Another approach uses
Fast Fourier Transforms of the input/output data to compute the frequency
response functions, and then computes the system Markov parameters by using the
Inverse Discrete Fourier Transform [6].
Since the system Markov parameters may exhibit very slow decay, an
asymptotically stable observer can be used to form a stable discrete state-space
15


model for the system to be identified [6,8]. In this approach one can assign
any poles desired for the observer, and specify the decay rate of the observer
Markov parameters as well as the number of parameters needed before they have
decayed to a negligible level.
3.2.1 System and Observer Markov Parameters
Consider the discrete state-space model given by the following equations:
x(k+1) = A x(k) + B u(k)    (3.2.1)
y(k) = C x(k) + D u(k)    (3.2.2)

where
x(k) is an n-vector,
y(k) is an m-vector,
u(k) is an r-vector,
A is an n×n matrix,
B is an n×r matrix,
C is an m×n matrix,
D is an m×r matrix.

Assuming that x(0) = 0, the solution of Eq. (3.2.1) for k = 0, 1, ..., l-1 may be
16


obtained by recursion as follows:
jc(0)=0
x(l)=Ba(0)
x(2)=ABu(0)+Bu(l)
i-i
*(/-! )='EAi-1Bu(l-i-l)
1=1
and, one obtains the solution of Eq. (3.2.2) as follows:
><0)=Z)u(0)
><1)=CR*(0)+Z)*<1)
y(2)=CABt(Q)+CBt(l)+Du(2)
i-i
y(I-l)='E/CAt-1Bi4!-l -i)+Z>a(M)
(i
Regrouping Eq. (3.2.3) and Eq. (3.2.4) into a matrix form yields
y=[D CB CAB ... CAl~2B]
(0) a(l) ( 2) tt(3) - (/-!)
m (i) a(2) .. a(/-2)
(0) *i(l) a(/-3)
m .. a(/-4)
a(0)
m*rl rl*l
(3.2.3)
(3.2.4)
(3.2.5)
17


where

y = [y(0)  y(1)  ...  y(l-1)]

Equation (3.2.5) can be written in the compact form:

y = Y U    (3.2.6)

The matrix Y contains the Markov parameters to be determined. The matrix U is
an upper triangular input matrix. The matrix U is square for a single input
(r = 1). This is the only case in which Y has a unique solution, Y = y U⁻¹. Even
for r = 1, the matrix U may become ill-conditioned and the solution may not
be attainable [2]. For r ≥ 2, the solution for Y is not unique because there
are m×rl unknowns in the Markov parameter matrix but only m×l
equations.
Assume that the system is asymptotically stable. Then, for sufficiently large
p,

A^k ≈ 0   for k ≥ p

Then, Eq. (3.2.6) can be approximated by

y ≈ Y U    (3.2.7)

where

y = [y(0)  y(1)  y(2)  ...  y(p)  ...  y(l-1)]        (m×l)

Y = [D  CB  CAB  ...  CA^{p-1}B]                      (m×r(p+1))

U = [ u(0)  u(1)  ...  u(p)    ...  u(l-1)   ]
    [       u(0)  ...  u(p-1)  ...  u(l-2)   ]
    [              ...                        ]
    [             u(0)         ...  u(l-p-1) ]        (r(p+1)×l)

By choosing l > r(p+1) we will have more equations (m×l) than unknowns
(m×r(p+1)) and we can approximately calculate the first p Markov parameters using
the pseudo-inverse U† of the matrix U, i.e.,

Y = y U†    (3.2.8)

However, for lightly damped systems the size of the matrix U becomes too
large to solve for the pseudo-inverse U† numerically [2]. The solution to this
problem is to artificially increase the damping of the system, as proposed in
[8]. By artificially increasing the damping of the system and implementing the
observer model, it is possible to obtain the solution for the system Markov
19


parameters. Adding and subtracting G y(k) on the right-hand side of Eq. (3.2.1) and
rearranging yields

x(k+1) = (A + GC) x(k) + (B + GD) u(k) - G y(k)
       = Ā x(k) + B̄ v(k)    (3.2.9)

where

Ā = A + GC,   B̄ = [B + GD   -G],   v(k) = [ u(k) ]
                                          [ y(k) ]    (3.2.10)

and G is an n×m matrix chosen to make A + GC as stable as desired [2]. The
output of the system is given by Eq. (3.2.2). The Markov parameters of Eq. (3.2.9)
are referred to as observer Markov parameters. The input-output description for
the observer model is then given by

y = Ȳ V    (3.2.11)

where

y = [y(0)  y(1)  y(2)  ...  y(p)  ...  y(l-1)]        (m×l)
20


Ȳ = [D  C B̄  C Ā B̄  ...  C Ā^{p-1} B̄  ...  C Ā^{l-2} B̄]        (m×[(m+r)(l-1)+r])

V = [ u(0)  u(1)  u(2)  ...  u(p)    ...  u(l-1)   ]
    [       v(0)  v(1)  ...  v(p-1)  ...  v(l-2)   ]
    [             v(0)  ...  v(p-2)  ...  v(l-3)   ]
    [                    ...                        ]
    [                   v(0)         ...  v(l-p-1) ]
    [                                      v(0)    ]        ([(m+r)(l-1)+r]×l)

When using real data the eigenvalues of Ā are placed at the origin [8,12]
(i.e., the deadbeat observer is considered) so that C Ā^k B̄ ≈ 0 for k ≥ p. The
observer Markov parameters can be obtained from real data using Eq. (3.2.11)
with the truncated data, i.e., we have:

y = Ȳ V    (3.2.12)

where

Ȳ = [D  C B̄  C Ā B̄  ...  C Ā^{p-1} B̄]        (m×[(m+r)p+r])
21


V = [ u(0)  u(1)  u(2)  ...  u(p)    ...  u(l-1)   ]
    [       v(0)  v(1)  ...  v(p-1)  ...  v(l-2)   ]
    [             v(0)  ...  v(p-2)  ...  v(l-3)   ]
    [                    ...                        ]
    [                   v(0)         ...  v(l-p-1) ]        ([(m+r)p+r]×l)
For the systems described by Eq. (3.2.1) and Eq. (3.2.2), one can approximate
the first p observer Markov parameters using the pseudo-inverse of the matrix V, i.e.,

Ȳ = y V†

where V† is the pseudo-inverse of the matrix V. The identified observer
Markov parameters may not necessarily appear asymptotically decaying for the
first p-1 steps, although C Ā^k B̄ = 0 for k ≥ p [6,8]. By choosing the maximum p
which maximizes the number (m+r)p + r of independent rows of V it is
possible to solve for Ȳ uniquely. Hence, the maximum p is the upper bound
of the order of the deadbeat observer [8]. Choosing the maximum p also
decreases the approximation error. The solution of Eq. (3.2.12) can be obtained
using either recursive or non-recursive algorithms.
So far it was assumed that x(0) = 0. For non-zero initial conditions, the
following equation is used:

y = C Ā^p x + Ȳ V    (3.2.13)
22


where

Ȳ = [D  C B̄  C Ā B̄  ...  C Ā^{p-1} B̄]
y = [y(p)  y(p+1)  ...  y(l-1)]
x = [x(0)  x(1)  ...  x(l-p-1)]

V = [ u(p)    u(p+1)  ...  u(l-1)   ]
    [ v(p-1)  v(p)    ...  v(l-2)   ]
    [ v(p-2)  v(p-1)  ...  v(l-3)   ]
    [  ...     ...           ...    ]
    [ v(0)    v(1)    ...  v(l-p-1) ]

The effect of the non-zero initial conditions on the measured data becomes
negligible after p steps because the non-zero initial conditions are multiplied
by Ā^p [2]. The negligible effect of the initial conditions after p time steps
makes the response of the system stationary [6].
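As a rough illustration of Eqs. (3.2.10)-(3.2.12), the sketch below (not from the thesis) forms the data matrix V from a single-input single-output record and solves for the observer Markov parameters with a pseudo-inverse; the function name, the SISO assumption and the array layout are assumptions made for the example.

import numpy as np

def observer_markov_params(u, y, p):
    """Least-squares estimate of the observer Markov parameters (Eq. (3.2.12)),
    sketched for a single-input single-output record u, y of length l."""
    l = len(u)
    v = np.vstack([u, y])                      # v(k) = [u(k); y(k)] as in Eq. (3.2.10)
    rows = [u.reshape(1, l)]                   # first block row: u(0) ... u(l-1)
    for i in range(1, p + 1):
        blk = np.zeros((2, l))
        blk[:, i:] = v[:, :l - i]              # shifted copies of v(k)
        rows.append(blk)
    V = np.vstack(rows)                        # ((m+r)p + r) x l data matrix
    Ybar = y.reshape(1, l) @ np.linalg.pinv(V) # Eq. (3.2.12): y = Ybar V
    return Ybar                                # blocks [D, C B_bar, C A_bar B_bar, ...]

# Usage: Ybar = observer_markov_params(u_meas, y_meas, p=20) for a PRBS experiment record.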
3.2.2 Computation of System Markov Parameters
The system matrices A, B, C, and D are computed from the system
Markov parameters and the observer gain matrix G is obtained from the
23


observer gain Markov parameters. To compute the system Markov parameters Y_k
from the observer Markov parameters Ȳ_k, following the approach given in
[6,8,12], partition Ȳ such that

Ȳ = [Ȳ₀  Ȳ₁  ...  Ȳ_p]    (3.2.14)

where Ȳ₀ = D and

Ȳ_k = [C(A+GC)^{k-1}(B+GD)   -C(A+GC)^{k-1}G]
    ≜ [Ȳ_k^(1)   -Ȳ_k^(2)],   k = 1, 2, 3, ...    (3.2.15)

To obtain the Markov parameter Y₁, consider

Y₁ = CB    (3.2.16)

Adding and subtracting the term CGD on the right side of Eq. (3.2.16) and
using Eq. (3.2.15), we get

Y₁ = CB + CGD - CGD
   = C(B + GD) - CGD
   = Ȳ₁^(1) - Ȳ₁^(2) D    (3.2.17)

The Markov parameter CAB is obtained by considering Ȳ₂^(1) in Eq. (3.2.15):
24


Ȳ₂^(1) = CAB + CGCB + C(A+GC)GD
       = Y₂ + Ȳ₁^(2) Y₁ + Ȳ₂^(2) D

Therefore,

Y₂ = CAB = Ȳ₂^(1) - Ȳ₁^(2) Y₁ - Ȳ₂^(2) D    (3.2.18)

To compute the Markov parameter Y₃, consider Ȳ₃^(1) in Eq. (3.2.15):

Ȳ₃^(1) = CA²B + CGCAB + C(A+GC)GCB + C(A+GC)²GD
       = Y₃ + Ȳ₁^(2) Y₂ + Ȳ₂^(2) Y₁ + Ȳ₃^(2) D

from which

Y₃ = CA²B = Ȳ₃^(1) - Ȳ₁^(2) Y₂ - Ȳ₂^(2) Y₁ - Ȳ₃^(2) D    (3.2.19)

Similarly, one can obtain the Markov parameters Y₄, Y₅ and so on. As shown in
[12], the general relationship between the actual system Markov parameters and
the observer Markov parameters is given by


Y₀ = D = Ȳ₀

Y_k = Ȳ_k^(1) - Σ_{i=1}^{k} Ȳ_i^(2) Y_{k-i}        for k = 1, ..., p

Y_k = - Σ_{i=1}^{p} Ȳ_i^(2) Y_{k-i}                for k = p+1, ..., ∞    (3.2.20)

Equation (3.2.20) shows that there are only p independent Markov parameters,
and there are only p+1 observer Markov parameters computed as the least squares
solution from Eq. (3.2.12). By the proper choice of p, Ȳ_k^(1) and Ȳ_k^(2) are
considered to be zero for k > p [8]. As established in [8], the relationship
between system Markov parameters and observer Markov parameters can be
developed further. Define

H = [ Y₂       Y₃       ...  Y_{N+1} ]
    [ Y₃       Y₄       ...  Y_{N+2} ]
    [ ...      ...            ...    ]
    [ Y_{p+1}  Y_{p+2}  ...  Y_{N+p} ]

Ȳ^(2) = -[Ȳ_p^(2)   Ȳ_{p-1}^(2)   ...   Ȳ₁^(2)]

Z = [Y_{p+2}   Y_{p+3}   ...   Y_{N+p+1}]    (3.2.21)

where N is a sufficiently large arbitrary number. Further, the Hankel matrix
H can be expressed by

H = [ C        ]
    [ CA       ]  A  [B  AB  A²B  ...  A^{N-1}B]
    [ CA²      ]
    [ ...      ]
    [ CA^{p-1} ]

  = R A Q    (3.2.22)

where R is the observability matrix, Q is the controllability matrix, and A
is the system state matrix. The size of the matrix H is mp × Nr. From
Eq. (3.2.20) and Eq. (3.2.21) one obtains

Z = Ȳ^(2) H = Ȳ^(2) R A Q    (3.2.23)

The identified state matrix A represents the controllable and observable part
of the system. The maximum rank of H is mp provided that Nr > mp. The
system matrix A (of order n) should exist if p is chosen such that mp ≥ n and
Ȳ^(2) is obtained uniquely. Therefore, p must be chosen such that mp ≥ n. The
product mp is the upper bound on the order of the identified system model.
Therefore, it is not necessary to have more system Markov parameters than
the number necessary to create a full-rank Hankel matrix [6,8].
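A compact sketch of the realization step is given below. It follows the standard ERA convention in which the first Hankel block is Y₁ = CB (the text's H above starts at Y₂); the function name, the SISO assumption and the SVD-truncation details are illustrative and not taken from the thesis.

import numpy as np

def era(markov, n, p, N):
    """Eigensystem Realization Algorithm sketch (SISO): build Hankel matrices from
    the system Markov parameters Y1 = CB, Y2 = CAB, ... and realize (A, B, C) of
    order n.  markov[k] is assumed to hold Y_{k+1}; len(markov) >= p + N."""
    H0 = np.array([[markov[i + j] for j in range(N)] for i in range(p)])      # H(0)
    H1 = np.array([[markov[i + j + 1] for j in range(N)] for i in range(p)])  # shifted H(1)
    U, s, Vt = np.linalg.svd(H0, full_matrices=False)
    Un, Sn, Vn = U[:, :n], np.diag(s[:n]), Vt[:n, :]          # rank-n truncation
    Sr = np.sqrt(Sn)
    A = np.linalg.inv(Sr) @ Un.T @ H1 @ Vn.T @ np.linalg.inv(Sr)
    B = (Sr @ Vn)[:, :1]                                      # first column (r = 1)
    C = (Un @ Sr)[:1, :]                                      # first row (m = 1)
    return A, B, C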
27


3.2.3 Computation of Observer Gain Markov Parameters
To compute the observer gain G, following the approach in [6,8], first
recover the parameters

Y_k^o = C A^{k-1} G   for k = 1, 2, 3, ...    (3.2.24)

From Eq. (3.2.24), the first parameter is

Y₁^o = CG = Ȳ₁^(2)    (3.2.25)

The second parameter is given by

Y₂^o = CAG = Ȳ₂^(2) - Ȳ₁^(2) Y₁^o    (3.2.26)

The third parameter is given by

Y₃^o = CA²G = Ȳ₃^(2) - Ȳ₁^(2) Y₂^o - Ȳ₂^(2) Y₁^o    (3.2.27)

The next k parameters are computed similarly. As established in [8], the
general relationship is given by

Y₁^o = CG = Ȳ₁^(2)

Y_k^o = Ȳ_k^(2) - Σ_{i=1}^{k-1} Ȳ_i^(2) Y_{k-i}^o        for k = 2, 3, ..., p

Y_k^o = - Σ_{i=1}^{p} Ȳ_i^(2) Y_{k-i}^o                  for k = p+1, ...    (3.2.28)

The observer gain G is obtained from

G = (RᵀR)⁻¹ Rᵀ Y^o    (3.2.29)

where

R = [ C    ]        Y^o = [ CG    ]
    [ CA   ]              [ CAG   ]
    [ CA²  ]              [ CA²G  ]
    [ ...  ]              [ ...   ]
    [ CA^k ]              [ CA^kG ]    (3.2.30)
The integer k has to be sufficiently large so that the observability matrix R
has the rank of n.
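The least-squares solution of Eqs. (3.2.29)-(3.2.30) can be sketched as follows; this is an illustrative fragment rather than the thesis code, and it assumes the observer-gain Markov parameters have already been recovered via Eq. (3.2.28).

import numpy as np

def observer_gain(A, C, Yo):
    """Recover G from the observer-gain Markov parameters: stack R = [C; CA; ...]
    and Yo = [CG; CAG; ...], then solve G = (R^T R)^{-1} R^T Yo in the least-squares
    sense (Eq. (3.2.29)).  C is a 1 x n row for the SISO arm model."""
    k = len(Yo)                       # number of recovered parameters C A^{i} G
    blocks, M = [], C
    for _ in range(k):
        blocks.append(M)
        M = M @ A
    R = np.vstack(blocks)             # observability-style matrix, rank must be n
    Y = np.vstack(Yo)                 # stacked observer gain Markov parameters
    G, *_ = np.linalg.lstsq(R, Y, rcond=None)
    return G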
3.2.4 Computational Steps of OKID
The computational steps of the observer/Kalman filter identification (OKID)
algorithm, given a set of input and output data [6], are listed below:
29


1) Choose a value of p which determines the number of observer
Markov parameters to be identified from the given set of input and
output data. Compute the least-squares solution of the observer Markov
parameter matrix Y.
2) Determine system and observer gain Markov parameters from the
identified observer Markov parameters.
3) Identify a state-space model of the system and the observer gain from
the recovered sequence of the system and observer gain Markov
parameters by using ERA (Eigensystem Realization Algorithm) or
ERA/DC (Eigensystem Realization Algorithm with Data Correlation).
4) Find the eigensolution of the realized state matrix. Transform the
realized model to modal coordinates for modal parameter identification
which includes frequencies, dampings and mode shapes at sensor
locations.
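Step 4 can be illustrated with the short sketch below, which converts the eigenvalues of the realized discrete-time state matrix to natural frequencies and damping ratios; the sampling-period argument Ts and the function name are assumptions made for the example.

import numpy as np

def modal_parameters(A, Ts):
    """Map the eigenvalues of the realized discrete state matrix A (sample period Ts)
    to natural frequencies (Hz) and damping ratios."""
    z = np.linalg.eigvals(A)
    s = np.log(z) / Ts                       # discrete eigenvalues mapped to the s-plane
    wn = np.abs(s)                           # natural frequency, rad/s
    zeta = -np.real(s) / np.maximum(wn, 1e-12)
    return wn / (2 * np.pi), zeta            # frequencies in Hz, damping ratios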
3.3 Recursive Parametric Identification
An estimation algorithm with a recursive formulation can be implemented on a
digital computer to estimate the parameters of the model. There are two
classes of parameter estimation algorithms: on-line and off-line. In the off-line
30


estimation algorithm, all the data are available prior to analysis and the data
may be treated with no strict time limitations. In the on-line case, parameters are
updated recursively during each sampling interval. Hence, it is necessary to use
a relatively simple estimation algorithm in many practical applications.
3.3.1 Recursive Least Squares Algorithm
In this section, the identification of sampled discrete-time parametric
dynamic model is discussed. A schematic diagram of a recursive parameter
estimation is shown in Fig. 3.1. Consider a discrete-time system given by
A(q⁻¹) y(t+1) = q⁻ᵈ B(q⁻¹) u(t+1)    (3.3.1)

where A(q⁻¹) and B(q⁻¹) are polynomials in the unit-delay operator q⁻¹ and are
defined as follows:

A(q⁻¹) = 1 + a₁q⁻¹ + ... + aₙq⁻ⁿ    (3.3.2)

B(q⁻¹) = b₀ + b₁q⁻¹ + ... + bₘq⁻ᵐ    (3.3.3)

and d represents the pure delay of the system.
31


Fig. 3.1 Structure of recursive least squares estimation.
Equation (3.3.1) can be written as

y(t+1) = θᵀ φ(t)    (3.3.4)

where

θᵀ = [a₁  ...  aₙ  b₀  ...  bₘ]    (3.3.5)

is the parameter vector, and

φ(t)ᵀ = [-y(t)  ...  -y(t+1-n)   u(t+1-d)  u(t-d)  ...  u(t+1-m-d)]    (3.3.6)

is the measurement vector.
Then, the prediction model is given by

ŷ(t+1) = θ̂(t)ᵀ φ(t)    (3.3.7)

where

θ̂(t)ᵀ = [â₁  ...  âₙ  b̂₀  ...  b̂ₘ]    (3.3.8)

is the estimated parameter vector. A recursive parameter adaptation algorithm
with memory is desired. The least squares algorithm provides a satisfactory
convergence speed and a high adaptation gain at the beginning, when the
optimum is far away [4,10]. The recursive least squares algorithm minimizes
the following quadratic cost function:

J(t) = Σ_{i=1}^{t} [y(i) - θ̂(t)ᵀ φ(i-1)]²    (3.3.9)
33


As shown in [4], the recursive least squares parameter adaptation algorithm is
given by

θ̂(t+1) = θ̂(t) + P(t+1) φ(t) [y(t+1) - θ̂(t)ᵀ φ(t)]    (3.3.10)

P(t+1) = P(t) - P(t) φ(t) φ(t)ᵀ P(t) / [1 + φ(t)ᵀ P(t) φ(t)]    (3.3.11)

The a posteriori prediction error, ε(t+1), is given by

ε(t+1) = y(t+1) - θ̂(t+1)ᵀ φ(t)    (3.3.12)

Equation (3.3.10) is based on the a posteriori prediction error, i.e., the parameters are
estimated after the action of the parameter adaptation algorithm. The recursive
least squares algorithm gives less and less weight to the new measurements
since it is an algorithm with decreasing adaptation gain. For estimation of
time-varying system parameters and other system parameters, a decreasing
adaptation gain may not be suitable [10]. There are a number of variants of
the recursive least squares algorithm with a different choice of adaptation gain.
As shown in [10], the least squares algorithm can be generalized by
introducing two weighting sequences λ₁(t) and λ₂(t) into Eq. (3.3.11):
34


P(t+1) = (1/λ₁(t)) [ P(t) - P(t) φ(t) φ(t)ᵀ P(t) / (λ₁(t)/λ₂(t) + φ(t)ᵀ P(t) φ(t)) ]    (3.3.13)

A number of choices for λ₁(t) and λ₂(t) are possible depending on the type
of system. For example, for the identification of a slowly time-varying system,
λ₁(t) and λ₂(t) are chosen as

λ₁(t) = λ₁,  0 < λ₁ < 1;   λ₂(t) = 1

A value of λ₁ between 0.95 and 0.99 is a typical choice. The effect of λ₁
is to give less and less weight to the old data. The cost function to be
minimized in this case is given by:

J(t) = Σ_{i=1}^{t} λ₁^{t-i} [y(i) - θ̂(t)ᵀ φ(i-1)]²    (3.3.14)

If it is necessary to avoid a too rapid decrease of the adaptation gain, then
one would choose λ₁(t) and λ₂(t) as:

λ₁(t) = λ₀ λ₁(t-1) + 1 - λ₀,  0 < λ₀ < 1;   λ₂(t) = 1

The criterion to be minimized in this case is:

J(t) = Σ_{i=1}^{t} [ Π_{j=i+1}^{t} λ₁(j) ] [y(i) - θ̂(t)ᵀ φ(i-1)]²    (3.3.15)

This type of profile is very suitable for the identification of stationary systems.
If λ₁(t) and λ₂(t) are chosen as λ₁(t) = λ₂(t) = 1, Eq. (3.3.11) is obtained. The
criterion to be minimized in this case is given by Eq. (3.3.9). This type of
choice of λ₁(t) and λ₂(t) is suited for the identification of stationary systems.
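A minimal Python sketch of the update of Eqs. (3.3.10)-(3.3.11), with an optional constant forgetting factor as in Eq. (3.3.13), is given below; the class name, initial covariance value and interface are illustrative rather than the implementation used in the thesis.

import numpy as np

class RecursiveLeastSquares:
    """RLS parameter adaptation with a constant forgetting factor lam
    (lam = 1 reduces to Eq. (3.3.11))."""
    def __init__(self, n_params, p0=1e3, lam=1.0):
        self.theta = np.zeros(n_params)          # parameter estimates
        self.P = np.eye(n_params) * p0           # adaptation gain (covariance) matrix
        self.lam = lam
    def update(self, phi, y):
        err = y - phi @ self.theta               # a priori prediction error
        denom = self.lam + phi @ self.P @ phi
        K = self.P @ phi / denom                 # adaptation gain vector
        self.theta = self.theta + K * err
        self.P = (self.P - np.outer(K, phi @ self.P)) / self.lam
        return self.theta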
3.4 PRBS as Test Signal for Process Identification
In order to correctly identify the parameters of a system, a frequency-rich input
must be used. A frequency-rich input can be obtained by the use of pseudo-
random binary sequences (PRBS). PRBS are generated by a digital waveform
generator which produces a binary signal by switching randomly between two
output levels [2]. The PRBS is characterized by a sequence length within
which the pulse width varies randomly, but over a large time horizon it is
periodic. The period is defined by the sequence length. The PRBS are
generated by means of shift registers. The maximum sequence length L is

L = 2ᴺ - 1
where N is the number of stages of the shift register. Each stage of the shift
36


register stores either 0 or 1. Note that at least one of the N stages should
have a value of 1, although one generally stores a value of 1 on all N stages
initially. At every pulse from the clock, all digits are moved one place to the
right. The digit in the last right-hand stage moves out to the output stage
register while the first stage is filled by a new digit obtained by the modulo-2
addition of the digits present in the stages l_i and l_h before the clock pulse
arrived [2,11]. The stage numbers l_i and l_h are specified in Table 3.1 [10]. The
pattern of every N adjacent digits is then unique. Note that the maximum duration
of a PRBS pulse is equal to N sampling periods. In order to identify the steady state gain of
the system correctly, the following condition must be satisfied:

N Ts > t_R    (3.4.1)

where t_R is the rise time of the system and Ts is the sampling period.
The magnitude of the PRBS may be low, provided that the signal/noise
ratio is high. For a low signal/noise ratio it is necessary to extend the
duration of the test to obtain a satisfactory parameter estimation [10].
37


Table 3.1 Generation of maximum length PRBS

Number of stages N    Sequence length L = 2ᴺ - 1    Stages added (l_i and l_h)
4                     15                            3 and 4
5                     31                            3 and 5
6                     63                            5 and 6
7                     127                           4 and 7
8                     255                           3 and 8
9                     511                           5 and 9
10                    1023                          7 and 10
Example 3.1
Suppose that for the system to be identified f_max = 2 Hz and t_R = 0.1 sec. If N = 5
and Ts = 0.1 sec are chosen, then the inequality of Eq. (3.4.1) is satisfied:

N Ts = 0.5 > t_R = 0.1

The plot of a PRBS signal for N = 5 and Ts = 0.1 is shown in Fig. 3.2.
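The shift-register construction described above can be sketched as follows; the function, its defaults and the two output levels are illustrative, with the feedback stages taken from Table 3.1.

import numpy as np

def prbs(n_stages=5, taps=(3, 5), length=None, ts=0.1, levels=(-1.0, 1.0)):
    """Maximum-length PRBS sketch: an n_stages-bit shift register with feedback from
    the two stages listed in Table 3.1 (modulo-2 addition), clocked every ts seconds.
    The sequence length is L = 2**n_stages - 1."""
    L = 2 ** n_stages - 1
    length = length or L
    reg = [1] * n_stages                        # all stages initialised to 1
    out = []
    for _ in range(length):
        out.append(levels[reg[-1]])             # output taken from the last stage
        new_bit = reg[taps[0] - 1] ^ reg[taps[1] - 1]   # modulo-2 addition
        reg = [new_bit] + reg[:-1]              # shift one place to the right
    t = np.arange(length) * ts
    return t, np.array(out)

# Example 3.1 setting: N = 5, Ts = 0.1 s gives L = 31 and N*Ts = 0.5 > tR = 0.1
t, u = prbs(n_stages=5, taps=(3, 5), ts=0.1)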
38


39


CHAPTER 4
CONTROL DESIGN FOR THE FLEXIBLE ROBOTIC ARM MODEL
The aim of system identification is to design a high-performance control
system. There exists a wide range of control laws. Some controllers
may be more suitable than others for a particular system.
Controllers are designed according to the requirements for a system. In
some cases it may be necessary to combine a parameter estimation scheme
with a control law, so that the control law is updated at each sampling
interval (adaptive controllers). In other cases, only the off-line identified
model of the system may be enough to design a controller (non-adaptive
controller). In this chapter two non-adaptive controllers will be designed.
The first controller is based on the state space of the model. The second
controller is a transfer-function based linear regulator. In both cases the pole
assignment method is used.
40


4.1 Pole Placement Design in State Space
In this section a controller will be designed via pole placement for a
servo system. In designing a controller one has to utilize the feedback of all
the available state variables. However, all state variables are not available for
direct measurement. Hence, it is necessary to estimate the state variables which
are not available for direct measurement. Observation of the unmeasurable state
variables is known as a minimum-order state observation whereas observation
of all state variables is known as a full-order observation [13].
Here, the full-order observer is considered in designing a controller because
a full-order observer results in better system performance when the measurement
of the output variables involves noise. Note that the system matrices used in
designing the controller are obtained from real-time identification of the robotic
arm model and the data collected for the identification of system matrices may
be contaminated with measurement noise. Although the measurement noise level
is very low, the measurement of the output is relatively inaccurate. Hence,
using the full-order observer to design a controller for the robotic arm model
is well justified. In this section, only the simulation of the system is
presented. In Chapter 7, the real-time performance of the system will be
presented.
41


Consider the servo system with observed state feedback shown in Fig 4.1.
The integrator has been added to the system to eliminate the steady-state error.
The output state variable is the only state variable available for direct
measurement. Hence, it is necessary to estimate the other state variables of the
robotic arm model that are not available for direct measurement. However,
for the reasons stated above, all the state variables of the system will be
estimated.
Fig. 4.1 Servo system with observed state feedback.
42


The state equation describing the plant is given by

x(k+1) = A x(k) + B u(k)    (4.1.1)
y(k) = C x(k)    (4.1.2)

and the observer equation is given by

x̂(k+1) = A x̂(k) + B u(k) + G[y(k) - ŷ(k)]    (4.1.3)
ŷ(k) = C x̂(k)    (4.1.4)

where
x(k) = plant state vector (n-vector)
u(k) = control vector (m-vector)
y(k) = plant output vector (m-vector)
x̂(k) = observer state vector
ŷ(k) = observer output vector
A = n×n matrix
B = n×m matrix
C = m×n matrix
G = n×m observer gain matrix

The integrator state equation is

v(k) = v(k-1) + r(k) - y(k)    (4.1.5)
43


where
v(k) = actuating error (m-vector)
r(k) = command input vector (m-vector)

Using Eq. (4.1.1) and Eq. (4.1.2), Eq. (4.1.5) can be written as

v(k+1) = v(k) + r(k+1) - y(k+1)
       = v(k) + r(k+1) - C[A x(k) + B u(k)]    (4.1.6)
       = -CA x(k) + v(k) - CB u(k) + r(k+1)

As shown in [13], the control vector u(k) is given by

u(k) = -K₂ x̂(k) + K₁ v(k)    (4.1.7)

The design parameters are K₁, K₂ and G. The pole placement design and the
observer design are independent of each other because the closed-loop poles
of the observed state feedback control system consist of the poles due to the
pole placement design plus the poles of the observer. According to the
separation principle they can be designed separately and combined together to
form the observed state feedback control system. Hence, the integral constant
K₁ and the state feedback gain K₂ are computed separately from the observer
gain G. In other words, K₁ and K₂ are designed without the use of the
44


observed state feedback. Shifting in time, we can rewrite Eq. (4.1.7) as

u(k+1) = -K₂ x(k+1) + K₁ v(k+1)    (4.1.8)

Substituting Eq. (4.1.1) and Eq. (4.1.5) into Eq. (4.1.8), we obtain

u(k+1) = (K₂ - K₂A - K₁CA) x(k) + (Im - K₂B - K₁CB) u(k) + K₁ r(k+1)    (4.1.9)

Combining Eqs. (4.1.1) and (4.1.9), and introducing an augmented state vector
consisting of x(k) and u(k), we obtain

[ x(k+1) ]   [ A               B            ] [ x(k) ]   [ 0  ]
[ u(k+1) ] = [ K₂-K₂A-K₁CA     Im-K₂B-K₁CB  ] [ u(k) ] + [ K₁ ] r(k+1)    (4.1.10)

where it is assumed that r(k+1) = r for k ≥ 0, and Im denotes the m-dimensional
identity matrix. Thus, for sufficiently large k, Eq. (4.1.10) can be written as

[ x(∞) ]   [ A               B            ] [ x(∞) ]   [ 0  ]
[ u(∞) ] = [ K₂-K₂A-K₁CA     Im-K₂B-K₁CB  ] [ u(∞) ] + [ K₁ ] r    (4.1.11)

Subtracting Eq. (4.1.11) from Eq. (4.1.10), we obtain

ξ(k+1) = [ A  B ] ξ(k) + [ 0  ] w(k)    (4.1.12)
         [ 0  0 ]        [ Im ]
45


where

ξ(k) = [ x(k) - x(∞) ]  = (n+m)-vector    (4.1.13)
       [ u(k) - u(∞) ]

w(k) = [K₂-K₂A-K₁CA   Im-K₂B-K₁CB] ξ(k)

If we define

Â = [ A  B ]  = (n+m)×(n+m) matrix    (4.1.14a)
    [ 0  0 ]

B̂ = [ 0  ]  = (n+m)×m matrix    (4.1.14b)
    [ Im ]

K̂ = -[K₂-K₂A-K₁CA   Im-K₂B-K₁CB]  = m×(n+m) matrix    (4.1.14c)

then Eqs. (4.1.12) and (4.1.13) can be written as

ξ(k+1) = Â ξ(k) + B̂ w(k)    (4.1.15)

w(k) = -K̂ ξ(k)    (4.1.16)

respectively.
respectively.
The controllability matrix for the system defined by Eq. (4.1.15) is given by
46


[B̂  ÂB̂  ...  Â^{n+m-1}B̂]

which has the dimension (n+m) by m(n+m) and a rank of (n+m). The matrix K̂
can be computed using the pole placement method. Equation (4.1.14c) can be
written as

K̂ = [K₂  K₁] [ A-Iₙ  B  ] + [0  -Im]
             [ CA    CB ]

Hence, we obtain

[K₂  K₁] = [K̂ + [0  Im]] [ A-Iₙ  B  ]⁻¹    (4.1.17)
                         [ CA    CB ]

K̂ can be computed from Ackermann's formula:

K̂ = [0  0  ...  1] [B̂  ÂB̂  ...  Â^{n+m-1}B̂]⁻¹ φ(Â)    (4.1.18)

and φ(Â) is given by

φ(Â) = Â^{n+m} + α₁Â^{n+m-1} + ... + α_{n+m-1}Â + α_{n+m}I    (4.1.19)

where the αᵢ's are the coefficients of the desired characteristic equation.
47


The observer feedback gain matrix G can be obtained from Ackermann's
formula:

         [ C        ]⁻¹ [ 0 ]
         [ CA       ]   [ 0 ]
G = φ(A) [ ...      ]   [...]    (4.1.20)
         [ CA^{n-1} ]   [ 1 ]

where

φ(A) = Aⁿ + α₁A^{n-1} + ... + α_{n-1}A + αₙI    (4.1.21)

Again, the αᵢ's are the coefficients of the desired (observer) characteristic equation.
Example 4.1
A numerical example is given to illustrate the computation of Kh state
feedback gain and the observer feedback gain G. The plant considered in
this example is the robotic arm model. The plant model of the robotic arm
is identified using real experimental data (see Chapter 6 for details). The order
of the system is 6 and the system has one input and one output. The design
requirement is that the response of the system to any command input be
deadbeat. In other words, it is required that the observed state approaches the
48


true state as fast as possible. Note that for the robotic arm model only the
output state variable is available for direct measurement. The system matrices
are
l.Ollle+OOO
-1.4429e-001
-3.0748c-002
-5.008U-002
-8.5083e-003
5.2715e-003
6.8724e-002
7.4473*-00l
-6.4023e-001
4.5664e-002
-1.2266e-001
-1.3169c-001
-4.9805e-003
1.9958e-001
1.7683e-001
-9.5602 -1.0670e-001
7.3783e-002
3.5749e-003
1.2432e-002
3.0144e-001
8.8341e-002
-6.3616^-001
-6.1830e-001
6.925 8e-005
3.0047e-003
-3.0228e-003
1.2854e-001
-7.1734C-001
6.6445e-001
-6.7708c-005
-8.5978e-004
-1.9883e-003
-2.645 le-002
-1.3588e-001
-3.7099e-001
£=[-1.2228 -6.2701 -6.8474 -2.4512 -1.9035 -1.3527JT
C=[3.66e-002 -9.59e-003 2.26e-003 -2.69e-004 2.95e-005 -3.63e-006]
From Eqs. (4.1.14a) and (4.1.14b), we obtain
1.011 6.872e-02 -4.981e-03 3.575e-03 6.9258e-05 -6.7708C-05 -1.223
-1.443e-01 7.447e-01 1.996e-01 1.243e-02 3.005e-03 -8.598e-04 -6.271
-3.075e-02 -6.402e-01 1.768e-01 3.014e-01 -3.023e-03 -1.988e-03 -6.847
-5.008c-02 4.566e-02 -9.560e-01 8.834e-02 1.285c-01 -2.645e-02 -2.451
-8.508e-03 -1.227 5.272C-03 -1.317e-01 7.378e-02 -6.183c-01 6.645c-01 -3.710C-01 -1.353
0 0 0 0 0 0 0
49


5=[0 0 0 0 0 0 1]T
The inverse of the controllability matrix is
o
1.626e-02 -7.353e-02
-9.103e-03 -6.233C-02
-2.621e-02 -3.485e-02
-5.618e-03 -1.964e-02
-4.428e-02 6.447e-03
-3.239e-01 1.152e-01
0 0
-6.406e-02 -3.150e-02
2.251e-02 5.754e-02
5.932e-02 -9.942e-03
3.448e-02 -7.235e-02
-8.86U-03 -2.309e-02
-5.473e-02 7.578e-02
0 0 1
-1.664e-02 -8.378e-03 0
4.705e-02 1.271c-02 0
-6.785e-02 -1.564e-03 0
5.536C-02 -2.526C-02 0
5.397e-03 8.925e-02 0
-2.423e-02 -6.745*-02 0
|B AB A*B A3B A4B A5B A6B ]"' =
0
The closed-loop poles are placed at the origin. Hence, the coefficients αᵢ
in Eq. (4.1.19) are equal to zero. Note that the order of Â is 7 since n+m = 7.
From Eq. (4.1.19) we obtain φ(Â) = Â⁷:
9.411e-01 1.936e-01 2.407e-02 1.365e-02 1.152e-03 -5.836c-04 -2.563
-4.015e-01 -2.850e-02 7.642e-03 2.821e-03 -1.346c-04 6.342e-05 3.312e-01
1.783e-01 -2.408r-02 -3.569-02 4.83e-03 2.913C-03 -9.848e-04 -9.253e-02
-2.467c-01 3.048e-02 -1.581e-02 -3.225C-02 -8.704e-03 6.210e-04 1.033
9.022e-02 8.598e-03 4.974e-02 4.149e-02 3.842 2.085e-01 1.665e-02 7.352e-02 1.042e-02 -4.961e-02 1.924e-02 2.862e-01
0 0 0 0 0 0 0
50


The matrix K̂ is computed from Ackermann's formula (Eq. (4.1.18)),

K̂ = [0 0 0 0 0 0 1] [B̂  ÂB̂  Â²B̂  Â³B̂  Â⁴B̂  Â⁵B̂  Â⁶B̂]⁻¹ Â⁷

and is given by

K̂ = [-0.396  -0.0637  -0.0123  -0.00851  0.00121  -0.00125  0.933]

K₁ and K₂ are computed from Eq. (4.1.17) and are given by

K₁ = -9.9162
K₂ = [-1.625  -0.027  0.0302  0.0047  0.000319  0.000799]
Referring to Eqs. (4.1.20) and (4.1.21), we obtain

        [ C   ]⁻¹ [ 0 ]
        [ CA  ]   [ 0 ]
        [ CA² ]   [ 0 ]
G = A⁶  [ CA³ ]   [ 0 ]
        [ CA⁴ ]   [ 0 ]
        [ CA⁵ ]   [ 1 ]
51


Thus, G is given by
G=[25.101 -0.450 10.861 46.774 -38.827 604.165]T
The system requires 13 sampling periods to reach the steady state (i.e., any
non-zero error will be driven to zero in at most 13 sampling periods)
because the order of the plant plus the additional integrator is 7 and the order
of the state observer is 6. However, as shown in Fig. 4.2, the response
required more than 13 sampling periods to settle. This can be explained by
the presence of disturbance on the data that was used for the identification of the
system model. The output of the observed state approaches the true state as
fast as possible, as shown in Fig. 4.3. Fig. 4.4 shows the control input. The
change in the magnitude of the control input is due to a change in the input
command signal, which is a square wave.
In general, the observer poles are chosen so that the observer response
is 4 to 5 times faster than the system response [13]. One way to decrease
the response time of the observer is to choose all the observer poles at the
origin. This is shown in Example 4.2.
52


Fig. 4.2 System response and command signal for the servo
system with observed state feedback and integral control.
53


Fig. 4.3 Observer output and command signal for the servo
system with observed state feedback and integral control.
54


Fig. 4.4 Control signal for the servo system with observed
state feedback and integral control.
Example 4.2
Consider the system shown in Fig 4.1. Now it is desired that the closed-
loop poles be placed at
z=0.8252+j0.1497, z=0.8252-j0.1497
which results in the following characteristic equation:
z² - 1.6504z + 0.7034 = 0
55


The observer poles are placed at the origin. Hence, the observer feedback gain
matrix G is the same as in Example 4.1. Therefore, only K₁ and K₂ need to
be computed in this example. K₁ and K₂ are obtained from Eq. (4.1.17):

K₁ = -0.5255
K₂ = [-0.1805  -0.0158  0.0021  0.0008  0.0061  0.0070]
The responses of the system and the observer are shown in Fig. 4.5 and
Fig. 4.6, respectively. It can be seen that they are almost identical. As shown
in Fig. 4.5, the system response does not exhibit overshoots because the
observer poles are located at the origin and the closed-loop poles generated
by state feedback are at 0.8252 ± j0.1497. The comparison of the results
obtained in this example with the results in Example 4.1 shows that the
settling time in Fig. 4.5 is smaller than the settling time in Fig. 4.2. This is
due to the fact that the response speed of the observer is faster than the
response speed of the system. In Example 4.1 the response speeds of the
observer and the system were roughly the same. However, the rise time in Fig.
4.2 is smaller than the rise time in Fig. 4.5. As shown in Fig. 4.7, the
magnitude of the control signal is smaller than the one in Example 4.1
because only the observer poles are placed at the origin. In Example 4.1 both
56


the observer poles and the system poles are placed at the origin which
resulted in large control magnitude. Overall the design in Example 4.2 gives
better system performance.
Fig. 4.5 System output and command signal for the servo
system with observed state feedback and integral control
designed in Example 4.2.
57


Fig. 4.6 Observer output and command signal for the servo
system with observed state feedback and integral control
designed in Example 4.2.
58


Fig. 4.7 Control signal for the servo system with observer
state feedback and integral control designed in Example 4.2.
59


4.2 Transfer Function-Based Design of Linear Regulator
Consider a process characterized by the following difference equation

A(q⁻¹) y(t) = q⁻ᵈ B(q⁻¹) u(t)    (4.2.1)

where

A(q⁻¹) = 1 + a₁q⁻¹ + a₂q⁻² + ... + aₙq⁻ⁿ
B(q⁻¹) = b₀ + b₁q⁻¹ + b₂q⁻² + ... + bₘq⁻ᵐ

and d is the delay. It is desired to design a controller such that the tip of
the robotic arm model follows a given reference signal. This can be achieved
using a linear regulator which is given by

R(q⁻¹) u(t) = T(q⁻¹) r(t) - S(q⁻¹) y(t)    (4.2.2)

where the coefficients of the polynomials R(q⁻¹), S(q⁻¹) and T(q⁻¹) are control
parameters designed to obtain the desired closed-loop response of the system.
Combining Eq. (4.2.1) and Eq. (4.2.2) we obtain

y(t) = [q⁻ᵈ B T / (A R + q⁻ᵈ B S)] r(t)    (4.2.3)

and the desired closed-loop transfer function is given by

y_m(t) = [q⁻ᵈ B_m(q⁻¹) / A_m(q⁻¹)] r(t)    (4.2.4)
60


A block diagram of the closed-loop system is shown in Fig. 4.8.
Fig. 4.8 Block diagram of the closed-loop system
The design task is to design a controller such that the closed-loop system
gives the desired response to the command signal r(t), i.e.,

q⁻ᵈ T B / (A R + q⁻ᵈ B S) = q⁻ᵈ B_m / A_m    (4.2.5)
61


In our design, we shall follow the procedure outlined in [1,4,10]. The
coefficients of the polynomials R, S and T are control parameters which are
computed from the following relationships obtained by equating the numerators
and the denominators in Eq. (4.2.5):

A R + q⁻ᵈ B S = A_m    (4.2.6)

T = B_m    (4.2.7)

where A_m is a polynomial specifying the desired closed-loop poles.
Defining

p = max(n, m)

Eq. (4.2.6) has a unique solution when A(q⁻¹) and B(q⁻¹) do not have a
common factor. The degree of the polynomial A_m(q⁻¹) should satisfy the
following inequality:

deg A_m(q⁻¹) ≤ 2p + 1

The polynomials R(q⁻¹) and S(q⁻¹) are given by

R(q⁻¹) = 1 + r₁q⁻¹ + r₂q⁻² + ... + r_{nr}q⁻ⁿʳ    (4.2.8)

S(q⁻¹) = s₀ + s₁q⁻¹ + s₂q⁻² + ... + s_{ns}q⁻ⁿˢ    (4.2.9)

and the degrees of R(q⁻¹) and S(q⁻¹) are obtained from:

deg R(q⁻¹) = nr = deg B(q⁻¹) + d - 1 ;   deg S(q⁻¹) = ns = deg B(q⁻¹) - 1    (4.2.10)
62


In order to solve Eq. (4.2.6), we put it in matrix form:

M x = l    (4.2.11)

where

xᵀ = [1, r₁, r₂, ..., r_{nr}, s₀, s₁, s₂, ..., s_{ns}]
lᵀ = [1, a_{m1}, a_{m2}, ..., a_{m,2p}]    (4.2.12)

and the (2p+1)×(2p+1) matrix M is built from the shifted coefficients of A(q⁻¹)
(first 1 + nr columns) and of q⁻ᵈ B(q⁻¹) (remaining ns + 1 columns):

M = [ 1    0    ...  0     0    ...  0   ]
    [ a₁   1    ...  0     b₀   ...  0   ]
    [ a₂   a₁   ...  ...   b₁   ...  ... ]
    [ ...  ...       1     ...       b₀  ]
    [ aₙ   ...       a₁    bₘ   ...  ... ]
    [ 0    aₙ        ...   0    ...  ... ]
    [ 0    0    ...  aₙ    0    ...  bₘ  ]

The vector x is obtained from

x = M⁻¹ l    (4.2.13)
If A and B are relatively prime, R and S can be designed by assigning
the closed-loop poles arbitrarily [1]. The closed-loop poles of the system are
obtained from the desired characteristics of the system. The desired
characteristic equation can be obtained from the specifications given to the
designer. Generally the desired characteristic polynomial is given for a continuous-
time system, specifying ω₀ and ζ, and the discretization of the continuous-time
system is carried out beforehand with a properly chosen sampling interval
Ts. The desired characteristic equation corresponding to deadbeat
performance, where all closed-loop poles are located at the origin, is given
by

A_m(q⁻¹) = 1

In this case the system response settles in at most n steps. This method has
no restriction on process zeros, i.e., the process zeros can be stable or
unstable.
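The design of Eqs. (4.2.6)-(4.2.13) can be sketched in Python as below. The function builds a Sylvester-type matrix and solves it directly; the degree of S is taken here as deg A - 1, which coincides with Eq. (4.2.10) for the identified arm model (n = m = 6), and the function name and interface are illustrative assumptions.

import numpy as np

def pole_placement_rst(a, b, am, d=1):
    """Solve A R + q^{-d} B S = Am for R and S via a Sylvester-type matrix.
    a = [1, a1, ..., an], b = [b0, ..., bm], am = coefficients of Am(q^{-1})
    (leading 1 included, deg Am small enough to fit the system)."""
    n, m = len(a) - 1, len(b) - 1
    nr, ns = m + d - 1, n - 1
    size = n + m + d                       # square system: one row per power of q^{-1}
    M = np.zeros((size, size))
    for j in range(nr + 1):                # shifted copies of A(q^{-1})
        M[j:j + n + 1, j] = a
    for j in range(ns + 1):                # shifted copies of q^{-d} B(q^{-1})
        M[d + j:d + j + m + 1, nr + 1 + j] = b
    rhs = np.zeros(size)
    rhs[:len(am)] = am                     # Am padded with zeros
    x = np.linalg.solve(M, rhs)
    R, S = x[:nr + 1], x[nr + 1:]
    T = np.sum(am) / np.sum(b)             # T = Am(1)/B(1) as in Procedure 1
    return R, S, T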
Procedure 1
It is assumed that the process zeros are the closed-loop zeros and the desired
response is characterized by A_m and B_m. Then, as established in [1], the
design procedure is summarized as follows:
64


Step 1
Solve the equation

A R + q⁻ᵈ B S = A_m

with respect to R and S. The degrees of R and S are given by Eq. (4.2.10).
Step 2
Use the control law given by

R u(t) = T r(t) - S y(t)

where

T = B_m = A_m(1) / B(1)
This design procedure does not cancel unstable system zeros and can be used
for non-minimum phase systems (systems with unstable zeros). Cancellation of
the unstable process zeros (non-minimum phase case) must be avoided. The
only restriction of this method is that A and B should not have common
factors. However, the system zeros can be simplified since deg A_m is normally
less than deg (A R + q⁻ᵈ B S). It follows from Eq. (4.2.5) that factors of B which
are not also factors of B_m must be factors of R. The open-loop zeros (factors
of B correspond to open-loop zeros) which are not desired closed-loop
zeros must be cancelled. Thus B is factored as
65


B = B⁺ B⁻    (4.2.14)

where the zeros of B⁺ are stable (well damped modes) and the zeros of B⁻ are
unstable (poorly damped modes). A necessary condition for solvability of the
servo problem is

B_m = B⁻ B_m′    (4.2.15)

This equation defines B_m′. As shown in [1], the design procedure in which
the open-loop zeros which are not desired closed-loop zeros are cancelled can
be described as follows:
Procedure 2
Data
Given the process model A and B and the desired response characterized by
A_m and B_m. Assume Eqs. (4.2.14) and (4.2.15) are satisfied and d ≥ 1.
Step 1
Factor the polynomial B into

B = B⁺ B⁻

Step 2
Solve the equation
66


A R₁ + q⁻ᵈ B⁻ S = A_m

with respect to R₁ and S. The degrees of R₁ and S are obtained from

deg S = deg B - 1
deg R₁ = deg B⁻ + d - 1    (4.2.16)

Step 3
Use the control law given by

R u(t) = T r(t) - S y(t)

where

R = R₁ B⁺  and  T = B_m′
A and B⁻ are coprime since A and B are assumed to be coprime. This
method can also be used for non-minimum phase systems. However, the
decomposition in Step 1 is time consuming if deg B is very large. In self-
tuning algorithms, where computation time is critical, each calculation has to
be repeated between consecutive sampling instants. Note that
for a minimum phase system all the process zeros can be cancelled.
Example 4.3
The system considered in this example is a flexible robotic arm model
which is a non-minimum phase system. Thus, unstable process zeros must
67


not be cancelled. Further, to avoid the factorization problem, no process zeros
are cancelled. Note that the model of the flexible robotic arm was obtained
from real-time identification. The model is given by

y(t) = [q⁻ᵈ B(q⁻¹) / A(q⁻¹)] u(t)

where

A(q⁻¹) = 1 - 0.2505q⁻¹ - 0.1901q⁻² - 0.1381q⁻³ - 0.0927q⁻⁴ - 0.0664q⁻⁵ - 0.0840q⁻⁶
B(q⁻¹) = -0.0013 - 0.0021q⁻¹ + 0.012q⁻² + 0.0939q⁻³ + 0.1011q⁻⁴ + 0.0622q⁻⁵ + 0.0371q⁻⁶
The desired characteristic equation is
and
The design parameters are the coefficients of the polynomials T, R and S. The
polynomials R and S are obtained by solving

A R + q⁻ᵈ B S = A_m

and are given by

R(q⁻¹) = 1 + 0.2511q⁻¹ + 0.2549q⁻² + 0.2464q⁻³ + 0.1883q⁻⁴ + 0.1023q⁻⁵ + 0.0760q⁻⁶
S(q⁻¹) = 0.4464 + 0.7068q⁻¹ + 0.4508q⁻² + 0.1977q⁻³ + 0.0791q⁻⁴ + 0.1722q⁻⁵

and T = A_m(1)/B(1) = 3.3014.


The output of the system and the control input are shown in Fig. 4.9 and
Fig. 4.10 respectively.
Fig. 4.9 System output and reference signal when no
process zeros are cancelled.
69


Fig. 4.10 Control signal when no process zeros are
cancelled.
70


CHAPTER 5
ADAPTIVE CONTROL
The basic idea behind designing an adaptive control system is to
combine a particular parameter estimation scheme with any control law.
The plant parameters are estimated recursively and the control law
parameters are computed in each step, using the estimated parameters
instead of the true ones [1,4]. This approach is known acertainty
equivalence adaptive control. The control law parameters can be directly
identified by parameterizing the system directly in terms of the control
law. This class of algorithms is known as direct algorithms. If the
parameters of the system are explicitly estimated and are available as
intermediate result to compute the control law parameters, then this type of
algorithm is called indirect algorithms [1,4]. The block diagram of an
adaptive control system is shown in Fig. 5.1.
Fig. 5.1 Schematic diagram of an adaptive control
system
5.1 Parameter Estimation
It is highly desirable that the estimation algorithm be simple and easy to
implement because on-line estimation algorithms must update the parameter estimates
between consecutive sampling instants. Two types of estimation schemes are
discussed in this chapter: the recursive least squares algorithm and the gradient
algorithm.
5.1.1 Recursive Least-Squares Algorithm
Consider a process model given by
A(q^-1)y(t) = q^-d B(q^-1)u(t)    (5.1.1)
Equation (5.1.1) can be written as
y(t) = θ^T φ(t-d)    (5.1.2)
where
θ^T = [a1 a2 ... an b0 b1 ... bm]    (5.1.3)
is the parameter vector, and
φ(t-d)^T = [-y(t-1) -y(t-2) ... -y(t-n) u(t-d) u(t-d-1) ... u(t-d-m)]    (5.1.4)
is the measurement vector.
If the parameters are unknown, a recursive least squares algorithm will be
used to estimate the parameters. The prediction model is described as
ŷ(t) = θ̂(t-1)^T φ(t-d)    (5.1.5)
where
θ̂^T = [â1, â2, ..., ân, b̂0, ..., b̂m]    (5.1.6)
is the estimated parameter vector and φ(t) is given by Eq. (5.1.4). As shown in
[1,4], the recursive least squares algorithm is given by
θ̂(t) = θ̂(t-1) + P(t-d)φ(t-d)[y(t) - φ(t-d)^T θ̂(t-1)]    (5.1.7)
P(t-d) = (1/λ)[ P(t-d-1) - P(t-d-1)φ(t-d)φ(t-d)^T P(t-d-1) / (λ + φ(t-d)^T P(t-d-1)φ(t-d)) ]    (5.1.8)
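A minimal sketch of this recursion is given below (in Python with NumPy). The delay indexing of Eqs. (5.1.7) and (5.1.8) is folded into the choice of the regression vector passed to the function, and the function name rls_update is illustrative only.

import numpy as np

def rls_update(theta, P, phi, y, lam=1.0):
    # One step of the recursive least squares algorithm, Eqs. (5.1.7)-(5.1.8):
    # theta - previous parameter estimate, P - covariance matrix,
    # phi - regression vector phi(t-d), y - new measurement y(t),
    # lam - forgetting factor (typically 0.95 ... 1.0)
    err = y - phi @ theta                                  # a priori prediction error
    denom = lam + phi @ P @ phi
    gain = P @ phi / denom                                 # equals P(t-d) phi(t-d)
    theta = theta + gain * err                             # Eq. (5.1.7)
    P = (P - np.outer(P @ phi, P @ phi) / denom) / lam     # Eq. (5.1.8)
    return theta, P

With lam = 1 this reduces to the standard least squares recursion used in Example 5.1 below.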
The weighting sequence λ can be used when greater weight must be given to
more recent data in time-varying systems, or to discard initial data in
nonlinear systems. Typical values of λ are between 0.95 and 0.99. However, using
λ < 1 coupled with changes in the command signal may cause bursts in the system
output [1]. The cause of this type of burst can be explained as follows. When
there are no changes in the command input for a long time, P(t)φ(t) will be zero
and thus there will be no change in the parameter estimate. This causes
the negative term in Eq. (5.1.8) to be zero, because that term
represents the reduction in parameter uncertainty due to the last measurement.
Then Eq. (5.1.8) becomes
P(t) = P(t-1)/λ    (5.1.9)
and the covariance matrix P becomes very large due to the lack of persistent
excitation [1]. A change in the command signal may then result in large changes
in the parameter estimates and thus in the system output. Bursting phenomena may
also occur in an adaptive control system if unmodelled dynamics and external
disturbances are present [15]. Even with perfect modelling of the system,
algorithms with vanishing gain could exhibit bursting phenomena. Although
the least squares algorithm gives a fast convergence rate initially, its gain approaches zero.
Bursting occurs periodically, which means that a self-excitation mechanism may
stabilize the system [14]. However, these bursts, which cause very large control
signals, are not tolerable in practice. There are many ways to eliminate bursts.
The use of exponential data weighting with variable forgetting, resetting of the
covariance matrix, switching off the estimator if P is not positive semidefinite,
or stopping the update of P when the prediction error is smaller than a given value
may eliminate possible bursting phenomena in the adaptive control loop
[1,4].
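As an illustration of the last of these remedies, the covariance and parameter update can simply be skipped whenever the prediction error falls below a chosen threshold. A sketch of such a conditional (dead-zone) update, built around the rls_update function sketched above, is shown below; the threshold value is an assumption and would have to be tuned for a given system.

def rls_update_with_deadzone(theta, P, phi, y, lam=1.0, threshold=1e-3):
    # Skip the update when the prediction error is too small to carry
    # useful information; this prevents covariance windup and bursting.
    if abs(y - phi @ theta) < threshold:
        return theta, P
    return rls_update(theta, P, phi, y, lam)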
Although bursting can occur even in the perfect modelling case, in this research
we are interested in bursting which occurs due to unmodelled dynamics and
possibly external disturbances. The reason is that the flexible arm has an
infinite number of modes and has to be modelled with a finite number of
modes. Thus, the unmodelled dynamics of the flexible arm may cause bursting
in the adaptive loop. The proposed modifications of the least squares algorithm
in [1] were implemented during the real-time running of the flexible arm
system and did not work well. However, the gradient algorithm, which does
not require persistently exciting conditions, gave satisfactory results.
5.1.2 Gradient Algorithm
Consider a system with unmodelled dynamics:
A(q^-1)y(t+1) = q^-d B(q^-1)[1 + Δ1(q^-1)]u(t+1) + Δ2(q^-1)u(t+1) + Δ3(q^-1)w(t+1)    (5.1.10)
where Δ1(q^-1) and Δ2(q^-1) denote the multiplicative and additive system perturbations and
Δ3(q^-1) denotes the external disturbance [14]. The polynomials A and B are given
by
A(q^-1) = 1 + a1 q^-1 + a2 q^-2 + ... + an q^-n
B(q^-1) = b0 + b1 q^-1 + b2 q^-2 + ... + bm q^-m
(5.1.11)
Since the parameters are not known they have to be estimated. Define a
vector of parameter estimates
θ̂(t)^T = [â1(t), ..., ân(t), b̂0(t), ..., b̂m(t)]
and a measurement vector
φ(t)^T = [-y(t), -y(t-1), ..., -y(t-n+1), u(t+1-d), u(t-d), ..., u(t+1-m-d)]    (5.1.12)
The cost function to be minimized for the deterministic case is
J_N = (1/N) Σ_{t=1}^{N} [y(t) - y*(t)]^2    (5.1.13)
and the cost function to be minimized for the stochastic case is
J = lim sup_{N→∞} (1/N) Σ_{t=1}^{N} E{[y(t) - y*(t)]^2}    (5.1.14)
where y*(t) denotes the desired output. As shown in [14], the gradient algorithm for the deterministic adaptive control
system is given by
θ̂(t+1) = θ̂(t) + (a/n(t)) φ(t)[y(t+1) - φ(t)^T θ̂(t)],  0 < a < 2    (5.1.15)
where
n(t) = n0(t) + ||φ(t)||^2    (5.1.16)
and
n0(t) = λ n0(t-1) + ||φ(t)||^2    (5.1.17)
where λ is a fixed number satisfying the condition given in [14].
As shown in [15], the stochastic gradient algorithm is given by
θ̂(t+1) = θ̂(t) + (a/n(t)) φ(t)[y(t+1) - φ(t)^T θ̂(t)],  a > 0    (5.1.18)
where n(t) is the normalizing sequence specified in [15] (Eqs. (5.1.19) and (5.1.20)).
The global stability analysis of the designed adaptive systems in the
presence of unmodelled dynamics and external disturbances is developed in [14],
where the usefulness of the developed methodology is demonstrated on
robust deterministic and stochastic adaptive control systems. A stochastic
gradient algorithm is designed in [15], where it is shown to have a better
rate of convergence of the tracking error than the extended least squares (or
least squares) self-tuning algorithm.
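A minimal sketch of a normalized gradient update of this general type is given below (in Python with NumPy). The particular normalization shown, with a fixed constant lam and step size a, is an assumption made for illustration and is not necessarily the exact sequence used in [14] or [15].

import numpy as np

def gradient_update(theta, n, phi, y, a=0.5, lam=1.0):
    # One step of a normalized gradient estimator: the adaptation gain a/n(t)
    # stays bounded because n(t) accumulates ||phi||^2, so no persistent
    # excitation condition is needed.
    n = lam * n + phi @ phi                 # normalizing sequence (assumed form)
    err = y - phi @ theta                   # prediction error
    theta = theta + (a / n) * phi * err
    return theta, n

In the real-time experiments reported in Chapter 7, an estimator of this gradient type replaced the least squares estimator in the adaptive loop.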
5.2 Adaptive Algorithms for Closed-Loop Pole Placement
The parameters to be estimated are the coefficients of the polynomials A(q^-1) and B(q^-1),
and the desired closed-loop transfer function is given by
Hm(q^-1) = q^-d Bm(q^-1)/Am(q^-1)    (5.2.1)
There are several assumptions which have to be made:
1) n = deg A(q^-1) and m = deg B(q^-1) are known.
2) A(q^-1) and B(q^-1) are relatively prime.
3) The system delay is known a priori.
4) The time delay in the desired closed-loop transfer function is equal
to or greater than the system's delay.
5) For non-minimum phase systems, the unstable zeros must be
included in the desired closed-loop transfer function as zeros, i.e.,
Bm = Bm' B- where the zeros of B- are unstable.
The unstable zeros within the system bandwidth have a significant effect on the
properties of the closed-loop system since they cannot be cancelled [1].
The control signal is generated by
R u(t) = T uc(t) - S y(t)    (5.2.2)
The polynomials R and S are obtained by solving the equation
AR + q^-d BS = Am    (5.2.3)
where Am is the desired closed-loop characteristic polynomial.
If Am is a stable polynomial, then the closed-loop system is asymptotically
stable. Generally, the desired closed-loop poles are chosen as the
dominant poles of a second-order continuous-time system [1,10]. For higher
order systems the remaining poles can be assigned near or at the origin. For
the deadbeat controller all the closed-loop poles are assigned at the origin. Am
can be chosen by discretization of a second-order continuous-time system as
Am(q^-1) = 1 - 2e^(-ζω0 Ts) cos(w Ts) q^-1 + e^(-2ζω0 Ts) q^-2    (5.2.4)
with the damping coefficient ζ, sampling period Ts, and frequency w = ω0 sqrt(1 - ζ^2),
where ω0 is the natural frequency.
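A short sketch of this discretization is given below (in Python with NumPy); the function name and the numerical values in the usage line are illustrative only.

import numpy as np

def desired_poly(zeta, w0, Ts):
    # Coefficients [1, am1, am2] of Am(q^-1) from Eq. (5.2.4)
    w = w0 * np.sqrt(1.0 - zeta**2)              # damped frequency
    am1 = -2.0 * np.exp(-zeta * w0 * Ts) * np.cos(w * Ts)
    am2 = np.exp(-2.0 * zeta * w0 * Ts)
    return np.array([1.0, am1, am2])

am = desired_poly(zeta=0.7, w0=10.0, Ts=0.01)    # example design values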
Once the closed-loop system performance is specified, an adaptive algorithm
for closed-loop pole assignment can be implemented as in Fig. 5.1. Depending
on the type of system (stable or unstable zeros), the following adaptive
algorithms can be used for closed-loop pole assignment [1]. The algorithms are
based on indirect identification.
Algorithm 1 (No process zeros are cancelled)
Data
It is assumed that the process zeros are the closed-loop zeros. The desired
response is characterized by Am and Bm.
Step 1
Estimate the parameters of the model
A(q^-1)y(t) = q^-d B(q^-1)u(t)
by one of the appropriate estimation schemes.
Step 2
Solve the equation
AR + q^-d BS = Am    (5.2.5)
with respect to R and S. The degrees of R and S are given by Eq. (4.2.10)
Step 3
Use the control law given by
R u(t) = T uc(t) - S y(t)
where T = Bm = Am(1)/B(1).
Steps 1 through 3 are repeated at each sampling interval.
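A compact sketch of one pass of this loop is shown below (in Python with NumPy). It reuses the rls_update and solve_diophantine functions sketched earlier in this chapter and in Chapter 4; the function name algorithm1_step and the argument layout are illustrative only.

import numpy as np

def algorithm1_step(theta, P, phi, y, uc, u_hist, y_hist, am, n, d=1):
    # Step 1: update the estimates of the coefficients of A and B
    theta, P = rls_update(theta, P, phi, y)
    a = np.concatenate([[1.0], theta[:n]])        # monic A(q^-1)
    b = np.asarray(theta[n:])                     # B(q^-1)
    # Step 2: solve A R + q^-d B S = Am for R and S
    r, s = solve_diophantine(a, b, am, d)
    t = np.sum(am) / np.sum(b)                    # T = Am(1)/B(1)
    # Step 3: control law R u(t) = T uc(t) - S y(t), solved for u(t);
    # y_hist = [y(t), y(t-1), ...] and u_hist = [u(t-1), u(t-2), ...] as arrays
    u = (t * uc - s @ y_hist[:len(s)] - r[1:] @ u_hist[:len(r) - 1]) / r[0]
    return u, theta, P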
The algorithm can be used for non-minimum phase systems since it does not
cancel any system zeros. The next algorithm cancels the stable system zeros.
Algorithm 2. (stable process zeros are cancelled)
Data
It is assumed that the unstable process zeros are included in the closed-loop
system as zeros. The desired response is characterized by Am and Bm.
Step 1
Estimate the parameters of the model
A(q^-1)y(t) = q^-d B(q^-1)u(t)
by one of the appropriate estimation schemes.
Step 2
Factor the polynomial B as
B = B+ B-
where the zeros of B+ lie inside the unit circle and the zeros of B- lie
outside the unit circle.
Step 3
Solve the equation
A R1 + q^-d B- S = Am
with respect to R1 and S. The degrees of R1 and S are given by Eq. (4.2.16).
Step 4
Use the control law given by
R u(t) = T uc(t) - S y(t)
where R = R1 B+ and T = Bm'
Steps 1 through 4 are repeated at each sampling period.
The drawback of this algorithm is that the factorization in Step 2 is time
consuming. Computation time is critical, especially for high-order systems, and
the factorization in Step 2 may take too long for the control input to
be generated at each sampling period. For minimum phase systems all the
process zeros can be cancelled.
5.3 Simulations
The system used for simulation is a flexible robotic arm model. The
polynomials A and B used in the simulations are obtained from real-time
identification (see Chapter 6) of the system. Since the flexible robotic arm
model is a non-minimum phase system, Algorithm 1 is used in all simulations.
Am(q^-1) = 1 is chosen. The reference signal is a square wave of amplitude 0.2
and a period of 500 samples.
Example 5.1
The estimation scheme used in this example is the recursive least squares
algorithm given by Eqs. (5.1.7) and (5.1.8). The initial value of the covariance
matrix is chosen as P(0) = 0.1 I, where I is a 13 by 13 identity matrix. Further,
λ = 1 is chosen. The polynomials R, S and T after 2000 iterations are given
by
R(q^-1) = 1 + 0.2511q^-1 + 0.2549q^-2 + 0.2464q^-3 + 0.1883q^-4 + 0.1023q^-5 + 0.0760q^-6
S(q^-1) = 0.4464 + 0.7086q^-1 + 0.4508q^-2 + 0.1977q^-3 + 0.0791q^-4 + 0.1722q^-5
T = 3.3014
respectively. The output and the control signal are shown in Fig 5.2 and Fig
5.3, respectively. The large output excursion at the beginning is due to the
fact that the estimates were initially far from their optimal values.
However, as time elapsed, the system output was able to follow the reference
signal due to the convergence of the parameter estimates to the true ones.
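The kind of closed-loop run shown in Fig. 5.2 and Fig. 5.3 can be reproduced with a short simulation loop of the form sketched below (in Python with NumPy), in which the plant is simulated with the identified A and B and the controller is updated every sample using the algorithm1_step function sketched above. The initialization of the estimates with small random values is an assumption made for illustration.

import numpy as np

def simulate(a_true, b_true, am, n_steps=2000, period=500, amp=0.2, d=1):
    a_true, b_true = np.asarray(a_true, float), np.asarray(b_true, float)
    n, m = len(a_true) - 1, len(b_true) - 1
    rng = np.random.default_rng(0)
    theta = 0.01 * rng.standard_normal(n + m + 1)   # small nonzero initial estimates
    P = 0.1 * np.eye(n + m + 1)                     # P(0) = 0.1 I
    y = np.zeros(n_steps)
    u = np.zeros(n_steps)
    for t in range(max(n, m + d) + 1, n_steps):
        # simulate the plant A(q^-1) y(t) = q^-d B(q^-1) u(t)
        y[t] = (-a_true[1:] @ y[t-1:t-1-n:-1]
                + b_true @ u[t-d:t-d-m-1:-1])
        uc = amp if (t // period) % 2 == 0 else -amp       # square-wave reference
        phi = np.concatenate([-y[t-1:t-1-n:-1], u[t-d:t-d-m-1:-1]])
        u[t], theta, P = algorithm1_step(theta, P, phi, y[t], uc,
                                         u[t-1::-1], y[t::-1], am, n, d)
    return y, u

With the identified A and B from Example 4.3 and am = [1.0], this loop mirrors the setup used in this example.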
Fig. 5.2 System output and reference signal when a
recursive least squares algorithm is used.
Fig. 5.3 Control signal when a recursive least squares algorithm
is used.
Example 5.2
In reality the flexible arm is characterized by an infinite number of modes.
Thus, we have a case of unmodelled dynamics which could give rise to bursting
phenomena. To imitate this real-time situation in simulation, we estimate 9
parameters for the system described by 13 parameters. The order of the system
to be estimated is 4. The polynomials A and B to be estimated are given by
A(q^-1) = 1 + a1 q^-1 + a2 q^-2 + a3 q^-3 + a4 q^-4
B(q^-1) = b0 + b1 q^-1 + b2 q^-2 + b3 q^-3 + b4 q^-4
respectively.
All the initial conditions are the same as in Example 5.1. The polynomials
R, S and T after 2000 iterations are given by
R(q^-1) = 1 + 0.1740q^-1 + 0.1268q^-2 + 0.1532q^-3 - 0.0126q^-4
S(q^-1) = -2.0162 + 1.0977q^-1 - 0.8229q^-2 + 0.0575q^-3
T = 6.97
respectively.
The output and the control signal are shown in Fig. 5.4 and Fig. 5.5
respectively. As shown in Fig. 5.4, except at the beginning, the output of the
system follows the reference signal. However, the system behaved poorly
when a recursive least squares algorithm was used during the experiment.
Fig. 5.4 System output when a recursive least squares algorithm
is used in the presence of unmodeled dynamics.
Fig. 5.5 Control signal when a recursive least squares algorithm
is used in the presence of unmodeled dynamics.
Example 5.3
During a real-time run the least squares algorithm did not work well.
However, the gradient algorithm given by Eq. (5.1.15) gave satisfactory
results. Here Example 5.2 is repeated with the estimation scheme given by Eq.
(5.1.15). The polynomials R, S and T are obtained from Eq. (5.2.3) and are
given by
Full Text

PAGE 1

IDENTIFICATION AND CONTROL OF A FLEXIBLE ROBOTIC ARM MODEL by Anteneh Kebede B.S., University of Colorado, 1992 A thesis submitted to the Faculty of the Grad1Iate School of the University of Colorado at Denver in partial fulfillment of the requirements for the degree of Master of Science Electrical Engineering 1995

PAGE 2

This thesis for the Master of Science degree by Anteneh Kebede has been approved by Miloje S. Radenkovic -,_ ----Hamid Z. Fardi os--oL,Date

PAGE 3

Anteneh Kebede (M.S., Electrical Engineering) Identification and Control of a Flexible Robotic Ann Model Thesis directed by Professor Jan T. Bialasiewicz ABSTRACT The purpose of this thesis is to design a controller which can suppress vibrations of a physical model of flexible robotic ann. As part of the design, the identification of the flexible robotic ann model was carried out using the observer/Kalman filter identification and the recursive least squares algorithms. An adaptive pole assignment controller is used to control the flexible robotic ann model. A gradient estimator is used to eliminate the possible bursting phenomena in the adaptive control system. This abstract accurately represents the content of the candidate's thesis. I recommend its publication. Jan T. Bialasiewicz iii

PAGE 4

CONTENTS CHAPTER 1. INTRODUCTION. l 2. SYSTEM DESCRIPTION 4 2.1 DC Motor. 6 2.2 Flexible Robotic Ann Model. 8 2.3 Tip position Sensor 13 3 SYSTEM IDENTIFICATION 14 3 .l Identification Principle 14 3.2 Observer/Kalman Filter Identification IS 3.2.1 System and Observer Markov Parameters. 16 3.2.2 Computation of System Markov Parameters 23 3.2.3 Computation of Observer Gain Markov Parameters 28 3.2.4 Computational Steps of OKID 29 3.3 Recursive Parameter Identification 30 3.3.1 Recursive Least Squares Algorithm. 31 3.4 PRBS Signal for Process Identification 36 4. CONfROL DESIGN FOR THE FLEXIBLE ROBOTIC ARM MODEL ........... . 40 4.1 Pole Placement Design in State Space 41 IV

PAGE 5

4.2 Transfer Function-Based Design of Linear Regulator 60 5. ADAPTIVE CONTROLLER . 71 5.1 Parameter Estimation 72 5 .1.1 Recursive Least Squares Algorithm . 73 5.1.2 Gradient Algorithm 76 5.2 Adaptive Algorithms for Closed Loop Pole Placement 79 5.3 Simulations . . . 83 6. IDENTIFICATION OF Tiffi FLEXIBLE ROBOTIC ARM MODEL. 92 6.1 Observer!Kalman Filter Identification 92 6.2 Recursive LeaSt Squares Algorithm .106 7. EXPERIMENTAL RESULTS ; . .. .110 7.1 Choice of Sampling Frequency and Order of the System Model . 110 7.2 Non-Adaptive Controller .112 7.3 Adaptive Pole Placement 117 7.3.1 Deterministic Case 117 7.3.2 Stochastic Case .122 8. CONCLUSION .128 APPENDIX .130 REFERENCE. .185 v

PAGE 6

FIGURES Figure 1.1 Basic structure of an adaptive controller 2 2.1 Experimental setup of the flexible robotic arm model 5 2.2 The experimental flexible robotic arm model 5 2.3 Flexible robotic arm model 11 3.1 Structure of recursive least squares estimation 32 3.2 PRBS signal. . . . . . 39 4.1 Servo system with observed state feedback . 42 4.2 System response and command signal for the servo system with observed state feedback and integral control. . . 53 4.3 Observer output and command signal for servo system with observed state feedback and integral control . . . 54 4.4 Control signal for servo system with observed state feedback and integral control . . . . . . . 55 4.5 System output and command signal for the servo system with observer state feedback and integral control designed in Example 4.2. . . . . . 57 4.6 Observer output and command signal for servo system with Observer state feedback and integral control designed in Example 4.2. . . . . . . . . 58 VI

PAGE 7

4.7 Control signal for servo system with observer state feedback and integral control designed in Example 4.2 59 4.8 Block diagram of the closed-loop system 61 4.9 System output and reference signal when no process zeros are cancelled 69 4.10 Control signal when no process zeros are cancelled 70 5.1 Schematic diagram of an adaptive control system. 72 5.2 System output and reference signal when a recursive least squares algorithm is used 85 5.3 Control signal when a recursive least squares algorithm is used 86 5.4 System output when a recursive least squares algorithm is used in the presence of unmodeled dynamics. 88 5.5 Control signal when a recursive least squares algorithm is used in the presence of unmodeled dynamics. 89 5.6 System output and reference signal when a gradient algorithm is used in the presence of unmodeled dynamics . 90 5. 7 Control Signal when a gradient algorithm is used in the presence of umnodeled dynamics 91 6.1 Computational steps of OKID 93 6.2 Input excitation. 94 6.3 Output measurement of the flexible robotic ann model 95 6.4 Observer pulse response sequence 96 6.5 System Markov pulse response. 97 vii

PAGE 8

6.6 Observer gain pulse response 98 6.7 Hankel matrix singular values .100 6.8 Estimated output of the system .102 6.9 Predicted output of the identified model and the real output .103 6.10 Normalized prediction error .104 6.11 Frequency response of the identified model. .105 6.12 Parameter estimates of the flexible ann when a recursive least squares algorithm is used . . . . . . 107 6.13 Output measurement when a recursive least squares algorithm is used . . . . . . . .108 6.14 Predicted output when a recursive least squares algorithm is used . . . . . . . .109 7.1 System output when a non-adaptive controller is used 7.2 Control signal when a non-adaptive controller is used 7.3 System output when a non-adaptive controller is used with payload added at the tip of the flexible arm 7.4 Control signal when a non-adaptive controller is used with payload added at the tip of the flexible arm .113 .114 . .115 .116 7.5 Tip position response when a gradient estimator is used in the adaptive control system . . . . . . . 119 7.6 Control signal when a gradient estimator is used in the adaptive control system . . . . . . .120 Vlll

PAGE 9

7. 7 Tip position response when a gradient estimator is used in the adaptive control system with a payload added at the tip of the arm . . . . 7.8 Control signal when a gradient estimator is used in the adaptive control system with a payload added at the tip of the arm . . . . . 7.9. System output when a stochastic gradient estimator is in the adaptive control system . . . . 7.10 Control signal when a stochastic gradient estimator is in the adaptive control system . . . . 7.11 Tip position response when a stochastic gradient estimator is used in the adaptive control system with payload added at the tip of the arm. . . . . . . 7.12 Control signal when a stochastic gradient estimator is used in the adaptive control system with a payload added at the .121 .122 .124 .125 .126 tip of the ann . . . . . . . . .127 ix

PAGE 10

TABLES Table 2.1 DC motor/tachometer Specification. . . . . . . . . . . 9 2.2 Properties of the Flexible Arm. . . . . . . . . . . I 0 3.1 Generation of Maximum Length PRBS ................. 38 6.1 Modal Parameters . . . . . . . . . . . . . . 101 X

PAGE 11

I wish to thank Professor Bialasiewicz for his help in motivating my work on this thesis, in providing me with meaningful direction in this research effort and funding my research work. I would like also to thank professors Radenk:ovic and Fardi for seMng in my thesis committee. XI

PAGE 12

CHAPTER 1 INTRODUCTION In control systems design, complete knowledge of the system model is essential. A classical control design method can be used when the system model is completely specified [4]. However, in practice system models are not known or are known partially. Therefore, identification of the system model parameters is necessary for optimal design of a control system .. System identification is the determination of the system model from the input/output measurements [10]. Depending on the stability of the system, either an open-loop or a closed-loop identification can be implemented. For unstable systems it is desirable to use a closed-loop identification. For time-varying or slowly time-varying systems, identification of the system models only is not enough. For optimum system performance, it is necessary to design a self-learning controller which can account for variations of the system parameters. For unknown system models, a controller could be combined with on-line parameter estimation to design a controller using the estimated parameters instead of the true parameters [1,4]. 1

PAGE 13

This type of controllers are known as certainly equivalent adaptive controllers [4]. The basic structure of an adaptive control is shown in Fig. 1.1. There exists a wide range of adaptive controllers. The choice of an adaptive controller depends on the system performance and specifications. Some adaptive controllers may be more suitable than the others for a particular system . 1 notse ... plant outputs .... --, 7' controller LL .... parameter estimator I" Fig. 1.1 Basic structure of an adaptive controller. An adaptive control loop may exhibit bursting phenomena due to unmodelled dynamics and disturbance present in a system. One such system Is a flexible manipulator which is characterized by infinite set of modes. 2

PAGE 14

However, to design a controller for a flexible manipulator a model with only a finite number of modes can be used. Thus, the unmodelled dynamics may give rise to possible bursting phenomena and the adaptive control system may become unstable [14]. In this case, a proper choice of an estimator will stabilize the adaptive control system. 3

PAGE 15

CHAPTER 2 SYSTEM DESCRIPTION The system considered to be identified and controlled consists of a power amplifier driving a DC motor and physical model of a flexible robotic arm model. The flexible robotic arm model is mounted directly on the shaft of a DC motor. The tip position of the arm to be controlled is sensed with an optical detector. The flexible robotic arm model is mounted on top of a D.C. motor which Is connected to a computer via an input/output board. The tip position of the ann is measured with position indicator system which includes the LED, the detector and the lens. The LED is placed at the tip of the robot ann. A lateral effect detector is centered above the center of the arm tip. The detector with a lens covers up to degrees in the horizontal plane. The ann bends horizontally freely but is stiff in vertical bending. The mechanical configuration of the system is shown in Fig 2.1 and Fig. 2.2 is the picture of the system. 4

PAGE 16

lense arm LED-+ 1: fixed load motor and l tachometer I I Fig 2.1 Experimental setup of the flexible robotic ann model Fig. 2.2 The experimental flexible robotic arm model. 5

PAGE 17

2.1 DC Motor The robotic ann model is mounted on the top of the motor. This is an armature-controlled D.C. servomotor with a permanent magnet field and a tachometer connected to the load via a rigid shaft coupler which has a moment of inertia of 0.0011 oz-in-s2/rad about the rotational axis. The complete model of the motor is described in [9]. The load moment of inertia is 0.62 oz-in-s2 /rad about the rotational axis. The robotic arm model is mounted on the top of the load. The power amplifier provides current through the motor armature which is proportional to the input voltage generated by the amplifier. Peak output current is limited to .0 A and the output current step rise time is limited to a minimum of 0.7 msec. The combination of the static Coulomb friction and the viscous dumping form the friction torque in the system which are given by the following equation (2.1.1) where p is the coefficient of viscous friction, fc( tu) is the Coulomb friction. fi tu,T ,J is static friction, K5 is the static friction coefficient with a value of 3.0 oz-m and Kc is the Coulomb friction with a value of 3.4 oz-in. The coefficients Kc and K5 are obtained from the DC motor system measurements. The differential equations describing the dynamics of the DC 6

PAGE 18

motor and load have the following form: (2.1.2) and (2.1.3) The time constant of the RC filter at the output of the tachometer circuit is 100 times greater than the time constant -r 2 due to the tachometer armature inductance. The tachometer gain is given by the transfer function --=--(2.1.4) where Kv =IOKg and Kg is the generator constant of the tachometer. The potentiometer gain is given by the transfer function = K lt1 p 1 s+tl (2.1.5) where KP is the DC gam of the potentiometer circuit and -r 1 is the time constant of an RC filter which filters the output of the potentiometer. The motor characteristics are given in Table 2.1 [9]. The power amplifier and motor armature circuit provide the necessary power gam required for driving DC motor through direct control of the motor armature current. The 7

PAGE 19

dynamic equations of the power amplifier are coupled to the non-linear motor dynamics given by Eq. (2.1.2) and Eq. (2.1.3). Assuming the tenn Ke O(t) to be a constant voltage source, the closed-loop transfer function of the power amplifier is given by la(s) -2.06x l08(s+667) V,(s) (s+63448)(s+3379)(s+646) (2.1.6) Since the power amplifier allows direct torque control, the effects due to nonlinear armature circuit resistance are not considered in this system. 2.2 Flexible Robotic Arm Model A flexible robotic arm model is a distributed parameter system. Therefore, its analytical model is characterized by infinite set of natural modes [3]. The flexible robotic arm model is a uniform, slender beam connected to the annature of a D.C. motor via a rigid hub. The physical model of a flexible robotic ann is implemented as a beam which can bend freely in the horizontal plane but is stiff in vertical bending. The geometric and mass properties of the system are summarized in Table 2.2 and a sketch of the flexible robotic arm model is given in Fig. 2.3. 8

PAGE 20

Table 2.1 DC motor/tachometer specifications. Peak torque 250 oz-in Continuous torque 30 oz-in Friction torque 4 oz-in Damping factor 0.4 oz-in!Krpm Rotor inertia 0.0058 2 oz-m-s Weight 52 oz Winding temperature 155 oc Thermal resistance 5 C/Watt Torque sensitivity 10 oz-in/A BackE.M.F 7.4 V/K.pnn T enninal resistance 1.6 Ohms T enninal inductance 2.2 mH Peak Current 25 A Voltage gradient 7 V/Kprm Armature resistance 36 Ohms 9

PAGE 21

Define the parameters of the arm as follows: l =length of the beam x =distance along the of the beam r =hub radius =payload mass IP =payload inertia m =mass per unit length EI =flexual rigidity of the beam w(x,t) =flexural deflection of the beam Table 2.2 Properties of the flexible arm. Beam length, em 72 Beam nnr.n 1.5 Beam width, em 7.6 10

PAGE 22

Fig. 2.3 Flexible robotic ann model As shown m [3], the tip position y is given by y(x,t)=w(x,t)+(x+r)6h(t) (2.2.1) The kinetic energy of the system is given by (2.2.2) 11

PAGE 23

where denotes the derivative with respect to t and denotes the derivative with respect to x. The first term is due to hub rotation whereas the integral term arises from the motion of the flexible link. The potential. energy of the system is given by 1 V-1 fEIW'(x,t'f dx e 2 0 (2.2.3) and the external work done on the system by the motor torque, T m is given by e ..., W=fr d6 Ill h (2.2.4) 0 where emax is the total angle of rotation of the arm. It is impractical to design a controller based on full-order flexible arm because the flexible ann is characterized by infinite set of natural modes. Therefore, it is necessary to use a reduced-order lumped-parameter model to design a controller. One way to model a flexible arm is to use the assumedmodes method together with Lagrange equation [3]. However, in this thesis, to design a controller for flexible arm we use identified finite-order models 12

PAGE 24

obtained using input/output data. 2.3 Tip Position Sensor An optical sensor, a UDT SCtO/D lateral effect detector, along a camera lens was used to measure the tip position of the flexible ann. The distance between the optical sensor and the lens is about t. 0 inch [ 5]. The detector has a tern x tern active area and will detect light from 350-llOOnm, i.e., the maximum resolution is about t J.lm. Its peak responsivity is 0.5A/W at 900nm. The detector proportions the photocurrent among its 4 electrodes irrespective of image detail. The detector is incorporated in a computer-based control scheme usmg a UDT 431 position indicator which has a resolution of t part in 4000 [5]. The 431 position indicator has a bandwidth of 1kHz. The 431 can accommodate a I 00000: I range of light levels. The light source is provided by a high powered LED. At tOO rnA the LED emits t2 mW at 880 nm into a 15 beam. The LED is connected to a I 00 rnA LED supply connection on 431 position indicator. The complete description of the UDT 43t x-y position indicator along the UDT SCIO/D lateral effect detector is given in [5]. 13

PAGE 25

CHAPTER 3 IDENTIFICATION SCHEMES 3.1 Identification Principles The dynamical behavior of systems can be described with various model structures, each one dedicated to a particular application. In some cases, the model of a system can be obtained from the laws of chemistry, physics, etc [4,10]. These type of models give a fairly complete system description. However, these models, in general, are complex and can rarely be used for the design of control systems [10]. In other cases, it might not be feasible / to determine the model of a system precisely from the laws of physics, chemistry, etc. In these instances it may be possible to determine the model of a system by observing the nature of the system's response under appropriate experimental conditions. This procedure is known as system identification meaning the experimental determination of the model of a dynamic process from input/output measurements [10]. Two types of identification methodologies will be used to identify the parameters of a flexible robotic arm model: observer/Kalman filter 14

PAGE 26

identification algorithm and recursive least squares algorithm. The observer/Kalman filter identification is based on identification of observer Markov parameters from which the state space model is obtained. The recursive least squares is based on minimizing a quadratic cost function. 3.2 Observer/Kalman Filter Identification Many system identification techniques have been developed and applied to identify a state-space model for modal parameter identification of flexible structures. The modal parameters include frequencies, damping and mode shapes. Most techniques are based on sampled pulse system response histories which are known as system Markov parameters [6]. One approach solves directly for system Markov parameters from system input and output data. But there is a drawback of this approach when inverting an input matrix which becomes particularly large for lightly damped systems. Another approach uses Fast Fourier Transforms of the input/output data to compute the Frequency Response Functions, and then compute system Markov parameters by using the Inverse Discrete Fourier transform [6]. Since system Markov parameters may exhibit very slow decay, asymptotically stable observer can be used to form a stable discrete state-space 15

PAGE 27

model for the system to be identified [6,8]. In this approach one can assign any poles desired for the observer, and specify the decay rate of the observer Markov parameters as well the number of parameters needed before they have decayed to negligible level. 3.2.1 System and Observer Markov Parameters Consider the discrete state-space model given by the following equations: where x(k+ 1 +Bfi..k) x(k) is an n-vector y(k) 1s an m-vector, u(k) is an r-vector A IS nxn matrix B 1s nxr matrix C is mxn matrix D is mxr matrix (3.2.1) (3.2.2) Assuming that x(O)=O, the solution of Eq. (3.2.1) for IF0,1, .... /-1 may be 16

PAGE 28

obtained by recursion as follows: .x(O)=O x(1)=Bu(O) .x(2)=ABu(O)+Bu(1) 1-1 .x(l-1)= LA t-1Bu(l-i-1) t=l and, one obtains the solution of Eq. (3.2.2) as follows: y(O)=Du(O) y(1}=CBt(O)+Dt(1) y(2)=CABr{O)+CBt(1 )+Dt(2) 1-1 y(l-1)= LCA t-lBt(l-1-i)+Dt.(l-1) tl Regrouping Eq. (3.2.3) and Eq. (3.2.4) into a matrix fonn yields u(O) u(l) u(2) u(3) ... u(l-1) u(O) u(l) u(2) ... u(l-2) y=[D CB CAB ... CA 1 2 B] u(O) u(l) ... u(l-3) u(O) ... u(l-4) u{O) mxrl rlxl 17 (3.2.3) (3.2.4) (3.2.5)

PAGE 29

where y=[v(O) Y(l) ... y(/-1)] Equation (3.2 .5) can be written in the compact form: y=YU (3.2.6) The matrix Y contains Markov parameters to be determined The matrix U is an upper triangular input matrix. The matrix U is square for single input (r=l). This is the only case in which Y has a unique solution Y=ylT1 Even for r=l, the matrix U may become ill-conditioned and the solution may not be attainable [2]. For r 2, the solution for Y is not unique because there are mxr/ unknown equations in the Markov parameters matrix but only mxf equations. Assume that the system is asymptotically stable. Then, for sufficiently large p Ak :::::0 for Then, Eq. (3.2.6) can be approximated by y:::::YU where 18 (3.2.7)

PAGE 30

y=[y(O) y(l) y(2) ... y(p) ... y(/-1)] U= mxl Y=[D CB CAB ... CA:"-1B] r(p ... l) u(O) u(l) u(p) ... u(/-1) u(O) ... u(p-1) u(l-2) u(O) u(l-p-1) r(p,..l)xl By choosing />r(p ... l) we will have more equations (mxl) than unknowns (mxr(p+ 1)) and we could approximately calculate p Markov parameters using pseudo-inverse ut of the matrix U, i.e. (3.2.8) However, for lightly damped systems the size of the matrix U becomes too large to solve for pseudo-inverse ut numerically [2]. The solution to this problem is to increase artificially the damping of the system, as proposed in [8]. By increasing artificially the damping of the system and implementing the observer model, it is possible to obtain the solution of system Markov 19

PAGE 31

parameters. Adding and subtracting Gy(k) on right-hand side of Eq. (3.2.1) and rearranging yields x(k+ 1 )=(A +GC)x(k)+(B +GD)u(k)-G}(k) where -A=A+GC, -=Ax(k)+Bv(k) -B=[B+GD -G], v(k)=[u(k)l .v(k) (3.2.9) (3.2.10) and G is an nxm matrix chosen to make A +GC as stable as desired [2]. The output of the system is given by Eq. (3.2.2). Markov parameters of Eq. (3.2.9) are referred as observer Markov parameters. The input-output description for the observer model is then given by -y=YV (3.2.11) where y=[y(O) y(l) .v(2) .. .y(p)... .v(/-1 )] mxl 20

PAGE 32

V= Y=[D ci cAB ... cir1i ci'-2..8] mx[(m+r)(l-l)+r] u(O) u(l) u(2) u(p) u(l-1) v(O) v(1) ... v(p-1) v(l-2) v(O) ... v(p-2) v(l-3) . v(O) v(l-p-1) . v(O) [(mxr)(l-l)+r]xl When using real data the eigenvalues of the A are placed at the origin [8, 12] (i.e. the deadbeat observer is considered) so that cikizo for hp. The observer Markov parameters can be obtained from real data using Eq. (3.2.11) with the truncated data, i.e., we have where y=YV Y=[D ci CAB ... cip-li 1 mx[(m+r)p+r] 21 (3.2.12)

PAGE 33

u(O) u(l) u(2) v(O) v(l) V= v(O) u(p) ... u(l-1) ... v(p-1) ... v(l-2) ... v(p-2) v(l-3) v(O) [(r>
PAGE 34

where --p-1Y=[D CB CAB ... CA B] y=[y(p) y(p+1) ... y(i-1)] x=[x(O) x(1) ... x(i-p-1)] u(p) u(p+1) ... u(l-1) v(p-1) v(p) ... v(i-2) V= v(p-2) v(p-1) ... v(l3) v(O) v(l) ... v(l-p-1) The effect of the non-zero initial conditions on measured data becomes negligible after p steps because the non-zero initial conditions are multiplied by jP [2]: The negligible effect of the initial conditions after p time steps makes the response of the system stationary [6]. 3.2.2 Computation of System Markov Parameten The system matrices A, B, C, and D are computed from the system Markov parameters and the observer gain matrix G is obtained from the 23

PAGE 35

observer gain Markov parameters. To compute system Markov parameters Y from the observer Markov parameters i, following the approach given in -[6,8,12], partition Y such that where Y=D 0 ---Y=[ Y0 Y1 YP] To obtain Markov parameter Y1 consider Y=CB 1 (3.2.14) (3.2.15) (3.2.16) Adding and subtracting the term CGD on the right side of Eq. (3.2.16) and using Eq. (3.2.15) we get Y1 =CB+CGD-CGD =C(B+GD) -CGD (3.2.17) Markov parameter CAB is obtained by considering -cl) Y2 in Eq. (3.2.15) 24

PAGE 36

f;1> =CAB+CGCBt-C(A +GC)GD Therefore, (3.2.18) To compute Markov parameter Y3 consider -(1) y3 in Eq. (3.2.14) from which (3.2.19) Similarly, one can obtain Markov parameters Y4 Y5 and so on. As shown in [12], the general relationship between the actual system Markov parameters and the observer Markov parameters is given by 25

PAGE 37

D==Y. ==Y 0 0 for k==l, ... JJ (3.2.20) for k=p+ l, ... CID Equation (3.2.20) shows that there are only p independent Markov parameters and there are only p+ 1 observer Markov parameters computes as least square solution from Eq. (3.2.12).. By the proper choice of p. r:n and r:2> are considered to be zero for k>p [8]. As established in [8], the relationship between system Markov parameters and observer Markov parameters can be developed further. Define y2 y3 ... YN+l y3 y4 ... YN+2 H= yp+l yp+2 ... YN+p (3.2.21) f(2) = [ y(2) -(2) -fi2)] -Y ... p p-1 where N is a sufficiently large arbitrary number. Further, the Hankel matrix H can be expressed by 26

PAGE 38

c CA H= CA2 A[B AB A2B .. AN-1B] =RAQ (3.2.22) where R is the observability matrix Q is the controllability matrix, and A is the system state matrix. The size of the matrix H is mp x Nr. From Eq. (3.2.20) and Eq. (3.2.21) one obtains (3.2.23) The identified state matrix A represents the controllable and observable part of the system. The maximum rank of H is mp provided that Nr>mp. The system matrix A (of order n) should exist if p is chosen such that and f'2> is obtained uniquely. Therefore, p must be chosen such that The product mp is the upper bound on the order of the identified system model. Therefore, it is not necessacy to have more system Markov parameters than the necessary number to create a full-rank Hankel matrix. [6,8] 27

PAGE 39

3.2.3 Computation of Observer Gain Markov Parameters To compute the observer gain G, following the approach m [6,8], first recover the parameters fi kl23 1."' .n.. or = , ... (3.2.24) From Eq. (3.2.24), the first parameter is (3.2.25) The second parameter ts gtven by (3.2.26) The third parameter Is given by (3.2.27) The next k parameters are computed similarly. As established m [8], the general relationship is given by 28

PAGE 40

for k=2,3 ... .p for k=p+ 1, ... ,CD The observer gain G is obtained from where c CA R=CA2 (3.2.28) (3.2.29) (3.2.30) The integer k has to be sufficiently large so that the observability matrix R has the rank of n. 3.2.4 Computational Steps of OKID The computational steps of the observer!Kalman filter identification (OKID) algorithm, given a set of input and output data [6], are listed below: 29

PAGE 41

1) Choose a value of p which determines the number of observer Markov parameters to be identified from the given set of input and output data. Compute the least-squares solution of the observer Markov parameter matrix Y. 2) Determine system and observer gain Markov parameters from the identified observer Markov parameters. 3) Identify a state-space model of the system and the observer gain from the recovered sequence of the system and observer gain Markov parameters by using ERA (Eigensystem Realization Algorithm) or ERA/DC (Eigensystem Realization Algorithm with Data Correlation) .. 4) Find the eigensolution of the realized state. matrix. Transform the realized model to modal coordinates for modal parameter identification which includes frequencies, dampings and mode shapes at sensor locations. 3.3 Recursive Parametric Identification An estimation algorithm with recursive formulation can be implemented on digital computer to estimate the parameters of the model. There are two classes of parameter estimation algorithm: on-line and off-line. In the off-line 30

PAGE 42

estimation algorithm, all the data are available prior to analysis and the data may be treated with no strict time. limitations. In on-line case, parameters are updated recursively during each sampling interval. Hence, it is necessary to use a relatively simple estimation algorithm in many practical applications. 3.3.1 Recursive Least Squares Algorithm In this section, the identification of sampled discrete-time parametric dynamic model is discussed. A schematic diagram of a recursive parameter estimation is shown in Fig. 3.1. Consider a discrete-time system given by -.tA( -1) y(t+l)-q q u(t+l) B(q -1) (3.3.1) where A(q-1 ) and B(q-1 ) are polynomials in the unit-delay operator q-1 and are defined as follows: (3.3.2) (3.3.3) and d represents the pure delay of a system. 31

PAGE 43

u(t+ 1) pIan t y(t+I) + '" e(t+l) y(t) ,,. y(t+ 1) Fig. 3.1 Structure of recursive least squares estimation. Equation (3.3.1) can be written as y(t+ 1 )=8r cl>(t) (3.3.4) where (3.3.5) 32

PAGE 44

is the parameter vector, and P(tf =[ -y(t) -y(t-1) ... -y(t+ 1-n),u(t+ 1-d) u(t-d-1) ... u(t+ 1-m -d)] (3.3.6) is the measurement vector. Then, the prediction model ts given by (3.3.7) where (3.3.8) is the estimated parameter vector. A recursive parameter adaptation algorithm with memocy is desired. Least squares algorithm provides a satisfactocy convergence speed and a high adaptation gain at the beginning when the optimum is far away [4,10]. The recursive least squares algorithm mtmmtzes the following quadratic cost function J(t)= .E [y(i+1)-0(t)r P(i)]2 (3.3.9) tal 33

PAGE 45

As shown in [4], the recursive least squares parameter adaptation algorithm 1s given by P(t+ 1 )=P(t)1 The posteriori prediction error, e(t+l), 1s given by e(t+ 1)=y(t+ 1) (3.3.10) (3.3.11) (3.3.12) Equation (3.3.10) is based on posteriori prediction error, i.e., parameters are estimated after the action of the parameter .adaptation algorithm. The recursive least squares algorithm gives less and less weight to the new measurements since it is an algorithm with decreasing adaptation gain. For estimation of time-varying system parameters and other system parameters, a decreasing adaptation gain may not be suitable [10]. There are a number of variants of the recursive least square algorithm with a different choice of adaptation gain. As shown in [10], the least squares algorithm can be generalized by introducing two weighting sequences A1(t) and A2(t) into Eq. (3.3.11): 34

PAGE 46

(3.3.13) A number of choices for J..1(t) and J..2 (t) is possible depending on the type of a system. For example, for the identification of slowly time-varying system, A value of J..1 between 0.95 and 0.99 is a typical choice. The effect of A.1 is to give less and less weight to the old data. The cost function to be minimized in this case is given by: I J(t)= LA 1 )-S(t)r (3.3.14) t=l If it is necessary to avoid a too rapid decrease of the adaptation gam, then one would choose J..1(t) and J..2 (t) as: J..1(t)=J..0 [ J..1(t-1 )-1 ]+ J..0 l2(t)=1 The criterion to be minimized in this case ts: 35

PAGE 47

I J(t)= L .11 (i}1-t[y(; + 1) -6(t)r 4>(i)]2 (3.3.15) t=l This type of profile is very suitable for the identification of stationary systems. If .A.1(t) and .A.it) are chosen as .A.1(t)=.A.2 (t)=1, Eq. (3.3.11) is obtained. The criterion to be minimized in case is given by Eq. (3.2.9). This type of choice of .A.1(t) and .A.2(t) is suited for the identification stationary systems. 3.4 PRBS as Test Signal for Process Identification In order to correctly identify parameters of a system, a frequency-rich input must be used. A frequency-rich input can be obtained by the use of pseudo-random binary sequences (PRBS). PRBS are generated by a digital waveform generator which produces a binary signal by switching randomly between two output levels [2]. The PRBS is characterized by a sequence length within which the pulse width vary randomly but over the large time horizon it is periodic. The period is defined by the sequence length. The PRBS are generated by means of shift registers. The maximum sequence length L is L=t'-1 where N is the number of stages of the shift register. Each stage of the shift 36

PAGE 48

register stores either 0 or 1. Note that at least one of the N stages should have a value of 1, although one generally stores a value of 1 on all N stages initially. At every pulse from the clock, all digits are moved one place to the right. The digit in the last right-hand stage moves out to the output stage register while the first stage is filled by a new digit obtained by the modulo 2 addition of the digits present in the stages 1; and 1 h before the clock pulse arrived [2,11]. The stage number 1; and 1h are specified in Table 3.1 [10]. The pattern of every N adjacent digits is then wrique. Note the maximum duration of a PRBS signal is equal to N. In order to identify the steady state gain of the system correctly, the following condition must be satisfied: (3.4.1) where tR is the rise time of the system, T5 is the sampling period. The magnitude of the PRBS may be low, provided that the signal/noise ratio is high. For a low signal/noise ratio it is necessary to extend the duration of the test to obtain a satisfactory parameter estimation [10]. 37

PAGE 49

Table 3.1 Generation of maximum length PRBS NUMBER OF STAGES SEQUENCE LENGTH STAGES ADDED N L='J!f-I I; and lh 4 15 3 and 4 5 31 3 and 5 6 63 5 and 6 7 127 4 and 7 8 255 3 and 8 9 511 5 and 9 10 1023 7 and 10 Example 3.1 Suppose that for system to be identified fmax=2Hz and tR=O.l sec. If N=5 and T5=0.I sec are chosen, then we have the inequality of Eq .. (3.4.1) is satisfied: NTS=0.5>tR=O.I The plot of a PRBS signal for N=5 and T5=0.1 is shown in Fig. 3.3. 38

PAGE 50

a; c: 0) Ci5 C/) m 0::: a.. 0.2 0.15 0.1 0.05 0 -0.05 -0.1 -0.15 -0.2 0 ,-----'---0.5 1 -;-;-'----1.5 2 2.5 Time (sec) Fig. 3.2 PRBS signal for N=5 and Ts=O.l sec. 39 '---3

PAGE 51

CHAPTER 4 CONTROL DESIGN FOR THE FLEXIBLE ROBOTIC ARM MODEL The aim of system identification is to design a high-performance control system. There are exists a wide range of control laws. Some controllers may be more suitable than the others to apply for a particular system. Controllers are designed according to the requirements for a system. In some cases it may be necessary to combine a parameter estimation scheme with a control law, so that the control law is updated at each sampling interval (adaptive controllers). In other cases, only the off-line identified model of the system may be enough to design a controller (non-adaptive controller). In this chapter two non-adaptive controllers will be designed. The first controller is based on the state space of the model. The second controller is a transfer-function based linear regulator. In both cases the pole assignment method is used. 40

PAGE 52

4.1 Pole Placement Design in State Space In this section a controller will be designed VIa pole placement for a servo system. In designing a controller one has to utilize the feedback of all the available state variables. However, all state variables are not available for direct measurement. Hence, it is necessary to estimate the state variables which are not available for direct measurement. Observation of the unmeasurable state variables is known as a minimum-order state observation whereas observation of all state variables is known as a full-order observation [13]. Here, the full-order observer is considered in designing a controller because full-order observer results in better system performance when the measurement of the output variables involves noise. Note that the system matrices used in designing the controller are obtained from real-time identification of the robotic arm model and the data collected for the identification of system matrices may be contaminated with measurement noise. Although the measurement noise level is very low, the measurement of the output is relatively inaccurate. Hence, using the full-order observer to design a controller for the robotic arm model is well justified. In this section, only the simulation of the system ts presented. In Chapter 7, the real-time performance of the system will be presented. 41

PAGE 53

Consider the servo system with observed state feedback shown in Fig 4.1. The integrator has been added to the system to eliminate the steady-state error. The output state variable is the only state variable available for direct measurement Hence, it is necessary to estimate the other state variables of the robotic arm model that are not available for direct measurement. However, for the reasons stated above, all the state variable of the system will be estimated. y(k) Fig. 4.1 Servo system with observed state feedback. 42

PAGE 54

The state equation describing the plant is given by x(k+ 1 )=A.x(k) + Bll(k) y(k)=C.x(k) and the observer equation is given by where :i(k+ 1 )=Ai(k)+BL'(k)+CG[v(k)-jl(k)] jl(k)=Ci'(k) x(k)= plant state vector (n-vector) u(k)= control vector (m-vector) y(k)= plant output vector (m-vector) :i(k)= observer state vector j(k)= observer output vector G=nx n matrix B=n xm matrix C=mxn matrix The integrator state equation is v(k)=v(k-1 )+r(k) -y(k) 43 (4.1.1) (4.1.2) (4.1.3) (4.1.4) (4.1.5)

PAGE 55

where v(k)=actuating error(m-vector) r(k)=command input vector(m-vector) Using Eq. (4.1.1) and Eq. (4.1.2), Eq. (4.1.5) can be written as v(k+ 1 )=v(k)+r(k+ 1 )-y(k+ 1) =v(k)+r(k+ 1)-QAl(k)+Bt(k)] = -CAl(k)+v(k) -CBtJ..k) +r(k+ 1) As shown in [13], the control vector u(k) ts gtven by u(k)= -Kzi(k)+K1 v(k) (4.1.6) (4.1.7) The design parameters are K 1 K'b and G. The pole placement design and the observer design are independent of each other because the closed-loop poles of the observed state feedback control system consist of the poles due to the pole placement design plus the poles of the observer. According to the separation principle they can be designed separately and combined together to form the observed state feedback control system. Hence, the integral constant K1 and the state feedback gain K2 are computed separately from the observer gain G. In other words, K1 and K2 are designed without the use of the 44

PAGE 56

observed state feedback. Shifting in time, we can rewrite Eq. ( 4 .1. 7) as u(k+ 1)+K1 v(k+ 1) (4.1.8) Substituting Eq. (4.1.1) and Eq.(4.1.5) into Eq. (4.1.8), we obtain Combining Eqs. (4.1.9) and (4.1.9), and introducing an augmented state vector consisting of x(k) and u(k), we obtain (4.1.10) where it is asswned that r(k+ 1)=r, for k>O, and lm denotes m-dimensional unity matrix for sufficiently large k. Thus, Eq. (4 .. 110) can be written as (4.1.11) Subtracting Eq. (4.1.11) from Eq. (4.1.10), we obtain (4.1.12) 45

PAGE 57

where =(n+m)-veetor (4.1.13) If we define Bl A=lo 0 =(n+m)x(n+m) matrix (4.1.14a) (4.1.14b) (4.1.14c) then Eqs. (4.1.12) and (4.1.13) can be written as (k+ 1 (4.1.15) (4.1.16) respectively. The controllability matrix for the system defmed by Eq. (4.1.15) is given by 46

PAGE 58

which has the dimension of(n+m) by m(n+m) and a rank of(n+m). Matrix K can be computed using the pole placement method. Equation (4.1.14c) can be written as [A-1 B l K=[K2 Kl] II +[0 -1 1 AC CB 111 Hence, we obtain [A-1 B l-1 Kl]=[K+[O I ]] II "' AC CB (4.1.17) K can be computed from Ackennan's fonnula: A A A .IIIIo AIJi'JR-1 A A K=[O 0 ... l][A AB ... A B]ci>{A) (4.1.18) and +(A)is given by (4.1.19) where "1 's are the coefficients of the desired characteristic equation. 47

PAGE 59

The observer feedback gam matrix G can be obtained from the Ackerman's fonnula: G=4(A where c CA 1 0 0 Again m, 's are the coefficients of the desired characteristic equation. Example 4.1 (4.1.20) (4.1.21) A numerical example is given to illustrate the computation of K 1 state feedback gain K2> and the observer feedback gain G. The plant considered m this example is the robotic arm model. The plant model of the robotic arm is identified using real experimental data (see Chapter 6 for details). The order of the system is 6 and the system has one input and one output. The design requirement is that .the response of the system to any command input be deadbeat In other words, it is required that the observed state approaches the 48

PAGE 60

true state as fast as possible. Note that for the robotic arm model only the output state variable is available for direct measurement. The system matrices are l.Ollle+OOO 6.8724e-002 -4.980Se-003 3.S749e-003 6.9258e-OOS -6.7708e-OOS -1.4429e-001 7.4473e-00l 1.9958e-00l 1.2432e-002 3.0047e-003 -8.5978e-004 -3.0748e-002 -6.4023e-001 1.7683e-001 3.0144e-001 -3 .0228e-003 -1.9883e-003 A= -S.OOSle-002 4.5664e-002 -9.5602e-001 8.834le-002 1.2854e-001 -2.645le-002 -8.S083e-003 -1.2266e-001 -l.0670e-001 -6.3616e-001 -7.1734e-001 -1.3588e-001 5.271Se-003 -1.3169e-001 7.3783e-002 -6.1830e-001 6.6445e-001 -3.7099e-001 B=[ -1.2228 -6.2701 -6.8474 -2.4512 -1.9035 -1.3527JT C=[3.66e-002 -9.59e-003 2.26e-003 -2.69e-004 2.95e-OOS -3.63e-006] From Eq.(4.14a) and (4.14b), we obtain 1.011 6.872e-02 -4.98le-03 3.575e-03 6.9258e-05 -6.7708e-05 -1.223 -1.443e-01 7.447e-Ol 1.996e-Ol 1.243e-02 3.005e-03 -B.S98e-04 -6.271 -3.07Se-02 -6.402e-Ol 1.768e-Ol 3.014e-Ol -3.023e-03 -1.988e-03 -6.847 i= -5.008e-02 4.S66e-02 -9.560e-Ol 8.834e-02 l.285e-Ol -2.645e-02 -2.451 -B.SOBe-03 -1.227e-Ol -1.067e-Ol -6.362e-Ol -7.173e-Ol -1.3S9e-Ol -1.904 5.272e-03 -1.317e-01 7.378e-02 -6.183e-Ol 6.645e-Ol -3.710e-01 -1353 0 0 0 0 0 0 0 49

PAGE 61

7 B=[O 0 0 0 0 0 l]T The inverse of the controllability matrix Is ----2 --3 --4--s -1 (B A.B A. B A. B A. B A B A. B J = 0 0 0 0 0 1 l.626e-02 -7.353e-02 -6.406e-Ol -J.ISOe-02 -1.664e-02 -8.37Be-03 0 -9.103e-03 -6.233e-02 2.2Sle-02 S.1S4e-02 4.70Se-02 1.271e-02 0 = -2.621e-02 -3.48Se-02 5.932e-02 -9.942e-03 -6.18Se-02 -1.564e-03 0 -5.61Be-03 -1.964e-02 3.44Be-02 -1.23Se-02 S.S36e-02 -2.S26e-02 0 -4.42Be-02 6.447e-03 -B.B61e-03 -2.309e-02 S.397e-03 B.92Se-02 0 -3.239e-Ol l.lSle-01 -S.473e-02 7.S7Be-02 -2.423e-02 -6.74Se-02 0 The closed-loop poles are placed at the ongm. Hence, the coefficients, a i's, inEq.(4.1.19)areequal to zero. Note that the order of A is 7 since n+m=7. From Eq. (4.1.19) we obtain ci(A)=A7 : 9.4lle-01 1.936e-01 2.407e-02 1.36Se-02 l.IS2e-03 -S.836e-04 -2.563 -4.01Se-01 -2.8SOe-02 7.642e-03 2.82le-03 -1.346e-04 6.342e-OS 3.312e-Ol 1.783e-OJ -2.40Be-02 -3.569e-02 4.83e-03 2.913e-03 -9.84Be-04 -9.2S3e-02 = -2.467e-01 3.048e-02 -I.SBle-02 -3.22Se-02 -8.704e-03 6.210e-04 1.033 9.022e-02 8.S98e-03 4.974e-02 4.149e-02 3.842e-02 1.017e-02 -1.633e-02 2.08Se-Ol 1.665e-02 7.352e-02 1.042e-02 -4.96le-02 l.924e-02 2.862e-Ol 0 0 0 0 0 0 0 50

PAGE 62

Matrix K ts computed from Ackerman's formula (Eq. (4.1.18)), and is given by K=[ -0.396 -o.o637 -o.o123 -o.oos51 0.00121 -o.oo125 o.933J K1 and K2 are computed from Eq.(4.1.17) and are given by K1=-9.9162 K2=[-1.625 -0.027 0.0302 0.0047 0.000319 0.000799] Referring to Eqs. (4.1.20) and (4.1.21), we obtain G=A6 CA 0 CA2 0 CA3 o CA4 0 CAS 1 51

PAGE 63

Thus, G is g:tven by G=[2S.101 -0.450 10.861 46.774 -38.827 604.16S]T The system requires 13 sampling periods to reach the steady-state (i.e. any non-zero error will be driven to zero error in at most 13 sampling period) because the order of the plant plus additional integrator is 7 and the order of the state observer 1s 6. However, as shown in Fig. 4.2, the response required more than 13 sampling periods to settle. This can be explained by the presence of disturbance on the data was used for the identification of the system model. The output of the observed state approaches the true state as fast as possible as shown in Fig 4.3. Fig. 4.4 shows the control input. The change in the magnitude of the control input is due to a change in the input command signal which is a square wave. In general, the observer poles are chosen so that the observer response ts 4 to 5 times faster than the system response [13]. One way to decrease the response time of the observer is to choose all the observer poles at the origin. This is shown in Example 4.2. 52

PAGE 64

2 : 0.15 ____ _,. ________________________ ,e. ::J 0 0.1 ____ _, __ _, _______ ______ _, ____ _,___ ----------: _, ___________ -------......... 0.05 I o ------------j ----------------------.. --... ; ---.... -----! i I ---------------------: ---------------------------------l -----------------------------------------------0.05 --------------------------------------j ---------------------------------------: l l i i -0.1 -------1 --------u ..... -------------: 'I' ----------------------0.15 0 100 200 300 400 500 Sample (k) Fig. 4.2 System response and command signal for the servo system with observed state feedback and integral control. 53

PAGE 65

Fig. 4.3 Observer output and command signal for the servo system with observed state feedback and integral control.

Fig. 4.4 Control signal for the servo system with observed state feedback and integral control.

Example 4.2

Consider the system shown in Fig. 4.1. Now it is desired that the closed-loop poles be placed at z = 0.8252 + j0.1497 and z = 0.8252 - j0.1497, which results in the following characteristic equation:

z^2 - 1.6504z + 0.7034 = 0

The observer poles are placed at the origin. Hence, the observer feedback gain matrix G is the same as in Example 4.1. Therefore, only K1 and K2 need to be computed in this example. K1 and K2 are obtained from Eq. (4.1.17):

K1 = -0.5255
K2 = [-0.1805  -0.0158  0.0021  0.0008  0.0061  0.0070]

The responses of the system and the observer are shown in Fig. 4.5 and Fig. 4.6, respectively. It can be seen that they are almost identical. As shown in Fig. 4.5, the system response does not exhibit overshoots because the observer poles are located at the origin and the closed-loop poles generated by state feedback are at 0.8252 ± j0.1497. The comparison of the results obtained in this example with the results in Example 4.1 shows that the settling time in Fig. 4.5 is smaller than the settling time in Fig. 4.2. This is due to the fact that the response speed of the observer is faster than the response speed of the system. In Example 4.1 the response speeds of the observer and the system were roughly the same. However, the rise time in Fig. 4.2 is smaller than the rise time in Fig. 4.5. As shown in Fig. 4.7, the magnitude of the control signal is smaller than the one in Example 4.1 because only the observer poles are placed at the origin. In Example 4.1 both

the observer poles and the system poles are placed at the origin, which resulted in a large control magnitude. Overall, the design in Example 4.2 gives better system performance.

Fig. 4.5 System output and command signal for the servo system with observed state feedback and integral control designed in Example 4.2.

Fig. 4.6 Observer output and command signal for the servo system with observed state feedback and integral control designed in Example 4.2.

Fig. 4.7 Control signal for the servo system with observed state feedback and integral control designed in Example 4.2.

4.2 Transfer Function-Based Design of Linear Regulator

Consider a process characterized by the following difference equation

A(q^-1)y(t) = q^-d B(q^-1)u(t)     (4.2.1)

where A(q^-1) and B(q^-1) are polynomials in the backward shift operator q^-1 and d is the delay. It is desired to design a controller such that the tip of the robotic arm model follows a given reference signal. This can be achieved using a linear regulator which is given by

R(q^-1)u(t) = T(q^-1)r(t) - S(q^-1)y(t)     (4.2.2)

where the coefficients of the polynomials R(q^-1), S(q^-1) and T(q^-1) are control parameters designed to obtain the desired closed-loop response of the system. Combining Eq. (4.2.1) and Eq. (4.2.2) we obtain

y(t) = [q^-d B(q^-1)T(q^-1) / (A(q^-1)R(q^-1) + q^-d B(q^-1)S(q^-1))] r(t)     (4.2.3)

and the desired closed-loop transfer function is given by

q^-d Bm(q^-1) / Am(q^-1)     (4.2.4)

A block diagram of the closed-loop system is shown in Fig. 4.8.

Fig. 4.8 Block diagram of the closed-loop system

The design task is to design a controller such that the closed-loop system gives the desired response to the command signal r(t), i.e.,

q^-d T B / (AR + q^-d BS) = q^-d Bm / Am     (4.2.5)

In our design, we shall follow the procedure outlined in [1,4,10]. The coefficients of the polynomials R, S and T are control parameters which are computed from the following relationships obtained by equating the numerators and the denominators in Eq. (4.2.5)

AR + q^-d BS = Am     (4.2.6)
T = Bm     (4.2.7)

where Am is a polynomial specifying the desired closed-loop poles. Defining

p = max(n, m)

where n = deg A(q^-1) and m = deg B(q^-1), Eq. (4.2.6) has a unique solution when A(q^-1) and B(q^-1) do not have a common factor. The degree of the polynomial Am(q^-1) should satisfy the following inequality

deg Am(q^-1) <= 2p + 1

The polynomials R(q^-1) and S(q^-1) are given by

R(q^-1) = 1 + r1 q^-1 + ... + rnr q^-nr     (4.2.8)
S(q^-1) = s0 + s1 q^-1 + ... + sns q^-ns     (4.2.9)

and the degrees of R(q^-1) and S(q^-1) are obtained from

deg S(q^-1) = ns = deg B(q^-1) - 1     (4.2.10)

In order to solve Eq. (4.2.6), we put it in the matrix form

Mx = l     (4.2.11)

where x collects the unknown coefficients of R(q^-1) and S(q^-1) and l collects the coefficients of Am(q^-1) (Eq. (4.2.12)), and the matrix M is the (2p+1) x (2p+1) Sylvester matrix built from the coefficients of A(q^-1) and B(q^-1): its left columns contain the coefficients 1, a1, ..., an in successively shifted positions and its right columns contain the coefficients b0, b1, ..., bm arranged in the same way.

The vector x is obtained from

x = M^-1 l     (4.2.13)

If A and B are relatively prime, R and S can be designed by assigning the closed-loop poles arbitrarily [1]. The closed-loop poles of the system are obtained from the desired characteristics of the system. The desired characteristic equation can be obtained from the specification given to the designer. Generally, the desired polynomial is given for a continuous-time system, specifying the natural frequency ω0 and the damping ζ, and the discretization of the continuous-time system is carried out beforehand with a properly chosen sampling interval Ts. The desired characteristic equation corresponding to deadbeat performance, where all closed-loop poles are located at the origin, is given by

Am(q^-1) = 1

In this case the system response settles in at most n steps. This method has no restriction on the process zeros, i.e., the process zeros can be stable or unstable.

Procedure 1

It is assumed that the process zeros are the closed-loop zeros and the desired response is characterized by Am and Bm. Then, as established in [1], the design procedure is summarized as follows:

Step 1  Solve the equation

AR + q^-d BS = Am

with respect to R and S. The degrees of R and S are given by Eq. (4.2.10).

Step 2  Use the control law given by

Ru(t) = Tr(t) - Sy(t)

where

T = Bm = Am(1)/B(1)

This design procedure does not cancel unstable system zeros and can be used for non-minimum phase systems (systems with unstable zeros). Cancellation of the unstable process zeros (non-minimum phase system) must be avoided. The only restriction of this method is that A and B should not have common factors. However, the system zeros can be cancelled since deg Am is normally less than deg (AR + q^-d BS). It follows from Eq. (4.2.5) that factors of B which are not also factors of Bm must be factors of R. The open-loop zeros (factors of B correspond to open-loop zeros) which are not desired closed-loop zeros must be cancelled. Thus B is factored as

B = B+ B-     (4.2.14)

where the zeros of B+ are stable (well damped modes) and the zeros of B- are unstable (poorly damped modes). A necessary condition for solvability of the servo problem is

Bm = B- Bm1     (4.2.15)

This equation defines Bm1. As shown in [1], the design procedure in which the open-loop zeros which are not desired closed-loop zeros are cancelled can be described as follows:

Procedure 2

Data: Given the process model A and B and the desired response characterized by Am and Bm. Assume that Eqs. (4.2.14) and (4.2.15) are satisfied.

Step 1  Factor the polynomial B into B = B+ B-.

Step 2  Solve the equation

A R1 + q^-d B- S = Am

with respect to R1 and S. The degrees of R1 and S are obtained from

deg S = deg B- - 1
deg R1 = deg B + d - 1     (4.2.16)

Step 3  Use the control law given by

Ru(t) = Tr(t) - Sy(t)

where R = R1 B+ and T = Bm1.

A and B- are coprime since A and B are assumed to be coprime. This method can also be used for non-minimum phase systems. However, the factorization in step 1 is time consuming if deg B is very large. In self-tuning algorithms, where computation time is critical, this calculation has to be repeated between consecutive sampling instants. Note that for minimum phase systems all the process zeros can be cancelled.

Example 4.3

The system considered in this example is a flexible robotic arm model, which is a non-minimum phase system. Thus, unstable process zeros must

not be cancelled. Further, to avoid the factorization problem, no process zeros are cancelled. Note that the model of the flexible robotic arm was obtained from real-time identification. The model is given by Eq. (4.2.1) with

A(q^-1) = 1 - 0.2505q^-1 - 0.1901q^-2 - 0.1381q^-3 - 0.0927q^-4 - 0.0664q^-5 - 0.0840q^-6
B(q^-1) = -0.0013 - 0.0011q^-1 + 0.012q^-2 + 0.0939q^-3 + 0.1011q^-4 + 0.0622q^-5 + 0.0371q^-6

The desired characteristic equation is Am(q^-1) = 1, corresponding to deadbeat performance. The design parameters are the coefficients of the polynomials R, S and T. The polynomials R and S are obtained by solving AR + q^-d BS = Am and are given by

R(q^-1) = 1 + 0.2511q^-1 + 0.2549q^-2 + 0.2464q^-3 + 0.1883q^-4 + 0.1023q^-5 + 0.0760q^-6
S(q^-1) = 0.4464 + 0.7068q^-1 + 0.4508q^-2 + 0.1977q^-3 + 0.0791q^-4 + 0.1722q^-5

and T = Am(1)/B(1) = 3.3014.
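The computation above can be reproduced with a few lines of MATLAB by writing AR + q^-d BS = Am as the linear system Mx = l of Eq. (4.2.11) and solving it. The sketch below is one way to set that system up; the delay d = 1 is assumed, and the resulting R and S should agree with the coefficients quoted above up to the rounding of the printed model.

% Deadbeat design of Example 4.3: solve A*R + q^-d*B*S = Am with Am = 1.
A  = [1 -0.2505 -0.1901 -0.1381 -0.0927 -0.0664 -0.0840];   % identified A(q^-1)
B  = [-0.0013 -0.0011 0.012 0.0939 0.1011 0.0622 0.0371];   % identified B(q^-1)
d  = 1;                       % assumed delay
nr = 6;                       % deg R (R is monic)
ns = 5;                       % deg S, Eq. (4.2.10)
N  = length(A) - 1 + nr;      % highest power of q^-1 in A*R, here 12

M = zeros(N, nr + ns + 1);    % columns: contribution of r1..r6, then s0..s5
for i = 1:nr
    col = conv(A, [zeros(1, i) 1]);              % A(q^-1) shifted by q^-i
    col = [col zeros(1, N + 1 - length(col))];
    M(:, i) = col(2:N + 1).';
end
for j = 0:ns
    col = conv(B, [zeros(1, d + j) 1]);          % B(q^-1) shifted by q^-(d+j)
    col = [col zeros(1, N + 1 - length(col))];
    M(:, nr + 1 + j) = col(2:N + 1).';
end

lvec = zeros(N, 1);                 % the vector l of Eq. (4.2.11)
lvec(1:length(A) - 1) = -A(2:end).';% coefficients of Am - A (Am = 1)
x = M \ lvec;                       % Eq. (4.2.13)

R = [1 x(1:nr).'];                  % monic R(q^-1)
S = x(nr + 1:end).';
T = 1 / sum(B);                     % T = Am(1)/B(1), approximately 3.3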


The output of the system and the control input are shown in Fig. 4.9 and Fig. 4.10, respectively.

Fig. 4.9 System output and reference signal when no process zeros are cancelled.

Fig. 4.10 Control signal when no process zeros are cancelled.

CHAPTER 5

ADAPTIVE CONTROL

The basic idea behind designing an adaptive control system is to combine a particular parameter estimation scheme with any control law. The plant parameters are estimated recursively and the control law parameters are computed in each step, using the estimated parameters instead of the true ones [1,4]. This approach is known as certainty equivalence adaptive control. The control law parameters can be identified directly by parameterizing the system directly in terms of the control law. This class of algorithms is known as direct algorithms. If the parameters of the system are explicitly estimated and are available as an intermediate result used to compute the control law parameters, then the algorithm is called an indirect algorithm [1,4]. The block diagram of an adaptive control system is shown in Fig. 5.1.

Fig. 5.1 Schematic diagram of an adaptive control system

5.1 Parameter Estimation

It is highly desirable that the estimation algorithm be simple and easy to implement, because on-line estimation algorithms update the parameter estimates between consecutive sampling instants. Two types of estimation schemes will be discussed in this thesis: the recursive least squares algorithm and the gradient algorithm.

5.1.1 Recursive Least-Squares Algorithm

Consider a process model given by

A(q^-1)y(t) = q^-d B(q^-1)u(t)     (5.1.1)

Equation (5.1.1) can be written as

y(t) = θ^T φ(t)     (5.1.2)

where

θ^T = [a1 ... an  b0 ... bm]     (5.1.3)

is the parameter vector, and

φ(t)^T = [-y(t) ... -y(t-n), u(t-d) ... u(t-d-m)]     (5.1.4)

is the measurement vector. If the parameters are unknown, a recursive least squares algorithm will be used to estimate them. The prediction model is described as

ŷ(t) = θ̂^T φ(t)     (5.1.5)

where θ̂ is the estimated parameter vector and φ(t) is given by Eq. (5.1.4). As shown in [1,4], the recursive least squares algorithm is given by

θ̂(t) = θ̂(t-1) + [P(t-d-1)φ(t-d) / (1 + φ(t-d)^T P(t-d-1)φ(t-d))] [y(t) - φ(t-d)^T θ̂(t-1)]     (5.1.7)

P(t-d) = (1/λ)[P(t-d-1) - P(t-d-1)φ(t-d)φ(t-d)^T P(t-d-1) / (λ + φ(t-d)^T P(t-d-1)φ(t-d))]     (5.1.8)

The weighting sequence λ can be used if greater weight must be given to more recent data in time-varying systems, or to discard initial data in nonlinear systems. Typical values of λ are between 0.95 and 0.99. However, using λ < 1 coupled with changes in the command signal may cause bursts in the system output [1]. The cause of this type of burst can be explained as follows. When there are no changes in the command input for a long time, P(t)φ(t) will be zero and thus there will be no change in the parameter estimate. This will lead the negative term in Eq. (5.1.8) to be zero, because the negative term represents a reduction in parameter uncertainty due to the last measurement. Then Eq. (5.1.8) becomes

P(t) = P(t-1)/λ     (5.1.9)

and the covariance matrix P becomes very large due to the lack of persistent excitation [1]. A change in the command signal may then result in large changes

in parameter estimates and thus in the system output. Bursting phenomena may also occur in an adaptive control system if unmodelled dynamics and external disturbances are present [15]. Even with perfect modelling of the system, algorithms with vanishing gain can exhibit bursting phenomena. Although it gives a fast convergence rate initially, the least squares algorithm gain approaches zero. Bursting occurs periodically, which means that a self-excitation mechanism may stabilize the system [14]. However, these bursts, which cause very large control signals, are not tolerable in practice. There are many ways to eliminate bursts. The use of exponential data weighting with variable forgetting, resetting of the covariance matrix, switching off the estimator if P is not positive semidefinite, or stopping the update of P when the prediction error is smaller than a given value may eliminate a possible bursting phenomenon in the adaptive control loop [1,4]. Although bursting can occur in the perfect modelling case, we are interested, in this research, in bursting which occurs due to unmodelled dynamics and possibly external disturbances. The reason is that the flexible arm has an infinite number of modes and has to be modelled with a finite number of modes. Thus, the unmodelled dynamics in the flexible arm may cause bursting in the adaptive loop.
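As an illustration of the last of these remedies, the MATLAB sketch below freezes the least squares update whenever the prediction error is below a threshold. It is a minimal, assumed dead-zone form, not the specific modification proposed in [1]; e_min is a hypothetical tuning threshold.

function [theta, P] = rls_deadzone_step(theta, P, phi, y, e_min)
% One recursive least squares step that is skipped when the prediction
% error is small, so that P is not updated on uninformative data.
e = y - phi.' * theta;                    % prediction error
if abs(e) > e_min
    K     = P * phi / (1 + phi.' * P * phi);
    theta = theta + K * e;                % parameter update
    P     = P - K * (phi.' * P);          % covariance update (lambda = 1)
end                                       % otherwise theta and P are unchanged
end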


The modifications of the least squares algorithm proposed in [1] were implemented during the real-time running of the flexible arm system and did not work well. However, the gradient algorithm, which does not require persistently exciting conditions, gave satisfactory results.

5.1.2 Gradient Algorithm

Consider a system with unmodelled dynamics, where Δ1(q^-1) and Δ2(q^-1) denote additive and multiplicative system perturbations and Δ3(q^-1) denotes an external disturbance [14]. The polynomials A and B are given by

A(q^-1) = 1 + a1 q^-1 + ... + an q^-n
B(q^-1) = b0 + b1 q^-1 + ... + bm q^-m

Since the parameters are not known, they have to be estimated. Define a vector of parameter estimates

θ̂(t)^T = [â1(t) ... ân(t)  b̂0(t) ... b̂m(t)]     (5.1.11)

and a measurement vector

φ(t)^T = [-y(t), ..., u(t), ..., u(t-m)]     (5.1.12)

The cost function to be minimized for the deterministic case is:

     (5.1.13)

and the cost function to be minimized for the stochastic case is:

     (5.1.14)

As shown in [14], the gradient algorithm for the deterministic adaptive control system is given by

     (5.1.15)

where

     (5.1.16)

and

     (5.1.17)

where λ is a fixed number satisfying

As shown in [15], the stochastic gradient algorithm is given by

     (5.1.18)
     (5.1.19)
     (5.1.20)

with constants a > 0 and δ > 0. The global stability analysis for the designed adaptive systems in the presence of unmodelled dynamics and external disturbances is developed in [14], where the usefulness of the developed methodology is demonstrated on robust deterministic and stochastic adaptive control systems. A stochastic gradient algorithm is designed in [15], where it is shown that it has a better rate of convergence of the tracking error than the extended least squares (or least squares) self-tuning algorithm.
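Because the displayed forms of Eqs. (5.1.15)-(5.1.20) are not reproduced here, the MATLAB sketch below shows only the generic normalized-gradient update that this family of algorithms shares; the step size lambda and the normalization constant n0 are assumed tuning parameters (the experiments in Chapter 7 use n0 = 10 and lambda = 0.7), and the exact normalization used in [14,15] may differ.

function theta = gradient_step(theta, phi, y, lambda, n0)
% One normalized-gradient parameter update (assumed generic form).
% theta - current estimate, phi - measurement vector, y - new output,
% lambda - fixed step size, n0 - normalization constant.
e     = y - phi.' * theta;                          % prediction error
theta = theta + lambda * phi * e / (n0 + phi.' * phi);
end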


5.2 Adaptive Algorithms for Closed-Loop Pole Placement

The parameters to be estimated are the coefficients of the polynomials A(q^-1) and B(q^-1) of the model (5.2.1), and the desired closed-loop transfer function is given by q^-d Bm(q^-1)/Am(q^-1). There are several assumptions which have to be made:

1) n = deg A(q^-1) and m = deg B(q^-1) are known.
2) A(q^-1) and B(q^-1) are relatively prime.
3) The system delay is known a priori.
4) The time delay in the desired closed-loop transfer function is equal to or less than the system's delay.
5) For non-minimum phase systems, the unstable zeros must be included in the desired closed-loop transfer function as zeros, i.e., Bm = B- Bm1, where the zeros of B- are unstable.

The unstable zeros within the system bandwidth have a significant effect on the

properties of the closed-loop system, since they cannot be cancelled [1]. The control signal is generated by

R(q^-1)u(t) = T(q^-1)r(t) - S(q^-1)y(t)     (5.2.2)

The polynomials R and S are obtained by solving the equation

AR + q^-d BS = Am     (5.2.3)

where Am specifies the desired closed-loop poles. If Am is a stable polynomial, then the closed-loop system is asymptotically stable. Generally, the desired closed-loop poles are chosen in the form of the dominant poles of a second-order continuous-time system [1,10]. For higher order systems the remaining poles can be assigned near or at the origin. For the deadbeat controller all the closed-loop poles are assigned at the origin. Am can be chosen by discretization of a second-order continuous-time system as

Am(q^-1) = 1 - 2e^(-ζω0Ts) cos(ωTs) q^-1 + e^(-2ζω0Ts) q^-2     (5.2.4)

with the damping coefficient ζ, sampling period Ts and frequency ω given by

ω = ω0 sqrt(1 - ζ^2)

where ω0 is the natural frequency.
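A minimal MATLAB sketch of this discretization; the numerical values of ζ, ω0 and Ts below are illustrative only:

% Desired closed-loop polynomial Am(q^-1) from second-order specifications.
zeta = 0.7;   w0 = 2*pi*2;   Ts = 0.02;   % illustrative damping, natural frequency, sampling period
w  = w0 * sqrt(1 - zeta^2);               % damped frequency
p  = exp((-zeta*w0 + 1i*w) * Ts);         % one of the two desired poles in the z-plane
Am = real(poly([p conj(p)]));             % Am = [1  -2*exp(-zeta*w0*Ts)*cos(w*Ts)  exp(-2*zeta*w0*Ts)]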


Once the closed-loop system performance is specified, an adaptive algorithm for closed-loop pole assignment can be implemented as in Fig. 5.1. Depending on the type of system (stable or unstable zeros), the following adaptive algorithms can be used for closed-loop pole assignment [1]. The algorithms are based on indirect identification.

Algorithm 1 (no process zeros are cancelled)

Data: It is assumed that the process zeros are the closed-loop zeros. The desired response is characterized by Am and Bm.

Step 1  Estimate the parameters of the model Ay(t) = q^-d Bu(t) by one of the appropriate estimation schemes.

Step 2  Solve the equation

AR + q^-d BS = Am     (5.2.5)

with respect to R and S. The degrees of R and S are given by Eq. (4.2.10).

Step 3  Use the control law given by

Ru(t) = Tr(t) - Sy(t)

where T = Bm = Am(1)/B(1). The steps 1 through 3 are repeated at each sampling interval. The algorithm can be used for non-minimum phase systems since it does not cancel system zeros. The next algorithm cancels system zeros.

Algorithm 2 (stable process zeros are cancelled)

Data: It is assumed that the unstable process zeros are included in the closed-loop system as zeros. The desired response is characterized by Am and Bm.

Step 1  Estimate the parameters of the model Ay(t) = q^-d Bu(t) by one of the appropriate estimation schemes.

Step 2  Factor the polynomial B as B = B+ B-, where the zeros of B+ lie inside the unit circle and the zeros of B- lie outside the unit circle.

Step 3  Solve the equation

AR1 + q^-d B- S = Am

with respect to R1 and S. The degrees of R1 and S are given by Eq. (4.2.16).

Step 4  Use the control law given by

Ru(t) = Tr(t) - Sy(t)

where R = R1 B+ and T = Bm1. The steps 1 through 4 are repeated at each sampling period. The drawback of this algorithm is that the factorization in step 2 is time consuming. Computation time is critical, especially for high-order systems, and the factorization in step 2 may take too long for the control input to be generated within each sampling period. For minimum phase systems all the process zeros can be cancelled.
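In both algorithms the final step amounts to computing one new control sample from the current reference and the stored past inputs and outputs. A minimal MATLAB sketch of that computation (the function name and argument layout are illustrative, not taken from the programs in the Appendix):

function u_now = rst_control(R, S, T, r_now, y_hist, u_hist)
% Compute u(t) from R(q^-1)u(t) = T r(t) - S(q^-1)y(t), with R monic.
% R, S are coefficient row vectors, T is a scalar gain.
% y_hist = [y(t) y(t-1) ...] and u_hist = [u(t-1) u(t-2) ...] must hold
% at least length(S) and length(R)-1 past samples, respectively.
u_now = T * r_now ...
      - S * y_hist(1:length(S)).' ...
      - R(2:end) * u_hist(1:length(R)-1).';
end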


5.3 Simulations

The system used for simulation is a flexible robotic arm model. The polynomials A and B used in the simulations are obtained from real-time identification of the system (see Chapter 6). Since the flexible robotic arm model is a non-minimum phase system, Algorithm 1 is used in all simulations, and Am(q^-1) = 1 is chosen. The reference signal is a square wave of amplitude 0.2 and a period of 500 samples.

Example 5.1

The estimation scheme used in this example is the recursive least squares algorithm given by Eqs. (5.1.7) and (5.1.8). The initial value of the covariance matrix is chosen as P0 = 0.1·I, where I is a 13 by 13 identity matrix. Further, λ = 1 is chosen. The polynomials R, S and T after 2000 iterations are given by

R(q^-1) = 1 + 0.2511q^-1 + 0.2549q^-2 + 0.2464q^-3 + 0.1883q^-4 + 0.1023q^-5 + 0.0760q^-6
S(q^-1) = 0.4464 + 0.7086q^-1 + 0.4508q^-2 + 0.1977q^-3 + 0.0791q^-4 + 0.1722q^-5
T = 3.3014

The output and the control signal are shown in Fig. 5.2 and Fig. 5.3, respectively. The large output excursion at the beginning is due to the fact that the estimates were initially at a great distance from the optimum. However, as time elapsed, the system output was able to follow the reference signal due to the convergence of the parameter estimates to the true ones.
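A minimal MATLAB sketch of one update of the recursive least squares algorithm of Eqs. (5.1.7) and (5.1.8) as used in this example; here theta and phi have 13 elements, P is initialized to 0.1 times the identity matrix, and lambda = 1 (for which the two denominators below coincide):

function [theta, P] = rls_step(theta, P, phi, y, lambda)
% One recursive least squares update with forgetting factor lambda.
e     = y - phi.' * theta;                                       % prediction error
Pphi  = P * phi;
theta = theta + Pphi * e / (1 + phi.' * Pphi);                   % Eq. (5.1.7)
P     = (P - Pphi * Pphi.' / (lambda + phi.' * Pphi)) / lambda;  % Eq. (5.1.8)
end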


Fig. 5.2 System output and reference signal when a recursive least squares algorithm is used.

Fig. 5.3 Control signal when a recursive least squares algorithm is used.

Example 5.2

In reality the flexible arm is characterized by an infinite number of modes. Thus, we have a case of unmodelled dynamics which could give bursting phenomena. To imitate this real-time situation in simulation, we estimate 9 parameters for the system described by 13 parameters. The order of the system

to be estimated is 4. The polynomials A and B to be estimated are given by

A(q^-1) = 1 + a1q^-1 + a2q^-2 + a3q^-3 + a4q^-4
B(q^-1) = b0 + b1q^-1 + b2q^-2 + b3q^-3 + b4q^-4

respectively. All the initial conditions are the same as in Example 5.1. The polynomials R, S and T after 2000 iterations are given by

R(q^-1) = 1 + 0.1740q^-1 + 0.1268q^-2 + 0.1532q^-3 - 0.0126q^-4
S(q^-1) = -2.0162 + 1.0977q^-1 - 0.8229q^-2 + 0.0575q^-3
T = 6.91

respectively. The output and the control signal are shown in Fig. 5.4 and Fig. 5.5, respectively. As shown in Fig. 5.4, except at the beginning, the output of the system follows the reference signal. However, the system behaved poorly when the recursive least squares algorithm was used during the experiment.

Fig. 5.4 System output when a recursive least squares algorithm is used in the presence of unmodelled dynamics.

Fig. 5.5 Control signal when a recursive least squares algorithm is used in the presence of unmodelled dynamics.

Example 5.3

During a real-time run the least squares algorithm did not work well. However, the gradient algorithm given by Eq. (5.1.14) gave satisfactory results. Here Example 5.2 is repeated with the estimation scheme given by Eq. (5.1.14). The polynomials R, S and T are obtained from Eq. (5.2.2) and are given by

R(q^-1) = 1 + 0.1917q^-1 + 0.1673q^-2 + 0.2155q^-3 - 0.0355q^-4
S(q^-1) = -2.0002 + 1.3011q^-1 - 0.602q^-2 - 0.1806q^-3
T = 8.2020

The output and the control signal are shown in Fig. 5.6 and Fig. 5.7, respectively.

Fig. 5.6 System output and reference signal when the gradient algorithm is used in the presence of unmodelled dynamics.

Fig. 5.7 Control signal when the gradient algorithm is used in the presence of unmodelled dynamics.

CHAPTER 6

IDENTIFICATION OF THE FLEXIBLE ROBOTIC ARM MODEL

In this chapter the system parameters will be identified using two methods:

1) Observer/Kalman filter identification
2) Recursive least squares algorithm

In both approaches a frequency-rich PRBS signal is used for correct identification of the system.

6.1 Observer/Kalman Filter Identification

The system to be identified has one input and one output. The experiment was performed for the open-loop system. The identification method used in this section is off-line: the data from the output of the system were recorded and made available for later identification of the system. The identification of the system was carried out using the Matlab program OKID [7], which was developed at NASA Langley Research Center. Following [6,7], the computational steps of OKID are shown in Fig.

6.1. The input is a PRBS signal with a total of 1023 samples. The sampling period is 0.008 seconds.

Fig. 6.1 Computational steps of OKID.

The first step in identifying the system is to choose p for Eq. (3.2.12). p is chosen to be 6. Thus, the maximum system order will be pm = 6, where m = 1 is the number of outputs. Although p is required to be at least four times larger than the effective order of the system, choosing p = 6 gives better system responses for the given system. The observer Markov parameters are obtained from Eq. (3.2.12). The input excitation and the output measurement

used to compute the observer Markov parameters are shown in Fig. 6.2 and Fig. 6.3, respectively. The observer pulse response sequence is shown in Fig. 6.4.

Fig. 6.2 Input excitation.

Fig. 6.3 Output measurement of the robotic arm model.

Fig. 6.4 Observer pulse response sequence.

Next, the system Markov parameters and the observer gain Markov parameters are computed from the identified observer parameters. The system Markov parameters are computed by using Eq. (3.1.19), whereas the observer gain Markov parameters are obtained from Eq. (3.26). The system pulse response and the observer gain pulse response are shown in Fig. 6.5 and Fig. 6.6, respectively.

Fig. 6.5 System Markov pulse response.

Fig. 6.6 Observer gain pulse response.

The system model and its corresponding observer gain are obtained from the system Markov parameters and observer gain Markov parameters using the Matlab program ERADC [7]. The system order identified from ERADC was chosen to be 6. The plot of the Hankel matrix singular values is shown in Fig. 6.7. Note that the number of the non-zero singular values equals the system order. The magnitude of the Hankel matrix singular values measures the contribution of each state. In general, p should be chosen at least four

times larger than the order of the system model for accurate identification of the Kalman filter gain. Since the system data was assumed to have only a very low measurement noise level, i.e., the signal/noise ratio is relatively large, the order of the system n is chosen to be equal to the number p. The state matrix A, input matrix B, output matrix C and the transformation matrix D are given below.

A = [ 1.0111e+000   6.8724e-002  -4.9805e-003   3.5149e-003   6.9258e-005  -6.7708e-005
     -1.4429e-001   7.4473e-001   1.9958e-001   1.2432e-002   3.0047e-003  -8.5978e-004
     -3.0748e-002  -6.4023e-001   1.7683e-001   3.0144e-001  -3.0228e-003  -1.9883e-003
     -5.0051e-002   4.5664e-002  -9.5602e-001   8.8341e-002   1.2854e-001  -2.6451e-002
     -8.5081e-003  -1.2266e-001  -1.0670e-001  -6.3616e-001  -7.1734e-001  -1.3588e-001
      5.2715e-003  -1.3169e-001   7.3783e-002  -6.1830e-001   6.6445e-001  -3.7099e-001 ]

B = [-1.2228  -6.2701  -6.8474  -2.4512  -1.9035  -1.3527]^T

C = [3.66e-002  -9.59e-003  2.27e-003  -2.69e-004  2.95e-005  -3.63e-006]

D = 3.1895e-4

Fig. 6.7 Hankel matrix singular values.

The modal parameters, including frequencies, damping ratios and mode shapes, are obtained from the system matrices. The modal parameters are shown in Table 6.1. The first four modes are the flexible robotic arm model modes and the last two modes are from the DC motor dynamics. In Table 6.1 the fourth column shows the mode singular values and the fifth column shows the Modal Amplitude Coherence.

Table 6.1 Identified modal parameters for the flexible robotic arm model.

Mode No   Damping (%)   Frequency (Hz)   Mode SV     MAC
   1        18.246         52.515        0.033084     1
   2        18.246         52.515        0.033084     1
   3        35.03          27.348        0.11155      1
   4        35.03          27.348        0.11155      1
   5          -              -              -         -
   6          -              -              -         -

Figure 6.8 shows the estimated output and the measured output, and Fig. 6.9 shows the predicted and the measured outputs of the system. As shown in Fig. 6.8, the estimated output is in good agreement with the measured output, because the estimated output is reconstructed from the identified observer and the identified observer is correcting for system uncertainties. In contrast, the predicted output is reconstructed only from the identified model, which does not correct for system uncertainties. As a result there is a significant difference between the predicted output and the measured output. The error between the predicted output and the measured output is shown in Fig. 6.10. Also shown in Fig. 6.10 is the variance of the predicted output. The frequency response of the identified model is shown in Fig. 6.11.
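As a cross-check on Table 6.1, the modal frequencies and damping ratios can be recovered from the eigenvalues of the identified state matrix A given above; a minimal MATLAB sketch, assuming the 0.008 s sampling period of the identification data:

Ts   = 0.008;                    % sampling period of the identification data
z    = eig(A);                   % discrete-time eigenvalues of the identified model
s    = log(z) / Ts;              % equivalent continuous-time eigenvalues
f_hz = abs(s) / (2*pi);          % modal frequencies in Hz
zeta = -real(s) ./ abs(s);       % damping ratios
disp([f_hz 100*zeta])            % compare with the Frequency and Damping columns of Table 6.1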


Fig. 6.8 Estimated output of the system.

Fig. 6.9 Predicted output of the identified model and the real output.

Fig. 6.10 Normalized prediction error.

Fig. 6.11 Frequency response of the identified model.

6.2 Recursive Least Squares Algorithm

The recursive least squares algorithm discussed in Chapter 3 is used to identify the parameters of the flexible robotic arm model. As shown in Fig. 6.12, some parameters of the flexible arm are not converging fast enough. The output measurement is shown in Fig. 6.13 and the predicted output is shown in Fig. 6.14. The recursive least squares algorithm with a variable forgetting factor was used to identify the parameters of the plant. The forgetting factor λ1(t) is given by

λ1(t) = λ0 λ1(t-1) + 1 - λ0

where λ1(0) = 0.99 and λ0 = 0.99 were used. These values gave the better result overall. Further, λ2(t) = 1. The criterion minimized in this case is given by Eq. (3.3.14). The reason for choosing the variable forgetting factor was to avoid the rapid decrease in adaptation gain when the estimates are at a great distance from the optimum at the beginning of the adaptation mechanism. However, this was not the case for a few parameter estimates, which converge slowly as shown in Fig. 6.12. The reason is that the forgetting factor λ1(t) tends toward 1 quickly, which results in a decreasing gain. However, the use of other values of λ1(0) and λ0 and other profiles of the weighting sequences were not as successful as the one used here for the given system.
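A minimal MATLAB sketch of this forgetting-factor profile; inside the estimation loop the current value of lambda1 would take the place of the fixed lambda in the covariance update of Eq. (5.1.8):

% Variable forgetting factor lambda1(t) = lambda0*lambda1(t-1) + 1 - lambda0.
lambda0 = 0.99;
lambda1 = 0.99;                       % lambda1(0)
nsteps  = 500;
profile = zeros(nsteps, 1);
for t = 1:nsteps
    lambda1    = lambda0 * lambda1 + 1 - lambda0;   % tends toward 1
    profile(t) = lambda1;
end
plot(profile), xlabel('Sample (k)'), ylabel('lambda_1(k)')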


Fig. 6.12 Parameter estimates of the flexible arm when a recursive least squares algorithm is used.

Fig. 6.13 Output measurement when a recursive least squares algorithm is used.

Fig. 6.14 Predicted output when a recursive least squares algorithm is used.

CHAPTER 7

EXPERIMENTAL RESULTS

A number of experiments were conducted to control the flexible robotic arm model. An adaptive algorithm based on pole assignment was implemented on the computer to control the flexible robotic arm model. The algorithm used in this experiment is described in section 5.2.1. The estimation schemes used in the algorithm are given by Eqs. (5.1.14) and (5.1.17). The attempts to use the recursive least squares estimation scheme were unsuccessful, largely due to the unmodelled dynamics present in the system. The schematic diagram used to implement the adaptive pole placement controller is shown in Fig. 5.1.

7.1 Choice of Sampling Frequency and Order of the System Model

Before implementing the adaptive control scheme, the order of the system, n, has to be determined. Since the system has an infinite number of modes, where each mode is modeled by a second order system, the system cannot be overmodelled. The plant is always undermodelled. However, by

choosing n large enough, the system can be modelled accurately enough. The order of the system n should be chosen large enough so that the dominant poles are included in the design of the controller [16]. But n cannot be too large, because it will take the computer a longer time to implement the control scheme. The control parameters should be computed during each sampling interval. This will not be possible if n is chosen too large. In this thesis, n = 6 is chosen. This number was determined through experimentation and gave satisfactory results. The sampling period Ts was chosen to be Ts = 0.02 sec. This value of Ts is large enough for the control signal to be computed during each sampling interval and yet it is small enough to give a sufficiently high sampling rate. The sampling theory states that the sampling frequency should be at least twice the system bandwidth. However, in practice, the sampling frequency should be much higher than the system bandwidth, depending on the particular performance requirements of the system. The rule of thumb in choosing the sampling frequency is that the sampling frequency should be as much as 25 times the system bandwidth [2,16]. The bandwidth of the system is 2 Hz. In this thesis, the sampling frequency fs = 50 Hz is chosen. Note that the choice of Ts is limited by the computation speed of the computer.

7.2 Non-Adaptive Controller

The parameters identified using the recursive least squares algorithm were used to design the deadbeat pole placement controller. The polynomials R, S and T designed in Example 4.3 were used in the experiment. However, the tip of the flexible arm was frequently going out of the range of the detector and the output of the flexible arm was approximately twice the reference signal. Therefore, it was decided to change the value of T so that the output of the flexible arm follows the reference signal. The value T = 2, which was found by trial and error, gave better performance. The tip position and the control signal are shown in Fig. 7.1 and Fig. 7.2, respectively. As shown in Fig. 7.1, from t = 0 to t = 10 sec there is a 30% error between the output and the reference signal, and after t > 10 sec the output of the flexible arm did not follow the reference signal exactly. The non-adaptive controller did not reduce the vibration of the flexible arm. However, the system output exhibited little oscillation due to the change in the reference signal. The performance of the system got even worse when a payload was added at the tip of the flexible arm. Just after the release of the payload, the error between the output and the reference signal reduced to 20%. However, for t > 10 sec the error was 12%. The output and the control signal

(when a payload was placed at the tip of the arm) are shown in Fig. 7.3 and Fig. 7.4, respectively.

Fig. 7.1 System output when a non-adaptive controller is used.

Fig. 7.2 Control signal when a non-adaptive controller is used.

Fig. 7.3 System output when a non-adaptive controller is used with a payload added at the tip of the flexible arm.

Fig. 7.4 Control signal when a non-adaptive controller is used with a payload added at the tip of the flexible arm.

7.3 Adaptive Pole Placement

Two types of estimation schemes were used in the adaptive pole placement controller during the experiments: the gradient algorithm for the deterministic case and the stochastic gradient algorithm for the stochastic case.

7.3.1 Deterministic Case

The gradient algorithm given by Eq. (5.1.14) was used in the adaptive controller scheme. The reference signal was a square wave with an amplitude of 0.2 volts and a period of 1000 samples. The values n0 = 10 and λ = 0.7 were used. Figure 7.5 shows the output of the system for the deterministic case. As shown in Fig. 7.5, the output exhibits oscillation at the beginning of the experiment and at changes of the command signal. The reason that the output exhibited oscillation at the beginning is that the parameter estimates were far from the optimum. The oscillations due to the change of the command signal may be explained as follows. When there are no changes and the parameter estimates are constant, the output follows the command signal with no oscillation. However, when the command signal changes, bursting occurs in the adaptive loop. This bursting causes fluctuations in other variables such as the

parameter estimation error [1]. However, a self-stabilizing mechanism occurs in the adaptive system and the adaptive system remains globally stable [14]. The control signal is shown in Fig. 7.6. Comparing the results of the non-adaptive and adaptive controllers, the vibration of the flexible arm was reduced significantly when using the adaptive pole placement controller. The output of the system in the non-adaptive case exhibited a smaller overshoot (at the change of the reference signal) over a shorter period than the output of the system in the adaptive case. However, the error between the output and the reference signal in the non-adaptive case is much larger than the error in the adaptive case.

Fig. 7.5 Tip position response when a gradient estimator is used in the adaptive control system.

Fig. 7.6 Control signal when a gradient estimator is used in the adaptive control system.

Figures 7.7 and 7.8 show the output and the control signals, respectively, when a payload is added at the tip of the flexible robotic arm model. The weight of the payload is 1.5 oz. The payload was released at t = 2 sec (approximately), when the tip of the arm had reached a quiescent state. As shown in the plots, the adaptive controller was able to adjust to the release of the payload, and the vibration of the flexible arm was reduced significantly.

However, the system exhibited large oscillations when the sign of the reference signal changed. The magnitudes of the overshoots are approximately the same as in the case when no payload was released. However, the settling time was longer with the payload release. Note that the control action was larger than the one without a payload release. A heavier payload results in a larger control action, since the amount of energy necessary to move the payload increases with the payload weight.

Fig. 7.7 Tip position response when a gradient estimator is used in the adaptive control system with a payload added at the tip of the arm.

Fig. 7.8 Control signal when a gradient estimator is used in the adaptive control system with a payload added to the tip of the arm.

7.3.2 Stochastic Case

The parameter estimation scheme for the stochastic case is given in section 5.2.2. The reason for using the stochastic gradient algorithm is to determine if external disturbances affect the output of the system. Although the stochastic

gradient algorithm is used, the disturbance model was not used. The adaptive pole placement controller with the stochastic gradient algorithm is used to control the tip position of the flexible robotic arm model. The design parameters are the polynomials R, S and T. The order of the model is 6, with 13 parameters. The following parameter values were chosen in the stochastic gradient algorithm: a = 1, δ = 0.1 and r(0) = e. The output of the flexible arm follows the reference signal for t > 10 sec. However, the settling time was larger than in the case where the deterministic gradient algorithm was used, and the vibration of the flexible arm was also larger. Figures 7.9 and 7.10 show the output of the system and the control input, respectively, when the stochastic gradient algorithm was used. The settling time got even larger with the release of the payload. Note that the noise model was not used in the adaptive control system due to the constraint imposed by the computational power of the computer. In other words, the computational time would increase significantly due to the added parameters of the noise model, and computation time is very critical since the control signal has to be generated within each sampling interval. The response of the tip position and the control signal with a payload release are shown in Figs. 7.11 and 7.12,

respectively. Note that the weight of the payload was 1.5 oz.

Fig. 7.9 Tip position response when a stochastic gradient estimator is used in the adaptive control system.

Fig. 7.10 Control signal when a stochastic gradient estimator is used in the adaptive control system.

Fig. 7.11 Tip position response when a stochastic gradient estimator is used in the adaptive control system with a payload added at the tip of the arm.

Fig. 7.12 Control signal when a stochastic gradient estimator is used in the adaptive control system with a payload added at the tip of the arm.

CHAPTER 8

CONCLUSION

The identification of the flexible manipulator was carried out using two methods: the observer/Kalman filter identification and the recursive least squares algorithm. The identification of the flexible manipulator using the observer/Kalman filter identification was carried out off-line. Attempts to implement a controller based on the identification of the system using the observer/Kalman filter identification algorithm were unsuccessful. In contrast, the identification of the system using the recursive least squares algorithm was carried out on-line. The corresponding deadbeat controller was relatively successful in controlling the flexible manipulator.

The design and implementation of various controllers were shown in Chapter 7. However, the implementation of a pole-placement controller based on a model obtained using the observer/Kalman filter identification was not successful, because the numerical model of the flexible robotic arm was inadequate for the design of a robust controller in the closed-loop application. Therefore, direct use of the identified plant model in control

design was the only way to achieve the desired performance [6]. The adaptive controller with the deterministic gradient algorithm gave the better performance overall. The adaptive controller with the stochastic gradient algorithm could give better performance if the noise model were included in the design of the controller. However, this was not done due to the constraints imposed by the computational power of the computer.

APPENDIX

% Program prbsm.m
% This program is written by Prof. Jan Bialasiewicz (see ref. 2).
% This program generates the pseudo-random binary sequence (PRBS)
% as a test signal for identification of the flexible robotic arm model.
%
N=input('N (=tr/Ts=5-10)=');
A=input('A(magnitude)=');
sf=input('sf=');
L=2^N-1;
pr=input('period=')
g=pr*L;
e=0;
PRB=zeros(g,1);
for i=1:pr
  SR=ones(1,N);      %initialization of the Shift Register
  SR1=zeros(1,N);    %initialization of the Output Register
  %mod 2 sum of the first and last digit of SR is
  %stored in SR1(1)
  for m=1:L
    f=m+e;
    SRSUM=SR(sf)+SR(N);   %MODULO 2 SUMMATION
    if SRSUM==0
      SR1(1)=0;
    elseif SRSUM==2
      SR1(1)=0;
    else
      SR1(1)=1;
    end
    %OUTPUT GENERATION
    if SR(N)==1
      PRB(f)=A;
    else
      PRB(f)=-A;
    end

    %SHIFTING SR TO THE RIGHT AND UPDATING
    for l=1:N-1
      SR(N+1-l)=SR(N-l);
    end
    SR(1)=0;
    SR=SR+SR1;
  end
  e=e+L;
end
save c:\das16\du.m PRB -ascii

%PROGRAM fiokuy.m
% This program is written by Prof. Jan Bialasiewicz (see ref. 2).
%This is the IDENTIFIER program
%for a single input multiple output system.
%When calling this program the user has to load
% (in the MATLAB format) frequency-rich input-output time histories.
%OBSERVER/KALMAN FILTER IDENTIFICATION ALGORITHM is used.
%
%User-Defined Parameters:
%
% T    sampling interval used to obtain
%      input-output time histories
% u    input sequence (single column vector)
% y    output sequence (a matrix with m columns)
% m    number of outputs
% p    number of observer parameters
%      Note: Maximum order of the system which
%      can be identified is p*m.
%
%Variables Available:
%
% A1,B1,C1,D1  identified state space representation
% G            observer gain matrix
% y            response to u of the system to be identified
% y1           response to u of the identified system
% y2=[y y1]
% error        identification error.
%
pt=length(u);                       %number of points in the sample
t=[T:T:T*pt]';
m=input('m(number of outputs)=');   %number of system outputs
%r=input('r(number of inputs)=');   %number of system inputs
r=1;
disp('Declare p, the number of observer Markov parameters.')
disp('Maximum order of the system to be identified is p*m')
p=input('p(number of observer Markov parameters)=');
%number of observer Markov parameters

%system of maximum order p*m can be identified [A1,B1,C1,D1,G]=okid(m,r,T,u,y,'lq',p);%identification y1=dlsim(A1,B1,Cl,D1,u); %simulation to determine identification error y2=[y y1]; plot(y2),title('System and identified model response') disp('Both real system output and identified system output') disp('as response to the same input are now displayed in the Figure Window.') disp('To save the plot you can open New Figure now.') disp('ENTER to continue.') pause error=y-y 1 ;%identification error plot( error),title('Identification Error') disp('Identification error is now displayed in the Figure Window') disp('Identified system matrices are available as AI, B1, C1, Dl') disp('Observer gain matrix is available as G') [mag1,phase1,w]=dbode(A1,B1,Cl,D1,T,1); lmagl =20*log(mag1(:,1 )); v=[.1,10,-100,50]; subplot(2, 1,1) semilogx(w,lmag1,'g'),grid, ... axis(v) xlabel('frequency (Hz)'), ... ylabel('Gain dB'); pause subplot(2, 1 ,2) semilogx(w,phase1(:,1)),grid, ... xlabel('frequency (Hz)'), ... ylabel('phase ( deg)'); pause 133
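As the comments above note, fiokuy.m expects u, y and T to be in the workspace before it is run. The lines below are a hypothetical preparation step, not taken from the experiment; the output file name and the sampling interval are assumptions used only for illustration.

% Hypothetical preparation for running fiokuy.m (the file name 'tippos.m'
% and T=0.01 s are assumptions, not values from the experiment).
u=load('c:\das16\du.m','-ascii');        % PRBS test input written by prbsm.m
y=load('c:\das16\tippos.m','-ascii');    % recorded tip-position output (assumed file)
T=0.01;                                  % sampling interval in seconds (assumption)
fiokuy                                   % run the OKID identifier on u, y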


% Program observer.m
% This matlab program generates the control signal based on pole placement.
% This program also calculates the integral constant K1, the state feedback
% gain constant K2 and the observer gain G. Later, K1, K2 and G were
% used in real-time simulation.
% (The plant matrices a, b, c and the matrices f1,...,f6, assumed to be
% precomputed powers of A1, must be available in the workspace.)
%
a0=zeros(1,6);
A1=[a b; a0 0];
B1=[zeros(6,1); 1];
ax0=zeros(6,501);     % initialize the plant state variables
axe0=zeros(6,501);    % initialize the observer state variables
x0=zeros(6,1);
xe0=zeros(6,1);
v0=1;
gh1=inv([B1 A1*B1 f1*B1 f2*B1 f3*B1 f4*B1 f5*B1]);
ap=zeros(1,6);
aap=[ap 1];
dff=f6-1.6504*f5+.7034*f4;
khat=aap*gh1*f6;
ai=a-eye(6);
ca=c*a;
cb=c*b;
kk=[khat + aap];
k=kk*(inv([ai b; ca cb]));
k1=k(:,7);
for j=1:6
   k2(1,j)=k(1,j);
end
ff=a*a*a*a*a*a;
ke1=ff*(inv([c;ca;c*a*a;c*a*a*a;c*a*a*a*a;c*a*a*a*a*a]))*([0;0;0;0;0;1]);
n=input('n= ')
hold off
h=0:1:n;
r=.15*square(2*pi*h/200);
plot(h,r);grid
pause
hold on
v0=.15;
x0(3)=.15;
x0(1)=-.19;
for i=1:n+1
   v=-c*a*x0 + c*b*k2*x0 + (1-c*b*k1)*v0 + r(i);
   u=-k2*xe0+k1*v;
   x=a*x0+b*u;
   y=c*x0;
   xe=a*xe0+b*u+ke1*(y-c*xe0);
   ys=c*xe0;
   yo(:,[i])=y;
   yo1(:,[i])=ys;
   uo1(:,[i])=u;
   ax0(:,i)=x0;
   axe0(:,i)=xe0;
   x0=x;
   v0=v;
   xe0=xe;
end
plot(h,yo,'g');grid;
xlabel('sample k')
ylabel('magnitude');
title('output of the system');
pause
plot(h,yo1,'b');grid;
xlabel('sample k')
ylabel('magnitude');
title('estimated output of the system');
pause
hold off
plot(h,uo1,'b');grid;
xlabel('sample k')
ylabel('magnitude');
title('control input');
pause


% Program rsl.m % This matlab program uses a recursive least algorithm to estimate the % parameters of the system and update the control parameters indirectly. % The transfer function-based linear regulator is used to generate a control % signal. The desired closed-loop poles are placed at the origin. % n=13; N=1999; ttl =0: 1 :N+ 1; T=l; w=n-1; ys=0.2*square(2*pi*ttl/500); Thest=zeros( n,N+ 2); p=.O.l *eye(n); u=zeros(l ,N+ 2); x=zeros( 1271 ); y=zeros(l ,N+ 2); I); yst=zeros( I ,N+ 2); yerr=zeros( I 7 N+ 2); b=zeros(12,1); tho=[0.2316696 0.1983438 O.I73241 O.I429359 0.1172126 0.1168989 -2.487187-04 -2.125822E-03 5.835298E-03 6.650744E-02 0.0889677 0.0554328 4.764577-02 ]; thest(1)=.2I 136


thest(2)=.1722; thest(3)=.19 ; thest(4)=.154138 ; thest(5)=.10855; thest(6)=.089 ; thest(7)= -4.012411e-4; thest(8)=-8.991258e-4 ; thest(9)=1.0627e-3; thest(10)=-2.005586e-2 ; thest( 11 )=-2.205883e-2 ; thest(12)=1.162508e-2 ; thest( 13 )=-2. 806634e-2 ; for t=7:N+ 1 % main program al =thest(1 ); a2=thest(2); a3=thest(3 ); a4=thest( 4 ); aS=thest( 5); a6=thest(6); bO=thest(7); b 1 =thest(8); b2=thest(9); b3=thest( 1 0); b4=thest( 11 ); b5=thest(12); b6=thest(13 ); den=[thest(1) thest(2) thest(3) thest(4) thest(S) thest(6)]; num=[thest(7) thest(8) thest(9) thest(10) thest(11) thest(12) thest(13)]; roots( den); roots(num); 137


b( 1 )=-thest( 1 ); b(2)=-thest(2); b(3)=-thest(3); b(4)=-thest(4); b( 5)=-thest( 5); b(6)=-thest(6); b(7)=0; b(8)=0; b(9)=0; b(10)=0; b(l1)=0; b(12)=0; for i=l:w % Subroutine for Gauss elimination with pivoting. temp=abs( ff(i, 1) ); for j=l:w-1 if temp < abs(ff(ij+1)) 138


end temp=abs(ff(ij+ 1)); end s(i)=temp; end for m=l:w templ=abs(ff(m,m))/s(m); for c=m+l:w if tempi < abs(ff(c,m))/s(c) templ=abs(ff{c,m))/s(c); temp2=s(m); s(m)=s(c); s( c )=temp2; temp3=ff{m,: ); ff{m,:)=ff{c,:); ff{c,:)=temp3; temp4=b(m); b(m)=b(c); b(c)=temp4; end; % end if end; % c for i=m+l:w ff{i,m )=ff{i,m)/ff( m,m); for j=m+l:w ff(ij)=ff(ij)-ff{i,m)*ff(mj); end; %end j b(i)=b(i)-ff(i,m)*b(m); ff(i,m)=O; end; % end i end; % m b(w)=b(w)/ff(w,w); k=w for i=l:w-1 k=k-1; for j=k+l:w b(k)=b(k)-ff(kj)*b(j); 139


end; %j b(k)=b(k)/ff(k,k); end; %i % end of subroutine u1=--b(1)*u(t-1)-b(2)*u(t-2)-b(3)*u(t-3)-b(4)*u(t-4); u2=--b(5)*u(t-5)-b(6)*u(t-6); u3=--b(7)*y(t)-b(8)*y( t-1 )-b(9)*y( t-2)-b( 1 O)*y( t-3); u4=-b(11 )*y(t-4)-b(12)*y(t-5); uS=T*ys(t); u(t)=(u1+u2+u3+u4+u5); % Control signal generation % *************Recursive least squares algorithm***************** phi=[ -y(t) -y(t-1) -y(t-2) -y(t-3) -y(t-4) -y(t-5) u(t) u(t-1) u(t-2) u(t-3) u(t-4) u(t-5) u(t-6)]; phi1=[y(t) y(t-1) y(t-2) y(t-3) y(t-4) y(t-5) u(t) u(t-1) u(t-2) u(t-3) u(t-4) u(t-5) u(t-6) ]; 140


y(t+ 1)=tbo'*phi1; L=p*phi/(1 +phi'*p*phi); p=(p-p*phi*phi'*p/(lamo+phi'*p*phi) )!lamo; yst(t)=thest'*phi ; thest=thest+L *(y(t+ 1 )-yst(t));_ %**************end of recursive least squares algorithm************* yerr( t )=y( t )-ys( t ); Thest( :,t )=thest; T= 11( thest(7)+thest(8)+thest(9)+ thest( 1 O)+thest(11 )+thest( 12)+thest( 13) ); end plot(ttl,ys); pause hold on plot(ttl,y,'g');grid; xlabel(' Time, (sec)'); ylabel('Output and reference signals y(t) r(t)'); title('FIGURE 3, Output ,y(t), and reference signals r(t)'); gtext('r( t )'); pause hold off plot(Thest( :,2:N+ 1 )','-');grid xlabel(' time'); title(' FIGURE 4, parameters estimated'); % end of program rsl.m 141
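The core of rsl.m is the recursive least squares update inside the long main loop above. The self-contained sketch below restates that update on a toy second-order ARX model so the estimator can be read in isolation; the plant coefficients are assumptions chosen only for illustration, not the arm model.

% Minimal self-contained sketch of the RLS update used in rsl.m.
% The second-order plant below is an assumption, not the arm model.
N=500; lamo=1;                          % samples and forgetting factor
a=[-1.5 0.7]; b=[1 0.5];                % "true" ARX parameters (assumed)
u=sign(randn(N,1)); y=zeros(N,1);
thest=zeros(4,1); p=100*eye(4);
for t=3:N
   y(t)=-a(1)*y(t-1)-a(2)*y(t-2)+b(1)*u(t-1)+b(2)*u(t-2);
   phi=[-y(t-1); -y(t-2); u(t-1); u(t-2)];
   L=p*phi/(lamo+phi'*p*phi);                    % estimator gain
   p=(p-p*(phi*phi')*p/(lamo+phi'*p*phi))/lamo;  % covariance update
   thest=thest+L*(y(t)-thest'*phi);              % parameter update
end
disp([thest'; a b])                     % estimates should approach [a b]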


% Program unmorsl.m % This matlab program uses a recursive least algorithm, in the presence of % unmodelled dynamics, to estimate the parameters of the system and % update the control parameters indirectly. % n=9; N=1999; tt1=0:1:N+1; w=n-1; ys=0.2*square(2*pi*ttl/500); Thest=zeros(n,N+2); p=0.1*eye(n); u=zeros(1,N+2); y=zeros(1,N+2); tho=[-.2316696 -.1983438 -.173241 -.1429359 -.1172126 -.1168989 -2.487187E-04 -2.125822E-03 5.835298E-03 6.650744E-02 .0889677 .0554328 4. 764577E-02]; thest=zeros(n, 1 ); yst=zeros( 1 ,N+ 2); yerr=zeros( 1 ,N+ 2); b=zeros(8,1); thest(1)=-.21 thest(2)=-.176; thest(3)=-.19 ; thest(4)=.1542 ; thest( 5)=5 .0 124e-4; thest(6)=-8.9912e-5; thest(7)= 1. 627e-4; 142


thest(8)=-2.004586e-2; thest(9)=-5 .20588e-3; T=1; for t=7:N+1 % main program a1 =thest(l ); a2=thest(2); a3=thest(3 ); a4=thest( 4 ); bO=tbest( 5); b 1 =thest( 6); b2=thest(7); b3=thest(8); b4=thest(9); b(l )=-thest(l ); b(2)=-thest(2); b(3)=-thest(3); b(4)=-thest(4); b(5)=0; b(6)=0; b(7)=0; 143


b(8)=0; end for i=t :w % subroutine for Gauss elimination with pivoting temp=abs(ff{i,t)); for j=t:w-t if temp < abs(ff{ij+t)) temp=abs( ft{ ij+ t) ); end s(i)=temp; end for m=t:w tempt =abs(ff{m,m))/s(m); for c=m+t:w if tempt < abs(ff{c,m))/s(c) temp 1 =abs( ff( c,ni) )Is( c); temp2=s(m); s(m)=s(c); s( c )=temp2; temp3=ff( m,: ); fit c,: )=temp3; temp4=b(m); b(m)=b(c); b( c )=temp4; end; % end if end; % c for i=m+1:w ff{i,m)=ffti,m)/ff(m.,m); for j=m+1:w ff{ij)=fftij)-ffti,m)*ff{mj); end; %end j b(i)=b(i)-ff(i,m)*b(m); ff{i,m)=O; end; % end i 144


end; % m b(w)=b(w)/ff(w,w); k=w; for i=1:w-1 k=k-1; for j=k+1:w b(k)=b(k)-ff(kj)*b(j); end; %j b(k)=b(k)/ff(k,k); end; %i % end of the subroutine u1=--b(l)*u(t-1)-b(2)*u(t-2)-b(3)*u(t-3)-b(4)*u(t-4); u2=-b(5)*y(t)-b( 6)*y(t-1 )-b(7)*y(t-2)-b(8)*y(t-3); u3=T*ys(t); u(t)=(u1+u2+u3); % control signal %*********** recursive least algorithm********.******** phi=[ -y(t) -y(t-1) -y(t-2) -y(t-3) u(t) u(t-1) u(t-2) u(t-3) u(t-4)]; phi1=[y(t) y(t-1) y(t-2) y(t-3) y(t-4) y(t-5) u(t) u(t-1) u(t-2) u(t-3) u(t-4) 145


u(t-5) u(t-6) ]; y( t+ 1 )=tho'*phi 1; L=p*phil( 1 +phi'*p*phi); p=(p-p*phi*phi'*p/( 1 +phi'*p*phi) ); yst( t )=thest'*phi ; thest=thest+L *(y(t+ 1)-yst(t)); Thest( :,t )=thest; T= 1/( thest( 5)+thest( 6)+thest(7)+ thest(8)+thest(9) ); end thest plot(tti,ys); pause hold on plot( tt I ,yst, 'g');grid; xlabel(' time'); ylabel('Output and reference signals y(t) r(t)'); title('FIGURE 3, Output ,y(t), and reference signals r(t)'); gtext('r( t )'); pause hold off plot(Thest(:,2:N+ I )','-');grid xlabel(' time'); title(' FIGURE 4, parameters estimated'); %*********** END of program unrnorsl.mu 146


% gradien.m % This matlab program uses a gradient algorithm, in the presence of % unmodelled dynamics, to estimate the parameters of the system and % update the control parameters indirectly. n=9; N=1999; ttl=0:1:N+l; w=n-1; ys=0.2*square(2*pi*tt1/500); Thest=zeros( n,N+ 2); ba=1 ; u=zeros(1,N+2); y=zeros(l,N+2); tho=[ -.2316696 -.1983438 -.17324I -.I429359 -.1172I26 -.II68989 -2.487187-04 -2.125822-03 5.835298-03 6.650744-02 .0889677 .0554328 4.764577-02 ]; T=1; thest=zeros( n, I); yst=zeros(I ,N+ 2); yerr=zeros( I ,N+ 2 ); b=zeros(8, 1 ); thest(l )=-.2 I thest(2)=-. i 76; thest(3)=-.19 ; thest(4)=.1542 ; thest(5)=5.0I24e-4; 147


thest(6)=-8.9912e-5; thest(7)=1.627e-4; thest(8)=-2.004586e-2; thest(9)=-5.20588e-3; mphi=zeros( 1 ,N+ 2); nphi=zeros( 1 ,N+ 2); rr=zeros( 1,N+ 2 ); lam=0.001; n0=30; for t=7:N+ 1 % main program a1 =thest( 1 ); a2=thest(2); a3=thest(3 ); a4=thest( 4 ); bO=thest( 5); b1=thest(6); b2=thest(7); b3=thest(8); b4=thest(9); b(l )=-thest(l ); b(2)=-thest(2); b(3)=-thest(3); 148


b( 4 )=-thest( 4 ); b(5)=0; b(6)=0; b(7)=0; b(8)=0; for i=l:w % subroutine for Gauss elimination with pivoting temp=abs( ff(i, 1) ); end for j=t:w-1 if temp < abs(ff(ij+ t)) temp=abs(ff(ij+ 1)); end s(i)=temp; end for m=l:w tempt =abs(ff(m,m))/s(m); for c=m+l:w if tempi < abs(ff(c,m))/s(c) tempt =abs(ff( c); temp2=s(m); s(m)=s(c); s( c )=temp2; temp3=ff(m,:); ff( m,: )=ff( c,: ); fftc,:)=temp3; temp4=b(m); b(m)=b(c); b( c )=temp4; end; % end if end; % c for i=m+t:w ff(i,m)=ff(i,m)/ff( m,m ); for j=m+t:w fftij)=ff(ij)-flti,m)*ff(mj); end; %end j b(i)=b(i)-ffti,m)*b(m); :ffti,m)=O; 149


end; % end i end; % m b(w)=b(w)/ff(w,w); k=w, for i=1:w-l k=k-1; for j=k+1:w b(k)=b(k)-ff{kj)*b(j); end; %j b(k)=b(k)/ff(k,k); end; %i % end of subroutine u1=-b(1)*u(t-1)-b(2)*u(t-2)-b(3)*u(t-3)-b(4)*u(t-4); u3=-b(5)*y(t)-b(6)*y(t-1)-b(7)*y(t-2)-b(8)*y(t-3); uS=T*ys(t); u(t)=(u1+u3+u5); % control signal % *************gnKtient algoritlun******************** phi=[ -y(t) -y(t-1) -y(t-2) -y(t-3) u(t) u(t-1) u(t-2) u(t-3) u(t-4)]; phi1=[y(t) y(t-1) y(t-2) y(t-3) y(t-4) y(t-5) u(t) u(t-1) u(t-2) u(t-3) u(t-4) 150


u(t-5) u(t-6) y( t+ 1 )=tho'*phi 1; vphi(t )=phi'*phi; mphi(t)=lam*mphi(t-1 )+vphi(t); rr(t)=nO+mphi(t); L=ba*philrr(t); yst( t+ 1 )=thest'*phi thest=thest+L *(y(t+ 1 )-yst(t+ 1 )); of gnutient yerr( t )=y( t )-ys( t ); Thest( :,t)=thest; T=1/( thest( 5)+thest( 6)+thest(7)+ thest(8)+thest(9) ); end plot(tt1,ys ); pause hold on plot(tt1,y,'g');grid; xlabel(' time'); ylabel('Output and reference signals y(t) r(t)'); title('FIGURE 3, Output ,y(t), and reference signals r(t)'); gtext('r(t)'); pause hold off plot(Thest(:,2:N+ 1)','-');grid xlabel(' time'); title(' FIGURE 4, parameters estimated'); 151
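For comparison with the RLS sketch given after rsl.m, the lines below restate the normalized (deterministic) gradient update used in gradien.m on the same kind of toy second-order plant. The plant coefficients are assumptions; the names ba, lam and n0 follow the listing above.

% Minimal self-contained sketch of the normalized gradient update in gradien.m.
% The second-order plant below is an assumption, not the arm model.
N=2000; ba=1; lam=0.001; n0=30;
a=[-1.5 0.7]; b=[1 0.5];                % "true" ARX parameters (assumed)
u=sign(randn(N,1)); y=zeros(N,1);
thest=zeros(4,1); mphi=0;
for t=3:N
   y(t)=-a(1)*y(t-1)-a(2)*y(t-2)+b(1)*u(t-1)+b(2)*u(t-2);
   phi=[-y(t-1); -y(t-2); u(t-1); u(t-2)];
   mphi=lam*mphi+phi'*phi;              % filtered regressor energy
   rr=n0+mphi;                          % normalization, bounded away from zero
   thest=thest+ba*phi*(y(t)-thest'*phi)/rr;
end
disp([thest'; a b])                     % estimates should move toward [a b]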


'* This program implements a recursive least square to identify a flexible '* robotic ann model. This is accomplished through the use of MetraByte's '* model Dash-16 AID and D/A convertor. '* '* The following variables are used to accomplish this task: '* p: covarince matrix '* phi: measurement vector '* thest: parameter vector '* lam: forgetting factor '* DtoA (0): Controller output sent to DAC via D/A channel 0 '* AtoD (0): Position Feedback sent to ADC via AID channel 1. '* 1: Dummy loop counter. '* mode%: Dash-16 parameter. '* flag"-4: Dash-16 parameter. '* '* CLS INPUT "dp =", dp INPUT "n1= ", n1 INPUT "n2= ", n2 DIM Atodo/o(4) DIM dio%(4) DIM postbo/o(4) DIM dtoa%(4) DIM r(1 TO dp) 'InitializeDash 16 mode% = 0 flag% = 0 Atod%(0) = &H300 'Base address of Dash16 Atodo/o(1) = 2 'Set DMA level Atod%(2) = 3 'Set Interrupt level CALL das16(mode%, Atodo/o(O), flag%) 'Set Scan Limits and Channels mode%= 1 'Set mux to toggle from 152


Atodo/o(O) = 1 'channel one to channel Atod%(1) = 2 'three. CALL das16(mode%, Atod%(0), tlag"!O) 'Set Programmable Timer Rate mode% = 17 'This sets the sampling time Atodo/o(O) = nl to .n1 *n2/1E6 seconds Atodo/o( 1) = n2 CALL das16(mode%, Atodo/o(O), tlag"!O) 'Reset Motor mode% = 15 'output zero volts to motor dtoa%(0) = 0 dtoa%(1) = 2048 CALL das16(mode%, dtoao/o(O), flag%) n = 13 DIM lam(1 TO dp) DIM p(1 TO n, 1 TO n) DIM phi(l TO n) DIM 1(1 TO n) DIM thest(1 TO n) DIM u(1 TO dp) DIM y(1 TO dp) DIM yst(1 TO dp) DIM pphi(l TO n) DIM phip(I TO n) DIM pphip(I TO n) DIM pphiphi(l TO n, I TO n) DIM pphiphip(l TO n, 1 TO n) DIM sdiv(I TO n, 1 TO n) lam(6) = .99 OPEN "du.m" FOR INPUT AS #I OPEN "adpu.m" FOR OUTPUT AS #2 OPEN "adpys.m" FOR OUTPUT AS #3 153


OPEN "adpxs.m" FOR OUTPUT AS #4 OPEN "adtbs.m" FOR OUTPUT AS #5 FOR j = 1 TO dp INPUT #1, u(j) NEXT CLOSE #1 FOR i = 1 TO n p(i, i) = .1 NEXT 'MAIN PROGRAM STARTS HERE FORt= 7 TO dp mode%= 4 Atodo/o{O) = 2 Atod%(1) = V ARPTR(posfb%(0)) Atod%(2) = 1 'Read data from tip position This mode allows only 3 conversion per sample time interval. CALL das16(mode%, Atod%(0), flag>lo) POSFEEDBACK = posfb%(0) I 204.8 CALL das16(mode%, Atod%(0), flag%) xs = posfb%(1) I 204.8 y(t + 1) = xs phi(1) = -y(t) phi(2) = -y(t -1) phi(3) = -y(t 2) phi(4) = -y(t 3) phi(5) = -y(t 4) phi(6) = -y(t 5) phi(7) = u(t) phi(8) = u(t -1) phi(9) = u(t 2) 'recursive least square algorithm 'starts here. 154


phi(lO) = u(t -3) phi(ll) = u(t -4) phi(l2) = u(t -5) phi(I3) = u(t 6) lam(t) = .99 lam(t 1) + .OI FOR i = 1 TO n pphi(i) = 0 FOR j = I TO n 'recursive least square algorithm starts here. pphi(i) = p(i, j) phi(j) + pphi(i) NEXT NEXT FOR i = I TO n phip(i) = 0 FOR j = I TO n phip(i) = phi(j) p(j, i) + phip(i) NEXT NEXT phipphi = lam(t) FOR i = 1 TO n phipphi = phip(i) phi(i) + phipphi NEXT FOR i = 1 TO n l(i) = pphi(i) I phipphi NEXT FORi= 1 TO n FOR j = 1 TO n pphiphi(i, j) = pphi(i) phi(j) NEXT NEXT FOR k = 1 TO n FOR i = 1 TO n pphiphip(k, i) = 0 FOR j = I TO n 155


pphiphip(k, i) = pphiphi(k, j) p(j, i) + pphiphip(k, i) NEXT NEXT NEXT FOR i = I TO n FOR j = I TO n sdiv(i, j) = pphiphip(i, j) I phipphi NEXT NEXT FOR i = I TO n FOR j = I TO n p(i, j) = (p(i, j) -sdiv{i; j)) I lam(t) NEXT NEXT thestphi = 0 FOR i = I TO n thestphi = thest(i) phi(i) + thestphi NEXT yst(t) = thestphi FORi = I TO n thest(i) = thest(i) + l(i) (y(t + I) yst(t)) NEXT recursive least algorithm end here. m = -u(t) 'Check M, make sure it does not exceed abs(2.5) IF ABS(m) > .4 THEN m = .4 SGN(m) 'Output Control Voltage mode% = I5 dtoa%(0) = 0 Select DIA Channel 0 dtoao/o(I) = (m + 2.4475) 8I9 Convert to bits CALL dasi6(mode%, dtoa%(0), flag'lo) FOR i = I TO n PRINT #5, thest(i) NEXT I 56


NEXT 'Send zero volts to the mode% = 15 'motor dtoao/o(O) = 0 dtoa%(1) = 2048 CALL das16(mode%, dtoa%(0), flagG/0) FOR ii = 1 TO dp PRINT #2, -u(ii) PRINT #3, yst(ii) PRINT #4, y(ii) NEXT CLOSE #2 CLOSE #3 CLOSE #4 CLOSE #5 PRINT "PROGRAM TERMINATED" END write data into files. 157


'* This program implements a non-adaptive controller based on pole '* placement to control a flexible robotic ann model. This is '* accomplished through the use of MetraByte's model Dash-16 AID '* and D/ A convertor. '* '* The following variables are used to accomplish this task: '* u: input vector '* y: output vector '* ys: square wave input '* T, R and S: control polynomials '* DtoA (0): Controller output sent to DAC via D/A channel 0 AtoD (0): Position Feedback sent to ADC via AID channel 1. '* I: Dummy loop counter. '* mode%: Dash-16 parameter. '* flagG/o: Dash-16 parameter. '* '* INPUT "n1= ", n1 INPUT "n2= II' n2 DIM Atod%(4) DIM dio%(15) DIM posfb%(4) DIM dtoa%(4) 'InitializeDash16 mode% = 0 flag% = 0 Atod%(0) = &H300 'Base address of Dash16 Atodo/o(1) = 2 'Set DMA level Atod%(2) = 3 'Set Interrupt level CALL das16(mode%, Atod%(0), flagG/o) mode% = I Atodo/o(O) = 1 Atod%(1) = 2 'Set Scan Limits and Channels 'Set mux to toggle from 'channel 1 to channel .2 CALL das16(mode%, Atodo/o(O), flagG/o) 158


mode% = 17 Atodo/o(O) = n1 Atod%(1) = n2 'Set Programmable Timer Rate This sets the sampling time to n1 *n2/1E6 seconds CALL das16(mode%, Atod%(0), flag'/o) 'Reset Motor mode% = 15 'output zero volts to motor dtoa%(0) = 0 dtoa%(1) = 2048 CALL das16(mode%, dtoa%(0), flag%) CLS INPUT "dp =", dp INPUT "T =", T DIM u(1 TO dp) DIM y(l TO dp) DIM r(1 TO dp) OPEN "ysl.m" FOR INPUT AS #1 OPEN "radpul.m" FOR OUTPUT AS #2 OPEN "radpyl.m" FOR OUTPUT AS #3 FOR j = 1 TO dp INPUT #1, r(j) NEXT CLOSE #1 'MAIN PROGRAM STARTS HERE FORt= 7 TO dp 159 'to write data to files read data from a file


y(t) = xs 'Read Position Feedback mode% = 4 Atod%(0) = 2 Atod%(I) = VARPTR(posfbo/o{O)) Atod%(2) = I This mode allows only 2 conversion per sample time interval. CALL das16(mode%, Atodo/o{O), flagO/o) POSFEEDBACK = posfb%(0) I 204.8' CALL das16(mode%, Atod%(0), flagO/o) xs = posfb%(1) I 204.8 ui = .2511 u(t -I) + .2549 u(t 2) + .2464 u(t 3) u2 = .1883 u(t 4) + .1023 u(t 5) + .076 u(t 6) u3 =.4464 y(t) +.7086 y(t -I) + .4508 y(t 2) + .1977 y(t 3) u4 = .0791 y(t -4) + .1722 y(t 5) u(t) = T r(t) ui -u2 -u3 -u4 'Calculate the Control Signal m = -1 u(t) 'Check M, make sure it does not exceed abs(2.5) IF ABS(m) > 2.5 1HEN m = 2.5 SGN(m) 'Output Control Voltage mode%= 15 dtoa%(0) = 0 Select Dl A Channel 0 dtoa%(1) = (m + 2.44755) 8I9 Convert to bits CALL dasi6(mode%, dtoa%(0), flagO/o) NEXT 'Send zero volts to the mode% = I5 'motor dtoa%(0) = 0 dtoa%(1) = 2048 CALL das16(mode%, dtoa%(0), flag%) FOR i = I TO dp record data PRINT #2, u(i) 160


CLOSE #2 CLOSE #3 PRINT #3, y(i) NEXT mode% = 15 'Reset Motor dtoao/o{O) = 0 'Send zero volts to the dtoa%(1) = 2048 'motor CALL das16(mode%, dtoao/o(O), flagOA.) PRINT "PROGRAM TERMINA1ED" END 161


'* This program implements an adaptive controller based on pole '* placement to control a flexible '* robotic arm model. The parameter '* estimator is a gradient algorithm. '"' This program uses a gauss elimination with partial pivoting '* to solve for control parameters S(q-1 ) and R(q-1). '* This is accomplished through the use of MetraByte's model '* Dash-16 AID and D/ A convertor. '* The following variables are used to accomplish this task: '* '* '* '* phi: measurement vector thest: parameter vector u: input vector '* y: output vector '* ys: square wave input '* T, R and S: control polynomials '* vphi, nO, nphi, nn and rphi: gradient algorithm parameters '* DtoA (0): controller output sent to DAC via D/A channel 0 '* AtoD (0): tip position sent to ADC via AID channel 1. '* 1: Dummy loop counter. '* mode%: Dash-16 parameter. '* flagO/o: Dash-16 parameter. '* '* CLS INPUT "nl= ", nl INPUT "n2= ", n2 INPUT "dp =" dp INPUT "nO =", nO INPUT "qr =", qr INPUT "lam =", lam DIM Atod%(4) DIM posfb%(4) DIM dtoa%(4) 'InitializeDash 16 mode%= 0 162


flagG/o = 0 Atod%(0) = &H300 Atod%(1) = 2 Atod%(2) = 3 'Base address of Dash 16 'Set DMA level 'Set Interrupt level CALL das16(mode%, Atodo/o(O), flag%) mode%= 1 Atod%(0) = 1 Atod%(1) = 2 'Set Scan Limits and Channels 'Set mux to toggle from 'channel 1 to channel 2 CALL das16(mode%, Atod%(0), flagG/o) mode% = 17 Atod%(0) = n1 Atod%(1) = n2 'Set Programmable Timer Rate This sets the sampling time to n1*n2/1E6 seconds CALL das16(mode%, Atod%(0), flag%) 'Reset Motor mode% = 15 'output zero volts to motor dtoa%(0) = 0 dtoa%(1) = 2048 CALL das16(modeo/o, dtoa%(0), flagG/o) n = 13 w = 12 DIM phi(1 TO n) DIM 1(1 TO n) DIM thest(1 TO n) DIM u(1 TO dp) DIM yy(l TO dp) DIM y(1 TO dp) DIM ys(l TO dp) DIM yst(l TO dp) 163


DIM ff{1 TO w, 1 TO w) DIM b(1 TO w) DIM s(1 TO w) DIM vphi(1 TO dp) DIM rphi(l TO dp) DIM nn(1 TO dp) OPEN "ysl.m" FOR INPUT AS #1 OPEN "dyst.m" FOR OUTPUT AS #2 OPEN "dadpu.m" FOR OUTPUT AS #3 OPEN "dadpy.m" FOR OUTPUT AS #4 OPEN "dnn1.m" FOR OUTPUT AS #5 OPEN "dthest.m" FOR OUTPUT AS #6 FOR j = 1 TO dp INPUT #1, ysG) NEXT CLOSE #1 sw = 2000 max= 0 thest(l) = -.185 thest(2) = -.17622 thest(3) = -.19 thest(4) = -.146 thest(5) = -.108 thest(6) = -.08 thest(7) = 5.01241-04 thest(8) = -8.99125-05 thest(9) = 1.627832-04 thest(lO) = 2.005586-02 thest(ll) = 5.205883-02 thest(12) = 4.162508-02 thest(13) = 2.800663E-02 aa = 1 'MAIN PROGRAM STARTS HERE 164


FORt= 7 TO dp mode%= 4 Atod%(0) = 2 Atod%(1) = VARPTR(posfb%(0)) Atod%(2) = 1 'read tip position This mode allows only 2 conversion per sample time interval. CALL das16(mode%, Atod%(0), flagG/o) POSFEEDBACK = posfb%(0) I 204.8 CALL das16(mode%, Atod%(0), flag%) xs = posfb%(1) I 204.8 flt1, 1) = 1 ff( 1, 7) = thest(7) ff(2, 1) = thest( 1) flt2, 2) = 1 ff(2, 7) = thest(8) flt2, 8) = thest(7) ff(3, 1) = thest(2) flt3, 2) = thest(1) ff(3, 3) = 1 flt3, 7) = thest(9) ff(3, 8) = thest(8) ff(3, 9) = thest(7) ff(4, 1) = thest(3) ff( 4, 2) = thest(2) ff(4, 3) = thest(1) flt4, 4) = 1 ff( 4, 7) = thest( 10) ff( 4, 8) = thest(9) ff(4, 9) = thest(8) ff(4, 10) = thest(7) ff(5, 1) = thest(4) ff(5, 2) = thest(3) ff(5, 3) = thest(2) ff(5, 4) = thest(1) ff(5, 5) = 1 ff( 5, 7) = thest(ll) AR+q-dBS=Am is put in matrix form 165


fft5, 8) = thest(10) fft5, 9) = thest(9) fft5, 10) = thest(8) fft5, 11) = thest(7) flt6, 1) = thest(5) ff(6, 2) = thest(4) ff(6, 3) = thest(3) ff(6, 4) = thest(2) ff(6, 5) = thest(1) ff(6, 6) = 1 ff(6, 7) = thest(12) ff(6, 8) = thest(11) ff(6, 9) = thest(10) ff(6, 10) = thest(9) flt6, 11) = thest(S) ff(6, 12) = thest(7) ff(7, 1) = thest(6) ff(7, 2) = thest(5) ff{7, 3) = thest(4) ff{7, 4) = thest(3) ff(7, 5) = thest(2) ff(7, 6) = thest(1) ff(7, 7) = thest(13) ff(7, 8) = thest(l2) ff(7, 9) = thest(ll) ff(7, 10) = thest(lO) ff(7, 11) = thest(9) fit7, 12) = thest(S) ff(8, 2) = thest(6) ff(8, 3) = thest(5) ff(8, 4) = thest(4) ff(8, 5) = thest(3) ff(8, 6) = thest(2) fitS, 8) = thest(l3) ff(S, 9) = thest(12) ff(S, 1 0) = thest(11) ff(S, 11) = thest(10) fitS, 12) = thest(9) 166


ffl9, 3) = thest(6) ff(9, 4) = thest(5) ffl9, 5) = thest(4) ff(9, 6) = thest(3) fft9, 9) = thest(13) ff{9, 10) = thest(12) ff(9, 11) = thest( 11) ff(9, 12) = thest(IO) ff(10, 4) = thest(6) ff{10, 5) = thest(5) ff{IO, 6) = thest(4) ff(10, 10) = thest(13) flt10, 11) = thest(12) flt10, 12) = thest(ll) flt11, 5) = thest(6) ff{11, 6) = thest(5) ff( 11, 11) = thest(l3) ff(11, 12) = thest(12) ff(12, 6) = thest(6) ff(12, 12) = thest(l3) b(l) = -thest(1) b(2) = -thest(2) b(3) = -thest(3) b(4) = -thest(4) b(5) = -thest(5) b(6) = -thest(6) b(7) = 0 b(S) = 0 b(9) = 0 b(10) = 0 b(ll) = 0 b(12) = 0 FOR i = 1 TO w subroutine for gauss elimination temp = ABS(ff{i, 1)) starts here FOR j = 1 TO w -1 IF temp < ABS(:ff(i, j + 1)) THEN temp = ABS(ff(i, j + 1)) 167


END IF s(i) = temp NEXT NEXT gg = 0 FORm= I TOw gg=gg+I tempi = ABS(ff(m, m)) I s(m) FOR c = m + I TO w IF tempi < ABS(fftc, m)) I s(c) THEN tempi = ABS(fftc, m)) I s(c) temp2 = s(m) s(m) = s(c) s(c) = temp2 FORi= m TO gg FOR j = I TO w temp3 = ff{i, j) ff{i, j) = m: c, j) fftc, j) = temp3 NEXT NEXT temp4 = b(m) b(m) = b(c) b(c) = temp4 END IF NEXT FOR i = m + I TO w ffti, m) = ff{i, m) I ff(m, m) FOR j = m + I TO w ff(i, j) = ff(i, j) ff(i, m) ff(m, j) NEXT NEXT b(i) = b(i) ffti, m) b(m) ff(i, m) = 0 NEXT 'end of m b(w) = b(w) I ff(w, w) I68


kk=w FOR i = 1 TO w -1 kk=kk-1 FOR j = kk + 1 TO w b(kk) = b(kk) ff(kk, j) b(j) NEXT b(kk) = b(kk) I fflkk, kk) NEXT subroutine for gauss elimination 'ends here y(t + 1) = xs ul=-b{1)* u(t 1) b(2) u(t 2) b(3) u(t 3) b(4) u(t 4) u2 = -b(5) u(t -5) -b(6) u(t 6) u3 = -b(7) y(t) b(8) y(t 1) b(9) y(t 2) b(10) y(t 3) u4 = -b(l1) y(t 4) b(12) y(t 5) u5 = qr ys(t) u(t) = u1 + u2 + u3 + u4 + u5 'calculate control signal phi(1) = -y(t) phi(2) = -y(t 1) phi(3) = -y(t 2) phi(4) = -y(t -3) phi(5) = -y(t 4) phi(6) = -y(t -5) phi(7) = u(t) phi(8) = u(t 1) phi(9) = u(t 2) phi(10) = u(t 3) phi(ll) = u(t 4) phi(12) = u(t 5) phi(l3) = u(t -6) phiphi = 0 'gradient algorithm starts here FOR i = 1 TO n phiphi = phi(i) phi(i) + phiphi NEXT 169


vphi(t) = phiphi rphi(t) = lam rphi(t 1) + vphi(t) nn(t) = nO + rphi(t) thestphi = 0 FOR i = 1 TO n thestphi = thest(i) phi(i) + thestphi NEXT. yst( t) = thestphi FOR i = 1 TO n thest(i) = thest(i) + (aa phi(i) (y(t + 1) yst(t))) I nn(t) NEXT gradient algorithm end here T =l/(thest(7)+thest(8)+thest(9)+thest(10) + thest(11) + thest(l2) + thest(l3)) m = -1 u(t) 'Check m. make sure it does not exceed abs(2.5) IF ABS(m) > 1.5 THEN m = 1.5 SGN(m) 'Output Control Voltage mode%= 15 dtoa%(0) = 0 Select D/A Channel 0 dtoa%(1) = (m + 2.4475) 819 Convert to bits CALL dasl6(mode%, dtoa%(0), flag%) NEXT 'Send zero volts to the mode% = 15 'motor dtoao/o{O) = 0 dtoao/o{ 1) = 2048 CALL das16(mode%, dtoa%(0), flagGAt) FOR .i = 1 TO 4000 PRINT #2, yst(i) PRINT #3, u(i) PRINT #4, y(i) PRINT #5, nn(i) NEXT FOR ii = 1 TO n 'record data 170


PRINT #6, thest(ii) NEXT FOR jj = 1 TO dp 1 subroutine to plot the output yy(jj) = 5 y(jj) 1 and the control signal NEXT scREEN 1 VIEW (50, 20)-(290, 140), 1 COLOR 0, 2 WINDOW (0, -2)-(sw, 2) style% = &H7777 LINE (sw, 0)-(0, 0) FOR qx = 0 TO sw -1 LINE -( qx, yy( qx + 1) ), , style% NEXT LOCATE 2, 16: PRINT "OtiTPUT OF TilE SYSTEM" LOCATE 20, 5: PRINT 0 LOCATE 20, 12: PRINT 500 LOCATE 20, 19: PRINT 1000 LOCATE 20, 27: PRINT 1500 LOCATE 11, 3: PRINT 0 LOCATE 6, 3: PRINT .2 LOCATE 15, 3: PRINT -.2 LOCATE 23, 16: PRINT "Sampling ,k" SLEEP CLS SCREEN 1 VIEW (50, 20)-(290, 140), 1 COLOR 0, 2 WINDOW (0, -2)-(sw, 2) style% = &H7777 LINE (sw, 0)-(0, 0) FOR qx = 0 TO sw -1 LINE -( qx, u( qx + 1) ), , style% NEXT LOCATE 2, 16: PRINT "CONTROL INPUT II 171


CLOSE #2 CLOSE #3 CLOSE #4 CLOSE #5 CLOSE #6 LOCATE 20, 5: PRINT 0 LOCATE 20, 12: PRINT 500 LOCATE 20, 19: PRINT 1000 LOCATE 20, 27: PRINT 1500 LOCATE 11, 3: PRINT 0 LOCATE 6, 3: PRINT .2 LOCATE 15, 3: PRINT -.2 LOCATE 23, 16: PRINT "Sampling ,k" SLEEP 'Reset Motor 'Send zero volts to the mode% = 15 'motor dtoa%(0) = 0 dtoao/o(1) = 2048 CALL das16(mode%, dtoao/o(O), flaglo) END 'end of program 'PRINT PROGRAM TERMINATED 172
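The block of ff(...) assignments in the program above builds, in matrix form, the pole-placement identity AR + q^(-d)BS = Am that the Gauss elimination then solves for the controller polynomials R(q^-1) and S(q^-1). The MATLAB lines below are a compact sketch of the same step on a low-order toy plant; the polynomials A, B and Am are assumptions chosen only for illustration (in the program, A and B come from thest and Am corresponds to all closed-loop poles at the origin).

% Sketch (toy example, not the arm model) of solving A*R + B*S = Am for the
% controller polynomials, the step carried out by the ff matrix above.
A=[1 -1.2 0.35];                        % example A(q^-1) (assumed)
B=[0 0.5 0.3];                          % example B(q^-1) with one-step delay (assumed)
Am=[1 0 0 0];                           % desired closed-loop polynomial
nR=1; nS=1;                             % degrees of R (monic) and S
MA=toeplitz([A(:); zeros(nR,1)],[A(1) zeros(1,nR)]);  % convolution matrix for R
MB=toeplitz([B(:); zeros(nS,1)],[B(1) zeros(1,nS)]);  % convolution matrix for S
M=[MA(:,2:end) MB];                     % unknowns are [r1 s0 s1]; r0=1 is fixed
x=M\(Am(:)-MA(:,1));
R=[1 x(1:nR)'], S=x(nR+1:end)'
disp(conv(A,R)+conv(B,S))               % should reproduce Am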


'* This program implements an adaptive controller based on pole '* placement to control a flexible robotic arm model. The parameter '* estimator is a stochastic gradient algorithm. '* This program uses a gauss elimination with partial pivoting '* to solve for control parameters S(q-1 ) and R(q-1). '* This is accomplished through the use of MetraByte's model '* Dash-16 AID and D/ A convertor. '* The following variables are used to accomplish this task: '* '* phi: measurement vector '* thest: parameter vector '* u: input vector '* y: output vector '* ys: square wave input '* T, R and S: control polynomials '* vphi, nnw, nphi, nn and rphi: gradient algorithm parameters '* DtoA (0): controller output sent to DAC via D/A channel 0 '* AtoD (0): tip position sent to ADC via AID channel 1. '* I: Dummy loop counter. '* mode%: Dash-16 parameter. '* 'flagG/o: Dash-16 parameter. '* '* CLS INPUT "n1= ", n1 INPUT "n2= ", n2 INPUT "dp =" dp INPUT "qr =", qr DIM Atod%(4) DIM postbo/o(4) DIM dtoa%(4) 'InitializeDash16 mode% = 0 flagG/o = 0 Atod%(0) = &H300 'Base address of Dash 16 173


Atod%(1) = 2 Atod%(2) = 3 : 'Set DMA level 'Set Interrupt level CALL dasl6(mode%, Atod%(0), flago/o) Set Scan Limits and Channels mode% = 1 'Set mux to toggle from Atodo/o(O) = 1 'channel one to channel 2 Atod%(1) = 2 CALL das16(mode%, Atod%(0), flag%) mode% = 17 Atod%(0) = n1 Atod%(1) = n2 'Set Programmable Timer Rate 'This sets the sampling time to n1*n2/1E6 seconds CALL das16(mode%, Atod%(0}, flag%) 'Reset Motor mode% = 15 'output zero volts to motor dtoa%(0) = 0 dtoao/o(1) = 2048 CALL das16(mode%, dtoa%(0), flag%) n = 13 w = 12 DIM phi(1 TO n) DIM 1(1 TO n) DIM thest(1 TO n) DIM u(1 TO dp) DIM yy(1 TO dp) DIM -y(1 TO dp) DIM ys(l TO dp) DIM yst(l TO dp) DIM ff(1 TO w, 1 TO w) DIM b(1 TO w) DIM s(1 TO w) DIM vphi(1 TO dp) DIM rphi(l TO dp) DIM nnw(1 TO dp) 174


OPEN "ysl.m" FOR INPUT AS #1 OPEN "syst.m" FOR OUTPUT AS #2 OPEN "sadpu.m" FOR OUTPUT AS #3 OPEN "sadpy.m" FOR OUTPUT AS #4 OPEN "snnl.m" FOR OUTPUT AS #5 OPEN "sthes.m" FOR OUTPUT AS #6 FOR j = 1 TO dp INPUT #1, ys(j) NEXT CLOSE #1 sw = 2000 max= 0 thest(1) = -.185 thest(2) = -.17622 thest(3) = -.19 thest(4) = -.146 thest(5) = -.108 thest(6) = -.08 thest(7) = 5.01241E-04 thest(8) = -8.99125-05 thest(9) = l.627832E-04 thest(lO) = 2.005586E-02 thest(ll) = 5.205883E-02 thest(l2) = 4.162508E-02 thest(13) = 2.800663E-02 rphi( 6) = EXP( 1) aa = 1 'MAIN PROGRAM STARTS HERE FORt= 7 TO dp 175


'Read tip position mode% = 4 I This mode allows only one Atod%(0) = 2 1 conversion per sample time Atod%(1) = VARPTR(posfb%(0)) 1 interval. Atodo/o(2) = 1 CALL das16(mode%, Atodo/o(O), flag'/o} POSFEEDBACK = posfb%(0) I 204.8' CALL das16(mode%, Atod%(0), flag%) xs = postbo/o(1) I 204.8 fft1, 1) = 1 fft1, 7) = thest(7) flt2, 1) = thest( 1) ff(2, 2) = 1 flt2, 7) = thest(8) :ff{2, 8) = thest(7) fft3, 1) = thest(2) fft3, 2) = thest(1) fft3, 3) = 1 flt3, 7) = thest(9) flt3, 8) = thest(8) fft3, 9) = thest(7) fft4, 1) = thest(3) fft 4, 2) = thest(2) fft4, 3) = thest(1) ff(4, 4) = 1 fft4, 7) = thest(lO) flt4, 8) = thest(9) flt 4, 9) = thest(8) flt4, 10) = thest(7) fft5, 1) = thest(4) flt5, 2) = thest(3) ff(5, 3) = thest(2) fft5, 4) = thest(1) fft5, 5) = 1 ff( 5, 7) = thest( 11) flt5, 8) = thest(10) AR+q-dBS=Am is put in matrix form 176


ff(5, 9) = thest(9) ff(5, 10) = thest(8) ff(5, 11) = thest(7) ff(6, 1) = thest(5) ff(6, 2) = thest(4) ff(6, 3) = thest(3) fft6, 4) = thest(2) fft6, 5) = thest(1) fft6, 6) = 1 ff(6, 7) = thest(12) ff{6, 8) = thest(ll) ff{6, 9) = thest(lO) fft6, 10) = thest(9) ff{6, 11) = thest(8) ff(6, 12) = thest(7) ff{7, 1) = thest( 6) ff(7, 2) = thest(5) ff(7, 3) = tbest(4) ff(7, 4) = thest(3) ff(7, 5) = thest(2) ff{7, 6) = thest(1) ff(7, 7) = thest(13) ff(7, 8) = thest(12) ff{7, 9) = thest(ll) ff(7, 10) = thest(10) ff(7, 11) = tbest(9) ff{7, 12) = thest(8) ff(8, 2) = thest(6) ff(8, 3) = thest(5) ff(8, 4) = thest(4) ff{8, 5) = thest(3) ff(8, 6) = thest(2) ff(8, 8) = thest(13) ff(8, 9) = thest(12) ff(8, 10) = thest(ll) ff{8, 11) = thest(lO) ff{8, 12) = thest(9) ff{9, 3) = thest(6) 177


ff(9, 4) = thest(5) ff(9, 5) = thest(4) ff(9, 6) = thest(3) ff(9, 9) = thest(13) ff(9, 10) = thest(12) flt9, 11) = thest(ll) ff(9, 12) = thest(10) :ffl:10, 4) = thest(6) ff{10, 5) = thest(5) ff(10, 6) = thest(4) :ffl:10, 10) = thest(l3) ff(IO, 11) = thest(12) ff(10, 12) = thest(ll) ff(11, 5) = thest(6) ff(ll, 6) = thest(5) :ffl:11, 11) = thest(13) :ffl:11, 12) = thest(12) :ffl:12, 6) = thest(6) ff(12, 12) = thest(13) b(1) = -thest(1) b(2) = -thest(2) b(3) = -thest(3) b(4) = -thest(4) b(5) = -thest(5) b(6) = -thest(6) b(7) = 0 b(8) = 0 b(9) = 0 b(10) = 0 b(ll) = 0 b(l2) = 0 FOR i = 1 TO w subroutine for gauss elimination starts temp = ABS(ff(i, 1)) 'here. FOR j = 1 TO w -1 IF temp < ABS(ff(i, j + 1)) THEN temp = ABS(ff(i, j + 1)) 178


END IF s(i) = temp NEXT NEXT gg = 0 FORm= I TOw gg=gg+I tempi = ABS(ff(m, m)) I s(m) FOR c = m + I TO w IF tempi < ABS(ff(c, m)) I s(c) THEN tempi = ABS(ff(c, m)) I s(c) temp2 = s(m) s(m) = s(c) s(c) = temp2 FORi= m TO gg FOR j = I TO w temp3 = ff( i, j) ff(i, j) = ff(c, j) ff(c, j) = temp3 NEXT NEXT temp4 = b(m) b(m) = b(c) b(c) = temp4 END IF NEXT FOR i = m + I TO w ff(i, m) = ff(i, m) I tT(m, m) FORj =m+ 1 TOw ff(i, j) = ff(i, j) -ff(i, m) ff(m, j) NEXT b(i) = b(i) ff(i, m) b(m) ff(i, m) = 0 NEXT NEXT 'end of m b(w) = b(w) I ff(w, w) 179


kk=w FOR i = 1 TO w -1 kk=kk-1 FOR j = kk + 1 TO w b(kk) = b(kk) ff(kk, j) bG) NEXT b(kk) = b(kk) I ff(kk, kk) NEXT 1 subroutine for gauss elimination ends y(t + l) = xs I calculate the control signal u1= -b(1) u(t -1) b(2) u(t 2) b(3) u(t 3) b(4) u(t 4) u2 = -b(5) u(t -5) -b(6) u(t 6) u3 = -b(7) y(t) b(S) y(t -1) b(9) y(t 2) b(10) y(t 3) u4 = -b(ll) y(t 4) b(12) y(t 5) uS = qr ys(t) u(t) = u1 + u2 + u3 + u4 + uS phi(1) = -y(t) phi(2) = -y(t -1) phi(3) = -y(t 2) phi(4) = -y(t 3) phi(5) = -y(t 4) phi(6) = -y(t -5) phi(7) = u(t) phi(S) = u(t -1) phi(9) = u(t 2) phi(10) = u(t 3) phi(ll) = u(t 4) phi(12) = u(t 5) phi(13) = u(t 6) phiphi = 0 1 stochastic gradient algorithm starts here FOR i = 1 TO n 180


phiphi = phi(i) phi(i) + phiphi NEXT vphi(t) = phiphi rphi(t) = rphi(t -1) + vphi(t) IF vphi(t) > vphi(t -1) THEN mphi = vphi(t) ELSE mphi = vphi(t -1) END IF IF max > mphi THEN nn1 = max ELSE nn1 = mphi END IF max = nn1 tmphi = (SQR(rphi(t)) (LOG(rphi(t))) 1.1) IF nn1 > nuphi THEN nn = nn1 ELSE nn = mrphi END IF nnw(t) = nn thestphi = 0 FOR i = 1 TO n thestphi = thest(i) phi(i) + thestphi NEXT yst( t) = thestphi FOR i = 1 TO n thest(i) = thest(i) + (aa phi(i) (y(t + 1) yst(t))) I nnw(t) NEXT 'stochastic gradient algorithm ends here T= 11( thest(7)+thest(8)+thest(9)+thest( 1 O)+thest( 11 )+thest( 12 )+thest( 13)) in = -1 u(t) 181


'Check M, make sure it does not exceed abs(2.5) IF ABS(m) > 2.5 THEN m =2.5 SGN(m) 'Output Control Voltage mode% = IS dtoa%(0) = 0 Select D/A Channel 0 dtoa%(1) = (m + 2.4475) 8I9 Convert to bits CALL dasi6(mode%, dtoa%(0), flagG/o) NEXT end of loop 'Send zero volts to the mode% = IS 'motor dtoao/o(O) = 0 dtoa%(I) = 2048 CALL dasi6(mode%, dtoao/o(O), flag%) FOR i = I TO dp record data PRINT #2, yst(i) PRINT #3, u(i) PRINT #4, y(i) PRINT #5, nnw(i) NEXT FOR i = I TOn PRINT #6, thest(i) NEXT FOR jj = I TO dp subroutine for plotting starts here yy(jj) = 5 y(jj) NEXT SCREEN I VIEW (50, 20)-(290, 140), I COLOR 0, 2 WINDOW (0, -2)-(sw, 2) style% = &H7777 LINE (sw, 0)-(0, 0) FOR qx = 0 TO sw -I 182


CLOSE #2 CLOSE #3 LINE -( qx, yy( qx + 1) ), , style% NEXT LOCATE 2, 16: PRINT "OUTPUT OF 1HE SYSTEM" LOCATE 20, 5: PRINT 0 LOCATE 20, 12: PRINT 500 LOCATE 20, 19: PRINT 1000 LOCATE 20, 27: PRINT 1500 LOCATE 11, 3: PRINT 0 LOCATE 6, 3: PRINT .2 LOCATE 15, 3: PRINT -.2 LOCATE 23, 16: PRINT "Sampling ,k" SLEEP CLS SCREEN 1 VIEW (50, 20)-(290, 140), 1 COLOR 0, 2 WINDOW (0, -2)-(sw, 2) style% = &H7777 LINE (sw, 0)-(0, 0) FOR qx = 0 TO sw -1 LINE -( qx, u( qx + 1) ), , style% NEXT LOCATE 2, 16: PRINT "CONTROL INPUT II LOCA1E 20, 5: PRINT 0 LOCA 1E 20, 12: PRINT 500 LOCA1E 20, 19: PRINT 1000 LOCA1E 20, 27: PRINT 1500 LOCA1E 11, 3: PRINT 0 LOCA1E 6, 3: PRINT .2 LOCA1E 15, 3: PRINT -.2 LOCA1E 23, 16: PRINT "Sampling ,k" SLEEP subroutine for plotting starts here 183


CLOSE #4 CLOSE #5 CLOSE #6 'Reset Motor 'Send zero volts to the mode% = 15 'motor dtoa%(0) = 0 dtoao/o( I) = 2048 CALL dasl6(mode%, dtoa%(0), flagG/o) END PRINT PROGRAM TERMINATED." 184


REFERENCES

[1] Astrom, K.J. and Wittenmark, B., "Self-Tuning Controllers Based on Pole-Zero Placement," IEE Proceedings-D, Vol. 127, May 1980, pp. 120-130.

[2] Bialasiewicz, J.T., "Advanced System Identification Techniques for Wind Turbine Structures with Special Emphasis on Modal Parameters," National Renewable Energy Laboratory.

[3] Fraser, A.R. and Daniel, R.W., "Perturbation Techniques for Flexible Manipulators," Kluwer Academic Publishers, 1991.

[4] Goodwin, G.C. and Sin, K.S., "Adaptive Filtering, Prediction and Control," Prentice Hall, 1984.

[5] Graseby Optronics, "User Manual," P/N 79-10-009, Revision C, August 1992.

[6] Juang, J.-N., "Applied System Identification," Prentice Hall, 1994.

[7] Juang, J.-N., Horta, L.G. and Phan, M., "System/Observer/Controller Identification Toolbox," NASA Technical Memorandum 107566, NASA LaRC, February 1992.

[8] Juang, J.-N., Phan, M., Horta, L.G. and Longman, R.W., "Identification of Observer/Kalman Filter Markov Parameters: Theory and Experiments," Journal of Guidance, Control, and Dynamics, Vol. 16, No. 2, March-April 1993, pp. 320-329.

[9] Kraft, S.D., "Design and Evaluation of a Microprocessor-Based Adaptive DC Motor Position Control System," M.S. thesis, University of Colorado, 1984.

[10] Landau, I.D., "System Identification and Control Design," Prentice Hall, 1990.

[11] Newland, D.E., "An Introduction to Random Vibrations and Spectral Analysis," Longman, 1984.

[12] Phan, M., Juang, J.-N., Horta, L.G. and Longman, R.W., "Identification of Linear Systems by an Asymptotically Stable Observer," NASA Technical Paper 3164, June 1992.

[13] Ogata, K., "Discrete-Time Control Systems," Prentice Hall, 1987.

[14] Radenkovic, M.S. and Michel, A.N., "Robust Adaptive Systems and Self-Stabilization," IEEE Transactions on Automatic Control, Vol. 37, 1992, pp. 1355-1369.

[15] Radenkovic, M.S. and Michel, A.N., "Possible Bursting Phenomena in Self-Tuning Control," International Journal of Adaptive Control and Signal Processing, Vol. 7, 1993.

[16] Yang, T.-C., Yang, J.C.W. and Kudva, P., "Adaptive Control of a Single Flexible Manipulator with Unknown Load," IEE Proceedings-D, Vol. 138, 1991, pp. 153-159.