IDENTIFICATION AND CONTROL OF A FLEXIBLE ROBOTIC ARM MODEL

by Anteneh Kebede
B.S., University of Colorado, 1992

A thesis submitted to the Faculty of the Graduate School of the University of Colorado at Denver in partial fulfillment of the requirements for the degree of Master of Science, Electrical Engineering, 1995
This thesis for the Master of Science degree by Anteneh Kebede has been approved by Miloje S. Radenkovic and Hamid Z. Fardi.
Anteneh Kebede (M.S., Electrical Engineering)
Identification and Control of a Flexible Robotic Arm Model
Thesis directed by Professor Jan T. Bialasiewicz

ABSTRACT

The purpose of this thesis is to design a controller which can suppress vibrations of a physical model of a flexible robotic arm. As part of the design, the identification of the flexible robotic arm model was carried out using the observer/Kalman filter identification and recursive least squares algorithms. An adaptive pole assignment controller is used to control the flexible robotic arm model. A gradient estimator is used to eliminate the possible bursting phenomena in the adaptive control system.

This abstract accurately represents the content of the candidate's thesis. I recommend its publication.

Jan T. Bialasiewicz
CONTENTS

CHAPTER
1. INTRODUCTION
2. SYSTEM DESCRIPTION
   2.1 DC Motor
   2.2 Flexible Robotic Arm Model
   2.3 Tip Position Sensor
3. SYSTEM IDENTIFICATION
   3.1 Identification Principle
   3.2 Observer/Kalman Filter Identification
      3.2.1 System and Observer Markov Parameters
      3.2.2 Computation of System Markov Parameters
      3.2.3 Computation of Observer Gain Markov Parameters
      3.2.4 Computational Steps of OKID
   3.3 Recursive Parameter Identification
      3.3.1 Recursive Least Squares Algorithm
   3.4 PRBS Signal for Process Identification
4. CONTROL DESIGN FOR THE FLEXIBLE ROBOTIC ARM MODEL
   4.1 Pole Placement Design in State Space
   4.2 Transfer-Function-Based Design of Linear Regulator
5. ADAPTIVE CONTROLLER
   5.1 Parameter Estimation
      5.1.1 Recursive Least Squares Algorithm
      5.1.2 Gradient Algorithm
   5.2 Adaptive Algorithms for Closed-Loop Pole Placement
   5.3 Simulations
6. IDENTIFICATION OF THE FLEXIBLE ROBOTIC ARM MODEL
   6.1 Observer/Kalman Filter Identification
   6.2 Recursive Least Squares Algorithm
7. EXPERIMENTAL RESULTS
   7.1 Choice of Sampling Frequency and Order of the System Model
   7.2 Non-Adaptive Controller
   7.3 Adaptive Pole Placement
      7.3.1 Deterministic Case
      7.3.2 Stochastic Case
8. CONCLUSION
APPENDIX
REFERENCES
FIGURES

Figure
1.1 Basic structure of an adaptive controller
2.1 Experimental setup of the flexible robotic arm model
2.2 The experimental flexible robotic arm model
2.3 Flexible robotic arm model
3.1 Structure of recursive least squares estimation
3.2 PRBS signal
4.1 Servo system with observed state feedback
4.2 System response and command signal for the servo system with observed state feedback and integral control
4.3 Observer output and command signal for the servo system with observed state feedback and integral control
4.4 Control signal for the servo system with observed state feedback and integral control
4.5 System output and command signal for the servo system with observer state feedback and integral control designed in Example 4.2
4.6 Observer output and command signal for the servo system with observer state feedback and integral control designed in Example 4.2
4.7 Control signal for the servo system with observer state feedback and integral control designed in Example 4.2
4.8 Block diagram of the closed-loop system
4.9 System output and reference signal when no process zeros are cancelled
4.10 Control signal when no process zeros are cancelled
5.1 Schematic diagram of an adaptive control system
5.2 System output and reference signal when a recursive least squares algorithm is used
5.3 Control signal when a recursive least squares algorithm is used
5.4 System output when a recursive least squares algorithm is used in the presence of unmodeled dynamics
5.5 Control signal when a recursive least squares algorithm is used in the presence of unmodeled dynamics
5.6 System output and reference signal when a gradient algorithm is used in the presence of unmodeled dynamics
5.7 Control signal when a gradient algorithm is used in the presence of unmodeled dynamics
6.1 Computational steps of OKID
6.2 Input excitation
6.3 Output measurement of the flexible robotic arm model
6.4 Observer pulse response sequence
6.5 System Markov pulse response
6.6 Observer gain pulse response
6.7 Hankel matrix singular values
6.8 Estimated output of the system
6.9 Predicted output of the identified model and the real output
6.10 Normalized prediction error
6.11 Frequency response of the identified model
6.12 Parameter estimates of the flexible arm when a recursive least squares algorithm is used
6.13 Output measurement when a recursive least squares algorithm is used
6.14 Predicted output when a recursive least squares algorithm is used
7.1 System output when a non-adaptive controller is used
7.2 Control signal when a non-adaptive controller is used
7.3 System output when a non-adaptive controller is used with a payload added at the tip of the flexible arm
7.4 Control signal when a non-adaptive controller is used with a payload added at the tip of the flexible arm
7.5 Tip position response when a gradient estimator is used in the adaptive control system
7.6 Control signal when a gradient estimator is used in the adaptive control system
7.7 Tip position response when a gradient estimator is used in the adaptive control system with a payload added at the tip of the arm
7.8 Control signal when a gradient estimator is used in the adaptive control system with a payload added at the tip of the arm
7.9 System output when a stochastic gradient estimator is used in the adaptive control system
7.10 Control signal when a stochastic gradient estimator is used in the adaptive control system
7.11 Tip position response when a stochastic gradient estimator is used in the adaptive control system with a payload added at the tip of the arm
7.12 Control signal when a stochastic gradient estimator is used in the adaptive control system with a payload added at the tip of the arm
TABLES

Table
2.1 DC motor/tachometer specifications
2.2 Properties of the flexible arm
3.1 Generation of maximum-length PRBS
6.1 Modal parameters
I wish to thank Professor Bialasiewicz for his help in motivating my work on this thesis, for providing me with meaningful direction in this research effort, and for funding my research work. I would also like to thank Professors Radenkovic and Fardi for serving on my thesis committee.
CHAPTER 1
INTRODUCTION

In control systems design, complete knowledge of the system model is essential. A classical control design method can be used when the system model is completely specified [4]. However, in practice system models are unknown or only partially known. Therefore, identification of the system model parameters is necessary for the optimal design of a control system. System identification is the determination of the system model from input/output measurements [10]. Depending on the stability of the system, either an open-loop or a closed-loop identification can be implemented. For unstable systems it is desirable to use closed-loop identification. For time-varying or slowly time-varying systems, identification of the system model alone is not enough. For optimum system performance, it is necessary to design a self-learning controller which can account for variations of the system parameters. For unknown system models, a controller can be combined with on-line parameter estimation, so that the control law is designed using the estimated parameters instead of the true parameters [1,4].
Controllers of this type are known as certainty-equivalent adaptive controllers [4]. The basic structure of an adaptive controller is shown in Fig. 1.1. There exists a wide range of adaptive controllers. The choice of an adaptive controller depends on the system performance requirements and specifications. Some adaptive controllers may be more suitable than others for a particular system.

Fig. 1.1 Basic structure of an adaptive controller (a controller driving the plant, with a parameter estimator updating the controller from the plant inputs and the noise-corrupted outputs).

An adaptive control loop may exhibit bursting phenomena due to unmodeled dynamics and disturbances present in a system. One such system is a flexible manipulator, which is characterized by an infinite set of modes.
However, to design a controller for a flexible manipulator, a model with only a finite number of modes can be used. Thus, the unmodeled dynamics may give rise to possible bursting phenomena and the adaptive control system may become unstable [14]. In this case, a proper choice of an estimator will stabilize the adaptive control system.
CHAPTER 2
SYSTEM DESCRIPTION

The system to be identified and controlled consists of a power amplifier driving a DC motor and a physical model of a flexible robotic arm. The flexible robotic arm model is mounted directly on the shaft of the DC motor, and the DC motor is connected to a computer via an input/output board. The tip position of the arm, which is to be controlled, is measured with a position indicator system which includes an LED, a detector and a lens. The LED is placed at the tip of the robot arm. A lateral effect detector is centered above the center of the arm tip. The detector with a lens covers up to degrees in the horizontal plane. The arm bends freely in the horizontal plane but is stiff in vertical bending. The mechanical configuration of the system is shown in Fig. 2.1, and Fig. 2.2 is a picture of the system.
Fig. 2.1 Experimental setup of the flexible robotic arm model (lens and detector above the arm, LED at the arm tip; fixed load, motor and tachometer at the hub).

Fig. 2.2 The experimental flexible robotic arm model.
2.1 DC Motor

The robotic arm model is mounted on top of the motor. This is an armature-controlled D.C. servomotor with a permanent magnet field and a tachometer connected to the load via a rigid shaft coupler, which has a moment of inertia of 0.0011 oz-in-s²/rad about the rotational axis. The complete model of the motor is described in [9]. The load moment of inertia is 0.62 oz-in-s²/rad about the rotational axis. The robotic arm model is mounted on top of the load. The power amplifier provides current through the motor armature which is proportional to the input voltage applied to the amplifier. Peak output current is limited to .0 A and the output current step rise time is limited to a minimum of 0.7 msec. The combination of the static Coulomb friction and the viscous damping forms the friction torque in the system, given by Eq. (2.1.1), where p is the coefficient of viscous friction, f_c(ω) is the Coulomb friction, f_s(ω,T,J) is the static friction, K_s is the static friction coefficient with a value of 3.0 oz-in, and K_c is the Coulomb friction coefficient with a value of 3.4 oz-in. The coefficients K_c and K_s are obtained from the DC motor system measurements. The differential equations describing the dynamics of the DC
motor and load have the form of Eq. (2.1.2) and Eq. (2.1.3). The time constant of the RC filter at the output of the tachometer circuit is 100 times greater than the time constant τ₁ due to the tachometer armature inductance. The tachometer gain is given by the transfer function of Eq. (2.1.4), where K_v = 10K_g and K_g is the generator constant of the tachometer. The potentiometer gain is given by the transfer function

G_p(s) = K_p (1/τ₁) / (s + 1/τ₁)   (2.1.5)

where K_p is the DC gain of the potentiometer circuit and τ₁ is the time constant of an RC filter which filters the output of the potentiometer. The motor characteristics are given in Table 2.1 [9]. The power amplifier and motor armature circuit provide the necessary power gain required for driving the DC motor through direct control of the motor armature current. The
dynamic equations of the power amplifier are coupled to the nonlinear motor dynamics given by Eq. (2.1.2) and Eq. (2.1.3). Assuming the term K_e Ω(t) to be a constant voltage source, the closed-loop transfer function of the power amplifier is given by

I_a(s)/V_c(s) = 2.06×10⁸ (s + 667) / [(s + 63448)(s + 3379)(s + 646)]   (2.1.6)

Since the power amplifier allows direct torque control, the effects due to the nonlinear armature circuit resistance are not considered in this system.

2.2 Flexible Robotic Arm Model

A flexible robotic arm is a distributed parameter system. Therefore, its analytical model is characterized by an infinite set of natural modes [3]. The flexible robotic arm model is a uniform, slender beam connected to the armature of a D.C. motor via a rigid hub. The physical model of a flexible robotic arm is implemented as a beam which can bend freely in the horizontal plane but is stiff in vertical bending. The geometric and mass properties of the system are summarized in Table 2.2, and a sketch of the flexible robotic arm model is given in Fig. 2.3.
Table 2.1 DC motor/tachometer specifications.

Peak torque: 250 oz-in
Continuous torque: 30 oz-in
Friction torque: 4 oz-in
Damping factor: 0.4 oz-in/krpm
Rotor inertia: 0.0058 oz-in-s²
Weight: 52 oz
Winding temperature: 155 °C
Thermal resistance: 5 °C/W
Torque sensitivity: 10 oz-in/A
Back-EMF: 7.4 V/krpm
Terminal resistance: 1.6 Ohms
Terminal inductance: 2.2 mH
Peak current: 25 A
Voltage gradient: 7 V/krpm
Armature resistance: 36 Ohms
Define the parameters of the arm as follows:

l = length of the beam
x = distance along the length of the beam
r = hub radius
M_p = payload mass
I_p = payload inertia
m = mass per unit length
EI = flexural rigidity of the beam
w(x,t) = flexural deflection of the beam

Table 2.2 Properties of the flexible arm.

Beam length, cm: 72
Beam thickness, mm: 1.5
Beam width, cm: 7.6
Fig. 2.3 Flexible robotic arm model.

As shown in [3], the tip position y is given by

y(x,t) = w(x,t) + (x + r)θ_h(t)   (2.2.1)

The kinetic energy of the system is given by Eq. (2.2.2),
where the overdot denotes the derivative with respect to t and the prime denotes the derivative with respect to x. The first term is due to hub rotation, whereas the integral term arises from the motion of the flexible link. The potential energy of the system is given by

V_e = (1/2) ∫₀^l EI [w″(x,t)]² dx   (2.2.3)

and the external work done on the system by the motor torque T_m is given by

W = ∫₀^{θ_max} T_m dθ_h   (2.2.4)

where θ_max is the total angle of rotation of the arm. It is impractical to design a controller based on a full-order model of the flexible arm, because the flexible arm is characterized by an infinite set of natural modes. Therefore, it is necessary to use a reduced-order lumped-parameter model to design a controller. One way to model a flexible arm is to use the assumed-modes method together with the Lagrange equation [3]. However, in this thesis, to design a controller for the flexible arm we use identified finite-order models
obtained using input/output data.

2.3 Tip Position Sensor

An optical sensor, a UDT SC10/D lateral effect detector, along with a camera lens, was used to measure the tip position of the flexible arm. The distance between the optical sensor and the lens is about 1.0 inch [5]. The detector has a 1 cm × 1 cm active area and will detect light from 350–1100 nm; the maximum resolution is about 1 μm. Its peak responsivity is 0.5 A/W at 900 nm. The detector proportions the photocurrent among its 4 electrodes irrespective of image detail. The detector is incorporated in a computer-based control scheme using a UDT 431 position indicator, which has a resolution of 1 part in 4000 [5]. The 431 position indicator has a bandwidth of 1 kHz and can accommodate a 100000:1 range of light levels. The light source is provided by a high-powered LED. At 100 mA the LED emits 12 mW at 880 nm into a 15° beam. The LED is connected to a 100 mA LED supply connection on the 431 position indicator. The complete description of the UDT 431 x-y position indicator along with the UDT SC10/D lateral effect detector is given in [5].
CHAPTER 3
IDENTIFICATION SCHEMES

3.1 Identification Principles

The dynamical behavior of systems can be described with various model structures, each one dedicated to a particular application. In some cases, the model of a system can be obtained from the laws of chemistry, physics, etc. [4,10]. These types of models give a fairly complete system description. However, such models are in general complex and can rarely be used for the design of control systems [10]. In other cases, it might not be feasible to determine the model of a system precisely from the laws of physics, chemistry, etc. In these instances it may be possible to determine the model of a system by observing the nature of the system's response under appropriate experimental conditions. This procedure is known as system identification, meaning the experimental determination of the model of a dynamic process from input/output measurements [10]. Two types of identification methodologies will be used to identify the parameters of the flexible robotic arm model: the observer/Kalman filter
identification algorithm and the recursive least squares algorithm. The observer/Kalman filter identification is based on the identification of observer Markov parameters, from which the state-space model is obtained. The recursive least squares algorithm is based on minimizing a quadratic cost function.

3.2 Observer/Kalman Filter Identification

Many system identification techniques have been developed and applied to identify a state-space model for modal parameter identification of flexible structures. The modal parameters include frequencies, damping and mode shapes. Most techniques are based on sampled pulse system response histories, which are known as system Markov parameters [6]. One approach solves directly for the system Markov parameters from system input and output data, but this approach has a drawback: it requires inverting an input matrix which becomes particularly large for lightly damped systems. Another approach uses Fast Fourier Transforms of the input/output data to compute the frequency response functions, and then computes the system Markov parameters by using the inverse discrete Fourier transform [6]. Since the system Markov parameters may exhibit very slow decay, an asymptotically stable observer can be used to form a stable discrete state-space
model for the system to be identified [6,8]. In this approach one can assign any desired poles for the observer, and thereby specify the decay rate of the observer Markov parameters as well as the number of parameters needed before they decay to a negligible level.

3.2.1 System and Observer Markov Parameters

Consider the discrete state-space model given by the following equations:

x(k+1) = Ax(k) + Bu(k)   (3.2.1)

y(k) = Cx(k) + Du(k)   (3.2.2)

where x(k) is an n-vector, y(k) is an m-vector, u(k) is an r-vector, A is an n×n matrix, B is an n×r matrix, C is an m×n matrix, and D is an m×r matrix. Assuming that x(0) = 0, the solution of Eq. (3.2.1) for k = 0, 1, ..., l−1 may be
obtained by recursion as follows:

x(0) = 0
x(1) = Bu(0)
x(2) = ABu(0) + Bu(1)
...
x(l−1) = Σ_{i=1}^{l−1} A^{i−1} B u(l−1−i)   (3.2.3)

and one obtains the solution of Eq. (3.2.2) as follows:

y(0) = Du(0)
y(1) = CBu(0) + Du(1)
y(2) = CABu(0) + CBu(1) + Du(2)
...
y(l−1) = Σ_{i=1}^{l−1} CA^{i−1} B u(l−1−i) + Du(l−1)   (3.2.4)

Regrouping Eq. (3.2.3) and Eq. (3.2.4) into matrix form yields

y = [D  CB  CAB  ...  CA^{l−2}B] ·
    [ u(0)  u(1)  u(2)  u(3)  ...  u(l−1) ]
    [       u(0)  u(1)  u(2)  ...  u(l−2) ]
    [             u(0)  u(1)  ...  u(l−3) ]
    [                   u(0)  ...  u(l−4) ]
    [                          ...        ]
    [                               u(0)  ]   (3.2.5)

where the Markov parameter matrix is m×rl and the input matrix is rl×l.
where y = [y(0) y(1) ... y(l−1)]. Equation (3.2.5) can be written in the compact form

y = YU   (3.2.6)

The matrix Y contains the Markov parameters to be determined. The matrix U is an upper triangular input matrix, and is square only for a single input (r = 1). This is the only case in which Y has a unique solution, Y = yU⁻¹. Even for r = 1, the matrix U may become ill-conditioned and the solution may not be attainable [2]. For r ≥ 2, the solution for Y is not unique because there are m×rl unknowns in the Markov parameter matrix but only m×l equations. Assume that the system is asymptotically stable. Then, for sufficiently large p, A^k ≈ 0 for k ≥ p, and Eq. (3.2.6) can be approximated by

y ≈ YU   (3.2.7)

where
y = [y(0) y(1) y(2) ... y(p) ... y(l−1)]   (m×l)

Y = [D  CB  CAB  ...  CA^{p−1}B]   (m×r(p+1))

U = [ u(0)  u(1)  ...  u(p)    ...  u(l−1)   ]
    [       u(0)  ...  u(p−1)  ...  u(l−2)   ]
    [              ...                       ]
    [             u(0)         ...  u(l−p−1) ]   (r(p+1)×l)

By choosing l > r(p+1) we will have more equations (m×l) than unknowns (m×r(p+1)), and we can approximately calculate the p+1 Markov parameters using the pseudoinverse U† of the matrix U, i.e.,

Y = yU†   (3.2.8)

However, for lightly damped systems the size of the matrix U becomes too large to compute the pseudoinverse U† numerically [2]. The solution to this problem is to artificially increase the damping of the system, as proposed in [8]. By artificially increasing the damping of the system and implementing the observer model, it is possible to obtain the solution of the system Markov
parameters. Adding and subtracting Gy(k) on the right-hand side of Eq. (3.2.1) and rearranging yields

x(k+1) = (A + GC)x(k) + (B + GD)u(k) − Gy(k)
       = Āx(k) + B̄v(k)   (3.2.9)

where

Ā = A + GC,   B̄ = [B + GD   −G],   v(k) = [u(k); y(k)]   (3.2.10)

and G is an n×m matrix chosen to make Ā = A + GC as stable as desired [2]. The output of the system is still given by Eq. (3.2.2). The Markov parameters of Eq. (3.2.9) are referred to as observer Markov parameters. The input-output description for the observer model is then given by

y = ȲV   (3.2.11)

where y = [y(0) y(1) y(2) ... y(p) ... y(l−1)] is m×l,
Ȳ = [D  CB̄  CĀB̄  ...  CĀ^{l−2}B̄]   (m×[(m+r)(l−1)+r])

V = [ u(0)  u(1)  u(2)  ...  u(p)    ...  u(l−1)   ]
    [       v(0)  v(1)  ...  v(p−1)  ...  v(l−2)   ]
    [             v(0)  ...  v(p−2)  ...  v(l−3)   ]
    [                    ...                       ]
    [                                     v(0)     ]   ([(m+r)(l−1)+r]×l)

When using real data, the eigenvalues of Ā are placed at the origin [8,12] (i.e., a deadbeat observer is considered), so that CĀ^k B̄ ≈ 0 for k ≥ p. The observer Markov parameters can then be obtained from the real data using Eq. (3.2.11) with the truncated data, i.e., we have

y = ȲV   (3.2.12)

where Ȳ = [D  CB̄  CĀB̄  ...  CĀ^{p−1}B̄] is m×[(m+r)p+r] and
V = [ u(0)  u(1)  u(2)  ...  u(p)    ...  u(l−1)   ]
    [       v(0)  v(1)  ...  v(p−1)  ...  v(l−2)   ]
    [             v(0)  ...  v(p−2)  ...  v(l−3)   ]
    [                    ...                       ]
    [                   v(0)         ...  v(l−p−1) ]   ([(m+r)p+r]×l)

If the initial condition x(0) is not zero, forming the data matrices from time step p onward gives

y = ȲV + CĀ^p x   (3.2.13)
where

Ȳ = [D  CB̄  CĀB̄  ...  CĀ^{p−1}B̄]
y = [y(p) y(p+1) ... y(l−1)]
x = [x(0) x(1) ... x(l−p−1)]

V = [ u(p)    u(p+1)  ...  u(l−1)   ]
    [ v(p−1)  v(p)    ...  v(l−2)   ]
    [ v(p−2)  v(p−1)  ...  v(l−3)   ]
    [           ...                 ]
    [ v(0)    v(1)    ...  v(l−p−1) ]

The effect of the nonzero initial conditions on the measured data becomes negligible after p steps, because the nonzero initial conditions are multiplied by Ā^p [2]. The negligible effect of the initial conditions after p time steps makes the response of the system stationary [6].

3.2.2 Computation of System Markov Parameters

The system matrices A, B, C, and D are computed from the system Markov parameters, and the observer gain matrix G is obtained from the
observer gain Markov parameters. To compute the system Markov parameters Y from the observer Markov parameters Ȳ, following the approach given in [6,8,12], partition Ȳ such that

Ȳ = [Ȳ_0  Ȳ_1  ...  Ȳ_p]   (3.2.14)

Ȳ_k = [Ȳ_k^(1)   −Ȳ_k^(2)]   (3.2.15)

where Ȳ_0 = D. To obtain the Markov parameter Y_1, consider

Y_1 = CB   (3.2.16)

Adding and subtracting the term CGD on the right side of Eq. (3.2.16) and using Eq. (3.2.15), we get

Y_1 = CB + CGD − CGD = C(B + GD) − (CG)D = Ȳ_1^(1) − Ȳ_1^(2)D   (3.2.17)

The Markov parameter CAB is obtained by considering Ȳ_2^(1) in Eq. (3.2.15):
Ȳ_2^(1) = CĀ(B + GD) = CAB + CGCB + C(A + GC)GD

Therefore,

Y_2 = CAB = Ȳ_2^(1) − Ȳ_1^(2)Y_1 − Ȳ_2^(2)D   (3.2.18)

To compute the Markov parameter Y_3, consider Ȳ_3^(1) in Eq. (3.2.14), from which

Y_3 = CA²B = Ȳ_3^(1) − Ȳ_1^(2)Y_2 − Ȳ_2^(2)Y_1 − Ȳ_3^(2)D   (3.2.19)

Similarly, one can obtain the Markov parameters Y_4, Y_5, and so on. As shown in [12], the general relationship between the actual system Markov parameters and the observer Markov parameters is given by
D = Y_0 = Ȳ_0

Y_k = Ȳ_k^(1) − Σ_{i=1}^{k} Ȳ_i^(2) Y_{k−i},   for k = 1, ..., p   (3.2.20)

Y_k = −Σ_{i=1}^{p} Ȳ_i^(2) Y_{k−i},   for k = p+1, ..., ∞

Equation (3.2.20) shows that there are only p independent Markov parameters, and there are only p+1 observer Markov parameters computed as the least squares solution of Eq. (3.2.12). By the proper choice of p, Ȳ_k^(1) and Ȳ_k^(2) are considered to be zero for k > p [8]. As established in [8], the relationship between the system Markov parameters and the observer Markov parameters can be developed further. Define

H = [ Y_2      Y_3      ...  Y_{N+1} ]
    [ Y_3      Y_4      ...  Y_{N+2} ]
    [           ...                  ]
    [ Y_{p+1}  Y_{p+2}  ...  Y_{N+p} ]   (3.2.21)

Ỹ^(2) = [Ȳ_p^(2)  Ȳ_{p−1}^(2)  ...  Ȳ_1^(2)]

where N is a sufficiently large arbitrary number. Further, the Hankel matrix H can be expressed as
H = [C; CA; CA²; ...] A [B  AB  A²B  ...  A^{N−1}B] = RAQ   (3.2.22)

where R is the observability matrix, Q is the controllability matrix, and A is the system state matrix. The size of the matrix H is mp×Nr. From Eq. (3.2.20) and Eq. (3.2.21) one obtains the relation of Eq. (3.2.23). The identified state matrix A represents the controllable and observable part of the system. The maximum rank of H is mp, provided that Nr > mp. The system matrix A (of order n) exists if p is chosen such that mp ≥ n and Ỹ^(2) is obtained uniquely. Therefore, p must be chosen such that mp ≥ n. The product mp is the upper bound on the order of the identified system model. Therefore, it is not necessary to have more system Markov parameters than the number necessary to create a full-rank Hankel matrix [6,8].
3.2.3 Computation of Observer Gain Markov Parameters

To compute the observer gain G, following the approach in [6,8], first recover the parameters

Y_k° = CA^{k−1}G,   k = 1, 2, 3, ...   (3.2.24)

From Eq. (3.2.24), the first parameter is

Y_1° = CG = Ȳ_1^(2)   (3.2.25)

The second parameter is given by

Y_2° = CAG = Ȳ_2^(2) − Ȳ_1^(2)Y_1°   (3.2.26)

The third parameter is given by

Y_3° = CA²G = Ȳ_3^(2) − Ȳ_1^(2)Y_2° − Ȳ_2^(2)Y_1°   (3.2.27)

The next k parameters are computed similarly. As established in [8], the general relationship is given by
Y_1° = Ȳ_1^(2)

Y_k° = Ȳ_k^(2) − Σ_{i=1}^{k−1} Ȳ_i^(2) Y_{k−i}°,   for k = 2, 3, ..., p   (3.2.28)

Y_k° = −Σ_{i=1}^{p} Ȳ_i^(2) Y_{k−i}°,   for k = p+1, ..., ∞

The observer gain G is obtained from

[Y_1°; Y_2°; ...; Y_k°] = RG   (3.2.29)

so that G = R†[Y_1°; Y_2°; ...; Y_k°], where

R = [C; CA; CA²; ...; CA^{k−1}]   (3.2.30)

The integer k has to be sufficiently large so that the observability matrix R has rank n.

3.2.4 Computational Steps of OKID

The computational steps of the observer/Kalman filter identification (OKID) algorithm, given a set of input and output data [6], are listed below:
1) Choose a value of p, which determines the number of observer Markov parameters to be identified from the given set of input and output data, and compute the least-squares solution of the observer Markov parameter matrix Ȳ.

2) Determine the system and observer gain Markov parameters from the identified observer Markov parameters.

3) Identify a state-space model of the system and the observer gain from the recovered sequence of system and observer gain Markov parameters by using ERA (Eigensystem Realization Algorithm) or ERA/DC (Eigensystem Realization Algorithm with Data Correlation).

4) Find the eigensolution of the realized state matrix and transform the realized model to modal coordinates for modal parameter identification, which includes frequencies, dampings and mode shapes at sensor locations.

3.3 Recursive Parameter Identification

An estimation algorithm with a recursive formulation can be implemented on a digital computer to estimate the parameters of the model. There are two classes of parameter estimation algorithms: online and offline. In the offline
estimation algorithm, all the data are available prior to analysis and the data may be treated with no strict time limitations. In the online case, parameters are updated recursively during each sampling interval. Hence, it is necessary to use a relatively simple estimation algorithm in many practical applications.

3.3.1 Recursive Least Squares Algorithm

In this section, the identification of a sampled discrete-time parametric dynamic model is discussed. A schematic diagram of recursive parameter estimation is shown in Fig. 3.1. Consider a discrete-time system given by
y(t+1) = [q^{−d} B(q^{−1}) / A(q^{−1})] u(t+1)   (3.3.1)

where A(q^{−1}) and B(q^{−1}) are polynomials in the unit-delay operator q^{−1} and are defined as follows:

A(q^{−1}) = 1 + a_1 q^{−1} + ... + a_n q^{−n}   (3.3.2)

B(q^{−1}) = b_0 + b_1 q^{−1} + ... + b_m q^{−m}   (3.3.3)

and d represents the pure delay of the system.
Fig. 3.1 Structure of recursive least squares estimation (the plant output y(t+1) is compared with the predicted output ŷ(t+1) to form the prediction error e(t+1), which drives the parameter update).

Equation (3.3.1) can be written as

y(t+1) = θᵀφ(t)   (3.3.4)

where

θᵀ = [a_1 ... a_n, b_0 ... b_m]   (3.3.5)

is the parameter vector, and

φ(t)ᵀ = [−y(t) −y(t−1) ... −y(t+1−n), u(t+1−d) u(t−d) ... u(t+1−m−d)]   (3.3.6)

is the measurement vector.
Then, the prediction model is given by

ŷ(t+1) = θ̂(t)ᵀφ(t)   (3.3.7)

where

θ̂(t)ᵀ = [â_1(t) ... â_n(t), b̂_0(t) ... b̂_m(t)]   (3.3.8)

is the estimated parameter vector. A recursive parameter adaptation algorithm with memory is desired. The least squares algorithm provides a satisfactory convergence speed and a high adaptation gain at the beginning, when the optimum is far away [4,10]. The recursive least squares algorithm minimizes the following quadratic cost function:

J(t) = Σ_{i=1}^{t} [y(i+1) − θ̂(t)ᵀφ(i)]²   (3.3.9)
As shown in [4], the recursive least squares parameter adaptation algorithm is given by

θ̂(t+1) = θ̂(t) + F(t+1)φ(t)ε(t+1)   (3.3.10)

F(t+1)⁻¹ = F(t)⁻¹ + φ(t)φ(t)ᵀ   (3.3.11)

The a posteriori prediction error ε(t+1) is given by

ε(t+1) = y(t+1) − θ̂(t+1)ᵀφ(t)   (3.3.12)

Equation (3.3.10) is based on the a posteriori prediction error, i.e., the error computed after the action of the parameter adaptation algorithm. The recursive least squares algorithm gives less and less weight to new measurements, since it is an algorithm with decreasing adaptation gain. For estimation of time-varying system parameters, a decreasing adaptation gain may not be suitable [10]. There are a number of variants of the recursive least squares algorithm with a different choice of adaptation gain. As shown in [10], the least squares algorithm can be generalized by introducing two weighting sequences λ₁(t) and λ₂(t) into Eq. (3.3.11):
F(t+1)⁻¹ = λ₁(t)F(t)⁻¹ + λ₂(t)φ(t)φ(t)ᵀ   (3.3.13)

A number of choices for λ₁(t) and λ₂(t) are possible, depending on the type of system. For example, for the identification of a slowly time-varying system, a constant λ₁ between 0.95 and 0.99 is a typical choice. The effect of λ₁ is to give less and less weight to the old data. The cost function to be minimized in this case is given by

J(t) = Σ_{i=1}^{t} λ₁^{t−i} [y(i+1) − θ̂(t)ᵀφ(i)]²   (3.3.14)

If it is necessary to avoid a too rapid decrease of the adaptation gain, then one would choose λ₁(t) and λ₂(t) as

λ₁(t) = λ₀λ₁(t−1) + 1 − λ₀,   λ₂(t) = 1

The criterion to be minimized in this case is:
J(t) = Σ_{i=1}^{t} [Π_{j=i}^{t−1} λ₁(j)] [y(i+1) − θ̂(t)ᵀφ(i)]²   (3.3.15)

This type of profile is very suitable for the identification of stationary systems. If λ₁(t) and λ₂(t) are chosen as λ₁(t) = λ₂(t) = 1, Eq. (3.3.11) is obtained. The criterion to be minimized in this case is given by Eq. (3.3.9). This type of choice of λ₁(t) and λ₂(t) is suited for the identification of stationary systems.

3.4 PRBS as Test Signal for Process Identification

In order to correctly identify the parameters of a system, a frequency-rich input must be used. A frequency-rich input can be obtained by the use of pseudo-random binary sequences (PRBS). A PRBS is generated by a digital waveform generator which produces a binary signal by switching randomly between two output levels [2]. The PRBS is characterized by a sequence length within which the pulse widths vary randomly; over a large time horizon, however, it is periodic, with the period defined by the sequence length. The PRBS is generated by means of shift registers. The maximum sequence length L is L = 2^N − 1, where N is the number of stages of the shift register. Each stage of the shift
Table 3.1 Generation of maximum-length PRBS

NUMBER OF STAGES N   SEQUENCE LENGTH L = 2^N − 1   STAGES ADDED l_i and l_h
       4                        15                       3 and 4
       5                        31                       3 and 5
       6                        63                       5 and 6
       7                       127                       4 and 7
       8                       255                       3 and 8
       9                       511                       5 and 9
      10                      1023                       7 and 10

Example 3.1

Suppose that for the system to be identified f_max = 2 Hz and t_R = 0.1 sec. If N = 5 and T_s = 0.1 sec are chosen, then the inequality of Eq. (3.4.1) is satisfied:

N T_s = 0.5 > t_R = 0.1

The plot of a PRBS signal for N = 5 and T_s = 0.1 is shown in Fig. 3.3.
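The shift-register scheme above can be sketched in a few lines of code. The tap pair follows Table 3.1 for N = 5; the function name and interface are illustrative, not from the thesis.

```python
def prbs(n_stages, taps, n_bits):
    """Maximum-length PRBS from an N-stage shift register.

    taps: the 1-based stage pair (l_i, l_h) from Table 3.1 whose modulo-2
    sum feeds stage 1 (e.g. stages 3 and 5 for N = 5).
    """
    reg = [1] * n_stages                          # at least one stage must hold a 1
    out = []
    for _ in range(n_bits):
        out.append(reg[-1])                       # last stage is the output
        fb = reg[taps[0] - 1] ^ reg[taps[1] - 1]  # modulo-2 addition of tapped stages
        reg = [fb] + reg[:-1]                     # shift all digits one place right
    return out

# N = 5: the sequence repeats with period L = 2**5 - 1 = 31
seq = prbs(5, (3, 5), 62)
```

Mapping the 0/1 levels to two symmetric output levels and holding each bit for T_s = 0.1 s reproduces a signal of the kind plotted in Fig. 3.3.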
Fig. 3.3 PRBS signal for N = 5 and T_s = 0.1 sec.
CHAPTER 4

CONTROL DESIGN FOR THE FLEXIBLE ROBOTIC ARM MODEL

The aim of system identification is to design a high-performance control system. There exists a wide range of control laws, and some controllers are more suitable than others for a particular system; controllers are designed according to the requirements imposed on the system. In some cases it may be necessary to combine a parameter estimation scheme with a control law, so that the control law is updated at each sampling interval (adaptive controllers). In other cases, an offline identified model of the system may be enough to design a controller (non-adaptive controller). In this chapter two non-adaptive controllers are designed. The first controller is based on the state-space model; the second is a transfer-function-based linear regulator. In both cases the pole assignment method is used.
4.1 Pole Placement Design in State Space

In this section a controller is designed via pole placement for a servo system. In designing such a controller one would like to utilize feedback of all the state variables. However, not all state variables are available for direct measurement, so it is necessary to estimate those that are not. Observation of only the unmeasurable state variables is known as minimum-order state observation, whereas observation of all state variables is known as full-order observation [13]. Here, the full-order observer is used in designing the controller, because a full-order observer gives better system performance when the measurement of the output variables involves noise. Note that the system matrices used in designing the controller are obtained from real-time identification of the robotic arm model, and the data collected for the identification may be contaminated with measurement noise. Although the measurement noise level is very low, the measurement of the output is relatively inaccurate; hence, using the full-order observer to design a controller for the robotic arm model is well justified. In this section, only the simulation of the system is presented. In Chapter 7, the real-time performance of the system is presented.
Consider the servo system with observed-state feedback shown in Fig. 4.1. The integrator has been added to the system to eliminate the steady-state error. The output state variable is the only state variable available for direct measurement; hence, it is necessary to estimate the other state variables of the robotic arm model. However, for the reasons stated above, all the state variables of the system are estimated.

Fig. 4.1 Servo system with observed state feedback.
The state equation describing the plant is given by

x(k+1) = A x(k) + B u(k)   (4.1.1)

y(k) = C x(k)   (4.1.2)

and the observer equations are given by

x̂(k+1) = A x̂(k) + B u(k) + G[y(k) − ŷ(k)]   (4.1.3)

ŷ(k) = C x̂(k)   (4.1.4)

where

x(k) = plant state vector (n-vector)
u(k) = control vector (m-vector)
y(k) = plant output vector (m-vector)
x̂(k) = observer state vector
ŷ(k) = observer output vector
G = n x m observer gain matrix
B = n x m matrix
C = m x n matrix

The integrator state equation is

v(k) = v(k−1) + r(k) − y(k)   (4.1.5)
where

v(k) = actuating error (m-vector)
r(k) = command input vector (m-vector)

Using Eq. (4.1.1) and Eq. (4.1.2), Eq. (4.1.5) can be written as

v(k+1) = v(k) + r(k+1) − y(k+1)
       = v(k) + r(k+1) − C[A x(k) + B u(k)]
       = −CA x(k) + v(k) − CB u(k) + r(k+1)   (4.1.6)

As shown in [13], the control vector u(k) is given by

u(k) = −K2 x̂(k) + K1 v(k)   (4.1.7)

The design parameters are K1, K2, and G. The pole placement design and the observer design are independent of each other, because the closed-loop poles of the observed-state feedback control system consist of the poles due to the pole placement design plus the poles of the observer. According to the separation principle they can be designed separately and then combined to form the observed-state feedback control system. Hence, the integral constant K1 and the state feedback gain K2 are computed separately from the observer gain G. In other words, K1 and K2 are designed without the use of the
observed state feedback. Shifting Eq. (4.1.7) one step in time, we can write

u(k+1) = −K2 x(k+1) + K1 v(k+1)   (4.1.8)

Substituting Eq. (4.1.1) and Eq. (4.1.6) into Eq. (4.1.8), we obtain

u(k+1) = (K2 − K2A − K1CA) x(k) + (I_m − K2B − K1CB) u(k) + K1 r(k+1)   (4.1.9)

Combining Eqs. (4.1.1) and (4.1.9), and introducing an augmented state vector consisting of x(k) and u(k), we obtain

[x(k+1); u(k+1)] = [A, B; K2−K2A−K1CA, I_m−K2B−K1CB] [x(k); u(k)] + [0; K1] r(k+1)   (4.1.10)

where it is assumed that r(k+1) = r for k > 0, and I_m denotes the m-dimensional identity matrix. For sufficiently large k, Eq. (4.1.10) can be written as

[x(∞); u(∞)] = [A, B; K2−K2A−K1CA, I_m−K2B−K1CB] [x(∞); u(∞)] + [0; K1] r   (4.1.11)

Subtracting Eq. (4.1.11) from Eq. (4.1.10), we obtain

[x(k+1)−x(∞); u(k+1)−u(∞)] = [A, B; K2−K2A−K1CA, I_m−K2B−K1CB] [x(k)−x(∞); u(k)−u(∞)]   (4.1.12)
where

ξ(k) = [x(k)−x(∞); u(k)−u(∞)] = (n+m)-vector   (4.1.13)

If we define

Â = [A, B; 0, 0] = (n+m) x (n+m) matrix   (4.1.14a)

B̂ = [0; I_m] = (n+m) x m matrix   (4.1.14b)

K̂ = [K2A + K1CA − K2, K2B + K1CB − I_m]   (4.1.14c)

then Eqs. (4.1.12) and (4.1.13) can be written as

ξ(k+1) = Â ξ(k) + B̂ w(k)   (4.1.15)

w(k) = −K̂ ξ(k)   (4.1.16)

respectively. The controllability matrix for the system defined by Eq. (4.1.15) is given by

[B̂  ÂB̂  ...  Â^(n+m−1)B̂]
which has the dimension (n+m) x m(n+m) and rank n+m. The matrix K̂ can be computed using the pole placement method. Equation (4.1.14c) can be written as

K̂ = [K2, K1] [A−I_n, B; CA, CB] + [0, −I_m]

Hence, we obtain

[K2, K1] = (K̂ + [0, I_m]) [A−I_n, B; CA, CB]^(-1)   (4.1.17)

K̂ can be computed from Ackermann's formula:

K̂ = [0 0 ... 1] [B̂  ÂB̂  ...  Â^(n+m−1)B̂]^(-1) φ(Â)   (4.1.18)

and φ(Â) is given by

φ(Â) = Â^(n+m) + α1 Â^(n+m−1) + ... + α_(n+m) I   (4.1.19)

where the αi's are the coefficients of the desired characteristic equation.
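Equation (4.1.18) can be sketched numerically. The snippet below applies the deadbeat case (all αi = 0, so φ(Â) = Â^n) to a hypothetical second-order single-input plant; the plant matrices and function names are illustrative, not the thesis's identified model.

```python
def matmul2(X, Y):
    """2x2 matrix product using plain lists."""
    return [[sum(X[i][k] * Y[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def ackermann_deadbeat(A, B):
    """K = [0 1][B AB]^(-1) phi(A) with phi(A) = A^2 (Eq. (4.1.18), n = 2)."""
    AB = [sum(A[i][k] * B[k] for k in range(2)) for i in range(2)]
    W = [[B[0], AB[0]], [B[1], AB[1]]]            # controllability matrix [B AB]
    det = W[0][0] * W[1][1] - W[0][1] * W[1][0]
    last = [-W[1][0] / det, W[0][0] / det]        # last row of W^(-1)
    phiA = matmul2(A, A)                          # phi(A) = A^2 for deadbeat
    return [sum(last[k] * phiA[k][j] for k in range(2)) for j in range(2)]

# hypothetical double-integrator plant x(k+1) = A x(k) + B u(k), Ts = 1
A = [[1.0, 1.0], [0.0, 1.0]]
B = [0.5, 1.0]
K = ackermann_deadbeat(A, B)
# the closed-loop matrix A - BK should be nilpotent (both poles at the origin)
Acl = [[A[i][j] - B[i] * K[j] for j in range(2)] for i in range(2)]
Acl2 = matmul2(Acl, Acl)
```

For the deadbeat design any nonzero initial state is driven to zero in at most n steps, which is exactly the nilpotency of A − BK checked above.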
The observer feedback gain matrix G can be obtained from Ackermann's formula:

G = φ(A) [C; CA; CA^2; ...; CA^(n−1)]^(-1) [0; 0; ...; 0; 1]   (4.1.20)

where

φ(A) = A^n + α1 A^(n−1) + ... + αn I   (4.1.21)

Again, the αi's are the coefficients of the desired characteristic equation.

Example 4.1

A numerical example is given to illustrate the computation of the integral gain K1, the state feedback gain K2, and the observer feedback gain G. The plant considered in this example is the robotic arm model, identified using real experimental data (see Chapter 6 for details). The order of the system is 6, and the system has one input and one output. The design requirement is that the response of the system to any command input be deadbeat. In other words, it is required that the observed state approaches the
true state as fast as possible. Note that for the robotic arm model only the output state variable is available for direct measurement. The system matrices are

A = [1.0111e+000  6.8724e-002  4.9805e-003  3.5749e-003  6.9258e-005  6.7708e-005
     1.4429e-001  7.4473e-001  1.9958e-001  1.2432e-002  3.0047e-003  8.5978e-004
     3.0748e-002  6.4023e-001  1.7683e-001  3.0144e-001  3.0228e-003  1.9883e-003
     5.0051e-002  4.5664e-002  9.5602e-001  8.8341e-002  1.2854e-001  2.6451e-002
     8.5083e-003  1.2266e-001  1.0670e-001  6.3616e-001  7.1734e-001  1.3588e-001
     5.2715e-003  1.3169e-001  7.3783e-002  6.1830e-001  6.6445e-001  3.7099e-001]

B = [1.2228  6.2701  6.8474  2.4512  1.9035  1.3527]^T

C = [3.66e-002  9.59e-003  2.26e-003  2.69e-004  2.95e-005  3.63e-006]

From Eqs. (4.1.14a) and (4.1.14b), we obtain

Â = [1.011      6.872e-02  4.981e-03  3.575e-03  6.9258e-05  6.7708e-05  1.223
     1.443e-01  7.447e-01  1.996e-01  1.243e-02  3.005e-03   8.598e-04   6.271
     3.075e-02  6.402e-01  1.768e-01  3.014e-01  3.023e-03   1.988e-03   6.847
     5.008e-02  4.566e-02  9.560e-01  8.834e-02  1.285e-01   2.645e-02   2.451
     8.508e-03  1.227e-01  1.067e-01  6.362e-01  7.173e-01   1.359e-01   1.904
     5.272e-03  1.317e-01  7.378e-02  6.183e-01  6.645e-01   3.710e-01   1.353
     0          0          0          0          0           0           0]
B̂ = [0  0  0  0  0  0  1]^T

The inverse of the controllability matrix is

[B̂  ÂB̂  Â^2B̂  Â^3B̂  Â^4B̂  Â^5B̂  Â^6B̂]^(-1) =

[0          0          0          0          0          0          1
 1.626e-02  7.353e-02  6.406e-01  1.150e-02  1.664e-02  8.378e-03  0
 9.103e-03  6.233e-02  2.251e-02  5.154e-02  4.705e-02  1.271e-02  0
 2.621e-02  3.485e-02  5.932e-02  9.942e-03  6.185e-02  1.564e-03  0
 5.618e-03  1.964e-02  3.448e-02  1.235e-02  5.536e-02  2.526e-02  0
 4.428e-02  6.447e-03  8.861e-03  2.309e-02  5.397e-03  8.925e-02  0
 3.239e-01  1.151e-01  5.473e-02  7.578e-02  2.423e-02  6.745e-02  0]

The closed-loop poles are placed at the origin; hence, the coefficients αi in Eq. (4.1.19) are equal to zero. Note that the order of Â is 7, since n+m = 7. From Eq. (4.1.19) we obtain

φ(Â) = Â^7 =

[9.411e-01  1.936e-01  2.407e-02  1.365e-02  1.152e-03  5.836e-04  2.563
 4.015e-01  2.850e-02  7.642e-03  2.821e-03  1.346e-04  6.342e-05  3.312e-01
 1.783e-01  2.408e-02  3.569e-02  4.83e-03   2.913e-03  9.848e-04  9.253e-02
 2.467e-01  3.048e-02  1.581e-02  3.225e-02  8.704e-03  6.210e-04  1.033
 9.022e-02  8.598e-03  4.974e-02  4.149e-02  3.842e-02  1.017e-02  1.633e-02
 2.085e-01  1.665e-02  7.352e-02  1.042e-02  4.961e-02  1.924e-02  2.862e-01
 0          0          0          0          0          0          0]
The matrix K̂ is computed from Ackermann's formula (Eq. (4.1.18)) and is given by

K̂ = [0.396  0.0637  0.0123  0.00551  0.00121  0.00125  0.933]

K1 and K2 are computed from Eq. (4.1.17) and are given by

K1 = 9.9162
K2 = [1.625  0.027  0.0302  0.0047  0.000319  0.000799]

Referring to Eqs. (4.1.20) and (4.1.21), we obtain

G = A^6 [C; CA; CA^2; CA^3; CA^4; CA^5]^(-1) [0  0  0  0  0  1]^T
Thus, G is given by

G = [25.101  0.450  10.861  46.774  38.827  604.165]^T

The system requires 13 sampling periods to reach the steady state (i.e., any nonzero error is driven to zero in at most 13 sampling periods), because the order of the plant plus the additional integrator is 7 and the order of the state observer is 6. However, as shown in Fig. 4.2, the response required more than 13 sampling periods to settle. This can be explained by the presence of disturbances in the data used for the identification of the system model. The output of the observer approaches the true state as fast as possible, as shown in Fig. 4.3. Fig. 4.4 shows the control input; the change in the magnitude of the control input is due to a change in the input command signal, which is a square wave. In general, the observer poles are chosen so that the observer response is 4 to 5 times faster than the system response [13]. One way to decrease the response time of the observer is to choose all the observer poles at the origin. This is shown in Example 4.2.
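The simulated responses in Figs. 4.2-4.4 come from iterating the plant, observer, and integrator equations together. A minimal loop of that structure can be sketched as follows, using a hypothetical scalar plant and hand-picked stabilizing gains rather than the thesis's sixth-order model and computed gains:

```python
# servo loop of Fig. 4.1: plant (4.1.1), observer (4.1.3), integrator (4.1.5),
# control law (4.1.7); scalar hypothetical plant for illustration only
A, B, C = 0.9, 0.5, 1.0
K1, K2, G = 0.3, 0.6, 0.9        # assumed gains (G = A/C gives a deadbeat observer)
x = x_hat = v = 0.0
r = 0.15                          # step command
ys = []
for k in range(200):
    y = C * x
    v = v + r - y                                     # integrator, Eq. (4.1.5)
    u = -K2 * x_hat + K1 * v                          # observed-state feedback, Eq. (4.1.7)
    x_hat = A * x_hat + B * u + G * (y - C * x_hat)   # observer, Eq. (4.1.3)
    x = A * x + B * u                                 # plant, Eq. (4.1.1)
    ys.append(y)
```

Because of the integrator, the output y converges to the command r without steady-state error, which is the behaviour seen in Fig. 4.2.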
Fig. 4.2 System response and command signal for the servo system with observed state feedback and integral control.
Fig. 4.3 Observer output and command signal for the servo system with observed state feedback and integral control.
Fig. 4.4 Control signal for the servo system with observed state feedback and integral control.

Example 4.2

Consider the system shown in Fig. 4.1. Now it is desired that the closed-loop poles be placed at

z = 0.8252 + j0.1497,  z = 0.8252 − j0.1497

which results in the following characteristic equation:

z^2 − 1.6504z + 0.7034 = 0
The observer poles are placed at the origin; hence, the observer feedback gain matrix G is the same as in Example 4.1, and only K1 and K2 need to be computed in this example. K1 and K2 are obtained from Eq. (4.1.17):

K1 = 0.5255
K2 = [0.1805  0.0158  0.0021  0.0008  0.0061  0.0070]

The responses of the system and the observer are shown in Fig. 4.5 and Fig. 4.6, respectively; it can be seen that they are almost identical. As shown in Fig. 4.5, the system response does not exhibit overshoots, because the observer poles are located at the origin and the closed-loop poles generated by state feedback are at 0.8252 ± j0.1497. The comparison of the results obtained in this example with the results in Example 4.1 shows that the settling time in Fig. 4.5 is smaller than the settling time in Fig. 4.2. This is due to the fact that the response speed of the observer is faster than the response speed of the system, whereas in Example 4.1 the response speeds of the observer and the system were roughly the same. However, the rise time in Fig. 4.2 is smaller than the rise time in Fig. 4.5. As shown in Fig. 4.7, the magnitude of the control signal is smaller than the one in Example 4.1, because only the observer poles are placed at the origin. In Example 4.1 both
the observer poles and the system poles were placed at the origin, which resulted in a large control magnitude. Overall, the design in Example 4.2 gives better system performance.

Fig. 4.5 System output and command signal for the servo system with observed state feedback and integral control designed in Example 4.2.
Fig. 4.6 Observer output and command signal for the servo system with observed state feedback and integral control designed in Example 4.2.
Fig. 4.7 Control signal for the servo system with observed state feedback and integral control designed in Example 4.2.
4.2 Transfer Function-Based Design of Linear Regulator

Consider a process characterized by the following difference equation:

A(q^-1) y(t) = q^-d B(q^-1) u(t)   (4.2.1)

where d is the delay. It is desired to design a controller such that the tip of the robotic arm model follows a given reference signal. This can be achieved using a linear regulator of the form

R(q^-1) u(t) = T(q^-1) r(t) − S(q^-1) y(t)   (4.2.2)

where the coefficients of the polynomials R(q^-1), S(q^-1), and T(q^-1) are control parameters designed to obtain the desired closed-loop response of the system. Combining Eq. (4.2.1) and Eq. (4.2.2), we obtain

y(t) = [q^-d T B / (AR + q^-d BS)] r(t)   (4.2.3)

and the desired closed-loop transfer function is given by

y_m(t) = [q^-d B_m / A_m] r(t)   (4.2.4)
A block diagram of the closed-loop system is shown in Fig. 4.8.

Fig. 4.8 Block diagram of the closed-loop system.

The design task is to design a controller such that the closed-loop system gives the desired response to the command signal r(t), i.e.,

q^-d T B / (AR + q^-d BS) = q^-d B_m / A_m   (4.2.5)
In our design, we shall follow the procedure outlined in [1,4,10]. The coefficients of the polynomials R, S, and T are control parameters which are computed from the following relationships, obtained by equating the numerators and the denominators in Eq. (4.2.5):

AR + q^-d BS = A_m
T = B_m   (4.2.6)

where A_m is a polynomial specifying the desired closed-loop poles. Defining

p = max(n, m)   (4.2.7)

Eq. (4.2.6) has a unique solution when A(q^-1) and B(q^-1) do not have a common factor. The degree of the polynomial A_m(q^-1) should satisfy the inequality

deg A_m(q^-1) ≤ 2p + 1   (4.2.8)

The polynomials R(q^-1) and S(q^-1) are given by

R(q^-1) = 1 + r1 q^-1 + ... + r_nr q^-nr
S(q^-1) = s0 + s1 q^-1 + ... + s_ns q^-ns   (4.2.9)

and the degrees of R(q^-1) and S(q^-1) are obtained from

deg R(q^-1) = n_r = deg B(q^-1) + d − 1
deg S(q^-1) = n_s = deg B(q^-1) − 1   (4.2.10)
In order to solve Eq. (4.2.6), we put it in the matrix form

M x = l   (4.2.11)

where

x = [r0  r1  ...  r_nr  s0  s1  ...  s_ns]^T   (4.2.12)

and M is the square Sylvester-type matrix, with 2p+1 rows, built from the shifted coefficients of A(q^-1) and of q^-d B(q^-1) (shown here for d = 1):

    [1   0   ...  0    0   0   ...  0 ]
    [a1  1        .    b0  0        . ]
    [a2  a1       .    b1  b0       . ]
M = [.   a2       1    .   b1       0 ]
    [an  .        a1   bm  .        b0]
    [.   an       .    .   bm       . ]
    [0   ...      an   0   ...      bm]

The vector x is obtained from
x = M^(-1) l   (4.2.13)

where l is the vector of the coefficients of A_m(q^-1). If A and B are relatively prime, R and S can be designed by assigning the closed-loop poles arbitrarily [1]. The closed-loop poles of the system are obtained from the desired characteristics of the system, and the desired characteristic equation can be obtained from the specification given to the designer. Generally, the desired characteristic polynomial is given for a continuous-time system, specifying the natural frequency ω0 and the damping ratio ζ, and the discretization of the continuous-time system is carried out beforehand with a properly chosen sampling interval T_s. The desired characteristic equation corresponding to deadbeat performance, where all closed-loop poles are located at the origin, is given by

A_m(q^-1) = 1

In this case the system response settles in at most n steps. This method places no restriction on the process zeros, i.e., the process zeros can be stable or unstable.

Procedure 1

Data: It is assumed that the process zeros are the closed-loop zeros; the desired response is characterized by A_m and B_m. Then, as established in [1], the design procedure is summarized as follows:
Step 1. Solve the equation

AR + q^-d BS = A_m

with respect to R and S. The degrees of R and S are given by Eq. (4.2.10).

Step 2. Use the control law given by

R u(t) = T r(t) − S y(t)

where

T = B_m = A_m(1)/B(1)

This design procedure does not cancel unstable system zeros and can therefore be used for nonminimum-phase systems (systems with unstable zeros); cancellation of the unstable process zeros must be avoided. The only restriction of this method is that A and B should not have common factors. However, some system zeros can be cancelled, since deg A_m is normally less than deg(AR + q^-d BS). It follows from Eq. (4.2.5) that factors of B which are not also factors of B_m must be factors of R; the open-loop zeros (factors of B correspond to open-loop zeros) which are not desired closed-loop zeros must be cancelled. Thus B is factored as
B = B⁺ B⁻   (4.2.14)

where the zeros of B⁺ are stable (well-damped modes) and the zeros of B⁻ are unstable (poorly damped modes). A necessary condition for solvability of the servo problem is

B_m = B⁻ B'_m   (4.2.15)

This equation defines B'_m. As shown in [1], the design procedure in which the open-loop zeros that are not desired closed-loop zeros are cancelled can be described as follows:

Procedure 2

Data: Given the process model A and B and the desired response characterized by A_m and B_m. Assume Eqs. (4.2.14) and (4.2.15) are satisfied.

Step 1. Factor the polynomial B into

B = B⁺ B⁻

Step 2. Solve the equation

A R1 + q^-d B⁻ S = A_m
with respect to R1 and S. The degrees of R1 and S are obtained from

deg S = deg B − 1
deg R1 = deg B⁺ + d − 1   (4.2.16)

Step 3. Use the control law given by

R u(t) = T r(t) − S y(t)

where

R = R1 B⁺  and  T = B'_m

A and B⁻ are coprime, since A and B are assumed to be coprime. This method can also be used for nonminimum-phase systems. However, the factorization in Step 1 is time consuming if deg B is very large; in self-tuning algorithms, where computation time is critical, it has to be repeated between consecutive sampling instants. Note that for a minimum-phase system all the process zeros can be cancelled.

Example 4.3

The system considered in this example is the flexible robotic arm model, which is a nonminimum-phase system. Thus, unstable process zeros must
not be cancelled. Further, to avoid the factorization problem, no process zeros are cancelled. Note that the model of the flexible robotic arm was obtained from real-time identification. The model is given by

A(q^-1) y(t) = q^-d B(q^-1) u(t)

where

A(q^-1) = 1 − 0.2505q^-1 − 0.1901q^-2 − 0.1381q^-3 − 0.0927q^-4 − 0.0664q^-5 − 0.0840q^-6

B(q^-1) = 0.0013 − 0.0011q^-1 + 0.012q^-2 + 0.0939q^-3 + 0.1011q^-4 + 0.0622q^-5 + 0.0371q^-6

The desired characteristic equation is the deadbeat one,

A_m(q^-1) = 1

The design parameters are the coefficients of the polynomials T, R, and S. The polynomials R and S are obtained by solving

AR + q^-d BS = A_m

and are given by

R(q^-1) = 1 + 0.2511q^-1 + 0.2549q^-2 + 0.2464q^-3 + 0.1883q^-4 + 0.1023q^-5 + 0.0760q^-6

S(q^-1) = 0.4464 + 0.7068q^-1 + 0.4508q^-2 + 0.1977q^-3 + 0.0791q^-4 + 0.1722q^-5

and

T = A_m(1)/B(1) = 3.3014
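The computation of R and S above amounts to solving the linear system Mx = l of Eq. (4.2.11). A minimal sketch of that computation follows, exercised on a hypothetical first-order plant rather than the sixth-order arm model; the function names are not from the thesis.

```python
def convolve(p, q):
    """Polynomial product, coefficients in ascending powers of q^-1."""
    out = [0.0] * (len(p) + len(q) - 1)
    for i, pi in enumerate(p):
        for j, qj in enumerate(q):
            out[i + j] += pi * qj
    return out

def solve_linear(M, rhs):
    """Gaussian elimination with partial pivoting (plain lists)."""
    n = len(M)
    M = [row[:] + [rhs[i]] for i, row in enumerate(M)]
    for c in range(n):
        piv = max(range(c, n), key=lambda r: abs(M[r][c]))
        M[c], M[piv] = M[piv], M[c]
        for r in range(c + 1, n):
            f = M[r][c] / M[c][c]
            for k in range(c, n + 1):
                M[r][k] -= f * M[c][k]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (M[r][n] - sum(M[r][k] * x[k] for k in range(r + 1, n))) / M[r][r]
    return x

def pole_placement(a, b, d, am):
    """Solve A R + q^-d B S = Am (Eq. (4.2.6)) for monic R and for S.

    a, b, am: coefficient lists in ascending powers of q^-1, a[0] = am[0] = 1.
    Unknowns r1..r_nr and s0..s_ns with nr = m+d-1, ns = n-1 (these equal the
    thesis's degree choices for this model, where deg A = deg B)."""
    n, m = len(a) - 1, len(b) - 1
    nr, ns = m + d - 1, n - 1
    N = nr + ns + 1
    M = [[0.0] * N for _ in range(N)]
    rhs = [0.0] * N
    for k in range(1, N + 1):          # match coefficients of q^-1 .. q^-N
        for j in range(1, nr + 1):     # column for r_j carries a_{k-j}
            if 0 <= k - j <= n:
                M[k - 1][j - 1] = a[k - j]
        for i in range(ns + 1):        # column for s_i carries b_{k-d-i}
            if 0 <= k - d - i <= m:
                M[k - 1][nr + i] = b[k - d - i]
        rhs[k - 1] = (am[k] if k < len(am) else 0.0) - (a[k] if k <= n else 0.0)
    x = solve_linear(M, rhs)
    return [1.0] + x[:nr], x[nr:], sum(am) / sum(b)   # R, S, T = Am(1)/B(1)

# hypothetical first-order plant, deadbeat target Am = 1 (illustration only)
r, s, t = pole_placement([1.0, -0.5], [1.0, 0.2], 1, [1.0])
# verify the closed-loop identity A R + q^-d B S = Am
cl = convolve([1.0, -0.5], r)
bs = convolve([1.0, 0.2], s)
for i, v in enumerate(bs):
    cl[i + 1] += v
```

The residual check at the end confirms that the returned R and S reproduce the desired characteristic polynomial exactly.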
The output of the system and the control input are shown in Fig. 4.9 and Fig. 4.10, respectively.

Fig. 4.9 System output and reference signal when no process zeros are cancelled.
Fig. 4.10 Control signal when no process zeros are cancelled.
CHAPTER 5

ADAPTIVE CONTROL

The basic idea behind designing an adaptive control system is to combine a particular parameter estimation scheme with a control law. The plant parameters are estimated recursively, and the control law parameters are computed at each step using the estimated parameters instead of the true ones [1,4]. This approach is known as certainty equivalence adaptive control. The control law parameters can be identified directly by parameterizing the system in terms of the control law; this class of algorithms is known as direct algorithms. If the parameters of the system are explicitly estimated and are available as intermediate results for computing the control law parameters, the algorithm is called indirect [1,4]. The block diagram of an adaptive control system is shown in Fig. 5.1.
Fig. 5.1 Schematic diagram of an adaptive control system.

5.1 Parameter Estimation

It is highly desirable that the estimation algorithm be simple and easy to implement, because online estimation algorithms must update the parameter estimates between consecutive sampling instants. Two types of estimation schemes are discussed here: the recursive least squares algorithm and the gradient algorithm.
5.1.1 Recursive Least Squares Algorithm

Consider a process model given by

A(q^-1) y(t) = q^-d B(q^-1) u(t)   (5.1.1)

Equation (5.1.1) can be written as

y(t) = θ^T φ(t)   (5.1.2)

where

θ^T = [a1, ..., an, b0, ..., bm]   (5.1.3)

is the parameter vector, and

φ(t)^T = [−y(t−1), ..., −y(t−n), u(t−d), ..., u(t−d−m)]   (5.1.4)

is the measurement vector. If the parameters are unknown, a recursive least squares algorithm can be used to estimate them. The prediction model is described as

ŷ(t) = θ̂(t−1)^T φ(t)   (5.1.5)

where θ̂ is the estimated parameter vector and φ(t) is given by Eq. (5.1.4). As shown in [1,4], the recursive least squares algorithm is given by
θ̂(t) = θ̂(t−1) + [P(t−d−1) φ(t−d) / (λ + φ(t−d)^T P(t−d−1) φ(t−d))] [y(t) − φ(t−d)^T θ̂(t−1)]   (5.1.7)

P(t−d) = (1/λ) [P(t−d−1) − P(t−d−1) φ(t−d) φ(t−d)^T P(t−d−1) / (λ + φ(t−d)^T P(t−d−1) φ(t−d))]   (5.1.8)

The weighting factor λ can be used if greater weight must be given to more recent data in time-varying systems, or to discard initial data in nonlinear systems. Typical values of λ are between 0.95 and 0.99. However, using λ < 1 coupled with a change in the command signal may cause bursts in the system output [1]. The cause of this type of burst can be explained as follows. When there are no changes in the command input for a long time, P(t)φ(t) will be essentially zero and thus there will be no change in the parameter estimate. The negative term in Eq. (5.1.8), which represents the reduction in parameter uncertainty due to the last measurement, then becomes zero, so that Eq. (5.1.8) reduces to

P(t) = P(t−1)/λ   (5.1.9)

and the covariance matrix P becomes very large due to the lack of persistent excitation [1]. A change in the command signal may then result in large changes
in the parameter estimates and thus in the system output. Bursting phenomena may also occur in an adaptive control system if unmodelled dynamics and external disturbances are present [15]. Even with perfect modelling, algorithms with vanishing gain can exhibit bursting: although the least squares algorithm initially gives a fast convergence rate, its gain approaches zero. Bursting occurs periodically, which means that a self-excitation mechanism may stabilize the system [14]. However, these bursts, which cause very large control signals, are not tolerable in practice. There are many ways to eliminate bursts: the use of exponential data weighting with a variable forgetting factor, resetting of the covariance matrix, switching off the estimator if P is not positive semidefinite, or stopping the update of P when the prediction error is smaller than a given value may eliminate a possible bursting phenomenon in the adaptive control loop [1,4]. Although bursting can occur even in the perfect modelling case, we are interested, in this research, in bursting which occurs due to unmodelled dynamics and possibly external disturbances, because the flexible arm has an infinite number of modes and has to be modelled with a finite number of modes. Thus, the unmodelled dynamics in the flexible arm may cause bursting in the adaptive loop. The proposed modifications of the least squares algorithm
in [1] were implemented during the real-time running of the flexible arm system and did not work well. However, the gradient algorithm, which does not require persistently exciting conditions, gave satisfactory results.

5.1.2 Gradient Algorithm

Consider a system with unmodelled dynamics, where Δ_i(q^-1), i = 1, 2, denote additive and multiplicative system perturbations and Δ_3(q^-1) denotes an external disturbance [14]. The polynomials A and B are given by

A(q^-1) = 1 + a1 q^-1 + ... + an q^-n
B(q^-1) = b0 + b1 q^-1 + ... + bm q^-m

Since the parameters are not known, they have to be estimated. Define a vector of parameter estimates

θ̂(t)^T = [â1(t), ..., ân(t), b̂0(t), ..., b̂m(t)]   (5.1.11)

and a measurement vector

φ(t)^T = [−y(t), ..., −y(t−n), u(t), ..., u(t−m)]   (5.1.12)
The cost function to be minimized is the squared prediction error in the deterministic case (Eq. (5.1.13)) and its expected value in the stochastic case (Eq. (5.1.14)). As shown in [14], the gradient algorithm for the deterministic adaptive control system is given by

θ̂(t+1) = θ̂(t) + (a/r(t)) φ(t) [y(t+1) − θ̂(t)^T φ(t)]   (5.1.15)

where

r(t) = λ r(t−1) + φ(t)^T φ(t)   (5.1.16)

and

r(0) > 0,  a > 0   (5.1.17)

where λ is a fixed number satisfying 0 < λ < 1.
As shown in [15], the stochastic gradient algorithm is given by

θ̂(t+1) = θ̂(t) + (a/r(t)) φ(t) [y(t+1) − θ̂(t)^T φ(t)]   (5.1.18)

where

r(t) = r(t−1) + φ(t)^T φ(t),  r(0) = ε   (5.1.19)

with

a > 0,  ε > 0   (5.1.20)

The global stability analysis for the designed adaptive systems in the presence of unmodelled dynamics and external disturbances is developed in [14], where the usefulness of the developed methodology is demonstrated on robust deterministic and stochastic adaptive control systems. A stochastic gradient algorithm is designed in [15], where it is shown to have a better rate of convergence of the tracking error than the extended least squares (or least squares) self-tuning algorithm.
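A normalized-gradient update of the general form above can be sketched as follows. The exact normalization and constants used in [14,15] may differ, and the first-order test plant is purely illustrative.

```python
import random

def gradient_step(theta, phi, y, r_prev, a=0.5, lam=0.99):
    """One normalized-gradient update (a sketch of the form of Eqs. (5.1.15)-(5.1.17))."""
    r = lam * r_prev + sum(p * p for p in phi)        # normalizing gain r(t)
    e = y - sum(th * p for th, p in zip(theta, phi))  # prediction error
    return [th + (a / r) * p * e for th, p in zip(theta, phi)], r

# identify y(t) = -a1 y(t-1) + b0 u(t-1) with a1 = -0.8, b0 = 0.5 (hypothetical)
random.seed(0)
theta, rgain = [0.0, 0.0], 1.0
y_prev, u_prev = 0.0, 0.0
for t in range(3000):
    u = random.choice([-1.0, 1.0])          # PRBS-like excitation
    y = 0.8 * y_prev + 0.5 * u_prev
    phi = [-y_prev, u_prev]                 # regressor [-y(t-1), u(t-1)]
    theta, rgain = gradient_step(theta, phi, y, rgain)
    y_prev, u_prev = y, u
```

Note that with λ < 1 the gain a/r(t) stays bounded away from zero, in contrast to the vanishing gain of least squares discussed in Section 5.1.1.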
5.2 Adaptive Algorithms for Closed-Loop Pole Placement

The parameters to be estimated are the coefficients of A(q^-1) and B(q^-1) in the model

A(q^-1) y(t) = q^-d B(q^-1) u(t)   (5.2.1)

and the desired closed-loop transfer function is given by

y_m(t) = [q^-d B_m(q^-1) / A_m(q^-1)] r(t)

Several assumptions have to be made:

1) n = deg A(q^-1) and m = deg B(q^-1) are known.
2) A(q^-1) and B(q^-1) are relatively coprime.
3) The system delay d is known a priori.
4) The time delay in the desired closed-loop transfer function is equal to or greater than the system's delay.
5) For nonminimum-phase systems, the unstable zeros must be included in the desired closed-loop transfer function as zeros, i.e., B_m = B⁻ B'_m, where the zeros of B⁻ are unstable.

The unstable zeros within the system bandwidth have a significant effect on the
properties of the closed-loop system, since they cannot be cancelled [1]. The control signal is generated by

R(q^-1) u(t) = T(q^-1) r(t) − S(q^-1) y(t)   (5.2.2)

The polynomials R and S are obtained by solving the equation

AR + q^-d BS = A_m   (5.2.3)

where A_m specifies the desired closed-loop poles. If A_m is a stable polynomial, then the closed-loop system is asymptotically stable. Generally, the desired closed-loop poles are chosen as the dominant poles of a second-order continuous-time system [1,10]. For higher-order systems the remaining poles can be assigned near or at the origin; for the deadbeat controller all the closed-loop poles are assigned at the origin. A_m can be chosen by discretization of a second-order continuous-time system as

A_m(q^-1) = 1 − 2 e^(−ζω0Ts) cos(ωTs) q^-1 + e^(−2ζω0Ts) q^-2   (5.2.4)

with the damping coefficient ζ, sampling period Ts, and frequency ω given by

ω = ω0 sqrt(1 − ζ^2)

where ω0 is the natural frequency.
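As a sketch, the coefficients of Eq. (5.2.4) can be computed directly from ζ, ω0, and Ts; the mapping below is the standard pole correspondence z = e^(sTs), and the numerical values are illustrative.

```python
import math

def second_order_am(zeta, w0, ts):
    """Am(q^-1) = 1 + p1 q^-1 + p2 q^-2 from the dominant continuous-time poles
    s = -zeta*w0 +/- j*w, w = w0*sqrt(1 - zeta**2), sampled with period ts."""
    w = w0 * math.sqrt(1.0 - zeta ** 2)     # damped frequency
    radius = math.exp(-zeta * w0 * ts)      # pole magnitude in the z-plane
    return [1.0, -2.0 * radius * math.cos(w * ts), radius ** 2]

am = second_order_am(0.7, 2.0 * math.pi, 0.05)   # zeta = 0.7, w0 = 1 Hz, Ts = 50 ms
```

The resulting pair of complex poles has magnitude e^(−ζω0Ts) < 1, so any Am built this way is a stable polynomial.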
Once the closed-loop system performance is specified, an adaptive algorithm for closed-loop pole assignment can be implemented as in Fig. 5.1. Depending on the type of system (stable or unstable zeros), the following adaptive algorithms can be used for closed-loop pole assignment [1]. The algorithms are based on indirect identification.

Algorithm 1 (no process zeros are cancelled)

Data: It is assumed that the process zeros are the closed-loop zeros. The desired response is characterized by A_m and B_m.

Step 1. Estimate the parameters of the model

A y(t) = q^-d B u(t)

by one of the appropriate estimation schemes.

Step 2. Solve the equation

AR + q^-d BS = A_m   (5.2.5)

with respect to R and S. The degrees of R and S are given by Eq. (4.2.10).

Step 3. Use the control law given by
R u(t) = T r(t) − S y(t)

where

T = B_m = A_m(1)/B(1)

Steps 1 through 3 are repeated at each sampling interval. The algorithm can be used for nonminimum-phase systems, since it does not cancel system zeros. The next algorithm cancels system zeros.

Algorithm 2 (stable process zeros are cancelled)

Data: It is assumed that the unstable process zeros are included in the closed-loop system as zeros. The desired response is characterized by A_m and B_m.

Step 1. Estimate the parameters of the model

A y(t) = q^-d B u(t)

by one of the appropriate estimation schemes.

Step 2. Factor the polynomial B as

B = B⁺ B⁻

where the zeros of B⁺ lie inside the unit circle and the zeros of B⁻ lie outside the unit circle.
Step 3. Solve the equation

A R1 + q^-d B⁻ S = A_m

with respect to R1 and S. The degrees of R1 and S are given by Eq. (4.2.16).

Step 4. Use the control law given by

R u(t) = T r(t) − S y(t)

where

R = R1 B⁺  and  T = B'_m

Steps 1 through 4 are repeated at each sampling period. The drawback of this algorithm is that the factorization in Step 2 is time consuming. Computation time is critical, especially for high-order systems, and the factorization in Step 2 may take too long for the control input to be generated within each sampling period. For minimum-phase systems all the process zeros can be cancelled.

5.3 Simulations

The system used for simulation is the flexible robotic arm model. The polynomials A and B used in the simulations are obtained from real-time identification of the system (see Chapter 6). Since the flexible robotic arm
model is a nonminimum-phase system, Algorithm 1 is used in all simulations, and A_m(q^-1) = 1 is chosen. The reference signal is a square wave of amplitude 0.2 and a period of 500 samples.

Example 5.1

The estimation scheme used in this example is the recursive least squares algorithm given by Eqs. (5.1.7) and (5.1.8). The initial value of the covariance matrix is chosen as P0 = 0.1 I, where I is a 13 by 13 identity matrix. Further, λ = 1 is chosen. The polynomials R, S, and T after 2000 iterations are given by

R(q^-1) = 1 + 0.2511q^-1 + 0.2549q^-2 + 0.2464q^-3 + 0.1883q^-4 + 0.1023q^-5 + 0.0760q^-6

S(q^-1) = 0.4464 + 0.7086q^-1 + 0.4508q^-2 + 0.1977q^-3 + 0.0791q^-4 + 0.1722q^-5

T = 3.3014

respectively. The output and the control signal are shown in Fig. 5.2 and Fig. 5.3, respectively. The large output excursion at the beginning is due to the fact that the initial estimates were far from the optimum. However, as time elapsed, the system output was able to follow the reference signal, due to the convergence of the parameter estimates to the true ones.
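A compact implementation of the estimator of Eqs. (5.1.7)-(5.1.8) used in this example can be sketched as follows; it is written with d = 0 regressor timing for brevity, and the first-order noise-free plant is illustrative, not the 13-parameter arm model.

```python
def rls_update(theta, P, phi, y, lam=1.0):
    """One recursive least squares step, Eqs. (5.1.7)-(5.1.8), plain lists."""
    n = len(theta)
    Pphi = [sum(P[i][j] * phi[j] for j in range(n)) for i in range(n)]
    denom = lam + sum(phi[i] * Pphi[i] for i in range(n))
    e = y - sum(theta[i] * phi[i] for i in range(n))          # prediction error
    theta = [theta[i] + Pphi[i] * e / denom for i in range(n)]
    P = [[(P[i][j] - Pphi[i] * Pphi[j] / denom) / lam for j in range(n)]
         for i in range(n)]
    return theta, P

# estimate theta = [a1, b0] of y(t) = -a1 y(t-1) + b0 u(t-1), noise-free (hypothetical)
import random
random.seed(1)
theta = [0.0, 0.0]
P = [[1000.0, 0.0], [0.0, 1000.0]]       # large P0: little confidence in theta(0)
y_prev, u_prev = 0.0, 0.0
for t in range(300):
    u = random.choice([-1.0, 1.0])
    y = 0.9 * y_prev + 0.5 * u_prev      # true a1 = -0.9, b0 = 0.5
    theta, P = rls_update(theta, P, [-y_prev, u_prev], y)
    y_prev, u_prev = y, u
```

With λ = 1 the gain decays as data accumulate; choosing lam < 1 implements the forgetting factor of Eq. (5.1.8), with the covariance wind-up risk discussed in Section 5.1.1.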
Fig. 5.2 System output and reference signal when a recursive least squares algorithm is used.
Fig. 5.3 Control signal when a recursive least squares algorithm is used.

Example 5.2

In reality the flexible arm is characterized by an infinite number of modes. Thus, we have a case of unmodelled dynamics, which could give rise to bursting phenomena. To imitate this real-time situation in simulation, we estimate 9 parameters for the system described by 13 parameters. The order of the system
PAGE 98
to be estimated is 4. The polynomials A and B to be estimated are given by

A(q^-1) = 1 + a1q^-1 + a2q^-2 + a3q^-3 + a4q^-4
B(q^-1) = b0 + b1q^-1 + b2q^-2 + b3q^-3 + b4q^-4

respectively. All the initial conditions are the same as in Example 5.1. The polynomials R, S and T after 2000 iterations are given by

R(q^-1) = 1 + 0.1740q^-1 + 0.1268q^-2 + 0.1532q^-3 - 0.0126q^-4
S(q^-1) = 2.0162 + 1.0977q^-1 - 0.8229q^-2 + 0.0575q^-3
T = 6.91

respectively. The output and the control signal are shown in Fig. 5.4 and Fig. 5.5, respectively. As shown in Fig. 5.4, except at the beginning, the output of the system follows the reference signal. However, the system behaved poorly when a recursive least squares algorithm was used during the experiment.
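The recursive least squares estimator of Eqs. (5.1.7) and (5.1.8) is not reproduced on this page. A minimal sketch of the standard covariance-form update (in Python rather than the thesis' MATLAB; the variable names are mine) is:

```python
import numpy as np

def rls_step(theta, P, phi, y, lam=1.0):
    """One covariance-form recursive least squares update (the standard
    form of the thesis' Eqs. (5.1.7)-(5.1.8)): adaptation gain,
    prediction error, parameter update, and covariance update with
    forgetting factor lam."""
    denom = lam + phi @ P @ phi
    K = P @ phi / denom                     # adaptation gain vector
    theta = theta + K * (y - phi @ theta)   # correct by prediction error
    P = (P - np.outer(P @ phi, phi @ P) / denom) / lam
    return theta, P
```

Example 5.1 starts from P0 = 0.1 I (13 by 13) and λ = 1; a larger P0 simply gives a larger initial adaptation gain and faster early convergence.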
Fig. 5.4 System output when a recursive least squares algorithm is used in the presence of unmodeled dynamics.
Fig. 5.5 Control signal when a recursive least squares algorithm is used in the presence of unmodeled dynamics.

Example 5.3  During a real-time run the least squares algorithm did not work well. However, the gradient algorithm given by Eq. (5.1.14) gave satisfactory results. Here Example 5.2 is repeated with the estimation scheme given by Eq. (5.1.14). The polynomials R, S and T are obtained from Eq. (5.2.2) and are given by
R(q^-1) = 1 + 0.1917q^-1 + 0.1673q^-2 + 0.2155q^-3 - 0.0355q^-4
S(q^-1) = 2.0002 + 1.3011q^-1 - 0.602q^-2 - 0.1806q^-3
T = 8.2020

The output and the control signal are shown in Fig. 5.6 and Fig. 5.7, respectively.

Fig. 5.6 System output and reference signal when the gradient algorithm is used in the presence of unmodeled dynamics.
Fig. 5.7 Control signal when the gradient algorithm is used in the presence of unmodelled dynamics.
CHAPTER 6
IDENTIFICATION OF THE FLEXIBLE ROBOTIC ARM MODEL

In this chapter the system parameters are identified through two methods: 1) observer/Kalman filter identification, and 2) the recursive least squares algorithm. In both approaches a frequency-rich PRBS signal is used for correct identification of the system.

6.1 Observer/Kalman Filter Identification

The system to be identified has one input and one output. The experiment was performed on the open-loop system. The identification method used in this section is off-line: the data from the output of the system were recorded and made available for later identification of the system. The identification of the system was carried out using the MATLAB program OKID [7], which was developed at NASA Langley Research Center. Following [6,7], the computational steps of OKID are shown in Fig. 6.1. The input is a PRBS signal with a total of 1023 samples. The sampling period is 0.008 seconds.

input and output data -> observer Markov parameters -> system Markov parameters and observer gain Markov parameters -> system matrices and observer gain -> modal parameter identification
Fig. 6.1 Computational steps of OKID.

The first step in identifying the system is to choose p for Eq. (3.2.12); p is chosen to be 6. Thus, the maximum system order will be pm = 6, where m = 1 is the number of outputs. Although p is required to be at least four times larger than the effective order of the system, choosing p = 6 gives the better system response for the given system. The observer Markov parameters are obtained from Eq. (3.2.12). The input excitation and the output measurement
used to compute the observer Markov parameters are shown in Fig. 6.2 and Fig. 6.3, respectively. The observer pulse response sequence is shown in Fig. 6.4.

Fig. 6.2 Input excitation.
Fig. 6.3 Output measurement of the robotic arm model.
Fig. 6.4 Observer pulse response sequence.
Next, the system Markov parameters and the observer gain Markov parameters are computed from the identified parameters. The system Markov parameters are computed by using Eq. (3.1.19), whereas the observer gain Markov parameters are obtained from Eq. (3.2.6). The system pulse response and the observer gain pulse response are shown in Fig. 6.5 and Fig. 6.6, respectively.

Fig. 6.5 System Markov pulse response.
Fig. 6.6 Observer gain pulse response.

The system model and its corresponding observer gain are obtained from the system Markov parameters and observer gain Markov parameters using the MATLAB program ERADC [7]. The system order identified from ERADC was chosen to be 6. The plot of the Hankel matrix singular values is shown in Fig. 6.7. Note that the number of nonzero singular values equals the system order. The magnitude of the Hankel matrix singular values measures the contribution of each state. In general, p should be chosen at least four
times larger than the order of the system model for accurate identification of the Kalman filter gain. Since the system data was assumed to have only a very low measurement noise level, i.e., the signal/noise ratio is relatively large, the order of the system n is chosen to be equal to the number p. The state matrix A, input matrix B, output matrix C and the direct transmission matrix D are given below.

A = [ 1.0111e+000  6.8724e-002  4.9805e-003  3.5149e-003  6.9258e-005  6.7708e-005
      1.4429e-001  7.4473e-001  1.9958e-001  1.2432e-002  3.0047e-003  8.5978e-004
      3.0748e-002  6.4023e-001  1.7683e-001  3.0144e-001  3.0228e-003  1.9883e-003
      5.0051e-002  4.5664e-002  9.5602e-001  8.8341e-002  1.2854e-001  2.6451e-002
      8.5081e-003  1.2266e-001  1.0670e-001  6.3616e-001  7.1734e-001  1.3588e-001
      5.2715e-003  1.3169e-001  7.3783e-002  6.1830e-001  6.6445e-001  3.7099e-001 ]

B = [ 1.2228  6.2701  6.8474  2.4512  1.9035  1.3527 ]'

C = [ 3.66e-002  9.59e-003  2.27e-003  2.69e-004  2.95e-005  3.63e-006 ]

D = 3.1895e-4
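ERADC itself is a NASA program; the core idea it implements, the eigensystem realization algorithm, can be sketched as follows. Assuming a scalar pulse response markov[k] = C A^k B, a Hankel matrix is formed from the Markov parameters, its SVD is truncated at the chosen order (cf. the singular values of Fig. 6.7), and a balanced (A, B, C) is read off. This is a simplified sketch, not the ERADC code:

```python
import numpy as np

def era(markov, order, rows=10, cols=10):
    """Basic eigensystem realization: markov[k] = C A^k B is a scalar
    pulse response.  Build shifted Hankel matrices, truncate the SVD at
    `order`, and read off a balanced realization (A, B, C)."""
    H0 = np.array([[markov[i + j] for j in range(cols)] for i in range(rows)])
    H1 = np.array([[markov[i + j + 1] for j in range(cols)] for i in range(rows)])
    U, s, Vt = np.linalg.svd(H0)
    U, s, Vt = U[:, :order], s[:order], Vt[:order]
    sq = np.sqrt(s)
    A = (U / sq).T @ H1 @ (Vt.T / sq)   # state matrix
    B = (sq[:, None] * Vt)[:, :1]       # first column of controllability factor
    C = (U * sq)[:1, :]                 # first row of observability factor
    return A, B, C
```

The number of singular values retained is exactly the model order read from the Hankel singular value plot, which is how the order 6 was selected here.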
Fig. 6.7 Hankel matrix singular values.

The modal parameters, including frequencies, damping ratios and mode shapes, are obtained from the system matrices. The modal parameters are shown in Table 6.1. The first four modes are the flexible robotic arm model modes and the last two modes are from the DC motor dynamics. In Table 6.1 the fourth column shows the mode singular values and the fifth column shows the Modal Amplitude Coherence.
Table 6.1 Identified modal parameters for the flexible robotic arm model.

Mode No   Damping (%)   Frequency (Hz)   Mode SV    MAC
1         18.246        52.515           0.033084   1
2         18.246        52.515           0.033084   1
3         35.03         27.348           0.11155    1
4         35.03         27.348           0.11155    1

Figure 6.8 shows the estimated output and the measured output, and Fig. 6.9 shows the predicted and the measured outputs of the system. As shown in Fig. 6.8, the estimated output is in good agreement with the measured output, because the estimated output is reconstructed from the identified observer and the identified observer corrects for system uncertainties. In contrast, the predicted output is reconstructed only from the identified model, which does not correct for system uncertainties. As a result there is a significant difference between the predicted output and the measured output. The error between the predicted output and the measured output is shown in Fig. 6.10. Also shown in Fig. 6.10 is the variance of the predicted output. The frequency response of the identified model is shown in Fig. 6.11.
Fig. 6.8 Estimated output of the system.
Fig. 6.9 Predicted output of the identified model and the real output.
Fig. 6.10 Normalized prediction error.
Fig. 6.11 Frequency response of the identified model.
6.2 Recursive Least Squares Algorithm

The recursive least squares algorithm discussed in Chapter 3 is used to identify the parameters of the flexible robotic arm model. As shown in Fig. 6.12, some parameters of the flexible arm do not converge fast enough. The output measurement is shown in Fig. 6.13 and the predicted output is shown in Fig. 6.14. The recursive least squares algorithm with a variable forgetting factor was used to identify the parameters of the plant. The forgetting factor λ1(t) is given by

λ1(t) = λ0 λ1(t-1) + 1 - λ0

where λ1(0) = 0.99 and λ0 = 0.99 were used. These chosen values gave the best overall result. Further, λ2(t) = 1. The criterion minimized in this case is given by Eq. (3.3.14). The reason for choosing the variable forgetting factor was to avoid the rapid decrease in adaptation gain when the estimates are far from the optimum at the beginning of the adaptation process. However, this was not the case for a few parameter estimates, which converge slowly as shown in Fig. 6.12. The reason is that the forgetting factor λ1(t) tends toward 1 quickly, which results in a decreasing gain. However, the use of other values of λ1(0) and λ0 and other profiles of weighting sequences were not as
successful as the one used here for the given system. 0.15 ..,..,....,....,....,...,.en s cu E 0.05 w I : : E f! 0.05 cu c.. 0.1 0.15 0.2 0.25 1j I rTrri i I : : j +ll+t.. r= i l : 0 5 10 15 20 25 30 35 Time (sec) Fig. 6.12 Parameter estimates of the flexible arm when a recursive least squares algorithm is used. 107
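The variable forgetting factor in its usual form λ1(t) = λ0 λ1(t-1) + 1 - λ0 starts below 1, so early data are discounted, and then tends to 1 so that the adaptation gain eventually stops decreasing. A short sketch tabulating the profile for the values used here:

```python
def forgetting_profile(lam1_0=0.99, lam0=0.99, n=200):
    """Tabulate the variable forgetting factor
    lam1(t) = lam0*lam1(t-1) + (1 - lam0), starting from lam1_0 < 1
    (early data discounted) and tending to 1 (no forgetting)."""
    lam1, out = lam1_0, []
    for _ in range(n):
        lam1 = lam0 * lam1 + (1.0 - lam0)
        out.append(lam1)
    return out
```

The closed form is λ1(t) = 1 - (1 - λ1(0)) λ0^t, so with λ1(0) = λ0 = 0.99 the factor rises monotonically from 0.99 toward 1 over the first few hundred samples.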
Fig. 6.13 Output measurement when a recursive least squares algorithm is used.
Fig. 6.14 Predicted output when a recursive least squares algorithm is used.
CHAPTER 7
EXPERIMENTAL RESULTS

A number of experiments were conducted to control the flexible robotic arm model. An adaptive algorithm based on pole assignment was implemented on the computer to control the flexible robotic arm model. The algorithm used in this experiment is described in section 5.2.1. The estimation schemes used in the algorithm are given by Eqs. (5.1.14) and (5.1.17). The attempts to use the recursive least squares estimation scheme were unsuccessful, largely due to the unmodelled dynamics present in the system. The schematic diagram used to implement the adaptive pole placement controller is shown in Fig. 5.1.

7.1 Choice of Sampling Frequency and Order of the System Model

Before implementing the adaptive control scheme, the order of the system, n, has to be determined. Since the system has an infinite number of modes, where each mode is modeled by a second order system, the system cannot be overmodelled. The plant is always undermodelled. However, by
choosing n large enough, the system can be modelled accurately enough. The order of the system n should be chosen large enough so that the dominant poles are included in the design of the controller [16]. But n cannot be too large, because it will take the computer longer to implement the control scheme. The control parameters must be computed during each sampling interval, which would not be possible if n were chosen too large. In this thesis, n = 6 is chosen. This number was determined through experimentation and gave satisfactory results. The sampling period Ts was chosen to be Ts = 0.02 sec. This value of Ts is large enough for the control signal to be computed during each sampling interval, and yet small enough to give a sufficiently high sampling rate. The sampling theorem states that the sampling frequency should be at least twice the system bandwidth. However, in practice, the sampling frequency should be much higher than the system bandwidth, depending on the particular performance requirements of the system. The rule of thumb in choosing the sampling frequency is that it may be as much as 25 times the system bandwidth [2,16]. The bandwidth of the system is 2 Hz. In this thesis, the sampling frequency fs = 50 Hz is chosen. Note that the choice of Ts is limited by the computation speed of the computer.
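The numbers quoted above are mutually consistent: 25 times the 2 Hz bandwidth gives fs = 50 Hz, i.e. Ts = 0.02 sec. As a trivial sketch of the rule of thumb:

```python
def sampling_choice(bandwidth_hz, factor=25):
    """Rule-of-thumb sampling frequency: `factor` times the system
    bandwidth, well above the factor-of-two minimum of the sampling
    theorem.  Returns (fs, Ts)."""
    fs = factor * bandwidth_hz
    return fs, 1.0 / fs

fs, Ts = sampling_choice(2.0)  # the arm's 2 Hz bandwidth
```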
7.2 Non-Adaptive Controller

The parameters identified using the recursive least squares algorithm were used to design the deadbeat pole placement controller. The polynomials R, S and T designed in Example 4.3 were used in the experiment. However, the tip of the flexible arm was frequently going out of the range of the detector, and the output of the flexible arm was approximately twice the reference signal. Therefore, it was decided to change the value of T so that the output of the flexible arm follows the reference signal. The value T = 2, which was found by trial and error, gave better performance. The tip position and the control signal are shown in Fig. 7.1 and Fig. 7.2, respectively. As shown in Fig. 7.1, from t = 0 to t = 10 sec there is a 30% error between the output and the reference signal, and after t > 10 sec the output of the flexible arm did not follow the reference signal exactly. The non-adaptive controller did not reduce the vibration of the flexible arm. However, the system output exhibited little oscillation due to the change in the reference signal. The performance of the system got even worse when a payload was added at the tip of the flexible arm. Just after the release of the payload, the error between the output and the reference signal reduced to 20%. However, for t > 10 sec the error was 12%. The output and the control signal (when a payload was placed at the tip of the arm) are shown in Fig. 7.3 and Fig. 7.4, respectively.
(when a payload was placed at the tip of the arm) are shown in Fig. 7.3 and Fig. 7.4, respectively. :::::. 0.2 fi.................... >0.15 :2 .& ::I 0 E .!! en 0.1 ....................... .................... 0.05 0 ................... 0.05 ....................... .......................................... 0.1 ....................... .................... ........................ 0.15 ................... 0.2 ,..., _, ... ,'ijliiillii 0.25.!.7.:...!=,;!';;o.......... 0 10 20 30 40 50 60 70 80 Time (sec) Fig. 7.1 System output when a nonadaptive controller is used. 113
PAGE 125
Fig. 7.2 Control signal when a non-adaptive controller is used.
Fig. 7.3 System output when a non-adaptive controller is used with a payload added at the tip of the flexible arm.
Fig. 7.4 Control signal when a non-adaptive controller is used with a payload added at the tip of the flexible arm.
7.3 Adaptive Pole Placement

Two types of estimation schemes were used in the adaptive pole placement controller during the experiments: the gradient algorithm for the deterministic case and the stochastic gradient algorithm for the stochastic case.

7.3.1 Deterministic Case

The gradient algorithm given by Eq. (5.1.14) was used in the adaptive controller scheme. The reference signal was a square wave with amplitude 0.2 volts and a period of 1000 samples. The values n0 = 10 and λ = 0.7 were used. Figure 7.5 shows the output of the system for the deterministic case. As shown in Fig. 7.5, the output exhibits oscillation at the beginning of the experiment and at changes of the command signal. The reason the output exhibited oscillation at the beginning was that the parameter estimates were far from the optimum. The oscillations due to the change of the command signal may be explained as follows. When there are no changes and the parameter estimates are constant, the output follows the command signal with no oscillation. However, when the command signal changes, bursting occurs in the adaptive loop. This bursting causes fluctuations in other variables such as
the parameter estimation error [1]. However, a self-stabilizing mechanism occurs in the adaptive system, and the adaptive system remains globally stable [14]. The control signal is shown in Fig. 7.6. Comparing the results of the non-adaptive and adaptive controllers, the vibration of the flexible arm was reduced significantly when the adaptive pole placement controller was used. The output of the system in the non-adaptive case exhibited a smaller overshoot (at the change of the reference signal) over a shorter period than the output of the system in the adaptive case. However, the error between the output and the reference signal in the non-adaptive case is much larger than the error in the adaptive case.
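Eq. (5.1.14) is not reproduced on this page. A common normalized-gradient (projection) form of such an estimator, written with the constants n0 = 10 and λ = 0.7 quoted above, is sketched below; the exact normalization used in the thesis may differ:

```python
import numpy as np

def gradient_step(theta, phi, y, gain=0.7, n0=10.0):
    """Normalized-gradient (projection) parameter update, a common form
    of a gradient estimator.  The constants mirror the experimental
    choices lambda = 0.7 and n0 = 10; `n0` keeps the step size bounded
    even when the regressor phi is small."""
    e = y - phi @ theta                       # prediction error
    return theta + gain * phi * e / (n0 + phi @ phi)
```

Unlike recursive least squares, the gain here does not decay with accumulated data, which is consistent with the observation that this estimator avoided the bursting seen with the least squares scheme.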
Fig. 7.5 Tip position response when a gradient estimator is used in the adaptive control system.
Fig. 7.6 Control signal when a gradient estimator is used in the adaptive control system.

Figures 7.7 and 7.8 show the output and the control signals, respectively, when a payload is added at the tip of the flexible robotic arm model. The weight of the payload is 1.5 oz. The payload was released at t = 2 sec (approximately), when the tip of the arm had reached a quiescent state. As shown in the plots, the adaptive controller was able to adjust to the release of the payload, and the vibration of the flexible arm was reduced significantly.
However, the system exhibited large oscillations when the sign of the reference signal changed. The magnitudes of the overshoots are approximately the same as in the case when no payload was released. However, the settling time was longer with the payload release. Note that the control action was larger than the one without a payload release. A heavier payload results in a larger control action, since the amount of energy necessary to move the payload increases with the payload weight.

Fig. 7.7 Tip position response when a gradient estimator is used in the adaptive control system with a payload added at the tip of the arm.
Fig. 7.8 Control signal when a gradient estimator is used in the adaptive control system with a payload added to the tip of the arm.

7.3.2 Stochastic Case

The parameter estimation scheme for the stochastic case is given in section 5.2.2. The reason for using the stochastic gradient algorithm is to determine if external disturbances affect the output of the system. Although the stochastic
gradient algorithm is used, the disturbance model was not used. The adaptive pole placement controller with the stochastic gradient algorithm is used to control the tip position of the flexible robotic arm model. The design parameters are the polynomials R, S and T. The order of the model is 6, with 13 parameters. The following parameter values were chosen in the stochastic gradient algorithm: a = 1, δ = 0.1 and r(0) = ε. The output of the flexible arm follows the reference signal for t > 10 sec. However, the settling time was larger than in the case where the deterministic gradient algorithm was used, and the vibration of the flexible arm was also larger. Figures 7.9 and 7.10 show the output of the system and the control input, respectively, when the stochastic gradient algorithm was used. The settling time got even larger with the release of the payload. Note that the noise model was not used in the adaptive control system due to the constraint imposed by the computational power of the computer. In other words, the computational time would increase significantly due to the added parameters of the noise model, and computation time is critical since the control signal has to be generated within each sampling interval. The response of the tip position and the control signal with a payload release are shown in Figs. 7.11 and 7.12,
respectively. Note that the weight of the payload was 1.5 oz.

Fig. 7.9 Tip position response when a stochastic gradient estimator is used in the adaptive control system.
Fig. 7.10 Control signal when a stochastic gradient estimator is used in the adaptive control system.
Fig. 7.11 Tip position response when a stochastic gradient estimator is used in the adaptive control system with a payload added at the tip of the arm.
Fig. 7.12 Control signal when a stochastic gradient estimator is used in the adaptive control system with a payload added at the tip of the arm.
CHAPTER 8
CONCLUSION

The identification of the flexible manipulator was carried out using two methods: the observer/Kalman filter identification and the recursive least squares algorithm. The identification of the flexible manipulator using the observer/Kalman filter identification was carried out off-line. Attempts to implement a controller based on the identification of the system using the observer/Kalman filter identification algorithm were unsuccessful. In contrast, the identification of the system using the recursive least squares algorithm was carried out on-line. The corresponding deadbeat controller was relatively successful in controlling the flexible manipulator. The design and implementation of various controllers was shown in Chapter 7. However, the implementation of a pole-placement controller based on a model obtained using the observer/Kalman filter identification was not successful, because the numerical model of the flexible robotic arm was inadequate for the design of a robust controller in the closed-loop application. Therefore, direct use of the on-line identified plant model in control
design was the only way to achieve the desired performance [6]. The adaptive controller with the deterministic gradient algorithm gave the best performance overall. The adaptive controller with the stochastic gradient algorithm could give better performance if the noise model were included in the design of the controller. However, this was not done due to the constraints imposed by the computational power of the computer.
APPENDIX

% Program prbsm.m
% This program is written by Prof. Jan Bialasiewicz (see ref. 2).
% This program generates the pseudorandom binary sequence (PRBS)
% as a test signal for identification of the flexible robotic arm model.
%
N=input('N (=trffs=510)=');
A=input('A(magnitude)=');
sf=input('sf=');
L=2^N-1;
pr=input('period=');
g=pr*L;
e=0;
PRB=zeros(g,1);
for i=1:pr
  SR=ones(1,N);    % initialization of the Shift Register
  SR1=zeros(1,N);  % initialization of the Output Register
  % mod 2 sum of the first and last digit of SR is stored in SR1(1)
  for m=1:L
    f=m+e;
    SRSUM=SR(sf)+SR(N);  % MODULO 2 SUMMATION
    if SRSUM==0
      SR1(1)=0;
    elseif SRSUM==2
      SR1(1)=0;
    else
      SR1(1)=1;
    end
    % OUTPUT GENERATION
    if SR(N)==1
      PRB(f)=A;
    else
      PRB(f)=-A;
    end
    % SHIFTING SR TO THE RIGHT AND UPDATING
    for l=1:N-1
      SR(N+1-l)=SR(N-l);
    end
    SR(1)=0;
    SR=SR+SR1;
  end
  e=e+L;
end
save c:\das16\du.m PRB -ascii
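prbsm.m realizes the PRBS with an N-stage shift register whose feedback is the modulo-2 sum of stages sf and N. The same construction in Python, as a Fibonacci linear-feedback shift register; the tap set shown is a hypothetical choice and must correspond to a primitive polynomial to obtain the full period 2^N - 1:

```python
def prbs(n_bits, taps, amplitude=1.0):
    """PRBS from an n_bits-stage Fibonacci linear-feedback shift
    register, mirroring prbsm.m's modulo-2 feedback.  `taps` are
    1-based stage numbers; for a maximal-length sequence they must
    correspond to a primitive polynomial, e.g. taps=(3, 4) for
    n_bits=4."""
    state = [1] * n_bits                 # nonzero seed, as in prbsm.m
    out = []
    for _ in range(2 ** n_bits - 1):     # one full period
        out.append(amplitude if state[-1] else -amplitude)
        fb = 0
        for t in taps:                   # modulo-2 sum of tapped stages
            fb ^= state[t - 1]
        state = [fb] + state[:-1]        # shift right, insert feedback
    return out
```

A maximal-length sequence is balanced to within one bit: over one period it contains 2^(N-1) high levels and 2^(N-1) - 1 low levels, which is what makes it a convenient frequency-rich test signal.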
% PROGRAM fiokuy.m
% This program is written by Prof. Jan Bialasiewicz (see ref. 2).
% This is the IDENTIFIER program for single input multiple output systems.
% When calling this program the user has to load
% (in the MATLAB format) frequency-rich input-output time histories.
% The OBSERVER/KALMAN FILTER IDENTIFICATION ALGORITHM is used.
%
% User-Defined Parameters:
%
%   T   sampling interval used to obtain input-output time histories
%   u   input sequence (single column vector)
%   y   output sequence (a matrix with m columns)
%   m   number of outputs
%   p   number of observer parameters
%       Note: Maximum order of the system which can be identified is p*m.
%
% Variables Available:
%
%   A1,B1,C1,D1  identified state space representation
%   G            observer gain matrix
%   y            response to u of the system to be identified
%   y1           response to u of the identified system
%   y2=[y y1]
%   error        identification error
%
pt=length(u);   % number of points in the sample
t=[T:T:T*pt]';
m=input('m(number of outputs)=');   % number of system outputs
%r=input('r(number of inputs)=');   % number of system inputs
r=1;
disp('Declare p, the number of observer Markov parameters.')
disp('Maximum order of the system to be identified is p*m')
p=input('p(number of observer Markov parameters)=');
% number of observer Markov parameters
% system of maximum order p*m can be identified
[A1,B1,C1,D1,G]=okid(m,r,T,u,y,'lq',p);   % identification
y1=dlsim(A1,B1,C1,D1,u);   % simulation to determine identification error
y2=[y y1];
plot(y2),title('System and identified model response')
disp('Both real system output and identified system output')
disp('as response to the same input are now displayed in the Figure Window.')
disp('To save the plot you can open New Figure now.')
disp('ENTER to continue.')
pause
error=y-y1;   % identification error
plot(error),title('Identification Error')
disp('Identification error is now displayed in the Figure Window')
disp('Identified system matrices are available as A1, B1, C1, D1')
disp('Observer gain matrix is available as G')
[mag1,phase1,w]=dbode(A1,B1,C1,D1,T,1);
lmag1=20*log10(mag1(:,1));
v=[.1,10,-100,50];
subplot(2,1,1)
semilogx(w,lmag1,'g'),grid, ...
axis(v)
xlabel('frequency (Hz)'), ...
ylabel('Gain dB');
pause
subplot(2,1,2)
semilogx(w,phase1(:,1)),grid, ...
xlabel('frequency (Hz)'), ...
ylabel('phase (deg)');
pause
% program observer.m
% This MATLAB program generates the control signal based on pole
% placement.  It also calculates the integral constant K1, the state
% feedback gain K2 and the observer gain G, which were later used in
% real-time simulation.
%
a0=zeros(1,6);
A1=[a b; a0 0];           % augmented plant (integral action)
B1=[zeros(6,1); 1];
ax0=zeros(6,501);         % initialize the plant state variables
axe0=zeros(6,501);        % initialize the observer state variables
x0=zeros(6,1);
xe0=zeros(6,1);
v0=1;
gh1=inv([B1 A1*B1 f1*B1 f2*B1 f3*B1 f4*B1 f5*B1]);
ap=zeros(1,6);
aap=[ap 1];
dff=f6-1.6504*f5+.7034*f4;
khat=aap*gh1*dff;         % Ackermann's formula
ai=a-eye(6);
ca=c*a;
cb=c*b;
kk=[khat + aap];
k=kk*(inv([ai b; ca cb]));
k1=k(:,7);
for j=1:6; k2(1,j)=k(1,j); end
ff=a*a*a*a*a*a;
ke1=ff*(inv([c;c*a;c*a*a;c*a*a*a;c*a*a*a*a;c*a*a*a*a*a]))*([0;0;0;0;0;1]);
n=input('n= ');
hold off
h=0:1:n;
r=.15*square(2*pi*h/200);
plot(h,r);grid
pause
hold on
v0=.15;
x0(3)=.15;
x0(1)=.19;
for i=1:n+1
  v=c*a*x0 + c*b*k2*x0 + (1-c*b*k1)*v0 + r(i);
  u=k2*xe0 + k1*v;
  x=a*x0 + b*u;
  y=c*x0;
  xe=a*xe0 + b*u + ke1*(y-c*xe0);
  ys=c*xe0;
  yo(:,i)=y;
  yo1(:,i)=ys;
  uo1(:,i)=u;
  ax0(:,i)=x0;
  axe0(:,i)=xe0;
  x0=x;
  v0=v;
  xe0=xe;
end
plot(h,yo,'g');grid;
xlabel('sample k')
ylabel('magnitude');
title('output of the system');
pause
plot(h,yo1,'b');grid;
xlabel('sample k')
ylabel('magnitude');
title('estimated output of the system');
pause
hold off
plot(h,uo1,'b');grid;
xlabel('sample k')
ylabel('magnitude');
title('control input');
pause
% Program rsl.m
% This MATLAB program uses a recursive least squares algorithm to
% estimate the parameters of the system and update the control
% parameters indirectly.  The transfer function-based linear regulator
% is used to generate the control signal.  The desired closed-loop
% poles are placed at the origin.
%
n=13;
N=1999;
ttl=0:1:N+1;
T=1;
w=n-1;
ys=0.2*square(2*pi*ttl/500);
Thest=zeros(n,N+2);
p=0.1*eye(n);
lamo=1;   % forgetting factor (lambda = 1, cf. Example 5.1)
u=zeros(1,N+2);
x=zeros(1271);
y=zeros(1,N+2);
yst=zeros(1,N+2);
yerr=zeros(1,N+2);
b=zeros(12,1);
tho=[0.2316696 0.1983438 0.173241 0.1429359 0.1172126 0.1168989 ...
     2.487187E-04 2.125822E-03 5.835298E-03 6.650744E-02 0.0889677 ...
     0.0554328 4.764577E-02]';
thest(1)=.21;
thest(2)=.1722;
thest(3)=.19;
thest(4)=.154138;
thest(5)=.10855;
thest(6)=.089;
thest(7)=4.012411e-4;
thest(8)=8.991258e-4;
thest(9)=1.0627e-3;
thest(10)=2.005586e-2;
thest(11)=2.205883e-2;
thest(12)=1.162508e-2;
thest(13)=2.806634e-2;
for t=7:N+1   % main program
  a1=thest(1); a2=thest(2); a3=thest(3);
  a4=thest(4); a5=thest(5); a6=thest(6);
  b0=thest(7); b1=thest(8); b2=thest(9);
  b3=thest(10); b4=thest(11); b5=thest(12); b6=thest(13);
  den=[thest(1) thest(2) thest(3) thest(4) thest(5) thest(6)];
  num=[thest(7) thest(8) thest(9) thest(10) thest(11) thest(12) thest(13)];
  roots(den);
  roots(num);
b( 1 )=thest( 1 ); b(2)=thest(2); b(3)=thest(3); b(4)=thest(4); b( 5)=thest( 5); b(6)=thest(6); b(7)=0; b(8)=0; b(9)=0; b(10)=0; b(l1)=0; b(12)=0; for i=l:w % Subroutine for Gauss elimination with pivoting. temp=abs( ff(i, 1) ); for j=l:w1 if temp < abs(ff(ij+1)) 138
PAGE 150
end temp=abs(ff(ij+ 1)); end s(i)=temp; end for m=l:w templ=abs(ff(m,m))/s(m); for c=m+l:w if tempi < abs(ff(c,m))/s(c) templ=abs(ff{c,m))/s(c); temp2=s(m); s(m)=s(c); s( c )=temp2; temp3=ff{m,: ); ff{m,:)=ff{c,:); ff{c,:)=temp3; temp4=b(m); b(m)=b(c); b(c)=temp4; end; % end if end; % c for i=m+l:w ff{i,m )=ff{i,m)/ff( m,m); for j=m+l:w ff(ij)=ff(ij)ff{i,m)*ff(mj); end; %end j b(i)=b(i)ff(i,m)*b(m); ff(i,m)=O; end; % end i end; % m b(w)=b(w)/ff(w,w); k=w for i=l:w1 k=k1; for j=k+l:w b(k)=b(k)ff(kj)*b(j); 139
PAGE 151
end; %j b(k)=b(k)/ff(k,k); end; %i % end of subroutine u1=b(1)*u(t1)b(2)*u(t2)b(3)*u(t3)b(4)*u(t4); u2=b(5)*u(t5)b(6)*u(t6); u3=b(7)*y(t)b(8)*y( t1 )b(9)*y( t2)b( 1 O)*y( t3); u4=b(11 )*y(t4)b(12)*y(t5); uS=T*ys(t); u(t)=(u1+u2+u3+u4+u5); % Control signal generation % *************Recursive least squares algorithm***************** phi=[ y(t) y(t1) y(t2) y(t3) y(t4) y(t5) u(t) u(t1) u(t2) u(t3) u(t4) u(t5) u(t6)]; phi1=[y(t) y(t1) y(t2) y(t3) y(t4) y(t5) u(t) u(t1) u(t2) u(t3) u(t4) u(t5) u(t6) ]; 140
PAGE 152
y(t+ 1)=tbo'*phi1; L=p*phi/(1 +phi'*p*phi); p=(pp*phi*phi'*p/(lamo+phi'*p*phi) )!lamo; yst(t)=thest'*phi ; thest=thest+L *(y(t+ 1 )yst(t));_ %**************end of recursive least squares algorithm************* yerr( t )=y( t )ys( t ); Thest( :,t )=thest; T= 11( thest(7)+thest(8)+thest(9)+ thest( 1 O)+thest(11 )+thest( 12)+thest( 13) ); end plot(ttl,ys); pause hold on plot(ttl,y,'g');grid; xlabel(' Time, (sec)'); ylabel('Output and reference signals y(t) r(t)'); title('FIGURE 3, Output ,y(t), and reference signals r(t)'); gtext('r( t )'); pause hold off plot(Thest( :,2:N+ 1 )','');grid xlabel(' time'); title(' FIGURE 4, parameters estimated'); % end of program rsl.m 141
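The estimator inside rsl.m is the standard recursive least squares update with a forgetting factor: gain L = P*phi/(lam + phi'*P*phi), then the parameter and covariance updates. A minimal stand-alone sketch of that recursion follows; the "true" parameter vector and random regressors are illustrative, not the arm model's coefficients.

```python
import numpy as np

rng = np.random.default_rng(0)
theta_true = np.array([0.5, -0.3, 0.8])   # hypothetical parameters

n = len(theta_true)
P = 100.0 * np.eye(n)     # large initial covariance (uncertain prior)
theta = np.zeros(n)
lam = 0.99                # forgetting factor

for _ in range(500):
    phi = rng.standard_normal(n)           # persistently exciting regressor
    y = phi @ theta_true                   # noise-free measurement
    L = P @ phi / (lam + phi @ P @ phi)    # RLS gain
    theta = theta + L * (y - phi @ theta)  # parameter update
    P = (P - np.outer(L, phi @ P)) / lam   # covariance update
```

With noise-free data and exciting regressors the estimate converges to the true parameters; the forgetting factor lam < 1 keeps the gain from collapsing so the estimator can track slow parameter drift, at the cost of larger steady-state covariance.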
% Program unmorsl.m
% This MATLAB program uses a recursive least squares algorithm, in the
% presence of unmodelled dynamics, to estimate the parameters of the
% system and to update the control parameters indirectly.
%
n = 9;
N = 1999;
tt1 = 0:1:N+1;
w = n - 1;
ys = 0.2*square(2*pi*tt1/500);
Thest = zeros(n,N+2);
p = 0.1*eye(n);
u = zeros(1,N+2);
y = zeros(1,N+2);
tho = [.2316696 .1983438 .173241 .1429359 .1172126 .1168989 ...
       2.487187e-04 2.125822e-03 5.835298e-03 6.650744e-02 ...
       .0889677 .0554328 4.764577e-02]';
thest = zeros(n,1);
yst = zeros(1,N+2);
yerr = zeros(1,N+2);
b = zeros(8,1);
thest(1) = .21;
thest(2) = .176;
thest(3) = .19;
thest(4) = .1542;
thest(5) = 5.0124e-4;
thest(6) = 8.9912e-5;
thest(7) = 1.627e-4;
thest(8) = 2.004586e-2;
thest(9) = 5.20588e-3;
T = 1;
for t = 7:N+1                 % main program
  a1 = thest(1); a2 = thest(2); a3 = thest(3); a4 = thest(4);
  b0 = thest(5); b1 = thest(6); b2 = thest(7);
  b3 = thest(8); b4 = thest(9);
  b(1) = thest(1);
  b(2) = thest(2);
  b(3) = thest(3);
  b(4) = thest(4);
  b(5) = 0; b(6) = 0; b(7) = 0; b(8) = 0;
  % subroutine for Gauss elimination with pivoting
  for i = 1:w
    temp = abs(ff(i,1));
    for j = 1:w-1
      if temp < abs(ff(i,j+1))
        temp = abs(ff(i,j+1));
      end
    end
    s(i) = temp;
  end
  for m = 1:w
    temp1 = abs(ff(m,m))/s(m);
    for c = m+1:w
      if temp1 < abs(ff(c,m))/s(c)
        temp1 = abs(ff(c,m))/s(c);
        temp2 = s(m); s(m) = s(c); s(c) = temp2;
        temp3 = ff(m,:); ff(m,:) = ff(c,:); ff(c,:) = temp3;
        temp4 = b(m); b(m) = b(c); b(c) = temp4;
      end                     % end if
    end                       % c
    for i = m+1:w
      ff(i,m) = ff(i,m)/ff(m,m);
      for j = m+1:w
        ff(i,j) = ff(i,j) - ff(i,m)*ff(m,j);
      end                     % end j
      b(i) = b(i) - ff(i,m)*b(m);
      ff(i,m) = 0;
    end                       % end i
  end                         % m
  b(w) = b(w)/ff(w,w);
  k = w;
  for i = 1:w-1
    k = k - 1;
    for j = k+1:w
      b(k) = b(k) - ff(k,j)*b(j);
    end                       % j
    b(k) = b(k)/ff(k,k);
  end                         % i
  % end of the subroutine
  u1 = -b(1)*u(t-1) - b(2)*u(t-2) - b(3)*u(t-3) - b(4)*u(t-4);
  u2 = -b(5)*y(t) - b(6)*y(t-1) - b(7)*y(t-2) - b(8)*y(t-3);
  u3 = T*ys(t);
  u(t) = u1 + u2 + u3;        % control signal
  % *********** recursive least squares algorithm ***********
  phi = [y(t); y(t-1); y(t-2); y(t-3); ...
         u(t); u(t-1); u(t-2); u(t-3); u(t-4)];
  phi1 = [y(t); y(t-1); y(t-2); y(t-3); y(t-4); y(t-5); ...
          u(t); u(t-1); u(t-2); u(t-3); u(t-4); u(t-5); u(t-6)];
  y(t+1) = tho'*phi1;
  L = p*phi/(1 + phi'*p*phi);
  p = p - p*phi*phi'*p/(1 + phi'*p*phi);
  yst(t) = thest'*phi;
  thest = thest + L*(y(t+1) - yst(t));
  Thest(:,t) = thest;
  T = 1/(thest(5)+thest(6)+thest(7)+thest(8)+thest(9));
end
thest
plot(tt1,ys);
pause
hold on
plot(tt1,yst,'g'); grid;
xlabel('time');
ylabel('Output and reference signals y(t) r(t)');
title('FIGURE 3, Output, y(t), and reference signals r(t)');
gtext('r(t)');
pause
hold off
plot(Thest(:,2:N+1)',''); grid
xlabel('time');
title('FIGURE 4, parameters estimated');
% *********** END of program unmorsl.m ***********
% Program gradien.m
% This MATLAB program uses a gradient algorithm, in the presence of
% unmodelled dynamics, to estimate the parameters of the system and
% to update the control parameters indirectly.
%
n = 9;
N = 1999;
tt1 = 0:1:N+1;
w = n - 1;
ys = 0.2*square(2*pi*tt1/500);
Thest = zeros(n,N+2);
ba = 1;
u = zeros(1,N+2);
y = zeros(1,N+2);
tho = [.2316696 .1983438 .173241 .1429359 .1172126 .1168989 ...
       2.487187e-04 2.125822e-03 5.835298e-03 6.650744e-02 ...
       .0889677 .0554328 4.764577e-02]';
T = 1;
thest = zeros(n,1);
yst = zeros(1,N+2);
yerr = zeros(1,N+2);
b = zeros(8,1);
thest(1) = .21;
thest(2) = .176;
thest(3) = .19;
thest(4) = .1542;
thest(5) = 5.0124e-4;
thest(6) = 8.9912e-5;
thest(7) = 1.627e-4;
thest(8) = 2.004586e-2;
thest(9) = 5.20588e-3;
mphi = zeros(1,N+2);
nphi = zeros(1,N+2);
rr = zeros(1,N+2);
lam = 0.001;
n0 = 30;
for t = 7:N+1                 % main program
  a1 = thest(1); a2 = thest(2); a3 = thest(3); a4 = thest(4);
  b0 = thest(5); b1 = thest(6); b2 = thest(7);
  b3 = thest(8); b4 = thest(9);
  b(1) = thest(1);
  b(2) = thest(2);
  b(3) = thest(3);
  b(4) = thest(4);
  b(5) = 0; b(6) = 0; b(7) = 0; b(8) = 0;
  % subroutine for Gauss elimination with pivoting
  for i = 1:w
    temp = abs(ff(i,1));
    for j = 1:w-1
      if temp < abs(ff(i,j+1))
        temp = abs(ff(i,j+1));
      end
    end
    s(i) = temp;
  end
  for m = 1:w
    temp1 = abs(ff(m,m))/s(m);
    for c = m+1:w
      if temp1 < abs(ff(c,m))/s(c)
        temp1 = abs(ff(c,m))/s(c);
        temp2 = s(m); s(m) = s(c); s(c) = temp2;
        temp3 = ff(m,:); ff(m,:) = ff(c,:); ff(c,:) = temp3;
        temp4 = b(m); b(m) = b(c); b(c) = temp4;
      end                     % end if
    end                       % c
    for i = m+1:w
      ff(i,m) = ff(i,m)/ff(m,m);
      for j = m+1:w
        ff(i,j) = ff(i,j) - ff(i,m)*ff(m,j);
      end                     % end j
      b(i) = b(i) - ff(i,m)*b(m);
      ff(i,m) = 0;
    end                       % end i
  end                         % m
  b(w) = b(w)/ff(w,w);
  k = w;
  for i = 1:w-1
    k = k - 1;
    for j = k+1:w
      b(k) = b(k) - ff(k,j)*b(j);
    end                       % j
    b(k) = b(k)/ff(k,k);
  end                         % i
  % end of subroutine
  u1 = -b(1)*u(t-1) - b(2)*u(t-2) - b(3)*u(t-3) - b(4)*u(t-4);
  u3 = -b(5)*y(t) - b(6)*y(t-1) - b(7)*y(t-2) - b(8)*y(t-3);
  u5 = T*ys(t);
  u(t) = u1 + u3 + u5;        % control signal
  % ************* gradient algorithm ********************
  phi = [y(t); y(t-1); y(t-2); y(t-3); ...
         u(t); u(t-1); u(t-2); u(t-3); u(t-4)];
  phi1 = [y(t); y(t-1); y(t-2); y(t-3); y(t-4); y(t-5); ...
          u(t); u(t-1); u(t-2); u(t-3); u(t-4); u(t-5); u(t-6)];
  y(t+1) = tho'*phi1;
  vphi(t) = phi'*phi;
  mphi(t) = lam*mphi(t-1) + vphi(t);
  rr(t) = n0 + mphi(t);
  L = ba*phi/rr(t);
  yst(t+1) = thest'*phi;
  thest = thest + L*(y(t+1) - yst(t+1));
  % ************* end of gradient algorithm *************
  yerr(t) = y(t) - ys(t);
  Thest(:,t) = thest;
  T = 1/(thest(5)+thest(6)+thest(7)+thest(8)+thest(9));
end
plot(tt1,ys);
pause
hold on
plot(tt1,y,'g'); grid;
xlabel('time');
ylabel('Output and reference signals y(t) r(t)');
title('FIGURE 3, Output ,y(t), and reference signals r(t)');
gtext('r(t)');
pause
hold off
plot(Thest(:,2:N+1)',''); grid
xlabel('time');
title('FIGURE 4, parameters estimated');
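The core of gradien.m is a normalized gradient update: the step is divided by rr(t) = n0 + mphi(t), where mphi filters the regressor energy, so the adaptation gain stays bounded even when the regressor grows. A stand-alone sketch of that recursion is below; the two-parameter "true" system and the random regressors are illustrative only, with the constants mirroring n0, lam, and ba from the listing.

```python
import numpy as np

rng = np.random.default_rng(1)
theta_true = np.array([0.4, -0.2])   # hypothetical parameters

theta = np.zeros(2)
n0, lam, ba = 30.0, 0.001, 1.0       # constants as in gradien.m
mphi = 0.0                           # filtered regressor energy

for _ in range(5000):
    phi = rng.standard_normal(2)
    y = phi @ theta_true             # noise-free measurement
    mphi = lam * mphi + phi @ phi    # mphi(t) = lam*mphi(t-1) + |phi|^2
    rr = n0 + mphi                   # normalizer bounds the step size
    theta = theta + ba * phi * (y - phi @ theta) / rr
```

Because the per-step multiplier |phi|^2/rr is strictly less than one, the update can never diverge in a single step; this is the property the thesis relies on when replacing least squares with a gradient estimator to suppress bursting.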
'* This program implements a recursive least squares algorithm to identify
'* a flexible robotic arm model. This is accomplished through the use of
'* MetraByte's model Dash-16 A/D and D/A convertor.
'*
'* The following variables are used to accomplish this task:
'* p: covariance matrix
'* phi: measurement vector
'* thest: parameter vector
'* lam: forgetting factor
'* DtoA(0): controller output sent to the DAC via D/A channel 0
'* AtoD(0): position feedback sent to the ADC via A/D channel 1
'* i: dummy loop counter
'* mode%: Dash-16 parameter
'* flag%: Dash-16 parameter
'*
CLS
INPUT "dp =", dp
INPUT "n1= ", n1
INPUT "n2= ", n2
DIM Atod%(4)
DIM dio%(4)
DIM posfb%(4)
DIM dtoa%(4)
DIM r(1 TO dp)
'Initialize Dash-16
mode% = 0
flag% = 0
Atod%(0) = &H300                 'Base address of Dash-16
Atod%(1) = 2                     'Set DMA level
Atod%(2) = 3                     'Set Interrupt level
CALL das16(mode%, Atod%(0), flag%)
'Set Scan Limits and Channels
mode% = 1                        'Set mux to toggle from
Atod%(0) = 1                     'channel one to channel
Atod%(1) = 2                     'two.
CALL das16(mode%, Atod%(0), flag%)
'Set Programmable Timer Rate
mode% = 17                       'This sets the sampling time
Atod%(0) = n1                    'to n1*n2/1E6 seconds
Atod%(1) = n2
CALL das16(mode%, Atod%(0), flag%)
'Reset Motor
mode% = 15                       'output zero volts to motor
dtoa%(0) = 0
dtoa%(1) = 2048
CALL das16(mode%, dtoa%(0), flag%)
n = 13
DIM lam(1 TO dp)
DIM p(1 TO n, 1 TO n)
DIM phi(1 TO n)
DIM l(1 TO n)
DIM thest(1 TO n)
DIM u(1 TO dp)
DIM y(1 TO dp)
DIM yst(1 TO dp)
DIM pphi(1 TO n)
DIM phip(1 TO n)
DIM pphip(1 TO n)
DIM pphiphi(1 TO n, 1 TO n)
DIM pphiphip(1 TO n, 1 TO n)
DIM sdiv(1 TO n, 1 TO n)
lam(6) = .99
OPEN "du.m" FOR INPUT AS #1
OPEN "adpu.m" FOR OUTPUT AS #2
OPEN "adpys.m" FOR OUTPUT AS #3
OPEN "adpxs.m" FOR OUTPUT AS #4
OPEN "adtbs.m" FOR OUTPUT AS #5
FOR j = 1 TO dp
  INPUT #1, u(j)
NEXT
CLOSE #1
FOR i = 1 TO n
  p(i, i) = .1
NEXT
'MAIN PROGRAM STARTS HERE
FOR t = 7 TO dp
  'Read data from the tip position sensor.
  mode% = 4                          'This mode allows only 3
  Atod%(0) = 2                       'conversions per sample
  Atod%(1) = VARPTR(posfb%(0))       'time interval.
  Atod%(2) = 1
  CALL das16(mode%, Atod%(0), flag%)
  POSFEEDBACK = posfb%(0) / 204.8
  CALL das16(mode%, Atod%(0), flag%)
  xs = posfb%(1) / 204.8
  y(t + 1) = xs
  'Recursive least squares algorithm starts here.
  phi(1) = y(t)
  phi(2) = y(t - 1)
  phi(3) = y(t - 2)
  phi(4) = y(t - 3)
  phi(5) = y(t - 4)
  phi(6) = y(t - 5)
  phi(7) = u(t)
  phi(8) = u(t - 1)
  phi(9) = u(t - 2)
  phi(10) = u(t - 3)
  phi(11) = u(t - 4)
  phi(12) = u(t - 5)
  phi(13) = u(t - 6)
  lam(t) = .99 * lam(t - 1) + .01
  FOR i = 1 TO n
    pphi(i) = 0
    FOR j = 1 TO n
      pphi(i) = p(i, j) * phi(j) + pphi(i)
    NEXT
  NEXT
  FOR i = 1 TO n
    phip(i) = 0
    FOR j = 1 TO n
      phip(i) = phi(j) * p(j, i) + phip(i)
    NEXT
  NEXT
  phipphi = lam(t)
  FOR i = 1 TO n
    phipphi = phip(i) * phi(i) + phipphi
  NEXT
  FOR i = 1 TO n
    l(i) = pphi(i) / phipphi
  NEXT
  FOR i = 1 TO n
    FOR j = 1 TO n
      pphiphi(i, j) = pphi(i) * phi(j)
    NEXT
  NEXT
  FOR k = 1 TO n
    FOR i = 1 TO n
      pphiphip(k, i) = 0
      FOR j = 1 TO n
        pphiphip(k, i) = pphiphi(k, j) * p(j, i) + pphiphip(k, i)
      NEXT
    NEXT
  NEXT
  FOR i = 1 TO n
    FOR j = 1 TO n
      sdiv(i, j) = pphiphip(i, j) / phipphi
    NEXT
  NEXT
  FOR i = 1 TO n
    FOR j = 1 TO n
      p(i, j) = (p(i, j) - sdiv(i, j)) / lam(t)
    NEXT
  NEXT
  thestphi = 0
  FOR i = 1 TO n
    thestphi = thest(i) * phi(i) + thestphi
  NEXT
  yst(t) = thestphi
  FOR i = 1 TO n
    thest(i) = thest(i) + l(i) * (y(t + 1) - yst(t))
  NEXT
  'Recursive least squares algorithm ends here.
  m = u(t)
  'Check m, make sure it does not exceed abs(.4)
  IF ABS(m) > .4 THEN m = .4 * SGN(m)
  'Output Control Voltage
  mode% = 15
  dtoa%(0) = 0                       'Select D/A Channel 0
  dtoa%(1) = (m + 2.4475) * 819      'Convert to bits
  CALL das16(mode%, dtoa%(0), flag%)
  FOR i = 1 TO n
    PRINT #5, thest(i)
  NEXT
NEXT
'Send zero volts to the motor
mode% = 15
dtoa%(0) = 0
dtoa%(1) = 2048
CALL das16(mode%, dtoa%(0), flag%)
'Write data into files.
FOR ii = 1 TO dp
  PRINT #2, u(ii)
  PRINT #3, yst(ii)
  PRINT #4, y(ii)
NEXT
CLOSE #2
CLOSE #3
CLOSE #4
CLOSE #5
PRINT "PROGRAM TERMINATED"
END
'* This program implements a non-adaptive controller based on pole
'* placement to control a flexible robotic arm model. This is
'* accomplished through the use of MetraByte's model Dash-16 A/D
'* and D/A convertor.
'*
'* The following variables are used to accomplish this task:
'* u: input vector
'* y: output vector
'* ys: square wave input
'* T, R and S: control polynomials
'* DtoA(0): controller output sent to the DAC via D/A channel 0
'* AtoD(0): position feedback sent to the ADC via A/D channel 1
'* i: dummy loop counter
'* mode%: Dash-16 parameter
'* flag%: Dash-16 parameter
'*
INPUT "n1= ", n1
INPUT "n2= ", n2
DIM Atod%(4)
DIM dio%(15)
DIM posfb%(4)
DIM dtoa%(4)
'Initialize Dash-16
mode% = 0
flag% = 0
Atod%(0) = &H300                 'Base address of Dash-16
Atod%(1) = 2                     'Set DMA level
Atod%(2) = 3                     'Set Interrupt level
CALL das16(mode%, Atod%(0), flag%)
'Set Scan Limits and Channels
mode% = 1                        'Set mux to toggle from
Atod%(0) = 1                     'channel 1 to channel 2
Atod%(1) = 2
CALL das16(mode%, Atod%(0), flag%)
'Set Programmable Timer Rate
mode% = 17                       'This sets the sampling time
Atod%(0) = n1                    'to n1*n2/1E6 seconds
Atod%(1) = n2
CALL das16(mode%, Atod%(0), flag%)
'Reset Motor
mode% = 15                       'output zero volts to motor
dtoa%(0) = 0
dtoa%(1) = 2048
CALL das16(mode%, dtoa%(0), flag%)
CLS
INPUT "dp =", dp
INPUT "T =", T
DIM u(1 TO dp)
DIM y(1 TO dp)
DIM r(1 TO dp)
OPEN "ysl.m" FOR INPUT AS #1     'read data from a file
OPEN "radpul.m" FOR OUTPUT AS #2 'to write data to files
OPEN "radpyl.m" FOR OUTPUT AS #3
FOR j = 1 TO dp
  INPUT #1, r(j)
NEXT
CLOSE #1
'MAIN PROGRAM STARTS HERE
FOR t = 7 TO dp
  y(t) = xs
  'Read Position Feedback
  mode% = 4                        'This mode allows only 2
  Atod%(0) = 2                     'conversions per sample
  Atod%(1) = VARPTR(posfb%(0))     'time interval.
  Atod%(2) = 1
  CALL das16(mode%, Atod%(0), flag%)
  POSFEEDBACK = posfb%(0) / 204.8
  CALL das16(mode%, Atod%(0), flag%)
  xs = posfb%(1) / 204.8
  'Calculate the Control Signal
  u1 = .2511 * u(t - 1) + .2549 * u(t - 2) + .2464 * u(t - 3)
  u2 = .1883 * u(t - 4) + .1023 * u(t - 5) + .076 * u(t - 6)
  u3 = .4464 * y(t) + .7086 * y(t - 1) + .4508 * y(t - 2) + .1977 * y(t - 3)
  u4 = .0791 * y(t - 4) + .1722 * y(t - 5)
  u(t) = T * r(t) - u1 - u2 - u3 - u4
  m = 1 * u(t)
  'Check m, make sure it does not exceed abs(2.5)
  IF ABS(m) > 2.5 THEN m = 2.5 * SGN(m)
  'Output Control Voltage
  mode% = 15
  dtoa%(0) = 0                     'Select D/A Channel 0
  dtoa%(1) = (m + 2.44755) * 819   'Convert to bits
  CALL das16(mode%, dtoa%(0), flag%)
NEXT
'Send zero volts to the motor
mode% = 15
dtoa%(0) = 0
dtoa%(1) = 2048
CALL das16(mode%, dtoa%(0), flag%)
'Record data
FOR i = 1 TO dp
  PRINT #2, u(i)
  PRINT #3, y(i)
NEXT
CLOSE #2
CLOSE #3
'Reset Motor
mode% = 15                       'Send zero volts to the
dtoa%(0) = 0                     'motor
dtoa%(1) = 2048
CALL das16(mode%, dtoa%(0), flag%)
PRINT "PROGRAM TERMINATED"
END
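The control law in the listing above is a fixed RST regulator: u(t) = T*r(t) minus weighted past inputs (the R polynomial) minus weighted outputs (the S polynomial). A compact sketch of that difference equation follows; the coefficients and the helper name `control` are illustrative, not the gains computed for the arm model.

```python
from collections import deque

# Hypothetical R and S polynomial coefficients (illustrative only).
R = [0.25, 0.1]    # acts on u(t-1), u(t-2)
S = [0.4, 0.2]     # acts on y(t), y(t-1)
T = 1.5            # feedforward gain on the reference

u_hist = deque([0.0, 0.0], maxlen=2)   # u(t-1), u(t-2)
y_hist = deque([0.0, 0.0], maxlen=2)   # y(t), y(t-1)

def control(ref, y):
    """One step of the RST law: u = T*ref - R(q^-1)*u - S(q^-1)*y."""
    y_hist.appendleft(y)
    u = (T * ref
         - sum(r * ui for r, ui in zip(R, u_hist))
         - sum(s * yi for s, yi in zip(S, y_hist)))
    u_hist.appendleft(u)   # shift the input history for the next step
    return u
```

The deques keep exactly the delayed samples the polynomials need, which mirrors how the BASIC listing indexes u(t-1)...u(t-6) and y(t)...y(t-5) inside its sampling loop.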
'* This program implements an adaptive controller based on pole
'* placement to control a flexible robotic arm model. The parameter
'* estimator is a gradient algorithm.
'* This program uses Gauss elimination with partial pivoting
'* to solve for the control parameters S(q-1) and R(q-1).
'* This is accomplished through the use of MetraByte's model
'* Dash-16 A/D and D/A convertor.
'* The following variables are used to accomplish this task:
'*
'* phi: measurement vector
'* thest: parameter vector
'* u: input vector
'* y: output vector
'* ys: square wave input
'* T, R and S: control polynomials
'* vphi, n0, nphi, nn and rphi: gradient algorithm parameters
'* DtoA(0): controller output sent to the DAC via D/A channel 0
'* AtoD(0): tip position sent to the ADC via A/D channel 1
'* i: dummy loop counter
'* mode%: Dash-16 parameter
'* flag%: Dash-16 parameter
'*
CLS
INPUT "n1= ", n1
INPUT "n2= ", n2
INPUT "dp =", dp
INPUT "n0 =", n0
INPUT "qr =", qr
INPUT "lam =", lam
DIM Atod%(4)
DIM posfb%(4)
DIM dtoa%(4)
'Initialize Dash-16
mode% = 0
flag% = 0
Atod%(0) = &H300                 'Base address of Dash-16
Atod%(1) = 2                     'Set DMA level
Atod%(2) = 3                     'Set Interrupt level
CALL das16(mode%, Atod%(0), flag%)
'Set Scan Limits and Channels
mode% = 1                        'Set mux to toggle from
Atod%(0) = 1                     'channel 1 to channel 2
Atod%(1) = 2
CALL das16(mode%, Atod%(0), flag%)
'Set Programmable Timer Rate
mode% = 17                       'This sets the sampling time
Atod%(0) = n1                    'to n1*n2/1E6 seconds
Atod%(1) = n2
CALL das16(mode%, Atod%(0), flag%)
'Reset Motor
mode% = 15                       'output zero volts to motor
dtoa%(0) = 0
dtoa%(1) = 2048
CALL das16(mode%, dtoa%(0), flag%)
n = 13
w = 12
DIM phi(1 TO n)
DIM l(1 TO n)
DIM thest(1 TO n)
DIM u(1 TO dp)
DIM yy(1 TO dp)
DIM y(1 TO dp)
DIM ys(1 TO dp)
DIM yst(1 TO dp)
DIM ff(1 TO w, 1 TO w)
DIM b(1 TO w)
DIM s(1 TO w)
DIM vphi(1 TO dp)
DIM rphi(1 TO dp)
DIM nn(1 TO dp)
OPEN "ysl.m" FOR INPUT AS #1
OPEN "dyst.m" FOR OUTPUT AS #2
OPEN "dadpu.m" FOR OUTPUT AS #3
OPEN "dadpy.m" FOR OUTPUT AS #4
OPEN "dnn1.m" FOR OUTPUT AS #5
OPEN "dthest.m" FOR OUTPUT AS #6
FOR j = 1 TO dp
  INPUT #1, ys(j)
NEXT
CLOSE #1
sw = 2000
max = 0
thest(1) = .185
thest(2) = .17622
thest(3) = .19
thest(4) = .146
thest(5) = .108
thest(6) = .08
thest(7) = 5.01241E-04
thest(8) = 8.99125E-05
thest(9) = 1.627832E-04
thest(10) = 2.005586E-02
thest(11) = 5.205883E-02
thest(12) = 4.162508E-02
thest(13) = 2.800663E-02
aa = 1
'MAIN PROGRAM STARTS HERE
FOR t = 7 TO dp
  'Read tip position
  mode% = 4                        'This mode allows only 2
  Atod%(0) = 2                     'conversions per sample
  Atod%(1) = VARPTR(posfb%(0))     'time interval.
  Atod%(2) = 1
  CALL das16(mode%, Atod%(0), flag%)
  POSFEEDBACK = posfb%(0) / 204.8
  CALL das16(mode%, Atod%(0), flag%)
  xs = posfb%(1) / 204.8
  'AR + q^-d BS = Am is put in matrix form
  ff(1, 1) = 1
  ff(1, 7) = thest(7)
  ff(2, 1) = thest(1)
  ff(2, 2) = 1
  ff(2, 7) = thest(8)
  ff(2, 8) = thest(7)
  ff(3, 1) = thest(2)
  ff(3, 2) = thest(1)
  ff(3, 3) = 1
  ff(3, 7) = thest(9)
  ff(3, 8) = thest(8)
  ff(3, 9) = thest(7)
  ff(4, 1) = thest(3)
  ff(4, 2) = thest(2)
  ff(4, 3) = thest(1)
  ff(4, 4) = 1
  ff(4, 7) = thest(10)
  ff(4, 8) = thest(9)
  ff(4, 9) = thest(8)
  ff(4, 10) = thest(7)
  ff(5, 1) = thest(4)
  ff(5, 2) = thest(3)
  ff(5, 3) = thest(2)
  ff(5, 4) = thest(1)
  ff(5, 5) = 1
  ff(5, 7) = thest(11)
  ff(5, 8) = thest(10)
  ff(5, 9) = thest(9)
  ff(5, 10) = thest(8)
  ff(5, 11) = thest(7)
  ff(6, 1) = thest(5)
  ff(6, 2) = thest(4)
  ff(6, 3) = thest(3)
  ff(6, 4) = thest(2)
  ff(6, 5) = thest(1)
  ff(6, 6) = 1
  ff(6, 7) = thest(12)
  ff(6, 8) = thest(11)
  ff(6, 9) = thest(10)
  ff(6, 10) = thest(9)
  ff(6, 11) = thest(8)
  ff(6, 12) = thest(7)
  ff(7, 1) = thest(6)
  ff(7, 2) = thest(5)
  ff(7, 3) = thest(4)
  ff(7, 4) = thest(3)
  ff(7, 5) = thest(2)
  ff(7, 6) = thest(1)
  ff(7, 7) = thest(13)
  ff(7, 8) = thest(12)
  ff(7, 9) = thest(11)
  ff(7, 10) = thest(10)
  ff(7, 11) = thest(9)
  ff(7, 12) = thest(8)
  ff(8, 2) = thest(6)
  ff(8, 3) = thest(5)
  ff(8, 4) = thest(4)
  ff(8, 5) = thest(3)
  ff(8, 6) = thest(2)
  ff(8, 8) = thest(13)
  ff(8, 9) = thest(12)
  ff(8, 10) = thest(11)
  ff(8, 11) = thest(10)
  ff(8, 12) = thest(9)
  ff(9, 3) = thest(6)
  ff(9, 4) = thest(5)
  ff(9, 5) = thest(4)
  ff(9, 6) = thest(3)
  ff(9, 9) = thest(13)
  ff(9, 10) = thest(12)
  ff(9, 11) = thest(11)
  ff(9, 12) = thest(10)
  ff(10, 4) = thest(6)
  ff(10, 5) = thest(5)
  ff(10, 6) = thest(4)
  ff(10, 10) = thest(13)
  ff(10, 11) = thest(12)
  ff(10, 12) = thest(11)
  ff(11, 5) = thest(6)
  ff(11, 6) = thest(5)
  ff(11, 11) = thest(13)
  ff(11, 12) = thest(12)
  ff(12, 6) = thest(6)
  ff(12, 12) = thest(13)
  b(1) = thest(1)
  b(2) = thest(2)
  b(3) = thest(3)
  b(4) = thest(4)
  b(5) = thest(5)
  b(6) = thest(6)
  b(7) = 0
  b(8) = 0
  b(9) = 0
  b(10) = 0
  b(11) = 0
  b(12) = 0
  'Subroutine for Gauss elimination starts here
  FOR i = 1 TO w
    temp = ABS(ff(i, 1))
    FOR j = 1 TO w - 1
      IF temp < ABS(ff(i, j + 1)) THEN
        temp = ABS(ff(i, j + 1))
      END IF
    NEXT
    s(i) = temp
  NEXT
  gg = 0
  FOR m = 1 TO w
    gg = gg + 1
    temp1 = ABS(ff(m, m)) / s(m)
    FOR c = m + 1 TO w
      IF temp1 < ABS(ff(c, m)) / s(c) THEN
        temp1 = ABS(ff(c, m)) / s(c)
        temp2 = s(m)
        s(m) = s(c)
        s(c) = temp2
        FOR i = m TO gg
          FOR j = 1 TO w
            temp3 = ff(i, j)
            ff(i, j) = ff(c, j)
            ff(c, j) = temp3
          NEXT
        NEXT
        temp4 = b(m)
        b(m) = b(c)
        b(c) = temp4
      END IF
    NEXT
    FOR i = m + 1 TO w
      ff(i, m) = ff(i, m) / ff(m, m)
      FOR j = m + 1 TO w
        ff(i, j) = ff(i, j) - ff(i, m) * ff(m, j)
      NEXT
      b(i) = b(i) - ff(i, m) * b(m)
      ff(i, m) = 0
    NEXT
  NEXT                             'end of m
  b(w) = b(w) / ff(w, w)
  kk = w
  FOR i = 1 TO w - 1
    kk = kk - 1
    FOR j = kk + 1 TO w
      b(kk) = b(kk) - ff(kk, j) * b(j)
    NEXT
    b(kk) = b(kk) / ff(kk, kk)
  NEXT
  'Subroutine for Gauss elimination ends here
  y(t + 1) = xs
  'Calculate control signal
  u1 = -b(1) * u(t - 1) - b(2) * u(t - 2) - b(3) * u(t - 3) - b(4) * u(t - 4)
  u2 = -b(5) * u(t - 5) - b(6) * u(t - 6)
  u3 = -b(7) * y(t) - b(8) * y(t - 1) - b(9) * y(t - 2) - b(10) * y(t - 3)
  u4 = -b(11) * y(t - 4) - b(12) * y(t - 5)
  u5 = qr * ys(t)
  u(t) = u1 + u2 + u3 + u4 + u5
  phi(1) = y(t)
  phi(2) = y(t - 1)
  phi(3) = y(t - 2)
  phi(4) = y(t - 3)
  phi(5) = y(t - 4)
  phi(6) = y(t - 5)
  phi(7) = u(t)
  phi(8) = u(t - 1)
  phi(9) = u(t - 2)
  phi(10) = u(t - 3)
  phi(11) = u(t - 4)
  phi(12) = u(t - 5)
  phi(13) = u(t - 6)
  'Gradient algorithm starts here
  phiphi = 0
  FOR i = 1 TO n
    phiphi = phi(i) * phi(i) + phiphi
  NEXT
  vphi(t) = phiphi
  rphi(t) = lam * rphi(t - 1) + vphi(t)
  nn(t) = n0 + rphi(t)
  thestphi = 0
  FOR i = 1 TO n
    thestphi = thest(i) * phi(i) + thestphi
  NEXT
  yst(t) = thestphi
  FOR i = 1 TO n
    thest(i) = thest(i) + (aa * phi(i) * (y(t + 1) - yst(t))) / nn(t)
  NEXT
  'Gradient algorithm ends here
  T = 1 / (thest(7) + thest(8) + thest(9) + thest(10) + thest(11) + thest(12) + thest(13))
  m = 1 * u(t)
  'Check m, make sure it does not exceed abs(1.5)
  IF ABS(m) > 1.5 THEN m = 1.5 * SGN(m)
  'Output Control Voltage
  mode% = 15
  dtoa%(0) = 0                     'Select D/A Channel 0
  dtoa%(1) = (m + 2.4475) * 819    'Convert to bits
  CALL das16(mode%, dtoa%(0), flag%)
NEXT
'Send zero volts to the motor
mode% = 15
dtoa%(0) = 0
dtoa%(1) = 2048
CALL das16(mode%, dtoa%(0), flag%)
'Record data
FOR i = 1 TO 4000
  PRINT #2, yst(i)
  PRINT #3, u(i)
  PRINT #4, y(i)
  PRINT #5, nn(i)
NEXT
FOR ii = 1 TO n
  PRINT #6, thest(ii)
NEXT
'Subroutine to plot the output and the control signal
FOR jj = 1 TO dp
  yy(jj) = 5 * y(jj)
NEXT
SCREEN 1
VIEW (50, 20)-(290, 140), 1
COLOR 0, 2
WINDOW (0, -2)-(sw, 2)
style% = &H7777
LINE (sw, 0)-(0, 0)
FOR qx = 0 TO sw - 1
  LINE -(qx, yy(qx + 1)), , , style%
NEXT
LOCATE 2, 16: PRINT "OUTPUT OF THE SYSTEM"
LOCATE 20, 5: PRINT 0
LOCATE 20, 12: PRINT 500
LOCATE 20, 19: PRINT 1000
LOCATE 20, 27: PRINT 1500
LOCATE 11, 3: PRINT 0
LOCATE 6, 3: PRINT .2
LOCATE 15, 3: PRINT -.2
LOCATE 23, 16: PRINT "Sampling ,k"
SLEEP
CLS
SCREEN 1
VIEW (50, 20)-(290, 140), 1
COLOR 0, 2
WINDOW (0, -2)-(sw, 2)
style% = &H7777
LINE (sw, 0)-(0, 0)
FOR qx = 0 TO sw - 1
  LINE -(qx, u(qx + 1)), , , style%
NEXT
LOCATE 2, 16: PRINT "CONTROL INPUT"
LOCATE 20, 5: PRINT 0
LOCATE 20, 12: PRINT 500
LOCATE 20, 19: PRINT 1000
LOCATE 20, 27: PRINT 1500
LOCATE 11, 3: PRINT 0
LOCATE 6, 3: PRINT .2
LOCATE 15, 3: PRINT -.2
LOCATE 23, 16: PRINT "Sampling ,k"
SLEEP
CLOSE #2
CLOSE #3
CLOSE #4
CLOSE #5
CLOSE #6
'Reset Motor
mode% = 15                       'Send zero volts to the
dtoa%(0) = 0                     'motor
dtoa%(1) = 2048
CALL das16(mode%, dtoa%(0), flag%)
PRINT "PROGRAM TERMINATED"
END                              'end of program
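The subroutine embedded in the listing above solves the linear system obtained from the Diophantine equation AR + q^-d BS = Am by Gauss elimination with scaled partial pivoting. The same procedure can be sketched compactly; the function name `gauss_solve` and the random test matrix are illustrative, and the sketch is verified against a library solver rather than against the arm model's Sylvester matrix.

```python
import numpy as np

def gauss_solve(A, b):
    """Solve A x = b by Gauss elimination with scaled partial pivoting,
    mirroring the row-scaling, row-swap, elimination, and back-substitution
    phases of the BASIC subroutine."""
    A = A.astype(float).copy()
    b = b.astype(float).copy()
    n = len(b)
    s = np.max(np.abs(A), axis=1)              # row scale factors
    for m in range(n):
        # pick the pivot row by largest scaled magnitude in column m
        p = m + int(np.argmax(np.abs(A[m:, m]) / s[m:]))
        if p != m:
            A[[m, p]] = A[[p, m]]
            b[[m, p]] = b[[p, m]]
            s[[m, p]] = s[[p, m]]
        for i in range(m + 1, n):              # eliminate below the pivot
            f = A[i, m] / A[m, m]
            A[i, m + 1:] -= f * A[m, m + 1:]
            b[i] -= f * b[m]
            A[i, m] = 0.0
    x = np.zeros(n)
    for k in range(n - 1, -1, -1):             # back substitution
        x[k] = (b[k] - A[k, k + 1:] @ x[k + 1:]) / A[k, k]
    return x
```

Scaled pivoting matters here because the Sylvester matrix mixes O(1) leading-one entries with small b-coefficients on the order of 1e-4, exactly the situation in which unscaled elimination can lose precision.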
'* This program implements an adaptive controller based on pole
'* placement to control a flexible robotic arm model. The parameter
'* estimator is a stochastic gradient algorithm.
'* This program uses Gauss elimination with partial pivoting
'* to solve for the control parameters S(q-1) and R(q-1).
'* This is accomplished through the use of MetraByte's model
'* Dash-16 A/D and D/A convertor.
'* The following variables are used to accomplish this task:
'*
'* phi: measurement vector
'* thest: parameter vector
'* u: input vector
'* y: output vector
'* ys: square wave input
'* T, R and S: control polynomials
'* vphi, nnw, nphi, nn and rphi: gradient algorithm parameters
'* DtoA(0): controller output sent to the DAC via D/A channel 0
'* AtoD(0): tip position sent to the ADC via A/D channel 1
'* i: dummy loop counter
'* mode%: Dash-16 parameter
'* flag%: Dash-16 parameter
'*
CLS
INPUT "n1= ", n1
INPUT "n2= ", n2
INPUT "dp =", dp
INPUT "qr =", qr
DIM Atod%(4)
DIM posfb%(4)
DIM dtoa%(4)
'Initialize Dash-16
mode% = 0
flag% = 0
Atod%(0) = &H300                 'Base address of Dash-16
Atod%(1) = 2                     'Set DMA level
Atod%(2) = 3                     'Set Interrupt level
CALL das16(mode%, Atod%(0), flag%)
'Set Scan Limits and Channels
mode% = 1                        'Set mux to toggle from
Atod%(0) = 1                     'channel one to channel 2
Atod%(1) = 2
CALL das16(mode%, Atod%(0), flag%)
'Set Programmable Timer Rate
mode% = 17                       'This sets the sampling time
Atod%(0) = n1                    'to n1*n2/1E6 seconds
Atod%(1) = n2
CALL das16(mode%, Atod%(0), flag%)
'Reset Motor
mode% = 15                       'output zero volts to motor
dtoa%(0) = 0
dtoa%(1) = 2048
CALL das16(mode%, dtoa%(0), flag%)
n = 13
w = 12
DIM phi(1 TO n)
DIM l(1 TO n)
DIM thest(1 TO n)
DIM u(1 TO dp)
DIM yy(1 TO dp)
DIM y(1 TO dp)
DIM ys(1 TO dp)
DIM yst(1 TO dp)
DIM ff(1 TO w, 1 TO w)
DIM b(1 TO w)
DIM s(1 TO w)
DIM vphi(1 TO dp)
DIM rphi(1 TO dp)
DIM nnw(1 TO dp)
OPEN "ysl.m" FOR INPUT AS #1
OPEN "syst.m" FOR OUTPUT AS #2
OPEN "sadpu.m" FOR OUTPUT AS #3
OPEN "sadpy.m" FOR OUTPUT AS #4
OPEN "snnl.m" FOR OUTPUT AS #5
OPEN "sthes.m" FOR OUTPUT AS #6
FOR j = 1 TO dp
  INPUT #1, ys(j)
NEXT
CLOSE #1
sw = 2000
max = 0
thest(1) = .185
thest(2) = .17622
thest(3) = .19
thest(4) = .146
thest(5) = .108
thest(6) = .08
thest(7) = 5.01241E-04
thest(8) = 8.99125E-05
thest(9) = 1.627832E-04
thest(10) = 2.005586E-02
thest(11) = 5.205883E-02
thest(12) = 4.162508E-02
thest(13) = 2.800663E-02
rphi(6) = EXP(1)
aa = 1
'MAIN PROGRAM STARTS HERE
FOR t = 7 TO dp
  'Read tip position
  mode% = 4                        'This mode allows only one
  Atod%(0) = 2                     'conversion per sample time
  Atod%(1) = VARPTR(posfb%(0))     'interval.
  Atod%(2) = 1
  CALL das16(mode%, Atod%(0), flag%)
  POSFEEDBACK = posfb%(0) / 204.8
  CALL das16(mode%, Atod%(0), flag%)
  xs = posfb%(1) / 204.8
  'AR + q^-d BS = Am is put in matrix form
  ff(1, 1) = 1
  ff(1, 7) = thest(7)
  ff(2, 1) = thest(1)
  ff(2, 2) = 1
  ff(2, 7) = thest(8)
  ff(2, 8) = thest(7)
  ff(3, 1) = thest(2)
  ff(3, 2) = thest(1)
  ff(3, 3) = 1
  ff(3, 7) = thest(9)
  ff(3, 8) = thest(8)
  ff(3, 9) = thest(7)
  ff(4, 1) = thest(3)
  ff(4, 2) = thest(2)
  ff(4, 3) = thest(1)
  ff(4, 4) = 1
  ff(4, 7) = thest(10)
  ff(4, 8) = thest(9)
  ff(4, 9) = thest(8)
  ff(4, 10) = thest(7)
  ff(5, 1) = thest(4)
  ff(5, 2) = thest(3)
  ff(5, 3) = thest(2)
  ff(5, 4) = thest(1)
  ff(5, 5) = 1
  ff(5, 7) = thest(11)
  ff(5, 8) = thest(10)
  ff(5, 9) = thest(9)
  ff(5, 10) = thest(8)
  ff(5, 11) = thest(7)
  ff(6, 1) = thest(5)
  ff(6, 2) = thest(4)
  ff(6, 3) = thest(3)
  ff(6, 4) = thest(2)
  ff(6, 5) = thest(1)
  ff(6, 6) = 1
  ff(6, 7) = thest(12)
  ff(6, 8) = thest(11)
  ff(6, 9) = thest(10)
  ff(6, 10) = thest(9)
  ff(6, 11) = thest(8)
  ff(6, 12) = thest(7)
  ff(7, 1) = thest(6)
  ff(7, 2) = thest(5)
  ff(7, 3) = thest(4)
  ff(7, 4) = thest(3)
  ff(7, 5) = thest(2)
  ff(7, 6) = thest(1)
  ff(7, 7) = thest(13)
  ff(7, 8) = thest(12)
  ff(7, 9) = thest(11)
  ff(7, 10) = thest(10)
  ff(7, 11) = thest(9)
  ff(7, 12) = thest(8)
  ff(8, 2) = thest(6)
  ff(8, 3) = thest(5)
  ff(8, 4) = thest(4)
  ff(8, 5) = thest(3)
  ff(8, 6) = thest(2)
  ff(8, 8) = thest(13)
  ff(8, 9) = thest(12)
  ff(8, 10) = thest(11)
  ff(8, 11) = thest(10)
  ff(8, 12) = thest(9)
  ff(9, 3) = thest(6)
  ff(9, 4) = thest(5)
  ff(9, 5) = thest(4)
  ff(9, 6) = thest(3)
  ff(9, 9) = thest(13)
  ff(9, 10) = thest(12)
  ff(9, 11) = thest(11)
  ff(9, 12) = thest(10)
  ff(10, 4) = thest(6)
  ff(10, 5) = thest(5)
  ff(10, 6) = thest(4)
  ff(10, 10) = thest(13)
  ff(10, 11) = thest(12)
  ff(10, 12) = thest(11)
  ff(11, 5) = thest(6)
  ff(11, 6) = thest(5)
  ff(11, 11) = thest(13)
  ff(11, 12) = thest(12)
  ff(12, 6) = thest(6)
  ff(12, 12) = thest(13)
  b(1) = thest(1)
  b(2) = thest(2)
  b(3) = thest(3)
  b(4) = thest(4)
  b(5) = thest(5)
  b(6) = thest(6)
  b(7) = 0
  b(8) = 0
  b(9) = 0
  b(10) = 0
  b(11) = 0
  b(12) = 0
  'Subroutine for Gauss elimination starts here
  FOR i = 1 TO w
    temp = ABS(ff(i, 1))
    FOR j = 1 TO w - 1
      IF temp < ABS(ff(i, j + 1)) THEN
        temp = ABS(ff(i, j + 1))
      END IF
    NEXT
    s(i) = temp
  NEXT
  gg = 0
  FOR m = 1 TO w
    gg = gg + 1
    temp1 = ABS(ff(m, m)) / s(m)
    FOR c = m + 1 TO w
      IF temp1 < ABS(ff(c, m)) / s(c) THEN
        temp1 = ABS(ff(c, m)) / s(c)
        temp2 = s(m)
        s(m) = s(c)
        s(c) = temp2
        FOR i = m TO gg
          FOR j = 1 TO w
            temp3 = ff(i, j)
            ff(i, j) = ff(c, j)
            ff(c, j) = temp3
          NEXT
        NEXT
        temp4 = b(m)
        b(m) = b(c)
        b(c) = temp4
      END IF
    NEXT
    FOR i = m + 1 TO w
      ff(i, m) = ff(i, m) / ff(m, m)
      FOR j = m + 1 TO w
        ff(i, j) = ff(i, j) - ff(i, m) * ff(m, j)
      NEXT
      b(i) = b(i) - ff(i, m) * b(m)
      ff(i, m) = 0
    NEXT
  NEXT                             'end of m
  b(w) = b(w) / ff(w, w)
  kk = w
  FOR i = 1 TO w - 1
    kk = kk - 1
    FOR j = kk + 1 TO w
      b(kk) = b(kk) - ff(kk, j) * b(j)
    NEXT
    b(kk) = b(kk) / ff(kk, kk)
  NEXT
  'Subroutine for Gauss elimination ends here
  y(t + 1) = xs
  'Calculate the control signal
  u1 = -b(1) * u(t - 1) - b(2) * u(t - 2) - b(3) * u(t - 3) - b(4) * u(t - 4)
  u2 = -b(5) * u(t - 5) - b(6) * u(t - 6)
  u3 = -b(7) * y(t) - b(8) * y(t - 1) - b(9) * y(t - 2) - b(10) * y(t - 3)
  u4 = -b(11) * y(t - 4) - b(12) * y(t - 5)
  u5 = qr * ys(t)
  u(t) = u1 + u2 + u3 + u4 + u5
  phi(1) = y(t)
  phi(2) = y(t - 1)
  phi(3) = y(t - 2)
  phi(4) = y(t - 3)
  phi(5) = y(t - 4)
  phi(6) = y(t - 5)
  phi(7) = u(t)
  phi(8) = u(t - 1)
  phi(9) = u(t - 2)
  phi(10) = u(t - 3)
  phi(11) = u(t - 4)
  phi(12) = u(t - 5)
  phi(13) = u(t - 6)
  'Stochastic gradient algorithm starts here
  phiphi = 0
  FOR i = 1 TO n
    phiphi = phi(i) * phi(i) + phiphi
  NEXT
  vphi(t) = phiphi
  rphi(t) = rphi(t - 1) + vphi(t)
  IF vphi(t) > vphi(t - 1) THEN
    mphi = vphi(t)
  ELSE
    mphi = vphi(t - 1)
  END IF
  IF max > mphi THEN
    nn1 = max
  ELSE
    nn1 = mphi
  END IF
  max = nn1
  mrphi = SQR(rphi(t)) * (LOG(rphi(t))) ^ 1.1
  IF nn1 > mrphi THEN
    nn = nn1
  ELSE
    nn = mrphi
  END IF
  nnw(t) = nn
  thestphi = 0
  FOR i = 1 TO n
    thestphi = thest(i) * phi(i) + thestphi
  NEXT
  yst(t) = thestphi
  FOR i = 1 TO n
    thest(i) = thest(i) + (aa * phi(i) * (y(t + 1) - yst(t))) / nnw(t)
  NEXT
  'Stochastic gradient algorithm ends here
  T = 1 / (thest(7) + thest(8) + thest(9) + thest(10) + thest(11) + thest(12) + thest(13))
  m = 1 * u(t)
  'Check m, make sure it does not exceed abs(2.5)
  IF ABS(m) > 2.5 THEN m = 2.5 * SGN(m)
  'Output Control Voltage
  mode% = 15
  dtoa%(0) = 0                     'Select D/A Channel 0
  dtoa%(1) = (m + 2.4475) * 819    'Convert to bits
  CALL das16(mode%, dtoa%(0), flag%)
NEXT                               'end of loop
'Send zero volts to the motor
mode% = 15
dtoa%(0) = 0
dtoa%(1) = 2048
CALL das16(mode%, dtoa%(0), flag%)
'Record data
FOR i = 1 TO dp
  PRINT #2, yst(i)
  PRINT #3, u(i)
  PRINT #4, y(i)
  PRINT #5, nnw(i)
NEXT
FOR i = 1 TO n
  PRINT #6, thest(i)
NEXT
'Subroutine for plotting starts here
FOR jj = 1 TO dp
  yy(jj) = 5 * y(jj)
NEXT
SCREEN 1
VIEW (50, 20)-(290, 140), 1
COLOR 0, 2
WINDOW (0, -2)-(sw, 2)
style% = &H7777
LINE (sw, 0)-(0, 0)
FOR qx = 0 TO sw - 1
  LINE -(qx, yy(qx + 1)), , , style%
NEXT
LOCATE 2, 16: PRINT "OUTPUT OF THE SYSTEM"
LOCATE 20, 5: PRINT 0
LOCATE 20, 12: PRINT 500
LOCATE 20, 19: PRINT 1000
LOCATE 20, 27: PRINT 1500
LOCATE 11, 3: PRINT 0
LOCATE 6, 3: PRINT .2
LOCATE 15, 3: PRINT -.2
LOCATE 23, 16: PRINT "Sampling ,k"
SLEEP
CLS
SCREEN 1
VIEW (50, 20)-(290, 140), 1
COLOR 0, 2
WINDOW (0, -2)-(sw, 2)
style% = &H7777
LINE (sw, 0)-(0, 0)
FOR qx = 0 TO sw - 1
  LINE -(qx, u(qx + 1)), , , style%
NEXT
LOCATE 2, 16: PRINT "CONTROL INPUT"
LOCATE 20, 5: PRINT 0
LOCATE 20, 12: PRINT 500
LOCATE 20, 19: PRINT 1000
LOCATE 20, 27: PRINT 1500
LOCATE 11, 3: PRINT 0
LOCATE 6, 3: PRINT .2
LOCATE 15, 3: PRINT -.2
LOCATE 23, 16: PRINT "Sampling ,k"
SLEEP
'Subroutine for plotting ends here
CLOSE #2
CLOSE #3
CLOSE #4
CLOSE #5
CLOSE #6
'Reset Motor
mode% = 15                       'Send zero volts to the
dtoa%(0) = 0                     'motor
dtoa%(1) = 2048
CALL das16(mode%, dtoa%(0), flag%)
PRINT "PROGRAM TERMINATED"
END
REFERENCES

[1] Astrom, K.J. and Wittenmark, B., "Self-Tuning Controllers Based on Pole-Zero Placement," IEE Proc., Vol. 127, May 1980, pp. 120-130.

[2] Bialasiewicz, J.T., "Advanced System Identification Techniques for Wind Turbine Structures with Special Emphasis on Modal Parameters," National Renewable Energy Laboratory.

[3] Fraser, A.R. and Daniel, R.W., "Perturbation Techniques for Flexible Manipulators," Kluwer Academic Publishers, 1991.

[4] Goodwin, G.C. and Sin, K.S., "Adaptive Filtering, Prediction and Control," Prentice-Hall, 1984.

[5] Graseby Optronics, "User Manual," P/N 7910009, Revision C, August 1992.

[6] Juang, J.-N., "Applied System Identification," Prentice-Hall, 1994.

[7] Juang, J.-N., Horta, L.G., and Phan, M., "System/Observer/Controller Identification Toolbox," NASA Technical Memorandum 107566, NASA LaRC, February 1992.

[8] Juang, J.-N., Phan, M., Horta, L.G., and Longman, R.W., "Identification of Observer/Kalman Filter Markov Parameters: Theory and Experiments," Journal of Guidance, Control, and Dynamics, Vol. 16, No. 2, March-April 1993, pp. 320-329.

[9] Kraft, S.D., "Design and Evaluation of a Microprocessor-Based Adaptive DC Motor Position Control System," M.S. thesis, University of Colorado, 1984.

[10] Landau, I.D., "System Identification and Control Design," Prentice-Hall, 1990.

[11] Newland, D.E., "An Introduction to Random Vibrations and Spectral Analysis," Longman, 1984.

[12] Phan, M., Juang, J.-N., Horta, L.G., and Longman, R.W., "Identification of Linear Systems by an Asymptotically Stable Observer," NASA Technical Paper 3164, June 1992.

[13] Ogata, K., "Discrete-Time Control Systems," Prentice-Hall, 1987.

[14] Radenkovic, M.S. and Michel, A.N., "Robust Adaptive Systems and Self-Stabilization," IEEE Trans. Automat. Contr., Vol. 37, 1992, pp. 1355-1369.

[15] Radenkovic, M.S. and Michel, A.N., "Possible Bursting Phenomena in Self-Tuning Control," International Journal of Adaptive Control and Signal Processing, Vol. 7, 1993.

[16] Yang, T.C., Yang, J.C.S., and Kudva, P., "Adaptive Control of a Single-Link Flexible Manipulator with Unknown Load," IEE Proceedings-D, Vol. 138, 1991, pp. 153-159.
