## Citation

- Permanent Link:
- http://digital.auraria.edu/AA00002590/00001
## Material Information
- Title:
- Direct robust adaptive control
- Creator:
- Reid, Gary Wayne
- Publication Date:
- 1993
- Language:
- English
- Physical Description:
- vi, 159 leaves : illustrations ; 29 cm
## Thesis/Dissertation Information
- Degree:
- Master's (Master of Science)
- Degree Grantor:
- University of Colorado Denver
- Degree Divisions:
- Department of Electrical Engineering, CU Denver
- Degree Disciplines:
- Electrical Engineering
- Committee Chair:
- Radenkovic, Miloje
- Committee Members:
- Bialasiewicz, Jan T.
- Bose, Tamal
## Subjects
- Subjects / Keywords:
- Adaptive control systems (lcsh)
- Adaptive control systems (fast)
- Genre:
- bibliography (marcgt)
- theses (marcgt)
- non-fiction (marcgt)
## Notes
- Bibliography:
- Includes bibliographical references.
- General Note:
- Submitted in partial fulfillment of the requirements for the degree, Master of Science, Department of Electrical Engineering.
- Statement of Responsibility:
- by Gary Wayne Reid.
## Record Information
- Source Institution:
- University of Colorado Denver
- Holding Location:
- Auraria Library
- Rights Management:
- All applicable rights reserved by the source institution and holding location.
- Resource Identifier:
- 28863939 (OCLC)
- ocm28863939
- Classification:
- LD1190.E54 1993m .R44 (lcc)
## Auraria Membership

## Full Text

DIRECT ROBUST ADAPTIVE CONTROL
by
Gary Wayne Reid
B.S., University of Colorado at Denver, 1986

A thesis submitted to the Faculty of the Graduate School of the University of Colorado in partial fulfillment of the requirements for the degree of Master of Science, 1993.

This thesis for the Master of Science degree by Gary Wayne Reid has been approved for the Department of Electrical Engineering by Miloje S. Radenkovic, Jan T. Bialasiewicz, and Tamal Bose.

Reid, Gary Wayne (M.S., Electrical Engineering)
Direct Robust Adaptive Control
Thesis directed by Assistant Professor Miloje S. Radenkovic

ABSTRACT

An adaptive control algorithm known as the normalized gradient algorithm will be analyzed. The mathematical properties as well as the derivation of the normalized gradient algorithm are discussed. A unique approach to robustness to unmodelled dynamics and time-variations is also proposed. Computer simulations using the normalized gradient and the least squares algorithms are used to substantiate the theory. The simulations show the performance of the algorithms for some simple systems. A simplified practical example is also included.

The form and content of this abstract are approved. I recommend its publication.

Signed: Miloje S. Radenkovic

ACKNOWLEDGMENTS

I would like to thank the Martin Marietta Corporation for sponsoring my graduate work at the University of Colorado through their program for study under company auspices. Dr. Miloje Radenkovic, my thesis advisor, provided me with inspiration as well as help and guidance throughout my thesis research. An acknowledgment should also go out to all my family and friends for their support and patience. For her assistance during the publication phase of this thesis, I would like to thank Rebecca Wrinkle for her typing and editorial support.

CONTENTS

CHAPTER
1. INTRODUCTION
   1.1 Overview
   1.2 Thesis Outline
   1.3 Notation and Terminology
2. ROBUST ADAPTIVE CONTROL OF TIME-INVARIANT SYSTEMS
   2.1 An Introduction to Robust Adaptive Control
   2.2 Deterministic Adaptive Control
   2.3 Global Stability Analysis and Mathematical Formalization of the Self-Stabilizing Mechanism in Robust Adaptive Control
   2.4 Physical Explanation of the Self-Stabilization Mechanism and Robustness of the Parameter Estimates
   2.5 Simulation Examples
3. ROBUST ADAPTIVE CONTROL FOR TIME-VARYING SYSTEMS
   3.1 An Introduction to Time-Varying Adaptive Control
   3.2 The Time-Varying Plant Model
   3.3 Time-Varying Parameter Estimation
   3.4 Certainty Equivalence Control Law
   3.5 Simulation Examples
4. CONCLUSIONS

APPENDIX
A. MATLAB Input Files for Time-Invariant Simulation Examples
B. MATLAB Input Files for Time-Varying Simulation Examples

REFERENCES

CHAPTER 1
INTRODUCTION

1.1 Overview

Adaptive control is necessary in many applications because the parameters in a system model undergo significant changes or cannot be measured with sufficient accuracy, rendering the application of classical feedback either unreliable or impossible. Reasons for this include the effects of changes in altitude or air speed of high performance aircraft, the inability to determine system response at high frequencies, or the use of low-dimensional approximations for complex high-dimensional systems.
In some systems, the suppression of nonlinear or elastic components may lead to simplified models that exhibit large errors in certain ranges of conditions. Adaptive control attempts to address those problems by changing the parameters of the feedback.

Historically, early adaptive algorithms employed heuristic models in classical control, such as the Ziegler-Nichols rules and elementary experiments on the system to tune the parameters of a standard three-term controller [6]. In the mid-seventies, some important advances were made along two main approaches.

The first consists of combining an identification scheme to track the changing values of system parameters with a control scheme designed as if the parameters were known and time-invariant. The difficulty with this approach, called "self-tuning," is in the simultaneous presence of identification and control in the same feedback loop. It may be impossible to identify system parameters while changing the parameters of the controller. However, it was shown in 1973 that the optimal control can often be obtained in the limit, even if the identification is not accurate. This was followed by a considerable number of applications that confirmed the validity of this approach. Today, there are several adaptive controllers commercially produced in the United States and in Europe, especially in Sweden, which use modifications of "minimum variance" control.

The second approach consists of adjusting the parameters of the controller so as to make the behavior of the controlled system closely match that of a chosen reference model. This is called "model reference adaptive control." The system is assumed to be deterministic. Stability analysis methods, such as those based on the use of Lyapunov functions, are a primary tool for showing the stability of the adaptive control system.

Self-tuning and model reference adaptive control apply primarily to linear systems. For nonlinear systems, specialized adaptive control algorithms exist.
A prime example is adaptive control of manipulator robots, where the parameters change with the mass, shape, and orientation of the objects being handled. An important property of such systems is that the unknown parameters enter linearly in the nonlinear system equations. By using an approach that resembles self-tuning, stable controllers have been designed.

Another major breakthrough, which occurred in the late seventies, was when a number of researchers established bounded-input bounded-state (BIBS) stability of adaptive controllers applied to linear time-invariant (LTI) systems [10, 11, 13, 31]. However, it was later shown that this theory was inadequate, since the algorithms would fail if applied to systems having small unmodeled dynamics [36]. This problem has been the subject of intense study, and there have been several publications applicable to these types of situations [15, 17, 18, 21, 25]. It has also been established that persistency of excitation (PE) of the system states leads to exponential stability, which gives robustness to unmodeled dynamics and time-variations [1, 3, 4, 28]. These results are subject to the constraint that we can persistently excite all of the dynamics associated with our system. This can be a difficult task in the presence of the potentially destabilizing feedback that occurs in adaptive control. It has since been proven that, subject to a stability assumption, the persistency of excitation constraint is not necessary in the presence of time-variations and unmodeled dynamics [24]. It has also been shown that direct adaptive control algorithms using the certainty equivalence principle can stabilize a linear system with unknown parameters when the plant can be modeled exactly, there is no mismatch in the relative degree, the sign of and a lower bound for the high frequency gain are known, and a stable invertibility condition is satisfied [13, 32].
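The certainty equivalence principle just mentioned (estimate the parameters, then control as if the estimates were exact) can be illustrated with a deliberately minimal sketch. Everything below is a hypothetical one-parameter stand-in, not an algorithm from this thesis, and it is written in Python although the thesis's own simulations use MATLAB: the plant is y(t+1) = b·u(t) with unknown gain b, the estimate is refined by a normalized gradient step, and the control law treats the current estimate as if it were the true gain.

```python
# Certainty equivalence on a one-parameter plant y(t+1) = b*u(t).
# All values here are hypothetical stand-ins, not an example from the thesis.
b_true = 2.0    # unknown plant gain (the controller never reads this directly)
b_hat = 0.5     # initial parameter estimate
y = 0.0
for t in range(200):
    y_ref = 1.0                            # bounded reference signal y*(t)
    u = y_ref / b_hat                      # certainty equivalence control law
    y = b_true * u                         # true plant response
    e = y - b_hat * u                      # prediction error
    b_hat += 0.5 * u * e / (1.0 + u * u)   # normalized gradient update
print(b_hat, y)  # estimate approaches b_true, output approaches y_ref
```

Because the estimate's uncertainty is ignored, the early control moves overshoot (the first input is 2.0 for a reference of 1.0); the loop recovers only because the estimator keeps improving, which is exactly the fragility the robustness discussion below is concerned with.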
Certainty equivalence adaptive controllers estimate the controller parameters or the process parameters in real time and then use these estimates as if they were equal to the true ones. The uncertainties of the estimates are not considered in this type of control. Certainty equivalence controllers remain the mainstream in adaptive control, due largely to the literature from the 1950s and 1960s on deterministic adaptive control, which led to certainty equivalence control as the norm for adaptive control [7]. Showing stability for the linear time-invariant certainty equivalence controller with no unmodelled dynamics is trivial. However, when modifications are made to the certainty equivalence controller to account for unmodelled dynamics and time-variations, stability can become an issue.

The need to apply adaptive methods to high performance systems has continually been a source for investigation of alternative, more powerful adaptive control strategies. The goals of adaptive control are still far from being realized, and there is widespread agreement that the field is still very much in its infancy. Much of the literature consists of the design and analysis of particular algorithms. The notion of robustness, together with the now stated goals of adaptively controlling distributed parameter systems such as combustion and flow control, will undoubtedly broaden the field of adaptive control.

This thesis presents an adaptive control algorithm which is robust in the sense that it is able to maintain exponential stability in the presence of unmodelled dynamics and time-variations. The algorithm is unique in the sense that it uses a new approach for the algorithm's normalization sequence.

1.2 Thesis Outline

Chapter 2 begins with an introduction to robust adaptive control. A brief history of robust adaptive control is given, as well as the effect unmodelled dynamics can have on an adaptive system.
Next, the derivation of the normalized gradient algorithm is presented. The mathematical formalization of the self-stabilization phenomena and a global stability analysis are then shown. To illustrate these concepts, some simulation experiments are presented to conclude Chapter 2.

An introduction to time-varying adaptive systems is at the beginning of Chapter 3. Some models are then developed for time-varying linear systems. A few examples of time-varying models are given. The operation of the parameter estimator is then explained. The following section shows how certainty equivalence control can be applied to time-varying systems. Some simulations of time-varying adaptive systems are shown to conclude this chapter.

In Chapter 4, the conclusions from Chapter 2 and Chapter 3 are summarized. Comparisons between different estimation techniques are discussed, and the effects of unmodelled dynamics and time-variations on these adaptive algorithms are all covered in this final chapter.

1.3 Notation and Terminology

For a discrete time function x: T -> R+, where T is the set of positive integers and R+ is the set of nonnegative real numbers, we define the following seminorm (1.1). When ||x(t)|| is uniformly bounded over all t >= 0, x is said to be in l∞.

H∞ will denote the space of transfer functions T(z) which are analytic and bounded outside and on the unit circle in the z-plane. Sλ is the operator defined by

SλT(z) = T(λ^(1/2) z)    (1.2)

for a fixed parameter λ, 0 < λ < 1. SλH∞ is the space of transfer functions T(z) such that SλT(z) is in H∞. In other words, T(z) is in SλH∞ if T(z) is analytic and bounded outside and on the circle |z| = λ^(1/2) in the z-plane. For T(z) in H∞, the H∞ norm is defined by

||T(z)||_H∞ := max over |z| = 1 of |T(z)|.    (1.3)

Likewise, the norm of the SλH∞ space is defined by

||T(z)||_SλH∞ := ||SλT(z)||_H∞ = max over |z| = 1 of |T(λ^(1/2) z)|.    (1.4)

This norm is induced by the l∞ norms of the input and output signals of T(z).
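The norms in (1.3) and (1.4) can be approximated numerically by sampling the transfer function on the relevant circle. The sketch below (Python rather than the MATLAB of the thesis's appendices) uses a hypothetical first-order T(z) = z/(z − 0.5), which is analytic and bounded outside the circle |z| = 0.5 and therefore lies in SλH∞ for any λ > 0.25.

```python
import numpy as np

def s_lambda_hinf_norm(T, lam, n=4096):
    """Approximate ||T||_{S_lambda H_inf} = max over |z| = lam^(1/2) of |T(z)|
    by sampling n equally spaced points on that circle."""
    z = np.sqrt(lam) * np.exp(2j * np.pi * np.arange(n) / n)
    return np.max(np.abs(T(z)))

# Hypothetical example: T(z) = z / (z - 0.5), a single pole at z = 0.5.
T = lambda z: z / (z - 0.5)

h_inf = s_lambda_hinf_norm(T, lam=1.0)    # lam = 1 recovers the plain H_inf norm
s_lam = s_lambda_hinf_norm(T, lam=0.81)   # circle |z| = 0.9, closer to the pole
print(h_inf, s_lam)  # 2.0 and 2.25: the S_lambda norm is larger
```

The SλH∞ norm exceeds the H∞ norm here because shrinking the evaluation circle toward the pole inflates |T|; this is the sense in which the Sλ weighting penalizes slowly decaying dynamics.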
When performing majorizations, in order to account for initial conditions, we will use nonnegative functions

ξ1(t) = c1 ρ^t, 0 ≤ c1 < ∞, 0 < ρ < 1.    (1.5)

CHAPTER 2
ROBUST ADAPTIVE CONTROL OF TIME-INVARIANT SYSTEMS

2.1 An Introduction to Robust Adaptive Control

In the seventies, significant progress was made in adaptive control theory under the assumption that physical systems are described precisely by linear system models [14, 22]. In these works, it was assumed that the system parameters are unknown and that the relative degree and an upper bound on the order of the system are known. At the beginning of the eighties, a disturbing fact was discovered, namely, that an adaptive controller designed for the case of a perfect system model can become unstable in the presence of external disturbances or small modelling errors [10, 35]. In order to guarantee stability, a variety of modifications of the algorithms originally designed for the ideal case have been proposed, such as σ and ε1 modification, relative dead-zone, projection of the parameter estimates, and the like [15, 17, 21, 25, 27, 29, 30].

A remarkable unification of the existing robust deterministic adaptive control methods is developed in [16]. This paper is where most of the significant works in the area of deterministic adaptive control are cited. In addition to this work, other authors have proven that the projection of the parameter estimates is, in fact, sufficient to guarantee stability in the presence of small unmodelled dynamics [26, 40]. A unified approach to robust deterministic and stochastic adaptive control, where the algorithms with projection of the parameter estimates are considered, is presented in [34]. It is shown that small algorithm gains may produce unacceptably large input and output signals in the adaptive loop. In the same paper, it is proved that the admissible unmodelled dynamics do not depend on the algorithm gain and are the same as in nonadaptive control.
The purpose of this chapter is to show that adaptive control is globally stable and robust with respect to unmodelled dynamics and external disturbances without introducing σ or ε1 modifications, dead-zone algorithms, or projection techniques. The above-mentioned modifications usually result in increased algorithm complexity and require a priori information related to the parameters of the nominal system model. The normalized gradient algorithm will be analyzed. This analysis uses an approach similar to the one proposed in [41].

Global stability and robustness of the considered algorithm are obtained under the assumption that the reference signal has a sufficiently large level of excitation compared with the intensities of the unmodelled dynamics and external disturbances. It is then proved that during the adaptation process there are time intervals in which persistent excitation (PE) conditions are satisfied in the adaptive loop. This implies that in these time intervals the parameter estimates can drift only in a certain set defined by the level of excitation of the reference signal and the intensities of the unmodelled dynamics and external disturbances. It is shown that in the time intervals where the PE conditions in the adaptive loop cannot be characterized, the tracking error is large enough that the corresponding Lyapunov function decreases, implying boundedness of the parameter estimates. The fact that the parameter estimates are bounded is enough to establish global stability of the considered algorithm. The parameter estimation error and the intensity of admissible unmodelled dynamics and external disturbances are specified in terms of the H∞ norms of the corresponding transfer functions and the level of excitation of the reference signal. The results established in this chapter represent a mathematical formalization of the self-stabilization mechanism which is inherent to the considered adaptive controller.
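For reference, the commonly cited form of the normalized gradient algorithm updates the parameter estimate along the regressor direction, scaled by a normalization sequence. The sketch below assumes the standard normalization 1 + ||φ(t)||², purely for illustration, since the thesis's contribution is precisely a different choice of normalization sequence; the plant parameters and regressor signals are hypothetical.

```python
import numpy as np

# Normalized gradient estimation of theta_0 in y(t+1) = theta_0' phi(t).
# Standard normalization 1 + ||phi||^2 is assumed here for illustration;
# the plant parameters and regressor sequence are hypothetical.
rng = np.random.default_rng(0)
theta_true = np.array([1.2, -0.5])   # unknown nominal parameters
theta_hat = np.zeros(2)              # initial estimate
a = 0.5                              # algorithm gain, 0 < a < 2
for t in range(500):
    phi = rng.standard_normal(2)     # persistently exciting regressor
    y = theta_true @ phi             # noiseless plant output
    e = y - theta_hat @ phi          # prediction (tracking) error
    theta_hat += a * e * phi / (1.0 + phi @ phi)  # normalized gradient step
print(theta_hat)  # converges to theta_true under persistent excitation
```

The normalization keeps each step bounded regardless of how large the regressor grows, which is what allows stability arguments to proceed without a projection or dead-zone; here the random regressor supplies the persistent excitation that the chapter's analysis obtains from the reference signal.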
Note that the problem of robust identification of open-loop unstable systems remains unresolved. The presented results can be considered a contribution in this direction.

2.2 Deterministic Adaptive Control

Consider the following discrete time single input single output (SISO) system with unmodelled dynamics

A(q⁻¹) y(t+1) = B(q⁻¹)[1 + Δ₁(q⁻¹)] u(t) + A(q⁻¹) Δ₂(q⁻¹) u(t) + ω(t+1),    (2.1)

where {y(t)}, {u(t)} and {ω(t)} are the output, input and disturbance sequences, respectively, while q⁻¹ represents the unit delay operator. The polynomials A(q⁻¹) and B(q⁻¹) describe the nominal system model and are given by

A(q⁻¹) = 1 + a₁q⁻¹ + … + aₙq⁻ⁿ, B(q⁻¹) = b₀ + b₁q⁻¹ + … + bₘq⁻ᵐ (b₀ ≠ 0).    (2.2)

In equation (2.1), Δᵢ(q⁻¹), i = 1, 2, denote the multiplicative and additive system perturbations. The transfer functions Δᵢ(z⁻¹), i = 1, 2, are causal and stable.

The system in equation (2.1) can be stabilized by designing the adaptive controller so that, for a given reference signal y*(t), the following criterion is minimized:

J = [y(t) − y*(t)]².    (2.3)

Concerning the disturbance ω(t) and reference signal y*(t), it is assumed that

|ω(t)| ≤ k_ω < ∞ and |y*(t)| ≤ m_y < ∞    (2.4)

for all t ≥ 1. Note that the system model in equation (2.1) can be written in the form

y(t+1) = θ₀ᵀ φ(t) + η(t+1),
where θ₀ is the vector of nominal parameters from A(q⁻¹) and B(q⁻¹), φ(t) is the regression vector of past outputs and inputs, and η(t+1) collects the contribution of the unmodelled dynamics and the disturbance ω(t+1).
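To make the recursion in (2.1)–(2.2) concrete, the sketch below simulates the nominal model with Δ₁ = Δ₂ = 0 under a bounded disturbance, as in assumption (2.4). The coefficient values, input, and disturbance level are hypothetical choices (with a stable A), not a system from the thesis, and the sketch is Python rather than the thesis's MATLAB.

```python
import numpy as np

# Nominal part of model (2.1): A(q^-1) y(t+1) = B(q^-1) u(t) + w(t+1),
# i.e. y(t+1) = -a1 y(t) - a2 y(t-1) + b0 u(t) + b1 u(t-1) + w(t+1).
# Hypothetical coefficients; A has stable roots (|poles| = sqrt(0.7) < 1).
a1, a2 = -1.5, 0.7   # A(q^-1) = 1 - 1.5 q^-1 + 0.7 q^-2
b0, b1 = 1.0, 0.5    # B(q^-1) = 1 + 0.5 q^-1
rng = np.random.default_rng(1)
u = np.ones(101)     # step input
y = np.zeros(102)
for t in range(1, 101):
    w = 0.01 * rng.standard_normal()   # bounded disturbance, small per (2.4)
    y[t + 1] = -a1 * y[t] - a2 * y[t - 1] + b0 * u[t] + b1 * u[t - 1] + w
print(y[-1])  # settles near the steady-state gain B(1)/A(1) = 1.5/0.2 = 7.5
```

Writing the same recursion as y(t+1) = θ₀ᵀφ(t) + η(t+1) with θ₀ = (−a1, −a2, b0, b1) and φ(t) = (y(t), y(t−1), u(t), u(t−1)) gives exactly the regressor form used by the estimation algorithms in this chapter.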