Citation
An investigation of genetic algorithm and simulated annealing search methods with control system applications

Material Information

Title:
An investigation of genetic algorithm and simulated annealing search methods with control system applications
Creator:
Coppes, Keil D
Publication Date:
1995
Language:
English
Physical Description:
xiii, 375 leaves : illustrations ; 29 cm

Subjects

Subjects / Keywords:
Engineering design -- Data processing ( lcsh )
Computer-aided design ( lcsh )
Aeronautics -- Data processing ( lcsh )
Aeronautics -- Data processing ( fast )
Computer-aided design ( fast )
Engineering design -- Data processing ( fast )
Genre:
bibliography ( marcgt )
theses ( marcgt )
non-fiction ( marcgt )

Notes

Bibliography:
Includes bibliographical references (leaves 373-375).
General Note:
Submitted in partial fulfillment of the requirements for the degree, Master of Science, Electrical Engineering.
Statement of Responsibility:
by Keil D. Coppes.

Record Information

Source Institution:
University of Colorado Denver
Holding Location:
Auraria Library
Rights Management:
All applicable rights reserved by the source institution and holding location.
Resource Identifier:
34845416 ( OCLC )
ocm34845416
Classification:
LD1190.E54 1995m .C67 ( lcc )



Full Text

PAGE 1

AN INVESTIGATION OF GENETIC ALGORITHM AND SIMULATED ANNEALING SEARCH METHODS WITH CONTROL SYSTEM APPLICATIONS by Keil D. Coppes B.S., University of Colorado, 1990 A thesis submitted to the University of Colorado at Denver in partial fulfillment of the requirements for the degree of Master of Science Electrical Engineering 1995

PAGE 2

This thesis for the Master of Science degree by Keil D. Coppes has been approved by Mike Radenkovic Jan T. Bialasiewicz Joseph Hibe Date

PAGE 3

Coppes, Keil D. (M.S., Electrical Engineering)
An Investigation of Genetic Algorithm and Simulated Annealing Search Methods with Control System Applications
Thesis directed by Associate Professor Mike Radenkovic.

ABSTRACT

Some controls problems exhibit complex plants that have tight tolerances or have parameters which change several times during the design period (particularly in the aerospace industry). These require designs to repeatedly be produced on a per-item basis. Substantial cost savings can be achieved by employing automated design methods to reduce the manual portion of the design effort. In the highly expensive aerospace industry, an expense reduction of an order of magnitude is the vision. Search methods may be used to automatically design controllers by minimizing an objective function that measures the performance of the controller parameter set for different parameter values. Search method design can also employ higher fidelity models than closed-form solution methods, eliminating the possibility of rework inherent in designing to simplified models and then validating the design with higher fidelity models. A quantitative basis with which to measure and refine real methods would be a great help in approaching search method design. In this paper a generalized search method structure and a set of metrics to be embedded in the structure are developed to provide such a basis. The basis is applied to the performance and operations of two

PAGE 4

new search methods, genetic algorithms and simulated annealing. The results of a literature search on these methods are presented to aid in understanding the results. Finally, the refined methods are applied to some sample control design problems and conclusions are drawn. This abstract accurately represents the content of the candidate's thesis. I recommend its publication. Signed Mike Radenkovic

PAGE 5

ACKNOWLEDGMENTS

I would like to thank the following for their contribution to this work: My father, for explaining to me that it was the disciplined ones who stuck to it who got higher degrees (the idea of doing it); My mother, for reviewing my work to make sure it read correctly, and for her patience as I closeted myself in a room with a computer trying to stick to it, staying up and getting up late (the support); Chris Voth, who first applied Genetic Algorithms to engineering design at the Martin Marietta Astronautics Group (the seed); Jonathan Murray, who got me into controls in the first place, for his explanations of metrics, engineering management theory and practice, showing me the ropes of how to plan, manage, and measure real engineering (how to do it); Norman Osborne, Robert Zermeuhlen, Dave Wilks, and Sandy Scherer of the Martin Marietta Astronautics Group for arranging leave time (the time to do it); and above all, my Lord, without whom I wouldn't have the patience, discipline, or will to finish the work at all (the true reason behind everything).

PAGE 6

CONTENTS

Figures .......... x
Tables .......... xiii

Chapter
1. Introduction .......... 1
1.1. Motivation .......... 1
1.2. Open and Closed Form Design Approaches .......... 2
1.3. Thesis Outline .......... 6
2. A Basis for Comparison .......... 7
2.1. The Performance Measurement Problem .......... 7
2.2. Performance Metrics .......... 10
2.2.1. Accuracy Metric .......... 11
2.2.2. Iteration Cost Metrics .......... 12
2.2.3. Run Metric .......... 13
2.2.4. System Metrics .......... 15
2.3. Method Architecture .......... 15
2.4. Comparison Strategy .......... 18
3. Genetic Algorithm Methods .......... 21
3.1. Overview .......... 21
3.2. Coding .......... 24
3.3. Objective Function .......... 26
3.4. Mating Pool Formation .......... 27
3.4.1. Generation Gap and Steady State Replacement .......... 27
3.4.2. Parent Selection Methods and Fitness Remapping .......... 28
3.5. Reproduction Methods .......... 30
3.6. Success Criteria and Convergence .......... 32

PAGE 7

3.6.1. Determining Convergence .......... 32
3.6.2. Convergence Issues .......... 33
3.7. Related Methods .......... 34
3.8. Thesis GA Implementation Capabilities .......... 34
4. Simulated Annealing Methods .......... 36
4.1. Overview of the Method .......... 36
4.2. Temperature Schedules .......... 37
4.3. Generating Function .......... 38
4.4. Acceptance Function .......... 39
4.5. Success Criteria and Convergence .......... 40
4.6. Simulated Annealing Method Varieties .......... 40
4.7. Comparison with Other Methods .......... 41
4.8. Thesis SA Implementation Capabilities .......... 41
5. Method Examination: Test Problem .......... 42
5.1. Problem Statement .......... 42
5.2. Genetic Algorithm Measurements .......... 43
5.2.1. "Best" Parameter Values .......... 45
5.3. Simulated Annealing Measurements .......... 58
5.3.1. Response to Parameters Fit .......... 60
5.3.2. "Best" Parameter Values .......... 62
6. Sample Application Problems .......... 76
6.1. Mass-Spring-Damper Problem with Center-of-Gravity Feedback .......... 76
6.1.1. Baseline Results .......... 80
6.1.2. Genetic Algorithm Application .......... 82
6.1.3. Simulated Annealing Application .......... 86
6.2. End-point Manipulator .......... 89
6.2.1. Baseline Results .......... 92
6.2.2. Genetic Algorithm Application .......... 95
6.2.3. Simulated Annealing Application .......... 98
7. Conclusion .......... 101
7.1. Summary of Results .......... 101
7.2. Areas for Further Investigation .......... 102

PAGE 8

7.2.1. Metric ..........
7.2.2. Problem Functions ..........
7.2.3. ..........
7.2.4. Optimizations ..........
Appendix A ..........
Appendix B ..........
Appendix C ..........

PAGE 9

Abbreviations ..........

PAGE 10

FIGURES

Figure
1.1 Solution type and convergence .......... 3
2.1 Desired comparison method .......... 7
2.2 Searches seen as feedback systems .......... 8
2.3 Evaluation based on method performance sampling .......... 9
2.4 System metrics built from metrics on single search runs .......... 11
2.5 Search information model .......... 15
2.6 Search methods as instances of a generalized search method .......... 17
2.7 Comparison strategy .......... 18
3.1 Genetic algorithm flow chart .......... 24
3.2 Sample binary coded chromosome .......... 25
3.3 Sample crossovers .......... 31
4.1 Simulated annealing flow chart .......... 37
5.1 GA responses residual graphics .......... 46
5.2 Final distance adjusted response vs. population size .......... 48
5.3 System metric adjusted response vs. population size .......... 49
5.4 Final distance adjusted response vs. fraction of population competing .......... 50
5.5 Final distance adjusted response vs. fraction of population competing .......... 50
5.6 Final distance adjusted response vs. fraction of population replaced .......... 51
5.7 System metric adjusted responses vs. fraction of population replaced .......... 51
5.8 Final distance adjusted response vs. tournament size .......... 52

PAGE 11

5.9 System metric adjusted response vs. tournament size .......... 52
5.10 Final distance adjusted response vs. probability of best winning .......... 53
5.11 System metric adjusted response vs. probability of best winning .......... 53
5.12 Final distance adjusted response vs. number of crossovers .......... 54
5.13 System metric adjusted response vs. number of crossovers .......... 55
5.14 Final distance adjusted response vs. mutation probability .......... 55
5.15 System metric adjusted response vs. mutation probability .......... 56
5.16 VFSR response fit residual graphics (ReAnneal) .......... 61
5.17 Final distance adjusted response vs. parameter initial temperature (ReAnneal) .......... 62
5.18 System metric adjusted response vs. cost initial temperature (No ReAnneal) .......... 63
5.19 Adjusted responses vs. parameter initial temperature .......... 64
5.20 Very fast simulated annealing initialization code .......... 65
5.21 Adjusted responses vs. temperature annealing scale factor (ReAnneal) .......... 67
5.22 Adjusted responses vs. temperature ratio scale factor .......... 68
5.23 Adjusted responses vs. parameter to cost temperature scale factor .......... 69
5.24 Adjusted response vs. test period .......... 71
5.25 Final distance adjusted response vs. reanneal rescale value (ReAnneal) .......... 72
5.26 Final distance adjusted response vs. default delta X (ReAnneal) .......... 73
6.1 Three-body mass-spring-damper (MSD) system .......... 76
6.2 MSD system controller structure .......... 78
6.3 MSD plant pole locations .......... 78
6.4 MSD plant frequency responses .......... 79
6.5 MSD baseline controller + plant open loop frequency response .......... 80
6.6 MSD baseline controller disturbance frequency response .......... 81

PAGE 12

6.7 MSD baseline controller closed loop poles .......... 81
6.8 MSD GA-designed controller + plant open loop frequency response .......... 84
6.9 MSD GA-designed controller disturbance frequency response .......... 84
6.10 MSD GA-designed controller closed loop poles .......... 85
6.11 MSD SA-designed controller + plant open loop frequency response .......... 87
6.12 MSD SA-designed controller disturbance frequency response .......... 87
6.13 MSD SA-designed controller closed loop poles .......... 88
6.14 Geometry of the elastic arm model .......... 89
6.15 Robotic arm plant frequency responses .......... 91
6.16 Robotic arm plant pole locations .......... 91
6.17 Block diagram for end-point position control .......... 92
6.18 Robotic arm baseline controller closed loop poles .......... 93
6.19 Robotic arm baseline controller + plant open loop frequency response .......... 93
6.20 Robotic arm GA-designed controller closed loop poles .......... 96
6.21 Robotic arm GA-designed controller + plant open loop frequency response .......... 97
6.22 Robotic arm SA-designed controller closed loop poles .......... 99
6.23 Robotic arm SA-designed controller + plant open loop frequency response .......... 100
A.1 Generalized search information model .......... 105

PAGE 13

TABLES

Table
5.1 Genetic algorithm parameter ranges .......... 43
5.2 Genetic algorithm test problem performance .......... 44
5.3 Genetic algorithm "best" parameter values summary table .......... 56
5.4 Genetic algorithm second pass summary table .......... 57
5.5 Very fast simulated annealing parameter ranges .......... 58
5.6 Very fast simulated annealing test problem performance .......... 59
5.7 Very fast simulated annealing "best" parameter values summary table .......... 74
5.8 Very fast simulated annealing second pass summary table .......... 74
6.1 Mass-spring-damper baseline results table .......... 80
6.2 Mass-spring-damper genetic algorithm results table .......... 82
6.3 Mass-spring-damper simulated annealing results table .......... 86
6.4 Robotic arm baseline results table .......... 94
6.5 Robotic arm genetic algorithm results table .......... 95
6.6 Robotic arm simulated annealing results table .......... 98

PAGE 14

1. Introduction

1.1. Motivation

Certain types of controls applications exhibit plants that vary from customer to customer or whose parameters change during the design period. In aerospace applications the variety of payloads and tight tolerances on weight and performance requires control designs to repeatedly be produced on a per-payload basis even when the launch vehicle type has been flown several times. In addition, payload mass properties change repeatedly as the payload is constructed, thus changing the plant parameters and requiring redesign or revalidation. Currently this design and validation against high fidelity models is performed by hand at great cost. Should portions of this design and validation effort be automated, the cost would be greatly reduced. This cost savings would be achieved by reducing the total design effort required by employing an automated design method to speed design. An order of magnitude cost reduction may be possible if the proper automated design methods are developed. As a real example, the parameters for the optimal discrete time powered-flight control employed by Martin Marietta on the Transfer Orbit Stage (TOS)¹ were automatically generated from payload mass-properties data via a MATLAB² script and verified using EASY5³ FORTRAN-based time-domain

¹ Copyright, Orbital Sciences Corporation.
² Copyright, The MathWorks, Inc.

PAGE 15

simulations. As a result, the TOS recurring design time was reduced from 6 months to 3 man-days (2 weeks real time), a savings of over 90%. Aerospace is not alone with regard to plants with tight constraints and changing parameters. This can occur in cost or performance-driven designs or any situation where the system involves inter-related thermal, mechanical, electrical, chemical, or other dynamic systems under concurrent development. Although the author's primary interest is in aerospace applications, these methods could benefit many other arenas.

1.2. Open and Closed Form Design Approaches

Several approaches have been taken to meet the goal of automated design, employing both open and closed form solutions. A closed form solution is one that directly maps the parameters of the system to be controlled to the parameters of the controller. Such closed form solutions have the advantage of always producing an answer. However, closed form solutions require more insight into the structure of the problem and the analytic effort involved may be very costly. Some problems may not produce a closed form solution at all. Open form solutions (those solutions produced by methods that do not guarantee an answer) can be a two-edged sword though. Although open form methods may produce solutions in lieu of a closed form solution, they may fail to converge or may converge to an unacceptable answer (see figure 1.1). From a convergence standpoint, closed form solutions should be preferred if the cost

³ Copyright, Boeing Computer Services.

PAGE 16

in producing them is not too great. On the other hand, most closed form solutions operate on some simplified model of the system, perhaps reducing the number of modes involved or linearizing the system. Once the control design has been completed, the controller is applied to some higher fidelity model of the system to validate its performance. If the controller fails, the system returns to the design phase and additional design costs (rework) are incurred while the "bugs" are worked out of the design.

Figure 1.1 Solution type and convergence (closed form solutions offer guaranteed convergence; open form solutions such as H-infinity, neural networks, and search-based methods do not)

The TOS was a relatively simple plant with easily isolated high-frequency modes. However, although the optimal control method employed promised a phase margin of 60 degrees and infinite gain margin, the control yielded less than 45 degrees and less than 12 dB of gain margin when applied to the validation model. This is understandable as the optimal control design did not take the high-frequency modes of the plant into account. Although a workable solution was always produced, it was always conceivable that the solution was not a working solution, as the validation was

PAGE 17

decoupled from the design. More complex or more tightly constrained problems may not yield usable closed form solutions on a regular basis. In these cases each design must be generated as a totally unique case (i.e., with no a-priori knowledge other than the structure of the plant and prior experience) or the designer must employ an open form solution. Some open form solution methods are listed below.

H-infinity or H2 Methods: Automated H-infinity and H2 control methods have already been applied to launch vehicle design and other problems with some success. These methods attempt to optimize a norm based on weighting functions that model the design constraints, plant variations, and noise inputs to the plant. Search techniques are employed to find some acceptable (but non-optimal) solution in an H-infinity or H2 transformation space. However, mapping the design constraints into proper weighting functions in the transformation space still presents difficulties, and approximations in the models used may preclude viable solutions.

Neural Networks: A neural network may also be used to design controllers. If the training weights are automated, the neural net essentially becomes a weighted search.

Generalized Search Methods: Search-based design methods make no effort to employ transformations to simplify problem design. Rather, the designer assigns a structure to the controller (or encodes it into the search), sets constraints and design goals (e.g. phase and gain margins, time-domain characteristics, performance over a set of variate models), and then uses a computer to numerically search for the solution over the problem space that these define. The advantage of these methods is that setup is straightforward and directly uses the high-fidelity models already required for

PAGE 18

validation. Frequency and time-domain analysis quantities may be directly incorporated into the design goals. If the search method can handle nonlinearities in its search space, response of the controller to plant nonlinearities may be included as well. Any search method may be employed, but results will vary with the attributes of the algorithm. Of additional interest is the fact that the probability of proper performance in the lab is greatly enhanced. Overall, these methods require much greater computer resources than closed form methods, but with the growing power and speed of computers they are becoming a viable design option. The limiting factors in these methods are then the speed of the design engine and the performance of the search method employed. Measuring the performance of the search method (predicted accuracy, cost of generating solutions, etc.) is important in choosing a particular search method for design and in finding the best set of parameters for use with that method. A quantitative basis for performing these measurements on real methods would be a great help. In this paper a generalized search method structure and a set of metrics to be embedded in the structure are developed. The engines for genetic algorithm and simulated annealing searches have been embedded in this generalized search method structure and measured with the developed metrics. The genetic algorithm and simulated annealing methods are newcomers to the field of search methods and are of particular interest because they do not employ partials (and so do not require a differentiable problem space). This allows full-up nonlinear models to be employed in the objective function.

PAGE 19

1.3. Thesis Outline

In Chapter 2, a quantitative measurement basis for comparing the performance of various varieties of search methods is developed. A unified set of metrics is developed for test measurements. In later chapters the basis and metrics are applied to finding "best" sets of parameters for a genetic algorithm and simulated annealing method. In Chapter 3, the workings of various genetic algorithm methods are examined in light of some literature in the field. This chapter aids in understanding the results produced by applying the measurement basis to the genetic algorithm. In Chapter 4, the workings of various simulated annealing methods are examined. This chapter is included to aid in understanding the measurement results produced from applying the measurement basis to a simulated annealing method. In Chapter 5, the genetic algorithm and simulated annealing methods are applied to a difficult test problem. The measurement basis is applied to an ensemble of runs for each method as the parameters for each method are allowed to vary over set ranges. The results are then analyzed to derive the "best" parameters within the ranges. Insight into each method (from chapters 3 and 4) aids in understanding the particular "best" parameter values. In Chapter 6, the genetic algorithm and simulated annealing methods are applied to the design of controls for a launch vehicle analogue and robotic arm. Results are compared to results generated by H2 and successive loop closure methods (launch vehicle and robotic arm respectively). In Chapter 7, conclusions are drawn and some areas for further investigation are presented.

PAGE 20

2. A Basis for Comparison

This chapter visits the performance measurement problem and then defines a set of metrics, a common search method architecture, and a concept for performance testing.

2.1. The Performance Measurement Problem

Ideally we would like to develop some test into which we may submit any search method and receive some numerical measurement of the method's performance (figure 2.1). This would allow objective, direct comparisons between methods.

Figure 2.1 Desired comparison method

However, formulating such a situation is not simple. The ultimate goal of the search method is to find the global minimum of some objective function, F(w), by iterating over a series of observations of F(w) (the objective function maps a parameter set to a scalar value representing the fitness of those parameters to solve the problem). When applied to particular objective functions, search methods define

PAGE 21

feedback systems with some internal states and output w, where w is the position vector defining the current estimate of the location of the global optimum of F(w) (figure 2.2).

Figure 2.2 Searches seen as feedback systems

For encoded methods such as those under examination here, performance may be measured by stimulating the search method and objective function with some starting state, w, and observing the resulting behaviour. Because most search methods also have parameters that tailor how they propagate the current state, multiple sets of observations with different parameters must be taken as well as sets of observations with different starting states (figure 2.3). This is because modifying the method parameters modifies the operation of the method, so that performance for one set of parameter values will be different than that for another set of parameter values. Also, different starting states give different trajectories through the search space.

PAGE 22

Figure 2.3 Evaluation based on method performance sampling

Knowledge of how the search converges to the global optimum for some particular objective function does not necessarily imply knowledge of how the search will converge to the global optimum for some other objective function. Measuring the general performance of a search method raises an additional problem. Each objective function is unique and therefore defines a different search system with different performance. To measure convergence to the global optimum of any objective function, the position of each global optimum must be known. Of course, if the location of the global optimum were known the search would be unnecessary. Some assumption must be made about how the search will perform among some class of functions. Usually a difficult test function or suite of test functions is employed, with the assumption that these are more difficult to solve than most application problems and therefore bound the method performance. This technique will be used later in measuring the search methods examined in this work.
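To make the sampling scheme of figure 2.3 concrete, the sketch below draws random method-parameter sets V and random starting states w0 and records the resulting search trajectories. This is an illustrative Python sketch, not the thesis implementation; the `search_method(V, w0, objective)` interface and all names are assumptions.

```python
import random

def sample_method_performance(search_method, objective, method_ranges,
                              start_ranges, n_parameter_sets, n_starts):
    """Sample a search method's behaviour as in figure 2.3: draw random
    method-parameter vectors V and random starting states w0, run the
    search, and keep the observed trajectories.
    `search_method(V, w0, objective)` is an assumed interface that
    returns the sequence of estimates produced by one run."""
    observations = []
    for _ in range(n_parameter_sets):
        V = [random.uniform(lo, hi) for lo, hi in method_ranges]
        for _ in range(n_starts):
            w0 = [random.uniform(lo, hi) for lo, hi in start_ranges]
            observations.append((V, w0, search_method(V, w0, objective)))
    return observations
```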

PAGE 23

2.2. Performance Metrics

A metric is a measurement that reflects some quality of a system or process with respect to some user requirement or goal. Multiple metrics may be weighted and combined into a system figure of merit (or system metric). Metrics are therefore very useful in comparison situations as the system qualities may ultimately be reduced to a single number or a small set of numbers (that is not to say that the construction of a system metric is an easy task). In this case the processes are search methods. The major concern is to choose a set of metrics that are common to the examined search methods and that rightly represent the general qualities of the method performance. The system metric for a search method is a weighting of 1) the success of the search method in locating the global optimum of the search space, and 2) the amount of computational effort the method expends in converging to a final value. The system metric must also be valid over the defined range of search method parameters. Therefore search runs with different control variable combinations must be employed in order to calculate it. The precursor to defining the system metric is to define a metric for the properties of a single search run (run metric). The run metric reflects the same properties as the system metric, but for a single run. The system metric will reflect the statistical properties of the run metrics for a series of search runs (figure 2.4).

PAGE 24

Figure 2.4 System metrics built from metrics on single search runs (M_run,i is the metric for the properties of run i with control parameters X_i; the system metrics are the mean and standard deviation of the M_run values)

However, although one may glibly name metrics, each metric must be clearly understood and defined in such a manner that it may be attached to specific numeric measurements. This includes not only the system and run metrics, but also the metrics from which the system and run metrics are built.

2.2.1. Accuracy Metric (E_opt)

"Accurate: in exact conformity to truth, to a standard or rule, or to a model; free from error or defect."¹

The "success of the search method in locating the global optimum of the search space" may be measured by the error in the estimate of the global optimum position. The accuracy of a given search iteration is simply the distance of the current estimate or set of estimates from the location in the problem space that gives the optimum value:

¹ [BS61], p. 9.

PAGE 25

E_opt = sqrt( (w_1 - w*_1)² + ... + (w_n - w*_n)² )

where w_i is the ith coordinate of the estimate, w*_i is the ith coordinate of the global optimum, and n is the number of coordinates.

The accuracy measurement is lumped with other measurements in the run metric. The units of search accuracy may be mixed as the units of the input parameters to the objective function may be mixed.

2.2.2. Iteration Cost Metrics (N_flops, N_evals)

The "amount of computational effort a method expends" may be divided into 1) evaluations of the objective function, and 2) all other operations performed by the search method (method operations). Computational cost corresponds to the number of mathematical operations needed to achieve a given result. The most time consuming operations in a computer program are the floating point operations and the time spent in memory swapping to disk; all others are negligible by comparison. If properly defined, the operation of a basic search structure can be made to contain almost no floating point operations and its operations may be neglected in terms of overall cost. The major cost is then confined to the operations of the method and to the evaluation of the objective function. The cost of method operations for a search iteration may be measured directly by counting the number of floating-point operations that the method incurs. With regard to objective function evaluations, it is

PAGE 26

easier to measure the number of times that the objective function is evaluated than the internal number of flops. It is assumed that the objective function always has the same number of internal operations. Iteration cost is then measured by 1) the number of flops incurred in the method during the iteration, N_flops, and 2) the number of times the fitness function is evaluated during the iteration, N_evals. Units of iteration cost are number of flops and number of evaluations, respectively. For this work the effects of disk swap were neglected under the assumption that machines with large amounts of RAM would solve this problem.

2.2.3. Run Metric (M_run(evals), M_run(flops))

As mentioned above, the run metric must measure 1) the success of the search method in locating the global optimum of the search space, and 2) the amount of computational effort the method expends in converging to a final value. Because of the split in the cost metric, the run metric comes in two flavors, one based on the number of objective function evaluations and one based on the number of flops:

M_run(evals) = Σ_i a_i · E_opt,i · N_evals,i    (2.1)
PAGE 27

M_run(flops) = Σ_i a_i · E_opt,i · N_flops,i    (2.2)

where the sums run over the iterations of the run, E_opt,i is the accuracy at iteration i, and the iteration gain a_i starts at 1 and grows by the iteration gain step b at each iteration (a_i = 1 + (i - 1)·b). The run metric is a penalty function on the results of each iteration. As each iteration i passes, the iteration gain, a_i, grows increasingly larger so that continuing non-convergence is increasingly penalized. This is so that the score for runs that find the global optimum but overshoot it will tend to be greater than runs that find the global optimum and stay there. A search method that finds the global optimum will not be penalized after finding it because E_opt goes to zero. Therefore, the faster the run converges, the smaller M_run will be. The longer the run fails to converge, the greater M_run will become. M_run will also be smaller if the search trajectory is a straight path to the global optimum than if it follows a contour with the same step size at each iteration. Note that b, the iteration gain step, greatly affects the tradeoff between performance in final distance from the global optimum and cost performance. If b is too great, the system metric will be heavily penalized in cost in the early portion of the search as the iteration gain grows large very quickly. In this case the system metric will be smallest for searches that terminate early, regardless of their performance in finding the global optimum. The units of M_run may be mixed due to the units of E_opt. However the units may be thought of as distance-cost. The iteration gain, a_i, is dimensionless.
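As a concrete illustration, a minimal Python sketch of the accuracy and run metrics follows. The gain schedule a_i = 1 + (i - 1)·b is taken from the reconstruction of equations (2.1) and (2.2) above, since the scanned equations are partially illegible; treat it as an assumption rather than the thesis's exact formula.

```python
import math

def accuracy(w_est, w_opt):
    """E_opt of section 2.2.1: distance of the current estimate from
    the known global optimum of the test function."""
    return math.sqrt(sum((w - o) ** 2 for w, o in zip(w_est, w_opt)))

def run_metric(estimates, costs, w_opt, b=0.01):
    """M_run of equations (2.1)/(2.2): iteration-gain-weighted sum of
    accuracy times per-iteration cost (N_evals or N_flops).
    NOTE: the gain schedule a_i = 1 + (i - 1) * b is an assumption
    inferred from the surrounding text."""
    total = 0.0
    for i, (w_est, cost) in enumerate(zip(estimates, costs), start=1):
        a_i = 1.0 + (i - 1) * b   # grows, so late non-convergence costs more
        total += a_i * accuracy(w_est, w_opt) * cost
    return total
```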

PAGE 28

2.2.4. System Metrics (M_method(evals), σ_method(evals), M_method(flops), σ_method(flops))

The system metrics are simply the mean, standard deviation, and max and min values of M_run for each variety of M_run. Like M_run, the units may be mixed and may be thought of as distance-cost.

2.3. Method Architecture

The effects of method overhead must be isolated as much as possible or be clearly identified so that they may be removed. Development of a common abstraction and architecture for the search methods reduces unnecessary differences in method overhead. Object-oriented analysis (OOA) provides ready tools by which to generate this abstraction (in particular, the analysis techniques developed by Shlaer/Mellor in [SM88] and [SM92]).

Figure 2.5 Search information model

The resultant information model (figure 2.5) defines a

PAGE 29

general object-oriented architecture in which to examine and implement the various search methods. Five object types emerge from the analysis²:

A problem space contains the information that defines the problem parameter ranges over which searches are allowed to occur, the dimensionality of allowed searches, and the objective function that evaluates the value of particular parameter combinations.

A search method defines an algorithm that may be used to search any problem space.

A search applies a search method to a problem space.

An iteration is a collection of data points examined by a search at one time.

A data point stores a parameter vector, its dimension, and the value of the problem space fitness function for that vector.

These five object types span all the search methods under examination and many others. Figure 2.6 shows the same search methods to be instances of a generalized search method object.

² A detailed listing of the analysis products is included in Appendix A.
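A minimal sketch of the five object types as Python classes follows; the attribute names paraphrase the information model of figure 2.5 and are assumptions, not the thesis's actual implementation.

```python
from dataclasses import dataclass, field
from typing import Callable, List, Optional, Tuple

@dataclass
class DataPoint:              # 5. Data Point
    coords: List[float]                  # parameter vector
    fitness: Optional[float] = None      # problem-space fitness at coords

@dataclass
class Iteration:              # 4. Iteration
    number: int
    points: List[DataPoint] = field(default_factory=list)

@dataclass
class ProblemSpace:           # 1. Problem Space
    parameter_limits: List[Tuple[float, float]]
    fitness_function: Callable[[List[float]], float]

@dataclass
class SearchMethod:           # 2. Search Method
    propagation_function: Callable       # builds the next Iteration
    convergence_metric: Callable
    method_parameters: dict

@dataclass
class Search:                 # 3. Search: applies a method to a space
    method: SearchMethod
    space: ProblemSpace
    iterations: List[Iteration] = field(default_factory=list)
    cumulative_cost: float = 0.0
```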

PAGE 30

Figure 2.6 Search methods as instances of a generalized search method (the figure tabulates, for each method, its search termination criteria, how it propagates iterations, and how its data points are represented)

The methods share characteristics in that each search method uses an objective function in some problem space (not shown) and employs a method-specific set of functions throughout the course of a search. During the search, iterations are created

PAGE 31

generate sufficient data to allow comparison between different methods and to bound performance for the application problems. The data stream from the data generation step should return the set of parameters, V, to the test problem for each run, and M_run measurements of both varieties for each run. Then, a linear least squares fit on the resultant data generates sensitivity curves indicating a "best" set of parameters for later use on some sample application problems, where "best" is either the lowest final distance from the global optimum (accuracy or raw performance) or the lowest final value of the system metric. Statistical information on the M_run measurements is also calculated. Applying the "best" parameters to the sample problems should then give the reader some idea of sample differences in the results obtained by using the various methods. There are two major assumptions in this strategy that must be valid in order to get valid results. They are as follows:

Assumption 1: The performance of the method M(.) on the tested function, T(w), bounds the performance of M(.) on the application function, F(w). Or, in other words, T(w) must be more difficult for the method to optimize than F(w) and have similar features in its topology to F(w). The control problems under examination tend to be multimodal. T(w) should also be multimodal and nonlinear as well, but more difficult than F(w), and have nonlinearities in order to bound nonlinear control problems. If F(w) is a simpler function than T(w), then the method should converge more quickly for F(w). This is an argument from greater difficulty to lesser difficulty.

PAGE 32

Assumption 2: A linear least squares fit of the form Y = g(X_i) + E on the data generated from a sufficient number of trials of M(X_i) on the tested function, T(w), is sufficient to characterize the behaviour of method M(.) for test problem T(w), where

i is the number of trials of method M(X) on test problem T(w),
n is the number of control variables in M(X),
X is a randomly generated n × i matrix where each column X_i is a valid vector of control variable values,
M(X_i) is method M with control parameters X_i,
Y is a vector of length i of convergence performance measurements corresponding to the results of independent trials applying M(X_i) to test problem T(w),
g(X_i) is a vector of n elements composed of real-valued functions where each element is a function of one scalar element of X_i,
E is the calculated error for each element of g(X_i).

This assumption pertains to the capability of the least squares fit method to summarize the trial data. This approach was suggested by input from a design of experiments philosophy. Monte Carlo analysis was also considered, but was discarded due to its high cost and lack of returned data on the sensitivity of method performance to control parameter values. Validation of these assumptions is beyond the scope of this thesis.
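The sketch below shows one way to perform the Assumption 2 fit with numpy. The affine basis (a constant plus one slope per control variable) is an assumption; the thesis's actual basis functions g are not recoverable from the scan.

```python
import numpy as np

def fit_sensitivities(X, Y):
    """Least squares fit Y ~ g(X) + E for Assumption 2.
    X: (trials x controls) matrix of control-variable values
       (note: rows are trials here, the transpose of the text's n x i X).
    Y: (trials,) vector of run-metric measurements.
    The affine basis [1, x1, ..., xn] is an assumption standing in for
    the thesis's basis functions g."""
    A = np.hstack([np.ones((X.shape[0], 1)), X])
    coeffs, residuals, _, _ = np.linalg.lstsq(A, Y, rcond=None)
    return coeffs, residuals
```

The fitted slopes give the sensitivity of the run metric to each method parameter, which is what drives the selection of the "best" parameter values in Chapter 5.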

PAGE 33

3. Genetic Algorithm Methods

3.1. Overview

Genetic Algorithms (or GAs) are adaptation methods based on the theory of evolution. They were first rigorously introduced by Holland in his 1975 work [Hol75]. They are well summarized by Beasley, Bull, and Martin [BBM93]:

GAs use a direct analogy of natural behavior. They work with a population of "individuals"; each individual is assigned a "fitness score" according to how good a solution to the problem it is.... The highly fit individuals are given opportunities to "reproduce", by "cross breeding" with other individuals in the population. This produces new individuals as "offspring", which share some features taken from each "parent". The least fit members of the population are less likely to get selected for reproduction, and so "die out".... By favouring the mating of the more fit individuals, the most promising areas of the search space are explored. If the GA has been designed well, the population will converge to an optimal solution to the problem.¹

To speak in more technical terms, GAs work with a varying set of data points (e.g. controller parameter sets) and their associated solutions. Each individual is a data point and its fitness is the value of the objective function at its location in the problem space. Its "genetic material" or "chromosomes" are an encoded set of the data point parameter values. This is not unique for a numerical search method. What

¹ [BBM93], pp. 1-2.

PAGE 34

distinguishes the genetic algorithm from many other algorithms is that multiple data points or a "population" are examined simultaneously. New data points are not generated via steps or linear searches in the problem space; rather, data points are selected from the population in some method that favors the individuals with the best solutions and are "cross-bred". Sets of consecutive symbols from the parents' "genetic material" are combined by "crossover" to produce genetic material for the offspring. There is also the possibility of symbols being randomly changed by a "mutation" operator. The fitnesses of the offspring are then evaluated, and the offspring replace some of the less fit members of the population. Heuristically, as selection is organized to favor the more fit and the less fit are replaced in each "generation", the average fitness of each succeeding generation should be higher than the previous. There is no mathematical proof that GAs with finite populations will find the optimal solution, but real and finite GAs produce usable solutions anyway. There are theories beyond the heuristic as to why GAs work. Holland advanced the schema theorem [Hol75]. A schema is a template which constrains certain genes in a chromosome to hold certain values. To borrow an example, the chromosome "1010" contains the schemata "10##", "#0#0", "##1#", "101#", and many others.² (The #'s in each schema may be replaced by either 1's or 0's in any combination.) According to the theory, good schemata cause good fitnesses, and individuals with those schemata will be more likely to be selected to pass on their genes to the next generation, thus increasing the probability of it containing a better set of solutions. Holland proved that the property of implicit parallelism allows a generation of

² [BBM93], p. 7.

PAGE 35

binary strings to usefully process on the order of n³/L schemata, where n is the number of bits and L is some constant integer. These results were expanded by Goldberg, Whitley, Radcliffe and others. Goldberg worked with short schemata and called them building blocks, stating that successful coding put related genes close together on the chromosome and minimized interaction between the genes. Research continues in the area of coding theory. Another theory of GA (and general numerical search) functioning is the idea of exploration and exploitation. Exploration probes previously uninvestigated areas of the problem space, and exploitation uses information from already investigated areas to try and produce better answers. Holland [Hol75] showed that GAs balance exploration and exploitation near-optimally for infinite populations. Although infinite populations are impossible in practice, finite GAs remain useful as their exploration and exploitation attributes still function in a limited fashion. The above is sufficient to highlight the major issues for GAs: coding method, objective function formulation, population size, method by which mating individuals are chosen, and reproduction method. Convergence characteristics are also an issue. Figure 3.1 shows the GA flow expressed in the generalized search structure.
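A minimal Python skeleton of the flow in figure 3.1 is sketched below; every helper (population initialization, selection, recombination, mutation, convergence test) is assumed to be supplied by the caller, and the replacement policy shown is only one of the options discussed later in this chapter. Fitness here is a positive penalty to be minimized, matching the thesis convention described in section 3.3.

```python
def replace_least_fit(population, scores, offspring, penalty):
    """Replace the worst members (highest penalty, under the thesis's
    minimization convention) with the new offspring."""
    ranked = sorted(zip(scores, population), key=lambda pair: pair[0])
    keep = ranked[:len(population) - len(offspring)]
    new_population = [ind for _, ind in keep] + offspring
    new_scores = [s for s, _ in keep] + [penalty(ind) for ind in offspring]
    return new_population, new_scores

def genetic_algorithm(penalty, init_population, select, recombine,
                      mutate, converged, max_generations=200):
    """Skeleton of the GA flow of figure 3.1: generate an initial
    population, then select, recombine, mutate, and replace until the
    convergence criterion is met. All helpers are caller-supplied."""
    population = init_population()
    scores = [penalty(ind) for ind in population]
    for _ in range(max_generations):
        if converged(scores):
            break
        parents = select(population, scores)   # form the mating pool
        offspring = [mutate(child) for child in recombine(parents)]
        population, scores = replace_least_fit(
            population, scores, offspring, penalty)
    best = min(range(len(scores)), key=scores.__getitem__)
    return population[best], scores[best]
```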

PAGE 36

Figure 3.1 Genetic algorithm flow chart (initialization: generate initial population; propagation: select individuals to reproduce and recombine the selected individuals; convergence: evaluate the convergence criterion and test whether it is met)

3.2. Coding

The coding of a population is the method by which the input parameters to the problem are represented. The coded parameters (or genes) are joined together to form a chromosome. A common coding method is binary encoding. Consider the problem of designing a controller with two poles and no zeros. Under binary encoding a chromosome might take a value as shown below in figure 3.2. Parameter values are stored as unsigned binary numbers and used in fitness functions either directly as integers, or as values in some range defined in the problem space. Alternatively, the genes may be coded in binary floating point format or character form.
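A short sketch of the decoding illustrated in figure 3.2 (next page) follows; the 16-bit gene width and the 2¹⁶ − 1 normalizing divisor are assumptions inferred from the recoverable bit patterns, not stated explicitly in the scan.

```python
def decode_gene(bits, lower_limit, range_size):
    """Decode an unsigned-binary gene to a real value in
    [lower_limit, lower_limit + range_size], as in figure 3.2.
    The divisor 2**len(bits) - 1 (65535 for a 16-bit gene) maps the
    all-ones string to the top of the range; this normalization is an
    assumption, since the scan shows only the raw integer values."""
    return lower_limit + range_size * int(bits, 2) / (2 ** len(bits) - 1)

# The two genes of figure 3.2, with hypothetical limits and ranges:
a = decode_gene("0110010001010011", lower_limit=0.0, range_size=10.0)
b = decode_gene("1101110101110111", lower_limit=-5.0, range_size=5.0)
```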

PAGE 37

a gene: 0110 0100 0101 0011 (= 25683); b gene: 1101 1101 0111 0111 (= 56695)
a = (a lower limit) + (a range size) × (25683 / 65535)
b = (b lower limit) + (b range size) × (56695 / 65535)

Figure 3.2 Sample binary coded chromosome

The question of which coding method is the best still remains. [BBM93b] states that a comparison by Janikow and Michalewicz of binary and floating-point representations found that the floating point coding gave faster, more consistent, and more accurate results. However, in the same section, the paper notes contention by Goldberg that binary representations provide the highest number of schemata and should perform best and by Antonisse that alphabets with many symbols should perform best. Goldberg's contention is also noted by Voth in [Vot93]. The GA implemented in the research for this thesis allows either binary or floating point coding. Some researchers employ more complicated genetic structures. Diploid chromosome implementations have two sets of genes rather than a single set (as in haploid chromosomes), with some genes being dominant over others. This way an alternate set of solutions may be remembered for use in later generations. Haploid chromosome implementations are the most prevalent. In some schemes, genes can be switched on and off to allow different behaviours under different conditions. For GAs

PAGE 38

that allow the order of the genes in the chromosome to be modified, the genes are tagged so that they may be properly used by the fitness function. Allowing gene reordering is an attempt to find the best gene order to solve the problem. The process of switching the order of two genes is known as inversion. The GA implementation in this thesis supports only haploid chromosomes with fixed gene order.

3.3. Objective Function

The objective function takes a chromosome as an input and returns a single real value. The returned value should be proportional to how well the parameter values encoded in the chromosome solve the problem under examination. Because the objective function defines the modes (or "peaks" and "valleys") of the problem space, formulation of the objective function is the most important issue in actual GA performance. In real problems evaluation of the objective function also tends to be the most computationally expensive portion of the problem. The easiest objective functions for a GA (or for that matter any search technique) to solve are regular and smooth so as to guide chromosome values to the best solution. One method of constructing objective functions is to construct penalty functions. Penalty functions increase in value as the chromosome performs more poorly. Such formulations attempt to find the optimum solution by finding the minimum of the penalty function. The thesis GA implementation assumes positive valued penalty functions. This is convenient as control problems often involve minimization, whether minimum deviation from a required time response or minimization of some frequency norm.
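As an illustration of a positive-valued penalty function of the kind just described, the sketch below scores a controller by its squared deviation from a required step response. The helper functions `decode` and `simulate_step` are hypothetical stand-ins for the chromosome coding and the plant model; they are not from the thesis.

```python
def step_response_penalty(chromosome, decode, simulate_step, target):
    """Positive-valued penalty of the kind the thesis GA assumes:
    the sum of squared deviations of the closed-loop step response
    from the required response. `decode` maps the chromosome to
    controller parameters and `simulate_step` returns the sampled
    time response; both are hypothetical helpers."""
    controller_params = decode(chromosome)
    response = simulate_step(controller_params)
    return sum((y - r) ** 2 for y, r in zip(response, target))
```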

PAGE 39

If evaluating the objective function is computationally expensive, an approximation may be used initially to quickly gain approximate results. Later a more accurate formulation may be used. However, this reopens the possibility of needing rework to meet the validation model.

3.4. Mating Pool Formation

Reproduction is the mixture of genetic materials from two "parents". These are individuals chosen from the population by some selection technique. When the materials are mixed, two "offspring" individuals are produced to replace some other individuals in the population. The set of all the individuals chosen to reproduce in a given generation is called the mating pool.

3.4.1. Generation Gap and Steady State Replacement

The generation gap is the ratio of the number of individuals replaced in each generation to the total number of individuals in the population. When the generation gap takes a value of 1, the entire population is replaced in each generation. When the generation gap takes a very small value only a few individuals are replaced. In steady state replacement, the offspring only replace others in the next generation if the offspring have a higher fitness score. The perceived value of steady state replacement is that more highly fit parents can survive longer, but also compete directly with their offspring. [BBM93] reports that an investigation by Goldberg and Deb claimed no evidence that steady state replacement was fundamentally better than generational replacement and that the same effects could be produced by manipulating the selection method. With a generation gap less than 1, some members of the population must be

PAGE 40

removed in order to make space for the new offspring. Individuals to die may be chosen at random or may be the individuals in the population with the lowest fitnesses. The thesis GA allows either specification of the generation gap as a numeric percentage or specification of the number of individuals to be replaced. The individuals to be replaced are chosen from the lowest fitnesses in the population. (As a historical note, steady state replacement of the lowest ranking individuals is the technique employed by the GENITOR package built by Darrell Whitley of CSU. GENITOR was employed by Voth in his work at Martin Marietta ([Vot93]), which later prompted this thesis.)

3.4.2. Parent Selection Methods and Fitness Remapping

Parent selection is the process of choosing individuals for reproduction. Parent individuals are chosen and added to the mating pool until it is full. Individuals are then mated at random. Culling selects parents from some percentage of the top individuals. For instance, a culling of 60% will select parents from the top 60% of the individuals in the population, with each individual having the same probability of selection for mating. Fitness-proportionate selection chooses individuals at random from the population, but maps the probability of selection of any given individual to that individual's fitness. According to the schema theorem, each individual's reproduction probability should be directly proportional to its raw fitness. Unfortunately, because the population is not infinite, this can cause premature convergence, where some "super-fit" individual with

PAGE 41

There are several alternative methods for remapping the fitness. Fitness ranking sorts the individuals in the population by their fitnesses and then allocates reproduction opportunities by the ranking values; the raw fitness value of any individual is then not an issue, as the relative values alone determine the probabilities. Fitness scaling transforms the raw fitnesses into reproductive trials by subtracting (2 x average fitness - max fitness) from each fitness. The adjusted fitnesses are then normalized by the average value of the adjusted fitnesses. Note that there is a danger of negative fitnesses to be guarded against, and the possibility of overcompression is still present. These methods are called explicit fitness remapping methods. Alternatively, the mating pool may be filled without going through explicit remapping. Tournament selection randomly chooses sets of individuals from the population and then competes them against one another, with the individual having the highest fitness being most likely to win. The winners are added to the mating pool until it is full. There are a couple of different variations on the tournament. Obviously, the number of individuals selected from the population may be varied. As the number of individuals in the tournament increases, the likelihood of selecting a more-fit individual increases. This increases the selection pressure on each individual, as there are more competitors that might have higher fitness values. Probabilistic tournaments may also be held so that the better individual wins only with some probability p, where p is greater than 0.5 but less than 1.

This decreases the selection pressure, as individuals with lower fitnesses have a greater chance to win than in non-probabilistic tournaments. The thesis supports tournaments of various sizes as well as probabilistic tournaments. No explicit fitness remapping methods are implemented. This choice was supported by a note in [BBM93] that a comparison by Goldberg and Deb concluded that fitness ranking and tournament selection can be made to give similar performance by proper choice of parameters.4 The thesis also allows probabilistic tournaments with more than two individuals. In these tournaments multiple sub-tournaments are held: the chosen set is ranked, and the individuals are competed against each other from the best fitness to the worst, with the probability option applied in each sub-tournament.

4 [BBM93], p. 12.
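A sketch of the probabilistic binary tournament described above, under the thesis convention that fitness is a penalty (lower is better); rand01() and the array interface are illustrative assumptions.

#include <stdlib.h>

static double rand01(void) { return rand() / (RAND_MAX + 1.0); }

/* Select one parent index by a two-competitor probabilistic tournament. */
int tournament_select(const double *penalty, int popSize, double pWin)
{
    int a = (int)(rand01() * popSize);            /* two random entrants */
    int b = (int)(rand01() * popSize);
    int better = (penalty[a] < penalty[b]) ? a : b;
    int worse  = (better == a) ? b : a;
    /* The better individual wins only with probability pWin
       (0.5 < pWin < 1), which lowers the selection pressure. */
    return (rand01() < pWin) ? better : worse;
}

Enlarging the tournament or raising pWin both raise the selection pressure, which is the tradeoff examined experimentally in section 5.2.1.2.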

3.5. Reproduction Methods

Crossover is the most common reproduction method, working on the idea that exchanging material from high-performing parents may produce higher-performing offspring. Crossover takes the two parents, cuts the chromosomes at a random position, and exchanges the resultant material to create two new offspring. In single-point crossover the chromosome is cut in a single place. Multiple-point crossover cuts the chromosome in more than one place. Uniform crossover creates a random crossover mask and then exchanges the material based on its values. Which crossover technique is best has not been proven and remains a topic of debate in GA research. Figure 3.3, below, illustrates some of the crossover techniques.

Figure 3.3 Sample crossovers (single-point crossover, multiple-point crossover with N = 3, and uniform crossover examples on binary strings)

Mutation is applied independently of crossover. In mutation each bit is randomly altered with some small probability. This gives every point in the problem space some possibility of being visited, at least for an infinite population. Varying the mutation rate as time progresses has also been proposed (Thomas Back, [Bac92], [Bac92b]).
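The two operators reduce to a few lines each; the sketch below works over fixed-length bit arrays (the 64-bit length and one-byte-per-bit storage are illustrative choices, not the thesis representation).

#include <stdlib.h>

#define CHROM_BITS 64                 /* illustrative chromosome length */
static double urand(void) { return rand() / (RAND_MAX + 1.0); }

/* Single-point crossover: cut both parents at one position, swap tails. */
void crossover_1pt(const char *p1, const char *p2, char *c1, char *c2)
{
    int cut = 1 + (int)(urand() * (CHROM_BITS - 1));
    int i;
    for (i = 0; i < CHROM_BITS; i++) {
        c1[i] = (i < cut) ? p1[i] : p2[i];
        c2[i] = (i < cut) ? p2[i] : p1[i];
    }
}

/* Mutation: flip each bit independently with small probability pMut. */
void mutate(char *chrom, double pMut)
{
    int i;
    for (i = 0; i < CHROM_BITS; i++)
        if (urand() < pMut)
            chrom[i] ^= 1;
}

Uniform crossover replaces the single cut with a random mask, choosing each child bit from one parent or the other; multiple-point crossover simply toggles the source parent at each of N cut points.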

If only selection and mutation are used, mutation may act as a simple hill-climbing method. This is known as naive evolution. A similar method is used in simulated annealing to avoid having to formulate the derivative of the fitness function. Another operator that has been proposed is the averaging operator, which averages the values of the parents into the offspring; this might choose a better value between the locations of the parents. The creep operator adds some small random amount to an individual or multiplies it by some random value close to one.

3.6. Success Criteria and Convergence

The goal of any search is to converge to the optimum value of the fitness function; the genetic algorithm seeks to have every member in the population do so. However, as in any other search, the mechanism that moves the search through the search space all the way to the optimum and then decides to stop has its limitations. Some of the methods of determining convergence, and descriptions of the problems that arise, follow.

3.6.1. Determining Convergence

From the point of view of the convergence function, unless previous generations are viewed, all that can be seen are the fitness values of each member of the population and the chromosomes of the individuals in the population. By one definition, the population has converged when the difference between the highest and lowest fitnesses in the population is less than some predefined value. Or, if all that is desired is to find an individual that exceeds some particular fitness, convergence occurs when an individual having the needed fitness is seen.
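Both tests amount to one scan over the population's penalty values; a minimal sketch, assuming an array interface:

/* Converged if the fitness spread is within tol, or if the best
   individual already meets the target penalty (lower is better). */
int converged(const double *penalty, int popSize, double tol, double target)
{
    double lo = penalty[0], hi = penalty[0];
    int i;
    for (i = 1; i < popSize; i++) {
        if (penalty[i] < lo) lo = penalty[i];
        if (penalty[i] > hi) hi = penalty[i];
    }
    return (hi - lo < tol) || (lo <= target);
}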

3.6.2. Convergence Issues

The first issue in returning the optimum parameters for a fitness function is getting the population to the optimum. When the GA is started, the fitness values range widely. As the generations go on, the range of fitness narrows due to the improving gene pool. Although this is a desired result, it can cause problems. With premature convergence, some individuals may be sufficiently higher in fitness than the rest of the population that their genes quickly come to dominate the population. Because of this lack of variety in the gene pool, the GA is unable to explore new areas by crossover and is left with the relatively slow technique of mutation for exploration. This case corresponds to a local minimum. In the case of slow finishing, the population has almost completely converged but has not found the global optimum. Again, there is not enough variety to push towards better answers. The different parent selection methods were devised to try to solve these problems. Epistasis (or gene interaction) and deception are two major problems in finding the optimum, but they are related to the formulation of the fitness function rather than to the internals of the GA. Epistasis occurs when genes interact with one another. Because one gene may mask the effect of another, it may be difficult for the GA to push towards the solution. Classical GA theory assumes that the genes do not interact significantly. Because separating the effects of genes in the fitness function may amount to solving the problem, research in this area is an important topic. Deception occurs when individuals which are not in the global optimum increase more rapidly in the population than those in the global optimum. In this manner the population may converge to a non-optimum value. The final problem is that of genetic drift. As genes become predominant in the population, they become essentially fixed.

The "drift" is towards all members of the population having the same gene. When all members of the population share the same gene, change towards the global optimum via crossover is locked out, even if the gene is not optimal. This may cause the GA to stop short of the optimum.

3.7. Related Methods

There are several methods in the literature that combine genetic algorithms with some other method. [MG92] combines genetic algorithms and simulated annealing. Almassy and Verschure combine portions of genetic algorithms and neural networks for adaptive control in [AV92].

3.8. Thesis GA Implementation Capabilities

The capabilities of the GA implemented in this thesis may be summarized as follows:

Coding: Fixed point and floating point
Fitness Function: Any single-valued function
Population Size: Sizes specifiable by long integer
Population Replacement: Specifiable by number of individuals or percentage of population
Parent Selection Methods: Culling, deterministic and probabilistic tournaments of user-specifiable size and win probability
Crossover Methods: 1 to 64 discrete crossovers or uniform crossover
Mutation: Probability of bit mutation specifiable by user

Convergence: May be measured either by best individual or by mean of population fitnesses. Occurs if 1) the spread of high/low fitnesses is within a user-defined range, 2) all chromosomes are identical, 3) the convergence measurement is less than a user-defined target fitness value, or 4) the number of generations exceeds a user-defined value.

A configuration record mirroring these capabilities is sketched below. See the appendix for the implementation code.
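The sketch gathers the capability list into a single illustrative options record; the field names are assumptions rather than the thesis code's identifiers.

/* Illustrative configuration mirroring the capabilities listed above. */
typedef struct {
    int    binaryEncoding;    /* 0 = floating point, 1 = binary          */
    long   populationSize;
    double replacedFraction;  /* generation gap                          */
    int    steadyState;       /* replace only if offspring is fitter     */
    double cullFraction;      /* top fraction eligible to mate           */
    int    tourneySize;       /* competitors per tournament              */
    double tourneyWinProb;    /* P(best entrant wins), in (0.5, 1.0)     */
    int    numCrossovers;     /* 1..64 cuts, or a uniform-crossover flag */
    double mutationProb;      /* per-bit mutation probability            */
    double fitnessSpreadTol;  /* convergence: high/low fitness spread    */
    double targetFitness;     /* convergence: good-enough penalty        */
    long   maxGenerations;    /* convergence: iteration cap              */
} GAConfig;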

4. Simulated Annealing

Simulated annealing (or SA) methods are search methods that correspond to the cooling process in a heated solid. Their invention was initially credited to Kirkpatrick [Kir83], but independent credit is given to Pincus and Cerny by Ingber [Ing93]. SA has been successfully applied to the travelling salesman problem, circuit design, data analysis, imaging, neural network training, finance, and targeting trajectory design [Ing93]. SA is similar to hillclimbing methods in that it performs the exploration portion of its search with a single data point. However, unlike hillclimbing, SA adds noise to the current data point's location according to some generating distribution to form a new data point, and then decides whether or not to accept it. Acceptance is determined by an acceptance function, a temperature schedule, and the difference in fitness values between the two points. What is of note is that SA sometimes allows the search to go uphill, allowing the method to climb out of local minima in order to find better solutions. As time progresses, the temperature schedule produces increasingly lower temperatures so that fewer uphill points are accepted. Also of note is that it may be mathematically proven that SA will converge to the global optimum given time and a proper temperature schedule. This is noted in [Ros1], [Ing92], and [Ing93]. Of course, this does not guarantee convergence in a time period useful to the user.

Over-accelerating the temperature schedule is one of the problems seen in simulated annealing. The major components of simulated annealing are the temperature schedule, generating function, acceptance function, and convergence function. The major issues map directly to these components. Figure 4.1, below, shows the simulated annealing flow expressed as elements of the generalized search structure.

Figure 4.1 Simulated annealing flow chart (initialization: generate an initial point x and set a high temperature; propagation: choose a new point from g(x), set x with probability h, and reduce the temperature by annealing; convergence: evaluate the convergence criterion and end when it is met)

4.2. Temperature Schedules

The temperature schedule is the primary driving force of the simulated annealing search. It drives the stochastic range of the generating function so that steps generated at the beginning of the run have a much greater probable size than steps made later in the run. It also drives the acceptance function, making uphill moves more likely to occur early in the run than later.

Most importantly, the temperature schedule is a vital contributor in proving the statistical surety of finding the global optimum. It must be noted that convergence within a finite or usable time is not guaranteed by the convergence property, but rather that the global optimum will be found at some unknown time between the start of the run and infinity. Heuristic arguments that SA convergence to the global optimum is guaranteed are presented in [Ing92] and [Ing93], with references to more formal proofs. Two issues of note with respect to temperature schedules are quenching and reannealing. In quenching, a faster schedule is used than may be proven to converge. Because SA may take a great deal of time to find an acceptable answer, some users may resort to quenching. Quenching may still be extremely effective, but the property of proven convergence is lost. Reannealing occurs when an SA method resets or scales its own temperature values or schedules during the course of a run. When reannealing occurs, the pseudo-gradient of the best minimum found at that point in the process is calculated, as well as the sensitivities of each parameter. Based on this information, the temperature values are rescaled to give a more precise search in a more limited area. This allows the method to focus more narrowly on the search surface as time progresses.
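As a simplified sketch of the rescaling step (Ingber's actual scheme recomputes annealing-time indices from the sensitivities; here each temperature is merely stretched by the ratio of the largest sensitivity to its own, so insensitive parameters keep searching broadly):

/* Rescale per-parameter temperatures from pseudo-gradient sensitivities.
   A simplification of Ingber's reannealing, for illustration only. */
void reanneal(double *T, const double *sensitivity, int nParms)
{
    double sMax = sensitivity[0];
    int i;
    for (i = 1; i < nParms; i++)
        if (sensitivity[i] > sMax) sMax = sensitivity[i];
    for (i = 0; i < nParms; i++)
        T[i] *= sMax / sensitivity[i];   /* widen search where insensitive */
}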

4.3. Generating Function

The generating function is the probability density function for making some \Delta x step in the search space. The generating function also contributes to the mathematical convergence properties of the method: Ingber's heuristic arguments pair various generating functions with different temperature schedules to show that, for particular combinations of generating functions and temperature schedules, the global optimum will be found as time goes to infinity. This is done by showing that the probability that the optimum will not be sampled as time goes to infinity is zero. Though not always applicable to practical problems, this is a useful property. Some generating functions used in SAs are:

g(\Delta x) = (2\pi T)^{-D/2} \exp[-\Delta x^2/(2T)]    (Boltzmann distribution)    (4.1)

g(\Delta x) = T/(\Delta x^2 + T^2)^{(D+1)/2}    (Cauchy distribution)    (4.2)

g(y) = \prod_{i=1}^{D} 1/[2(|y_i| + T_i)\ln(1 + 1/T_i)],  y_i \in [-1, 1]    (Very fast simulated reannealing distribution)    (4.3)

where in all cases D is the dimension of the problem and T is the value of the temperature.

4.4. Acceptance Function

The acceptance function determines whether the last point created according to the generating function will be used or not. The acceptance function used in simulated annealing is h(\Delta E) = [1 + \exp(\Delta E/T)]^{-1}, where \Delta E is the difference in fitness between the last created point and the last accepted point. If h(\Delta E) is greater than a sample from a uniform distribution, then the point is accepted. As the temperature decreases, the probability of accepting uphill points decreases.
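The acceptance rule is a single comparison; a sketch, with rand() standing in for the thesis random number generator:

#include <math.h>
#include <stdlib.h>

/* Accept the generated point when h(dE) = 1/(1 + exp(dE/T)) beats a
   uniform [0,1) sample; dE < 0 (downhill) is accepted most of the time,
   and uphill moves grow rarer as the temperature T falls. */
int accept_point(double dE, double T)
{
    double h = 1.0 / (1.0 + exp(dE / T));
    double u = rand() / (RAND_MAX + 1.0);
    return h > u;
}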

4.5. Success Criteria and Convergence

Convergence or success in simulated annealing runs may be determined by several different methods: 1) if the target value for the fitness function has been reached, further search is unneeded; 2) when the temperature reaches a very low value, the run may be terminated (hopefully this indicates convergence to the global optimum); 3) if the number of iterations exceeds a set limit, the run may be stopped. The iteration limit is necessary because the convergence property does not guarantee convergence in a finite time.

4.6. Simulated Annealing Method Varieties

Varieties of the method have different temperature schedules and generating functions. Each variety is not covered in detail here, but the high points are noted. Boltzmann annealing (BA) was the earliest version of SA, limiting the temperature schedule to be no faster than T(k) = T_0/\ln k, where k is the iteration number. [Ing92] and [Ing93] reference a proof of the convergence property for BA used with a specific acceptance function in Geman and Geman's paper [Gem84]. BA is also the slowest algorithm in the SA algorithm family. Fast annealing (FA) (also known as Cauchy annealing) employs a temperature schedule of T(k) = T_0/k. For a D-dimensional problem space, the generating distribution is the D-dimensional Cauchy distribution (4.2). This schedule is faster than that of BA, but is still considered slow. Arguments for the statistical guarantee of convergence are similar to those for BA.

Very fast simulated reannealing (VFSR), which was later renamed adaptive simulated annealing (ASA), was the latest technique at the time this paper was written. VFSR employs the temperature schedule T_i(k) = T_{0i}\exp(-c_i k^{1/D}) in each parameter dimension. The constant term c_i is a user-defined parameter used to scale the temperature rate for different dimensions; therefore, a two-parameter problem might have two different temperature schedules operating simultaneously. VFSR also differs from BA and FA in that it 1) operates over predefined ranges rather than ranging freely, and 2) performs reannealing (discussed previously).

4.7. Comparison with Other Methods

Simulated annealing is statistically guaranteed to converge given proper formulation of the temperature schedule and infinite time [Ros1][IR92]. Ingber claims that this property makes SA superior to GAs. For the problems he tested in [Ing94], his results do show superior performance for SA; however, this remains to be verified.

4.8. Thesis SA Implementation Capabilities

For the thesis SA, Ingber's very fast simulated reannealing algorithm was reimplemented in the generalized search structure. See the appendices for the implementation code.
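For illustration, the VFSR schedule above reduces to a one-line computation per parameter dimension (names are illustrative):

#include <math.h>

/* Per-dimension VFSR temperature: T_i(k) = T0_i * exp(-c_i * k^(1/D)). */
double vfsr_temperature(double T0, double c, long k, int nDims)
{
    return T0 * exp(-c * pow((double)k, 1.0 / (double)nDims));
}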

5. Method Examination: Test Problem

This section applies the search engines to a difficult test problem. The test problem serves a dual purpose: it serves as a proving ground over which metrics on the search engines may be run, and it also provides data for analyzing the dependence of method performance on the values of the method parameters.

5.1. Problem Statement

The test problem for the thesis has a global optimum at the origin of the problem space and has on the order of 10^20 local minima, with holes that grow deeper as the origin is approached; its envelope is a paraboloid with axes parallel to the coordinate axes. This problem (problem 5.1, below) appeared in Ingber's and Rosen's work [IR92] as the primary problem in a suite of standard test problems formulated by De Jong.

f(x) = \sum_{i=1}^{4} \begin{cases} c\, d_i (z_i - 0.05\,\mathrm{sgn}(z_i))^2 & \text{if } |x_i - z_i| < 0.05 \\ d_i x_i^2 & \text{otherwise} \end{cases}
z_i = 0.2 \lfloor |x_i|/0.2 + 0.49999 \rfloor \mathrm{sgn}(x_i),
d_i \in \{1.0, 1000.0, 10.0, 100.0\}, \quad c = 0.15, \quad |x_i| \le 1000.0    (5.1)
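A sketch of this objective in C, assuming the reconstructed form of problem 5.1 above:

#include <math.h>

/* Test problem 5.1: paraboloid envelope pocked with ~1e20 local minima. */
double test_problem(const double *x)
{
    static const double d[4] = {1.0, 1000.0, 10.0, 100.0};
    const double c = 0.15;
    double sum = 0.0;
    int i;
    for (i = 0; i < 4; i++) {
        double sgnx = (x[i] < 0.0) ? -1.0 : 1.0;
        double z = 0.2 * floor(fabs(x[i]) / 0.2 + 0.49999) * sgnx;
        if (fabs(x[i] - z) < 0.05) {
            double sgnz = (z < 0.0) ? -1.0 : 1.0;
            double v = z - 0.05 * sgnz;
            sum += c * d[i] * v * v;      /* inside a hole near a plateau */
        } else {
            sum += d[i] * x[i] * x[i];    /* on the paraboloid envelope  */
        }
    }
    return sum;
}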

The final distance from the global optimum (or accuracy) is a good indicator of search success. The system metrics shown previously are also useful in measuring cost tradeoffs.

5.2. Genetic Algorithm Measurements

METHOD EVALUATION: Genetic Algorithm
    (min)     (max)
FLOAT:
 *) 0.0000    0.9000    Mutation probability
 *) 0.0100    1.0000    % of population replaced each generation
 *) 0.0100    1.0000    Top % of population eligible to compete to mate
 *) 0.5000    0.9900    Probability of best competitor winning tourney
INT:
 *) 5         2000      Population size
 *) 0         1         Representation (0 = floating point, 1 = binary)
 *)           30        Number of crossovers
 *) 2         10        Number of competitors in tourney
 *) 0         1         Steady state propagation toggle flag

Table 5.1 Genetic algorithm parameter ranges

Nine of the genetic algorithm method parameters were randomly varied over the ranges shown above in table 5.1. A sample of 1515 search runs was generated, with each run starting with a randomly generated population. Runs were randomly repeated in order to get multiple results for the same set of method parameters. There were two varied method parameters that acted as functionality selectors for the GA: the encoding method flag for the chromosomes and the flag that toggled steady state behaviour. Because of the flags' binary nature, the data set was split over their various combinations and run behaviour was examined within the sub-classes formed.

A summary of the results is shown below.

                 System Metric   Total Run      System Metric  Cumulative   Final Number   Final Distance
                 (Function       Obj. Func.     (Search Cost)  Search Cost  of Iterations  from Optimum
                 Evaluations)    Evaluations
Genetic Algorithm Results, N = 1515
  Mean             2,185.64       51,120.41      2.85E+06      6.89E+07       99.43         1,219.14
  St. Dev.         4,424.70       45,878.79      5.85E+06      6.08E+07        5.92         1,840.55
  Min                 20.41          228.00      1.47E+04      3.15E+04        6.00             0.05
  Max             73,607.34      197,988.00      7.75E+07      2.16E+08      100.00        13,068.03
Floating point encoding, non-steady state (N = 370): mean system metric 2,015.05, mean evaluations 53,000.74
Floating point encoding, steady state (N = 386): mean system metric 1,862.64, mean evaluations 52,455.64
Binary encoding, non-steady state (N = 356): mean system metric 2,438.84, mean evaluations 48,288.56
Binary encoding, steady state (N = 403): mean evaluations 50,616.73

Table 5.2 Genetic algorithm test problem performance

A first inspection shows that final distances for the floating point encodings are about the same, as are those for the binary encodings.

The mean final distances for the floating point encodings tend to be smaller than those for the binary encodings, although the floating point encodings tend to accumulate more function evaluations. Steady state systems have a larger mean method overhead cost. When the system metrics are examined, the results for floating point encoding with steady state propagation are found to be lower, both in mean and standard deviation, than all the other subgroups. As a note, the system metric iteration gain step was set such that each iteration was weighted 3% more than the previous iteration.

5.2.1. "Best" Parameter Values

A set of parameters tuned for raw performance on the final-distance-from-the-global-optimum objective was desired for usage on sample problems, as well as a set that trades off performance for lower cost. BBN Cornerstone1, a data visualization and data fitting applications package, was applied to the data from the genetic algorithm runs, in particular those data corresponding to floating point encoding and steady state propagation. A pair of third-order least squares fits was performed on the test run data. For the raw performance parameter set, a curve fit was built between the natural logarithms of the final parameter distances and the input parameters to each test run. For the tradeoff parameter set, the natural logarithm of the system metric on function evaluations was fit. The logarithms were necessary in order to generate good fits. The graphics below on the residuals between the fits and actuals reflect the accuracy of the fits (figure 5.1).

1 Release 1.1.1, Copyright 1993, Bolt Beranek and Newman Inc.

Figure 5.1 GA responses residual graphics (residuals histograms, residuals vs. fitted response, and residuals probability plots for the final distance and system metric fits)

The residuals histograms approach a normal distribution, though a little skewed. The residuals versus fitted responses show minor trending towards higher values. The residuals probability plots show some residuals greater than expected at about 5, but in general the fits are close to expected. BBN Cornerstone averaged out the effects of all the parameters except the one under examination and subtracted the constant term of the curve fit to produce adjusted responses (responses not including the constant term). The values for the parameter sets were then read off from the adjusted response plots for the individual parameters.

5.2.1.1. Population Size

Figure 5.2 shows the adjusted responses between the final search distances at the end of each test problem run and the population sizes employed. The graphic shows that decreased final distance corresponds to greater population sizes. For the range examined, the population size giving minimum final distance is 2000. This correlates well with genetic algorithm convergence theories that claim guaranteed convergence to the global optimum for infinite populations with infinite numbers of bits in the representation propagated along infinite time. It is also interesting to see the saddle point between populations of 850 and 1400, though its significance is not known.

Figure 5.2 Final distance adjusted response vs. Population size

The population size is a major driver in the cost of getting results, most likely because of the number of schema captured in the population at initialization. That the population size is a tradeoff shows up in figure 5.3, where the tradeoff between the final distance and the cost returns an optimum population size of 400. However, although using the population size of 400 reduces the cost, a penalty is paid in loss of final accuracy.

Figure 5.3 System metric adjusted response vs. Population size

5.2.1.2. Parent Selection Parameters

The parent selection process has several parameters: the percentage of the population eligible to compete in the tournament, the percentage of the population to be replaced as a result of the mating process, the size of the tournament to select parents, and the probability of the best individual in each tournament match being selected to mate.

5.2.1.2.1. Percentage of Population Competing

The fit for final distance to percentage of population competing shows a minimum at about 0.71 to 0.75 (figures 5.4, 5.5). This would eliminate about the bottom quarter of the population from being eligible for the tournament, gradually weeding them out and only allowing the top portion of the population to compete.

This pattern also holds true for the fit on the system metric. This is understandable, as this parameter drives the final fitness directly without additional overhead, merely changing the range of the random number generator used to select individuals for the tournament.

Figure 5.4 Final distance adjusted response vs. Fraction of population competing

Figure 5.5 System metric adjusted response vs. Fraction of population competing

5.2.1.2.2. Percentage of Population Replaced

The percentage of the population replaced is a major driver for the system metric response (figures 5.6, 5.7). It has a much smaller effect on the final distance response. The low parameter value at the minimum supports those who claim that steady state propagation works best when varying only a few members of the population.

The percentage of population replaced sees its minimum at the bottom of the tested range, at 0.01 (1%), for the system metric, and at about 0.65 (65%) for ratings by final distance from the minimum of the test problem.

Figure 5.6 Final distance adjusted response vs. Fraction of population replaced

Figure 5.7 System metric adjusted response vs. Fraction of population replaced

5.2.1.2.3. Tournament Size

The response varies very little within the tournament sizes tested (figures 5.8, 5.9). This is most likely due to masking of this parameter by the probability of the best individual winning the tournament. When the highest-rated individual in the tournament has a significantly greater chance of winning than with lower ratings, the remaining individuals in the tourney receive no chance to compete.

The minimum then understandably appears for a tourney size of 2 individuals, as shown below.

Figure 5.8 Final distance adjusted response vs. Tournament size

Figure 5.9 System metric adjusted response vs. Tournament size

5.2.1.2.4. Probability of Best Individual Winning Tournament

The minimum penalty incurred by adjusting the probability of the best individual winning the tournament occurs at 0.5 (figures 5.10, 5.11). This corresponds to the explanation given in the previous subsection.

Figure 5.10 Final distance adjusted response vs. Probability of best individual winning tournament

Figure 5.11 System metric adjusted response vs. Probability of best individual winning tournament

5.2.1.3. Reproduction Parameters

There are two parameters that control "reproduction" in the genetic algorithm: the number of crossovers that are performed and the probability that mutation will occur for any given child produced.

5.2.1.3.1. Number of Crossovers

The fit between the number of crossovers and the adjusted response is particularly interesting (figures 5.12, 5.13) (uniform crossover is coded as N_Crossovers = -1 and no crossover as 0). The figures indicate 1) that multiple-point crossover can produce similar results to uniform crossover, even with as few as 6 crossover points, and 2) that for the floating point representation (64 bits) the minimum for the data sampled occurs at 12 crossovers (about every five bits for a 64-bit floating point encoding). The question of how many crossovers is best has been asked by many researchers. Here, the answer is 12 crossovers.

Figure 5.12 Final distance adjusted response vs. Number of crossovers

Figure 5.13 System metric adjusted response vs. Number of crossovers

5.2.1.3.2. Mutation Probability

The curve fit for the mutation probability shows low points both at 0 and at 0.9 (figures 5.14, 5.15). This may reflect the contentions from some researchers that mutation is only a minor effect, and the contentions from others that high mutation rates correspond to hillclimbing. Noting the low spread of the points around the mutation probability of zero, but wanting to keep some mutation, a mutation rate of 5e-3 is chosen.

Figure 5.14 Final distance adjusted response vs. Mutation probability

Figure 5.15 System metric adjusted response vs. Mutation probability

5.2.1.4. Summary Table and Second Pass Results

It is appropriate at this point to summarize the findings of the first pass on the test problem and to apply the found parameter sets to a second pass over the problem to observe any improvements. The parameter sets for the second pass are as shown below in table 5.3.

APPLICATION PARAMETERS: Genetic Algorithm Search "Best" Values
    Best Final   Best System
    Distance     Metric      Method Parameter
FLOAT:
 *) 0.005        0.005       Mutation probability
 *) 0.65         0.01        % of population replaced each generation
 *) 0.71         0.71        Top % of population eligible to compete
 *) 0.5000       0.5000      Probability of best competitor winning tourney
INT:
 *) 2000         400         Population size
 *) 0            0           Representation (0 = floating point, 1 = binary)
 *) 12           12          Number of crossovers
 *) 2            2           Number of competitors in tourney
 *) 1            1           Steady state propagation toggle flag

Table 5.3 Genetic algorithm "best" parameter values summary table

The results from the second passes are shown below in table 5.4.

                 System Metric   Total Run      System Metric  Cumulative   Final Number   Final Distance
                 (Function       Obj. Func.     (Search Cost)  Search Cost  of Iterations  from Optimum
                 Evaluations)    Evaluations
Genetic Algorithm Results, N = 1515
  Mean             2,185.64       51,120.41      2.85E+06      6.89E+07       99.43         1,219.14
  St. Dev.         4,424.70       45,878.79      5.85E+06      6.08E+07        5.92         1,840.55
  Min                 20.41          228.00      1.47E+04      3.15E+04        6.00             0.05
  Max             73,607.34      197,988.00      7.75E+07      2.16E+08      100.00        13,068.03
Parameters Optimized for Final Distance, N = 298
  Mean             1,144.97      123,986.04      1.89E+06      2.03E+08
  St. Dev.           619.29        5,920.18      1.02E+06      9.86E+06
  Min                175.92       78,880.00      2.90E+05      1.28E+08
  Max              4,817.90      126,000.00      7.93E+06      2.07E+08
Parameters Optimized for System Metric, N = 400
  Mean                71.59          600.00      2.86E+06      8.48E+06

Table 5.4 Genetic algorithm second pass summary table

For the parameter set optimized for minimum final distance on the test problem, the mean number of objective function evaluations has increased substantially, as has the mean search overhead cost. However, the mean final distance from the test problem minimum has dramatically shrunk. For the parameter set optimized for the system metric, the final-distance performance has been degraded: performance has been traded for low cost. In future work, the gain for the system metric as the iteration number increases should be lowered to allow for more cost and better performance.

5.3. Simulated Annealing Measurements

METHOD EVALUATION: Very Fast Simulated Reannealing
    (min)      (max)
FLOAT:
 *) 1.00e5     1.00e10    Cost initial temperature
 *) 1.0000     10.0000    Problem variable initial temperature
 *) 50.0000    250.0000   Temperature annealing scale (initializer)
 *) 1.0e-5     1.0e-4     Temperature ratio scale factor (initializer)
 *) 1.0000     5.0000     Parameter to cost scale factor (initializer)
 *) 5.0000     500.0000   Reannealing rescale value
 *) 0.0001     0.0100     defDeltaX (interval for tangent calculation)
INT:
 *) 10         500        Test period (acceptances)
 *) 0          1          Reanneal flag (0 = reanneal, 1 = no reanneal)

Table 5.5 Very fast simulated annealing parameter ranges

Nine of the VFSR method parameters were randomly varied over the ranges shown above in table 5.5. A sample of 1817 search runs was generated, with each run starting from a randomly generated starting point in the problem space. Like the GA runs, runs were randomly repeated in order to get multiple results for the same set of method parameters. There was a single mode selector in the method, namely the reanneal / no-reanneal flag. The data set was split and the results with and without reannealing activated were compared. A summary of the results is shown below in table 5.6. The system metric iteration gain was set to 0.01% in an attempt to compensate for the differences in population size between the genetic algorithm and simulated annealing (300 was used as the comparison GA population size; 3% / 300 = 0.01%).

                 System Metric   Total Run      System Metric  Cumulative   Final Number   Final Distance
                 (Function       Obj. Func.     (Search Cost)  Search Cost  of Iterations  from Optimum
                 Evaluations)    Evaluations
Very Fast Simulated Reannealing Results, N = 1817
  Mean            11,871.53      618,171.59      1.36E+06      7.08E+07      718.72         1,892.80
  St. Dev.        14,176.15    1,031,971.31      1.62E+06      1.18E+08      360.54         1,411.98
  Min                246.01       37,471.00      2.80E+04      4.27E+06        7.00            76.96
  Max            123,723.00    5,751,652.00      1.42E+07      6.60E+08    1,000.00         8,612.26
No reannealing: mean system metric 7,495.45, mean evaluations 312,975.72
Reannealing: mean system metric 16,806.14, mean evaluations 962,321.04

Table 5.6 Very fast simulated annealing test problem performance

A first inspection shows the mean final distance for the non-reannealed method to be about 200 greater than that for the reannealed method. This implies that while the non-reannealed method gives better results for cost, it does not perform as well as the reannealed method if the much greater cost for reannealing is expended. To go from the non-reannealed to the reannealed results expends about 300% of the cost of the non-reannealed method for about a 22% mean performance improvement and a 20% reduction in the standard deviation of the performance. The system metrics reflect the performance-for-cost tradeoff as well.

5.3.1. Response Curve Fits

BBN Cornerstone was applied to the results from the reannealing method. A pair of third-order least squares fits was performed on the same test run data as for the GA, but with the SA method applied to the test problem. It was necessary to take the logarithm of the response in order to get a good fit. Figure 5.16 shows the accuracy of the fit. The residuals histograms tend towards the normal. The system metric residuals show the system metric fit to be a bit conservative at the extremes, and the final distance residuals are sometimes a bit larger than expected. Overall the fit is fairly good.

Figure 5.16 VFSR response fit residual graphics (ReAnneal)

5.3.2. "Best" Parameter Values 5.3.2.1. Initial Temperatures The temperatures employed during the search control how much excitation is in the system and how much exploring of the problem space occurs. The cost temperature controls the statistical test that determines whether new search points should be accepted or not. The higher the cost temperature, the more likely uphill moving search points are to be accepted. The parameter temperatures control the step sizes used to explore the problem space on a per-parameter basis. Parameters with higher temperatures are more likely to take large steps than parameters with lower temperatures. Because the temperatures are exponentially decaying over time, the initial temperatures form an upper bound on the temperatures. C 3.1 a:: 3.05 'C S 3 o 1000000000 1000000000 3000000000 5000000000 7000000000 .. "'"'$ ;P 3000000000 5000000000 7000000000 9000000000 9000000000 Figure 5.17 Final distance adjusted response vs. Parameter initial temperature (ReAnneal)

PAGE 75

Figure 5.18 System metric adjusted response vs. Cost initial temperature (No ReAnneal)

Figures 5.17 and 5.18 show the responses of the final distance and system metric to the cost initial temperature. The final distance response varies very little when compared to the system metric response, which varies by about an order of magnitude. The final distance curve fit only varies by about 25% from maximum to minimum; however, for a curve fit of a logarithm this may be within the bounds of error. The value at the upper end of each curve fit, 1e10, is chosen as the "best" value. For the parameter temperature range investigated, the curve fits for final distance and system metric vary very little over the range (see figure 5.19). The lowest values occur at 10 and 1 for the final distance and system metric fits respectively.

Figure 5.19 Adjusted responses vs. Parameter initial temperature

5.3.2.2. Temperature Decay Rate Parameters

/* INITIALIZATION (reconstructed from the thesis listing; the division by
   the number of parameters follows the D-dimensional VFSR scale formula,
   and the loop body is assumed to seed each parameter's scale factor
   with the common value) */
double tempScale = -log(VFSR_tempRatioScale(M))
                   * exp(-log(VFSR_tempAnnealScale(M)) / nParms);
VFSR_tScaleCost(S) = tempScale * VFSR_costParmScale(M);
for (pNum = 0; pNum < nParms; pNum++)
    VFSR_tScaleParms(S, pNum) = tempScale;

The temperature annealing scale and temperature ratio scale factors control the rate at which the temperatures for the individual parameters decay. As the temperature scale factors increase, the temperatures decay more rapidly. Because increasing the temperature annealing scale decreases the resulting scale factor, the temperatures decay more slowly as the temperature annealing scale factor increases. The temperature scale factor ratio also affects the temperature scales. The temperature scale factor ratio must always be less than 1 in order to keep the temperature scale factors positive; with this in mind, the logarithm of the temperature scale factor ratio will always be negative. Therefore, as the temperature scale factor ratio becomes increasingly smaller, the temperature scale factors will increase and the temperatures will decay more rapidly. The parameter-to-cost temperature scale factor sets the cost temperature scale factor to some multiple of the parameter temperature scale factors. Figures 5.21, 5.22, and 5.23 show the responses of the final distance and system metric to the temperature decay rate parameters. These lead to parameter choices of 50 and 250 for the temperature annealing scale factor, 1e-4 and 1e-5 for the temperature scale factor ratio, and 3.3 and 5.0 for the parameter-to-cost temperature scale factor, for the best final distance and system metric responses respectively.

Figure 5.21 Adjusted responses vs. Temperature annealing scale factor (ReAnneal)

Figure 5.22 Adjusted responses vs. Temperature ratio scale factor

Figure 5.23 Adjusted responses vs. Parameter to cost temperature scale factor

5.3.2.3. Reannealing Parameters

There are three parameters that drive the reannealing process: the test period (the number of search point acceptances prior to reannealing), the reannealing rescale value, and the delta-X used in calculating the problem space tangents used in the reannealing process. The test period drives the VFSR method to reanneal each time it expires. The test period is counted by the number of search points that have been accepted by the statistical test. The method may reanneal more often if too few points are being accepted. Figure 5.24 shows that lower final distances tend to be found when longer test periods are used. This is most likely due to the search being allowed more time to explore before reannealing. Although the system metric data is from a non-reannealing situation, it is included because the test period is also used to check whether to terminate the search or not; thus the test period is also relevant for non-reannealing setups. Chosen values for the test period are 500 and 125 for the final distance and system metric fit test periods respectively.

Figure 5.24 Adjusted responses vs. Test period

The reannealing rescale value is used to exponentially scale down the temperature when reannealing is performed. The higher the rescale value, the faster the temperature is scaled down. Increasing the value also has the effect of scaling the temperature in larger steps. Because the optimization for the system metric does not employ reannealing, only the final distance fit is shown below (figure 5.25). The response varies very little over the tested range; nonetheless, a value of 500 is chosen as the best in the range for final distance performance.

Figure 5.25 Final distance adjusted response vs. Reanneal rescale value (ReAnneal)

The default delta-X value is used in calculating the tangents used by the reannealing process to reset the temperature schedules. This parameter is only used during reannealing. Although only minor differences are seen over the examined range, lower values of the delta-X value imply more accurate tangents and therefore better performance. A best value of 1e-4 is therefore appropriate for the range examined (figure 5.26).

Figure 5.26 Final distance adjusted response vs. Default delta X (ReAnneal)

5.3.2.4. Summary Table and Second Pass Results

It is appropriate at this point to summarize the findings of the first pass on the test problem and to apply the found parameter sets to a second pass over the problem to observe any improvements. The parameter sets for the second pass are as shown below in table 5.7.

APPLICATION PARAMETERS: Very Fast Simulated Annealing Search "Best" Values
    Best Final   Best System
    Distance     Metric      Method Parameter
FLOAT:
 *) 1.00e10      1.00e10     Cost initial temperature
 *) 10.0         1.0         Problem variable initial temperature
 *) 50.0         250.0       Temperature annealing scale (initializer)
 *) 1.0e-4       1.0e-5      Temperature ratio scale factor (initializer)
 *) 3.30         5.00        Parameter to cost scale factor (initializer)
 *) 500.0        N/A         Reanneal rescale value
 *) 0.0001       N/A         defDeltaX (interval for tangent calculation)
INT:
 *) 500          125         Test period (acceptances)
 *) 0            1           Reanneal flag (0 = reanneal, 1 = no reanneal)

Table 5.7 Very fast simulated annealing "best" parameter values summary table

The results from the second passes are shown below in table 5.8.

                 System Metric   Total Run      System Metric  Cumulative   Final Number   Final Distance
                 (Function       Obj. Func.     (Search Cost)  Search Cost  of Iterations  from Optimum
                 Evaluations)    Evaluations
Very Fast Simulated Reannealing Results, N = 1817
  Mean            11,871.53      618,171.59      1.36E+06      7.08E+07      718.72         1,892.80
  St. Dev.        14,176.15    1,031,971.31      1.62E+06      1.18E+08      360.54         1,411.98
  Min                246.01       37,471.00      2.80E+04      4.27E+06        7.00            76.96
  Max            123,723.00    5,751,652.00      1.42E+07      6.60E+08    1,000.00         8,612.26
Parameters Optimized for Final Distance, N = 800 (95.91 hours)
Parameters Optimized for System Metric, N = 800 (6.19 hours)

Table 5.8 Very fast simulated annealing second pass summary table

The second pass results confirm the results of the curve fits. The parameters determined to reduce the final distance produced a reduction of 9.5% in the mean final distance and a reduction of about 10% in the standard deviation of the final distance. As a tradeoff, though, the mean number of objective function evaluations increased by 14%. The parameters determined to reduce the system metric on objective function evaluations produced reductions of 86% and 95% in the system metric and the system metric standard deviation respectively. The mean number of objective function evaluations fell by 93% from the mean for all the runs performed; however, the mean final distance increased by 48%. These data verify that the system metric reflects the tradeoff between the number of objective function evaluations during the run and the raw performance of the search. The system metric iteration gain step may be manipulated to give better tradeoffs in the future.

6. Sample Application Problems

The sample problems were chosen to be representative of some real systems with known usefulness. Problems were also chosen that had previously generated solutions, so that some comparisons could be made to results generated using the search methods. As the author currently works in the aerospace industry, examples from that arena were chosen.

6.1. Mass-Spring-Damper Problem with Center-of-Gravity Feedback

Figure 6.1 Three-body mass-spring-damper (MSD) system ("aero" force proportional to center-of-mass displacement; M1 = M3 = 1.0, M2 = 0.2; b1 = b2 = 0.01; k1 = k2 = 1.0)

This problem is very similar to controlling a 3-stage launch vehicle. Many (if not most) communication and earth-observation satellites are placed in orbit by this kind of system.

Each mass corresponds to the moment of inertia of a rocket stage, with the springs and dampers representing the flexible modes in the structure and the couplings between the stages. The states are the angular positions of each stage with reference to a vehicle centerline. This problem was used by Chris Voth in [Vot93]. To quote the paper:

"This problem emulates the dynamics of a 3-stage flexible launch vehicle. The control input is a force applied to the first mass, and the sensor outputs for feedback are the position of the third mass (analogous to an inertial measurement unit attitude gyro) and the velocity of the second mass (analogous to a rate gyro). The dynamics of the system are augmented with a destabilizing force proportional to the displacement of the center of mass acting on each body in proportion to its mass. This destabilizing force mimics the destabilizing effects of aerodynamic forces during the atmospheric flight phase of a launch vehicle.

"The control law is an autopilot for controlling the position of the center of mass of the system.... Stability margin robustness requirements for the control law are:

"1. stability: all closed-loop poles must be left of -0.01 rad/sec
"2. rigid-body gain margins must be >= 6 dB
"3. rigid-body phase margins must be >= 30 degrees
"4. first mode phase stabilization: open-loop peak gain of the first elastic mode must be >= 6 dB at the control input
"5. first mode phase margins must be >= 60 degrees
"6. the crossover gain margin between the first and second modes must be >= 8 dB
"7. second mode gain stabilization: open-loop peak gain of the second elastic mode must be <= -10 dB at the control input"1

1. [Vot93], p. 4

Figure 6.2 MSD system controller structure

"The controller consists of a proportional (displacement) feedback loop with a gain Kd and filter Fd, and a rate feedback loop with a gain Kr and filter Fr." The controller contains a total of 7 free design parameters. The objective function is formed from penalties on the stability margins.

Figure 6.3 MSD plant pole locations

Looking at the plant poles, it is seen that there is a pole in the right half plane.

It is therefore necessary to employ closed-loop control to keep the rocket stack aligned with the commanded flight path.

Figure 6.4 MSD plant frequency responses

The plant open-loop gain responses show a smooth rolloff on the disturbances as frequency increases. However, there are two bending modes in the system at about 1 and 3.5 rad/sec.

6.1.1. Baseline Results

The baseline controller was designed by Chris Voth using H2 design. All requirements except the crossover gain margin requirement were met (table 6.1). However, the system is stable, as reconfirmed by the gain-phase plot (figure 6.5). (Recall that there is an unstable mode in the system causing the encirclement of the -1 point.)

Table 6.1 Mass-spring-damper baseline results table

Figure 6.5 MSD baseline controller plant open loop frequency response

Figure 6.6 MSD baseline controller closed loop poles

6.1.2. Genetic Algorithm Application

Table 6.2 Mass-spring-damper genetic algorithm results table

When the performance-optimized set of GA parameters is applied to the MSD problem, the penalty on the final set of problem parameters is 0.05, as shown above in table 6.2. This is superior to the baseline result of 0.1240. The superior performance is gained by giving a little on the rigid body margins in order to reduce the disturbance effects. Although the rigid body requirements are not met in the strict sense, 29.428 deg and 5.8448 dB are close to the requirements of 30 deg and 6 dB. It is interesting to note that the objective function for the mass-spring-damper problem weights all the requirements for margins and mode peaks equally. If the margins were weighted significantly higher than disturbance performance, the tradeoff would be more in favor of the margins.

The system metric optimized parameter set produces far less impressive results, yielding a final objective function value of 1.96 and failing half of the margin requirements. On the other hand, the system metric optimized set took less than 20 minutes to run on a Sun SPARC 10, as opposed to over 40 hours on the same machine for the performance-optimized run. This shows once again that the system metric parameters perform some optimization for cost. It should also be noted that the objective function for the MSD problem was implemented in MATLAB and was called from C using the MATLAB engine utilities. The MSD problem parameter values were implanted into MATLAB using the engine utilities and the result was passed back to C through stdout. Because the objective function called several other functions to produce the result, it was necessary for MATLAB to load them from disk each time the objective function was evaluated. This significantly slowed the objective function, producing the very large times for the MSD performance run. Using fewer calls in the objective function would reduce the time needed to perform runs.
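A sketch of the C-to-MATLAB round trip described above; msd_objective is a hypothetical function name, error handling is omitted, and the exact engine call names vary across MATLAB releases.

#include <string.h>
#include "engine.h"   /* MATLAB engine API */

double eval_msd_objective(Engine *ep, const double *parms, int nParms)
{
    /* Implant the parameter vector into the MATLAB workspace. */
    mxArray *p = mxCreateDoubleMatrix(1, nParms, mxREAL);
    mxArray *J;
    double cost;
    memcpy(mxGetPr(p), parms, nParms * sizeof(double));
    engPutVariable(ep, "p", p);
    /* One call per evaluation; every function msd_objective invokes is
       reloaded from disk, which dominated the MSD run times. */
    engEvalString(ep, "J = msd_objective(p);");
    J = engGetVariable(ep, "J");
    cost = mxGetScalar(J);
    mxDestroyArray(p);
    mxDestroyArray(J);
    return cost;
}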

Figures 6.8 and 6.9 show the frequency responses of the center of gravity to the control input for the MSD control parameter sets generated, and the frequency responses of the center of gravity to the disturbance input.

Figure 6.8 MSD GA-designed controller plant open loop frequency response

Figure 6.9 MSD GA-designed controller disturbance frequency response
Figure 6.10 shows the closed-loop pole locations for the GA results. Freq Real. (rad/ I'req Part Part Oaapinq a_I -0.175 0.1267 0.60621 0.21753 0.034621 4 -0.175 -0.129 0.60621 0.21753 0.034621 :x -0.31 0.506 0.52229 0.59332 0.09443 2 -0.31 -0.506 0.52229 0.59332 0.09443 -0.032 1. 0014 0.032043 1. 0019 0.16041 0 .. ;< ..... ;:; -0.032 -1. 007 0.032043 1.0079 0.16041 x -0.652 1.3376 0.43629 1. 4684 0.23668 -0.652 -1. 338 0.43829 1.4884 0.23688 -2 ............ -1. 695 0 1.6946 0.26973 '( -0.065 3.3303 0.019511 3.3309 0.53013 -4 -2 -1.5 -0.5 0 -0.065 -3.33 0.019511 3.3309 0.53013 i'req Raal. Imaq (rad/ Part Part Oaap1nq aec) -0.023 0 1 0.02265 0.003605 -0.347 0.3226 0.73201 0.4735 0.07536 -0.347 -0.323 0.73201 0.4735 0.07536 2 -0.316 0.7164 ').40452 0.78559 0.12503 ;"' -0.318 -0.718 0.40452 0.18559 0.12503 X ":c,, -0.016 1. 0096 0.015895 1.0098 0.16071 .E i:< -0.016 -1. 01 0.015895 1.0098 0.16071 -2 -1. 908 0.8699 0.90966 2.0965 0.33368 -1.908 -0.67 0.90986 2.0965 0.33366 ;( -0.065 3.3266 0.025678 3.3219 0.52965 -4 -1.5 -0.5 0 -0.085 -3.327 0.025678 3.3279 0.52965 Figure 6.10 MSD GA-designed controller closed loop poles 85

PAGE 98

6.1.3. Simulated Annealing Application Kr tr Low Sys. Set Table 6.3 Mass:,spring-damper simulated annealing results table Applying simulated annealing to the MSD problem produces very similar results to those produced with GAs. They are a bit worse for the performance optimized set and a bit better for the system metric optimized set. The times consumed by each run are also similar to the GAs (35.8 versus 40.71 hours for performance and 20.0 versus 17.7 minutes). Like the GA, SA also lets up on the rigid body margins for the performance optimized SA parameter set. This is understandable as the same objective function is being used. However, the rigid body margin requirements are badly blown for the system-metric optimized case in favor of reducing the disturbance frequency response. This again points to increasing the weighting on meeting the margin requirements. 86

PAGE 99

Figures 6.11 and 6.12 show the open ioop control and plant responses and the results of the tradeoffs made by the SA runs. The disturbance response tradeoff made by the system metric optimized SA run is particularly interesting as it shows good results at low frequencies. 20 .............. .... .. .. ....... ............ : .. : ....... 20 ........ .. ........... ... : ....... ::l: ::: -40 .............. ............. .. ; ...... -100 0 100 200 Figure 6.11 MSD SA-designed controller Plant open loop frequency response 30 30 ....... : ........... "-, \f ........... .\ 20 10 C 0 -10 -20 10-2 100 20 :ii' 10 '0 iO 0 -10 -20 \ \ 10-2 laO Figure 6.12 MSD SA-designed controller disturbance frequency response

PAGE 100

Figure 6.13 shows the closed-loop pole locations for the SA run results.

[Figure 6.13: MSD SA-designed controller closed-loop poles. Pole-location plots and tables of real part, imaginary part, damping ratio, and frequency (rad/sec) for the performance-optimized and system-metric-optimized parameter sets.]

6.2. Robotic Arm

[Figure 6.14: Geometry of the elastic arm model, showing the tip mass, beam properties (p, E, I), and the applied torque T]

The second example problem is the controller design for a flexible robotic manipulator with end-tip position feedback. A long manipulator arm is already in use aboard the space shuttle, and more manipulators are envisioned for use in space station construction. Similar arms may be used in earth-bound construction. End-point position feedback avoids inaccuracies in tracking the kinematics of the arm from the base through to the manipulator tip, resulting in more accurate positioning of the arm tip. Also, since the end-point position is known directly, less stiffness is needed in the robotic arm and lighter arms may be used. (Position feedback is done by installing a light source on the manipulator tip and then tracking it via optical sensors.)

This problem was posed in the doctoral thesis of Dr. Eric Schmitz with regard to figure 6.14 as follows: "Consider a uniform pinned-free beam of length L, moving in the horizontal plane.... The moment of inertia of the beam about the root is

lumped inertia IH models the actuator assembly (hub). A tip mass mt (of negligible moment of inertia) is added at the other end of the arm (tip). Line Or is a fixed reference line; Ox is the tangent line to the beam's neutral axis at the hub. The displacement of any point P along the beam's neutral axis at a distance x from the hub is given by the hub angle θ(t) and the small elastic deflection ω(x,t) measured from the line Ox.... Axial deformations are neglected. The hub angle θ can be arbitrarily large."¹

For the robotic arm system, the following matrices were used to represent an arm without a tip mass:²

[State-space matrices A (9 x 9), B (9 x 1), and H (3 x 9) for the nine-state arm model; entries as given in [Sch85], p. 168.]

The three rows of H are those for the tip sensor (meters), hub pseudo-rate (rad/sec), and the strain-gauge measurement vectors. The input is torque from the actuator motor.

¹ [Sch85], pp. 14-15.
² [Sch85], p. 168.

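For reference, these matrices enter the design through the standard state-space relations (a convention assumed here, since the source does not restate the state equations explicitly):

    \dot{x} = A x + B u, \qquad y = H x

where u is the actuator torque input and y collects the tip-position, hub pseudo-rate, and strain-gauge measurements.
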
Open-loop plant frequency responses and plant poles are shown below in figures 6.15 and 6.16. Although the plant is stable, several of the poles are very lightly damped due to the flexibility of the arm.

[Figure 6.15: Robotic arm plant frequency responses]

[Figure 6.16: Robotic arm plant pole locations, with a table of pole real/imaginary parts, damping ratios, and frequencies (rad/sec)]

6.2.1. Baseline Results

[Figure 6.17: Block diagram for end-point position control, showing the joint angle, arm dynamics, and position sensor blocks]

The baseline controller was designed by Dr. Schmitz using successive loop closure: an inner rate loop is closed around the joint, and an outer loop feeds back the tip position measurement. The rate loop gain was chosen so that the damping coefficient for the first vibrational mode of the arm was about 0.7. Similarly, the tip position sensor feedback block constants were chosen to give dominant closed-loop poles with damping of about 0.74 and a rigid-body bandwidth of about 0.66. This is reflected in figure 6.18 (below). The majority of the poles have also been damped by the controller, with the exception of the highest-frequency poles.

[Figure 6.18: Robotic arm baseline controller closed-loop poles, with a table of pole real/imaginary parts, damping ratios, and frequencies (rad/sec)]

The open-loop frequency response from the control input to the tip position is shown below.

[Figure 6.19: Robotic arm baseline controller plant open-loop frequency response]

The author designed an objective function to measure the design. The requirements for the objective function were:

1. The closed-loop system must be stable.
2. No unstable poles are allowed in the controller.
3. The lowest-frequency closed-loop poles must have damping greater than 0.7.
4. All other closed-loop poles should have damping greater than 0.3.
5. The frequency of the lowest-frequency pole should be pushed as high as possible.

When applied to the constants for the baseline controller, the results in table 6.4 were generated, showing good results for each requirement. A sketch of how such requirements might be scored appears after the table.

[Table 6.4: Robotic arm baseline results, listing the requirement thresholds (damping > 0.7 for the dominant poles, > 0.3 for the others) and the objective-function contribution of each requirement]

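For illustration, the requirements above could be scored as a weighted penalty function along the following lines. This is a sketch only: the weight values, the Pole structure, and all function names are assumptions for the example, not the thesis's actual fitness code.

#include <math.h>

// Sketch of a penalty-style objective for the requirements above.
// Lower is better; a search method would minimize this value.
struct Pole { double re, im; };

double PoleDamping(Pole p) {
    double wn = sqrt(p.re * p.re + p.im * p.im);
    return (wn > 0.0) ? (-p.re / wn) : 1.0;
}

double DesignObjective(Pole poles[], int npoles,
                       double dominantFreq, double dominantDamping) {
    double w1 = 1.0e6, w2 = 10.0, w3 = 1.0;           // assumed weights
    double penalty = 0.0;
    for (int i = 0; i < npoles; i++) {
        if (poles[i].re >= 0.0) penalty += w1;        // reqs. 1-2: stability
        double shortfall = 0.3 - PoleDamping(poles[i]);
        if (shortfall > 0.0) penalty += w2 * shortfall;  // req. 4: 0.3 floor
    }
    if (dominantDamping < 0.7)                        // req. 3: 0.7 on dominant
        penalty += w2 * (0.7 - dominantDamping);
    if (dominantFreq > 0.0)
        penalty += w3 / dominantFreq;                 // req. 5: raise lowest freq
    return penalty;
}

Raising w2 relative to w3 is the kind of reweighting suggested in section 6.1.3 when margin requirements lose out to other terms.
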
6.2.2. Genetic Algorithm Application

[Table 6.5: Robotic arm genetic algorithm results]

When the GA was applied to the robotic arm problem, the results in table 6.5 were generated. The results for the performance-optimized set of GA parameters are superior to the baseline design, as they provide a closed-loop rigid-body frequency of 0.70 rather than 0.66 and a dominant pole damping of 0.9998 rather than 0.74. The remainder of the complex poles have damping greater than 0.3, with the exception of the highest-frequency set. It is also interesting to note that the run executed in just over 8 hours. This allowed overnight runs. The system-metric set was significantly cheaper to generate than the

performance set, but failed to damp the arm poles or push up the lowest frequency present in the arm. Figures 6.20 and 6.21 show the closed-loop pole locations and open-loop control frequency responses.

[Figure 6.20: Robotic arm GA-designed controller closed-loop poles, with tables of pole real/imaginary parts, damping ratios, and frequencies (rad/sec) for the performance-optimized and system-metric-optimized parameter sets]

[Figure 6.21: Robotic arm GA-designed controller plant open-loop frequency response]

6.2.3. Simulated Annealing Application

[Table 6.6: Robotic arm simulated annealing results]

The simulated annealing application to the robotic arm problem produced results that are less satisfactory than those for the genetic algorithm. However, the performance-optimized SA parameter set produces the same general shape in the closed-loop pole layout. The difference is that the dominant poles are of lower frequency and the damping for the non-dominant complex poles does not quite meet the requirement of 0.3. Because there is less overhead in the SA algorithm, the performance run evaluates the objective function about the same number of times but takes only 5 hours to run. The system-metric-optimized set does not produce the

required dampings or push up the closed-loop system frequency. The closed-loop poles and open-loop frequency responses are shown below.

[Figure 6.22: Robotic arm SA-designed controller closed-loop poles, with tables of pole real/imaginary parts, damping ratios, and frequencies (rad/sec) for the performance-optimized and system-metric-optimized parameter sets]

[Figure 6.23: Robotic arm SA-designed controller plant open-loop frequency response]

7. Conclusion

7.1. Summary of Results

At this point we summarize the results of the work. A generalized search structure and system metric set were posed and successfully implemented in C++. When a least-squares fit was applied to the results from running a test problem, sets of "best" method parameters were generated. These parameter sets consistently performed better for the quantity for which they were optimized, whether the system metric or raw performance (accuracy). This shows that applying least squares to the run measurements is a workable method for refining method parameters. The work done reviewing genetic algorithms and simulated annealing proved helpful in understanding the reasons for the "best" parameter sets indicated by the least-squares fits.

When applied to application problems, the "best" parameter sets showed mixed results. Some of the search results show that use of the system metric does indeed trade performance for cost. Unfortunately, the system metric iteration gain was set sufficiently high that the raw performance term in the system metric equation consistently lost out to the cost term. Because a set of method parameters was also optimized for raw performance, some very good results were still obtained in the application problems. In each application the performance-optimized parameter sets produced designs comparable to the hand-generated or H2-generated baselines. This

validates genetic algorithms and simulated annealing methods with performance-optimized parameter sets as viable methods for design. Overall, this work laid the foundation for further work with method evaluations and produced some sample results and infrastructure for genetic algorithms and simulated annealing.

7.2. Areas for Further Investigation

The following are some areas for further investigation:

7.2.1. Enhancement of Search Structure and Further Validation of System Metric

There remain several portions of the work that can be improved. Although the search results show that the system metric performs some tradeoff between cost and performance, more tests should be performed with different system metric iteration gain steps to validate that tradeoff. Also, the system metric embedded in the search structure should be modified to update each time the objective function is evaluated. This would ramp up the system metric gain each time the objective function was visited (independently of the population size employed in each search iteration) and allow for direct system metric comparisons between GA and SA methods, or any other method implemented in the search structure; one possible form of this per-evaluation update is sketched below. Finally, the methods should be applied to a wider set of test and application problems, and the statistical robustness of each method and set of method parameters should be quantified.

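A minimal sketch of the proposed per-evaluation update follows. The class and member names (SystemMetric, gainStep, and so on) are hypothetical; the thesis's search structure updates the metric once per iteration instead.

// Sketch: ramp the system-metric gain on every objective-function visit,
// so the metric is comparable across methods with different population sizes.
class SystemMetric {
public:
    SystemMetric(double step) : gain(0.0), gainStep(step), evals(0) {}

    // Called from inside each objective-function evaluation.
    double Update(double rawPerformance, double evalCost) {
        evals++;
        gain += gainStep;                        // ramp once per visit
        return rawPerformance + gain * evalCost; // performance/cost tradeoff
    }
private:
    double gain, gainStep;
    long   evals;
};
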
7.2.2. Examination of Problem Spaces and Objective Functions

The interaction of the method with the objective function ultimately determines how the method performs on a particular problem. It would be useful to have information on how to develop objective functions to solve various control problems and to have a set of standard weights to use in objective functions. The objective function in each problem defines the shape of the problem space. Some methods may work better in certain types of spaces than in others. A standard for classifying problem spaces (discontinuous, smooth, complex, etc.) might well serve as the basis for a rule specifying when to use a particular search method over some other method.

7.2.3. Examination of Additional Methods

Simulated annealing and genetic algorithms are not the only search methods in existence. Not only are there other search methods, but there have been some attempts at crossing SA and GAs. Some of these other methods may prove superior to GA and SA. Once the measurement techniques have been improved, these other methods should be investigated with a view to creating a search method design toolbox.

7.2.4. Large-scale Concurrent Engineering Optimizations

Finally, there is some work in the field examining the application of optimization methods to concurrent engineering. This work, and some of the work done in formally decomposing complex systems, may be useful in improving concurrent engineering on complex projects with multi-disciplinary workgroups.

APPENDIX

This section includes the analysis and code to support a generalized search structure. The section begins with the results of an object-oriented analysis (OOA) of the problem, then proceeds to the C++ code implementing the results of the OOA. Note that the included implementation is but one of many possible implementations. The packages chosen were employed because they were those at the author's disposal. It is the author's hope that the included implementations are sufficiently clear for reuse in other applications.

[Figure A.1: Object model for the generalized search structure. A Search (problem space pointer, search method pointer, current iteration pointer, convergence target metric, iteration limit, cumulative search cost) explores a Problem Space (parameter limits, allowable dimensions, fitness function) (R1), employs a Search Method (propagation function, convergence metric function, random initialization function, method tailoring parameter arrays) (R2), and uses Iterations (iteration number, convergence measure, data list, last/next iteration pointers) (R3); each Iteration contains Data Points (dimension, fitness, coordinate) (R4).]

A.1.1. Problem Space

A problem space is an Rⁿ space with strict limits on the first n-1 dimensions and an nth dimension defined as a function of the first n-1 dimensions. The first n-1 dimensions represent the parameters to be used to solve the problem. The function, called the fitness function, is a non-negative norm based on application of the parameters as a solution to the problem under examination.

A.1.1.1 Problem Space.Parameter Limits Array

The parameter limits array is an (n-1) x 2 array which defines the limits for parameters in the problem space. Elements [i,1] and [i,2] respectively define the maximum and minimum values for parameter i. Values are limited to the range of representable floating point numbers.

A.1.1.2 Problem Space.Allowable Dimensions Array

The allowable dimensions array is an m-dimensional array where m <= n. This array contains all the allowable numbers of parameters. For example, if the solution to the problem is allowed to be two or three dimensional, the allowable dimensions array would contain 2 and 3. Entries are integers between 1 and (n-1).

A.1.1.3 Problem Space.Fitness Function

The fitness function is a pointer to a user-defined function that is a non-negative norm of the suitability of the first n-1 parameters as a solution to the problem under examination. The fitness function must take the dimension of the proposed solution and an array of parameters as arguments, and must be real-valued for all allowed numbers of parameters as defined in the allowable dimensions array. An illustrative sketch of such a function follows.

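As an illustration (not part of the thesis code), a user fitness function matching this description might look like the following; the name SphereFitness and its simple sum-of-squares norm are assumptions for the example:

// Hypothetical user-defined fitness function: a non-negative norm of the
// proposed parameter vector (here, a simple sum of squares).
double SphereFitness(int dimension, const double* params) {
    double norm = 0.0;
    for (int i = 0; i < dimension; i++)
        norm += params[i] * params[i];   // non-negative by construction
    return norm;
}
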
A.1.2. Search Method

A search method is an algorithm that enables the search of a problem space via multiple iterations in an attempt to find the combination of parameters that optimizes the fitness function. This algorithm includes a set of functions that specify the data encoding and the mapping of one iteration into the next, and that measure convergence toward the target measurement. Also included is an array of method parameters that may be used to tailor the algorithm. A sketch of this interface as an abstract class is given after the attribute descriptions below.

A.1.2.1 Search Method.Propagation Function

The propagation function specifies the transformation between the search data contained in past iterations and the search data contained in the next iteration. It also specifies the cost of performing such an operation.

A.1.2.2 Search Method.Convergence Metric Function

The convergence metric function specifies the method by which iteration data is converted into a measurement of search convergence. The output measurement should have the same scaling and units as the problem space fitness function.

A.1.2.3 Search Method.Random Initialization Function

The random initialization function randomly generates an initial iteration.

A.1.2.4 Search Method.Method Parameter Array

This array of floating point parameters may be used by the various functions above to tailor the operation of the search method.

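The following sketch shows how these four attributes could be expressed as an abstract C++ interface. It is an illustration only, not the thesis's actual header; all names here are assumptions:

// Hypothetical abstract interface mirroring A.1.2: propagation, convergence
// metric, random initialization, and a tailoring-parameter array. Concrete
// GA and SA classes would derive from this.
class Iteration;      // forward declarations, defined elsewhere
class ProblemSpace;

class AbstractSearchMethod {
public:
    virtual ~AbstractSearchMethod() {}
    // Map past iteration data into the next iteration; return the cost.
    virtual long   Propagate(const Iteration& past, Iteration& next) = 0;
    // Convert iteration data into a convergence measure, scaled like fitness.
    virtual double ConvergenceMetric(const Iteration& current) = 0;
    // Randomly generate an initial iteration within the problem space.
    virtual void   RandomInit(const ProblemSpace& space, Iteration& first) = 0;
protected:
    double methodParams[16];   // tailoring parameters (size assumed)
};
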
A.1.3. Search

A search is an application of a search method to a given problem space. A search may be limited by the number of iterations, the number of costed operations, or convergence beyond a given value of the search method convergence metric function for the current and past iterations.

A.1.3.1 Search.Problem Space Pointer

The problem space pointer specifies the problem space in which the search is to operate.

A.1.3.2 Search.Search Method Pointer

The search method pointer specifies the search method to be employed by the search.

A.1.3.3 Search.Current Iteration Pointer

The current iteration pointer points to the current iteration instance.

A.1.3.4 Search.Convergence Target Metric

The convergence target metric is a floating point number that must be exceeded by the search method convergence metric function, applied to the current and past iterations, in order to terminate the search.

A.1.3.5 Search.Iteration Limit

The iteration limit is the maximum number of iterations allowed before the search is terminated. If the limit is infinite, the limit is encoded as -1.

A.1.3.6 Search.Cumulative Search Cost

The cumulative search cost is the cost of all iterations performed up through the current iteration. The cost is generated as the sum of the costs of the applied search method functions.

A.1.3.7 Search.Echo Flag

Progress information may be echoed during the search as each iteration is generated and evaluated. The echo flag toggles this function.

A.1.4. Iteration

An iteration is the collection of data points examined by the search at one time.

A.1.4.1 Iteration.Iteration Number

The sequence number of the current iteration. The iteration number may be used as an index in later analysis of the search with which the iteration is associated.

A.1.4.2 Iteration.Convergence Measure

The value of the search method convergence metric function obtained for the iteration and its immediate predecessors.

A.1.4.3 Iteration.Data List

An array of pointers to the data points under examination by the iteration.

A.1.4.4 Iteration.Last Iteration Pointer

A pointer to the last iteration prior to this iteration.

A.1.4.5 Iteration.Next Iteration Pointer

A pointer to the iteration generated after this iteration.

A.1.5. Data Point

A data point is a container that stores an encoded parameter vector (or coordinate in the problem space), its dimension, and the value of the problem space fitness function for that vector.

A.1.5.1 Data Point.Coordinate Representation

The coordinate representation is an encoded and stored parameter vector in the problem space.

A.1.5.2 Data Point.Dimension

The dimension is the length of the coordinate representation data structure.

A.1.5.3 Data Point.Fitness

The fitness is the captured value of the evaluated fitness function for the coordinate representation in the data point.

A.2 Implementation

The object-oriented analysis detailed in section A.1 was implemented in C++. In addition to the objects identified in the OOA, a matrix object type was defined to simplify the handling of vectors and matrices, and a method evaluation type was defined to help manage examining the performance of methods. Each section contains a header (.h) file and a source code (.cpp) file. The code for the implementation is organized as follows:

1. Top-level header
2. Matrix
3. Data Point
4. Problem Space
5. Iteration
6. Search
7. Search Method
8. Method Evaluation

A.2.1 Top-level Header

//==========================================================================
// HEADER:  General Search Structure Top Level
// File:    oosearch.h
// This header file attaches to all the files needed to support a general
// search structure.
//==========================================================================
#ifndef _OOHEADER_H
#define _OOHEADER_H 1

#ifndef CONVFACT
#define CONVFACT (0.0077)     // may be externally set (invalid on SUN)
#endif

//==========================================================================
// FUNCTION:  MonitoredRun
// Performs a search run with IterLimit iterations for search S, and
// reports the memory usage per iteration.
// Created:   kcoppes
//==========================================================================
void MonitoredRun(Search& S, long IterLimit);

#endif

//==========================================================================
// TOP-LEVEL:  General Search Structure
// File:       ooheader.cpp
// This file attaches to all the files needed to support a general search
// structure.
//==========================================================================
#define _OOHEADER_CPP 1

#include <stdio.h>
#include <stdlib.h>
#ifndef _METHEVAL_CPP
#include "metheval.cpp"
#endif

// --SYSTEM PARAMETERS------------------------------------------------------
#ifndef MAXFLOAT
#define MAXFLOAT 3.4e38       // The float limit for the system
#endif
// ---------------------------------------------------------------------------
// HEADER HISTORY
// Created:   kcoppes
// Modified:  kcoppes  1) oomethod.cpp moved before ooiterat.cpp so that
//                        Iteration could use method functions.
//                     2) Moved class-file stdio, stdlib includes here.
//            kcoppes  Moved pspace before oomethod to allow RandInit
//                     function to compile properly.
//            kcoppes  Removed MAXPARM definition as dynamic memory
//                     allocation is now used.
//            kcoppes  Transferred source into Microsoft Word.
//            kcoppes  Added run management functions.               rev 2
//==========================================================================

//==========================================================================
// FUNCTION:  MonitoredRun
// Performs a search run with IterLimit iterations for search S.
//==========================================================================
void MonitoredRun(Search& S, long IterLimit) {
    printf("\n\nPre-search pause (%li)...", IterLimit);
    char c = getc(stdin);            // wait for user before starting

    printf("\n Search run in progress\n");
    S.IterNum   = 0;
    S.IterLimit = IterLimit;
    printf("\n Setup:");
    S.SetupSummaryData();
    printf("\n Run:\n");
    S.Run();
    printf("\n");
}

A.2.2 Matrix Type Implementation


//==========================================================================
// HEADER:  Matrix                                                   rev 4
// File:    matrix.h
// This class defines data and methods for dealing with matrices. The use
// of the matrix type reduces the complexity involved in using vectors.
// Note: there are hooks left for later expansion into complex variable
// applications.
//==========================================================================
#ifndef _MATRIX_H
#define _MATRIX_H 1

#ifndef WIN31
typedef void*    HGLOBAL;
typedef unsigned DWORD;
#define huge
#endif

// --file dependencies--------------------------------------------------------
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <math.h>
#include <complex.h>

#define T (*this)

class Matrix {
public:
    //------data members---------
    int      m, n;        // dimensions of the data array
    char*    name;        // matrix name
    complex huge* x;      // element storage
    HGLOBAL  hglb;        // handle for data

    //------function members-----
    complex* Allocate(long request);
    Matrix();
    Matrix(int m, int n);
    Matrix(int m, int n, char* name);
    ~Matrix();
    Matrix   Col(int col);
    Matrix   Row(int row);
    complex* e(int i, int j);
    complex* e(int i);
    complex  c(int i, int j);
    complex  c(int i);
    float    r(int i, int j);
    float    r(int i);
    float    i(int i, int j);
    float    i(int i);
    void     Assume(float   A[], int m, int n);
    void     Assume(double  A[], int m, int n);
    void     Assume(int     A[], int m, int n);
    void     Assume(float   A[], float  B[], int m, int n);
    void     Assume(double  A[], double B[], int m, int n);
    void     Assume(complex A[], int m, int n);
    int      length(void);
    complex  realmax(void);
    float    drealmax(void);
    void     Print();
    void     TraceWrite(FILE* TraceFile);
    void     define(int i, int j);
    void     define(int i);
    void     undefine(void);
    Matrix   operator + (Matrix& B);
    //------function members (cont)-----
    Matrix   operator -  (Matrix& B);
    Matrix   operator *  (Matrix& B);
    complex  det(void);
    Matrix   inv(void);
    Matrix   operator () (int n, int m);
    void     operator =  (int A);
    void     operator =  (float A);
    void     operator =  (double A);
    void     operator =  (Matrix& B);
    int      operator <  (Matrix B);
    int      operator <  (complex B);
    int      operator <  (float B);
    int      operator >  (Matrix B);
    int      operator >  (complex B);
    int      operator >  (float B);
    int      operator >= (Matrix B);
    int      operator >= (complex B);
    int      operator >= (float B);
    int      operator <= (Matrix B);
    int      operator <= (complex B);
    int      operator <= (float B);
};

int minv(Matrix& sm, Matrix& dm);

//==========================================================================
// CLASS HISTORY: Matrix
// Created:   kcoppes
// Modified:  kcoppes  Moved source file into Microsoft Word         rev 1
//            kcoppes  1) Completed float & complex forms of comparison
//                        operators
//                     2) Added imaginary element extraction operator for
//                        completeness
//                     3) Renamed max, realmax to be realmax, drealmax
//                                                                   rev 2
//            kcoppes  Added capability to dump matrix information to an
//                     externally opened file on a single line (trace
//                     information)                                  rev 2a
//            kcoppes  Added warnings for unsuccessful matrix allocations
//                                                                   rev 3
//            kcoppes  Removed Matrix return from Assume function;
//                     replaced allocation functions with T.         rev 4
//==========================================================================

//==========================================================================
// MACRO:  MAT2COL, XMAT2COL
// Converts row-col indexing to row-only indexing for this matrix
// Inputs:  int t,r   Index variables
// Created: 10/7/94   kcoppes
//==========================================================================
#define MAT2COL(t,r)     ((unsigned) ((t) + (r)*(this->m)))
#define XMAT2COL(A,t,r)  ((unsigned) ((t) + (r)*((A).m)))

//==========================================================================
// MACRO:  CYCLEMATROWS  cycles i index through the # of rows in this matrix
// MACRO:  CYCLEMATCOLS  cycles i index through the # of columns in this
//                       matrix
// MACRO:  CYCLEMATELS   cycles i & j indices through the address of each
//                       element in matrix
// Inputs:  int i,j   Index variables
// Created: 10/7/94   kcoppes
//==========================================================================
#define CYCLEMATROWS(i)     for(i=0;i<(this->m);i++)
#define CYCLEMATCOLS(i)     for(i=0;i<(this->n);i++)
#define CYCLEMATELS(i,j)    for(i=0;i<(this->m);i++) for(j=0;j<(this->n);j++)
#define XCYCLEMATELS(A,i,j) for(i=0;i<(A.m);i++) for(j=0;j<(A.n);j++)

//==========================================================================
// MACRO:  ELEMENT, XELEMENT
// Point to real/complex elements in this or other matrices
// Inputs:  int i,j    Index variables
//          Matrix A   matrix to operate on
// Created: 10/7/94    kcoppes
//==========================================================================
#define ELEMENT(i,j)     (*(this->x + MAT2COL(i,j)))
#define XELEMENT(A,i,j)  (*((A).x + XMAT2COL(A,i,j)))

#endif


PAGE 134

CLASS: charint complex huge TMatrix name Dimensions of the Data array HGLOBAL hglb; Handle for data complex TMatrix ( ); .o, i::t ml; TMatrixlint n, m, char-name); -TMatri;.,: (); TMatr::'x TMatrix complex complex" complex complex float float float void void void void Col lint col); Rowlint row) ; int elint 1); clint i, int i) ; r (int int j) ; r (i:-.t i) ; i(int int i); Assume ( double Ass..::ne ( float: Ass:::nel double A[l, A[l, A[l, flo.[) int int n) ; int int n) ; int int B[] int double int void int complex void void void void void TMatrix TMatrix TMatrix complex A[), int m, int n); leng:.h(void); real-max (void) ; drealmax (void) ; Print ( ); TraceWrite(FILE" TraceFile); define(int i,int j); det:.ne(int i); undefine (void) ; operator operator -operator (TMatrix& B); (TMatrix& El); (TMatrix, B); m, :::t:

PAGE 135

CLASS: MATRIX Class Deftnilion (cont) (cont) 11------funct1on members----complex (void); TMatri:< inv(void); TMatrix opera 'Cor (int n, in'C m); void (int void ( float void operator (double Al; void (TMatrix& int < (TMatrix B} ; int < (complex int < ( E} ; > int > (complex int operator > ( float int operator >= (TMatrix int >= (complex int operator >= ( float int operator <= (TMatrix int operator <= (complex int operator <= (float int sm, TMatrix& dm); CLASS HISTORY: TMatris Created: 10nt94 ka:oppes Modified: kcoppes Moved source file into Microsoft Word kcoppes I) Completed float complex forms of comparison operaton; Added imaginary element Operator for completeness 3) Renamed max.realmax to realmax, drealmax kcoppes Added capability to matrix infomation to externally opened file .011. B single line (trace information) kcoppes waming5 for lDISua::essfu1 matrix allocations kcoppes Oianged memory Blloca1ion method kcoppes Removed Thlatrix Assume kooppes aUocation func!ionx, replaced T. rev 1 rev 2 rev2a rev] rev 4

PAGE 136

CLASS: MATRLX MACRO: MATlCOL. XMATlCOL. ek'. ========================================================= MACRO: HAT2COL, XMAT2COL Converts Inputs: row-col indexing int t,r to raw-only indexing for this matrix Index variables Created: IOn194 kl:opPeli 'define HAT2COL(t,r) *define XMAT2COL(A,t,r) ((unsigned) MACRO: CYCLEHA'l'lU)WS cycles i index from 1 to the of rows in this matrix MACRO: CYCLEHATCOLS cycles i index from 1 to the of columns in this matrix MACRO: cycles i Inputs: & indices through the address of each element in matrix Created: int 1, j Index variable lOnl94 kl:oppes *define CYCLEMATROWS (i) 'define CYCLBMATCOLS(i) *define CYCLEMATELS(i,j) for(i=O;i< for(j=O;j< 'define XCYCLEHATBLS(A,i,j) for(i=O;i< ;.=-.m);i++) forlj=O;j< (A.n);j++) MACRO: XBLEMBN'l' Point to reallcomplex elements in this or other matrices Inputs: int t,r Index variables ereated: TMatr1xA IOn194 'define ELBHENT(i,j) 'define XBLENBNT(A,i,j) matrix to operate on k&:oppes )

PAGE 137

CLASS: :\L\TRlX (cont) :\IemMAllocate. DoubleAllocate etc:. // // FUNCTION: // A110cates memory from 'the enVironment. // Inputs: DWOlW bytes NUmber of bytes to allocate // BGLOBAL& hqlb Reference to memory address (returned) // Returns: void huge pOinter to allocated memory // Created: kl:oppes //========================================================= void (unsigned bytes, void* hqlb) void "a malloc (bytes) ; if (a=NULL) { printf(" Error (MemA.lloc1 : Sock of memory failed. \n"); }; return a; } FUNCTXON: A1locate Xnputs: DOUbleA1loc, XntA1loc, CharA1loc type-cast memory. lonq n BGLOBAL& hqlb NUmber of type elements to allocate Reference to memory address (returned) // Returns: // Created: type-cast pointer to allocated memory kl:oppes double DOUbleA1loc(lonq ndDubles,vOid* hqlb) unsigned bytes ndoubles*sizeof(double); double" 3 (double "I malloc(bytes); int IntA1loc(lonq nint,void* hqlb) { unsigned bytes = nint+sizeof(int);. int .. 3 = (int W) malloc(bytes); return a; char nchar,void* hqlb) unsigned bytes nchar*sizeof(charl; char" 3 (char .. ) malloc(bytes); return a;

PAGE 138

CLASS: to if (request<=O) { pri:-,':f(" r1:"o1:": of %i bytes delete T.x; complex *a = *) malloc((unsigned) ::equest)*sizeof(complex)); { p::intf(" Er::or: Lock of matrix memory failed. \nU); T.x = a; return T.x;

PAGE 139

"CLASS: MATRIX (cont) "FUNCTION: TMatriJ: Constructor ========================================================= FUNCTION: TKatrix (polymorphic) Creates a matrix instance. --------------------------------------------------------------------------------1: Default constructor Created: 1017194 k&:oppes --------------------------------------------------------------------------------TKatrix::TKatrix(){ T.n 0; T.m = 0; T.x NULL; -::.name NULL; T.hglb=NUL:; 2: create m x n matrix, zeroed Inputs: int m, n the number of rows and columns Created: Ion /94 k&:oppes Modified: kcoppes warning for unsua:essful matrix allocation TMatrix::TMatrix(int H, int N) { T.m = M; T.n = N; T.name = NULL; T.hglb = NULL; long a = (unsicrned) M+N; if T.Allocate(a==NULL) {p=intf("\nFailed TMatrix allocate"); exit(-l);} int CYCLEMATELS(t,=' ELEMENT[t,r) = complex(O); } 3: Inputs: Created: ModlOecl: create int char* IOnJ94 n x m matrix, zeroed with name n,m the number of rows and columns name the name of the matrix k&:oppes kcoppes Added warning for unsucx:essful matrix allocation TMatr,ix: :TMatrix(int m, int n, char* nallle) ( } T.n = n; T.m = m; T.hglb = 0; T.name = NULL; long a = (unsigned) m+n; if T.Allocate(a NULL) {printf("\nFailed TMatrix allocate"); exit(-l);} T.name = (char malloc(strlen(name)+sizeof(char)); strcpy(T.name,name); int t,r; CYCLEMATELS(t,r) ELEMENT(t,r) = complex();

PAGE 140

CLASS: MATRIX FUNCfION: -TMatril.Col.Row FUNCTION: ..-rMatrix De1etes a matrix and its contents Created: k&:oppes TKatJ:ix::-TKatrix() { if(T.x 1= NULL) { T.undefine();}; if(T.name NULL) { free(T.namel; }; } FUNCTION: Co1 Extracts a co1umn from a matrix Inputs: int co1 co1umn extract Returns: THatrix* Iso1ated co1umn Created: IOn194 k&:oppes 1\l0di8et1 : 1/20/9S kc:oppes Employed instead ofthis->m for casting reasons ========================================================= THatJ:ix THatrix::Co1 (int co1) { int msize T.m: TMatrix y(msize,l): int row: CYCLEMATROI-lS(rowl XELE:1ENT(y,row,O) ELEMENT(row,coll; return y; } FUNC'l'ION: Row Extracts a row from a matrix Inputs: int row row to extract Returns: THatrix* Iso1ated"row Created: IOn194 k&:oppes :\lodiOetI: kcoppes Employed instead ofthis->n for casting reasons THatrix THatrix::Row (int row) { int nsize = T.n: TMatrix y(l,nsizel: int col: CYCLEMATCOLS(col) XELE:1ENT(y,O,coll ELEMENT(row,coll: return y:

PAGE 141

CLASS: MATRLX (cont) e. c (pointer/complex element enl'llc:Don opel'llton) FUNCTION: e (polymorphic) Returns a pOinter to the specified matrix element --------------------------------------------------------------------------------Returns pointer to element i,:) of this matrix Inputs: int i :) Address of matrix element Returns: complex* Pointer to matrix element Created: kc:oppes co=Plex* THatrix::e(int i, int j){ return } Inputs: Returns: Returns pointer to element i,O of this matrix int 1. Address of matrix element Cftated: complex* Pointer to matrix element --------------------------------------------------------------------------------complex* THatrix::e(int i){ return &ELEMENT(i,O); } FCNCT:ION: c (polymorphiC) Returns the complex value of a specified matrix element VAR:IANT 1: Returns element 1,:) of this matrix :Inputs: Returns: Created: int complex i,j Address of matrix element matrix element Complex THatrix::c(int i, int j){ return ELEMENT(i,j); } VlUUANT 2: Returns element 1.,0 of this matrix :Inputs: // Returns: Created: int complex 10/15194 Address of matrix element matrix element complex THatrix::c(int i){ return ELEMENT(i,O);

PAGE 142

(CODt) FUNCTION: rJ elementertraction operuton) FUNCTION: r (polymorphic) Returns the real value of a specified matrix element VARIANT 1: Inputs: Returns real value of element i, of this matz:ix int i, Addz:ess of matz:ix element Retuz:ns: float z:eal value of matrix element Created: kcoppes float TMatrix::Z:(int i, int j){ return real(ELEHENT(i,j); }; VARIANT 2: Returns real value of element i, 0 of this matrix Inputs: int i Addz:ess of matrix element Returns: float real value of matrix element Cnated: kcoppes float THatz:ix::r(int i)( if (i>T.m) (printf(" Error: Invalid coordinate"); exit(-l); }; return real(ELEMENT(i,O); FUNCTION: i (polymorphic) Returns the imaginazy value of a specified matz:ix element VARIANT 1: Returns imaqinary value of element 1, of this matrix Inputs: int 1, Addz:ess of matrix element Returns: float imaq value of matrix element Crnted : kcoppes float TKatr1x::i(int 1, int j){ return imag(ELEMENT(i,j); }; VARIANT 2: Returns imaginazy value of element 1,0 of this matrix Inputs: int i Addz:ess of matz:ix element Returns: float imago value of matz:ix element Created: 10/15/94 kcoppes float THatz:ix::i(int i)( } if (i>T.m) (printf(" Error: Invalid coordinate"); exit(-l); }; return imag(ELEMENT(i,O);

PAGE 143

CLASS: FUNctION: Assume ========================================================= FUNCTION: Assume (polymorphic) an array to a matrix =========================================================== --------------------------------------------------------------------------------Assiqn real float array to matrix Inputs: The assiqning variable (m*n elements) float[] int m,n row and column dimensions of matrix Created: lom94 --------------------------------------------------------------------------------void TKatrix::Assume (float A[l, int H, int N) ( } long a = (unsigned) M*N; for(int t=Q;t
PAGE 144

CLASS: MATRLX .-\nume --------------------------------------------------------------------------------VARIANT 3: Assign real integer uray to matrix Inputs: int[] int A m,n The assigninq variable (m*n elements) row and column dimensions of matrix --------------------------------------------------------------------------------void TMatrix: :Assume (int A[], int m, int n) } long a = (unsigned) for(int t=Q;t
PAGE 145

CLASS: (cont) (cont) --------------------------------------------------------------------------------VARIANT 5: Assign real imaginary d.ouJ:)l.e arrays to matrix Inputs: fl.oat [] 1nt A,B m,n Cnated: 1017194 kcoppes The assigning variabl.e (m*n el.ements) row and col.umn dimensions of matrix --------------------------------------------------------------------------------void TKatrix: :Assume (double A[), double ,int m, 1nt n) long'" (unsigned) mn; for(1nt t=O;t
PAGE 146

FUNCTION: len2th. realmaL drealmu // FUNC'l'ION: length // Returns the maximUm dimension of this // Returns: int length // Cftated: IOn194 // int TKatrix::length(void){ int a; if (T.m T.n) else return 3; } // // FUNC'l'ION: realmax of matrix kcoppes a T.m; 3 = (cont) matrix // Returns real val.ue of element of this matrix with the maximum real part // // Returns: // // Cftated: // l\IodlOed: complex real val.ue of element with max real part 1018194 lu:oppes kcoppes Changed funaion name to realmax to reflect ac:tual function activity took real of resuh //======================================================= complex TKatrix::realmax(void){ complex highest ELEMENT(O,O); int i,j; if(T. length ()==1) return camplex(real(hiqhest; CYCLEMATELS(i,j) (if(T.r(i,j) real(highest)) highest T.c(i,j); ); return complex(real(hiqhest; } // // FUNC'l'ION: drealmax // Returns real val.ue of element of this matrix with the maxim!m real part (float) Returns: float real val.ue of element with max real part // Cftated: 10/15194 kcoppes ModlOed: ka:JPPeS Changed function name to realmaxto reflect actual funttion activity // float TKatrix: :drealmax(void) ( float mxm = (float) real(T.realmax()); return mxm;

PAGE 147

CLASS: // // // // 12/13/94 Layout only [H] [H] \n",T.m,T.n); if (t==O) prinef("!"); prinef," "); prinef(" "); printf(" float a = T.i(t,r); printf("'+20.16e",a); if (t='l'.1II-1) printf(" I"); printf("\n"); } // // // // // } //========================================================= TraceFile)( if (TraceF1le!=NOLL) fprintf(Tr3ceFile," int %i %1 ",T.m,T.n); fprintf (TraceFile, %e ", T. r (t, r), T. i (t, r) ); Error: TMatr1x trieci to to undefined trace file");

PAGE 148

CLASS: FUNCTION: define (cont) ========================================================== FUNC'l'ION: define (polymorphic) Sets size to this matrix and allocates memory --------------------------------------------------------------------------------1: Set up a 2-dimensional matrix Inputs: int i,j row and column size of matrix Created: ,"=oppes --------------------------------------------------------------------------------void TMatrix::define(int i,int j){ long a = (unsigned) i*j; T.Allocace(a); = i; for(int t=O;t
PAGE 149

CLASS: FUNCfIO:'iIOPERUORS: MATRL\: undefine FUNCTION: undefine sets matrix size to zero and deal locates memory Created: kc:oppes ========================================================= void THatrix::undefine(void)( } T.m = 0; T.n = 0; if('l'.x != NULI.) (freeIT.x); }; T.x = NULL; OPERA'l'OR: T Adds two matrices Inputs: THatrix& B matrix to add '!'Matrix Sum matrix Created : kc:oppes THatrix '!'Matrix::operator T (THatrix& B) ( if 'l'.n!=B.n) (T.m!=B.m } Error(TMatrix +): must be the same size"); TMatrix *y; y = new TMatrixIT.m,T.n); for (int i=O;ie Ii, = T. c (i, j) B. c (i, return *y; ex1t(-l); }; ========================================================= OPERATOR: Subtracts two matrices Inputs: THatrix& B matrix to subtract THatrix Difference matrix Created : kc:oppes '!'Matrix '!'Matrix::operator (THatrix& B} ( if T.n!=B.n) (T.m!=B.m } printf(" ErrorlTMatrix -I: matrices must be the same size"); TMatrix *y; y = new TMatrix(T.m,T.n); for (int i=O;ie(i,j))= T.cli,j) B.cli,j); return *y; ex1t(-l); };

PAGE 150

CLASS: :\lATRIX OPER-\TORS: OPERATOR: two matrices Inputs: TKatrix& THatrix B matrix to operate on matrices Created: TKatrix TKatrix::operator (THatrix& if (T.n!=B.m) -I: cols musc TMat.r':':< new TMat.rix(T.:n,3.n); for (1nt i=O;ie(i,j)=cellval; }

PAGE 151

CLASS: : () { TMatrix y(l,l); y.x[O) = return y; ========================================================= II II II II II II A II T.m; 1; T.n l; T.Al1==ate(1); (A); II II II II II T.m = 1; T.n = i; T.Al1ocate(1); T.x[O) complex(3);

PAGE 152

CLASS: OPERATOR: MATRLX --------------------------------------------------------------------------------VARIAN'l' 3: Assign doubl.e to matrix Inputs: Created: fl.oat A IOnl94 The assigning variabl.e kcoppes --------------------------------------------------------------------------------void TKatrix::operator = (doubl.e A) { T.n 1; T.m = 1; T.Alloc3te(1); T.x[O) complex:A); } --------------------------------------------------------------------------------VARIAN'l' 4: Copy matrix and contents Inputs: TKatrix B The assigning variabl.e Created: IOnl94 kcoppes Modlftecl: kcoppes Changed memory alice for matrices to allow matri.'( arrays longer than f>..1AXINT --------------------------------------------------------------------------------void TKatrix::operator = (TKatrix& B) { } int t,r; T.:r. (B) .m; T.n = (B) .n; T.undefine();. T.define(B.m,B.n); "(T.e(t,r)).= B.c(t,r);

PAGE 153

CLASS: OPERATOR: (polymorphic) Compares two 1-element structures (this B) --------------------------------------------------------------------------------VARIANT 1: compare matrix to matrix Inputs: TMatrixB The vaJ.ue to compare to Created: kcoppes Modified: kcoppes Layout mod only. ---------------------------------------------------------------------------------int TMatrix::operator < (TMatrix B) { if (!T.length()==1)&&(B.length()==1) exit(-1); return -1; else if (abs (ELEMENT (0,0 abs (XEI..EMBNT (B, 0,0) else =eturn 0; return l; --------------------------------------------------------------------------------VlIRIANT 2: Compare matrix to complex Inputs: TMatrixB Created: kcoppes The value to compare to ---------------------------------------------------------------------------------int TMatrix::operator (complex B) { if (! (T.length()=1 { ex1t(-l); else if(abs(ELEMBNT(O,O abs(B 1; else rS1:urn 0; --------------------------------------------------------------------------------VlIRIANT 3: Compare matrix to float Inputs: float B The value to compare to Created: k&:oppes Modified: kooppes Layout mod only. ---------------------------------------------------------------------------------int TMatrix::operator (float B) { if T .lenqth () =1) { exit (-1); :e1:urn -1; else if(aba(BLBKBNT(O,O B) ::eturn l; else ::e1:urn 0;

PAGE 154

CLASS: !\lATRIX OPERATOR: OPERATOR: > (polymoxph1c) Compares two l-element structures (this> B) --------------------------------------------------------------------------------VARIAN'l' 1: Compare matrix to matrix Inputs: THatrixB The value to compare to Cftated: 10m94 KOppeS Modlfted: kcoppes Layout mod only. ---------------------------------------------------------------------------------int THatrix::operator (THatr1x B) ( if (!T.length()==l)&&(B.length()==l) exit(-l); return -1; else if(abs(ELEHBNT(O,O > abs(XBLEHENT(B,O,O) return 1; recurn 0; --------------------------------------------------------------------------------VARIANT 2: Compare matrix to complex Inputs: TKatr1xB The value to compare to Cftated: KOppes ---------------------------------------------------------------------------------1nt TMatrix::operator (complex B) { if (!(T.length()==l ( exit(-l); return else if(abs(ELEMENT(O,O > abs(B recurn 1; else re,:urn 0; } --------------------------------------------------------------------------------VARIANT 3: Compare matrix to Inputs: float The value compare to Cftated: 12112194 KOppes ---------------------------------------------------------------------------------1nt TKatrix::operator B) { } if (! T length 0==1) ( ex1t(-l) ; if(abs(ELBKBNT(O,O B) return 1; else return 0; return -1;

PAGE 155

CLASS: MATRIX OPERATOR: ======================================================== OPERATOR.: >= (polymorphic) Compares two I-element structures (this >= B) ========================================================== --------------------------------------------------------------------------------VARIANT I: Compare matrix to matrix Inputs: TMatrixB The value to compare to Created: lom94 Modifted: l.\yout mod only. int TMatrix::operator >= (TMatrix B) ( if (!T.lenqth()==l)&&(B.length()==l) exit(-l) ; else if(abs(ELEMENT(O,O abs(XELBKENT(B,O,O) return l; else return 0; } --------------------------------------------------------------------------------Compare matrix to complex Inputs: TMatrix B The value to compare to Created: lom94 ModlOed: l.\yout mod only. 1nt TMatrix::operator >= (complex B) { (! (T.lengthO==l ( ex1t(-I); else if(abs(ELEMENT(O,O abs(B return 1; else return 0; } VARIANT 3: Compare matrix to float Inputs: float B The value to compare to Created: 12/12194 int TMatrix::operator >= (float } (! T length 0==1) ( exit (-1) ; return -1; else if(abs(KLBMENT(O,O >= B) return 1; else return 0;

PAGE 156

MATRIX OPERATOR: <= (polymorphic) Compares two 1-element structures (this <= B). --------------------------------------------------------------------------------VARIANT 1: compare matrix to matrix Inputs: TKatrixB The value to compare to Created: lOnl94 k&:oppes l\Ioditlrd: kooppes Layout mod only. int TKatrix::operator <= (TKatr1x B) { if (!T.length()==1)&&(B.lenqth()==1) { exit(-1); return -1; else if(abs(BLEMENT(O,O <= abs(XBLEKENT(B,O,O) return 1; else return 0; --------------------------------------------------------------------------------Compare matrix to complex Inputs: TKatrix '!'he val.ue to compare to Created: 1211Z94 k&:oppes ---------------------------------------------------------------------------------int TKatrix::operator <= (complex B) { if (! (T.lengthO==1 { ex1t(-1); return -1; else if(abs(ELEMENT(O,O <= abs(B return 1; 0; } 3: Compare matrix to float Inputs: Created: float B The val. ue to compaJ:e to Jtcoppes int TKatrix::operator <= (float B) { (! T.lenqthO=l)) { ex1t(-1); return -1; else 1f(abs(BLEHENT(O,O <= B) return 1; else return 0;

PAGE 157

CLASS: MATRLX (cont) det FUNCTION: det Takes the determinant of this matrix Returns: THatrix Resul.tant matrix Adapted: koppes from matrb function set developed NIgel Salt., nao@cu.compulJnk.co.uk compl.ex THatr1x::det(void) { if (T.m!=T.n) { } Error{det): must be square"); complex 0; for (int i=O;i
PAGE 158

CLASS: MATRLX invJdent (cont) //======================================================= // FUNCTION: inv // Returns the inverse of this matrix (assumes non-sinqular) // // Returns: // // Adapu-d: TMatrix Resultant matrix lu:oppes //========================================================= TMatrix TMatrix::inv(void)( } TMatr-ix +INV; !NV new TMatri:-: t J; if(minv(T,*INV( pri:-.-.:f("\nErrortinvJ: failed inversion":; exit(-l); }; retur:1 INV; //========================================================= // FUNCTION: ident // Returns an identity matrix // // Input: // Returns: // Adllpu-d: int TMatrix TKatr1x ident(1nt i) if (1<1) ( Size of identity matrix Resultant matrix lu:oppes from matri'l function set developed by Nigel SIIIt, nao@dLrompullnk.ro.uk fprintf(stderr," size must be > 0"); TMatrix "y; 1 = new TMatrixti,i); } for (int t=O;tm;t++) "ty->e(t,tJ) = 1; return ex1t(-l) ;

PAGE 159

CLASS: MATRIX nsolve (cont) FUNCTION: FUNCTION: nso1ve So1ves a system in N unknowns Input: THatrix& data Augmented matrix to so1ve; Returns: int Solution is returned in data. Error f1aq Adapted: kcoppes from matriI function set developed by Nigel Salt. nao@ciLcompullnk.co.uk 1nt nso1ve(TMatrix& data) { if(data.m!=(data.n-1{ fprin1:f(s-.:derr," Error(nsolve): improperly sized aug mtx"); return 1; }; } in1: cols = data.n; int rows = data.m; for (int i=O;i=O;j--) -::IaO:3.e(i,j) data.c(i,i); or(j=1+1;j=i;k--) -= da1:a.c(j,i) data.c(i,k); } for(1=rows-2;1>=O;1--) for(1nt j=co1s-2;j>1;j--) +data.e(i,cols-l) -= data.c(i,j) data.c(j,cols-l); +data.e(i,j) = 0; } return 0;

PAGE 160

CLASS: MATRIX (cont) minv FUNCTION: minv Finds the inverse of a matrix Input: TMatrix& TMatrix& 1nt sm Source TMatrix resultant matrix Returns: Adapb!d: Error flaq lu:oppes from matrix function set developed Salt. nao@c1Lcompulink.co.uk int minv(TMatrix& sm, TMatrix& dm) 1f(sm.m sm.n) { fprir.-::(sc.jerr, non-s::;uare macrix"); return 1; }; if (sm.det()=coraplex(O.O)) ( fprintf(stderr,"'nError(minv): matr:':< is singular"); return 1: }; TMatrix d(sm.m,sm.n+1); int k; dm.define(sm.m,sm.n); for (1nt 1=O;1
PAGE 161

A.2.3 Data Point Type Implementation 149

PAGE 162

HEADER: DataPolnt. oodata.c:pp This class defines a data point. iifndef __ DATA_H idefine __ DATA_B rev 4a --file dependencies-------------------------------------------------------------'include iinclude "oodata.h" #include "pspace.cpp" ---------------------------------------------------------------------------------#define T (*this) class DataPoint public: 11------data members--------int: float: dimension; Fitness; The number of dimensions encode
PAGE 163

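As a usage illustration (not part of the thesis listings), inverting a small matrix with these routines might look like the following; the element values are arbitrary:

// Hypothetical usage of the TMatrix inversion routines above.
void InvertExample(void) {
    TMatrix A(2, 2);
    *A.e(0, 0) = 4;  *A.e(0, 1) = 7;
    *A.e(1, 0) = 2;  *A.e(1, 1) = 6;
    TMatrix Ainv(2, 2);
    if (minv(A, Ainv) == 0)    // 0 indicates success
        Ainv.Print();
}
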
CLASS: DataPoint Class Definitions (coot) (cont) class BinaryDataPoint public: public DataPoint ( 11------data members--------binType .. DATA; II------function members---Data storage element (T.DATA NULL; *Pl; void void void void void" -Pl; Pspace, :..roid) ; = E); operacor new(size_t size:; class IntegerDataPoint public: public DataPoint ( 0; 11------data members-------int "DATA; Data storage element II------function members---void Allocate(ProbSpace P); class FloatDataPoint public: public DataPoint { 11------data members--------Matrix DATA; members---Data storage element } int void void void void" Iundef T lendif Evaluate(ProbSpace P); Pl; Print:void); operacor = B); new(size_t size);

PAGE 164

========================================================= CLASS: FUe: DataPolnt, dass defines a point. ========================================================= I#define __ OODATA_CPP --file dependencies-------------------------------------------------------------I#include #ifndef __ PSPACE_CPP /#include "pspace. CPP" Hendif ---------------------------------------------------------------------------------#define T (*this) class DataPoint { public: 11------data memcers---------float: jimension; fitness; The number of dimensions encoded in the data point The evaluated fitness of the data point. } CLASS HISTORY: DataPolnt Created: 9128194 kc:oppeos Modified: k&:oppeos Moved steIIo. stdUb indudes to oosean:h.h 12/9/94 k&:oppes Moved source Ole into Mlc:rosoft Word rev k&:oppes Erron c:oJTKted during port testing rev 2 k&:oppes Fonnat c:leaned up rev 3 k&:oppes Changed aDoc:adons to fannalloc: rev 3b 2120/95 lu:oppes CblUlged aDoc:adonS rev 4 2121/95 k&:oppes Fleshed out Bmu,.DataPoint dass rev4a 152

PAGE 165

CLASSES: CLASS: CLASS: DATAPOIl"T 8. DERIYATIVES 8lnaryDataPoint 8lnaryDataPoint (cont) Thb class defines a liinary stored data point. Bits are stored as instances of a special aiding class. bin'l'ype Class: blnType The binType class is used only association with 8inaryDataPoiot and therefore with it. For compiling reasons, It precedes 8lnaryDataPolnt. class binType{ public: 11------data members--------union { double struet [unsigned struet [unsigned f; long ie, L; int i[4J; U; e; int rtype; rt:ype=O, floating point data; rtype=l, scaled offset binary double mnRange, II------function members---binType(int rl } double void int void 'Joid r(voidl; SetFloatValue(double 8); IsInrange(voidl; operator (doucle B) Print(void); CLASS HISTORY: Created: 2 121195 k&:oppes {T.rtype=r; T.rnnRange=T.mxRange=O; }; {T.SetF:!.;)atValuedouble)B); };

PAGE 166

CLASSES: CLASS: DATAPOINT DERIVATIVES 8lnaryDataPoint (alDilllary class binType) (coot) (cont) // ======================================================== // FUNCTION: r (b1nType) // Returns the real value of the b1nType e element as a double // Returns: double real value of th1s->e // Created: //========================================================= double b1nType::r(vo1d){ double tmp this->e.f; 1f(T.rtype) ( } // // if (T mnRanqe = T. mxRanqe) { prin-!:f ("Errcr (binType. r I: invalij r,mge" I; exit (-1); } ; tmp (double)T.e.L.lo / (4294967295.0); tmp = tmpW(T.mxRange-T.mnRange)+T.mnRange; return blip; FUNCTION: SetFloatValue (b1nType) Assigns a value to the data element of b1nType Created: void b1nType::SetFloatValue(dcuble 1f(!T.rtype) (T.e.f B; /+printf("\n '!'.Print();w/ } else ( 1f(T.mnRanqe T.mxRanqe) { printf("Error (binType.SetfloatVal): invalid range"); ex1t(-l); }; double tmp = tmp = tmp+(4294967295.0); T.e.L.lo = (unsigned long) tmp; T.e.L.hi = 0; double tmp2; 1f(modf(tmp,&tmp2)
PAGE 167

CLASSES: CLASS: DATAPOINT'" DERIVATIVES BlnaryDataPolnt (auJilliary class FUNCTION: IsInrange (b1nType) (cant) (cant) Returns 1 if the data is within the declared ranges for the instance. Created: !Koppes 1nt b1nType: : IsInrange(void) { 1f(T.rtype==l) return 1; double int rval; tmp = T.ri); mn = T.mnRange; mx testl = tmp-mx; rval = 1; if (test1>O) rval=O; if (test2=O;t-=l){ if (t==1) t3rgData T.e.L.lo; else targData = T.e.L.hi; printf(" "l; for (unsigned int 1=32;1>=1;1-=1) 1; if(T.rtype>O: nWds if(taJ:gData&(1(1-1))) el.se printf("O"); printf (" (% +e l", T. r ( ) ; printf(" ['xl [%xl",T.e.L.lo,T.e.L.hil; printf(" {bl {%xl {%xl (%xI",T.e.U.i [01 ,:'.e.U.i[l] ,'!'.e.U.i [2] ,T.e.U.i[3]);

PAGE 168

CLASSES: CLASS: DATAPOINT DERIVATIVES BlnaryDataPolnt (cont) (cont) class BinaryDataPoint public: public DataPoint ( 11------data members--------binType DATA; Data storage element II------function members-----8inaryDa"aPoin" (void) (T.DATA -3insryDa"sPoint(void); NULL; :.dimension } void ... ,::.id void void* Evaluate(ProbSpace Allccs"e(ProbSpace rtype); ?rint(void) ; operator = 8); operator newlsize_t size); CLASS IllSTORY: 8lruuyDataPolnt Created: 0;) Modified: kcoppes Added binType aiding class. fL"(ed memory allocation. fleshed oul. BinaryDataPoint class

PAGE 169

CLASSES: CLASS: DATAPOINT DERIVATIVES BinaryDataPoint FUNCTION: (cont) (cont) Deletes the contents of a B1naryDataPoint Structure CreaUd: delete FUNCTION: Evaluate (B1naryDataPoint) Evaluates the data point fitness and stores it Inputs: ProbSpace *pspace The problem space to eval.uate in Returns: int Cost The cost involved in unpacking data to evaluate the fitness function. Created: 1nt BinaryOataPoint::Evaluate(ProbSpace *P) ( if(P=NULL) (printf(" bindatapoinLevaluate Error: !';ull Pointer"); exit(-l); }; If(P->userFitness==NULL) ( print!i" floatdatapoint.evaluate ::::::or: Null func:::.:n"); exit(-l); }; int Cost = 0; TMatrix D(T.dimension,l); for(lnt t=O;tFitness(D); this->Fitness = real(a); return Cost; Unpack data for evaluation Cost 2; D.undefine();
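As a usage illustration (not part of the thesis listings), the scaled-offset-binary encoding above can be checked with a simple round trip; the range and value below are arbitrary:

// Hypothetical round-trip check of the binType scaled encoding (rtype=1).
void BinTypeExample(void) {
    binType b(1);             // rtype=1: scaled offset binary
    b.mnRange = -10.0;
    b.mxRange =  10.0;
    b.SetFloatValue(3.25);    // encode onto the 32-bit unsigned range
    double back = b.r();      // decode; matches 3.25 to ~32-bit resolution
    printf("decoded: %f\n", back);
}
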

PAGE 170

CLASSES: CLASS: DATAPOINT DERIVATIVES BlnaryDataPolnt FUNCTION: Al10cate (BinaryDataPoint) (coat) (coot) Al10cates memory for the binary data point. Number of e1ements is dependant upon the prab1em space in which the data point resides Inputs: Prabspace *pspace The prab1em space to a110cate for Created: ModJfted: kcoppes kcoppes changed memory allocation to fannalloc Added setting of Probspace pointer void BinaryDataPoint::Al1ocate(PrabSpace *Pspace) ( int N Pspace->maxparms; T.dimension N; if(T.DATAI=NULL) delete T.DATA; void" hglb; T.DATA (binType for(int t=O;tGetRange(T.dimension,t, (*(T.DATA+t)).mnRange, (*(T.DATA+t)).mxRange); (*(T.DATA+t)).rtype rtype;


//==========================================================
// FUNCTION: Print (BinaryDataPoint)
//   Prints the value and bit pattern of the data point
// Created: kcoppes
//==========================================================
void BinaryDataPoint::Print(void){
    for(unsigned t=0;t<T.dimension;t++){
        printf("\n");
        (*(T.DATA+t)).Print();
    }
}

//==========================================================
// CLASS: IntegerDataPoint
//   This class defines an integer stored data point.
//==========================================================
class IntegerDataPoint : public DataPoint {
public:
    //------data members---------
    int* DATA;                             // Data storage element
    //------function members-----
    void Allocate(ProbSpace *P);
};

//==========================================================
// CLASS HISTORY: IntegerDataPoint
// Created: kcoppes   (unused in thesis)
//==========================================================

//==========================================================
// FUNCTION: Allocate (IntegerDataPoint)
//   Allocates memory for the integer data point.  Number of
//   elements is dependent upon the problem space in which the
//   data point resides.
// Inputs:  ProbSpace *pspace  The problem space to allocate for
// Created:  kcoppes
// Modified: kcoppes  Changed memory allocation to malloc
//==========================================================
void IntegerDataPoint::Allocate(ProbSpace *P){
    int N = P->maxparms;
    this->DATA = (int*)malloc(N*sizeof(int));
    this->dimension = N;
}


//==========================================================
// CLASS: FloatDataPoint
//   This class defines a floating point stored data point.
//==========================================================
class FloatDataPoint : public DataPoint {
public:
    //------data members---------
    TMatrix DATA;                          // Data storage element
    //------function members-----
    FloatDataPoint(void);
    ~FloatDataPoint();
    int    Evaluate(ProbSpace *P);
    void   Allocate(ProbSpace *P);
    void   Print(void);
    void   operator = (FloatDataPoint& B);
    void*  operator new(size_t size);
};
#define T (*this)

//==========================================================
// CLASS HISTORY: FloatDataPoint
// Created:  9/28/94 kcoppes
// Modified: kcoppes  added Evaluate function
//           kcoppes  added new function
//==========================================================


//==========================================================
// FUNCTION: ~FloatDataPoint
//   Deletes the contents of a FloatDataPoint structure
// Created: kcoppes
//==========================================================
FloatDataPoint::~FloatDataPoint(){
    this->DATA.undefine();
}

//==========================================================
// FUNCTION: Evaluate (FloatDataPoint)
//   Evaluates the data point fitness and stores it
// Inputs:  ProbSpace *pspace  The problem space to evaluate in
// Returns: int Cost  The cost involved in unpacking data to
//                    evaluate the fitness function
// Created: kcoppes
//==========================================================
int FloatDataPoint::Evaluate(ProbSpace *P){
    if(P==NULL)
        { printf(" floatdatapoint.evaluate Error: Null Pointer");  exit(-1); };
    if(P->userFitness==NULL)
        { printf(" floatdatapoint.evaluate Error: Null function"); exit(-1); };
    int Cost = 0;               // No unpacking needed for floating-point
    TMatrix D;
    D = this->DATA;
    complex a = P->Fitness(D);
    this->Fitness = real(a);
    D.undefine();
    return Cost;
}
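The evaluation path is easiest to see end to end with a concrete fitness function. The freestanding sketch below uses hypothetical stand-ins (std::vector for TMatrix, std::complex for the thesis complex class) to mirror the flow: the user function maps a parameter vector to a complex value, and Evaluate keeps only the real part.

#include <cstdio>
#include <complex>
#include <vector>

typedef std::vector<double> ParamVec;                        // stands in for TMatrix
typedef std::complex<double> (*FitnessFn)(const ParamVec&);  // shape of userFitness

// Example user fitness: a quadratic bowl with its minimum at (1, -2)
std::complex<double> bowl(const ParamVec& p) {
    double dx = p[0] - 1.0, dy = p[1] + 2.0;
    return std::complex<double>(dx*dx + dy*dy, 0.0);
}

int main() {
    FitnessFn userFitness = bowl;                // attach the user's function
    ParamVec point(2);
    point[0] = 0.5;  point[1] = -1.5;            // a candidate data point
    double fitness = userFitness(point).real();  // keep the real part
    std::printf("fitness = %g\n", fitness);      // prints 0.5
    return 0;
}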


//==========================================================
// FUNCTION: Allocate (FloatDataPoint)
//   Allocates memory for the floating point data point
// Inputs:  ProbSpace *pspace  The problem space to allocate for
// Created:  kcoppes
// Modified: kcoppes  Removed macro use as it was removed from the
//                    TMatrix class
//==========================================================
void FloatDataPoint::Allocate(ProbSpace *P){
    int N = P->maxparms;
    T.DATA.define(N,1);
    T.dimension = N;
}

//==========================================================
// FUNCTION: Print (FloatDataPoint)
//   Print the data point values
// Created: kcoppes
//==========================================================
void FloatDataPoint::Print(void){
    printf("\n");
    for(int i=0;i<this->dimension;i++)
        printf("\n %e", this->DATA.r(i,0));
    printf("\n");
}

//==========================================================
// OPERATOR: = (FloatDataPoint)
//   Assigns one float data point to another
// Created: 2/17/95 kcoppes
//==========================================================
void FloatDataPoint::operator = (FloatDataPoint& B){
    T.DATA      = B.DATA;
    T.dimension = B.dimension;
    T.Fitness   = B.Fitness;
}

//==========================================================
// OPERATOR: new (FloatDataPoint)
//   Allocates memory for a float data point
// Created: kcoppes
//==========================================================
void* FloatDataPoint::operator new (size_t size){
    void* hglb;
    return MemMAllocate((unsigned)size,hglb);
}
#undef T


A.2.4 Problem Space Implementation


//==========================================================
// HEADER: ProbSpace
// File:   pspace.h
//   This header defines a problem space.
//==========================================================
#ifndef __PSPACE_H
#define __PSPACE_H 1

// --variable dependencies------------------------------------------------------
#ifndef MAXFLOAT
#define MAXFLOAT 3.4e38            // The float limit for the system
#endif
// -------------------------------------------------------------------------------

class ProbSpace{
public:
    //------data members---------
    char*    name;                 // Name of the problem space
    Matrix*  parmrange;            // Parameter ranges (min,max)
    Matrix   dimensions;           // Allowed numbers of parameters
    int      maxparms;             // Maximum number of parameters
    int**    intparm;              // Parameter type integer flag:
                                   //   1 if parm is cast to int, else 0
    int      NumFitFnEvals;        // Number of fitness function evaluations
    complex  (*userFitness)(Matrix& p);   // User-defined fitness function


    //------function members-----
    ProbSpace(void) {};
    ProbSpace(int numparm);
    ProbSpace(Matrix indim);
    ProbSpace(int numparm, Matrix indim);
    ProbSpace(Matrix indim, Matrix inrange);

    void    AllocRanges(void);
    complex Fitness(Matrix& p);
    double  Range(int dimen, int parmnum);
    void    GetRange(int dimen, int parmnum, double& minrange, double& maxrange);
    void    FixDim(int numparm);
    int     IsValidDim(int dimension);
    void    OpenRanges(void);
    void    OpenRanges(int numparm);
    void    SetRanges(Matrix inrange);
    int     IsValidParmSet(Matrix p);
    int     IsValidParm(Matrix p, int parmnum);
    int     IsValidParm(double value, int dim, int parmnum);
    void    Print(void);
    void    operator = (ProbSpace& B);
};


//==========================================================
// MACRO: PSPACE_RANGE_WIDER
//   Returns 1 if range for pSpace dimension Dim, parameter pNum
//   is wider than Range
// Inputs:  ProbSpace* pSpace  Problem space of interest
//          int        Dim     Dimension to interrogate
//          int        pNum    Parameter number
//          double     Range   Range to compare to
// Created: 10/8/94 kcoppes
//==========================================================
#define PSPACE_RANGE_WIDER(pSpace,Dim,pNum,Range) \
    (abs( (pSpace->parmrange[Dim-1].r(pNum,1)) \
        - (pSpace->parmrange[Dim-1].r(pNum,0)) ) > Range )

//==========================================================
// MACRO: PSPACE_INTPARM
//   Returns pointer to integer array signifying integer type or not
// Inputs:  ProbSpace* pSpace  Problem space of interest
//          int        Dim     Dimension to interrogate
// Created: 10/8/94 kcoppes
//==========================================================
#define PSPACE_INTPARM(pSpace,Dim) pSpace->intparm[Dim-1]

#endif
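The range macro reduces to a single comparison, shown here in a freestanding miniature (the MiniSpace type and macro name are hypothetical; the real macro indexes the per-dimension parmrange matrices rather than a plain array).

#include <cstdio>
#include <cmath>

struct MiniSpace {
    double range[2][2];        // two parameters; columns are {min, max}
};
// Same test as PSPACE_RANGE_WIDER: is |max - min| wider than Range?
#define MINI_RANGE_WIDER(sp, pNum, Range) \
    (std::fabs((sp)->range[pNum][1] - (sp)->range[pNum][0]) > (Range))

int main() {
    MiniSpace sp = {{{-1.0, 1.0}, {0.0, 10.0}}};
    std::printf("parm 0 wider than 5? %d\n", (int)MINI_RANGE_WIDER(&sp, 0, 5.0)); // 0
    std::printf("parm 1 wider than 5? %d\n", (int)MINI_RANGE_WIDER(&sp, 1, 5.0)); // 1
    return 0;
}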


//==========================================================
// CLASS: ProbSpace                                      rev 5
// File:  pspace.cpp
//   This class defines a problem space.
//   Note: parmrange points to an array of TMatrix pointers.
//   Pointer N should point to a matrix of ranges for an N+1
//   dimensional parameter set.  By this method different parameter
//   dimensions are allowed to have different ranges.
//==========================================================

// --variable dependencies------------------------------------------------------
#ifndef MAXFLOAT
#define MAXFLOAT 3.4e38            // The float limit for the system
#endif
// -------------------------------------------------------------------------------

// --file dependencies----------------------------------------------------------
#define farmalloc malloc
#define farfree   free
#ifndef __MATRIX_CPP
#include "matrix.cpp"
#endif
// -------------------------------------------------------------------------------

// --variable dependencies------------------------------------------------------
#define T (*this)
// -------------------------------------------------------------------------------


class ProbSpace{
public:
    //------data members---------
    char*     name;
    TMatrix*  parmrange;
    TMatrix   dimensions;
    int       maxparms;
    int**     intparm;
    int       NumFitFnEvals;
    complex   (*userFitness)(TMatrix& p);

    //------function members-----
    ProbSpace(void) {};
    ProbSpace(int numparm);
    ProbSpace(TMatrix indim);
    ProbSpace(int numparm, TMatrix indim);
    ProbSpace(TMatrix indim, TMatrix inrange);

    void    AllocRanges(void);
    complex Fitness(TMatrix& p);
    double  Range(int dimen, int parmnum);
    void    GetRange(int dimen, int parmnum, double& minrange, double& maxrange);
    void    FixDim(int numparm);
    int     IsValidDim(int dimension);
    void    OpenRanges(void);
    void    OpenRanges(int numparm);
    void    SetRanges(TMatrix inrange);
    int     IsValidParmSet(TMatrix p);
    int     IsValidParm(TMatrix p, int parmnum);
    int     IsValidParm(double value, int dim, int parmnum);
    void    Print(void);
    void    operator = (ProbSpace& B);
};


//==========================================================
// CLASS HISTORY: ProbSpace
// Created:  9/9/94 kcoppes
// Modified:
//   kcoppes  1) Moved system parameters MAXFLOAT, FARM to oosearch.h
//            2) Moved stdio, stdlib includes to oosearch.h
//   kcoppes  Chgd class name from ProbSpaceDef to ProbSpace
//   kcoppes  1) Removed limit on # of parameters
//            2) Reversed order of args in Fitness
//   kcoppes  1) Replaced dimensions and numdim with matrix dimensions
//            2) Changed argument of user Fitness to TMatrix
//            3) Removed SetDims function as dimensions is a direct
//               assignment with the TMatrix type
//   10/6/94  kcoppes  Made parmrange a pointer to matrices to allow
//                     ranges to change w/ differing numbers of parameters
//   12/14/94 kcoppes  Transferred source to Microsoft Word          rev 1
//   kcoppes  Corrected port errors                                  rev 2
//   kcoppes  Cleaned up formatting                                  rev 3
//   1/20/95  kcoppes  Added monitoring of number of fitness function
//                     evaluations                        rev 3a - not released
//   kcoppes  Far malloc pass revision                               rev 3b
//   1/24/95  kcoppes  Added AllocRanges function to allocate memory
//                     for ranges                                    rev 4
//   kcoppes  Added GetRanges function to return ranges              rev 4a
//   kcoppes  Changed userFitness to employ reference                rev 5
//==========================================================


"CLASS: PROBSPACE FUNC'fION: ProbS pace Constructor // // FUNCTION: PrabSpace // Creates a space instance //========================================================== // ---------------------------------------------------------------------------------// VARIANT 1: only numparm parameters open ranges MAXFLOAT // Inputs: // int numparm the number of parameters // Created: 9/9/94 koppes // Modified: // kcoppes kcoppes Added memory allocation for parameter ranges dependant upon dimension number. Added initialization offitness fimction evaluation // --------------------------------------------------------------------------------PrabSpace: : PrabSpace (int numparm) ( T.name = NULL; } FixDim(numparm) ; OpenRanges(numparm);. T.NumFitFnEvals 0; // Set s.1.ngle d.imension // open ranges to widest t // In.1.tial.ize fitness function counter ---------------------------------------------------------------------------------// VARIANT 2: user-specified. ranges. // // Inputs: int // TKatrix inrange // // Created: 9/9/94 koppes // Modlfled: numparm the number of parameters user-specified. range array // kcoppes Exchanged previous arguments for TMatrix type kcoppes Added mem alloca1ion for pannrange kcoppes Added initialization of fitness fimction evaluation counter. ---------------------------------------------------------------------------------PrabSpace: :PrabSpace(int numparm, TKatrix inrange) ( T.name = NULL; } FixDim(numparm) ; SetRanges(inrange); T.NumFitFnEvals = 0; Set Single allowed d.imens.1.on Set user-spec.1.fied. ranges In.1.tial.ize fitness function counter


// -------------------------------------------------------------------------------
// VARIANT 3: user-defined dimension array, open ranges
// Inputs:   TMatrix indim   array of allowed dimension sizes
// Modified: kcoppes  exchanged previous arguments for TMatrix type
//           kcoppes  added mem allocation for parmrange
//           kcoppes  Added initialization of fitness function
//                    evaluation counter
// -------------------------------------------------------------------------------
ProbSpace::ProbSpace(TMatrix indim){
    T.name = NULL;
    T.maxparms   = (int)floor(indim.drealmax());
    T.dimensions = indim;            // Set allowed dimensions
    OpenRanges(T.maxparms);          // Open ranges to widest limit
    T.NumFitFnEvals = 0;             // Initialize fitness function eval counter
}

// -------------------------------------------------------------------------------
// VARIANT 4: only single numbers of parameters in indim allowed.
//   The number of entries in inrange must be [max # of parms][2].
// Inputs:   TMatrix indim     array of allowed dimension sizes
//           TMatrix inrange   user-specified range array
// Modified: kcoppes  exchanged args for TMatrix type
//           kcoppes  changed realmax for drealmax to reflect changes
//                    in matrix class
//           kcoppes  Added initialization of fitness function
//                    evaluation counter
// -------------------------------------------------------------------------------
ProbSpace::ProbSpace(TMatrix indim, TMatrix inrange){
    T.name = NULL;
    T.dimensions = indim;                      // Set allowed dimensions
    int dimen = (int)floor(indim.drealmax());
    FixDim( dimen );                           // Set the maximum dimension
    SetRanges(inrange);                        // Set user ranges
    T.NumFitFnEvals = 0;                       // Initialize fitness function eval counter
}
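As a usage sketch only: the fragment below assumes the ProbSpace class above is compiled into the project, that TMatrix exposes an element write accessor e(row,col), and that complex has a (re,im) constructor. Those interface details are assumptions for illustration rather than facts confirmed by the listings.

// Hypothetical user fitness over two parameters
complex Rosenbrock(TMatrix& p) {
    double x = p.r(0,0), y = p.r(1,0);
    double f = 100.0*(y - x*x)*(y - x*x) + (1.0 - x)*(1.0 - x);
    return complex(f, 0.0);
}

void MakeSpace(void) {
    TMatrix range(2,2);                    // one (min,max) row per parameter
    range.e(0,0) = -2.0;  range.e(0,1) = 2.0;
    range.e(1,0) = -2.0;  range.e(1,1) = 2.0;
    ProbSpace P(2, range);                 // VARIANT 2: two parameters, user ranges
    P.userFitness = Rosenbrock;            // attach the fitness function
}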


//==========================================================
// MACRO: PSPACE_RANGE_WIDER
//   Returns 1 if range for pSpace dimension Dim, parameter pNum
//   is wider than Range
// Inputs:  ProbSpace* pSpace  Problem space of interest
//          int        Dim     Dimension to interrogate
//          int        pNum    Parameter number
//          double     Range   Range to compare to
//==========================================================
#define PSPACE_RANGE_WIDER(pSpace,Dim,pNum,Range) \
    (abs( (pSpace->parmrange[Dim-1].r(pNum,1)) \
        - (pSpace->parmrange[Dim-1].r(pNum,0)) ) > Range )

//==========================================================
// MACRO: PSPACE_INTPARM
//   Returns pointer to integer array signifying integer type or not
// Inputs:  ProbSpace* pSpace  Problem space of interest
//          int        Dim     Dimension to interrogate
// Created: kcoppes
//==========================================================
#define PSPACE_INTPARM(pSpace,Dim) pSpace->intparm[Dim-1]

//==========================================================
// FUNCTION: AllocRanges
//   Creates matrices to contain the range information
//==========================================================
typedef int* intptr;
void ProbSpace::AllocRanges(void){
    T.parmrange = new TMatrix[T.maxparms];
    T.intparm   = new intptr[T.maxparms];
    for(int t=0;t<T.dimensions.length();t++){
        int N = (int) T.dimensions.r(t,0);
        T.parmrange[N-1].define(N,2);
        T.intparm[N-1] = new int[N];
    }
}


//==========================================================
// FUNCTION: Fitness
//   The fitness or success norm for the parameter space
// Inputs:   TMatrix& p      array of parameters
// Returns:  float Fitness   the value of the fitness function
// Created:  kcoppes
// Modified: kcoppes  Reversed order of args
//           kcoppes  Changed input arg to TMatrix type
//           kcoppes  Added increment of fitness function evaluation counter
//           kcoppes  Fixed fitness function evaluation
//==========================================================
complex ProbSpace::Fitness(TMatrix& p){
    complex fit;
    if(!IsValidDim(p.length())){
        printf("<ProbSpace.Fitness> Error: Invalid Dimension");
        exit(-1);
    }
    T.NumFitFnEvals += 1;
    fit = T.userFitness(p);
    return fit;
}

//==========================================================
// FUNCTION: Range
//   Returns the range for a parameter
// Inputs:   int dimen     Dimension of interest
//           int parmnum   Parameter to return range for
// Returns:  double rng    Returned range size
// Created:  kcoppes
//==========================================================
double ProbSpace::Range(int dimen, int parmnum){
    int N = dimen-1;
    double rng = (T.parmrange[N]).r(parmnum,1) - (T.parmrange[N]).r(parmnum,0);
    return rng;
}
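Routing every evaluation through Fitness is what lets the framework charge a search for exactly the evaluations it triggers. A freestanding miniature of that bookkeeping (hypothetical names; the real counter lives on the ProbSpace instance):

#include <cstdio>

static int NumFitFnEvals = 0;            // stands in for ProbSpace::NumFitFnEvals

double rawFitness(double x) { return (x - 3.0)*(x - 3.0); }

double Fitness(double x) {               // the counted entry point
    NumFitFnEvals += 1;
    return rawFitness(x);
}

int main() {
    int before = NumFitFnEvals;          // snapshot, as Iterate::Propagate does
    Fitness(1.0);  Fitness(2.0);  Fitness(2.9);
    std::printf("evaluations charged: %d\n", NumFitFnEvals - before);  // 3
    return 0;
}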


//==========================================================
// FUNCTION: GetRange
//   Sets inputs to value of ranges for a given parameter
// Inputs:   double& minrange, maxrange   vars to set to min and
//           max range values
// Created:  1/11/95 kcoppes
//==========================================================
void ProbSpace::GetRange(int dimen, int parmnum, double& minrange, double& maxrange){
    int N = dimen-1;
    minrange = (T.parmrange[N]).r(parmnum,0);
    maxrange = (T.parmrange[N]).r(parmnum,1);
}

//==========================================================
// FUNCTION: FixDim
//   Restricts number of parameters to a single choice
// Inputs:   int numparm   number of parameters
// Created:  kcoppes
// Modified: kcoppes  removed limits on # of parameters
//           kcoppes  changed dimensions to TMatrix type
//==========================================================
void ProbSpace::FixDim(int numparm){
    T.maxparms   = numparm;       // fix # of parameters & dimension
    T.dimensions = numparm;
}

//==========================================================
// FUNCTION: IsValidDim
//   Checks to see if dimension is in list of valid dimensions
// Inputs:   int dimension   the number of parameters
// Returns:  int valid       0/1 indicates invalid/valid
// Created:  9/9/94 kcoppes
// Modified: kcoppes  Fixed dimension check
//==========================================================
int ProbSpace::IsValidDim(int dimension){
    int valid = 0;                            // Assume invalid dimension
    for (int i=0; i<T.dimensions.length(); i++)
        if (dimension == (int)T.dimensions.r(i,0)) valid = 1;
    return valid;
}

//==========================================================
// FUNCTION: OpenRanges (polymorphic)
//   Opens ranges to their widest
//==========================================================

// -------------------------------------------------------------------------------
// VARIANT 1: open ranges to their widest limit
// Created:  kcoppes
// Modified: kcoppes  added for loop to encompass all ranges
//           kcoppes  Added memory allocation for ranges
// -------------------------------------------------------------------------------
void ProbSpace::OpenRanges(void){
    T.AllocRanges();                          // Allocate memory for ranges
    for(int t=0;t<T.dimensions.length();t++){
        int N = (int) T.dimensions.r(t,0);
        for(int i=0;i<N;i++){
            T.parmrange[N-1].e(i,0) = -MAXFLOAT;   // open ranges to widest limit
            T.parmrange[N-1].e(i,1) =  MAXFLOAT;
            T.intparm[N-1][i] = 0;                 // integer flags off
        }
    }
}

//==========================================================
// FUNCTION: SetRanges
//   Assigns ranges
// Inputs:   TMatrix inrange   input ranges for a dimension
// Created:  kcoppes
// Modified: kcoppes  added for loop to encompass all parameter ranges
//           kcoppes  Added memory allocation for ranges
//==========================================================
void ProbSpace::SetRanges(TMatrix inrange){
    T.AllocRanges();                 // allocate memory for ranges
    int N = inrange.m;
    T.parmrange[N-1] = inrange;      // set parameters & set integer flags off
    for(int i=0;i<N;i++){
        T.intparm[N-1][i] = 0;
        if (inrange.r(i,0) > inrange.r(i,1)) {
            printf("<ProbSpace.SetRanges> Error: max & min ranges for parm %i reversed.", i);
            printf(" Correct & recompile.");
            exit(-1);
        }
        if (!((inrange.r(i,0) >= -MAXFLOAT)&&(inrange.r(i,1) <= MAXFLOAT))) {
            printf("<ProbSpace.SetRanges> ");
            printf("Error: %i dimensional set ",N);
            printf("parm %i has a range greater than the system limit",i);
            exit(-1);
        }
    }
}


//==========================================================
// FUNCTION: IsValidParm (polymorphic)
//   Checks to see if parameter is in range
//==========================================================

// -------------------------------------------------------------------------------
// VARIANT 1: input parameter set to be checked
// Inputs:   TMatrix p     parameter set of interest
//           int parmnum   number of parameter
// Returns:  int valid     0/1 indicates invalid/valid
// Created:  kcoppes
// Modified: kcoppes  reformulated with TMatrix
//           kcoppes  added dimension arg to parmrange occurrences
// -------------------------------------------------------------------------------
int ProbSpace::IsValidParm(TMatrix p, int parmnum){
    int valid=0;
    int N = p.length()-1;                  // addressed dimension of p parm
    if (parmnum < p.length())
        valid = (   (p.r(parmnum,0) >= (T.parmrange[N]).r(parmnum,0))
                 && (p.r(parmnum,0) <= (T.parmrange[N]).r(parmnum,1)) );
    return valid;
}

// -------------------------------------------------------------------------------
// VARIANT 2: input a value to be checked
// Inputs:   double value   value being checked
//           int dim        dimension in space
//           int parmnum    number of parameter
// Returns:  int valid      0/1 indicates invalid/valid
// Created:  kcoppes
// -------------------------------------------------------------------------------
int ProbSpace::IsValidParm(double value, int dim, int parmnum){
    int valid=0;
    int N = dim-1;                         // addressed dimension of space
    if (parmnum < dim)
        valid = (   ( value >= (T.parmrange[N]).r(parmnum,0) )
                 && ( value <= (T.parmrange[N]).r(parmnum,1) ) );
    return valid;
}


//==========================================================
// FUNCTION: IsValidParmSet
//   Checks to see if all parameters are in range and dimension
//   is valid
// Inputs:   TMatrix p   Array of parameters
// Returns:  int valid   0/1 indicates invalid/valid
// Created:  kcoppes
//==========================================================
int ProbSpace::IsValidParmSet(TMatrix p){
    int t, valid=0;
    if(IsValidDim(p.length())){
        valid = 1;
        for(t=0;t<p.length();t++)
            if(!IsValidParm(p,t)) valid = 0;
    }
    return valid;
}

//==========================================================
// FUNCTION: Print
//   Outputs the parameters for a problem space
// Created:  kcoppes
//==========================================================
void ProbSpace::Print(void){
    int a,i;
    for(a=0;a<70;a++) printf("-");
    printf("\n");
    printf("PROBLEM SPACE DEFINITION\n");
    if(T.name != NULL) printf("%s\n",T.name);
    printf("\n");
    printf("Maximum number of parameters: %i\n", (*this).maxparms);
    if(T.maxparms > 1) {
        printf("Allowed numbers of parameters:\n");
        for (i=0;i<T.dimensions.length();i++)
            printf("  %i\n",(int)T.dimensions.r(i,0));
    }
    // [remainder of listing lost in source scan]
}

//==========================================================
// OPERATOR: = (ProbSpace)
//   Assigns one problem space to another
// Created:  kcoppes
//==========================================================
void ProbSpace::operator = (ProbSpace& B){
    if(T.name!=NULL) { farfree(T.name); T.name = NULL; }
    if(B.name!=NULL) {
        T.name = (char *) farmalloc(strlen(B.name)+1);
        strcpy(T.name,B.name);
    }
    T.dimensions    = B.dimensions;
    T.maxparms      = B.maxparms;
    T.NumFitFnEvals = B.NumFitFnEvals;
    T.userFitness   = B.userFitness;
    T.AllocRanges();
    for(int t=0;t<T.dimensions.length();t++){
        int dimen = (int)T.dimensions.r(t,0)-1;
        T.parmrange[dimen] = B.parmrange[dimen];
        for(int i=0;i<=dimen;i++)
            T.intparm[dimen][i] = B.intparm[dimen][i];
    }
}


A.2.5 Iteration Type Implementation


//==========================================================
// HEADER: Iterate                                      rev 5
// File:   ooiterat.h
//   This class defines an iteration.
//==========================================================
#ifndef __ITERAT_H
#define __ITERAT_H 1

typedef void* HGLOBAL;
#define huge

// --file dependencies----------------------------------------------------------
#include "oomethod.h"
// -------------------------------------------------------------------------------
#define T (*this)

//==========================================================
// CLASS HISTORY: Iterate
// Created:  kcoppes
// Modified:
//   kcoppes  1) Moved includes for stdio and stdlib to oosearch.h
//            2) Added class predecl for SearchMethod
//   kcoppes  Changed data from explicit arrays to pointers
//   kcoppes  1) Added Method as a member
//            2) Removed IterNum member & moved it to Search
//   kcoppes  Moved source file into Microsoft Word                  rev 1
//   kcoppes  Added capability to dump state information to
//            externally opened file                                 rev 2
//   kcoppes  1) Added monitoring of number of function evaluations
//               and cost
//            2) Added representative fitness variable to iteration
//   kcoppes  Added class destructor                                 rev 3
//   kcoppes  Exchanged Iterate(lastIterate) ctor for Propagate to
//            solve mem problems; added 'new' operator               rev 4
//   kcoppes  Modified memory allocation for iteration
//            BinaryDataPoints                                       rev 5
//==========================================================


class Iterate{
public:
    //------data members---------
    SearchMethod* Method;     ProbSpace* PrSpace;     Search* Srch;
    Iterate* lastIterate;     Iterate* nextIterate;
    float  ConvMeasure;       // Convergence measure
    double IterFitness;       // Representative iteration fitness
    int    NumFnEvals;        // Number of fitness function evaluations
    int    IterCost;          // Iteration cost

    // Method state storage
    Matrix  mState[ITERATE_SIZE];   double* fState[ITERATE_SIZE];
    char*   cState[ITERATE_SIZE];   int*    iState[ITERATE_SIZE];
    int mStateNum;    int fStateNum;    int cStateNum;    int iStateNum;
    int mStateDim[ITERATE_SIZE];    int fStateDim[ITERATE_SIZE];
    int cStateDim[ITERATE_SIZE];    int iStateDim[ITERATE_SIZE];

    // Data points
    BinaryDataPoint* bDATA;   FloatDataPoint* fDATA;   IntegerDataPoint* iDATA;
    int fIterDataPts;         // Number of data points per iteration
    int bIterDataPts;         //   of each type
    int iIterDataPts;

    int Echo;     // Echo flag
    int Detail;   // Detail flag

    //------function members-----
    Iterate(void);
    Iterate(SearchMethod* M, ProbSpace* P, Search* S, int* Cost);
    ~Iterate();
    void  Propagate(Iterate* lastIter);
    void  Allocate(void);
    void  WriteState(FILE* TraceFile);
    void  PrintState(void);
    void  operator = (Iterate& B);
    void* operator new (size_t size);
    void  ECHO(char* msg);      void  ECHOVAL(char* msg, float val);
    void  DETAIL(char* msg);    void  DETAILV(char* msg, float val);
};


//==========================================================
// CLASS: Iterate                                       rev 6
// File:  ooiterat.cpp
//   This class defines an iteration.
//==========================================================
typedef void* HGLOBAL;
#define huge

// --file dependencies----------------------------------------------------------
#ifndef __OOMETHOD_CPP
#include "oomethod.cpp"
#endif
// -------------------------------------------------------------------------------
#define T (*this)

//==========================================================
// CLASS HISTORY: Iterate
// Created:  kcoppes
// Modified:
//   kcoppes  1) Moved includes for stdio and stdlib to oosearch.h
//            2) Added class predecl for SearchMethod
//   kcoppes  Changed data from explicit arrays to pointers
//   kcoppes  1) Added Method as a member
//            2) Removed IterNum member & moved it to Search
//   12/10/94 kcoppes  Moved source file into Microsoft Word         rev 1
//   12/13/94 kcoppes  Added capability to dump state information to
//                     externally opened file                        rev 2
//   1/18/95  kcoppes  1) Added monitoring of number of function
//                        evaluations and cost
//                     2) Added representative fitness variable to
//                        iteration
//   kcoppes  Added class destructor                                 rev 3
//   kcoppes  Exchanged Iterate(lastIterate) ctor for Propagate to
//            solve mem problems; added 'new' operator               rev 4
//   kcoppes  Modified memory allocation for iteration
//            BinaryDataPoints                                       rev 5
//   kcoppes  Added abort flag to propagation                        rev 6
//==========================================================


class Iterate{
public:
    //------data members---------
    SearchMethod* Method;     ProbSpace* PrSpace;     Search* Srch;
    Iterate* lastIterate;     Iterate* nextIterate;
    float  ConvMeasure;
    double IterFitness;       // Representative iteration fitness
    int    NumFnEvals;        // Number of fitness function evaluations
    int    IterCost;          // Iteration cost
    int    abortFlag;         // Propagation abort flag

    // Method state storage
    TMatrix mState[ITERATE_SIZE];   double* fState[ITERATE_SIZE];
    char*   cState[ITERATE_SIZE];   int*    iState[ITERATE_SIZE];
    int mStateNum;    int fStateNum;    int cStateNum;    int iStateNum;
    int mStateDim[ITERATE_SIZE];    int fStateDim[ITERATE_SIZE];
    int cStateDim[ITERATE_SIZE];    int iStateDim[ITERATE_SIZE];

    // Data points
    BinaryDataPoint* bDATA;   FloatDataPoint* fDATA;   IntegerDataPoint* iDATA;
    int fIterDataPts;   int bIterDataPts;   int iIterDataPts;   // Number per iteration

    int Echo;     int Detail;     // Echo flags

    //------function members-----
    Iterate(void);
    Iterate(SearchMethod* M, ProbSpace* P, Search* S, int* Cost);
    ~Iterate();
    void  Propagate(Iterate* lastIter);
    void  Allocate(void);
    void  WriteState(FILE* TraceFile);
    void  PrintState(void);
    void  operator = (Iterate& B);
    void* operator new (size_t size);
    void  ECHO(char* msg);      void  ECHOVAL(char* msg, float val);
    void  DETAIL(char* msg);    void  DETAILV(char* msg, float val);
};


//==========================================================
// FUNCTION: Iterate (polymorphic)
//   Creates an iteration instance
//==========================================================

// -------------------------------------------------------------------------------
// VARIANT 1: Creates a new iteration without major details
// Created: kcoppes
// -------------------------------------------------------------------------------
Iterate::Iterate(void){
    // Null iteration history pointers
    T.lastIterate = NULL;
    T.nextIterate = NULL;
    // Zero number of search states
    T.mStateNum = 0;    T.fStateNum = 0;
    T.cStateNum = 0;    T.iStateNum = 0;
    // Zero number of data points per iteration of each type
    T.fIterDataPts = 0;
    T.bIterDataPts = 0;
    T.iIterDataPts = 0;
}


// -------------------------------------------------------------------------------
// VARIANT 2: creates a new iteration & assigns method
// Inputs:   SearchMethod *method   Pointer to search method
// Created:  kcoppes
// Modified: kcoppes  Changed args to Method & ProbSpace to allow for
//                    memory allocation information driven by method
//                    and probspace
//           kcoppes  1) Added search pointer to args
//                    2) Added cost arg
//           kcoppes  Added monitoring of number of fitness function
//                    iterations and initialization cost
//           kcoppes  Added call to Allocate function
// -------------------------------------------------------------------------------
Iterate::Iterate(SearchMethod* M, ProbSpace* P, Search* S, int* Cost){
    // Null iteration history pointers
    T.lastIterate = NULL;
    T.nextIterate = NULL;
    T.Method  = M;
    T.PrSpace = P;
    T.Srch    = S;

    // Set iteration matrices to default values
    T.bDATA = NULL;
    T.fDATA = NULL;
    T.iDATA = NULL;
    for(int i=0;i<ITERATE_SIZE;i++){
        T.fState[i] = NULL;  T.iState[i] = NULL;  T.cState[i] = NULL;
    }

    T.Method->IterateInit(this);       // Call IterateInit fn from method
    T.fIterDataPts = T.Method->fIterDataPts;
    T.bIterDataPts = T.Method->bIterDataPts;
    T.iIterDataPts = T.Method->iIterDataPts;
    T.Allocate();

    // Set initial values for method states
    int PriorFnEvals = T.PrSpace->NumFitFnEvals;   // Capture prior # of fitness evaluations
    *Cost = T.Method->SearchInit(this);
    T.NumFnEvals = T.PrSpace->NumFitFnEvals - PriorFnEvals;
    T.IterCost   = *Cost;
    T.abortFlag  = 0;
}


//==========================================================
// FUNCTION: Propagate
//   Creates new iteration by propagating the last iteration
// Inputs:   Iterate *lastIter   Pointer to last iteration
// Created:  10/3/94 kcoppes
// Modified: 10/13/94 kcoppes  added ProbSpace argument
//           kcoppes  1) Moved IterNum operation to Search
//                    2) Changed ProbSpace arg to Search
//           kcoppes  Attached ProbSpace & Method to Iterate & removed
//                    their input arguments
//           kcoppes  Added monitoring of number of fitness function
//                    iterations & propagation cost
//           kcoppes  Eliminated cross linking of iterations
//==========================================================
void Iterate::Propagate(Iterate *lastIter){
    // Use the propagation function to determine the contents of this
    // iteration, and determine the number of fitness function evaluations
    // by examining the problem space before and after.
    int PriorFnEvals = T.PrSpace->NumFitFnEvals;
    T.IterCost   = T.Method->PropFn(lastIter,this);
    T.NumFnEvals = T.PrSpace->NumFitFnEvals - PriorFnEvals;
}


//==========================================================
// FUNCTION: ~Iterate
//   Destroys an iteration and all of its attendant data
// Created:  1/10/95 kcoppes
// Modified: kcoppes  Added deletion of arrays of pointers to arrays
//==========================================================
Iterate::~Iterate(){
    int t;
    // Clear state information
    for(t=0;t<T.mStateNum;t++) T.mState[t].undefine();
    for(t=0;t<T.fStateNum;t++) if(T.fState[t]!=NULL) delete T.fState[t];
    for(t=0;t<T.iStateNum;t++) if(T.iState[t]!=NULL) delete T.iState[t];
    for(t=0;t<T.cStateNum;t++) if(T.cState[t]!=NULL) delete T.cState[t];

    // Clear data points
    if((T.fIterDataPts>0)&&(T.fDATA!=NULL)){
        if(T.fIterDataPts>1)
            for(t=1;t<T.fIterDataPts;t++) (*(T.fDATA+t)).~FloatDataPoint();
        delete T.fDATA;
    }
    if((T.bIterDataPts>0)&&(T.bDATA!=NULL)){
        if(T.bIterDataPts>1)
            for(t=1;t<T.bIterDataPts;t++) (*(T.bDATA+t)).~BinaryDataPoint();
        delete T.bDATA;
    }
    if((T.iIterDataPts>0)&&(T.iDATA!=NULL)){
        if(T.iIterDataPts>1)
            for(t=1;t<T.iIterDataPts;t++) (*(T.iDATA+t)).~IntegerDataPoint();
        delete T.iDATA;
    }
}

//==========================================================
// FUNCTION: Allocate (Iterate)
//   Allocates memory for the iteration structures
// Inputs:   SearchMethod* M   Method from which to draw # of data points
//           ProbSpace* P      Problem space from which to draw size of
//                             data point arrays
// Created:  kcoppes
// Modified: kcoppes  moved ConvMeasure to Search from Iterate
//           kcoppes  changed to Windows allocation methods
//           kcoppes  insured equality of Method and Iter # of data points
//==========================================================
void Iterate::Allocate(void){
    int t,i;
    SearchMethod* M = T.Method;
    ProbSpace*    P = T.PrSpace;

    // Allocate space for data points
    FloatDataPoint*   F;
    BinaryDataPoint*  B;
    IntegerDataPoint* In;
    void* hglb;

    if (M->fIterDataPts) {            // floating point data points
        T.fIterDataPts = M->fIterDataPts;
        F = (FloatDataPoint *) MemMAllocate(
              (unsigned)T.fIterDataPts*sizeof(FloatDataPoint),hglb);
        T.fDATA = F;
        for(t=0;t<T.fIterDataPts;t++) (F+t)->Allocate(P);
    }
    if (M->bIterDataPts) {            // binary data points
        T.bIterDataPts = M->bIterDataPts;
        B = (BinaryDataPoint *) MemMAllocate(
              (unsigned)T.bIterDataPts*sizeof(BinaryDataPoint),hglb);
        T.bDATA = B;
        for(t=0;t<T.bIterDataPts;t++) (B+t)->Allocate(P,1);
    }
    if (M->iIterDataPts) {            // integer data points
        T.iIterDataPts = M->iIterDataPts;
        In = (IntegerDataPoint *) MemMAllocate(
              (unsigned)T.iIterDataPts*sizeof(IntegerDataPoint),hglb);
        T.iDATA = In;
        for(t=0;t<T.iIterDataPts;t++) (In+t)->Allocate(P);
    }

    // Allocate space for method states
    for(i=0;i<ITERATE_SIZE;i++){
        (T.mState[i]).m = 0;
        (T.mState[i]).n = 0;
    }
    for(i=0;i<T.fStateNum;i++) T.fState[i] = new double[T.fStateDim[i]];
    for(i=0;i<T.iStateNum;i++) T.iState[i] = new int[T.iStateDim[i]];
    for(i=0;i<T.cStateNum;i++) T.cState[i] = new char[T.cStateDim[i]];
}


"CLASS: ITERATE WriteState (cont) FUNCfIO:'li: FUNCTION: Writestate Wri tes the current state information to externaJ.ly opened. file. It is assumed that a method will always have the same number of states. Therefore this function should always output the same number of columns during a qiven search. Inputs: FILE* TraceFile Pointer to file to write information to Created: lu:oppes Modified: kcoppes repaired TMatrix output void Iterate::WriteState(FlLE* TraceFile) { } int i,j: if (TraceFile!=NULL) %1 if (T. mStateNum) for(i=O:i

//==========================================================
// FUNCTION: PrintState
//   Writes the current state information to stdout.  It is assumed
//   that a method will always have the same number of states;
//   therefore this function should always output the same number of
//   columns during a given search.
// Created:  kcoppes
// Modified: kcoppes  repaired TMatrix output
//==========================================================
void Iterate::PrintState(void){
    int i,j;
    if (T.mStateNum)
        for(i=0;i<T.mStateNum;i++)
            for(j=0;j<T.mStateDim[i];j++) printf("%e ",T.mState[i].r(j,0));
    if (T.fStateNum)
        for(i=0;i<T.fStateNum;i++)
            for(j=0;j<T.fStateDim[i];j++) printf("%e ",T.fState[i][j]);
    if (T.iStateNum)
        for(i=0;i<T.iStateNum;i++)
            for(j=0;j<T.iStateDim[i];j++) printf("%i ",T.iState[i][j]);
    if (T.cStateNum)
        for(i=0;i<T.cStateNum;i++)
            for(j=0;j<T.cStateDim[i];j++) printf("%c",T.cState[i][j]);
}

//==========================================================
// OPERATOR: = (Iterate)
//   Assigns an iteration to an iteration
// Inputs:   Iterate& B   The assigning iteration
// Created:  kcoppes
// Modified: kcoppes  Added explicit allocation
//==========================================================
void Iterate::operator = (Iterate& B){
    int i,j;
    T.Method  = B.Method;     T.Detail = B.Detail;
    T.PrSpace = B.PrSpace;    T.Echo   = B.Echo;
    T.Srch    = B.Srch;

    T.fIterDataPts = B.fIterDataPts;
    T.bIterDataPts = B.bIterDataPts;
    T.iIterDataPts = B.iIterDataPts;

    T.mStateNum = B.mStateNum;    T.fStateNum = B.fStateNum;
    T.iStateNum = B.iStateNum;    T.cStateNum = B.cStateNum;
    for(i=0;i<T.mStateNum;i++) T.mStateDim[i] = B.mStateDim[i];
    for(i=0;i<T.fStateNum;i++) T.fStateDim[i] = B.fStateDim[i];
    for(i=0;i<T.iStateNum;i++) T.iStateDim[i] = B.iStateDim[i];
    for(i=0;i<T.cStateNum;i++) T.cStateDim[i] = B.cStateDim[i];

    T.Allocate();

    if(T.mStateNum>0) for(i=0;i<T.mStateNum;i++) T.mState[i] = B.mState[i];
    if(T.fStateNum>0) for(i=0;i<T.fStateNum;i++)
        for(j=0;j<T.fStateDim[i];j++) T.fState[i][j] = B.fState[i][j];
    if(T.iStateNum>0) for(i=0;i<T.iStateNum;i++)
        for(j=0;j<T.iStateDim[i];j++) T.iState[i][j] = B.iState[i][j];
    if(T.cStateNum>0) for(i=0;i<T.cStateNum;i++)
        for(j=0;j<T.cStateDim[i];j++) T.cState[i][j] = B.cState[i][j];

    for(i=0;i<T.fIterDataPts;i++) *(T.fDATA+i) = *(B.fDATA+i);
    for(i=0;i<T.bIterDataPts;i++) *(T.bDATA+i) = *(B.bDATA+i);
    for(i=0;i<T.iIterDataPts;i++) *(T.iDATA+i) = *(B.iDATA+i);
}

//==========================================================
// OPERATOR: new (Iterate)
//   Allocates memory for an iteration
// Inputs:   size_t size   the number of structures to allocate
// Created:  kcoppes
//==========================================================
void* Iterate::operator new (size_t size){
    HGLOBAL hglb;
    return MemMAllocate((unsigned)size,hglb);
}

//==========================================================
// FUNCTION: ECHO, ECHOVAL
//   Prints a message on stdout if the echo flag is set on the search
// Inputs:   char* msg   The message to print
// Created:  kcoppes
//==========================================================
void Iterate::ECHO(char* msg)              { if (this->Echo) printf(msg); };
void Iterate::ECHOVAL(char* msg,float val) { if (this->Echo) printf(msg,val); };

//==========================================================
// FUNCTION: DETAIL, DETAILV
//   Prints a message on stdout if the detail flag is set on the search
// Inputs:   char* msg   The message to print
// Created:  11/4/94 kcoppes
//==========================================================
void Iterate::DETAIL(char* msg)              { if (this->Detail) printf(msg); };
void Iterate::DETAILV(char* msg,float val)   { if (this->Detail) printf(msg,val); };
#undef T


A.2.6 Search Type Implementation


//==========================================================
// HEADER: Search                                       rev 5
// File:   oosearch.h
//   This class defines a generalized search.
//==========================================================
#ifndef __SEARCH_H
#define __SEARCH_H 1

typedef void* HGLOBAL;
#define huge

// --file dependencies----------------------------------------------------------
#include <stdio.h>
#include <stdlib.h>
#include "ooiterat.h"
// -------------------------------------------------------------------------------

class Search{
public:
    //------data members---------
    ProbSpace*    PrSpace;       // Relevant problem space definition
    SearchMethod* Method;        // Search method
    float    ConvTgtMetric;      // Search convergence target metric
    long     IterLimit;          // Maximum number of iterations (-1 if no limit)
    int      SearchCostLimit;    // Search cost limit (set to -1 if no limit)
    Iterate* currentIter;        // Current iteration
    long     IterNum;            // Number of completed iterations
    float    ConvMeasure;        // Current convergence measure
    int      CumSearchCost;      // Cumulative search cost

    long huge*   summaryIterNum; // Iteration summary capture
    double huge* summaryData;
    HGLOBAL      summaryhglb[2];
    HGLOBAL      parmhglb[4];
    int          heartbeatPeriod;
    int          captureFlag;

    // method-tailoring parameter arrays
    Matrix  mSParms[20];    char*   cSParms[20];
    double* fSParms[20];    int*    iSParms[20];


    int mSParmNum;   int fSParmNum;   int cSParmNum;   int iSParmNum;
    int mSParmDim[20];  int fSParmDim[20];  int cSParmDim[20];  int iSParmDim[20];

    FILE* TraceFile;
    int   DumpInterval;
    int   Echo;
    int   Detail;
    int   SingleStep;

    //------function members-----
    Search(ProbSpace* pspace, SearchMethod* method);
    Search(ProbSpace* pspace, SearchMethod* method, float targetMetric);
    Search(ProbSpace* pspace, SearchMethod* method, int costLimit);
    ~Search();

    float Run(void);
    float Run(Iterate *seedIter);
    int   Step(void);

    void  SetupTraceFile(char* fname, char* mode, int iDumpInterval);
    void  WriteState(void);
    void  SetupSummaryData(void);
    void  CaptureIterSummary(Iterate* I);
    void  CancelLimits(void);
    void  Allocate(void);
    void* operator new(size_t size);
    //----------------------------
private:
    //------function members-----
    void RunBody(void);
    void AssignAndZero(ProbSpace *pspace, SearchMethod *method);
};


//==========================================================
// MACRO: S_DETAIL, S_DETAILV
//   Echoes debug information.  Assumes S is available and is a
//   pointer to a search.  In order to print, DEBUG_DETAIL must
//   be defined.
// Inputs:   S   Pointer to Search instance
// Created:  10/7/94 kcoppes
//==========================================================
#ifdef DEBUG_DETAIL
#define S_DETAIL(MSG)     printf(MSG)
#define S_DETAILV(MSG,V)  printf(MSG,V)
#else
#define S_DETAIL(MSG)     printf("")
#define S_DETAILV(MSG,V)  printf("")
#endif
#endif


//==========================================================
// CLASS: Search                                        rev 5
// File:  oosearch.cpp
//   This class defines a generalized search.
//==========================================================
typedef void* HGLOBAL;
#define huge

// --file dependencies----------------------------------------------------------
#ifndef __OOITERAT_CPP
#include "ooiterat.cpp"
#endif
// -------------------------------------------------------------------------------
#define T (*this)


class Search{
public:
    //------data members---------
    ProbSpace*    PrSpace;
    SearchMethod* Method;
    float    ConvTgtMetric;
    long     IterLimit;
    int      SearchCostLimit;
    Iterate* currentIter;
    long     IterNum;
    float    ConvMeasure;
    int      CumSearchCost;
    int      CumFnEvals;

    long huge*   summaryIterNum;
    double huge* summaryData;
    double       evalSum, evalSumCost;
    HGLOBAL      summaryhglb[2];
    HGLOBAL      parmhglb[4];
    int          heartbeatPeriod;
    int          captureFlag;

    TMatrix mSParms[20];    char* cSParms[20];
    double* fSParms[20];    int*  iSParms[20];
    int mSParmNum;   int fSParmNum;   int cSParmNum;   int iSParmNum;
    int mSParmDim[20];  int fSParmDim[20];  int cSParmDim[20];  int iSParmDim[20];

    FILE* TraceFile;
    int DumpInterval;   int DumpCounter;
    int Echo;   int Detail;   int SingleStep;

    //------function members-----
    Search(ProbSpace* pspace, SearchMethod* method);
    Search(ProbSpace* pspace, SearchMethod* method, float targetMetric);
    Search(ProbSpace* pspace, SearchMethod* method, int costLimit);
    ~Search();
    float Run(void);
    float Run(Iterate *seedIter);
    int   Step(void);


    void  SetupTraceFile(char* fname, char* mode, int iDumpInterval);
    void  WriteState(void);
    void  SetupSummaryData(void);
    void  CaptureIterSummary(Iterate* I);
    void  UpdateEvalData(Iterate* I);
    void  CancelLimits(void);
    void  Allocate(void);
    void* operator new(size_t size);
    //----------------------------
private:
    //------function members-----
    void RunBody(void);
    void AssignAndZero(ProbSpace *pspace, SearchMethod *method);
};

//==========================================================
// CLASS HISTORY: Search
// Created:  kcoppes
// Modified:
//   kcoppes  1) Moved includes for stdio and stdlib to oosearch.h
//            2) Added constructor variant 2
//   kcoppes  Replaced ProbSpaceDef with ProbSpace
//   kcoppes  Added method states to search
//   kcoppes  Moved IterNum from iteration to Search
//   kcoppes  Moved source file into Microsoft Word                  rev 1
//   kcoppes  Patched errors from port                               rev 2
//   kcoppes  Added capability to dump state information to file     rev 3
//   kcoppes  1) Added CostAndFitData function to return search
//               summary info
//            2) Added pointer to first iteration to search          rev 3a
//   kcoppes  Added search destructor                                rev 4
//   kcoppes  1) Switched to memory handling w/ huge pointers
//            2) Added sampling mechanism to capture data (removed
//               CostAndFit)
//            3) Added 'new' operator                                rev 4a
//   kcoppes  Switched sampling mechanism to huge pointers           rev 4b
//   kcoppes  Added abort flag to propagation                        rev 5
//==========================================================


//==========================================================
// MACRO: S_DETAIL, S_DETAILV
//   Echoes debug information.  Assumes S is available and is a
//   pointer to a search.  In order to print, DEBUG_DETAIL must
//   be defined.
// Inputs:   S   Pointer to Search instance
// Created:  10/7/94 kcoppes
//==========================================================
#ifdef DEBUG_DETAIL
#define S_DETAIL(MSG)     printf(MSG)
#define S_DETAILV(MSG,V)  printf(MSG,V)
#else
#define S_DETAIL(MSG)     printf("")
#define S_DETAILV(MSG,V)  printf("")
#endif


//==========================================================
// FUNCTION: Search (polymorphic)
//   Creates a search instance
//==========================================================

// -------------------------------------------------------------------------------
// VARIANT 1: Unlimited search
//   Note: running this search without limits will result in an
//   unterminated run.
// Inputs:   ProbSpace    *pspace   The problem space to search
//           SearchMethod *method   The search method to employ
// Created:  10/3/94 kcoppes
// Modified: 12/9/94 kcoppes  Modified to use AssignAndZero function
//           kcoppes  Fixed call to AssignAndZero
// -------------------------------------------------------------------------------
Search::Search(ProbSpace *pspace, SearchMethod *method){
    this->CancelLimits();                  // Cancel all limits (defined later)
    this->AssignAndZero(pspace, method);   // Assign pspace & method (defined later)
}

// -------------------------------------------------------------------------------
// VARIANT 2: convergence-only limited search
// Inputs:   ProbSpace    *pspace        The problem space to search
//           SearchMethod *method        The search method to employ
//           float        targetMetric   The limiting value of the
//                                       convergence metric
// Modified: kcoppes  Modified to use AssignAndZero function
//           kcoppes  Fixed call to AssignAndZero
// -------------------------------------------------------------------------------
Search::Search(ProbSpace *pspace, SearchMethod *method, float targetMetric){
    this->CancelLimits();                  // Cancel all limits
    this->ConvTgtMetric = targetMetric;    // Set convergence limit
    this->AssignAndZero(pspace, method);
}


// -------------------------------------------------------------------------------
// VARIANT 3: cost-only limited search
// Inputs:   ProbSpace    *pspace     The problem space to search
//           SearchMethod *method     The search method to employ
//           int          costLimit   The limiting value of the search cost
// Created:  kcoppes
// Modified: kcoppes  Modified to use AssignAndZero function
//           kcoppes  Fixed call to AssignAndZero
// -------------------------------------------------------------------------------
Search::Search(ProbSpace *pspace, SearchMethod *method, int costLimit){
    this->CancelLimits();                  // Cancel all limits
    this->SearchCostLimit = costLimit;     // Set cost limit
    this->AssignAndZero(pspace, method);
}

//==========================================================
// FUNCTION: ~Search
//   Destroys a search instance and its data
// Created:  1/20/95 kcoppes
//==========================================================
Search::~Search(){
    int t;
    if(summaryIterNum!=NULL) delete summaryIterNum;
    if(summaryData!=NULL)    delete summaryData;
    if(T.currentIter!=NULL)  delete T.currentIter;
    if(T.mSParmNum) for(t=0;t<T.mSParmNum;t++) T.mSParms[t].undefine();
    if(T.fSParmNum) for(t=0;t<T.fSParmNum;t++) if(T.fSParms[t]!=NULL) delete T.fSParms[t];
    if(T.iSParmNum) for(t=0;t<T.iSParmNum;t++) if(T.iSParms[t]!=NULL) delete T.iSParms[t];
    if(T.cSParmNum) for(t=0;t<T.cSParmNum;t++) if(T.cSParms[t]!=NULL) delete T.cSParms[t];
}
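As a usage sketch only (it assumes the Search, ProbSpace, and SearchMethod classes transcribed in this appendix are compiled into the same project), the constructor variant is selected by the type of the third argument:

void ConstructSearches(ProbSpace* P, SearchMethod* M) {
    Search unlimited(P, M);            // VARIANT 1: no limits (terminate manually)
    Search byMetric(P, M, 1.0e-3f);    // VARIANT 2: float -> convergence limit
    Search byCost(P, M, 50000);        // VARIANT 3: int   -> cumulative cost limit
    byMetric.IterLimit = 1000;         // limits may also be set directly
}

Because variant 2 takes a float and variant 3 an int, the literal's type picks the overload; an explicit suffix such as 1.0e-3f avoids silently setting the wrong limit.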
PAGE 219

CLASS: SEARL1.1 FUNCTION: Run (polymorphic) Executes the search. (cont) 1: UnseededRnn (generates start iteration base4 on method) Inputs: Iterate *seedIter The iteration with which to seed the search final converqence function value Returns: Created: 1013/94 :\Iodlfled: I0i13!94 kcoppes kcoppes moved ConvMeasure to Search from Iterate I) Added support for state to file (dump counter) 2) Propagated cost of creating first iteration kcoppes kcoppes Added tagging of first iteration to search Reworked creation of first iteration (to situation) float search::Run(void) { IICreate the initial iteration (G initialize search) int Cost 0; search int: PriorFnEvals Capture prior of fitness evaluations this->currentlter new Itera:e(this->Method,this->PrSpace,this,&Costl; thls->currentlter->Detail this->Det3il; Pass echo flaqs to first iteration this->currentlter->Echo this->Echo; this->CumSearchCost = Cost; set cumulative search cost this->DumpCounter 0; Reset count for state dUmp to file current iteration to initial iteration: this->RunBody(); the of the search (define4 next pg) this->CumFnEvals = this->PrSpace->NumFitFnEvals PriorFnEvals; return this->ConvMeasure; llReturn the final convergence value }

PAGE 220

CLASS: SEARCH VAlUAN'l' 2: Seeded Run (starts with qiven iteration) Inputs: Returns: Created : MocliOed: Iterate *seedIter The iteration with which to seed the seaz:ch final convergence function value KOppes kcoppes moved ConvMeasure to Search from Iterate kooppes 1 )Added support for state dump to file (dump counter) 2) Propagated echo flags to seed iteration kcoppes Added tagging of fin;t iterationlo search kcoppes Removed memory of fin;t iteration float Search::Rnn(Iterate *seedIter){ } this->CumSearchCost = 0; Set cumulative seaz:ch cost this->DumpCounter 0; // Reset count for state dump to file this->currentlter seedlter; Set curro iter. to seed iteration this->currentlter->Detail // Pass echo flags to first iteration this->currentlter->Echo this->Echo; this->RunBody(); // Run the body of the search return this->ConvMeasure; Return the final convergence value

PAGE 221

CLASS: SEARCH RunBody (cont) ========================================================= FUNCTION: RIlnBody Executes the body of the search Created: 1013/94 Modified: kooppes Added setup for data capture (sampling) voici Search: : RlJnBody () { } in1: ':er:::i:-.a1:e=O; temporary var inc1icatinq termination temporary var inc1icatinq converqence if(T.captureFlaq!=O) { T.Cap1:ureIterSummarY(1:his->curren1:Iterl; }; while (!terminate) { current state information to file if(T.DumpInterval 0) { if ( T. DumpCounter T. DumpInterval) if( T.DwapCounter 0) T.DumpCcunter += 1; } T.DumpCcunter = 0; T. Wri testate ( ); Ilcreate a new iteration converqe = T.Step(l; if(!fmcdc1ouble)T.IterNum,(dauble)T.heartbeatPer1od IICheck for search termination ... ,:onverge Via converqence .. Via timeout (maxecl out iterations) (7.:terLimit > -ll&&(T.lterNum >= Via costinq out (exceeclecl allowecl cost) (7.SearchCos1:Limit> -ll&&(T.CumSearchCos: >= T.SearchCos1:Limitl if( terminate && (T.DumpInterval > 0 { ); IIPause.if in sinqle user if(T.SinqleStep) {printf("\n\nEnd-of-step Pause\n"l;getc(s1:dinl;}; } 1f(T. Echo) print f (" \n\nDONE. \ n");

PAGE 222

CLASS: SEARCH (cont) FUNCfIO:-i: Step FUNCTION: step steps from the current iteration to the next iteration using the search method Returns: 1nt flag indicating completion of search. If target convergence has been achieved, then nonzero is returned Created: kcoppes Modified: kcoppes Brought Iterate args into conformance kcoppes Rewrote fimc:tion to correspond to new Iterate memory routines kooppes Correc:ted sign error convergence recognition ============================================================= int Search::Step(void){ int convE"lag 0; returned convergence flag. Default 0 .1.f(th1s->Echo) printf("\n%51i ",this->IterNum); IteratenewIter; newlter new Iterate(); *newlter (this->currentlter); newIter->IterCost = 0; delete newIter; this->IterNum 1; Capture summary information if (this->captureFlag! =0) Evaluate convergence this->ConvMeasure = thls->Method->ConvFnlthis->current:ter); this->CumSearchCost this->currentlter->IterCost; .1.f(th1s->ConvHeasure th.1.s->ConvTgtMetr1c) 1; if(th1s->ConvHeasure < 0) convFlag -1; 1f(this->currentIter->abortFlag 0) convFlag = -100 + this->currentlter->abortFlag; Return return convFlag; }

PAGE 223

CLASS: SEARCH FUNCfION: SetupTraceFUe. WriteState FUNCTION: Opens a trace for writing and sets the write char* fname Name of file to write on Inputs: char* IIIIDde Write mode: own for new "an to append int The number of iterations in a write Created: (Bx: writing once every 5 iterations give a value of 5) ki:oppes void fname, char* mode, int { for writing if( != 'w') (mode != 'a' ( printf("\n Error: %5 is not a valid write mode\n"); = fopen(fname,mode printf("\n Error: cannot open fname); write interval if(iDumpInterval > 0) this->Dumplnterval = iDumplnterval; ex1t(-l); }; exit(-l); }; else ( printf("\n Error: dump interval 1 was entered."); exit(-l); }; FUNCTION: Wri teState Writes the current state information to file Created: ki:oppes void Search::Writestate(void){ } { fprintf(T.TraceFile,"\n"l; fprintf(T.TraceFlle,"%i ",:.IterNuml; fprintf(T.TraceFile,"%i ",T.CumSearchCost); ",:.ConvMeasure); T.currentlter->WriteState(T.TraceFi:e); else { p::intf ("\n Error: r.::ied to wrir.e to undefined trace file"); ex1t(-l); };

PAGE 224

CLASS: SEARCH SetupSummaryData FUNCTION: SetupSummaryData A110cates memory to save summary data from each iteration void search::setupsummaryData(void) 'define T (*th1s) } if(T.IterLimit<=O) p!"i:1.tf(" Cannot: ailQcate %i it.erat:ions !'.IterLimit); exit (-1) ; } unsigned long 5z[2]; 5Z[O] sz[1] if(T.captureFlaq!=O) ( delete T.summaryIterNum; delete }; for(unsiqned int t=Q;t<2;t++) 3*sizeof(double); sz[t] = sz[t]*(unsigned) HGLOBAL hglbl,hglb2; long a = (long if (a=NULL) (printf '" Error: Lock :;f search long memory failed. \n"); exit(-l); }; T.summaryIterNum = a; T.surnrnaryhglb[:;=hglb1; double 0 = (double huge-: if(b=NULL) T.summaryData printf(" Srror: Lock::: search ael memory failed. \n"); exit(-l); }; = b; T.summaryhglb[::=hglb2; T.captureFlag 1; 212

PAGE 225

CLASS: SEARCH FUNCTION: CancelLimits // // FUNCTION: Captureltersummary // Saves summary data from this iteration // Inputs: Iterate. I Iteration to capture data from // Created: kcoppes //========================================================== void Search::CaptureIterSummary(Iterate. 1)( 3+ (unsigned)T.I<:erNum+l) 3+(unsigned)T.lterNum+2j } // // FUNCTION: Cancel Limits I->NumFr.E:vals; // sets the values of all limits to -1 (limits ignored). // // Created: 10131941u:oppes // void Search::cancelLimdts(void){ } this->ConvTgtMetric

PAGE 226

SEARCH Allocate FUNCTION: Al.locate Al.locates memory for search-related parameters. It is that the lengths // of the memory arrays have been encoded in the search structure prior to running // this function. This function is normally only called from the method's SearchInit function. // Created: kcoppes // Modifted: kcoppes Switdled allocations //=========================================================== void Search: : Allocate (void) { } i; if (T mSParmNum) for (i=O;i
PAGE 227

CLASS: SEARCH FUNCfIO:"i/OPERATOR: FUNCTION: AssiqnAndZero Assigns input problem space and method. to this search, sets the si2!:e of the method tailoring parameters to 2!:ero, and turns off state dumping. Zeros echo flags Inputs: ProbSpace* SearchMethod.* pspace method The problem space to search The search method. to employ Created: lii:oppes Modified: kcoppes Added nulling of trace file pointer zeroing of dump interval. kcoppes Added zeroing of echo. and single-step flags; zeroed iteration number and target metric. void Search: : AssiqnAndZero (ProbSpace *pspace, SearchKethod *method) { problem space and method to this search T.PrSpace = pspace; T.Method = methea; number of method tailoring parameters T.mSParmNum = 0; T.fSParmNum = 0; T.cSParmNum 0; T.iSParmNum llNull file trace pointer & dump pointer T.TraceFile NULL; Dumplnterval T.summaryData = NULL; 0; T.summarylterNum = NULL; T.summaryhglb[l] = 0; T.currentlter = NULL; T.heartbeatPer!cd = 500; T.captureFlag = 0; TaevalSum 0; T.evalS;;mCost. :J; T.CumSearchCost 0; = J; 215 0;

PAGE 228

CLASS: SEARCH (cont) OPERATOR: new =========================================================== OPERATOR: new memory for a search Inputs: size t size The number of structures to allocate Created: kl:oppes void* Search: : operator new (size_t size) { HGLOBAL hglb; ::etu::n } lIundef T

PAGE 229

A.2.7 Search Method Type Implementation 217

PAGE 230

Hinclude typedef voidw HGLOBAL; char w name; in t Debug; ............................................. void Matrix char ('Methcdlni t) ( SearchMethod (wmethodPrint) ( SearchMethod -M); wmParms; wcParms[201; int mMParmNum; int fMParmNum; int mMParmDim[201;int fMParmDim[201; double wfParms[201; int "iPar:ns[201; int cMParmNum; int iMParmNum; int cMParmDim[201;int iMParmDim[20];

PAGE 231

HEADER: .............................. : "-:onvE"n) ( IIConverqence ........................................... into void int double double int int int (*PropE"n) ( Iterat.e* currIter, nextIter); 1* !terateIni t) ( SearchIni ( :terate* firstlter); : .. randE"n) (double *pl); fIterDataPts; bIterDataPts; iIterDataPts; llRandom llRandom llNumber SearchMettod(void); void tSearchMethcc& 8' } double randSamplelvoid); void Allecate \ void);

PAGE 232

CLASS: Sean:hMethod FOe: oomethod.cpp This class defines a search typedef "/Oid" HGLOBAL; IJdefine huge 411 method. --file dependencies-------------------------------------------------------------'include class Iterate; class Search; class probSpace; 'ifndef __ OODATA_CPP *include "oodata.cpp" *endif --------------------------------------------------------------------------------'define T (*this)

PAGE 233

CLASS: SEARCH METHOD Class Definition class SearchMethod { public: 11------data members--------char name; (cont) IIMethod. name int Debug; echo flag .. Method related .............................................. void ( IIMethod. Initialization function void TMatri:-: char SearchMethod *M); ( *M); mParms; +cParms[20j; int mMParmNum; int fMParmNum; int mMParmDim[20];int fMParmDim[20]; custom Print function parameter arrays double -fParms[20); int *iParms[20]; method. parm array cI.1mensions int cl-lParmNum; int iMParmNum; int cMParmDim[20];int iMParmDim[20]; .. Search related .............................................. float (*ConvFn) ( IIConvergence metric function Iterate *Iteration); .. Iterate related ............................................. void int (*PropFn) ( ( Iterate); t) ( firstlter); double (+randFn) (double *pi); double int int fIterDataPts; bIterDataPts; int iIterDataPts; IIIteration propagation function IIFirst Iteration Initialization function IISearch Initialization function function function parameters IINUmber of data points per iteration of each type

PAGE 234

CLASS: SEARCH METHOD Class DeftnlUon II------function members---SearchMethod(void); } void = 8l; double randSampleivoid); void Print(void); void Allocate(void); (cont) (cont) Ca11 to attached random function ========================================================= CLASS HISTORY: SearchMethoci Created: MocIl.Oed: 9/28194 HOppes kcoppes I) Propagation fWlIxion args chgd so that does not return iteration. but modifies next iter based on current iter. 2) Moved includes lor steno and stdIib to oosearch.h 3) Fixed ConvFn to float value. 4) Added class preded for class Iterate: kooppe I) Added problcm space arg to Randlnit member Renamed Randlnit to be Searchlnit 3) returned costs to Searchlnit and PropFn 4) Added prob space arg to PropFn member kooppes -added method arg to PropFn member kcoppes removed Method arg from PropFn as method now attached to Iterate kooppes 1) Replaced ProbSpace arg in PropFn with Search 2) Added random and random parameters to class kooppes -added Print member to class kooppes reworked statelparameter scnema kooppes Moved source into Miaosoit Word kooppes Cleaned up Cannalling trace information kooppes operator rev rev 2 rev 4

PAGE 235

CLASS: SEARCH METHOD (cont) FlDlction member descriptions --------------------------------------------------------------------------------Rather than create a specific method subtype for each specific variety of search, a general search method cl.ass was def:Lned with typed funct:Lon po:Lnters to capture the functions of each particul.ar search type. This way a searchtype specific set of functions could created and attached to a search method instance cal.l.ing a function returning a tailored search method instance. implementing in this manner it :Ls only necessary for a search object to know how to deal. with objects of type SearchMethod rather than many subtypes. (This would be unnecessary if better pol.ymorph:Lc cal.l.s were avail.abl.e). The funct:Lon data members in the method cl.ass are unique then in that several. of them are pointers to functions to cal.l.ed by other cl.asses (such as search or Iterate). Their usages and arguments are detailed bel.ow: MEMBER: HethodIni t ( SearcbHethod* H) The HethodInit pOinter points to a funct:Lon that in:Lt:Lal.:Lzes the parameters for a SearcbHethod :Lnstance, all.ocating memory and sett:Lng defaul.t parameters. Inputs: SearchMethod* HBHBBIl: methodPrint ( Pointer to the method :Lnstance SearehMethod* H) The methodPr:Lnt pointer po:Lnts to a funct:Lon that pr:Lnts the parameters for a SearehMethod instance that has been set up by a specific search in:Ltial.:Lzation. Inputs: SearchHethod* Pointer to the method instance ConvFn ( Iterate* Iterat:Lon) The ConvFn pOinter points to a function that produces convergence :Lnformation about a search based upon the input :Lterat:Lon, previous iterat:Lons attached to the input iteration, or search states attached to the iteration or its search. The degree of convergence is returned as a f10ating point value (1 or 0) Inputs: Returns: Sea.rehHethod* f10at Pointer to the method instance convergence value

PAGE 236

CLASS: FUNCTION: SEARCH METHOD Function member descripUoDS (coot) (cont) MEMBER.: propFn ( Iterate. currIter, Iterate. nextIter) The PropFn pointer pOints to a function that transforms the data and method state contained by currIter into new 1teration data and state for nextIter. PropFn also sets the iteration fitness. Inputs: Returns: Iterate Iterate int currIter nextIter HEKBER.: IterateInit( Iterate. Iterate) POinter to the current iteration POinter to the next iteration # of floatinq point operations in PropFn The IterateInit pointer pOints to a function that sets method state and data memory size parameters and allocates state and data memory for Iterate. Inputs: Iterate currIter POinter to the current iteration Iterate nextIter Pointer to the next iteration Returns: int of floatinq point operations in PropFn SearchInit( Iterate. firstIter) The SearchInit pointer pOints to a function that sets method tailorinq parameter size variables, allocates tailorinq parameter memory, and sets initial values for method states and random values for the initial iteration, firstIter. It is assumed that firstIter has pointers to the method instance, search instance, and the problem space. SearchInit also sets the first iteration fitness. Inputs: Returns: Iterate firstIter int MEMBER.: randFn ( double. pl) POinter to the initial iteration of flops in SearchInit The randFn pointer points to the random number generator used by the method. Inputs: bturns: double double Pl POinter to rand function parameter array random value

PAGE 237

CLASS: // // // // // T.rnMParrnNum T.fMP3rrnNum = 0; T.cMParrnNur.1 = 0; '.iMParrnNum } // T.fIterDataPts 0; = 0; fParms[t] = NULL; cParms(tl = NULL; } // // // // = 0; 0; } int i; if (T.mMParmNum) T.mParms new [T. rnMParr.t'lum] ; (T. mParrr.s [i Il define (T. iI, 1); } delete T.fParms[i]; T.fParms(i! new doubleilunsigned intI T.fMParmDim[i]l; } T.iParms[ij new int[T.iMParmDim[i]]; } T.cParms[i] new char[T.cMParr.1Dim[i]I; }

PAGE 238

CLASS: int i,t; T.name = NULL; B.MethodInit; T.methodPrint T.CcnvFn T.IterateInit T. randFn B.ConvFn; B.IterateInit; B. randFn; T.PropFn = B.PropFn; T.SearchInit B.SearchInit; T.randParm B.randParm; } T. fIterDa taPts T.bIterDataPts T.iI::erDataPts T.mMParmNum T.iMParmNum B.fIterDataPts; B.bIterDataPts; B.iIterDataPts; B.mMParmNum; B.iMParmNum; '!'. fMParrrJ>lum T. cMParrr.Num B.fMParmNum; B.cMParmNum; T. mMParmDim [i j T.fMParmDim[ij '!'.iMParmDirn[ij T.cMParmDim[ij T .Allocate ( l ; if T.mParms[il = B.mParms[il; B.mMParmDim[il; B. fMParmDim[ i 8. iMParmDim[il; 3.cMParmDim(il; T.fParms(il [tl iMParmNum;i++) T.iParms[il (tl T.cParms[il [tl B.fParms(i; [tl; B.iParms[ij (tl; B.cParms[lj [tl;

PAGE 239

CLASS: T.methodPrint(this); } : return 7.randFn(T.randParm); }

PAGE 240

A.2.8

PAGE 241

HEADER: MetbodEvaJuatioD FUe: metheval.h This class defines a search method evaluation. Each instance of this class supports measurement of the performance of a method. for a particular problem aver a set of method. parameter rang-es. This class is an extension to the object-oriented search structure. rev 2 lIifndef _HE'l'BEVAL_H lldefine _HE'l'BEVAL_H typedef void* P.Gt09AL; #define hug-e --file dependencies--------------------------------------------------------------'include 'include #include # include 'include "matrix.h" 'include ,. search. h"

PAGE 242

HEADER: Class SearchMethod" ProbSpace* long int int long int M; P; NRuns; CostLimiL.; FnEvalLimit; IterLimi':; IterSampI:1t:vl; HGLOBAL Matrix" char mParmsRange; cParmsRange(20); int mParmNum; int fParmNum; int mParmDim[20);int fParmDim(20); int Echo; H MaX double* fParmsRange[20); int* iParmsRange[20j; int cParmNum; int iParmNum; int cParmDim[20);int iParmDim[20); MethodEvaluationl SearchMethod* Methd, ?robSp3ceIterLim, int Samplntvl,int Runs); -MethodEvaluation(void); void Allocate(void); void 5etMatrixParmRange(int 3rrynum, .:omplex r3nge [2]); void 5etFloatP3rmRangei int 3rrynurr., :"nt parrr.num, ::ouble range (2]); void SetlntParmRange( int 3rrynum, :'::.t parmnum, i:1t r3nge (2] ); void SetChar?armRange( int 3rrynum, :''!'It parmnum, -::har range (2J ); void Get Ranges ( int parmtype,i::.t arrynum,':':it parmnum, double& minRange, double& m3xR3nge); void SetMethodRangel workMethod,int NvarYP3rm,double varyParm[] [5J); void DumpEvalRanges(char* fname, char" double huge .... double huge* double huge* AllocSumrr.3ryTable5et(void); GenerateRunData( SearchMethod* M, double minEndVal[4]); InterpolateStatsl double hugesampleData, int tableWidth, double endVal, int depNdx(J, int interp );

PAGE 243

HEADER: MethodEvaluation -- Class declaration (cont)

    double huge** ParmSweep( double varyParm[][5], int NvaryParm,
                             int baseParm, SearchMethod& workMethod );
    void EchoTableData( double huge** tables, int NTables,
                        int tableInfo[3][6], int tabWidth);
    void DumpSummaryTables(char* froot, double huge** parmTable);
    void SingleParmAnalysis(char* EvalRoot, double varyParm[][5],
                            int NvaryParm, int baseParm);
    void /* name illegible in source */ ParmAnalysis(char* EvalRoot,
                            double varyParm[][5], int NvaryParm);
    long int sampAryLen;
    int sampWidth;
    int NTables;
    int NStatCols;
    int UpdateState(double varyParm[][5], int NvaryParm, int baseParm);
};


CLASS: MethodEvaluation                                   File: metheval.cpp

This class defines a search method evaluation. Each instance of this class
supports measurement of the performance of a method for a particular problem
over a set of method parameter ranges. This class is an extension to the
object-oriented search structure.                                      rev 2

typedef void* HGLOBAL;
#define huge

//--file dependencies---------------------------------------------------------
#ifndef __OOSEARCH_CPP
#include "oosearch.cpp"
#endif
#define T (*this)

CLASS HISTORY: MethEval
Created:  12/17/94  kcoppes
Modified:           kcoppes  Updated GenerateAccAndPrec function
                    kcoppes  Fleshed out class                         rev 1
                    kcoppes  Revised finishing standards to target
                             convergence -20
                    kcoppes  Reworked MethEval to correspond to least
                             squares approach                          rev 2


//CLASS: MethodEvaluation (cont) -- Class Definition

class MethodEvaluation {
public:
    //------data members---------
    SearchMethod* M;              // Method under evaluation
    ProbSpace*    P;              // Problem space to apply to
    long int      NRuns;          // # of runs in precision calculation
    int           CostLimit;      // Max run cost
    long int      FnEvalLimit;    // Max # of function evals in run
    long int      IterLimit;      // Max # of iterations in run
    long int      IterSampIntvl;  // Interval to sample performance
    long int      heartbeatPeriod;// Heartbeat number of steps
    HGLOBAL       parmhglb[4];
    // Parameter ranges & method parm array dimensions
    TMatrix* mParmsRange;         double* fParmsRange[20];
    int*     iParmsRange[20];     char*   cParmsRange[20];
    int mParmNum;     int fParmNum;     int cParmNum;     int iParmNum;
    int mParmDim[20]; int fParmDim[20]; int cParmDim[20]; int iParmDim[20];
    int Echo;                     // Echo flag

    //------function members-----
    MethodEvaluation( SearchMethod* Methd, ProbSpace* Pspace, int IterLim,
                      int SampIntvl, int Runs );
    ~MethodEvaluation(void);
    void Allocate(void);
    void SetTMatrixParmRange(int arrynum, int parmnum, complex range[2]);
    void SetFloatParmRange(  int arrynum, int parmnum, double  range[2]);
    void SetIntParmRange(    int arrynum, int parmnum, int     range[2]);
    void SetCharParmRange(   int arrynum, int parmnum, char    range[2]);
    void GetRanges( int parmtype, int arrynum, int parmnum,
                    double& minRange, double& maxRange);
    void DumpEvalRanges(char* fname, char* mode);
    void RandMethodParms( SearchMethod& workMethod );
    void DumpMethodVaryingParms( SearchMethod& workMethod, FILE* fp );
    void GenerateRunData( /* remainder of declaration as in metheval.h */ );


CLASS: METHOD EVALUATION -- Constructor

MethodEvaluation::MethodEvaluation( SearchMethod* Methd, ProbSpace* Pspace,
                                    int IterLim, int SampIntvl, int Runs ){
    int t, i;
    T.M = Methd;          T.M->Debug = 0;
    T.P = Pspace;         T.Echo = 0;
    T.CostLimit = -1;     T.FnEvalLimit = -1;
    T.IterLimit = 0 + IterLim;
    T.IterSampIntvl = SampIntvl;
    T.heartbeatPeriod = IterLim;
    T.NRuns = Runs;
    // Set parameter counts from the method under evaluation
    T.mParmNum = Methd->mMParmNum;   T.fParmNum = Methd->fMParmNum;
    T.cParmNum = Methd->cMParmNum;   T.iParmNum = Methd->iMParmNum;
    for (t = 0; t < 4; t++) T.parmhglb[t] = 0;
    T.Allocate();
    // Range arrays are dimensioned twice as long as the parameter arrays:
    // the first half holds minima, the second half maxima. Both halves are
    // initialized to the method's current parameter values (zero width).
    if (T.mParmNum) for (t = 0; t < T.mParmNum; t++) {
        T.mParmDim[t] = (Methd->mMParmDim[t]) * 2;
        int mParm2 = T.mParmDim[t] / 2;
        for (i = 0; i < mParm2; i++) {
            *(T.mParmsRange[t].e(i))        = *(Methd->mParms[t].e(i));
            *(T.mParmsRange[t].e(mParm2+i)) = *(Methd->mParms[t].e(i));
        }
    }
    if (T.fParmNum) for (t = 0; t < T.fParmNum; t++) {
        T.fParmDim[t] = (Methd->fMParmDim[t]) * 2;
        int fParm2 = T.fParmDim[t] / 2;
        for (i = 0; i < fParm2; i++) {
            T.fParmsRange[t][i]        = (Methd->fParms[t][i]);
            T.fParmsRange[t][fParm2+i] = (Methd->fParms[t][i]);
        }
    }


CLASS: METHOD EVALUATION -- Constructor (cont), Destructor

    if (T.iParmNum) for (t = 0; t < T.iParmNum; t++) {
        int iParm2 = T.iParmDim[t] / 2;
        for (i = 0; i < iParm2; i++) {
            T.iParmsRange[t][i]        = (Methd->iParms[t][i]);
            T.iParmsRange[t][iParm2+i] = (Methd->iParms[t][i]);
        }
    }
    if (T.cParmNum) for (t = 0; t < T.cParmNum; t++) {
        int cParm2 = T.cParmDim[t] / 2;
        for (i = 0; i < cParm2; i++) {
            T.cParmsRange[t][i]        = (Methd->cParms[t][i]);
            T.cParmsRange[t][cParm2+i] = (Methd->cParms[t][i]);
        }
    }
}

//===========================================================================
// FUNCTION: ~MethodEvaluation
// Deletes the data structures from a Method Evaluation instance.
// Created: kcoppes
//===========================================================================
MethodEvaluation::~MethodEvaluation(void){
    int i;
    if (T.mParmNum) delete[] T.mParmsRange;
    if (T.fParmNum) for (i = 0; i < T.fParmNum; i++) delete T.fParmsRange[i];
    if (T.iParmNum) for (i = 0; i < T.iParmNum; i++) delete T.iParmsRange[i];
    if (T.cParmNum) for (i = 0; i < T.cParmNum; i++) delete T.cParmsRange[i];
}

CLASS: METHOD EVALUATION -- SetTMatrixParmRange, SetFloatParmRange

//===========================================================================
// FUNCTION: SetTMatrixParmRange
// This function sets a specific matrix parameter range.
// Inputs:  int      arrynum    TMatrix array address
//          int      parmnum    Element address in matrix
//          complex  range[2]   Min & max values of range
//===========================================================================
void MethodEvaluation::SetTMatrixParmRange(int arrynum, int parmnum,
                                           complex range[2]){
    if ((arrynum >= T.mParmNum) || (arrynum < 0) ||
        (parmnum >= T.mParmDim[arrynum]/2) || (parmnum < 0)) {
        printf("\n Error (MEval.SetTMatrixParmRange): out of range");
        exit(-1);
    }
    *(T.mParmsRange[arrynum].e(parmnum))                         = range[0];
    *(T.mParmsRange[arrynum].e((T.mParmDim[arrynum]/2)+parmnum)) = range[1];
}

//===========================================================================
// FUNCTION: SetFloatParmRange
// As above, for a float parameter range.
//===========================================================================
void MethodEvaluation::SetFloatParmRange(int arrynum, int parmnum,
                                         double range[2]){
    if ((arrynum >= T.fParmNum) || (arrynum < 0) ||
        (parmnum >= T.fParmDim[arrynum]/2) || (parmnum < 0)) {
        printf("\n Error (MEval.SetFloatParmRange): out of range");
        exit(-1);
    }
    T.fParmsRange[arrynum][parmnum]                         = range[0];
    T.fParmsRange[arrynum][(T.fParmDim[arrynum]/2)+parmnum] = range[1];
}

CLASS: METHOD EVALUATION -- SetIntParmRange, SetCharParmRange (cont)

//===========================================================================
// FUNCTION: SetIntParmRange
// This function sets a specific integer parameter range.
// Inputs:  int  arrynum    Integer array address
//          int  parmnum    Element address in integer array
//          int  range[2]   Min & max of range
// Created: kcoppes
//===========================================================================
void MethodEvaluation::SetIntParmRange(int arrynum, int parmnum,
                                       int range[2]){
    if ((arrynum >= T.iParmNum) || (arrynum < 0) ||
        (parmnum >= T.iParmDim[arrynum]/2) || (parmnum < 0)) {
        printf("\n Error (MEval.SetIntParmRange): out of range");
        exit(-1);
    }
    T.iParmsRange[arrynum][parmnum]                         = range[0];
    T.iParmsRange[arrynum][(T.iParmDim[arrynum]/2)+parmnum] = range[1];
}

//===========================================================================
// FUNCTION: SetCharParmRange
// As above, for a char parameter range.
//===========================================================================
void MethodEvaluation::SetCharParmRange(int arrynum, int parmnum,
                                        char range[2]){
    if ((arrynum >= T.cParmNum) || (arrynum < 0) ||
        (parmnum >= T.cParmDim[arrynum]/2) || (parmnum < 0)) {
        printf("\n Error (MEval.SetCharParmRange): out of range");
        exit(-1);
    }
    T.cParmsRange[arrynum][parmnum]                         = range[0];
    T.cParmsRange[arrynum][(T.cParmDim[arrynum]/2)+parmnum] = range[1];
}

CLASS: METHOD EVALUATION -- GetRanges

//===========================================================================
// FUNCTION: GetRanges
// This function gets a specific parameter range.
// Inputs:  int  parmtype   Code for range type to get:
//                          TMatrix: 0, float: 1, int: 2, char: 3
//          int  arrynum    Array in type to address
//          int  parmnum    Element address in array
//          double& minRange  Target for minimum value of range
//          double& maxRange  Target for maximum value of range
// Created: 2/10/95 kcoppes
//===========================================================================
void MethodEvaluation::GetRanges(int parmtype, int arrynum, int parmnum,
                                 double& minRange, double& maxRange){
    if (parmtype == 0) {          // TMatrix
        minRange = (T.mParmsRange[arrynum].r(parmnum));
        maxRange = (T.mParmsRange[arrynum].r((T.mParmDim[arrynum]/2)+parmnum));
    } else if (parmtype == 1) {   // Float
        minRange = T.fParmsRange[arrynum][parmnum];
        maxRange = T.fParmsRange[arrynum][(T.fParmDim[arrynum]/2)+parmnum];
    } else if (parmtype == 2) {   // Int
        minRange = T.iParmsRange[arrynum][parmnum];
        maxRange = T.iParmsRange[arrynum][(T.iParmDim[arrynum]/2)+parmnum];
    } else if (parmtype == 3) {   // Char
        minRange = T.cParmsRange[arrynum][parmnum];
        maxRange = T.cParmsRange[arrynum][(T.cParmDim[arrynum]/2)+parmnum];
    } else {
        printf("\n Error (MEval.GetRanges): out of range parm type");
        exit(-1);
    }
}


CLASS: METHOD EVALUATION -- DumpEvalRanges

//===========================================================================
// FUNCTION: DumpEvalRanges
// Opens a trace file and writes method range information.
// Inputs:  char* fname  Name of file to write on
//          char* mode   Write mode: "w" for new file, "a" to append
// Created: kcoppes
//===========================================================================
void MethodEvaluation::DumpEvalRanges(char* fname, char* mode){
    // Check write mode & open file for writing
    if ((mode[0] != 'w') && (mode[0] != 'a'))
        printf("\n Error: %s is not a valid write mode\n", mode);
    FILE* DumpFile;
    int i;
    if (!strcmp(fname, "stdout")) DumpFile = stdout;
    else {
        printf("FILE: %s", fname);
        DumpFile = fopen(fname, mode);
    }
    if (DumpFile == NULL) {
        printf("\n Error: (MEval.DumpEvalRanges) can't open %s\n", fname);
        exit(-1);
    }
    fprintf(DumpFile, "\n");
    for (int t = 0; t < 80; t++) fprintf(DumpFile, "-");
    fprintf(DumpFile, "\nMETHOD EVALUATION: ");
    if (T.M->name != NULL) fprintf(DumpFile, "%s\n", T.M->name);
    fprintf(DumpFile, "\nRANGES:\n");
    // Write TMatrix ranges
    fprintf(DumpFile, "\nMATRIX: ");
    if (T.mParmNum) for (t = 0; t < T.mParmNum; t++) {
        // (per-array matrix range output, illegible in the source)
    }

CLASS: METHOD EVALUATION -- DumpEvalRanges (cont)

    // Write float ranges
    fprintf(DumpFile, "\nFLOAT: ");
    if (T.fParmNum) for (t = 0; t < T.fParmNum; t++)
        for (i = 0; i < T.fParmDim[t]/2; i++)
            fprintf(DumpFile, "\n\t%e\t%e", T.fParmsRange[t][i],
                    T.fParmsRange[t][(T.fParmDim[t]/2)+i]);
    // Write integer ranges
    fprintf(DumpFile, "\nINT: ");
    if (T.iParmNum) for (t = 0; t < T.iParmNum; t++)
        for (i = 0; i < T.iParmDim[t]/2; i++)
            fprintf(DumpFile, "\n\t%i\t%i", T.iParmsRange[t][i],
                    T.iParmsRange[t][(T.iParmDim[t]/2)+i]);
    // Write char ranges
    fprintf(DumpFile, "\nCHAR: ");
    if (T.cParmNum) for (t = 0; t < T.cParmNum; t++)
        for (i = 0; i < T.cParmDim[t]/2; i++)
            fprintf(DumpFile, "\n\t%c\t%c", T.cParmsRange[t][i],
                    T.cParmsRange[t][(T.cParmDim[t]/2)+i]);
    if (DumpFile != stdout) fclose(DumpFile);
}

CLASS: METHOD EVALUATION -- RandMethodParms

//===========================================================================
// FUNCTION: RandMethodParms
// This function sets the parameters of method workMethod to random values
// inside the ranges specified by the method evaluation.
// Inputs:  SearchMethod& workMethod  The method to randomize
// Created: kcoppes
//===========================================================================
void MethodEvaluation::RandMethodParms( SearchMethod& workMethod ){
    int parmnum, arrynum;
    double maxRange, minRange;
    // TMatrix
    if (T.mParmNum)
        for (arrynum = 0; arrynum < T.mParmNum; arrynum++)
            for (parmnum = 0; parmnum < T.mParmDim[arrynum]/2; parmnum++) {
                T.GetRanges(0, arrynum, parmnum, minRange, maxRange);
                *(workMethod.mParms[arrynum].e(parmnum)) =
                    minRange + (maxRange-minRange)*workMethod.randSample();
            }
    // Float
    if (T.fParmNum)
        for (arrynum = 0; arrynum < T.fParmNum; arrynum++)
            for (parmnum = 0; parmnum < T.fParmDim[arrynum]/2; parmnum++) {
                T.GetRanges(1, arrynum, parmnum, minRange, maxRange);
                workMethod.fParms[arrynum][parmnum] =
                    minRange + (maxRange-minRange)*workMethod.randSample();
            }
    // Integer and char parameters follow the same pattern, using parm
    // type codes 2 and 3 in GetRanges.
}
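A minimal sketch of the sampling rule RandMethodParms applies to every tunable parameter (the names here are hypothetical, not the thesis's API): each value is drawn uniformly from its evaluation range.

#include <cstdlib>

// Draw each parameter uniformly from [minR[i], maxR[i]].
static double uniform01() { return rand() / (double)RAND_MAX; }

void randomizeParms(double* parms, const double* minR,
                    const double* maxR, int n) {
    for (int i = 0; i < n; ++i)
        parms[i] = minR[i] + (maxR[i] - minR[i]) * uniform01();
}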

CLASS: METHOD EVALUATION -- DumpMethodVaryingParms

//===========================================================================
// FUNCTION: DumpMethodVaryingParms
// This function writes the parameters of method workMethod that have
// non-zero ranges to the specified stream.
// Inputs:  SearchMethod& workMethod  The method to write from
//          FILE*         fp          The stream to write to
// Created: 2/10/95 kcoppes
//===========================================================================
void MethodEvaluation::DumpMethodVaryingParms( SearchMethod& workMethod,
                                               FILE* fp ){
    int parmnum, arrynum;
    double maxRange, minRange;
    // TMatrix
    if (T.mParmNum)
        for (arrynum = 0; arrynum < T.mParmNum; arrynum++)
            for (parmnum = 0; parmnum < T.mParmDim[arrynum]/2; parmnum++) {
                T.GetRanges(0, arrynum, parmnum, minRange, maxRange);
                if ((maxRange - minRange) > 0.0)
                    fprintf(fp, "\t%e ",
                            workMethod.mParms[arrynum].r(parmnum));
            }
    // Float
    if (T.fParmNum)
        for (arrynum = 0; arrynum < T.fParmNum; arrynum++)
            for (parmnum = 0; parmnum < T.fParmDim[arrynum]/2; parmnum++) {
                T.GetRanges(1, arrynum, parmnum, minRange, maxRange);
                if ((maxRange - minRange) > 0.0)
                    fprintf(fp, "\t%e ", workMethod.fParms[arrynum][parmnum]);
            }
    // Integer and char parameters follow the same pattern with "%i".
}


CLASS: METHOD EVALUATION -- GenerateRunData

//===========================================================================
// FUNCTION: GenerateRunData
// Executes a series of search runs with randomized method parameters.
// Created: kcoppes
//===========================================================================
void MethodEvaluation::GenerateRunData( /* args illegible in source */ ){
    // Declare local variables
    Search* SInst;
    SearchMethod workMethod = *(T.M);
    int repeated = 0;   // Flag indicating that parameter set has been repeated
    printf("\n");
    // Open output file
    char fname[20];     // (file name construction illegible in the source)
    FILE* dFile;
    dFile = fopen(fname, "w");
    if ((dFile != stdout) && (dFile == NULL)) {
        printf("\n Error: (MEval.GenerateRunData) can't open file\n");
        exit(-1);
    }
    if (dFile != stdout) fclose(dFile);
    for (long run = 0; run < T.NRuns; run++) {
        // Randomize the method's parameters inside the evaluation ranges
        T.RandMethodParms(workMethod);

CLASS: METHOD EVALUATION -- GenerateRunData (cont)

        //1: Create search instance (random start point)
        SInst = new Search(T.P, &workMethod);
        SInst->ConvTgtMetric = -20;
        SInst->Echo = T.Echo;
        if (T.IterLimit > 0) SInst->IterLimit = T.IterLimit;
        //2: Run with breakpoints
        //A: Run search to generate data
        if (T.Echo) printf(" run");
        SInst->Run();
        //B: Summarize search performance
        dFile = fopen(fname, "a");
        T.DumpMethodVaryingParms(workMethod, dFile);
        // Convergence metrics
        fprintf(dFile, "\t%i",   SInst->IterNum);
        fprintf(dFile, "\t%10e", SInst->currentIter->IterTestDist);
        fprintf(dFile, "\n");
        if (dFile != stdout) fclose(dFile);
        //3: Delete run
        if (T.Echo) printf(" done");
        delete SInst;
    } // end of run loop
    printf("\n");
}
#undef T
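The run loop above reduces to a simple harness: randomize the method's parameters, run one search, and append the varying parameters plus convergence metrics to a file. A self-contained sketch of that pattern, with a toy search standing in for the thesis's Search class and all names hypothetical:

#include <cstdio>
#include <cstdlib>
#include <cmath>

static double uniform01() { return rand() / (double)RAND_MAX; }

// Toy "search run": pretends convergence quality depends on the parameter.
struct RunResult { int iterations; double finalDist; };
static RunResult runOnce(const double* parms, int n) {
    (void)n;
    RunResult r;
    r.iterations = 1 + (int)(100 * parms[0]);
    r.finalDist  = fabs(parms[0] - 0.3);
    return r;
}

int main() {
    const int nParms = 1, nRuns = 5;
    double lo[1] = {0.0}, hi[1] = {1.0}, parms[1];
    FILE* f = fopen("runs.dat", "w");
    if (!f) return 1;
    for (int run = 0; run < nRuns; ++run) {
        for (int i = 0; i < nParms; ++i)          // randomize parameters
            parms[i] = lo[i] + (hi[i] - lo[i]) * uniform01();
        RunResult r = runOnce(parms, nParms);     // one complete search run
        for (int i = 0; i < nParms; ++i) fprintf(f, "\t%e", parms[i]);
        fprintf(f, "\t%i\t%10e\n", r.iterations, r.finalDist);
    }
    fclose(f);
    return 0;
}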


This section contains implementations for the search methods. Each method is contained in an include file with a function to spawn an instance of its particular type of search method.
1. Very Fast Simulated Reannealing
2. Genetic Algorithm Search


B.1 Very Fast Simulated Reannealing Implementation


METHOD: VERY FAST SIMULATED REANNEALING           File: vfsr.cpp       rev 5

This file defines a VFSR search method. Note that this method searches for
the function minimum. This is a reimplementation of the algorithm developed
by Lester Ingber of CalTech (VFSR v9.4).    Note: STATE <-> Iteration.DataPoint

Assumptions:
 1) It is assumed that the user random function implements U(0,1).
 2) Only a fixed-dimension space may be searched.
 3) The reanneal process implicitly assumes that the fitness function
    takes values greater than 0.

Created:   kcoppes
Modified:  kcoppes  Transferred source to Microsoft Word               rev 1
           kcoppes  Applied patches to reflect changes during xfer of
                    matrix class to Word                               rev 2
           kcoppes  Cleaned up format                                  rev 3
           kcoppes  Added setting of Iterate IterFitness property to
                    SearchInit & PropFn                                rev 3a
           kcoppes  Corrected various memory and communication
                    problems                                           rev 4
           kcoppes  Separated temperature initializations for cost &
                    parameters                                         rev 5

//--file dependencies---------------------------------------------------------
// (five standard library includes, illegible in the source)
#ifndef __OOHEADER_CPP
#include "ooheader.cpp"
#endif
//-----------------------------------------------------------------------------


METHOD: Very Fast Simulated Reannealing -- VFSR_Method

//===========================================================================
// FUNCTION: VFSR_Method
// Creates an instance of the VFSR method.
// Returns: SearchMethod
// Created: kcoppes
//===========================================================================
void   VFSR_DEFMETHODINIT( SearchMethod* VFSR );
int    VFSR_SEARCHINIT( Iterate* firstIter );
int    VFSR_PROPFN( Iterate* currIter, Iterate* nextIter );
float  VFSR_CONVFN( Iterate* Iteration );
double VFSR_uniform( double* p );
void   VFSR_PRINT( SearchMethod* M );
void   VFSR_ITERATESIZE( Iterate* I );
void   VFSR_METHODPARMSIZE( SearchMethod* VFSR );
void   VFSR_SEARCHPARMSIZE( Search* S );
int    VFSR_ReAnneal( Iterate* Iter );
void   VTangent( TMatrix& d_dx, complex (*F)(TMatrix& p), TMatrix p,
                 TMatrix dx, TMatrix prange, int ptypes[], int includeint,
                 int* Cost );
double VFSR_CalcTempDouble( Iterate* I, int index, double newTemp, int N,
                            int* Cost );
//-----------------------------------------------------------------------------
SearchMethod VFSR_Method(void){
    SearchMethod M;
    // Set pointers to method functions
    M.MethodInit  = VFSR_DEFMETHODINIT;
    M.PropFn      = VFSR_PROPFN;
    M.ConvFn      = VFSR_CONVFN;
    M.randFn      = VFSR_uniform;
    M.randParm    = new double;
    M.IterateInit = VFSR_ITERATESIZE;
    M.SearchInit  = VFSR_SEARCHINIT;
    M.methodPrint = VFSR_PRINT;
    // Set default values for method parameters and return
    M.MethodInit(&M);
    return M;
}


METHOD: Very Fast Simulated Reannealing -- METHOD PARAMETERS (cont)

//===========================================================================
// I. METHOD PARAMETERS:
//   Method values that are invariant from application to application of the
//   method are attached to the method structure. Macros are used to fit the
//   values to the method and to access them.
//===========================================================================
#define M_NumParmMtx 0   // # of method parm matrices
#define M_NumParmDbl 1   // # of method parm double arrays
#define M_NumParmInt 1   // # of method parm integer arrays
#define M_NumParmChr 0   // # of method parameter char arrays
//-----------------------------------------------------------------------------
// MATRIX: (mMParms) none
//-----------------------------------------------------------------------------
#define M_DimParmMtx {0}   // lengths of method parm matrices
//-----------------------------------------------------------------------------
// DOUBLE: (fMParms) see below
//-----------------------------------------------------------------------------
#define M_DimParmDbl {16}  // lengths of method parm double arrays
#define VFSR_minRange(M)            (M->fParms[0][0])  // Minimum allowed parm range
#define VFSR_minVal(M)              (M->fParms[0][1])  // Minimum value (small float)
#define VFSR_maxReAnnInd(M)         (M->fParms[0][2])  // Maximum ReAnneal Index
#define VFSR_ReAnnRescale(M)        (M->fParms[0][3])  // Reanneal rescale value
#define VFSR_tempRescalePwr(M)      (M->fParms[0][4])  // Temperature rescale power
                                                       //  (derive from ReAnnRescale)
#define VFSR_tempAnnealScale(M)     (M->fParms[0][5])  // Temp annealing scale factor
#define VFSR_tempRatioScale(M)      (M->fParms[0][6])  // Temp ratio scale factor
#define VFSR_costParmScale(M)       (M->fParms[0][7])  // Cost to parm temp scale fact
#define VFSR_defDeltaX(M)           (M->fParms[0][8])  // Default delta-x
#define VFSR_defInitTemp(M)         (M->fParms[0][9])  // Default init temperature(parms)
#define VFSR_defCostInitTemp(M)     (M->fParms[0][10]) // Default init temperature(cost)
#define VFSR_costPrecision(M)       (M->fParms[0][11]) // Min change in cost allowed
#define VFSR_maxCostRepeat(M)       (M->fParms[0][12]) // Max number of cost repeats
#define VFSR_acceptToGenMinRatio(M) (M->fParms[0][13]) // Min accept to gen ratio
#define VFSR_evalScaling(M)         ((M)->fParms[0][14]) // Scales the entire
                                                       //  performance measurement
#define VFSR_iterGain(M)            ((M)->fParms[0][15]) // Performance metric gain


METHOD: Very Fast Simulated Reannealing -- METHOD TAILORING PARAMETERS

//-----------------------------------------------------------------------------
// INT: (iMParms) see below
//-----------------------------------------------------------------------------
#define M_DimParmInt {3}   // lengths of method parm integer arrays
#define VFSR_testPeriod(M)    (M->iParms[0][0]) // # of acceptances between tests
#define VFSR_optIncludeInt(M) (M->iParms[0][1]) // Option to include ints in reanneal
#define VFSR_optNoReanneal(M) (M->iParms[0][2]) // Option to skip reanneal
//-----------------------------------------------------------------------------
// CHAR: (cMParms) none
//-----------------------------------------------------------------------------
#define M_DimParmChr {0}   // lengths of method character arrays

//===========================================================================
// II. METHOD TAILORING PARAMETERS:
//   Method values that can vary in size or value with the problem space but
//   that remain constant from iteration to iteration are attached to the
//   search. Macros fit particular parameters to the search structure.
//===========================================================================
#define S_NumParmMtx 2     // # of method parm matrices
#define S_DimParmMtx { /* lengths illegible in source */ }
//............................................................................
// [0] Temperature factors (N+1 x 1 matrix)
//     element 0:     Cost temp scale factor
//     element 1:N+1  Parameter temp factors
//............................................................................
#define VFSR_tScaleLoc(S)     (S->mSParms[0])
#define VFSR_tScale(S,t)      (*(VFSR_tScaleLoc(S).e(t)))
#define VFSR_tScaleR(S,t)     (VFSR_tScaleLoc(S).r(t))
#define VFSR_tScaleCost(S)    VFSR_tScale(S,0)
#define VFSR_tScaleCostR(S)   VFSR_tScaleR(S,0)
#define VFSR_tScaleParm(S,t)  VFSR_tScale(S,t+1)
#define VFSR_tScaleParmR(S,t) VFSR_tScaleR(S,t+1)


METHOD: Very Fast Simulated Reannealing -- METHOD TAILORING PARAMETERS (cont)

//............................................................................
// [1] Delta x to be used in tangent calcs (N+1 x 1 matrix)
//     element 0:N  Parameter delta x
//............................................................................
#define VFSR_dxLoc(S)      (S->mSParms[1])
#define VFSR_DeltaXP(S)    (&VFSR_dxLoc(S))
#define VFSR_DeltaX(S,t)   (*(VFSR_dxLoc(S).e(t)))
#define VFSR_DeltaXR(S,t)  (VFSR_dxLoc(S).r(t))
//-----------------------------------------------------------------------------
// DOUBLE: (fMParms) none
//-----------------------------------------------------------------------------
#define S_NumParmDbl 0     // # of method double arrays
#define S_DimParmDbl {0}   // lengths of method tailor dbl arrays
//-----------------------------------------------------------------------------
// INT: (iMParms) none
//-----------------------------------------------------------------------------
#define S_NumParmInt 0     // # of method integer arrays
#define S_DimParmInt {0}   // lengths of method tailor int arrays
//-----------------------------------------------------------------------------
// CHAR: (cMParms) none
//-----------------------------------------------------------------------------
#define S_NumParmChr 0     // # of method parameter char arrays
#define S_DimParmChr {0}   // lengths of method tailor char arrays


METHOD: Very Fast Simulated Reannealing -- METHOD STATE DEFINITIONS

//===========================================================================
// III. METHOD STATE DEFINITIONS:
//   Method values that can vary in size or value with the search or problem
//   space are attached to the search. Macros fit particular state names to
//   the search structure.
//===========================================================================
//-----------------------------------------------------------------------------
// MATRIX: (mState) see below. Note: 1) N is the # of problem parameters
//   2) An R suffix indicates a real value; its absence indicates an lvalue
//-----------------------------------------------------------------------------
#define I_NumStateMtx 2    // # of method state matrices
#define I_DimStateMtx { /* lengths illegible in source */ }
//............................................................................
// [0] Cost & parameter initial temperatures (N+1 x 1 matrix)
//     element 0:     Initial cost temperature
//     element 1:N+1  Initial parm temperatures
//............................................................................
#define VFSR_initTempLoc(I)     (I->mState[0])
#define VFSR_initTemp(S,t)      (*((VFSR_initTempLoc(S)).e(t)))
#define VFSR_initTempR(S,t)     ((VFSR_initTempLoc(S)).r(t))
#define VFSR_initCostTemp(S)    VFSR_initTemp(S,0)
#define VFSR_initCostTempR(S)   VFSR_initTempR(S,0)
#define VFSR_initParmTemp(S,t)  VFSR_initTemp(S,t+1)
#define VFSR_initParmTempR(S,t) VFSR_initTempR(S,t+1)
//............................................................................
// [1] Search temperatures (N+1 x 1 matrix)
//     element 0:     Cost temperature
//     element 1:N+1  Parameter temperatures
//............................................................................
#define VFSR_tempLoc(I)      (I->mState[1])
#define VFSR_Temp(S,t)       (*(VFSR_tempLoc(S).e(t)))
#define VFSR_TempR(S,t)      (VFSR_tempLoc(S).r(t))
#define VFSR_costTemp(S)     VFSR_Temp(S,0)
#define VFSR_costTempR(S)    VFSR_TempR(S,0)
#define VFSR_parmTemp(S,t)   VFSR_Temp(S,t+1)
#define VFSR_parmTempR(S,t)  VFSR_TempR(S,t+1)


METHOD: Very Fast Simulated Reannealing -- METHOD STATE DEFINITIONS (cont)

//-----------------------------------------------------------------------------
// DOUBLE: (fState) see below
//-----------------------------------------------------------------------------
#define I_NumStateDbl 2      // # of method state double arrays
#define I_DimStateDbl {3,1}  // lengths of method state dbl arrays
//............................................................................
// [0] Search performance
//     This is a weighted sum of how well the search has performed.
//............................................................................
#define VFSR_evalSum(I)      (I->fState[0][0])  // weighted search performance
#define VFSR_evalSumCost(I)  (I->fState[0][1])  // weighted search performance (cost)
#define VFSR_iterWeight(I)   (I->fState[0][2])  // weight on iter performance
//............................................................................
// [1] Accepted to generated ratio
//     This is a monitor of the success of the search in finding new valid
//     points.
//............................................................................
#define VFSR_acceptToGenRatio(I) (I->fState[1][0])  // ratio of accepted to
                                                    //  generated data points
//-----------------------------------------------------------------------------
// INT: (iState) see below
//-----------------------------------------------------------------------------
#define I_NumStateInt 2        // # of method state integer arrays
#define I_DimStateInt {N+1,9}  // lengths of method state int arrays
//............................................................................
// [0] Generation index: # of times temperature cooling applied (N+1 length)
//     element 0:     Cost acceptances index
//     element 1:N+1  Parameter generations index
//............................................................................
#define VFSR_genIndexLoc(I)    (I->iState[0])
#define VFSR_genIndex(S,t)     ((VFSR_genIndexLoc(S))[t])
#define VFSR_indCostAcc(S)     VFSR_genIndex(S,0)
#define VFSR_indParmGen(S,t)   VFSR_genIndex(S,t+1)


METHOD: Very Fast Simulated Reannealing -- METHOD STATE DEFINITIONS (cont)

//............................................................................
// [1] Bookkeeping counters (9 length)
//     0: Number of fitness repeats within cost precision
//     1: Number of points accepted by the function recently
//     2: Number of points recently generated by the function
//     3: Best number generated saved
//     4: Number of data points generated
//     5: Best number of acceptances saved
//     6: Number of data points accepted
//     7: Number of acceptances saved
//............................................................................
#define VFSR_indexCostRepeat(I)    ((I->iState[1])[0])
#define VFSR_recentAcceptances(I)  ((I->iState[1])[1])
#define VFSR_recentNumGenerated(I) ((I->iState[1])[2])
#define VFSR_bestNumGenSaved(I)    ((I->iState[1])[3])
#define VFSR_numberGenerated(I)    ((I->iState[1])[4])
#define VFSR_bestNumAccSaved(I)    ((I->iState[1])[5])
#define VFSR_numberAccepted(I)     ((I->iState[1])[6])
#define VFSR_numberAccSaved(I)     ((I->iState[1])[7])
//-----------------------------------------------------------------------------
// CHAR: (cState) none
//-----------------------------------------------------------------------------
#define I_NumStateChr 0    // # of method state char arrays
#define I_DimStateChr {0}  // lengths of method state char arrays


METHOD: Very Fast Simulated Reannealing -- ITERATION LAYOUT

//===========================================================================
// IV. ITERATION LAYOUT (DATA)
//   FloatDataPoint: [0] savedPoint(I)  Propagated point
//                   [1] bestPoint(I)   Best seen point
//===========================================================================
#define I_fDataStates 2  // # of float data points per Iter
#define I_iDataStates 0  // # of int data points per Iter
#define I_bDataStates 0  // # of binary data points per Iter
//-----------------------------------------------------------------------------
// FLOAT/COMPLEX DATA POINTS: (fDATA) see below
// Notes: 1) No suffix indicates a pointer to a data point
//           An M suffix indicates the matrix in the data point
//           An E suffix indicates a parameter inside a data point (an lvalue)
//           An R suffix indicates the real value of a parameter element
//-----------------------------------------------------------------------------
//............................................................................
// [0] Saved point: the last data point generated
//............................................................................
#define VFSR_savedP(I)      (I->fDATA[0])
#define VFSR_savedPM(I)     (VFSR_savedP(I).DATA)
#define VFSR_savedPE(I,t)   (*(VFSR_savedPM(I).e(t)))
#define VFSR_savedPER(I,t)  (VFSR_savedPM(I).r(t))
//............................................................................
// [1] Best point: the best data point generated
//............................................................................
#define VFSR_bestP(I)       (I->fDATA[1])
#define VFSR_bestPM(I)      (VFSR_bestP(I).DATA)
#define VFSR_bestPE(I,t)    (*(VFSR_bestPM(I).e(t)))
#define VFSR_bestPER(I,t)   (VFSR_bestPM(I).r(t))
//-----------------------------------------------------------------------------
// INTEGER DATA POINTS: (iDATA) none
//-----------------------------------------------------------------------------
// BINARY DATA POINTS: (bDATA) none
//-----------------------------------------------------------------------------


METHOD: Very Fast Simulated Reannealing -- RANDOM GENERATOR (cont)

// DEFINES: Error return codes
#define X_PARM_TEMP_TOO_SMALL -1
#define X_COST_TEMP_TOO_SMALL -2
#define X_COST_REPEATING      -3

//===========================================================================
// MACRO: EXP_CHECK
// Exponent check. Checks to see if x is between MIN_EXP and MAX_EXP and
// limits it if it is not.
// Adapted: Based on VFSR v9.4 code, Ingber
//===========================================================================
#define MIN_EXP (-M_LN10 * log(abs( FLT_MIN_EXP )))
#define MAX_EXP ( M_LN10 * log(abs( FLT_MAX_EXP )))
#define EXP_CHECK(x) (((x) < MIN_EXP) ? MIN_EXP : (((x) > MAX_EXP) ? MAX_EXP : (x)))

//===========================================================================
// MACRO: VFSR_OPENRANGE
// Returns 1 if range of parameter pNum in dimension Dim for pspace is
// greater than VFSR_minRange.
// Inputs:  Iterate*  I     Iteration of interest
//          int       Dim   Dimension to interrogate
//          int       pNum  Parameter number
// Created: kcoppes
//===========================================================================
#define VFSR_OPENRANGE(I,Dim,pNum) \
    PSPACE_RANGE_WIDER(I->PrSpace,Dim,pNum,VFSR_minRange(I->Method))

//===========================================================================
// FUNCTION: VFSR_uniform
// Random function U(0,1)
// Created: kcoppes
//===========================================================================
double VFSR_uniform(double* p){
    p[0] = 0;   // unused parameter
    double x;
    x = ((double)rand()) / 2147483647;
    return x;
}
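The EXP_CHECK clamp guards every exp() in the annealing code against overflow and underflow. A sketch of the same idea written as a function, with bounds chosen for double precision (an assumption; the macro above is built from float exponent limits):

#include <cmath>
#include <cfloat>

// Clamp the exponent before calling exp(), so extreme acceptance indices
// or temperature ratios cannot overflow or underflow.
double safeExp(double x) {
    const double lo = log(DBL_MIN) + 1.0;   // below this, exp underflows to 0
    const double hi = log(DBL_MAX) - 1.0;   // above this, exp overflows
    if (x < lo) x = lo;
    if (x > hi) x = hi;
    return exp(x);
}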


METHOD: Very Fast Simulated Reannealing -- VFSR_DEFMETHODINIT

//===========================================================================
// FUNCTION: VFSR_DEFMETHODINIT
// Initializes the VFSR method with default values.
//===========================================================================
void VFSR_DEFMETHODINIT( SearchMethod* VFSR ){
    // Set # of method parameters & allocate memory (function defined later)
    VFSR_METHODPARMSIZE(VFSR);
    // Assign method name
    VFSR->name = "Very Fast Simulated Reannealing";
    // Default parameter values
    VFSR_minRange(VFSR)            = DBL_EPSILON;
    VFSR_minVal(VFSR)              = DBL_EPSILON;
    VFSR_testPeriod(VFSR)          = 100;
    VFSR_optIncludeInt(VFSR)       = 0;
    VFSR_optNoReanneal(VFSR)       = 0;
    VFSR_maxReAnnInd(VFSR)         = 32766;
    VFSR_defInitTemp(VFSR)         = 5.9360548e+9;
    VFSR_defCostInitTemp(VFSR)     = 10.0;
    VFSR_tempAnnealScale(VFSR)     = 1.0e-5;  // (exponent partly illegible)
    VFSR_tempRatioScale(VFSR)      = 1.0;     // (value partly illegible)
    VFSR_costParmScale(VFSR)       = 0.001;
    VFSR_defDeltaX(VFSR)           = VFSR_minVal(VFSR);
    VFSR_costPrecision(VFSR)       = 0;       // (value illegible in source)
    VFSR_maxCostRepeat(VFSR)       = 5;
    VFSR_acceptToGenMinRatio(VFSR) = 1e-4;
    VFSR_iterGain(VFSR)            = 1 + 1e-5;
    VFSR_evalScaling(VFSR)         = 1;       // Performance scaling
                                              // (value illegible in source)
    // Configure random sampling function (uniform has no parameters)
    *(VFSR->randParm) = 0;
}


METHOD: Very Fast Simulated Reannealing -- VFSR_SEARCHINIT (cont)

//===========================================================================
// FUNCTION: VFSR_SEARCHINIT
// Initializes the VFSR method search and first iteration.
// Inputs:  Iterate* I   The iteration to initialize. It is assumed that
//                       the iteration has pointers to the VFSR method
//                       instance, search instance, and problem space.
// Returns: int Cost
// Adapted:  kcoppes from VFSR v9.4, L. Ingber
// Modified: kcoppes Added setting of IterFitness.
//===========================================================================
int VFSR_SEARCHINIT( Iterate* I ){
    // Allocate memory for method states (function defined later)
    VFSR_SEARCHPARMSIZE(I->Srch);
    //0: Define variables
    int i, pNum, Cost = 0;          // index & cost variables
    SearchMethod* M = I->Method;    // Pointer to VFSR method
    Search*       S = I->Srch;      // Pointer to search
    ProbSpace*    P = I->PrSpace;   // Pointer to problem space
    int           N = P->maxparms;  // Dimension of problem space
    //1: Calculate temperature scale factor for cost and parameters &
    //   convert default delta-x to complex form
    double tempScale = -log(VFSR_tempRatioScale(M)) *
                        exp(-log(VFSR_tempAnnealScale(M)) / N);
    complex defDx = VFSR_defDeltaX(M);
    VFSR_tScaleCost(S) = tempScale * VFSR_costParmScale(M);
    //2: Initialize generation indices & init temperatures for cost and
    //   parameters; temp scale factors & delta-x's for parameters only
    for (i = 0; i <= N; i++) VFSR_genIndex(I,i) = 1;
    for (i = 1; i <= N; i++) VFSR_tScale(S,i) = tempScale;
    VFSR_initCostTemp(I) = VFSR_defCostInitTemp(M);
    for (pNum = 0; pNum < N; pNum++) {
        VFSR_initParmTemp(I,pNum) = VFSR_defInitTemp(M);
        VFSR_DeltaX(S,pNum)       = defDx;
    }
PAGE 271

METHOD: SEARCH IN IT ............................ VFSR_savedP(I).dimension N; double mnRange,mxRange; v'FSR_savedPM(I).m = N; VFSR_savedPM(I).n = 1; for (i=O;iGetRangeIN,i,mnRange,mxRange); VFSR_savedPE(I,i) M->randSamp1e() ) + mnRange; } Cost VFSR_savedP(I).Evaluate\?); VFSR_bestP(I) VFSR_savedP(I); 114:Set VFSR_indexCostRepeatII) 0; VFSR_bestNumGenSaved(I) 0; VFSR_numberGenerated (I) 0; VFSR_recentAcceptances(I) = 0; VFSR_recentNumGenerated(I) 0; VFSR_acceptToGenRatio(I) 1.0; VFSR_bestNumAccSaved(I) 1; VFSR_numberAccepted(I) 1; VFSR_numberAccSaved(I) = 1; 115:Set I->IterFitness = VFSR_bestP(I).Fitness; VFSR_e'lalSul:lII) 0.0; VFSR_evalSumCost(I) VFSR_iterWeight(I) VFSR_CONVFN (I); return Cost; } 0.0; 1.0; IISet II II

PAGE 272

Very Fast Simulated Reannealing VFSR PROPFN (cont) FUNCfION: FUNCTION: VFSR_PROPFN Propagate from one iteration to the next. Notes: nextIter starts off identical to currIter (including states) therefore reference all states to nextIter Inputs: Iterate* Iterate* int currIter The iteration to propaqate from The iteration propagate to The propaqation cost Returns: Adapted: k&:oppes from VFSR v9.4. L.lngber :\Iodifted: k&:oppes Added setting of IterFltne99. int *currIter, Iterate* I,{ IIDefine some used variables & new data point int index.CosI. 0; instance Temporary cost & index vars POinter to VFSR method SearchMethod" Search" ProbSpace* M S P I->Method; I->Srcn; I->PrSpace; Pointer to search Pointer to problem space FloatDataPoint newData; newData VFSR_savedP(currlter); int double int do ( newData.dimension; InvDim = 1 / (double) N; PointAccepted 0; NEW CANDIDATE STATE Dimension of prObl.em space Cost += 1; Point accepted fl.ag Cost Ila:SET FLAG SPBCIFYING PARK rNDEX & SUFFICIENT RANGE FOR PARK-----------------int pNum index-I; Cost += 1; int ParmWOpenRange (pNum 0) VFSR_OPENRANGE(I,N,pNum); Cost += 1; NEW TBHPERATURES ------------------------------------------------Note: cost temperatures are in the zeroth el.ement index = 0 Cost teqJs are calculated ParmNopenRange = I Parm temps are calculated Onl.y process if parameter range is sufficiently l.arge or cost is being processed ----------------------------------------------------------------------------

PAGE 273

FUNCTION: Very Fast Simulated Reannealing VFSR PROPFN if( (index==O)I IParmHOpenRange) { Cost += 1; double zq = ((double)pow( (double) VFSR_gen:i:::dex(I,index), (double)InvDim )); double logNewTempRatio = EXP_CHECK( if (exp(lOgNeW'l'empRatio) VFSR._minVal(M ( if(index==O) I->abortFlag = X PARM TEMP TOO_SMALL; else I->abortFlag = X_COST_TEMP_TOO_SMALL; printf("\nabort -too small temperature"); return Cost; VFSR_Temp(!,index) = .. exp(lcgNewTempRatio); end of temp calc 9 zq \ Cost += 3; Cost += 1; Ilc:GBNERATE NEW PARAMETERS----------------------------------------------------if (ParmHOpenRanqe) { Ilparameters w. sufficient range only temporary variables int invalid_points = -1; int valid_step 0; IIExplore region around last point do ( Cost += IIGenerate random determine its sign double double = M->randSample(); dir = u<0.5 ? -l.Q : 1.0; function away from current value dependent upon temperature complex ParrnX = Ilextract element double parmstep = .jir P->Range(N,pNumJ 'iFSR_parr::TempR(!,pNum) .. ( real(pow( 1.0 + l.O/VFSR_parmTempR.;I,pNum), fabs(2u -1)))-1 ); double newval = (double) real (ParmX + parmstep); IIValidate inranqe condition ++invalid_points; Ilcount of tries valid_step = ( (P->IsValidParm( newval, pNum)) (parmstep!=O) ); if( valid_step) -newData. DATA.e (pNum) = newval; While (!valid_step); parameter generation numbers (cooling time indices) Cost += 13; VFSR_genlndex(I,index) += 1; Cost += 1; }; of exploration ------------------------------------------------} ; of indeX for loop

PAGE 274

METHOD: newData.Evaluate(P); Cost += 1; 113.ACCEPT VFSR_numberAccSaved(I) += 1; VFSR_recentNumGenerated(I) += 1; VFSR_numberGenerated(I) 1; Cost += 1; Cost += 1; Cost 1; double acceptlndex (newData.Fitness VFSR_savedP(currlterl.Fitness) / (VFSR_costTempR(I) I; M->randSample( :; Cost 3; dO'Jcle rs double ai (double) exp(EX?_CHECK(-acceptlndexl I; ( VFSR_savedP (I) = newData; } VFSR_recentAcceptances(I) += 1; VFSR_numberAccepted(I) += 1; VFSR_indCostAcc(I) += 1; VFSR_numberAccSavedII) PointAccepted = 1; VFSR _:"..lmberAccepted (I Cost += Cost += Cost += VFSR_3cceptToGenRatio(I) (double) :IFSR_recentAcceptances I!) + 1) (double) (VFSR_recentNumGenerated(I) 1); 1; Cost 1; 1; 1; 114.CBECK 'IFSR_best.P(I) newData; VFSR_recent.Accept.ances(I) VFSR_recentNumGenerat.edII) 0; VFSR_bestNumGenSaved(I) VFSR_numberGenerated(I); VFSR_bestNumAccSaved(I) VFSR_indexCostRepeat(I) 0; I->IterFi tness VFSR_bestP(I).Fitness;

PAGE 275

METHOD: FUNCfION: Very Fast Simulated Reannealing VFSR PROPFN //S.PERIODICALLY REANNEAL---------------------------------------------------------int testPeriodElapsed = int) (VFSR_indCostAcc(I) % VFSR_testPerlod(M)) == 0); 1f( testPeriodE1apsed (VFSR.-numberAccSaved(I) (VFSR_acceptToGenRat1o(I < VFSR_acceptToGenH1nRat10(H Cost += 1; if ( VFSR_acceptToGenRat1o(I) VFSR_recentAcceptances(I) = VFSR_recentNumGeneratedtI) 0; if fabs( VFSR_savedP(I) Fitness VFSR_bestP(currlter) Fitness < VFSR_costprec1s1on(H) ) { VFSR indexCcstRepeat(II = VFSR_indexCostRepeat(I) + 1; if ( VFSR_1ndexCost:Repeat(I) >= VFSR_DlaXCost:Repeat(H) I->abortFlag X_COST_REPEATING; return Cost; }e1se VFSR_indexCostRepeat (I) 0; printf("\n\n Reanneal"); 1f(!VFSR_optNoReannea1(H Cost VFSR_ReAnneal(I); if (I->Bcho) { } printf("\n%d",I->Srch->IterNum); printf(" C:%4g", ,double) Cost); printf(" T:%6.3e", VFSR_costTempR(I)); printf'" A:%6.3e", VFSR_S3VedP(I) .Fitness); VFSR_oestP(I).Print:l; }: //---------------------------------------------------------------------------wh11e (Po1ntAccepted==O); I->ECHOVAL(" C:%4g", (double) Cose); I->ECHOVAL(" T:%6.3e",VrSR_costTempR(I)); I->ECHOVAL(" A:%6.3e",VFSR_savedPII). Fitness); I->ECHOVAL(U B:%6.3e",VFSR_bestP(I) .Fitness); Cost:

PAGE 276

METHOD: Very Fll!It Simulated RellRDealing (CORt) FUNCfION: VFSR ReAnneal // // FUNCTION: VFSR_ReAnneal // Readjusts temperature schedules for search. // Inputs: Iterate* int Cost Reference iteration for which to reanneal Cost of reannealling // Returns: // Adapted: kl:oppes // kl:oppes ncbanged dreaIma:l for reaIma:l refted cbg matrix // int *I){ I/O.DEFINE SOME USED int Cost 0, tC:osti ProbSpace* P Search" S I->Method; I->PrSpace; I->Srch; // VFSR method pointer // Problem space painter // Pointer to search TMatrix p; p VFSR_bestP(I) // Point to reanneal int N p.length(); // of data int Naddr N-l; // Address of dimension N double InvDim = / N; TMatrix dx; dx VFSR DeltaX? TMatrix prange; prange =P->parmrange[Naddrj; TMatrix tangents; VTangent( tangents, P->userFitness, dx, prange, PSPACE_INTPARM(P,Naddr), VFSR_cpt!ncludelntIM), stCost); PROCESS AND for(int index=O;index<=N;index++) { Cost +=1; Cost +=1; //------------------------------------------------------------------------------// Note: cost temperatures are in the zeroth element // Only process if cost is being processed or parameter range is // sufficiently large, there is no conflict in int processing // tangents are sufficiently large //------------------------------------------------------------------------------int pNum index-1; Cost +=1; int ParmWOpenRange (pNum >= 0) && VFSR_OPENRANGE(I,N,pNum); (index=O) ParmHOpenRange // Range is lili( (VFSR_optlncludelnt(M) Ints to included IPSPACE_rNTPARH(P,N) [pNum) ) ) // or non-int lili( fabs(tangents.r(pNum VFSR._minVal(M { Cost +=1;

PAGE 277

METHOD: Very Fast Sbnulated Reannealing \'FSR ReAnneal //a:SET STARrING TEMPERATURES & ASSIGN VALUES FOR TEMPERATURE RESET---------double cempDouble, newTemp; :1.f ( :1.ndex=O ) { //----------------------..:------------------------------------------------//Pass 0: REANNEAL (Reset the index of cost acceptances to take // f:1.ner :1.n cost terra:1.n :1.nto account) //-----------------------------------------------------------------------:1.f(VFSR_:1.nitCostTempR(I) > fabs(VFSR_savedP(I).F:1.tneSS) ) VFSR_initCostTemp(I) = fabs(VFSR_savedP(I).Fitness); if (VFSR_costTempR(I) fabs ( VFSR_bestP (I) Fitness ) ) { newTemp = newTemp = VFSR_cos,:';'empR(I); Cost +=1; tempDouble VFS,,_CalcTempDouble ( I, ':'::dex, newTemp, N, &tCoSt); Cost +=tCost; } ( //-----------------------------------------------------------------------//Pass>O: RESET PARAMETER GENERATIONS IF INDEX RESET TOO LARGE //Assign va1ues for temperature reset //-----------------------------------------------------------------------} newTemp = VFSR_parmTempR(I,pNum) tangents.drealmax() fabs(tangents.r(pNum)i; if ( newTemp VFSR_:1.n:1.tTempR(I,pNum) ) Cost +=3; tempDouble = VFSR_CalcTempDouble(I, ':'ndex, newTemp, N, &tCost); Cost +=tCost; }e1se tempDouble = 1.0; //b:RESET TEMPERATURE GENERATIONS-----------------------------------------tempDoub1e > (doub1e) VFSR_ma.xReAnnlnd(H) ) ( } double logNewTempRatio = EXP_CHECK( -VFS,,_tScaleR(S,index) .. real tempDouble, InvDim )) ); double newTemp = VFSR_initTempR(I,indexl exp(logNewTempRatio); tempDouble VFSR_ReAnnRescale(M); VFSR_initTemp(I,index)= newTemp .. pow ( 'IFSR_initTempRlI,index) newTemp, '/FSR_tempRescalePwr (M) Cost +=7; VFSR_genIndex(I,indexI = (int)tempDouble; }; // endprocessinq :1.f-------------------------------------------------------}; // end :1.ndex loop------------------------------------------------------------return Cost;

PAGE 278

METHOD: Very Fast Simulated Reanneallng VFSR CulcTempDouble (cont) ========================================================= FUNCTION: VFSR CalcTempDouble Loqarithmic calculation for tempDouble in RBANNEAL. Inputs: Iterate* int index Iteration pointing to search & method Index for ref temperatures and scaling Returns: Adapted: double int int* newTemp Cost clcuDle tempDouble 10131194 kcoppes factors, selects cost or parameter New temperature to employ in 109 calc NUmber of parameters Cost of calling function clcuDle VFSR_calcTempDouble( Iterate *I, int index, double newTemp, int N, int* Cost ){ } SearchMethod* Search" S I->Method; I->Srch; double logInitCurrTempRatio Method pointer Search pointer log( (VFSR_minVal(MI + VFSR_initTempR(I,indexl (VFSR_minVal(MI + newTempl ); double tempDouble = VFSR_minVal(MI real(pow( logInitCurrTempRatio );; return tempDouble; Cost 7;

PAGE 279

METHOD: FUNCTION: Very Fast Sbnulated Reaonealing FUNCTION: VPSR. CONVFN Determines exit conditions have occurred. Inputs: Iterate* Iteration Iteration to check for convergence Returns: float Convergence value Created: float VFSR._CONVFN(Iterate *I) Initialize values SearchMethod= !->Methcd; Searchw S = I->Srch; least two iterations must occur if (S->IterNum < 2) return 0; ................................................................................ This part of the function is only valid for test problems whose global is at the origin. At each iteration a weighted sum is updated of the form s sum i=l where N is the of objective function evaluations in the current iteration is the distance to the n is the total # of iterations ................................................................................ Calculate distance from origin (for test problem purposes) double dist 0; for(int c=Q;cBcho) D: !Sq ",dist); S->evalSum T= VFSR_evalScaling(M) + ( VFSR_iterWeight(I)+dist+ I->NumFnEvals); S->evalSumCost VFSR_evalScaling(M) + ( VFSR_lterWeight(I)+dist+ I->IterCost); VFSR_iterWeiqht(I) VFSR_iterGain(M); I->IterTestDist dist; ................................................................................ at convergence value if(I->srcb->ConVTqtMetric > VFSR_bestP(I).Fitness) ( J ; return 1; } else return 0; if (I->Echo) ''..

PAGE 280

METHOD: Very Fast Simulated Reanneoling Vtanj!ent (cont) FUNCTION: FUNCTION: VTangent This function the 1st of the cost function with respect to its parameters at point p. (Note that the ugorithm differs from Ingber's in that no information from the 'best' parameter is used in determining the curvature at the current point. -Jtc) Inputs: F Fitness function TMatrix p [nx1) Parameter to at TMatrix dx [nx1) to use in the approximation TMatrix prange [nx2] min-max matrix of ranges int ptypes en] vector. 1 indicates an integer parameter 1nt Flag indicating that int curvatures should be int* Cost POinter to temporary cost variable Returns: TMatrix d dx [nxl) Returned partiu vector Adapted: kcoppes Based on VFSR d.4. L. Ingber Modified: kcoppes Added cost measurements kcoppes exchanged ACOl\fPLE.."\,:(i) for new complex(i) kcoppes reworked input arguments allocation vgid VTangent(TMatrix& d_dx. complex (*F) (TMatrix& p), TMatrix P, TMatr1x dx, TMatrix prange, int* ptypes, 1nt 1nt. Cost )( dxrp complex dx row complex x_row; int row; (complex 91 TMatrlx A2; A2 = p; d_dx.define(p.length(),l); complex newcostl; int tCost 0; Cont1nues-> row delta-x (deltax_vl 1/ row & col parameters (tempi TMatrix row & col indices Working point Returned matrix Cost for point in calc. Temporary cost variable

PAGE 281

METHOD: for (row=O x row p. c (row); dx row dx.c(row); = (1 A2 e ( J = p. c ( row) ; newcost1 F(A2); } } free(dxrp); } real(newcost! -F(p)) / (real (dx_row+ x_row J DBL_EPSILON); tCost 1; tCost 4; tCost 3; tCost 5; *Cost tCost;

PAGE 282

METHOD: Very Fast Simulated Reanneallng (cont) FUNCTION: VCurvature ========================================================= FUNCTION: VCUrvature This function calculates the curvature of the cost function with respect to its parameters at point p. (Note that the algorithm also differs from Ingber's in that no information from the 'best' parameter is used in deterDdninq the curvature at the current point. -kc) It is assumed that the complex part of the function is zero. Inputs: complex* F TMatrix p Fitness function [nxl] Parameter to evaluate at TMatrix dx [nxl] del ta-x to use in the approximation TMatrix prange [nx2] min-max matrix of ranges TMatrix* d dx [nxl] Returned 1st derivatives (tanqents) int ptypes [n] vector. 1 indicates an inteqer parameter int Returns: includeint Flaq indicatinq that int curvatures should be included Adapted: l\IodlJ1td: TMatrix* B [rum] Returned curvature matrix kcopJIe!i Based on VFSR L.lngber kcoppes replaced ACOMPLE..X(i) maao w. new complex(i) kcoppes added real for complex arguments TMatrix VCUrvature(complex (*F) (TMatrix pI, TMatrix p, TMatrix dx, TMatrix prange, TMatrix d_dx, int ptypes[], int includeint complex dx_row +(new complex(p.length())); complex dx_col ax_row; row & col delta-x (deltax_vv,deltax_v) complex x_row, x_col; row & col parameters (temporary) int row, col; TMatrix row & col indices (index_v,index_vv) TMatrix Al p, A2 = p; Workinq data points TMatr:x H(p.length(),p.length()); returned curvature matrix complex newcostl,newcost2,newcost3; Costs for paints in calc. IIIterate over each row index for (row=O ; row< (p .lenqth () ) ; row++) IIGet row parameter and delta_x x_row p.c(row); dx_row dX.c(row); IIIterate aver upper-diagonal indices for (col=row;col
PAGE 283

METHOD: ): = *H.elcol,rowi= 0.0; 0.0; IIGet x col = p.c(col); dx_col = dx.c(col); = f(A2); e row) = ::ix_row I *A2.elrowl = p.c(row); = f(A21; *H.e(row,col) 2+f(pl ... newcost1) / real(dx_row*dx_col+x_row*x_col + DBL_EPSILON); *d_dx.e(rowl real(newcost1 "(pI) / +A1.e(row) = (1 + dx_row)+x_row; *Al.e(col) = (1 dx_col)+x_co1; = f(All; +A1.e(row) = p.c(rowI; +A1.elcol) = p.o(co1); newcost2 = f(AI); Fex2_row+dx_row, x_colI +A1.e(rowl = (1 + dx_rowl*p.c:row); +Al.e(row) = p.o(row); *H.elrow,col) = *H.e(col,rowl real(newcostl -newcost2 -newoost3 f(pl) / real(dx_row*dx_col+x_row+x_col F(AI) ; return H;

PAGE 284

METHOD: { int M->mMParmNum M_NumParmMtx; M->fMParmNum M_NumParrnDbl; M->iMParmNum M_NumParmlnt; M->cMParmNurn M_NumP a rmCh r ; int amP[ J M DirnParmMtx; int afP[ J M_DimParmDbl; int aiP [J M DirnParmlnt; int acP[ J M DimParmChr; if (M->rnMParmNurn) for (i=O; i i ++) M->mMParmDirn [i] if (M->fMParmNum) for (i=O; ifMParmNum; i++) M->fMParmDim[ i J M->iMParmDim[iJ if (M->cMParrr.Num I for (i=O; i<. ... !->cMParmNurn; i M->cMParmDim [i] Iiset amP[i] ; afP[ iJ; aiP[i]; acP[ i]; M->fIterDataPts M->blterDataPts M->iIterData?ts fDataStates; bDataStates; iDataStates; floating data points binary data points integer data points M->Allocate ( ) ; }

PAGE 285

METHOD: inc i; ProbSpace* int P S->PrSpace; P->maxparms; Pointer to prcblem space max of parameters IINumber S->mSParmNum .3 NumParmMt);; S->iSParmNum = S NumParmlnt; S->fSParmNum S->cSParmNum S NumParmDbl; .3 NumParmChr; int amP[] S_DimParmMtx; int afP[] int aiP[] S_DimParmlnt:; int lcF[] .3_DimParmDbl; S_DimParmChr; if (S->m5ParmNum) for (i=O;im5ParmNum;i++) S->m.3ParmDim[:; if (S->fSPar.mNum) S->fSParmDim[ij if (S->iSParmNum) S->iSParmDim[i] if (S->cSParmNum) S->cSParmDim[i] S->Allocate ( ) ; amP[i] ; afP[ij; aiP[i]; acP

PAGE 286

11=========================================================== int i; ProbSpacew int I->PrSpace; Pointer to problem space P->maxparms; max # of parameters IINUmber I->mStateNum I_NumStateMtx; I->iStateNum = I NumStatelnt; I->fStateNum I_NumStateDbl; I->cStateNum I_NumStateChr; int amP[J I_DimStateMtx; int afP[J I DimStateDbl; int 3iP[J = I_DimStatelnt; int aCP[J if(I->mStateNumJ I->mStateDim[iJ if(!->fStateNumJ for(i=O;ifStateNum;i++) I->fStateDim[iJ if(I->iStateNum) for(i=O;iiStateNum;i++) I->iStateDim[il If(I->cStateNum) for(i=O;icStateNum;i++) I->cStateDim(iJ } amP[iJ; afP! i J; aiP[ iJ; acP[il;

PAGE 287

METHOD: ( pr:":-:t:f("\n"); printf' "'lFSR METHOD: \n") ; if (M->name!=NULL) print:f(M->name); minRange: printf:" printf (" maxReAnnInd: printf(" testPeriod: printf(" defInitTemp: printf(" defC.::>st:Ini tTemp: %-+12.3e allowed parm range\n", VFSR_minRange(M ; %-+12.3e Minimum value (small-float)\n", VFSR_minVal (M) ); %-+12.3e Maximum Reanneal Index\n", VFSR maxReAnnInd(M; %l2d !'Iumber of accept:ances between t:ests\n", VFSR_test:Period(M) ); Default Initial Temperat:ure\n", VFSR_defIni tTemp (!-1) ) ; %-+12.3e Default Cost Initial Temperature\n", VFSR_defCostInitTemp(M; printf(" ReAnnRescale: %-+12.3e rescale value\n", printf:" tempAnnealScale: print:(" t:empRat:ioSc3le: printf(" cost:P3rmScale: VFSR ReAnnRescale(M; %-+12.3e Temperature annealing scale\n", '/rSR_tempAnnealScale (M) ); %-+12.3e :'emperat:ure Ratio S-:ale fact:or\n", "FSR_tempRatioScale 1M) ); %-+12.3e Cost t:o parm temp s-:ale factor\n", VFSR_cost:ParmScale(M; printf(" defDelt3X: %-+12.3e Default int:erval for t3ngent calc\n", VFSR_defDe1taX(M; printf(" optInc!udeInt: printf (" %12d !:-:clude integers in reanneal\n", VFSR_optlncludeIntIM; %12d Reannea1ling\n", VFSR_optNoReanneal(M); printf("\n\n"); optNoReanneal:

PAGE 288

B.2 Genetic Algorithm Implementation 276

PAGE 289

METHOD: GENETIC ALGORITHM SEARCH 1 SIILCPP This fil.e defines a Genetic Al.gor:i.thm search method. Note that this method searches for the function It is assumed that the fitness function produces val.ues greater than 0; Created: Modilled: kcoppes kcoppes for use with l'NIX debugged. --file dependencies-------------------------------------------------------------'define farmalloc ma1loc 'define huge 'ifndef __ OOBEADER_CPP 'include "ooheader. CPP" 'endif 'incl.ude 'include 'include ---------------------------------------------------------------------------------

PAGE 290

Genetic Algorithm Search GA Method (cont) FUNCTION: GA_Hethod Creates an instance of the GA method. Returns: SearchHethod Created: kcoppes GA_uniform( p): void GA_DEFHETHODINIT( SearchHethod *GA) : int int int huge* int huge* int huge* int int void void void. void GA _sEARCBmIT ( GA_PROPFN( GA_CONVFN( GA_RANK( Iterate* firstIter): Iterate* currIter, Iterate* nextIter): Iterate *Iteration): Iterate* I, int huge* InitList, int N, int& Cost): GA TOtmNEy ( Iterate* I,int Ntourneys,int huge* PopList, int pSize,int tSize,float rPrOb, int& Cost); GA_RANDSEQ(int huge* List, int N, int& Cost); GA_CROSSOVER(BinaryDataPoint& PO, BinaryDataPoint& Pl, BinaryDataPoint& CO, BinaryDataPoint& Cl, int NCrosses); GA_MUTATE(BinaryDataPoint& G, float mPrOb); GA_ITERATESIZE( Iterate* II; GA_METHODPARHSIZE( GA_SEARCBPARHSIZE( GA_PRINT( SearchHethod* GA); Search* Sl; SearchHethod *H) ; ---------------------------------------------------------------------------------searchHethod GA_Hethod(void) { SearchMethoci pointers to method functions M.Methodlnit. M.PropFn M.ConvFn M.randFn M.randParm M.lteratelnit M.Searchlnit GA_DEFMETHODINIT; GA_PROPFN; GA_CONVE"N; GA_uniform; new double; GA_ITERATESIZE; GA_SEARCHINIT; M.methodPrir.t GA_PRINT; t val.ues for method parameters and. return M.Methodlnit(&M); return M; }

PAGE 291

METHOD: Genetic Algorithm Search METHOD PARAMETERS (cont) method are attached to the method structure. Hacros are used to fit the values to the method and to access them. Hdefi;-,e M NumParmMtx of method parm matrices Hdefine M NumParmDbl 1 of method parm double arrays Hdefir:e M NumParmInt 1 of method parm inteqer arrays Hdefi;-.e M NumParmChr of method parameter char arrays --------------------------------------------------------------------------------MATRIX: (mHParms) none --------------------------------------------------------------------------------#defir.e M_DimParmMt:< Illengths of method parm matrices --------------------------------------------------------------------------------DOUBLE: see below ---------------------------------------------------------------------------------Hdefine M_DimParmDbl (7) Hdefine GA_bitMutateProb(M) Hdefine GA_finalVariance(M) Hdefine G"'-.replacePct(H) Hdefine GA_nateEligiblePct(M) Hdefine GA_oestWinProb(M) Hdefine GA_evalScaling(M) Hdefine GA_:' -:erGain Illengths of method. parm double arrays ((M)->fParms[O] (0)) IlprObability of bit mutation ((M)->fParms[O) [1]) Ilfitness spread at termination ((M)->fParms[O] [2J) pool size -popSize); ((M)->fParms[O] [3J) of top fitnesses elig1ble to reproduce ((M)->fParms[O] [4]) of best winning tournament ((M)->fParms[O) [5i) the entire performance measurement (6)) the increasing importance of iterations as they proceed --------------------------------------------------------------------------------(iHParms) see below --------------------------------------------------------------------------------Hdefir.e M_DimParmInt (7) Hdefine Hdefine GA_representType(M) Hdefine GA_r.Crossovers(M) Hdefine G"'-.matePoolSize(M) Hdefine GA_tourneySize(MI Hdefine GA_:itMethod(M) of method parm integer arrays ((M)->iParms[O) (0)) of inclividuals in population ((M)->iParms[O) (1)) store, as fixed point,O:float ((M)->iParms[O) [2] 1 of crossover points ((MI->iParms[OJ [3]1 Ilexplicit size of mating pool II(overrides replacePct) ((MI->iParms[O] [4]1 of tournament in selection ((M)->iParms[O] [5]1 Ilfitness measurement method: : mean fitness of pop, min fitness ((MI->iParms[O] [6]1 steady state GA when non-zero

PAGE 292

METHOD: Genetic Algorithm Search (com) :\IETHOD PARAMETERS TAILORING PARAMETERS --------------------------------------------------------------------------------CHAR: (cMParms) none --------------------------------------------------------------------------------Hdefine M DimParmChr {OJ Illengths of method parm character arrays II. METHOD TAILORING PARAMETERS: Method values that can in size or val.ue with the problem space but that remain constant from iteration to iteration are attached to the search. Macros fit particular parameters to the search structure. --------------------------------------------------------------------------------MATRIX: (JllHParms) none --------------------------------------------------------------------------------Hdefine S NumParmMtx 0 Hdefine S DimParmMtx {OJ of method. parm matrices Illengths of method tailor matrices --------------------------------------------------------------------------------DOUBLE: (fHParms) none ---------------------------------------------------------------------------------Hdefine S NumParmDbl 0 Hdefine S DimParmDbl {OJ of method parm cknlble arrays Illengths of method. tailor dbl arrays --------------------------------------------------------------------------------INT: (iHParms) none --------------------------------------------------------------------------------S NumParmlnt 0 Hdefine S DimParmlnt {OJ of method. parm integer arrays Illengths of method. tailor int arrays --------------------------------------------------------------------------------CHAR: (cMParms) none --------------------------------------------------------------------------------#define S NumParmChr 0 Hdefine S DimParmChr {OJ of method parameter char arrays Illengths of method tailor char arrays

PAGE 293

METHOD: Genetic Algorithm Search !'-IETHOD STATE DEFINITIONS (cant) III.METHOD STATE DEFINITIONS: Method values that can vary in size or vaJ.ue with the search or problem space are attached to the search. Macros fit particular state names to the search structure. --------------------------------------------------------------------------------MATRIX: (mState) see below. Note: 1) N is the of problem parameters 2) An R suffix indicates a real vaJ.ue its absence indicates an lvalue --------------------------------------------------------------------------------#define I 0 I JimStateMtx of method. state matrices of method. state matrices --------------------------------------------------------------------------------DOUBLE: (fstate) see below --------------------------------------------------------------------------------#define I NumStateDbl 1 of method. state double arrays Hdefine I DimStateDbl [3} Illenqths of method state dbl arrays ................................................................................ [0] Search performance '1'his is a weighted sum of how well. the search has performed #define GA_evalSumlII #define GA_evalSumCost(II GA_iterWeiqhtlII (I->fState[O] [0] I Ilweighted search performance II->fState[O] [ll) Ilweighted search performance II->fState[O] Ilweiqht on iter performance tNT: (istate) see below --------------------------------------------------------------------------------#def:ne I NumStatelnt 0 #define I DimStatelnt [0] of method state integer arrays of method state int arrays --------------------------------------------------------------------------------CBAR: (cState) none Hdefine I NumStateChr 0 I DimStateChr of method. state char arrays Illenqths of method state char arrays

PAGE 294

METHOD: Genmc Algorithm Srarch ITERATION LAYOUT (cont) //========================================================== // IV. ITERATION LAYOUT FloatoataPoint: (fDATA) // // [0] savedPoint(I) bestPoint(I) Propagated point Best seen point //========================================================= #define I fDataStates 0 #define iDataStates 0 #define I bDataStates 12 of float data pOints per Iter of int data points per Iter // (default) of binary data points per Iter // ---------------------------------------------------------------------------------// FLOAT/COMPLEX DATA POINTS: (fDATA) none ---------------------------------------------------------------------------------------------------------------------------------------------------------------INTEGER DATA POINTS: (iDATA) none ---------------------------------------------------------------------------------------------------------------------------------------------------------------B:rNAlU' DATA POINTS: (bDATA) see below --------------------------------------------------------------------------------................................................................................ [0) Indirtdual: *define GA_Pop(I) #define GA_Ind(I,t) (I->bDATA) (*(GA_PoPII)+(unsigned)t))

PAGE 295

METHOD: Genetic Algorithm Search (cont) MACROS" RANDO:\I GENERATOR MACRO: GA OPENRANGE Returns if range of parameter pNwn in dimension Oilll for pspace is qreater than GA_lII1nRanqe. Inputs: Iterate* int int Created: Oilll pNwn kcoppes Iteration of interest Dimension to interrogate Parameter number #define GA_OPENRANGE(I,Oilll,pNwn) \ PSPACE_RANGE_WIOER(I->PrSpace,Oilll,pNwn,GA_lII1nRange(I->Method FUNCTION: GA.-unifo%1ll function Created: kcappes double GA_unifo%1ll(dauble *p) ( unused parameter

PAGE 296

METHOD: FUNCTION: Genetic Algoritlun Search GA (cont) ============================================================= FUNCTION: GA DBFHETBODINIT Initializes the GA method with default values. Inputs: Secu:cbMethod* GA search method instance to initialize. Created: kcoppes void GA_DBFHETBODrNIT( SearchMethod* GA){ GA->Debug 0; method parameters & allocate memory (function defined later) .............. GA_METHODPARMSIZE(GA); default values for method pcu:ameters .................................... char methname[J "Genetic Algorithm Search"; GA->name = (char+)farmalloc(strlen(mechname)+sizeor(char)); strcpYlGA->name,methname); GA_representType(GA) GA_popSi::e(GA) GA_replacePctIGA) GA_macePoolSize(GA) GA_bescWinProbCGA) GA_maceE1igiblePctIGA) GA_nCrossoversIGA) GA_bitMutateProoCGA) GA_sceadyStateIGA) GA_finalVariance(GA) GA_i terGain (GAl GA_evalSca1ingCGA) 0; 12; 0.5; -1; 0.7; 4; 0.001; 0; le-lO; 0; 1.03; = le-5; IIFloating point IIPopulation size IIReplace each generation (replacePct used) Probability best wins tourney ind in tourney competitions IITop 70' eligible to mate Crossovers chance of bit mutatelbit IINon-steady state GA at 1e-10 spread mean fitness convergence lEach iteration 3' more important than the previous IIPerformance metric scaling IIConfigure random sampling function (uniform has no parameters) (GA->randParm) = 0; }

PAGE 297

METHOD: ................................................... int Cost = 0; SearchMethodW M I->Method; ProbSpacew P I->PrSpace int N P->maxparms; evaluate ...................................... double mnRange,mxRange; for (unsigned BinaryDat.aPoint& thislnd = GA_IndII,tl; thislnd.d:mension = N; Cost += 1; Cost += 1; Cost += 1; P->GetRange(N,i,mnRange,mxRange); joubl e r:.ewVal I m.xRange-mnRa nge J Y ( M-> !"3ndSampl e I) mnRange; Ylthislnd.JATA+(unsignedlil = newVal+O.O; Cost += thislnd.Evaluate(P); if (I->Srch->Echo) printfl" IIZero IIZero Cost += 1; Cost += 1; GA_evalSum(I) GA_iterWeight(II (I) ; IISet

PAGE 298

METHOD: Genetic Algorithm Search                    FUNCTION: GA_PROPFN (cont)

    int Cost = 0;
    ProbSpace*    P = I->PrSpace;
    SearchMethod* M = I->Method;

    // 0: PREPARE
    int N = GA_popSize(M);
    HGLOBAL hglb;
    int huge* InitList = IntAlloc(N,hglb);
    if (InitList==NULL) { printf(" Error(GA_PROPFN): Null initlist"); exit(-1); };
    for(int t=0; t<N; t++) InitList[t] = t;

    // Rank the population by fitness
    int huge* PopList = GA_RANK(I,InitList,N,Cost);
    double worstFit = GA_Ind(I,PopList[GA_popSize(M)-1]).Fitness;

    // Size the mating pool (forced even)
    int matePoolSize;
    matePoolSize = (int)floor(GA_popSize(M)*GA_replacePct(M));
    if (GA_matePoolSize(M) > 0) matePoolSize = GA_matePoolSize(M);
    matePoolSize -= (int)fmod((double)matePoolSize,(double)2);
    if (matePoolSize > GA_popSize(M)) {
        printf(" Error(GA_PROPFN): mate pool > population");
        matePoolSize = 4;
    }

    // Compete for mating privileges
    Cost += 1;
    int Neligible = (int)(GA_popSize(M)*GA_mateEligiblePct(M));
    int huge* WinList = GA_TOURNEY(I,matePoolSize,PopList,Neligible,
                                   GA_tourneySize(M),GA_bestWinProb(M),Cost);

PAGE 299

METHOD: Genetic Algorithm Search                    FUNCTION: GA_PROPFN (cont)

    // Randomize mating pool
    int huge* MateList = GA_RANDSEQ(WinList,matePoolSize,Cost);
    if (MateList==NULL) { printf(" Error(GA_PROPFN): Null MateList"); exit(-1); };

    // PROPAGATE
    BinaryDataPoint* C0Pt = new BinaryDataPoint;
    BinaryDataPoint* C1Pt = new BinaryDataPoint;

    int iRep = GA_popSize(M) - matePoolSize;   // index of individual to replace

    for(int i=0; i<matePoolSize; i+=2){
        int P0 = MateList[i];
        int P1 = MateList[i+1];
        int C0 = iRep++;
        int C1 = iRep++;

        C0Pt->dimension = C1Pt->dimension = currIter->bDATA[C0].dimension;
        int rtype = GA_Ind(currIter,C0).DATA[0].rtype;
        C0Pt->Allocate(P);  C0Pt->SetRType(P,rtype);
        C1Pt->Allocate(P);  C1Pt->SetRType(P,rtype);

        // Perform crossover
        if (GA_nCrossovers(M) != 0)
            Cost += GA_CROSSOVER(currIter->bDATA[P0],currIter->bDATA[P1],
                                 *C0Pt,*C1Pt,GA_nCrossovers(M));
        else {
            *C0Pt = currIter->bDATA[P0];
            *C1Pt = currIter->bDATA[P1];
        }

        // Perform mutation
        if (GA_bitMutateProb(M) > 0.0){
            Cost += GA_MUTATE(*C0Pt,GA_bitMutateProb(M));
            Cost += GA_MUTATE(*C1Pt,GA_bitMutateProb(M));
        }

PAGE 300

        // Evaluate the children and install them, best (lowest fitness) first
        Cost += C0Pt->Evaluate(P);
        Cost += C1Pt->Evaluate(P);
        if (C0Pt->Fitness <= C1Pt->Fitness){
            I->bDATA[C0] = *C0Pt;
            I->bDATA[C1] = *C1Pt;
        } else {
            I->bDATA[C0] = *C1Pt;
            I->bDATA[C1] = *C0Pt;
        }
        Cost += 2;
    }

    delete C0Pt;
    delete C1Pt;
    delete MateList;
    delete WinList;
    delete PopList;
}

PAGE 301

METHOD: Genetic Algorithm Search (cont)

//=============================================================
// FUNCTION: GA_CONVFN
// Determines if exit conditions have occurred.
// Inputs:  Iterate* Iteration   Iteration to check for convergence
// Returns: convergence value
// Created: 2/11/95 kcoppes
//=============================================================
float GA_CONVFN(Iterate *I) {

    // Initialize comparison values
    SearchMethod* M = I->Method;
    Search*       S = I->Srch;
    int minCost = 0, meanCost = 0, sameCost = 0;

    double meanFit = I->bDATA[0].Fitness;
    int sameGene = 1;
    double minFit,maxFit;
    maxFit = I->bDATA[0].Fitness;

    // Examine population
    int bestInd = 0;
    minFit = I->bDATA[bestInd].Fitness;
    for(int i=1; i<GA_popSize(M); i++){
        if ((I->bDATA[i].Fitness) > maxFit) maxFit = I->bDATA[i].Fitness;
        if ((I->bDATA[i].Fitness) < minFit){
            minFit = I->bDATA[i].Fitness;
            bestInd = i;
        }
        meanFit += I->bDATA[i].Fitness;
        meanCost += 1;

        // Check for collapse of population to identical genes
        for(int t=0; t<I->bDATA[i].dimension; t++){
            sameCost += 2;
            if (I->bDATA[i].DATA[t].e.f != I->bDATA[i-1].DATA[t].e.f) sameGene = 0;
        }
    }
    meanFit /= GA_popSize(M);

    if (S->Echo) printf("B: %20.12e M: %20.12e ",minFit,meanFit);

    // Set iteration fitness
    if (GA_fitMethod(M) == 0) I->IterFitness = meanFit;
    else                      I->IterFitness = minFit;
    meanCost += 1;

PAGE 302

METHOD: Genetic Algorithm Search                    FUNCTION: GA_CONVFN (cont)

    /*
      This part of the function is only valid for test problems whose
      global optimum is at the origin. At each iteration a weighted sum
      is updated of the form

              n
         S = sum  s * w  * dX  * N
             i=1       i     i    i

      where N  is the number of objective function evaluations in the
               current iteration,
            dX is the distance of the best point from the origin,
            w  is the iteration weight and s the performance metric
               scaling, and
            n  is the total number of iterations.
    */

    // Calculate distance from origin (for test problem purposes)
    double dist = 0;
    for(int c=0; c<I->bDATA[bestInd].dimension; c++){
        dist += (I->bDATA[bestInd].DATA[c].r() * I->bDATA[bestInd].DATA[c].r());
    }
    dist = sqrt(dist);
    if (S->Echo) printf(" D: %g ",dist);

    S->evalSum     += GA_evalScaling(M)*( GA_iterWeight(I) * dist * I->NumFnEvals);
    S->evalSumCost += GA_evalScaling(M)*( GA_iterWeight(I) * dist * I->IterCost);
    GA_iterWeight(I) *= GA_iterGain(M);
    I->IterTestDist = dist;

    // Return conditions & set costs
    if (sameGene) { printf("Same "); return -2; };   // identical genes
    sameCost += 2;
    I->IterCost += sameCost+meanCost+minCost;
    if (GA_fitMethod(M)) I->IterCost -= meanCost;

    // Converged population
    if ((maxFit-minFit) < GA_finalVariance(M)) { printf("Converged "); return -3; };

    // Continuing search
    return I->IterFitness;
}
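Written out, the quantity accumulated by the two update statements above is, with s the evalScaling factor and g the iterGain:

$$ S = s \sum_{i=1}^{n} w_i \, dX_i \, N_i, \qquad w_{i+1} = g \, w_i, \quad w_1 = 1 $$

so later iterations count more heavily whenever g > 1 (the 1.03 default), rewarding methods that keep improving the best point late in the search.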

PAGE 303

METHOD: Genetic Algorithm Search                    FUNCTION: GA_RANK (cont)

//=============================================================
// FUNCTION: GA_RANK
// Examines the data points in the initial list and assembles a sorted
// index list of their evaluated fitnesses with the minimum in the
// lowest index position.
// Inputs:  Iterate*  I         Parent gene datapoints
//          int huge* InitList  Initial list of individuals
//          int       N         List length
//          int&      Cost      Cost reference to increment
// Return:  int huge* RankList  returned ranked list of rankings
// Created: kcoppes
//=============================================================
int huge* GA_RANK(Iterate* I, int huge* InitList, int N, int& Cost) {

    if (InitList==NULL) { printf(" Error(GA_RANK): Null initlist"); exit(-1); };
    if (N<=0)           { printf(" Error(GA_RANK): 0 length initlist"); exit(-1); };

    HGLOBAL hglb;
    int huge* RankList = IntAlloc(N,hglb);
    if (RankList==NULL) { printf(" Error(GA_RANK): Null ranklist"); exit(-1); };

    for(int t=0; t<N; t++) RankList[t] = InitList[t];

    // Selection sort of indices by ascending fitness
    for(t=0; t<N-1; t++){
        int minNdx = t;
        for(int i=t+1; i<N; i++){
            Cost += 1;
            if ( I->bDATA[RankList[i]].Fitness <
                 I->bDATA[RankList[minNdx]].Fitness ) minNdx = i;
        }
        int temp;
        temp = RankList[t];
        RankList[t] = RankList[minNdx];
        RankList[minNdx] = temp;
    }
    return RankList;
}

PAGE 304

METHOD: Genetic Algorithm Search                    FUNCTION: GA_TOURNEY (cont)

//=============================================================
// FUNCTION: GA_TOURNEY
// Employs tournament selection over the individuals in the population
// of iteration I specified by PopList to construct a mating pool of
// length Ntourneys. Mating competitions are conducted among tSize
// randomly chosen individuals from PopList. The best wins with
// probability rProb.
// Inputs:  Iterate*  I          Iteration containing chromosomes
//          int       Ntourneys  Number of tournaments (mate pool size)
//          int huge* PopList    Eligible individuals
//          int       pSize      Length of PopList
//          int       tSize      Tournament size
//          float     rProb      Best win probability
//          int&      Cost       Cost reference to increment
// Return:  int huge* winList    returned tournament winners
// Created: kcoppes
//=============================================================
int huge* GA_TOURNEY(Iterate* I,int Ntourneys,int huge* PopList,int pSize,
                     int tSize,float rProb,int& Cost) {

    // Create tournament array & win array
    HGLOBAL hglb1;
    int huge* tList = IntAlloc(tSize,hglb1);
    if (tList==NULL)   { printf(" Error(GA_TOURNEY): Null tlist"); exit(-1); };
    HGLOBAL hglb2;
    int huge* winList = IntAlloc(Ntourneys,hglb2);
    if (winList==NULL) { printf(" Error(GA_TOURNEY): Null winlist"); exit(-1); };

    // Conduct tournaments
    for(int t=0; t<Ntourneys; t++){
        // choose tSize random entries from PopList into tList
        if (tSize>1) {
            int huge* RankList = GA_RANK(I,tList,tSize,Cost);
            int winner = RankList[tSize-1];
            int index  = 0;
            while((index<tSize-1)&&(winner==RankList[tSize-1])){
                Cost += 2;
                double p=0;
                if (GA_uniform(&p) <= rProb) winner = RankList[index];
                index += 1;
            };
            delete RankList;
            winList[t] = winner;
        }
        else winList[t] = tList[0];
    };   // end of tournament loop

    delete tList;
    return winList;
}
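Under this reading of the while loop, the ranked competitors win with geometrically decreasing probability; for tournament size t and best-win probability r:

$$ P(\text{rank-}k \text{ wins}) = r(1-r)^{k-1}, \quad k = 1,\dots,t-1, \qquad P(\text{worst wins}) = (1-r)^{t-1} $$

which reduces to deterministic best-wins selection at r = 1 and weakens selection pressure as r falls toward 1/t.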

PAGE 305

METHOD: Genetic Algorithm Search                    FUNCTION: GA_RANDSEQ (cont)

//=============================================================
// FUNCTION: GA_RANDSEQ
// Shuffles a list of integers.
// Inputs:  int huge* List     List of integers
//          int       N        List length
//          int&      Cost     Cost reference to increment
// Return:  int huge* NewList  returned randomized list
// Created: kcoppes
//=============================================================
int huge* GA_RANDSEQ(int huge* List, int N, int& Cost) {

    if (N<=0)       { printf(" Error(RandSeq): 0 N parameter"); exit(-1); };
    if (List==NULL) { printf(" Error(RandSeq): NULL list parameter"); exit(-1); };

    HGLOBAL hglb;
    int huge* NewList = IntAlloc(N,hglb);
    if (NewList==NULL) { printf(" Error(RandSeq): Null rand seq"); exit(-1); };

    for(int t=0; t<N; t++) NewList[t] = List[t];
    for(t=0; t<N; t++){
        double p=0;
        int j = (int)(N*GA_uniform(&p));
        if (j>N-1) j = N-1;
        int temp = NewList[t];
        NewList[t] = NewList[j];
        NewList[j] = temp;
        Cost += 1;
    }
    return NewList;
}
PAGE 306

METHOD: Genetic Algorithm Search                    FUNCTION: GA_CROSSOVER (cont)

//=============================================================
// FUNCTION: GA_CROSSOVER
// Performs a crossover between two genes, producing two new genes.
// Note: it is assumed that all binary data points are of the same
// data storage configuration (number of parameters, rtype, ranges).
// Up to 64 crossovers are allowed. If the # of crossovers is
// negative, a uniform crossover is used.
// Inputs:  BinaryDataPoint&  P0, P1    Parent gene datapoints
//          BinaryDataPoint&  C0, C1    Child gene datapoints
//          int               NCrosses  Number of crossover points
// Returns: int  Number of flips
// Created:  kcoppes
// Modified: Fixed bit handling for gnu g++
//=============================================================
int GA_CROSSOVER(BinaryDataPoint& P0, BinaryDataPoint& P1,
                 BinaryDataPoint& C0, BinaryDataPoint& C1, int NCrosses){

    int Cost = 0;
    int rtype = P0.DATA[0].rtype;
    int nbits = 64;
    if (rtype>0) nbits = 32;

    if (NCrosses>nbits) {
        printf("\nError(GA_CROSSOVER): %i crossovers not possible",NCrosses);
        exit(-1);
    };

    // Generate crossing points by stored parameter and crossover bit;
    // sort information in array from low to high position
    int Xpoint[64][2];     // crossover parameter & crossover bit table
    if (NCrosses>(-1)){    // (not uniform x-over)
        double p;
        for(unsigned t=0; t<(unsigned)NCrosses; t++){
PAGE 307

METHOD: Genetic Algorithm Search                    FUNCTION: GA_CROSSOVER (cont)

    int XferDir  = 0;
    int CrossPtr = 0;
    int crossParm = Xpoint[CrossPtr][0];
    int crossBit  = Xpoint[CrossPtr][1];

    BinaryDataPoint* P[2];
    P[0] = &P0;  P[1] = &P1;

    for(int parm=0; parm<P0.dimension; parm++){       // parameter sweep
      do {
        if (NCrosses<0){                              // uniform crossover
            C0.DATA[parm] = P[ XferDir]->DATA[parm];
            C1.DATA[parm] = P[!XferDir]->DATA[parm];
        }
        int nWd = 1;
        if (rtype>0) nWd = 0;
        for(int x=nWd; x>=0; x--){                    // word sweep
            // Build crossover masks for this word
            unsigned int BMask,NMask;
            BMask = 0;  NMask = 0;
            for(unsigned int m=1; m<=32; m++){        // bit sweep
                if (NCrosses>(-1)){
                    if ((parm==crossParm)&&(m==(unsigned)crossBit)){
                        XferDir  = 1-XferDir;
                        CrossPtr += 1;
                        crossParm = Xpoint[CrossPtr][0];
                        crossBit  = Xpoint[CrossPtr][1];
                    };
                } else {
                    double p;
                    XferDir = 0;                      // uniform: redraw per bit
                    if (GA_uniform(&p)<=0.5) XferDir = 1;
                }
                unsigned int t,w;
                t = XferDir;  w = XferDir;
                BMask += ((0+t)<<(m-1));
                NMask += ((1-w)<<(m-1));
            }
            C0.DATA[parm].e.L.hi = ((P1.DATA[parm].e.L.hi)&(BMask)) | ((P0.DATA[parm].e.L.hi)&(NMask));
            C1.DATA[parm].e.L.hi = ((P1.DATA[parm].e.L.hi)&(NMask)) | ((P0.DATA[parm].e.L.hi)&(BMask));
            C0.DATA[parm].e.L.lo = ((P1.DATA[parm].e.L.lo)&(BMask)) | ((P0.DATA[parm].e.L.lo)&(NMask));
            C1.DATA[parm].e.L.lo = ((P1.DATA[parm].e.L.lo)&(NMask)) | ((P0.DATA[parm].e.L.lo)&(BMask));
            Cost+=1;

PAGE 308

METHOD: Genetic Algorithm Search                    FUNCTION: GA_CROSSOVER (cont)

        }    // end x sweep
      }      // end crossover
      while ((!C0.DATA[parm].IsInrange()) || (!C1.DATA[parm].IsInrange()));
      Cost += 4;
    }        // end parameter sweep

    return Cost;
}
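The heart of the routine is the complementary-mask exchange applied to each 32-bit word; a self-contained sketch of the same operation for a single one-point crossover, independent of the BinaryDataPoint storage (cross32 is a name introduced here for illustration only):

    #include <cstdio>

    // One-point crossover of two 32-bit words at bit position 'cut'.
    // BMask selects bits taken from one parent; NMask takes the complement
    // from the other -- the same two-mask exchange GA_CROSSOVER applies to
    // the hi/lo words of each stored parameter.
    unsigned cross32(unsigned a, unsigned b, int cut, unsigned* other) {
        unsigned BMask = (cut >= 32) ? 0xFFFFFFFFu : ((1u << cut) - 1u);
        unsigned NMask = ~BMask;
        *other = (a & BMask) | (b & NMask);   // complementary child
        return   (b & BMask) | (a & NMask);
    }

    int main() {
        unsigned c2, c1 = cross32(0x0000FFFFu, 0xFFFF0000u, 16, &c2);
        std::printf("%08x %08x\n", c1, c2);   // prints 00000000 ffffffff
        return 0;
    }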

PAGE 309

METHOD: Genetic Algorithm Search                    FUNCTION: GA_MUTATE

//=============================================================
// FUNCTION: GA_MUTATE
// Mutates each bit of gene G with probability mProb.
// Created:  2/27/95 kcoppes
// Modified: 5/20/95
//=============================================================
int GA_MUTATE(BinaryDataPoint& G, double mProb){

    int rtype = G.DATA[0].rtype;
    int Cost = 0;
    if (mProb<0) { printf(" Error(GA_MUTATE): negative probabilities not allowed"); exit(-1); };
    if (mProb==0) return 0;

    int nWd = 3;
    if (rtype>0) nWd = 1;

    for(int parm=0; parm<G.dimension; parm++){
      for(int x=3; x>=0; x--){
        unsigned int newVal;
        newVal = G.DATA[parm].e.U.i[x];
        unsigned int origbit, newbit;
        if (x<=nWd){
            for (unsigned m=1; m<=16; m++){
                origbit = newVal&(1<<(m-1));
                if (origbit>0) origbit = 1;
                double p;
                if (GA_uniform(&p) <= mProb) {
                    newbit = 1-origbit;
                    newVal += (newbit-origbit)<<(m-1);
                }
            }
            // handle exponent saturation (limit 2047 -> 2046)
            if ((newVal&(2047)) == (2047)) newVal -= 1;
            G.DATA[parm].e.U.i[x] = newVal;
        };
        Cost += 1;
      }
    }
    return Cost;
}
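The same per-bit flip can be written more directly with an XOR; a minimal sketch of the scheme on one 32-bit word (rand() stands in for the method's GA_uniform sampler, and mutate32 is an illustrative name, not part of the thesis code):

    #include <cstdlib>

    // Flip each bit of w independently with probability mProb, the per-bit
    // scheme GA_MUTATE applies to the words of a gene.
    unsigned mutate32(unsigned w, double mProb) {
        for (int m = 0; m < 32; m++) {
            double p = (double)std::rand() / RAND_MAX;  // uniform sample in [0,1]
            if (p <= mProb) w ^= (1u << m);             // flip bit m
        }
        return w;
    }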

PAGE 310

METHOD: Genetic Algorithm Search                    FUNCTION: GA_METHODPARMSIZE

//=============================================================
// FUNCTION: GA_METHODPARMSIZE
// Sets size values for method parameters & allocates memory.
// Inputs:  SearchMethod* Method   Method to define parameter sizes for
// Created: 2/21/95 kcoppes
//=============================================================
void GA_METHODPARMSIZE(SearchMethod* M) {
    int i;

    // Define sizes of method parameter arrays in each type (from earlier defines)
    M->mMParmNum = M_NumParmMtx;   // # matrices (complex)
    M->fMParmNum = M_NumParmDbl;   // # double arrays
    M->iMParmNum = M_NumParmInt;   // # integer arrays
    M->cMParmNum = M_NumParmChr;   // # char arrays

    int amP[] = M_DimParmMtx;
    int afP[] = M_DimParmDbl;
    int aiP[] = M_DimParmInt;
    int acP[] = M_DimParmChr;

    if (M->mMParmNum) for(i=0;i<M->mMParmNum;i++) M->mMParmDim[i] = amP[i];
    if (M->fMParmNum) for(i=0;i<M->fMParmNum;i++) M->fMParmDim[i] = afP[i];
    if (M->iMParmNum) for(i=0;i<M->iMParmNum;i++) M->iMParmDim[i] = aiP[i];
    if (M->cMParmNum) for(i=0;i<M->cMParmNum;i++) M->cMParmDim[i] = acP[i];

    // Define numbers of data points of each type in each iteration
    // (from previous defines)
    M->fIterDataPts = I_fDataStates;   // # floating data points
    M->bIterDataPts = I_bDataStates;   // # binary data points
    M->iIterDataPts = I_iDataStates;   // # integer data points

    // Allocate method memory and set default method values
    M->Allocate();
}

PAGE 311

//==========================================================
    int i;

    // Define sizes of search parameter arrays in each type
    S->mSParmNum = S_NumParmMtx;
    S->fSParmNum = S_NumParmDbl;
    S->iSParmNum = S_NumParmInt;
    S->cSParmNum = S_NumParmChr;

    int amP[] = S_DimParmMtx;
    int afP[] = S_DimParmDbl;
    int aiP[] = S_DimParmInt;
    int acP[] = S_DimParmChr;

    if (S->mSParmNum) for(i=0;i<S->mSParmNum;i++) S->mSParmDim[i] = amP[i];
    if (S->fSParmNum) for(i=0;i<S->fSParmNum;i++) S->fSParmDim[i] = afP[i];
    if (S->iSParmNum) for(i=0;i<S->iSParmNum;i++) S->iSParmDim[i] = aiP[i];
    if (S->cSParmNum) for(i=0;i<S->cSParmNum;i++) S->cSParmDim[i] = acP[i];

    S->Allocate();
}

PAGE 312

//==========================================================
    int i;

    // Force the population size to a multiple of 4
    GA_popSize(I->Method) -= (int)fmod((double)GA_popSize(I->Method),(double)4);
    I->bIterDataPts = I->Method->bIterDataPts = GA_popSize(I->Method);

    // Number of state variables of each type
    I->mStateNum = I_NumStateMtx;
    I->fStateNum = I_NumStateDbl;
    I->iStateNum = I_NumStateInt;
    I->cStateNum = I_NumStateChr;

    int amP[] = I_DimStateMtx;
    int afP[] = I_DimStateDbl;
    int aiP[] = I_DimStateInt;
    int acP[] = I_DimStateChr;

    if (I->mStateNum) for(i=0;i<I->mStateNum;i++) I->mStateDim[i] = amP[i];
    if (I->fStateNum) for(i=0;i<I->fStateNum;i++) I->fStateDim[i] = afP[i];
    if (I->iStateNum) for(i=0;i<I->iStateNum;i++) I->iStateDim[i] = aiP[i];
    if (I->cStateNum) for(i=0;i<I->cStateNum;i++) I->cStateDim[i] = acP[i];
}

PAGE 313

printf("\n");
printf("GA METHOD:\n");
if(M->name!=NULL) printf(M->name);
printf("  popSize:         %12i  Population Size\n", GA_popSize(M));
printf("  replacePct:      %12f  %% of Pop to replace\n",(GA_replacePct(M))*100);
printf("  matePoolSize:    %12i  Mate Pool Size\n", GA_matePoolSize(M));
printf("                         (-1 to use replacePct)\n");
printf("  representType:   %12i  1: 32 bit fixed pt, 0: 64 bit floating pt\n", GA_representType(M));
printf("                         sizeof(double)=%i bytes\n",sizeof(double));
printf("  mateEligiblePct: %12f  %% of top fitnesses eligible to mate\n", GA_mateEligiblePct(M)*100);
printf("  tourneySize:     %12i  Tourney Size\n", GA_tourneySize(M));
printf("  bestWinProb:     %12f  Probability of best ind in tourney winning\n", GA_bestWinProb(M));
printf("  nCrossovers:     %12i  # crossovers\n", GA_nCrossovers(M));
printf("  bitMutateProb:   %12e  Bit mutate probability\n", GA_bitMutateProb(M));
printf("  steadyState:     %12i  Steady state flag\n", GA_steadyState(M));
printf("  finalVariance:   %12e  min/max spread over pop causing finish\n", GA_finalVariance(M));
printf("  fitMethod:       %12i  Method to measure convergence:\n", GA_fitMethod(M));
printf("                         0: mean fitness, 1: min fitness\n");
printf("--------------------------------------------------------------");
printf("\n\n");
}

PAGE 314

C: Test Problem Implementations

This section contains the implementations of the evaluation and application problems employed in the thesis. Note that those problems which include Matlab components may employ elements from an advanced control toolbox in use at Lockheed Martin. Descriptions of these functions are included in section D.

PAGE 315

C.1. Evaluation Problem

PAGE 316

//==========================================================
// PROBLEM SPACE: Evaluation Problem
// File:          evalprob.cpp
//
// This is a problem defined in Ingber and Rosen's work [IR92].
// It has on the order of 10^20 minima.
//
// Adapted: kcoppes
//==========================================================

// ----------------------------------------------------------------
// STEP A: Define problem space name & description
// ----------------------------------------------------------------
#define SPACENAME EvalProb               // name of problem space
#define SPACEDESC "10^20 minima problem" // description of problem space

// ----------------------------------------------------------------
// STEP B: Define problem fitness function
// ----------------------------------------------------------------
#define FITNESSFN EvalProb_Fitness       // name of fitness function

// Fitness function
complex EvalProb_Fitness(TMatrix& x){
    double s[4], t[4], a[4], d[4];
    double term, summ, dj, c, deviate;
    int i;
    double offset = 0.0;

    for (i = 0; i < 4; ++i) { s[i] = 0.2; t[i] = 0.05; a[i] = 10.0; }
    d[0] = 1.0;  d[1] = 1000.0;  d[2] = 10.0;  d[3] = 100.0;
    c = 0.15;

    summ = 0.0;
    for (i = 0; i < 4; ++i) {
        dj = floor(fabs((x.r(i) - offset) / s[i]) + 0.49999999);
        if ((x.r(i)-offset) < 0.0) dj = 0.0 - dj;
        dj = dj * s[i];
        deviate = fabs(x.r(i) - offset - dj);

PAGE 317

// PROBLEM SPACE: Evaluation Problem (cont)

        if (deviate < fabs(t[i])) {
            term = 0.0;
            if (dj < 0.0) term = t[i];
            if (dj > 0.0) term = (0.0 - t[i]);
            term = term + dj;
            term = term * term * c * d[i];
        } else {
            term = d[i] * ((x.r(i) - offset) * (x.r(i) - offset));
        }
        summ = summ + term;
    }
    return (complex)(summ/1000);
}

// ----------------------------------------------------------------
// STEP C: Create problem space definition function (or include one)
// Note: setting up the below defines will create a satisfactory space
// ----------------------------------------------------------------
#define MPARMS 4                  // number of parameters
#define RANGES {-1e4, 1e4, \
                -1e4, 1e4, \
                -1e4, 1e4, \
                -1e4, 1e4}        // array of parm range couples
#define ALLOWEDDIM {4}            // array of allowed dimensions
#define DIMLEN 1                  // length of allowed dimensions array

ProbSpace SPACENAME(void) {
    TMatrix inrange(1,1,"inrange");
    float frange[] = RANGES;
    inrange.Assume(frange, MPARMS,2);
    TMatrix indim(1,1,"indim");
    int intdim[] = ALLOWEDDIM;
    indim.Assume(intdim,DIMLEN,1);
    ProbSpace* space = new ProbSpace(indim, inrange);
    space->name = new char[strlen(SPACEDESC)+1];
    strcpy(space->name,SPACEDESC);
    space->userFitness = FITNESSFN;
    return *space;
}

#undef FITNESSFN
#undef SPACENAME
#undef MPARMS
#undef RANGES
#undef ALLOWEDDIM
#undef DIMLEN
// End of File
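Every problem space in this appendix repeats the same three-step pattern, so a new problem reduces to filling in the defines and the fitness body; a minimal skeleton (all names and values here are placeholders, not thesis code):

    // STEP A: name & description
    #define SPACENAME MyProb
    #define SPACEDESC "example problem"

    // STEP B: fitness function (here: minimize x'x, purely illustrative)
    #define FITNESSFN MyProb_Fitness
    complex MyProb_Fitness(TMatrix& x){
        double summ = 0.0;
        for (int i = 0; i < 2; i++) summ += x.r(i)*x.r(i);
        return (complex)summ;
    }

    // STEP C: ranges & factory (factory body identical to the one above)
    #define MPARMS 2
    #define RANGES {-1.0, 1.0, \
                    -1.0, 1.0}
    #define ALLOWEDDIM {2}
    #define DIMLEN 1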

PAGE 318

C.2. Mass-Spring-Damper Problem

Documentation of this problem includes the following elements:

1. C++ Matlab interface                             (msdprob.cpp)
2. Matlab controller structure function             (control.m)
3. Matlab plant definition function                 (msd3.m)
4. Matlab top-level fitness function                (eval_main.m)
5. Matlab stability fitness function                (eval_stab.m)
6. Matlab margins fitness function                  (eval_marg.m)
7. Matlab disturbance rejection fitness function    (eval_dist.m)

PAGE 319

//==========================================================
// PROBLEM SPACE: Mass-Spring-Damper
// File:          msdprob.cpp
//
// This is a problem defined by Voth. MATLAB is used as a computation
// engine for the actual fitness function. Information is passed out of
// Matlab via stdout. The following functions are required, together with
// the Lockheed Martin proprietary functions that they call:
// msd_eval_main.m, msd_control.m
//
// Adapted: kcoppes
//==========================================================

// ----------------------------------------------------------------
// STEP A: Define problem space name & description
// ----------------------------------------------------------------
#define SPACENAME MSDProb               // name of problem space
#define SPACEDESC "Mass-Spring-Damper"  // description of problem space

// ----------------------------------------------------------------
// STEP B: Define problem fitness function
// ----------------------------------------------------------------
#define FITNESSFN MSD_Fitness           // name of fitness function
#undef MECHO                            // turn off diagnostics

static Matrix *p;

// Fitness function
complex MSD_Fitness(TMatrix& x){
    int i;
#ifdef MECHO   // echo ------------------------
    char out2[300] = "";
#endif
    // ---------------------------------
    double p[7];                        // Matlab parameter vector
    for(i=0;i<7;i++) p[i] = x.r(i);

    complex rtnval;
    char out[300] = "";
    char test[9];
    engOutputBuffer(ep,out,299);
    engEvalString(ep,
        "q=msd_main(control(p));fprintf('%20.12e',q(2,1));disp(' ');");
#ifdef MECHO
    printf(out);
#endif
    return rtnval;
}

PAGE 320

// PROBLEM SPACE: Mass-Spring-Damper (cont)

// ----------------------------------------------------------------
// STEP C: Create problem space definition function (or include one)
// Note: setting up the below defines will create a satisfactory space
// ----------------------------------------------------------------
#define MPARMS 7                  // number of parameters
#define RANGES {0.10, 0.30, \
                0.30, 1.30, \
                0.50, 1.00, \
                0.50, 1.50, \
                0.30, 1.00, \
                0.50, 2.00, \
                0.50, 2.00}       // array of parm range couples
#define ALLOWEDDIM {7}            // array of allowed dimensions
#define DIMLEN 1                  // length of allowed dimensions array

ProbSpace SPACENAME(void) {
    TMatrix inrange(1,1,"inrange");
    float frange[] = RANGES;
    inrange.Assume(frange, MPARMS,2);
    TMatrix indim(1,1,"indim");
    int intdim[] = ALLOWEDDIM;
    indim.Assume(intdim,DIMLEN,1);
    ProbSpace* space = new ProbSpace(indim, inrange);
    space->name = new char[strlen(SPACEDESC)+1];
    strcpy(space->name,SPACEDESC);
    space->userFitness = FITNESSFN;
    return *space;
}

#undef FITNESSFN
#undef SPACENAME
#undef MPARMS
#undef RANGES
#undef ALLOWEDDIM
#undef DIMLEN
// End of File

PAGE 321

function [Sc] = control(gfParams)
%-----------------------------------------------------------------------------
% MATLAB FUNCTION: control.m
%
% PURPOSE:   Returns a control law model system matrix from the gain/filter
%            parameters.
%
% SYNOPSIS:
%   [Sc] = control(gfParams)
%
% PARAMETERS:
%   gfParams [vector] control law gain and filter parameters:
%            gfParams = [kD,zetaD,omegaD,kR,zetaR,omegaR,tauR]
%
% RETURN VALUES:
%   Sc [system] control law system matrix.
%-----------------------------------------------------------------------------
% $Source: /home/hunter/voth/RCS/control.m,v $
% $Revision: 1.1 $
% $State: Exp $
% $Author: voth $
% $Date: 1993/06/14 17:07:29 $
%
% MODIFICATION HISTORY:
% $Log: control.m,v $
% Revision 1.1  1993/06/14 17:07:29  voth
% Fixed mistake in Ac-matrix and formatted the matrices.
%
% Revision 1.0  1993/06/14 16:51:28  voth
% Initial revision
%-----------------------------------------------------------------------------
% CALLS:
%   system()
%
% CALLED BY:
%
% LOCAL VARIABLES:
%   kD     - displacement loop gain.
%   zetaD  - displacement loop filter damping ratio.
%   omegaD - displacement loop filter frequency.
%   kR     - rate loop gain.
%   zetaR  - rate loop filter damping ratio.
%   omegaR - rate loop filter frequency.
%   tauR   - rate loop filter time constant.

PAGE 322

% MATLAB FUNCTION: control.m (cont)
%
% GLOBALS:
% INPUT:
% OUTPUT:
% DESCRIPTION:
% ALGORITHM:
% LIMITATIONS:
% DIAGNOSTICS:
%   Error: "Wrong number of function arguments."
%   Error: "gfParams vector is the wrong size."
% SIDE EFFECTS:
%-----------------------------------------------------------------------------

% ---------- Check the function arguments:
if (nargin < 0 | nargin > 1)
    error('Wrong number of function arguments.')
elseif (nargin==0)
    gfParams = [];
end
if length(gfParams) ~= 7
    error('gfParams vector is the wrong size.')
end

% ---------- Get the gains/filter parameters:
kD     = gfParams(1);
zetaD  = gfParams(2);
omegaD = gfParams(3);
kR     = gfParams(4);

PAGE 323

% MATLAB FUNCTION: control.m (cont)

zetaR  = gfParams(5);
omegaR = gfParams(6);
tauR   = gfParams(7);

% ---------- Build the control law system matrices:
Ac = [ 0,       omegaD,           0,       0,                 0
      -omegaD, -2*zetaD*omegaD,   0,       0,                 0
       0,       0,                0,       omegaR,            0
       0,       0,               -omegaR, -2*zetaR*omegaR,    0
       0,       0,                0,       1/tauR,           -1/tauR ];

Bc = [ 0,      0
       omegaD, 0
       0,      0
       0,      omegaR
       0,      0      ];

Cc = [ -kD, 0, 0, 0, -kR ];

Dc = [ 0, 0 ];

Sc = system(Ac,Bc,Cc,Dc);

PAGE 324

function [S,nu,ny] = msd3(m,k,b,Cfx)
%-----------------------------------------------------------------------------
% MATLAB FUNCTION: msd3.m
%
% PURPOSE:   Compute a linear differential state space model for a
%            3 Mass-Spring-Damper system.
%
% SYNOPSIS:
%   [S,nu,ny] = msd3(m,k,b,Cfx)
%
% PARAMETERS:
%   m   [real vector] mass values: [mass(1),mass(2),mass(3)].
%   k   [real vector] spring stiffness values: [k(1),k(2)].
%   b   [real vector] damping coefficient values: [b(1),b(2)].
%   Cfx [real] "aerodynamic" force derivative wrt x_cm (x_cm is the
%       x-position of the center of mass).
%
% RETURN VALUES:
%   S   [system] model of 3 mass-spring-damper.
%   nu  [int] number of control inputs.
%   ny  [int] number of sensor outputs.
%-----------------------------------------------------------------------------
% $Source: /home/hunter/voth/genitor/MSD3/RCS/msd3.m,v $
% $Revision: 1.1 $
% $State: Exp $
% $Author: voth $
% $Date: 1993/12/09 18:06:25 $
%
% MODIFICATION HISTORY:
% $Log: msd3.m,v $
% Revision 1.1  1993/12/09 18:06:25  voth
% Fixed error in equations of motion stiffness matrix.
%
% Revision 1.0  1993/06/14 16:29:58  voth
% Initial revision
%-----------------------------------------------------------------------------
% CALLS:
%   system()
%
% LOCAL VARIABLES:

PAGE 325

% MATLAB FUNCTION: msd3.m (cont)
%
% GLOBALS:
% INPUT:
% OUTPUT:
% DESCRIPTION:
%   The inputs to the system are:
%     disturbance: rigid-body force (like the wind disturbance on an ELV)
%     control:     force on the first mass (like the thrust angle on an ELV)
%   The outputs from the system are:
%     criterion: center of mass position (rigid-body position)
%     sensor:    position of the third mass (like the IMU on an ELV)
%     sensor:    velocity of the second mass (like the rate gyro on an ELV)
%
% ALGORITHM:
% LIMITATIONS:
% DIAGNOSTICS:
% EXAMPLES:
% SIDE EFFECTS:
% SEE ALSO:
%-----------------------------------------------------------------------------

% ---------- Check the function input arguments:
if (nargin < 0 | nargin > 4)
    error('Wrong number of input arguments.')
end
if nargin < 4
    Cfx = 0.1;
end
if nargin < 3
    b = [0.01,0.01];
end

PAGE 326

% MATLAB FUNCTION: msd3.m (cont)

if nargin < 2
    k = [1.0,1.5];
end
if nargin < 1
    m = [1.0,0.2,1.0];
end
summ = sum(m);

% ---------- Build the equations of motion:
M = diag(m);

B = [  b(1), -b(1),        0
      -b(1),  b(1)+b(2),  -b(2)
       0,    -b(2),        b(2) ];

K = [  k(1)-Cfx*(m(1)*m(1)/summ^2),      -k(1)-Cfx*(m(1)*m(2)/summ^2),          0-Cfx*(m(1)*m(3)/summ^2)
      -k(1)-Cfx*(m(2)*m(1)/summ^2), (k(1)+k(2))-Cfx*(m(2)*m(2)/summ^2),      -k(2)-Cfx*(m(2)*m(3)/summ^2)
       0-Cfx*(m(3)*m(1)/summ^2),         -k(2)-Cfx*(m(3)*m(2)/summ^2),       k(2)-Cfx*(m(3)*m(3)/summ^2) ];

G2 = [ m(1)/summ, 1
       m(2)/summ, 0
       m(3)/summ, 0 ];

H2 = zeros(3,3);

H1 = [ 0, 0, 0
       0, 0, 0
       0, 1, 0 ];

H0 = [ m(1)/summ, m(2)/summ, m(3)/summ
       0,         0,         1
       0,         0,         0         ];

PAGE 327

% MATLAB FUNCTION: msd3.m (cont)

% ---------- Assemble the state space model:
F = [ zeros(3,3), eye(3,3)
      -M\K,       -M\B     ];

G = [ zeros(3,2)
      M\G2       ];

H = [ H0-H2*(M\K), H1-H2*(M\B) ];

D = H2*(M\G2);

S = system(F,G,H,D);
nu = 1;  ny = 2;

PAGE 328

function [Obj] = eval_main(Sc)
%-----------------------------------------------------------------------------
% MATLAB FUNCTION: eval_main.m
%
% PURPOSE:   Evaluates the total objective matrix for the MSD3 control law.
%
% SYNOPSIS:
%   [Obj] = eval_main(Sc)
%
% PARAMETERS:
%   Sc  [system] control law system model (2 inputs, 1 output).
%
% RETURN VALUES:
%   Obj [matrix] 2 x 11 objective matrix with columns corresponding to:
%       [ TotObj, Stab, LFGM, RBPM, RBGM, M1PK, M1FS, M1BS, COGM, M2PK, Dist ].
%       The rows of each column contain entries: [ Pass=(0|1); F ] where Pass
%       indicates whether or not the requirement is satisfied and F is the
%       assigned objective value.
%-----------------------------------------------------------------------------
% $Source: /home/hunter/voth/RCS/eval_main.m,v $
% $Revision: 1.3 $
% $State: Exp $
% $Author: voth $
% $Date: 1993/06/21 16:41:48 $
%
% MODIFICATION HISTORY:
% $Log: eval_main.m,v $
% Revision 1.3  1993/06/21 16:41:48  voth
% o Put a lower limit on objectives until *all* reqs. are passed.
% o Do not prompt the user if the conditioning is bad in fresp().
%
% Revision 1.2  1993/06/17 20:03:25  voth
% o If closed-loop system is unstable, set margins/dist. objectives to 1.0.
% o Changed how the fitness table is created.
%
% Revision 1.1  1993/06/16 16:06:51  voth
% Fixed errors in computing the total objective.
%
% Revision 1.0  1993/06/15 19:52:42  voth
% Initial revision
%-----------------------------------------------------------------------------

PAGE 329

% MATLAB FUNCTION: eval_main.m (cont)
%
% CALLS:
%   eval_stab()  eval_marg()  eval_dist()  fresp()  lft()  msd3()
%   select()  sinfo()
%
% CALLED BY:
%
% LOCAL VARIABLES:
%   objWeights - [real vector] weightings on individual objective values
%                used to compute the total objective value.
%
% INPUT:
%   Sp_nom_g - [system] the nominal plant model.
%
% OUTPUT:
%   Sp_nom_g - [system] the nominal plant model.
%
% DESCRIPTION:
% ALGORITHM:
% LIMITATIONS:
% DIAGNOSTICS:
%   Error: "Wrong number of input arguments."
%   Error: "Sc must be a continuous system."
%   Error: "Sc must have 2 inputs and 1 output."
% EXAMPLES:
% SIDE EFFECTS:
% SEE ALSO:
%-----------------------------------------------------------------------------

PAGE 330

% ---------- Initialize:
global Sp_nom_g
Sp_nom_g = msd3;
doPlotNich = 1;
objWeights = [ 1.0 1.0 1.0 1.0 1.0 1.0 1.0 1.0 1.0 1.0 ]';
Obj = zeros(2,11);

% ---------- Check the input arguments:
if ~issys(Sc,'continuous')
    error('Sc must be a continuous system.')
end
[nStsC,nSen,nCtl] = sinfo(Sc);
if (nSen ~= 2 | nCtl ~= 1)
    error('Sc must have 2 inputs and 1 output.')
end

PAGE 331

% MATLAB FUNCTION: eval_main.m (cont)

% ---------- Evaluate the objective:
% precompute some things:
Sdist_nom = lft( Sp_nom_g, Sc );
Sol_nom   = smult( select(Sp_nom_g,2,[2,3]), Sc );

% first get the stability objective:
ObjStab = eval_stab(Sdist_nom);

% next get the margins and disturbance responses objective:
freq_ol_nom = freqvec( Sol_nom, 0.001, 50, 500 );
Fol_nom = fresp( Sol_nom, freq_ol_nom, 'eig', 'noprompt' );
if (doPlotNich)
    figure(doPlotNich);
    plotnich( Fol_nom, 360, [-60,40] );
    title('Open-Loop Control Freq. Response');
end
if ObjStab(1)
    ObjMarg = eval_marg( Fol_nom, freq_ol_nom );
    ObjDist = eval_dist( Sdist_nom );
else
    ObjMarg = [zeros(1,8); ones(1,8)];
    ObjDist = [0; 1];
end

% now assemble the total objective matrix:
Obj(:,2)    = ObjStab;
Obj(:,3:10) = ObjMarg;
Obj(:,11)   = ObjDist;

% compute the total objective:
Obj(1,1) = all(Obj(1,2:11));
if ~Obj(1,1)   % lower limit on objectives until *all* reqs. are passed
    Obj(2,2:11) = max( 0.0, Obj(2,2:11) );
end
Obj(2,1) = Obj(2,2:11) * objWeights;

PAGE 332

% MATLAB FUNCTION: eval_main.m (cont)

% ---------- Create a fitness table:
table = ' '*zeros(7,80);
str = sprintf('------------------------------------------------------------------------');
table(1,1:length(str)) = str;
str = sprintf('|                          TOTAL FITNESS RESULTS                        |');
table(2,1:length(str)) = str;
str = sprintf('------------------------------------------------------------------------');
table(3,1:length(str)) = str;
str = sprintf('| Tot  Stab LFGM RBPM RBGM M1PK M1FS M1BS COGM M2PK Dist |');
table(4,1:length(str)) = str;
str = sprintf('| %4.2f %4.2f %4.2f %4.2f %4.2f %4.2f %4.2f %4.2f %4.2f %4.2f %4.2f |',Obj(1,:));
table(5,1:length(str)) = str;
str = sprintf('| %4.2f %4.2f %4.2f %4.2f %4.2f %4.2f %4.2f %4.2f %4.2f %4.2f %4.2f |',Obj(2,:));
table(6,1:length(str)) = str;
str = sprintf('------------------------------------------------------------------------');
table(7,1:length(str)) = str;

% print the table:
if nargout==0
    disp(table)
end

% ---------- End of Function ----------

PAGE 333

function [Obj] = eval_stab(S)
%-----------------------------------------------------------------------------
% MATLAB FUNCTION: eval_stab.m
%
% PURPOSE:   Evaluates the MSD3 system with respect to stability.
%
% SYNOPSIS:
%   [Obj] = eval_stab(S)
%
% PARAMETERS:
%   S   [system] system model for stability evaluation.
%
% RETURN VALUES:
%   Obj [matrix] 2 x 1 objective matrix: [ Pass=(0|1); F ] where Pass
%       indicates whether or not the requirement is satisfied and F is
%       the assigned objective value.
%-----------------------------------------------------------------------------
% $Source: /home/hunter/voth/RCS/eval_stab.m,v $
% $Revision: 1.1 $
% $State: Exp $
% $Author: voth $
% $Date: 1993/06/16 16:02:28 $
%
% MODIFICATION HISTORY:
% $Log: eval_stab.m,v $
% Revision 1.1  1993/06/16 16:02:28  voth
% Changed objective to a minimization problem.
%
% Revision 1.0  1993/06/15 19:52:31  voth
% Initial revision
%-----------------------------------------------------------------------------
% CALLS:
%   split()
%
% CALLED BY:
%
% LOCAL VARIABLES:
%   E         - System A-matrix eigenvalues.
%   sigmaMax  - Maximum eigenvalue real part (sigma).
%   sigmaLim  - Lower limit on eigenvalue real part sigmaMax for the
%               objective. This limits the maximum objective to 1 for
%               any system.
%   sigmaScal - Scaling factor on eigenvalue real part in the objective

PAGE 334

% MATLAB FUNCTION: eval_stab.m (cont)
%
%               function. Increasing sigmaScal increases the steepness of
%               the objective function for values of sigmaMax > sigmaLim.
%
% GLOBALS:
% INPUT:
% OUTPUT:
% DESCRIPTION:
% ALGORITHM:
% LIMITATIONS:
% DIAGNOSTICS:
%   Error: "Wrong number of input arguments."
%   Error: "S must be a continuous system."
% EXAMPLES:
% SIDE EFFECTS:
% SEE ALSO:
%-----------------------------------------------------------------------------

sigmaLim  = -0.01;  % Eigenvalue real part lower limit for the objective.
sigmaScal = 5.00;   % Eigenvalue real part scaling for computing the objective.

Obj = [ 1; 1 ];

% ---------- Check the input arguments:
if (nargin ~= 1)
    error('Wrong number of input arguments.')
end
if ~issys(S,'continuous')
    error('S must be a continuous system.')
end

PAGE 335

% MATLAB FUNCTION: eval_stab.m (cont)

% ---------- Evaluate the stability objective:
E = eig(split(S));
sigmaMax = max(real(E));
if (sigmaMax > 0)
    Obj(1) = 0;    % fail if any pole is unstable
end
Obj(2) = max( 0.0, sigmaScal*(sigmaMax-sigmaLim) );

PAGE 336

function [Obj] = eval_marg(Fol,freq)
%-----------------------------------------------------------------------------
% MATLAB FUNCTION: eval_marg.m
%
% PURPOSE:   Evaluates the MSD3 system with respect to gain/phase stability
%            margins.
%
% SYNOPSIS:
%   [Obj] = eval_marg(Fol,freq)
%
% PARAMETERS:
%   Fol  [system] open-loop frequency responses for gain/phase stability
%        margins evaluation.
%   freq [real vector] frequency vector for open-loop frequency responses.
%
% RETURN VALUES:
%   Obj [matrix] 2 x 8 objective matrix with columns corresponding to:
%       [ LFGM, RBPM, RBGM, M1PK, M1FS, M1BS, COGM, M2PK ].
%       The rows of each column contain entries: [ Pass=(0|1); F ] where
%       Pass indicates whether or not the requirement is satisfied and F
%       is the assigned objective value.
%-----------------------------------------------------------------------------
% $Source: /home/hunter/voth/RCS/eval_marg.m,v $
% $Revision: 1.2 $
% $State: Exp $
% $Author: voth $
% $Date: 1993/06/17 20:07:34 $
%
% MODIFICATION HISTORY:
% $Log: eval_marg.m,v $
% Revision 1.2  1993/06/17 20:07:34  voth
% o Fixed error in the cross-over gain margin objective calculation.
% o Readjusted the frequency bounds.
%
% Revision 1.1  1993/06/16 15:56:22  voth
% Changed objective to a minimization problem.
%
% Revision 1.0  1993/06/15 19:52:18  voth
% Initial revision
%-----------------------------------------------------------------------------
% CALLS:

PAGE 337

% MATLAB FUNCTION: eval_marg.m (cont)

RbFreqLB = 0.00;   % Rigid-body crossover frequencies lower bound.
RbFreqUB = 0.90;   % Rigid-body crossover frequencies upper bound.
M1FreqLB = 0.90;   % Mode #1 frequency lower bound.
M1FreqUB = 2.00;   % Mode #1 frequency upper bound.
CoFreqLB = 1.00;   % Crossover gain margin frequency lower bound.
CoFreqUB = Inf;    % Crossover gain margin frequency upper bound.
M2FreqLB = 2.00;   % Mode #2 frequency lower bound.
M2FreqUB = Inf;    % Mode #2 frequency upper bound.

LFGMreq = 6.00;    % Low Freq. Gain Margin requirement.
RBPMreq = 30.0;    % Rigid-Body Phase Margin requirement.
RBGMreq = 6.00;    % Rigid-Body Gain Margin requirement.
COGMreq = 8.00;    % Cross-Over Gain Margin requirement.

PAGE 338

% MATLAB FUNCTION: eval_marg.m (cont)

M1FSreq = 50.0;    % Mode #1 Front Side phase margin requirement.
M1PKreq = 6.00;    % Mode #1 PeaK gain (minimum) requirement.
M1BSreq = 60.0;    % Mode #1 Back Side phase margin requirement.
M2PKreq = -10.0;   % Mode #2 PeaK gain (maximum) requirement.

Obj = [ ones(1,8); ones(1,8) ];

% ---------- Check the input arguments:
if (nargin ~= 2)
    error('Wrong number of input arguments.')
end
[npts,m] = size(Fol);
if (m ~= 1)
    error('Fol must be SISO.')
end
if (npts ~= length(freq))
    error('Fol must be the same length as freq.')
end

% ---------- Compute margins and peaks:
[Gm,Pm] = margins(Fol,freq,0);
Peaks   = findpeaks(Fol,freq,0);

iGmRb = find(Gm(:,1)>=RbFreqLB & Gm(:,1)<=RbFreqUB);
PAGE 339

% MATLAB FUNCTION: eval_marg.m (cont)

% Rigid-body gain margin:
if (Gm(iGmRb(1),2) < RBGMreq)
    objRBGM(1) = 0;
end
objRBGM(2) = max( 0.0, (1 - Gm(iGmRb(1),2)/RBGMreq) );

% Rigid-body phase margin:
iPmRb = find(Pm(:,1)>=RbFreqLB & Pm(:,1)<=RbFreqUB);
if (Pm(iPmRb(1),2) < RBPMreq)
    objRBPM(1) = 0;
end
objRBPM(2) = max( 0.0, (1 - Pm(iPmRb(1),2)/RBPMreq) );

% Mode #1 peak and phase margins:
iPkM1 = find(Peaks(:,1)>=M1FreqLB & Peaks(:,1)<=M1FreqUB);
PAGE 340

% MATLAB FUNCTION: eval_marg.m (cont)

% Cross-over gain margin:
iGmCo = find(Gm(:,1)>=CoFreqLB & Gm(:,1)<=CoFreqUB);
if ~isempty(iGmCo)
    if (abs(Gm(iGmCo(1),2)) < COGMreq)
        objCOGM(1) = 0;
    end
    objCOGM(2) = max( 0.0, (1 - abs(Gm(iGmCo(1),2))/COGMreq) );
else
    objCOGM = [1;0];
end

% Mode #2 peak gain:
iPkM2 = find(Peaks(:,1)>=M2FreqLB & Peaks(:,1)<=M2FreqUB);
if ~isempty(iPkM2)
    if (Peaks(iPkM2(1),2) > M2PKreq)
        objM2PK(1) = 0;
    end
    objM2PK(2) = max( 0.0, (Peaks(iPkM2(1),2) - M2PKreq)/abs(M2PKreq) );
else
    objM2PK = [1;0];
end

% ---------- Assemble the margin objective matrix:
Obj = [ objLFGM, objRBPM, objRBGM, objM1PK, objM1FS, objM1BS, objCOGM, objM2PK ];

PAGE 341

function [Obj] = eval_dist(Sdist)
%-----------------------------------------------------------------------------
% MATLAB FUNCTION: eval_dist.m
%
% PURPOSE:   Evaluates the MSD3 system with respect to disturbance responses.
%
% SYNOPSIS:
%   [Obj] = eval_dist(Sdist)
%
% PARAMETERS:
%   Sdist [system] system model (closed-loop) for disturbance rejection
%         evaluation. The H2-norm of the system is computed to evaluate
%         the objective function.
%
% RETURN VALUES:
%   Obj [matrix] 2 x 1 objective matrix: [ Pass=(0|1); F ] where Pass
%       indicates whether or not the requirement is satisfied and F is
%       the assigned objective value.
%-----------------------------------------------------------------------------
% $Source: /home/hunter/voth/RCS/eval_dist.m,v $
% $Revision: 1.5 $
% $State: Exp $
% $Author: voth $
% $Date: 1993/06/18 16:47:55 $
%
% MODIFICATION HISTORY:
% $Log: eval_dist.m,v $
% Revision 1.5  1993/06/18 16:47:55  voth
% Removed lower limit on the objective.
%
% Revision 1.4  1993/06/17 22:07:53  voth
% Put a lower limit on the objective (0.0).
%
% Revision 1.3  1993/06/16 16:09:21  voth
% Fixed mistake in renormalization.
%
% Revision 1.2  1993/06/16 16:07:39  voth
% Renormalized objective to zero.
%
% Revision 1.1  1993/06/16 15:43:40  voth
% Changed objective to a minimization problem.
%
% Revision 1.0  1993/06/15 19:52:00  voth
% Initial revision
%-----------------------------------------------------------------------------

PAGE 342

% MATLAB FUNCTION: eval_dist.m (cont)
%
% CALLS:
%   h2norm()
%
% CALLED BY:
%
% LOCAL VARIABLES:
%   h2Scale - Scale factor on H2-norm objective.
%
% INPUT:
% OUTPUT:
% DESCRIPTION:
% ALGORITHM:
% LIMITATIONS:
% DIAGNOSTICS:
%   Error: "Wrong number of input arguments."
%   Error: "Sdist must be a continuous system."
% EXAMPLES:
% SIDE EFFECTS:
% SEE ALSO:
% REFERENCES:
%-----------------------------------------------------------------------------

h2Scale = 2.8567;
Obj = [ 1; 1 ];

% ---------- Check the input arguments:
if (nargin ~= 1)
    error('Wrong number of input arguments.')
end

PAGE 343

% MATLAB FUNCTION: eval_dist.m (cont)

if ~issys(Sdist,'continuous')
    error('Sdist must be a continuous system.')
end

% ---------- Evaluate the disturbance rejection objective:
norm = h2norm( Sdist );
Obj(2) = (norm / h2Scale) - 1.0;

PAGE 344

C.3. Robotic Arm Problem

Documentation for this problem includes the following elements:

1. C++ Matlab interface                 (armprob.cpp)
2. Matlab controller structure function (armctrl.m)
3. Matlab plant definition function     (arm.m)
4. Matlab fitness function              (armeval.m)

PAGE 345

//==========================================================
// PROBLEM SPACE: Robotic Arm
// File:          armprob.cpp
//
// This problem is to design a control law for a robotic arm without a
// tip mass. The problem is based upon a model given in the doctoral
// thesis of Dr. Eric Schmitz. MATLAB is used as an engine as in the
// mass-spring-damper problem.
//
// Adapted: kcoppes
//==========================================================

// ----------------------------------------------------------------
// STEP A: Define problem space name & description
// ----------------------------------------------------------------
#define SPACENAME ARMProb                // name of problem space
#define SPACEDESC "Robotic Arm Control"  // description of problem space

// ----------------------------------------------------------------
// STEP B: Define problem fitness function
// ----------------------------------------------------------------
#define FITNESSFN ARM_Fitness            // name of fitness function
#undef MECHO                             // turn off diagnostics

static Matrix *p;

#define MPARMS 4                         // number of parameters

// Fitness function
complex ARM_Fitness(TMatrix& x){
    int i;
#ifdef MECHO   // echo ------------------------
    char out2[300] = "";
#endif
    // ---------------------------------
    double p[MPARMS];                    // Matlab parameter vector
    for(i=0;i<MPARMS;i++) p[i] = x.r(i);
PAGE 346

// PROBLEM SPACE: Robotic Arm (cont)

    TMatrix inrange(1,1,"inrange");
    float frange[] = RANGES;
    inrange.Assume(frange, MPARMS,2);
    TMatrix indim(1,1,"indim");
    int intdim[] = ALLOWEDDIM;
    indim.Assume(intdim,DIMLEN,1);
    ProbSpace* space = new ProbSpace(indim, inrange);
    space->name = new char[strlen(SPACEDESC)+1];
    strcpy(space->name,SPACEDESC);
    space->userFitness = FITNESSFN;
    return *space;
}

PAGE 347

function [Sc] = armctrl(gfParams)
%-----------------------------------------------------------------------------
% MATLAB FUNCTION: armctrl.m
%
% PURPOSE:   Returns a control law model system matrix from the gain/filter
%            parameters.
%
% SYNOPSIS:
%   [Sc] = armctrl(gfParams)
%
%   This builds a controller with a control law u(s) driven by the hub
%   rate theta(s) and the tip position yt(s) (gain Kt). The inputs to
%   the control matrices are: tip position, hub rate.
%
% PARAMETERS:
%   gfParams [vector] control law gain and filter parameters:
%            gfParams = [Kr, Kt, a, b]
%
% RETURN VALUES:
%   Sc [system] control law system matrix.
%-----------------------------------------------------------------------------
% Revision 0.0  kcoppes
% Initial version
%-----------------------------------------------------------------------------
% CALLS:
%   system()
%
% CALLED BY:
%
% LOCAL VARIABLES:
%   Kr - pseudo-rate feedback gain (sec rad).
%   Kt - tip position feedback gain.
%   a  - tip position feedback zero frequency (rad/sec).
%   b  - tip position feedback pole frequency (rad/sec).
%
% DIAGNOSTICS:
%   Error: "Wrong number of function arguments."
%   Error: "gfParams vector is the wrong size."
%-----------------------------------------------------------------------------

PAGE 348

% MATLAB FUNCTION: armctrl.m (cont)

% ---------- Check the function arguments:
if (nargin < 0 | nargin > 1)
    error('Wrong number of function arguments.')
elseif (nargin==0)
    gfParams = [0.5, 3.0, 3.0, 30.0];
end
if length(gfParams) ~= 4
    error('gfParams vector is the wrong size.')
end

% ---------- Get the gains/filter parameters:
Kr = gfParams(1);
Kt = gfParams(2);
a  = gfParams(3);
b  = gfParams(4);

% ---------- Build the control law system matrices:
Ac = [ -b ];
Bc = [ Kt*b*((b/a)-1), 0.0 ];
Cc = [ 1 ];
Dc = [ -Kt*b/a, -Kr ];

Sc = system(Ac,Bc,Cc,Dc);

PAGE 349

function [S,nu,ny] = arm()
%-----------------------------------------------------------------------------
% MATLAB FUNCTION: arm.m
%
% PURPOSE:   Build a state space model for a robotic arm system without
%            a tip mass.
%
% SYNOPSIS:
%   [S,nu,ny] = arm()
%
% RETURN VALUES:
%   S  [system] model of robotic arm.
%   nu [int] number of control inputs.
%   ny [int] number of sensor outputs.
%-----------------------------------------------------------------------------
% Revision 0.0  1995/08/29  kcoppes
% Initial version
%-----------------------------------------------------------------------------
% CALLS:
%   system()
%-----------------------------------------------------------------------------

% ---------- Build the state space matrices:
F = zeros(9,9);
F(1:2,1:2) = [ 0.0,     1.0
               0.0,    -0.2  ];
F(3:4,3:4) = [ 0.0,     1.0
              -139.1,  -0.35 ];
F(5:6,5:6) = [ 0.0,     1.0
              -467.2,  -0.8  ];
F(7:8,7:8) = [ 0.0,     1.0
              -2311.0, -1.44 ];
F(9,1:9) = [ 0.0, 64.7, 0.0, 192.16, 0.0, 80.875, 0.0, 194.1, -64.7 ];

G = [ 0, 2.27, 0, 6.75, 0, 6.818, 0, 2.84, 0.0 ]';

PAGE 350

% MATLAB FUNCTION: arm.m (cont)

H = zeros(3,9);
H(1,1:5) = [ 1.1, 0.0, -1.098, 0.0, 0.0 ];
H(2,9)   = 1.0;
H(3,3:7) = [ 1.1, 0.0, 5.78, 0.0, 11.03 ];
D = [0 0 0]';

S = system(F,G,H,D);
nu = 1;  ny = 2;

PAGE 351

function [Obj] = armeval(Sc)
%-----------------------------------------------------------------------------
% MATLAB FUNCTION: armeval.m
%
% PURPOSE:   Evaluates the total objective matrix for the robotic arm
%            control law.
%
% SYNOPSIS:
%   [Obj] = armeval(Sc)
%
% PARAMETERS:
%   Sc  [system] control law system model (2 inputs, 1 output).
%
% RETURN VALUES:
%   Obj [matrix] 2 x 6 objective matrix with columns corresponding to:
%         TotObj - total objective
%         OLStab - controller poles stable
%         CLStab - closed loop system stable
%         Damp   - dominant poles damping > 0.7
%         Damp2  - non-dominant poles: maximize damping
%         BW     - maximize bandwidth
%       The rows of each column contain entries: [ Pass=(0|1); F ] where
%       Pass indicates whether or not the requirement is satisfied and F
%       is the assigned objective value.
%-----------------------------------------------------------------------------
% LOCAL VARIABLES:
%   objWeights - [real vector] weightings on individual objective values
%                used to compute the total objective value.
%
% INPUT:
%   Sp_nom_g - [system] the nominal robotic arm plant model.
%
% OUTPUT:
%   Sp_nom_g - [system] the nominal robotic arm plant model.
%
% DIAGNOSTICS:
%   Error: "Wrong number of input arguments."
%   Error: "Sc must be a continuous system."
%-----------------------------------------------------------------------------

PAGE 352

% MATLAB FUNCTION: armeval.m (cont)

format compact;

% ---------- Initialize:
global Sp_nom_g
Sp_nom_g = arm;
doPlotNich = 1;
objWeights = [ 10000.0 10.0 0.1 0.001 ]';
Obj = [ ones(1,6); zeros(1,6) ];

% ---------- Check the input arguments:
if (nargin ~= 1)
    error('Wrong number of input arguments.')
end

% ---------- Close the loop:
Scl = smult( select(Sp_nom_g,1,[1,:]), Sc );

sigmaLim  = -0.01;  % Eigenvalue real part lower limit for computing objective.
sigmaScal = 5.00;   % Eigenvalue real part scaling factor for computing objective.

PAGE 353

% MATLAB FUNCTION: armeval.m (cont)

% A. Plant is already stable; check for unstable poles in controller.
Ec = eig(split(Sc));
sigmaMax = max(real(Ec));
if (sigmaMax > 0)
    Obj(1,2) = 0;       % fail if open-loop poles are unstable
end
Obj(2,2) = max( 0.0, sigmaScal*(sigmaMax-sigmaLim) );

% B. Check for unstable closed loop poles
E = eig(split(Scl));
sigmaMaxCL = max(real(E));
if (sigmaMaxCL > 0)
    Obj(1,3) = 0;       % fail if closed-loop poles are unstable
end
Obj(2,3) = max( 0.0, sigmaScal*(sigmaMaxCL-sigmaLim) );

% C. Check for damping of dominant poles
dampScal  = 1;      % scale factor for dominant pole damping
targDamp  = 0.7;    % target damping, dominant poles
targDamp2 = 3;      % target damping, non-dominant poles

i = find(imag(E)~=0);
polefreqs = abs(E(i));
dom = find( abs(E) == min(polefreqs) );
damp = -cos(angle(E(dom(1))));
if ( damp < targDamp )
    Obj(1,4) = 0;       % fail if damping not met
end
Obj(2,4) = max( 0.0, dampScal*(targDamp-damp) );

PAGE 354

% MATLAB FUNCTION: armeval.m (cont)

% D. Non-dominant complex poles: maximize damping
Obj(2,5) = 0;
if ( length(i) > 2 )
    for j = 3:length(i)
        damp2 = abs(cos(angle(E(i(j)))));
        objadd = max( 0.0, dampScal*(targDamp2 - damp2)/2 );
        Obj(2,5) = Obj(2,5) + objadd;
    end
end
if Obj(2,5) > 0
    Obj(1,5) = 0;
end

% E. Maximize bandwidth
bwScal = 1;         % scale factor for bandwidth
targBW = 50.0;      % target bandwidth
minfreq = min(abs(E));
Obj(1,6) = 0;       % always "unsatisfied": looking for max bw
Obj(2,6) = max( 0.0, bwScal*(targBW-minfreq)/targBW );

% ---------- Compute the total objective:
Obj(1,1) = all(Obj(1,2:5));
if ~Obj(1,1)        % lower limit on objectives until *all* reqs. are passed
    Obj(2,2:6) = max( 0.0, Obj(2,2:6) );
end
Obj(2,2:6) = Obj(2,2:6) .* objWeights';
Obj(2,1)   = Obj(2,2:6) * ones(size(Obj(2,2:6)))';

PAGE 355

% MATLAB FUNCTION: armeval.m (cont)

% ---------- Create a fitness table: (activate for table)
if (0)
    table = ' '*zeros(7,80);
    str = sprintf('------------------------------------------------');
    table(1,1:length(str)) = str;
    str = sprintf('|            TOTAL FITNESS RESULTS             |');
    table(2,1:length(str)) = str;
    str = sprintf('------------------------------------------------');
    table(3,1:length(str)) = str;
    str = sprintf('|  Tot  |OLStab|CLStab| Damp | Damp2 |  BW  |');
    table(4,1:length(str)) = str;
    str = sprintf('| %4.2f | %4.2f | %4.2f | %4.2f | %4.2f | %4.2f |',Obj(1,:));
    table(5,1:length(str)) = str;
    str = sprintf('| %4.2f | %4.2f | %4.2f | %4.2f | %4.2f | %4.2f |',Obj(2,:));
    table(6,1:length(str)) = str;
    str = sprintf('------------------------------------------------');
    table(7,1:length(str)) = str;
    disp( table );
end

% ---------- End of Function ----------

PAGE 356

C.4. Evaluation Problem Application

This section includes those elements used to apply the GA and SA methods to the evaluation problem. Elements include:

1. Makefile for the evaluation problem application   (Makefile)
2. C++ application of GAs to the evaluation problem  (gaeval.cpp)
3. C++ application of SAs to the evaluation problem  (saeval.cpp)

PAGE 357

CP = g++

ROOTDIR = /home/kcoppes/thesiswork

LIBS = -L/usr/lib \
       -L/appl/lib/gcc-lib/sparc-sun-sunos4.1.3/2.6.3 \
       -L/appl/lib \
       -L/appl/lang/C++/SC1.0 \
       -L/appl/matlab-4.2/extern/lib/sun4

SYSINCLUDES = -I/usr/include \
       -I/appl/lib/g++-include \
       -I/appl/lib/gcc-lib/sparc-sun-sunos4.1.3/2.6.3/include \
       -I/appl/matlab-4.2/extern/include

CLIBDIR   = $(ROOTDIR)/INCLUDE_CLIB
METHDIR   = $(ROOTDIR)/INCLUDE_METH
SEARCHDIR = $(ROOTDIR)/INCLUDE_SEARCH
EVALDIR   = $(ROOTDIR)/PROB_EVALPROB

STRUCTINCLUDES = -I$(CLIBDIR) \
       -I$(SEARCHDIR) \
       -I$(METHDIR)

PROBINCLUDES = -I$(EVALDIR)

INCLUDES = $(SYSINCLUDES) \
       $(STRUCTINCLUDES) \
       $(PROBINCLUDES)

DEPEND = $(EVALDIR)/evalprob.cpp

gaeval: gaeval.cpp $(DEPEND) $(METHDIR)/sga.cpp
	$(CP) $(INCLUDES) $(LIBS) gaeval.cpp -lm -lio -o gaeval

saeval: saeval.cpp $(DEPEND) $(METHDIR)/svfsr.cpp
	$(CP) $(INCLUDES) $(LIBS) saeval.cpp -lm -lio -o saeval

PAGE 358

// ==========================================================
// APPLICATION: GA Application to evaluation problem
// File:        gaeval.cpp
// Created:     kcoppes
// ==========================================================
#include <stdio.h>
#include <stdlib.h>

// =============================================
// STEP 1: INCLUDE METHOD INFORMATION
// =============================================
#include "sga.cpp"

// =============================================
// STEP 2: INCLUDE PROBLEM SPACE DEFINITION/S
// =============================================
#include "evalprob.cpp"

// =============================================
// STEP 3: SET UP SEARCHES
// =============================================
void main (void){

    // Instantiate problem spaces
    printf("\nprobspace");
    ProbSpace P1 = EvalProb();
    printf(" .. done");

    // Instantiate methods
    SearchMethod GA1 = GA_Method();
    GA_popSize(&GA1)         = 1291;
    GA_nCrossovers(&GA1)     = 5;
    GA_replacePct(&GA1)      = 0.05;
    GA_tourneySize(&GA1)     = 2;
    GA_bestWinProb(&GA1)     = 0.6;
    GA_bitMutateProb(&GA1)   = 0.001;
    GA_steadyState(&GA1)     = 0;
    GA_mateEligiblePct(&GA1) = 1.0;
    GA_representType(&GA1)   = 0;   // -> floating point
    GA1.Debug = 0;

    printf("=-");
    getc(stdin);
    printf("\nCreating search ... ");

PAGE 359

    Search S_P1_GA1(&P1,&GA1);
    S_P1_GA1.SetupTraceFile("temp.asc","w",5);
    S_P1_GA1.ConvTgtMetric = -9;
    S_P1_GA1.SingleStep = 0;
    printf("done");
    S_P1_GA1.Echo = 1;

    printf("\nSearch runs ... ");
    MonitoredRun(S_P1_GA1,100);

    int SampPd = 1;
    int NRUNS  = 1000;
    int NIter  = 100;
    int NRuns  = NRUNS;
    MethodEvaluation T(&GA1,&P1,NIter,SampPd,NRuns);

    double mutateProb[2]  = {0, 0.9};
    double replacePct[2]  = {0.01, 1.0};
    double competePct[2]  = {0.01, 1.0};
    double bestWin[2]     = {0.5, 0.99};
    int popSize[2]        = {5, 1000};
    int repType[2]        = {0, 1};
    int nCrossovers[2]    = {-1, 30};
    int tournSize[2]      = {2, 10};
    int steadyState[2]    = {0, 1};

    T.SetFloatParmRange(0,0,mutateProb);
    T.SetFloatParmRange(0,2,replacePct);
    T.SetFloatParmRange(0,3,competePct);
    T.SetFloatParmRange(0,4,bestWin);
    T.SetIntParmRange  (0,0,popSize);
    T.SetIntParmRange  (0,1,repType);
    T.SetIntParmRange  (0,2,nCrossovers);
    T.SetIntParmRange  (0,4,tournSize);
    T.SetIntParmRange  (0,6,steadyState);

    T.Echo = 0;
    T.DumpEvalRanges("stdout","w");
    T.GenerateRunData();
}

PAGE 360

// ==========================================================
// APPLICATION: SA Application to evaluation problem
// File:        saeval.cpp
// Created:     kcoppes
// Modified:
//   10/4/94  kcoppes  Set up step-by-step formalism
//            kcoppes  Moved problem space def to separate file
// ==========================================================
#include <stdio.h>

// =============================================
// STEP 1: INCLUDE METHOD INFORMATION
// =============================================
#include "svfsr.cpp"

// =============================================
// STEP 2: INCLUDE PROBLEM SPACE DEFINITION/S
// =============================================
#include "evalprob.cpp"

// =============================================
// STEP 3: SET UP SEARCHES
// =============================================
void main (void){

    // Instantiate problem spaces
    printf("\nprobspace");
    ProbSpace P1 = EvalProb();
    printf(" .. done");

    // Instantiate methods
    SearchMethod VFSR1 = VFSR_Method();
PAGE 361

    printf("\nSearch runs ... ");
    MonitoredRun(S_P1_VFSR1,1000);

    int SampPd = 1;
    int NRUNS  = 1500;
    int NIter  = 1000;
    int NRuns  = NRUNS;
    MethodEvaluation T(&VFSR1,&P1,NIter,SampPd,NRuns);

    double ReAnnRescale[2]    = {5.0, 500.0};
    T.SetFloatParmRange(0,3,ReAnnRescale);
    double tempAnnealScale[2] = {50, 250};
    T.SetFloatParmRange(0,5,tempAnnealScale);
    double tempRatioScale[2]  = {1e-5, 1e-4};
    T.SetFloatParmRange(0,6,tempRatioScale);
    double costParmScale[2]   = {1, 5};
    T.SetFloatParmRange(0,7,costParmScale);
    double defDeltaX[2]       = {1e-4, 1e-2};
    T.SetFloatParmRange(0,8,defDeltaX);
    double defInitTemp[2]     = {1.0, 10};
    T.SetFloatParmRange(0,9,defInitTemp);
    double defCostInitTemp[2] = {1e8, 1e10};
    T.SetFloatParmRange(0,10,defCostInitTemp);
    int testPeriod[2]         = {10, 500};
    T.SetIntParmRange(0,0,testPeriod);
    int reAnneal[2]           = {0, 1};
    T.SetIntParmRange(0,2,reAnneal);

    T.Echo = 1;
    T.DumpEvalRanges("stdout","w");
    getc(stdin);
    T.GenerateRunData();
}

PAGE 362

C.5. Mass-Spring-Damper Problem Application

This section includes those elements used to apply the GA and SA methods to the mass-spring-damper problem. Elements include:

1. Makefile for the mass-spring-damper application           (Makefile)
2. C++ application of GAs to the mass-spring-damper problem  (msd_garun.cpp)
3. C++ application of SAs to the mass-spring-damper problem  (msd_sarun.cpp)

PAGE 363

CP = g++

ROOTDIR = /home/kcoppes/thesiswork

LIBS = -L/usr/lib \
       -L/appl/lib/gcc-lib/sparc-sun-sunos4.1.3/2.6.3 \
       -L/appl/lib \
       -L/appl/lang/C++/SC1.0 \
       -L/appl/matlab-4.2/extern/lib/sun4

SYSINCLUDES = -I/appl/lib/g++-include \
       -I/appl/lib/gcc-lib/sparc-sun-sunos4.1.3/2.6.3/include \
       -I/appl/matlab-4.2/extern/include

CLIBDIR   = $(ROOTDIR)/INCLUDE_CLIB
METHDIR   = $(ROOTDIR)/INCLUDE_METH
SEARCHDIR = $(ROOTDIR)/INCLUDE_SEARCH
MSDDIR    = $(ROOTDIR)/PROB_MSD3

STRUCTINCLUDES = -I$(CLIBDIR) \
       -I$(SEARCHDIR) \
       -I$(METHDIR)

PROBINCLUDES = -I$(MSDDIR)

INCLUDES = $(SYSINCLUDES) \
       $(STRUCTINCLUDES) \
       $(PROBINCLUDES)

DEPEND = $(MSDDIR)/msdprob.cpp

PAGE 364

all: msd_garun msd_garun2 msd_sarun msd_sarun2

msd_garun: msd_garun.cpp $(DEPEND) $(METHDIR)/sga.cpp
	$(CP) $(INCLUDES) $(LIBS) msd_garun.cpp -lm -lio -o msd_garun

msd_garun2: msd_garun2.cpp $(DEPEND) $(METHDIR)/sga.cpp
	$(CP) $(INCLUDES) $(LIBS) msd_garun2.cpp -lm -lio -o msd_garun2

msd_sarun: msd_sarun.cpp $(DEPEND) $(METHDIR)/svfsr.cpp
	$(CP) $(INCLUDES) $(LIBS) msd_sarun.cpp -lm -lio -o msd_sarun

msd_sarun2: msd_sarun2.cpp $(DEPEND) $(METHDIR)/svfsr.cpp
	$(CP) $(INCLUDES) $(LIBS) msd_sarun2.cpp -lm -lio -o msd_sarun2

PAGE 365

#include <stdio.h>
#include <stdlib.h>
#include <math.h>

#include "sga.cpp"

// ===============================================
static Engine *ep;
#include "msdprob.cpp"
// ===============================================

void main (void){

    // Instantiate problem space
    printf("\nprobspace");
    ProbSpace P1 = MSDProb();
    printf(" .. done");

    // Instantiate method
    printf("\nmethod");
    SearchMethod GA1 = GA_Method();
    GA_popSize(&GA1)         = 2000;
    GA_nCrossovers(&GA1)     = 1;
    GA_replacePct(&GA1)      = 0.62;
    GA_bestWinProb(&GA1)     = 0.5;
    GA_bitMutateProb(&GA1)   = 0.005;
    GA_steadyState(&GA1)     = 1;
    GA_mateEligiblePct(&GA1) = 0.71;
    GA_representType(&GA1)   = 0;   // -> floating point
    GA1.Debug = 0;
    GA1.Print();

PAGE 366

// APPLICATION: GA Application to mass-spring-damper problem (cont)

    // Open Matlab engine
    ep = NULL;
    // if (!(ep = engOpen("/usr/local/matlab/bin/matlab")))
    if (!(ep = engOpen("/appl/matlab/bin/matlab"))){
        fprintf(stderr, "\nCan't start MATLAB engine\n");
        exit(-1);
    }

    // Instantiate search & initialize
    printf("\nCreating search ... ");
    Search S_P1_GA1(&P1,&GA1);
    S_P1_GA1.SetupTraceFile("temp.asc","w",5);
    S_P1_GA1.ConvTgtMetric = -9;
    S_P1_GA1.SingleStep = 0;
    printf("done");
    S_P1_GA1.Echo = 1;

    // Execute iteration-limited search
    MonitoredRun(S_P1_GA1,100);

    FILE* dFile;
    dFile = fopen("PerfLog.txt","a");
    fprintf(dFile,"\np=[");
    for(int z=0;z<7;z++)
        fprintf(dFile,"\n %e", S_P1_GA1.currentIter->bDATA[0].DATA[z].r());
    fprintf(dFile,"\n];\n\n");

    // Convergence metrics
    fprintf(dFile,"\t\t%10e\t%10e",S_P1_GA1.evalSum,S_P1_GA1.evalSumCost);
    fprintf(dFile,"\t%i",S_P1_GA1.CumFnEvals);
    fprintf(dFile,"\t%i",S_P1_GA1.IterNum);
    fprintf(dFile,"\n");
    fclose( dFile );

    // Close Matlab engine
    engClose(ep);
}

PAGE 367

#include <stdio.h>
#include <stdlib.h>
#include <math.h>

#include "svfsr.cpp"

static Engine *ep;
#include "msdprob.cpp"

void main (void){

    // Instantiate problem space
    printf("\nprobspace");
    ProbSpace P1 = MSDProb();
    printf(" .. done");

    // Instantiate method
    printf("\nmethod");
    SearchMethod VFSR1 = VFSR_Method();
    VFSR_ReAnnRescale((&VFSR1))     = 500.0;
    VFSR_tempAnnealScale((&VFSR1))  = 50.0;
    VFSR_tempRatioScale((&VFSR1))   = 1e-4;
    VFSR_costParmScale((&VFSR1))    = 3.3;
    VFSR_defDeltaX((&VFSR1))        = 0.0001;
    VFSR_defInitTemp((&VFSR1))      = 10.0;
    VFSR_defCostInitTemp((&VFSR1))  = 1.0e10;
    VFSR_testPeriod((&VFSR1))       = 500;
    VFSR_optNoReanneal((&VFSR1))    = 0;
    VFSR1.Debug = 0;
    VFSR1.Print();

PAGE 368

ep NULL; fprintf(s1:derr, n\nCan't S1:ar1: MATLAB engine\n"); prin1:f(n\nCrea1:ing search ... "); Search S_Pl_'!E"SR1(&Pl,&VE"SR1); S _Fl_lFSR1. SetupTraceFi le ( 3SC", "',.,",5) ; S_Pl_'!:SR1.::c:wTg1:Me,:ric S_Fl_VFSRl.SingleStep = 0; S Pl 'fE"SRl.Echo 0; printf("done"); MonitoredRun(S_Pl_VE"SRl,lOOO); FILE dFile; dfile= fopen(nperfLog.txt","a"); fprintf(dFile,"\np=["); fprin1:f(dE"ile,"\n S_Pl_VFSRl.currentlter->fDATA[Oj.DATA.r(Z)); fprintf(dFile,"\nj;\n\n"); fprin1:f(dE"ile,"\':\t%l0e\t%lOe",S_Fl_VFSRl.evalSum,S_Pl_VE'SR1.evalSumCost); fprin1:f (dnle,"\ tH", S_Pl_VFSR1. C:.Jm5earchCost:.) ; fprintf (dFile, "\th", S_Pl_VFSRl. ;:::'..:mFnEvals/; fprin1:f (dFile, "\ t %i", S _Pl_ VFSRl. IterNum) ; fprintf(dFl1e,"\n"); fclose ( dFile ); engClose (ep);

PAGE 369

C.6. Robotic Arm Problem Application

This section includes those elements used to apply the GA and SA methods to the robotic arm problem. Elements include:

1. Makefile for the robotic arm application           (Makefile)
2. C++ application of GAs to the robotic arm problem  (arm_garun.cpp)
3. C++ application of SAs to the robotic arm problem  (arm_sarun.cpp)

PAGE 370

CP = g++

ROOTDIR = /home/kcoppes/thesiswork

LIBS = -L/usr/lib \
       -L/appl/lib/gcc-lib/sparc-sun-sunos4.1.3/2.6.3 \
       -L/appl/lib \
       -L/usr4/libg++-2.6.1/libio \
       -L/appl/lang/C++/SC1.0 \
       -L/lib \
       -L/appl/matlab-4.2/extern/lib/sun4

SYSINCLUDES = -I/usr/include \
       -I/appl/lib/g++-include \
       -I/appl/lib/gcc-lib/sparc-sun-sunos4.1.3/2.6.3/include \
       -I/appl/matlab-4.2/extern/include

CLIBDIR   = $(ROOTDIR)/INCLUDE_CLIB
METHDIR   = $(ROOTDIR)/INCLUDE_METH
SEARCHDIR = $(ROOTDIR)/INCLUDE_SEARCH
ARMDIR    = $(ROOTDIR)/P_ARM_GA

STRUCTINCLUDES = -I$(CLIBDIR) \
       -I$(SEARCHDIR) \
       -I$(METHDIR)

PROBINCLUDES = -I$(ARMDIR)

INCLUDES = $(SYSINCLUDES) \
       $(STRUCTINCLUDES) \
       $(PROBINCLUDES)

DEPEND = $(ARMDIR)/armprob.cpp

PAGE 371

all: arm_garun arm_garun2 arm_sarun arm_sarun2

arm_garun: arm_garun.cpp $(DEPEND) $(METHDIR)/sga.cpp
	$(CP) $(INCLUDES) $(LIBS) arm_garun.cpp -lmat -lm -lio -o arm_garun

arm_garun2: arm_garun2.cpp $(DEPEND) $(METHDIR)/sga.cpp
	$(CP) $(INCLUDES) $(LIBS) arm_garun2.cpp -lmat -lm -lio -o arm_garun2

arm_sarun: arm_sarun.cpp $(DEPEND) $(METHDIR)/svfsr.cpp
	$(CP) $(INCLUDES) $(LIBS) arm_sarun.cpp -lmat -lm -lio -o arm_sarun

arm_sarun2: arm_sarun2.cpp $(DEPEND) $(METHDIR)/svfsr.cpp
	$(CP) $(INCLUDES) $(LIBS) arm_sarun2.cpp -lmat -lm -lio -o arm_sarun2

PAGE 372

// ==========================================================
// APPLICATION: GA Application to robotic arm problem
// File:        arm_garun.cpp
// Created:     kcoppes
// Note: arm_garun2.cpp is identical except for GA parameters
// ==========================================================
#include <stdio.h>
#include <stdlib.h>
#include <math.h>

// =============================================
// STEP 1: INCLUDE METHOD INFORMATION
// =============================================
#include "sga.cpp"

// =============================================
// STEP 2: INCLUDE ROBOTIC ARM PROBLEM
// =============================================
static Engine *ep;
#include "armprob.cpp"

// =============================================
// STEP 3: SET UP SEARCHES
// =============================================
void main (void){

    // Instantiate problem space
    printf("\nprobspace");
    ProbSpace P1 = ARMProb();
    printf(" .. done");

    // Instantiate method
    printf("\nmethod");
    SearchMethod GA1 = GA_Method();
    GA_popSize(&GA1)         = 2000;
    GA_nCrossovers(&GA1)     = 1;
    GA_replacePct(&GA1)      = 0.62;
    GA_bestWinProb(&GA1)     = 0.5;
    GA_bitMutateProb(&GA1)   = 0.005;
    GA_steadyState(&GA1)     = 0;
    GA_mateEligiblePct(&GA1) = 0.71;
    GA_representType(&GA1)   = 0;   // -> floating point
    GA1.Debug = 0;
    GA1.Print();

PAGE 373

    // Open Matlab engine
    ep = NULL;
    if (!(ep = engOpen("/appl/matlab/bin/matlab"))){
        fprintf(stderr, "\nCan't start MATLAB engine\n");
        exit(-1);
    }

    // Instantiate search & initialize
    printf("\nCreating search ... ");
    Search S_P1_GA1(&P1,&GA1);
    S_P1_GA1.SetupTraceFile("temp.asc","w",5);
    S_P1_GA1.ConvTgtMetric = -9;
    S_P1_GA1.SingleStep = 0;
    S_P1_GA1.Echo = 1;
    printf("done");

    MonitoredRun(S_P1_GA1,100);

    FILE* dFile;
    dFile = fopen("PerfLog.txt","a");
    fprintf(dFile,"\np=[");
    for(int z=0;z<4;z++)
        fprintf(dFile,"\n %e", S_P1_GA1.currentIter->bDATA[0].DATA[z].r());
    fprintf(dFile,"\n];\n\n");

    fprintf(dFile,"\t\t%10e\t%10e",S_P1_GA1.evalSum,S_P1_GA1.evalSumCost);
    fprintf(dFile,"\t%i",S_P1_GA1.CumSearchCost);
    fprintf(dFile,"\t%i",S_P1_GA1.CumFnEvals);
    fprintf(dFile,"\t%i",S_P1_GA1.IterNum);
    fprintf(dFile,"\n");
    fclose( dFile );

    engClose(ep);
}

PAGE 374

// ==========================================================
// APPLICATION: SA Application to robotic arm problem
// File:        arm_sarun.cpp
// Created:     kcoppes
// Note: arm_sarun2.cpp is identical except for SA parameters
// ==========================================================
#include <stdio.h>
#include <stdlib.h>
#include <math.h>

// =============================================
// STEP 1: INCLUDE METHOD INFORMATION
// =============================================
#include "svfsr.cpp"

// =============================================
// STEP 2: INCLUDE ROBOTIC ARM PROBLEM
// =============================================
static Engine *ep;
#include "armprob.cpp"

// =============================================
// STEP 3: SET UP SEARCHES
// =============================================
void main (void){

    // Instantiate problem space
    printf("\nprobspace");
    ProbSpace P1 = ARMProb();
    printf(" .. done");

    // Instantiate method
    SearchMethod VFSR1 = VFSR_Method();
    VFSR_ReAnnRescale((&VFSR1))     = 500.0;
    VFSR_tempAnnealScale((&VFSR1))  = 50.0;
    VFSR_tempRatioScale((&VFSR1))   = 1e-4;
    VFSR_costParmScale((&VFSR1))    = 3.3;
    VFSR_defDeltaX((&VFSR1))        = 0.0001;
    VFSR_defInitTemp((&VFSR1))      = 10.0;
    VFSR_defCostInitTemp((&VFSR1))  = 1.0e10;
    VFSR_testPeriod((&VFSR1))       = 500;
    VFSR_optNoReanneal((&VFSR1))    = 0;
    VFSR1.Debug = 0;
    VFSR1.Print();

PAGE 375

    // Open Matlab engine
    ep = NULL;
    if (!(ep = engOpen("/appl/matlab/bin/matlab"))){
        fprintf(stderr, "\nCan't start MATLAB engine\n");
        exit(-1);
    }

    // Instantiate search & initialize
    printf("\nCreating search ... ");
    Search S_P1_VFSR1(&P1,&VFSR1);
    S_P1_VFSR1.SetupTraceFile("temp.asc","w",5);
    S_P1_VFSR1.ConvTgtMetric = -9;
    S_P1_VFSR1.SingleStep = 0;
    printf("done");
    S_P1_VFSR1.Echo = 1;

    MonitoredRun(S_P1_VFSR1,1000);

    FILE* dFile;
    dFile = fopen("PerfLog.txt","a");
    fprintf(dFile,"\np=[");
    for(int z=0;z<4;z++)
        fprintf(dFile,"\n %e", S_P1_VFSR1.currentIter->fDATA[0].DATA.r(z));
    fprintf(dFile,"\n];\n\n");

    fprintf(dFile,"\t\t%10e\t%10e",S_P1_VFSR1.evalSum,S_P1_VFSR1.evalSumCost);
    fprintf(dFile,"\t%i",S_P1_VFSR1.CumSearchCost);
    fprintf(dFile,"\t%i",S_P1_VFSR1.CumFnEvals);
    fprintf(dFile,"\t%i",S_P1_VFSR1.IterNum);
    fprintf(dFile,"\n");
    fclose( dFile );

    engClose(ep);
}

PAGE 376

D: Advanced Control Toolbox Function Descriptions

PAGE 377

ADVANCED CONTROL TOOLBOX FUNCTION DESCRIPTIONS

function [Peaks] = findpeaks(F,omega,dispFlag)
%-----------------------------------------------------------------------------
% PURPOSE:   Find the peaks of a frequency response.
%
% SYNOPSIS:
%   [Peaks] = findpeaks(F,omega,dispFlag)
%
%   F        [complex vector] complex-valued frequency response.
%   omega    [real vector] frequency points.
%   dispFlag [boolean] {0} when non-zero, results are displayed.
%
% RETURN VALUES:
%   Peaks [real matrix] location of peaks: [ omegaPk, DbPk, PhPk ].
%-----------------------------------------------------------------------------

function freq = freqvec(S,lb,ub,OPT0,OPT1);
%-----------------------------------------------------------------------------
%   freq = freqvec(S,lb,ub,np,'siso');
%
% PURPOSE:   Computes a frequency vector for frequency response analysis
%            of a linear differential system. The frequency vector has a
%            variable grid density derived from the poles and zeros of
%            the system.
%
%   S      [system] system for which the frequency responses will be
%          computed.
%   lb     [real scalar] lower frequency bound (rad/sec).
%   ub     [real scalar] upper frequency bound (rad/sec).
%   np     [real scalar] {50} number of frequency points.
%   'siso' [flag] use the SISO zeros rather than the system transmission
%          zeros.
%
% RETURN VALUES:
%   freq [real vector] analysis frequencies in rad/sec.
%-----------------------------------------------------------------------------


% (The header and synopsis of this frequency response function did not survive
%  the scan; the remaining option and return value descriptions follow.)
%
%   'dir'       [flag] The default algorithm ('dir') computes the responses
%               directly from the system (a,b,c,d)-matrices.  The responses
%               are computed from the Hessenberg form ('hess') or the modal
%               form ('eig') by the other algorithms.
%   'nobalance' [flag] Skip the step of balancing the a-matrix when computing
%               the Hessenberg or modal forms (see the options 'hess' or
%               'eig').
%   'sigma'     [flag] Compute the singular values of the frequency responses.
%   'inverse'   [flag] Compute the inverse of the system frequency responses.
%   'retdiff'   [flag] Compute the frequency responses of the return
%               difference, I+G(s), or the inverse return difference,
%               I+inv(G(s)), if the inverse responses are selected.
%   'osborne'   [flag] Apply Osborne's algorithm for balancing the 2-norm of
%               the frequency response matrices at each frequency.
%   'auto'      [flag] Auto-select the algorithm if an ill-conditioned problem
%               is detected.
%
% RETURN VALUES:
%   [complex matrix] The complex frequency responses of the system.  The
%   frequency responses for each input/output pair are stored in columns;
%   columns p*(i-1)+1 through i*p are the frequency responses for the i-th
%   input, where p is the number of outputs.
%------------------------------------------------------------------------------
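
The 'osborne' option refers to Osborne's iterative diagonal balancing.  The
sketch below is a generic illustration of that idea, not the toolbox's own
implementation: a diagonal similarity scaling D*G*inv(D) is refined until each
row and column of a response matrix G have roughly equal 2-norms, which
improves the conditioning of subsequent singular value computations.

    function [Gb, d] = osborne_balance(G, nIter)
    % Generic Osborne-style balancing: choose D = diag(d) so that D*G*inv(D)
    % has approximately equal row and column 2-norms.
    n = size(G, 1);
    d = ones(n, 1);
    for it = 1:nIter
        for i = 1:n
            r = norm(G(i, :));            % 2-norm of row i
            c = norm(G(:, i));            % 2-norm of column i
            if r > 0 && c > 0
                s = sqrt(c / r);          % scaling that equalizes the two norms
                G(i, :) = G(i, :) * s;    % row i of D*G*inv(D)
                G(:, i) = G(:, i) / s;    % column i of D*G*inv(D)
                d(i) = d(i) * s;
            end
        end
    end
    Gb = G;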


function [norm,normV] = h2norm(S,type)
%------------------------------------------------------------------------------
% PURPOSE: Returns the H2-norm of a continuous system.
%
% SYNOPSIS:
%   [norm,normV] = h2norm(S)
%
%   S    [system]  continuous time system.
%   type [boolean] See normV.
%
% RETURN VALUES:
%   norm  [real]        H2-norm of the system.
%   normV [real vector] type==0 ---> from system inputs to individual outputs.
%                       type==1 ---> from individual inputs to the system
%                                    outputs.
%------------------------------------------------------------------------------

function Slft = lft(S1,S2)
%------------------------------------------------------------------------------
%   Slft = lft(S1,S2);
%
% PURPOSE: linear fractional transformation (LFT) state-space realization.
%
%   S1 [system] first system state model.
%   S2 [system] second system state model.
%
% RETURN VALUES:
%   Slft [system] LFT state-space realization.
%
%        --->| S1 |--->                --->| S2 |--->
%        +---|    |<--+        or      +---|    |<--+
%        |            |                |            |
%        +-->| S2 |---+                +-->| S1 |---+
%
% NOTE: If S1 or S2 are regular matrices then they are assumed to be pure
%   gain systems.
%------------------------------------------------------------------------------
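
As a hedged usage sketch (the controller below is an arbitrary first-order
model, and the input/output partitioning of the interconnection must match the
application's conventions):

    P = system(a, b, c, d);          % plant from the earlier example
    [n2, n2V] = h2norm(P, 0);        % overall H2-norm plus per-output contributions
    K = system(-10, 10, -1, 0);      % illustrative first-order controller model
    Scl = lft(P, K);                 % interconnect P and K per the LFT diagram above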


function [out1,out2] = margins(G,omega,dispFlag)
%------------------------------------------------------------------------------
% PURPOSE: Computes the gain and phase margins and the crossover frequencies
%   at which they occur from the open-loop frequency responses.
%
% SYNOPSIS:
%   [Gm, Pm] = margins(G,omega,dispFlag)
%
%   G        [complex vector] Open-loop frequency response.
%   omega    [real vector]    Corresponding frequency vector.
%   dispFlag [boolean]        {1} Display margins when true.
%
% RETURN VALUES:
%   Gm [real matrix] Gain margins in dB.  The first column contains the
%      crossover frequencies and the second column contains the margin values.
%   Pm [real matrix] Phase margins in degrees.  The first column contains the
%      crossover frequencies and the second column contains the margin values.
%------------------------------------------------------------------------------

function plotnich(g,phaselim,maglim)
%------------------------------------------------------------------------------
%   plotnich(g,phaselim,maglim)
%
% PURPOSE: Nichols plot.
%
% (REQUIRED)
%   g [complex matrix] frequency responses in columns.
% (OPTIONAL)
%   phaselim [real scalar] {360} upper phase limit.
%   maglim   [real vector] {[min(mag) max(mag)]} magnitude plot limits.
%
% RETURN VALUES: none
%------------------------------------------------------------------------------
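
Continuing the earlier sketch, an open-loop response F computed over the grid
omega can be checked for classical margins and drawn on a Nichols chart
(conventions as documented above):

    [Gm, Pm] = margins(F, omega, 1);   % gain margins (dB) and phase margins (deg), displayed
    plotnich(F, 360);                  % Nichols plot with a 360-degree upper phase limit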


function [Ssub] = select(S,Iu,Iy,Ix)
%------------------------------------------------------------------------------
%   [Ssub] = select(S,Iu,Iy,Ix);
%
% PURPOSE: select a subset of inputs, outputs, or states from a system.
%
% (REQUIRED)
%   S  [system matrix] system.
%   Iu [real vector]   indices of inputs.
% (OPTIONAL)
%   Iy [real vector] indices of outputs.
%   Ix [real vector] indices of states.
%
% RETURN VALUES:
%   Ssub [system matrix] Subsystem.
%------------------------------------------------------------------------------

function [R1,R2,R3,R4] = sinfo(S)
%------------------------------------------------------------------------------
%   [n,m,p,Ts] = sinfo(S);
%
% PURPOSE: returns information about a system matrix.
%
%   S [system] state-space system.
%
% RETURN VALUES:
%   n  [int]  number of states.
%   m  [int]  number of inputs.
%   p  [int]  number of outputs.
%   Ts [real] sample period (discrete time domain systems).
%      Note: Ts = 0 is returned for continuous time domain systems.
%------------------------------------------------------------------------------
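
For example (again assuming the documented conventions):

    [n, m, p, Ts] = sinfo(S);     % state/input/output counts; Ts == 0 here (continuous time)
    Ssub = select(S, 1, 1:p);     % subsystem keeping input 1 and all p outputs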


function S = smult(S1,S2)
%------------------------------------------------------------------------------
%   S = smult(S1,S2);
%
% PURPOSE: Multiply (series connection of) state-space systems.
%
% (REQUIRED)
%   S1 [system matrix] first system.
%   S2 [system matrix] second system.
%
% RETURN VALUES:
%   S [system matrix] system formed from S2*S1.
%
% NOTE: If S1 or S2 are regular matrices then they are multiplied as gains.
%------------------------------------------------------------------------------

function [R1,R2,R3,R4,R5,R6,R7,R8,R9,R10] = split(S,m2,p2)
%------------------------------------------------------------------------------
%   [a,b,c,d]                          = split(S);
%   [a,b,c,d,Ts]                       = split(S);
%   [a,b1,b2,c1,c2,d11,d12,d21,d22]    = split(S,nu,ny);
%   [a,b1,b2,c1,c2,d11,d12,d21,d22,Ts] = split(S,nu,ny);
%
% PURPOSE: extract the state-space matrices from a system matrix.
%
%   S  [system] state-space system matrix.
%   nu [int]    number of control inputs (columns of b2).
%   ny [int]    number of measurement outputs (rows of c2).
%
% RETURN VALUES:
%   a,b,c,d, or a,b1,b2,c1,c2,d11,d12,d21,d22 [matrix] state-space system
%      matrices.
%   Ts [real] sample period (discrete time domain systems).
%      Note: Ts = 0 is returned for continuous time domain systems.
%------------------------------------------------------------------------------
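
For example, a loop transfer function can be formed and unpacked as follows
(the S2*S1 ordering follows the convention noted above):

    L = smult(P, K);              % series connection: L = K*P
    [al, bl, cl, dl] = split(L);  % recover the interconnected state-space matrices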


function [S] = system(a,b,c,d,Ts)
%------------------------------------------------------------------------------
%   [S] = system(a,b,c,d,Ts);
%   [S] = system(a,b1,b2,c1,c2,d11,d12,d21,d22,Ts);
%
% PURPOSE: create a system matrix from the state-space matrices.
%
%   a,b,c,d, or a,b1,b2,c1,c2,d11,d12,d21,d22 [general matrix] state-space
%      matrices.
%
% OPTIONS:
%   Ts [real scalar] sample period (for discrete time domain systems only).
%
% RETURN VALUES:
%   S [system matrix] compact representation of a state-space system.
%------------------------------------------------------------------------------
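
A short discrete-time example (the scalar matrices are illustrative):

    ad = 0.9; bd = 1; cd = 1; dd = 0;    % example first-order data
    Sd = system(ad, bd, cd, dd, 0.01);   % discrete-time system, 10 ms sample period
    [n, m, p, Ts] = sinfo(Sd);           % sinfo reports Ts = 0.01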


LIST OF ABBREVIATIONS

GA      Genetic Algorithm
flops   Floating Point Operations
FP-SS   Floating Point, Steady State
MSD     Mass-Spring-Damper
MSD3    3-segment Mass-Spring-Damper
MSLS    Multi-Service Launch System
OOA     Object-Oriented Analysis
PID     Proportional-Integral-Derivative Control
PIDF    Proportional-Integral-Derivative-Feedback Control
SA      Simulated Annealing
TOS     Transfer Orbit Stage
VFSR    Very Fast Simulated Reannealing


REFERENCES

[AV92]   Nikolaus Almassy and Paul Verschure. "Optimizing Self-Organizing Control
         Architectures with Genetic Algorithms: The interaction between natural
         selection and ontogenesis", Technical Report Nr. 92.10, Institute for
         Informatics - AI Laboratory, University Zurich-Irchel, Switzerland, 1992.

[Bac92]  Thomas Back. "Self-Adaptation in Genetic Algorithms",
         ftp://lumpi.informatik.uni-dortmund.de/pub/GA/papers, University of
         Dortmund, Department of Computer Science, Dortmund, Germany, 1992.

[Bac92b] Thomas Back. "The Interaction of Mutation Rate, Selection, and
         Self-Adaptation Within a Genetic Algorithm",
         ftp://lumpi.informatik.uni-dortmund.de/pub/GA/papers, University of
         Dortmund, Department of Computer Science, Dortmund, Germany, 1992.

[BBM93]  D. Beasley, D.R. Bull, and R.R. Martin. "An Overview of Genetic
         Algorithms: Part 1, Fundamentals", prepared for University Computing,
         15(2), 1993.

[BBM93b] D. Beasley, D.R. Bull, and R.R. Martin. "An Overview of Genetic
         Algorithms: Part 2, Research Topics", prepared for University Computing,
         15(4), 1993.

[BD93]   A. Bertoni and M. Dorigo. "Implicit Parallelism in Genetic Algorithms",
         prepared for Artificial Intelligence, 61(2), 1993.

[BS61]   C.L. Barnhart, ed., and J. Stein, ed. The L.W. Singer Company, Syracuse,
         New York, 1961.


[Gem84]  S. Geman and D. Geman. "Stochastic relaxation, Gibbs distributions, and
         the Bayesian restoration of images", IEEE Transactions on Pattern
         Analysis and Machine Intelligence, vol. 6, no. 6, 721-741, 1984.

[Hei93]  Joerg Heitkotter and David Beasley, eds. The Hitch-Hiker's Guide to
         Evolutionary Computation, issue 3.1,
         ftp://lumpi.informatik.uni-dortmund.de/pub/EA/docs/hhgtec.ps.gz, 1995.

[Hol75]  J.H. Holland. Adaptation in Natural and Artificial Systems. University
         of Michigan Press, Ann Arbor, 1975.

[Ing89]  Lester Ingber. "Very Fast Simulated Re-Annealing", Mathematical and
         Computer Modelling, vol. 12, no. 8, 967-973, 1989.

[Ing93]  Lester Ingber. "Simulated Annealing: Practice versus theory",
         Mathematical and Computer Modelling, 18(11), 29-57, 1993.

[Ing94]  Lester Ingber. "Adaptive simulated annealing (ASA): Shades of
         annealing", Lester Ingber Research, ingber@alumni.caltech.edu.

[IR92]   Lester Ingber and B.E. Rosen. "Genetic Algorithms and Very Fast
         Simulated Reannealing: A Comparison", Mathematical and Computer
         Modelling, vol. 16, no. 11, 87-100, 1992.

[Kir83]  S. Kirkpatrick, C.D. Gelatt, Jr., and M.P. Vecchi. "Optimization by
         simulated annealing", Science, 220(4598), 671-680, 1983.

[KMN89]  D. Kahaner, C. Moler, and S. Nash. Numerical Methods and Software.
         Prentice Hall, Englewood Cliffs, New Jersey, 1989.

[MG92]   Samir W. Mahfoud and D.E. Goldberg. "Parallel Recombinative Simulated
         Annealing: A Genetic Algorithm", IlliGAL Report No. 92002, Illinois
         Genetic Algorithms Laboratory, Department of General Engineering,
         University of Illinois at Urbana-Champaign, 1992.

[MH87]   M. Mitchell and J.H. Holland. "When Will a Genetic Algorithm Outperform
         Hill-Climbing?", mm@santafe.edu, John.Holland@um.cc.umich.edu.


[Rad1]   N. Radcliffe. "The Algebra of Genetic Algorithms", njr@epcc.ed.ac.uk,
         Edinburgh Parallel Computing Centre, University of Edinburgh, King's
         Buildings, EH9 3JZ, Scotland.

[Ros1]   B.E. Rosen. "Function Optimization based on Advanced Simulated
         Annealing",
         ftp://archive.cis.ohio-state.edu/pub/neuroprose/rosen.advsim.ps,
         Division of Mathematics, Computer Science and Statistics, University
         of Texas at San Antonio, San Antonio, Texas.

[Sch85]  E. Schmitz. Stanford University, April 1985.

[SM88]   S. Shlaer and S.J. Mellor. Object-Oriented Systems Analysis: Modeling
         the World in Data. Prentice Hall, Englewood Cliffs, New Jersey, 1988.

[SM92]   S. Shlaer and S.J. Mellor. Object Lifecycles: Modeling the World in
         States. Prentice Hall, Englewood Cliffs, New Jersey, 1992.

[Vot93]  C. Voth. "Genetic Algorithms for Control Systems Design and Analysis",
         AAS paper, August 1993.