Citation
Heuristics search methods for globally optimal design

Material Information

Title:
Heuristics search methods for globally optimal design
Alternate title:
Heuristic search methods
Creator:
Hu, Nanfang
Place of Publication:
Denver, CO
Publisher:
University of Colorado Denver
Publication Date:
Language:
English
Physical Description:
x, 107 leaves : illustrations ; 29 cm

Subjects

Subjects / Keywords:
Mathematical optimization ( lcsh )
Heuristic programming ( lcsh )
Heuristic programming ( fast )
Mathematical optimization ( fast )
Genre:
bibliography ( marcgt )
theses ( marcgt )
non-fiction ( marcgt )

Notes

Bibliography:
Includes bibliographical references (leaves 99-107).
General Note:
Spine title: Heuristic search methods.
Thesis:
Submitted in partial fulfillment of the requirements for the degree, Master of Science, Department of Mathematical and Statistical Sciences
Statement of Responsibility:
by Nanfong Hu.

Record Information

Source Institution:
|University of Colorado Denver
Holding Location:
|Auraria Library
Rights Management:
All applicable rights reserved by the source institution and holding location.
Resource Identifier:
23525030 ( OCLC )
ocm23525030
Classification:
LD1190.L62 1990m .H8 ( lcc )

Downloads

This item has the following downloads:


Full Text
HEURISTICS SEARCH METHODS
FOR GLOBALLY OPTIMAL DESIGN
by
Nanfang Hu
B.A., Northwestern Polytechnic University, Xian, China, 1981
A thesis submitted to the
Faculty of the Graduate School of the
University of Colorado in partial fulfillment
of the requirements for the degree of
Master of Science
Department of Mathematics
1990


This thesis for Master of Science
degree by
Nanfang Hu
has been approved for the
Department of Mathematics
by
Date


Dedicated to Weiqi, my daughter


Nanfang Hu (M.S., Applied Mathematics)
Heuristic search methods for globally optimal design
Thesis directed by Professor Harvey J. Greenberg
Optimum engineering design problems are usually formulated as nonconvex
optimization problems of continuous variables. Due to the absence of con-
vexity structure, they can have multiple minima, and global optimzation
becomes difficult. Traditional methods of optimization, such as penalty
methods, can often be trapped at a local optimum. Two new algorithms
to solve approximately these problems are introduced. Their reliabilities
and effciencies are examined with the help of standard test functions. By
the analysis of the implementations, it is seen that tabu search methods are
easy to use, and outperform the composite genetic algorithms. In particu-
lar, a tabu search method is applied to minimum weight design examples
of 3-bar Pyramid, three-bar truss, coil springs, z-section, and channel sec-
tion. For the channel section, the optimal design using tabu search'methods
saved 26.14% over the weight of the SUMT method.
The form and content of this abstract are approved. I recommended its
publication.
Signed
Harvey J. Greenberg


Contents
CHAPTER
1 INTRODUCTION 1
2 OPTIMAL DESIGN: MODELS AND METHODS 5
2.1 General Formulation................................ 5
2.2 Optimum Design of Mechanical Elements............ 9
2.3 Optimum Structural Design......................... 11
2.4 Optimum VLSI Design............................... 20
2.5 Optimum Shape Design ............................. 22
2.6 Optimum Reliability Design........................ 26
2.7 Multiobjective Optimization....................... 29
2.8 Mathematical Methods of Optimal Design............ 32
3 HEURISTIC SEARCH METHODS 38
3.1 Problem Solving and Search........................ 38
3.2 Heuristics........................................ 39
3.3 Iterative Improvement Methods..................... 41
3.4 Exploitation and Exploration...................... 45
3.5 Optimization by Genetic Algorithms................ 46
3.5.1 Adaptive Process............................ 48
3.5.2 Schemata.................................... 48
3.5.3 Chromosome ................................. 50
3.5.4 Evaluation Function......................... 51
3.5.5 Population and Reproduction Plans........... 51
3.5.6 Genetic Operators .......................... 52
3.5.7 Implicit Parallelism and Building Blocks Hypothesis 53


VI
3.5.8 Schema Theorem.................................. 55
3.5.9 Genetic Algorithms.............................. 55
3.5.10 Composite Genetic Algorithm.................... 57
3.6 Optimization by Tabu Search........................... 59
3.6.1 Tabu Move and Tabu List......................... 60
3.6.2 Strategic Oscillation........................... 61
3.6.3 Aspiration Levels............................... 61
3.6.4 Intermediate Memory and Long-term Memory ... 62
3.6.5 Tabu Search..................................... 62
4 EMPIRICAL STUDIES 66
4.1 Standard Problems..................................... 66
4.2 One-dimensional Algorithms............................ 68
4.3 Multiple-dimensional Algorithms ...................... 74
4.4 Discussions ...................................> . 80
5 APPLICATION EXAMPLES 86
5.1 3-bar Pyramid......................................... 86
5.2 Three-bar Truss....................................... 91
5.3 Coil Springs.......................................... 91
5.4 Thin-walled Columns................................... 94
REFERENCES
98


Vll
List of Figures
2.1 A coil spring........................................ 10
2.2 3-bar Pyramid observed from above.................... 13
2.3 Three-bar truss...................................... 16
2.4 Z-section and channel section........................ 17
2.5 Circuit example...................................... 21
2.6(a) Cross section of typical turbine disc ..................... 24
2.6(b)Discretized nonlinear programming model................ 24
2.7 A example of complex system.......................... 29
3.1 Crossover operator................................... 53
3.2 Comparisons of natural and genetic algorithms........ 56
4.1 The curve of test function funl...................... 69
4.2 The curve of test function fun2............................. 70
4.3 The curve of test function fun3...................... 71
4.4 The curve of test function fun4...................... 72
4.5 Convergent trajectories of one-dimensional test function funl 75
4.6 Convergent trajectories of one-dimensional test function fun2 76
4.7 Convergent trajectories of one-dimensional test function fun3 77
4.8 Convergent trajectories of one-dimensional test function fun4 78
4.9 Convergent trajectories of test function ffunl....... 81
4.10 Convergent trajectories of test function ffun2....... 82
4.11 Convergent trajectories of test function ffun3....... 83
4.12 Convergent trajectories of test function ffun4....... 84
5.1 Convergent trajectory of case I using tabu search.... 87
5.2 Convergent trajectory of case II using tabu search .... 89
5.3 Convergent trajectory of case III using tabu search .... 90


vrn
5.4 Convergent trajectory of optimal design of three-bar truss 92
5.5 Convergent trajectory of optimal design of coil springs . 93
5.6 Convergent trajectory of optimal design of Z-section ... 95
5.7 Convergent trajectory of optimal design of channel .... 96


IX
List of Tables
4.1 Test functions of one-dimension and optimal solutions ... 73
4.2 Comparison of genetic algorithms, tabu search, and random
search with the same degree of deviation (one-dimensional
cases) .................................................. 74
4.3 Test functions and optimal solutions .................... 79
4.4 Comparisons of genetic algorithms, tabu search, and random
search with the same degree of deviation (multi-dimensional
cases) .................................................. 80
5.1 The minimum weight design of 3-bar Pyramid subject to a
compressive stress constraint........................... 86
5.2 The minimum weight design of 3-bar Pyramid subject to an
element buckling constraint ............................. 88
5.3 The minimum weight design of 3-bar Pyramid subject to a
displacement constraint.................................. 88
5.4 The minimum weight design of three-bar truss............. 91
5.5 The minimum weight design of coil springs................ 94
5.6 Comparisons of optimal design using SUMT and tabu search 94
5.7 Comparisons of optimal design using SUMT and tabu search 97


X
ACKNOWLEDGEMENT
I am deeply in debt to Professor H.J. Greenberg for his rigorous ideas,
beneficial discussions, and helpful guidance.
Much of my work was inspired by the recent developments in combina-
torial optimization. I am, therefore, thankful to Professor J. Ryan for her
lectures in Math Clinic.
I am also indebted to Pratt &; Whitney, Inc. for its finanical support
so that I can have a chance to continue my education.
Finally, I have to express my gratitude to my wife and family. Owing
to the support and assistance of my wife, Xiaoqing, I can spend much time
to pursue my academic goal. To Weiqi, far away in China, I have to say
that she, as a sweetheart of her parent, accepted little love from her father.
I


CHAPTER 1
INTRODUCTION
Optimal design is a branch of science applying scientific methods, or/and
technological means in a reasonable way to meet the societys need. Opti-
mal design problems are first considered in aeronautical engineering. Be-
cause the weight of the structures has a critical effect on the mission perfor-
mance and safety of aircrafts, the minimum weight design of basic aircraft
components, such as columns and stiffened panels, subject to compressive
loads, was initially developed during World War II. Owing to the devel-
opment of computer technology, more complex design problems can be
effectively analysed. Therefore, the concept of optimal design is more and
more addressed in engineering design fields. Today, considerations of lim-
ited energy resources, shortage of ecomonic and material resources, strong
technological competition, and environmental problems have further in-
creased the significance of optimal design.
Of optimal engineering designs, the most attractive is structural op-
timization. This is not only because there exist various problems, but
also because it has a wide range of applications, including those from
civil engineering to aerospace engineering, from machine industry to au-
tomobile industry, and from off-shore engineering to nuclear engineering.
There are plenty of books, reports, papers and publications on struc-
tural optimization. The excellent books have been written by Haug and
Arora (1979), Iyengar and Gupta (1980), Kirsch (1981), Vander-
plaats (1984), Haftka and Kamat (1985), and Arora (1989). The


2
recent developments and significant contributions have been contained in
several proceedings. A vast number of reviews can also be found
in the literature. A historical review was given by Schmit (1981)^. Lev
(1981)[nl provided a rather comprehensive survey of structural optimiza-
tion over a period of 10 years, 1970 1980. Ashley (1982)f12^ focused on
the uses of optimization in aerospace engineering. A specilaized survey on
the optimization of plates was conducted by Haftka and Prasad (1981)^.
Vanderplaats (1982)t14^ stressed the contribution of numerical techniques
to the development of structural optimization. All mathematical program-
ming methods for structural optimization were discussed by Belegundu and
Arora (1985)^. A state-of-art review of recent five years, 1980- 1984, was
presented by Levy and Lev (1987)t16l.
Optimal design problems in engineering are usually formulated as non-
linear, nonconvex optimization of continuous variables. As a result, it can
have multiple local minima. Therefore, global optimization is of special
importance for optimal engineering design.
In optimization theory, two concepts, global optimization and local
optimization, are distinguished. Global optimization means to solve the
optimization problem in a domain given by a real-world problem. Local
optimization means to solve locally the optimization in a part of the area,
or a neighborhood belonging to this area.
Local optimization has been fully studied. Some complete discussions
can be found in many books, for example, Jacoby, Kowalik, and Pizzo
(1972)^, Fletcher (1980t18^, 1981^), and Schwefel (1986)^. On the
other hand, global optimization is an area which is left to be explored.
Some efficient and general methods, such as simplex method, are available
solely under linear cases. Historically, global seeking methods relied on


3
algorithms that could guarantee at best a local optimum. However, in this
way, global optimization problems can be solved only in the limited cases,
e.g., convex programming.
Recently global optimization has attracted much attention. An overview
of various methods for global optimization has been given in two vol-
umes, edited by Dixon and Szego (1975^, 1978^). Globally optimal
design is first discussed systematically by Wilde (1978)^. Constrained
global optimization is especially treated in Pardalos and Rosens mono-
graph (1987)l23i. Hong and Quan (1988)^24^ examined carefully the prob-
lems of integral global optimization. Some recent results have been col-
lected in Torn and Zilinskass book (1989)^. A special issue on global opti-
mization is also published by Naval Research Logistics (1990, vol. 37). The
complexity of global optimization problems has also been studiedt26^27!.
Methods for global optimization are divided into two classes: deter-
ministic and probabilistic (or stochastic) methods. A procedure of global
optimization is generally composed of a series of moves. In determinstic
methods, all current information for global optimization problems is pro-
cessed in a deterministic way to define the next step. In stochastic meth-
ods, the information is processed in a probabilistic way to determine the
next step. An overview for deterministic methods of global optimization
is given by Horst (1990)^. Intervals have been recognized to be excellent
tools for handling global optimization problems. Interval arithmetric is
introduced by Moore (1966)^. Some deterministic methods based on in-
terval arithmetric for global optimization have been developed. Ratschek
and Roknes book (1988)^ provides a deep and thorough treatment of
this kind of method.
On the other hand, heuristic search has also been used as an efficient


4
and general way for solving approximately global optimization problems.
Recently, some heuristic search strategies for global optimization, based
on iterative improvement methods, have been introduced. They are: sim-
ulated annealing (SA), genetic algorithms (GA), and tabu search (TS).
These heuristic search methods can be categorized into probabilistic meth-
ods of global optimization. Their applications to combinatorial optimiza-
tion or optimization of discrete variables have been reported in the lit-
erature but their applications to globally optimal design of continuous
variables have not been studied. In chapter 2, the models and methods
of optimal design are briefly surveyed. Two heuristic search methods for
globally optimal design are introduced in chapter 3, and empirical stud-
ies of them are given in chapter 4. Finally, some applications to practical
design examples using tabu search methods are examined in chapter 5.


CHAPTER 2
OPTIMAL DESIGN: MODELS AND METHODS
Mathematical formulation of a design problem is an important step
of optimal design. The five prototype mechanical and structural design
problems were formulated by Haug and Arora^. The discussions in details
have also been given by Schmit^, Kirsch, and Arora and Thaneder^31!.
In engineering, many kinds of optimal design problems can be involved.
They include: optimum design of mechanical elements, optimum structural
design, optimum VLSI design, optimum shape design, optimum reliability
design, and multiobjective optimization. All of these problems usually are
formulated as nonlinear, nonconvex programming (NLP) problems.
The process of problem formulation requires identification of design
variables, constraints and an objective function.
2.1 General Formulation
When a system is designed, it usually is defined by a set of quanti-
ties, some of which are viewed as variables during the design optimization.
Those quantities defining an engineering system that are fixed during the
optimization process are called preassigned parameters, and they are not
changed by the optimization methods. Those quantities that are not pre-
assigned are called design variables. The preassigned parameters, together
with the design variables, completely describe a design. Quantities are
designated as preassigned parameters for a variety of reasons. It may be


6
that the designer is not free to choose certain parameters, or it may be
known from experience that a particular value of the parameter produces
good results. Often, by considering some quantities fixed, i.e., invariant
during the optimization process, the problem is greatly simplified. Any set
of values for the design variables represents a design of the system required.
Clearly, some designs are useful or practical, but others might be inad-
equate or impossible. To exclude such unacceptable designs as candidates
for an optimal solution and reduce the time of optimization operations, the
design and performance requirements are expressed mathematically in the
form of equalities or/and inequalities, which are called constraints, prior to
optimization. A design satisfying all the limitations or constraints placed
on it is called a feasible design. In design optimization, some measures
characterizing the factors that effect the efficiency of the designed system
are introduced and referred to as a performance index, or figure of merit
(FOM). In general, a figure of merit can be taken as performance, weight,
area, volume, cost, reliability, or availiability. The figure of merit can often
be associated with some characteristic quantities or design variables of the
components of which the designed system is composed. Therefore, it can
be viewed generally as a function of design variables, and referred to as
an objective function. Sometimes the objective function is also called cost,
criterion, or merit function.
The selection of an objective function can be one of the most important
decisions in the optimal design process. In some situations, an obvious
objective function exists. For example, in some cases, the total cost is
dominantly the material cost, which is proportional to its weight. Then a
design that minimizes cost also minimizes weight. In general, the objective
function represents the most important single property of a design, but it


7
may also represent a weighted sum of a number of properties for multi-
objective optimization design problems.
Weight is the most commonly used objective function of engineering
design, especially in structural engineering, due to the fact that it is easily
quantified, although most optimization methods are not limited to weight
minimization. The weight of a system is often of critical importance, but
minimum weight design is not always the least costly.
Cost is of wider practical importance than weight, but it is often dif-
ficult to obtain sufficient data for the construction of a real cost function.
A general cost function may include the costs of materials, fabrication,
transportation, etc. In addition to the costs involved in the design and
manufacture, other factors such as operating and maintenance costs, re-
pair costs, insurance, etc., may be considered. The result may be a flat
function, which is not sensitive to variations in the design variables. In this
case, any feasible design is optimal; however, finding a feasible design is,
itself, a difficult problem, which can be formulated as a (nonconvex) global
optimization problem to minimizing total deviation from constraint satis-
faction. From a practical point of view, it is desired to introduce such an
objective function that is both sensitive to variations in the design variables
and represents the important cost components.
In complex engineering optimal design problems, there can exist sev-
eral non-commensurable criteria that must be simultaneously considered.
Such a situation is formulated by a vector, multicriteria, or multi-objective
optimization problem in which the designers goal is to minimize several
objective functions simultaneously. For such design problems, the pareto
optimum is often introduced. The pareto optimum is such a solution for
which there is no way of improving any criterion without worsening at


8
least one other criterion. In general, the pareto optimum is not unique.
One could find a pareto optimum by optimizing any one of the objective
functions. Modern systems begin with the collection of pareto optima ob-
tained in this manner of choosing each of the objectives individually. Then,
a graphic interface enables design engineers to see the effects of trading-
off the confliciting objectives. For each trade-off, the system must solve a
global optimization problem with a single objective function.
A STANDARD NLP MODEL
The general problem of optimal engineering design can be stated as one
of choosing design variables subjected to constraints related to the design
variables of the system, so that the objective function is minimized. That
is,
minimize f(X) (1)
subject to N equality constraints and M inequality constraints,
Fi(X) = 0 (i = 1, , JV), (2)
Gj(X) < 0 0 = 1,...,M), (3)
and subject to the n side constraints,
Q>k S S bk 'a- II i* (4)
Because and Gj are usually derived from the functional requirements or
behavioral conditions, they are also called functional, or behavioral con-
straints in engineering.
When the functions f(X), Fi(X), and Gj(X) are all affine, the opti-
mization problem is called linear programming problem; otherwise, it is


9
called a nonlinear programming (NLP) problem. In general, optimal de-
sign problems are NLP problems.
2.2 Optimum Design of Mechanical Elements
The design of machine elements is founded on the theories of mechanics
and the strength of materials. Optimum design of mechanical elements
can be defined as the selection of materials and values for the independent
geometrical parameters with the explicit objective of either minimizing an
undesirable effect or maximizing a functional requirement, assuring that
the design variables satisfy other functional requirements.
In designing mechanical elements, material parameters and geometri-
cal parameters generally are taken as design variables, while functional
requirement parameters are preassigned parameters. Optimum design of
mechanical elements is usually a nonlinear optimization problem of contin-
uous variables. It often requires a complex study of functional variations
and functional relationships of the elements or designed systems. An excel-
lent book on optimum design of mechanical elements is written by Johnson
(1980)t32l
EXAMPLE 2.1 Design of Coil Springs
The design of coil springs is encountered widely in practical applica-
tions. Detailed methods for analyzing and designing such mechanical com-
ponents have been developed. As one of prototype mechanical and struc-
tural design probelms, it has been formulated in Haug and Aroras book^.
More discussions of optimization of this structure have also been given by
Arora (1989)^. A coil spring loaded in tension or compression (see Fig-
ure 2.1) is considered here. Our objective is to design a minimum mass
spring to carry given loads without material failure while satisfying other


10
performance requirements.
To formulate the problem of designing coil springs, the following nota-
tion and data are defined:
Number of inactive coils : Q
Applied load : P
Shear modulus : G
Minimum spring deflection : A
Weight density of spring material : 7
Gravitational constant : g
Mass density of material : p
Allowable shear stress : ra
Lower limit on surge wave frequency : w0
Limit on the outer diameter : D
= 2,
= lOZfcs,
= 1.15 x 107lb/in2,
- 0.5 in,
= 0.285ZZ>/m3,
= 386in/sec2)
= 7.38342 x 104Z6 sec2/in\
= 8.0 x 104Z5/m2,
= 100 Hz,
= 1.5m.
Figure 2.1 A coil spring
The three design variables for the problem are defined as d = wire
diameter (inches), D = mean coil diameter (inches) and N = number of
active coils.
Through the analysis of design, the optimum design problem of the coil
spring is formulated^ as follows.
Minimize f - (N + 2)Dd2


11
subject to the deflection constraint:
D3N
91 =
71875d4
- 1.0 > 0;
the shear stress constraint:
92 = 1-0
D(4D d) 2.46
12566d3(D d) 12566d3
the surge wave frequency constraint:
140.54d
>0;
53 =
and the diameter constraint:
D2N
1.0 > 0;
D + d
9a = 1.0 > 0.
l.o
The lower and upper bounds on the design variables are set as follows
0.05 < d < 0.20,
0.25 2 < IV < 15.
This problem has been discussed^ using sequential quadratic program-
ming (SQP). The applications of tabu search to this problem, and the
comparisons with the Aroras results will be given in chapter 5.
2.3 Optimum Structural Design
Optimum structural design is widely formulated as a minimum weight
design with stress, displacement, buckling load, natural frequency and de-
sign variables bound constraints.
The general formulation of the problem is:
minimize w(b, z, A),
(5)


12
satisfying the finite element equilibrium equations:
K(b)z = F(b), (6)
K(b)y = A M(b)tl, (7)
and the constraints
hi(b,z, A) < 0 ( = 1,...,M), (8)
where b is an n-dimensional vector of design variables, z is an n-dimensional
vector of nodal generalized coordinates, 77 is an n-dimensional eigenvector, A
is an eigenvalue, K(b) is an n x n stiffness matrix, M(b) is an n x n mass or
geometric stiffness matrix, and F(b) is an n x 1 equivalent loading vector.
Optimum structural design is often a nonlinear programming problem of
continuous variables.
The features of problems of optimum structural design are discussed by
Belegundu and Arora^, and summarized as follows.
1. The functions f, F^, and Gj are usually implicit in design variables.
In general, the objective function depends on the displacement vec-
tor z which in turn depends on the design variable b through the
equilibrium equation.
2. The evaluation of functions and gradients is quite expensive due to
their implicit dependence on design variables.
3. The optimal design problem is highly nonlinear and in general non-
convex. As a result, it can have multiple local minima.
Example 2.2 Design of 3-bar Pyramid
The optimal design of 3-bar pyramid is considered here. The coordi-
nates of the nodes are shown in figure 2.2 and each element has a cross-


1 ?
lo
sectional area = a. A horizontal force p = (0, PO;0) is acting in the top
node. The length of each element is yT -j- -~f 2.

Q>
X
Figure 2.2 3-bar Pyramid observed from above
The weight of the pj>ramid is given by:
w(a, 7) = Cay/1 + 72,
where C=3p, p is the density. Because C is a constant, the objective
function may be expressed as
f(X) = x2( 1 + xl),
where xj, x2 represent 7, a, respectively. The same example has also been
discussed in the literature^.
I. The minimum-weight design subject to a compressive constraint
For the 3-bar Pyramid, the compressive stress in element no. 1 is found
to be:
_ 2P0 r~t
^ +


14
Then we have the minimization problem,
minimixe f(X)
subject to
sW = + 4 < < o-
Zxi
x?
For this problem, the analytical solution can be given as follows.
, ,5x/5iV
Xi = 1, 2 = ( ------).
v 3a?ax
II. The minimum weight design subject to a buckling constraint
For the 3-bar Pyramid, the buckling stress in element no. 1 is found to
be:
2P0
3 Ea
ll + T^
ka
1+72
where E is the modulus of elasticity, and k is the cofficient of buckling.
Then, the design problem of minimum weight subject to an element buck-
ling constraint (in element no. 1) can be formulated as follows.
Minimize f(X)
subject to
9(X) =
2P0
11 +
kx 2
< 0.
3Ex2 y i + x\
For this problem, the analytical solution can be easily obtained as,
, 5\/5Po,
xi = 0.5, x2 = (-
6 kE
-)1/2.
III. The minimum weight design subject to a displacement constraint
For the 3-bar Pyramid, the displacement constraint of the top node in
the y-direction is given by:
n p ----- -I
^£+1Ty(i +^5 <.r.
7


15
Then, the minimum weight design subject to a displacement constraint on
the top node in the y-direction is formulated as follows.
Minimize f(X)
subject to
o p ------- I
9(X) = + -i) 3Ex2 '' x\
For this simple problem, the analytical solution can be given by:
4\/2Po
(9)
X\ =1, x2=
ZEdn
For the given input data, the comparisons of the exact solutions of op-
timal designs of 3-bar Pyramid for the three cases with the computational
results using tabu search will be given in chapter 5.
Example 2.3 Design of Three-bar Truss
This example is also one of prototype mechanical and structural design
problems given by Haug and Arora (1979)^, and has been used by Schmit
(1981 )f10^ as an example of not fully stressed designs.
For the symmetric three-bar planar truss shown in figure 2.3, two dis-
tinct loads are considered, i.e, Pj = 20,000 lb acting at an angle of 135 to
the axis. Because the truss is symmetric, Ai = A3, and therefore, the two
independent design variables are Ai and A2.
For the following input data,
N = lOira,
A = 135,
@2 II CO O 0
& II Cn 0
P = 0.1 lb/in3,
E = 10 x 106lb/in2,


16
Figure 2.3 Three-bar truss
the optimum design problem of the three-bar truss can be formulated as
follows.
Minimize f(AuA2) = pN(2\/2Ai + A2)
V*
subject to
A2) 20,000 an >
9i{Ai, A2) = 20,000 (t2i > 0,
53(^15 -^-2) = 15,000 o*3i > 0,
where
an = 20,000[---
<2"21
31
Ai 2AiA2 + y/2Af
20,000 y/2Ai
],
2AiA2 \/2A\
20,000A2
2AiA2 -f- V2A\


17
This example will be solved using tabu search method in chapter 5.
Example 2.4 Design of Thin-walled Columns
In aircraft structures and other metal construction, stiffeners which are
normally channel and Z-section are widely used. Optimal design of these
structures have been fully analyzed and discussed by Iyengar^.
A column of given length L is designed (see figure 2.4). The design
variables for both the Z- and channel section are of flange width b, thickness
t, and overall depth h. The material used for the column is aluminium alloy
t
Y
Figure 2.4 Z-section and channel section
with compressive yield stress 25,000 psi. When both the length of column
and material are specified, the minimum weight design is really to optimize
the cross-sectional area. Since the flange and web thickness are assumed to
be the same for both sections, the design vector X contains three variables:
X = {x-L,x2,x3},


18
where
Xi = b, X2 = t, Xz = c = h/b.
The objective function of our problem is to minimize the structural
weight W which is defined by:
W = pAcL,
where p is the density of the material, Ac the area of cross-section of the
column, and L the length of the column. For the channel and Z-section,
an exact expression for the area Ac can be given by:
Ac = 2 bt + ht bt{2 + c).
Then, the objective function can be rewriten as:
W = Xt.x2{2 +xz)pL.
Because p and L are constants, the objective function may be expressed
as:
f(X) = xtx2(2 + 23).
For the designed column, five distinct restrictions which act as constriaints
in the choice of design variables. They are:
1. the load Pi for which overall buckling of the column occurs;
2. the load P2 for which local buckling of the flange takes place;
3. the load P3 for which local buckling of the web is observed;
4. the load P4 for which there is torsional buckling of the column;


19
5. the load P5 for which the yield stress is reached in the column.
Then, the constraint functions are given by
9i = Pi(X)-P> 0 (i = l,2,...,5).
I. Optimal design of Z-section
For the Z-section, the closed-forms for P{(X) have been given^ by
x2P
MX) =
MX) =
Pz{X) =
Pt(X) =
MX) =
24 L2
+ 6X3 -f 8)
-(x + 12xl + 36xg I6X3 + 48x3 + 64)1/2],
(2 + x3),
(0.43)tt2E Xj
12(1 v2) Xi'
tt2E x\ (2 + x3)
3(1 i/2) xi x|
tt2Ex\x 2X3(14-2x3) 4(jx3(2 +- x3)2
L2 8 + 6x3 + X3
ayx 1X2(2 + x3),
xi(8 + 6x§ + X3)5
where E is the Youngs modulus, G the shear modulus, u the Possions
ratio, and cry the yield stress.
II. Optimal design of channels
For the channel section, the constraints <725 <73, and g$ have the same
forms as the Z-section. For the other constraints, we have:
HX)
P4(X)
2x3 \
xlx2\ o )i
3 L2
2 + x3
x2Px2x2x| x| +- 14xg 4- 6OX3 + 104x3 + 232x3 + 336
2AL2 ^ x| + 8x| +- 12x2 + 8x3 + 4 '
, 2Gx\{2 + x3)3
xi(x| +- 8x| + 12x3 + 8x3 + 4)
-[{
x2Px3x2x| 288 + 128x3 56x3
24P2
L(
60x|
14xg
x.
Xl(xg + 8X3 + 12x| + 8x3 +- 4)
2rx3(2 + x3)3
;}2
xi(x| + 8x| + 12x| + 8x3 + 4)
(1.33)x4P2x^x|x2(3 + x3)2 1/2
' P4(x| + 8x3 + 12xg + 8x3 + 4)J


20
This example has been discussed using sequential unconstrained minimiza-
tion techniques (SUMT)^. In chapt er 5, the application of tabu search
method to this example will be given.
2.4 Optimum VLSI Design
Today VLSI design has become one of main engineering design prob-
lems. In optimum VLSI design, two problems are often involved: placement
of cells and routing of interconnections for the cells.
For some of digital integrated circuits (IC), the area available for inter-
connections is strictly limited. If there is not enough interconnection area
available for a given cell placement, then a new placement must be gener-
ated. Then one of goals to optimum VLSI design is to minimize the total
chip area occupied by the circuit models and interconnections between the
modules. Circuit performance is another major concern in the design of
digital integrated circuits. If the delay through a particular signal path is
excessive, the circuit may not perform in accordance with logic specifica-
tion. For integrated circuits, signal delay results in part from the lengths of
the interconnections on the chip. Hence, if the maximum signal delay is to
be minimized, the length of the interconnections along the critical paths,
that is, the signal paths whose delays limit the overall performance of the
chip, must be minimized or kept below a certain length. Then, another
goal is to minimize the total length of interconnections of a chip.
Optimum VLSI design previously described is often formulated as a
combinatorial optimization problem of discrete variables. Given N cells on
a chip, the number of possible placements is of the order 0(N!). Then,
the placement problem, in which it is desired to find the placement of
minimum total interconnect length, belongs to the class of NP-complete


21
problems. In fact, most placement and global routing problem in opti-
mum VLSI design belong to this class. For such a class of problems of
this complexity there exists no known polynomial time exact algorithm.
Therefore, some approximate or heuristic concepts are necessary to solve
this class of problem. Recently some heuristic search methods have been
successfully applied to optimum VLSI design. Some results can be found
in Sechens book (1988)^, and Wong, Leong and Lius book (1988)^.
Some global search methods, based on Browian motion process, are also
recently applied for optimal design of electronic circuitst35^36].
Example 2.5 Design of Circuit
Consider the design of a circuit of a constant voltage described by
Heydemann, Durbin, and Montaron^, shown in figure 2.5. The circuit
is designed to keep the output voltage vout constant in spite of variations
in the set of technological parameters such as transconductance, threshold
voltage, channel width absolute tolerance length absolute tolerance and
temperature.
R1
Figure 2.5 Circuit example


22
The design variables are the channel lengths of the MOS devices, de-
noted by (xi, X5). The following five-dimensional optimization problem
can be formulated.
Minimize /(x 1,..., x5) = (vout 5)2
subject to
15 < Xi < 100,
15 < x2 < 150,
15 < x3 < 150,
15 < x4 < 200,
15 < x5 < 200,
where vout is a function of the design variables xi, ..., x5.
2.5 Optimum Shape Design
The determination of shape of two- and three-dimensional elements sub-
ject to constraints involving natural frequencies, displacements and stresses
defines a typical optimum shape design problem.
The general form of the problem can be formulated as
minimize w
subject to
J j a,(z)ia
= 0
< 0
j Jnf(z)dtt (10)
(i = l,...,JV), (11)
y = i,...,M), (12)
where Q is a domain for the problem, and the state variable z is solution of
the boundary value problem (equilibrium equations) for the elastic body.


23
The functionals f(z),Fi(z),Gj(z) depend on the state variable (z) and
represents some measure of stress or displacement. The cost or objective
functional can be to minimize volume, weight, area, stress concentration
or peak torision rigidity.
When finite element methods or sample analysis is applied, this func-
tional form can be changed into the form of algebraic equations of nodal
point coordinates of the finite elements, or the coefficients of various poly-
nomial samples (splines, Lagrange, Hermite, cubic, etc.). Thus, when the
nodal point coordinates or the coefficients of polynomial samples are as
design variables, the problem can be formulated by the standard NLP
model. A survey of structural shape optimization is given by Haftka
and Grandhi (1986)^. The more discussions can be seen in some recent
proceedings^39^40!.
Example 2.6 Design of Rotating Discs
The minimum weight design of rotating discs shown in figure 2.6 is
considered. In this example, radii r2, and thicknesses u2, u4, u5 are the
design variables x:
X (^lj *2j *3? ^4)
= (ii3, u4,?/5, r2).
The objective function is
/*** T71
w{x) = / 2wprh(r)dr,
Jr1
where 7*1, rm are the inner radii and outer radii, respectively, p is the density
and h(r) is the thickness at a radical distance r from the axis of rotation.
For the division 7*1, 7-2, ..., rm and the linear approximation to h(r), the


24
Ak!s of rotation
Figure 2.6(a) Cross section of typical
turbine disc.

Figure 2.6(b) Discretized nonlinear programming model.


25
weight function can be given by:
m2
w 'P (ri+1 ri)(ri+1 + ri + ri-i)ui
15 i=3
+^7rpui(-3r? + + r2r3)
- r^_2 rm_2rm_!).
For this minimum weight design, the behavioural constraints are given as
follows. From the analysis of stresses, the rotation stresses within rach ring
[r,-, r{+1] can be given approximately by:
(3 pw2(3 + i/)_2
2 Q T
Tz 8
/3 pu;2( 1 + 3i/) .
cre = a+
rz 8
where a, (3 are constants which can be determined by equating the radial
displacement and radial load at the the interface of adjacent rings.
Similarly the thermal stresses are given by:
aE
J rdr
+ 1-^,
olE t b
dr aE + 'y + ,
rz J rz
where 7, 8 are constants, and a is the coefficient of linear expansion. The
temperature <£(r) is a prescribed function of r. The resultant stresses are
then given by:
(Tr = = Therefore, the principal shearing stresses at the given radii r are given
by:
Tl


26
Then the maximum principal shearing stress r is given by:
r = maa3(r1,r2,r3).
Therefore, the constraint condition is given by:
T < T0.
In this problem, the behavioural constraints cannot be expressed analyti-
cally as functions of design variables.
It is clear that the discretized minimization problem is a nonlinear,
nonconvex optimization problem of continuous variables. The problem has
been discussed using feasible direction methods. The details are referred
to Silvas work!41!.
2.6 Optimum Reliability Design
Nowadays reliability is one of the key design factors when designing
complex, critical, and expensive systems. In general, optimum reliability-
based design can be divided into two basic classes of problems. One is the
reliability optimization of network systems, and the other is the reliability
optimization of machine elements or systems.
For the reliability optimization problems of network systems, there are
usually the following two forms: 1) cost minimization subject to the desired
level of system reliability, and 2) system reliability maximization under cost
constraints.
The cost minimization subject to the desired level of system reliability
is stated as follows.
71
Minimize c ^ c,(y,)
i=i
(13)


27
subject to the system reliability constraint
Ra = f[Ri(yi)>Ro, (14)
i=i
where Ra is the system reliability, Rq is the specified reliability of system,
C{ is the cost of the i-th element, yi is the design variables denoted the
number of the i-th component, and n is the number of all elements in the
system.
System reliability maximization problems under cost constraints is stated
as follows.
71
Maximize Rs = II Hm) (is)
i=l
subject to the following constraints:
Pki(Vi) where pki(yi) represents consumption of the k-th resource at the i-th ele-
ment.
Stress-strength inference theory is used to determine the reliability of
a mechanical component or system when the component stress-strength
probability distributions are known. At this time, the problem concerned
becomes a probabilistic design problem.
Similarly, the reliability optimization problems of mechanical elements
can also be categoried into the following two types of the problems: 1)
at a specified component reliability, minimize the cost of design, and 2)
maximizing component reliability under resource constraints.
When mechanical element or system distributions of stress s and strength
S are known, say f(s) and F(S), the component reliability is given by a well
known equation:
/oo /*oo
f(s)[ F(S)dS]ds.
-OO J 8
(17)


28
Then, component or system reliability maximization subject to resource
constraints, when stress and strength are independent and normally dis-
tributed, is stated as follows.
Maximize x = (fis /^)/(s + c^)1^2 (18)
subject to
a{fis) + b(/J.s) + c( where a, b, c, and d are cost functions, and Ao represents the quantity of
available resources.
Optimum reliability design is often a nonlinear programming problem
of continuous variables. Reliability optimization has attracted more atten-
tion. Many papers can be found in some journals, for example, Techno-
metrics, or IEEE Trans ations on Reliability. A comprehensive discussion of
optimization of system reliability can also be found in the Tillman, Hwang
and Kuos book (1980)^.
Example 2.7 Reliability Design of Complex Systems
Consider a example of reliability design of complex system shown in
figure 2.7. Our objective is to design a system of reliability maximization
subject to a cost constraint.
The problem can be formulated as follows.
Maximize Re = 1 JR3[(1 JR1)(1 ^24)]2
-(1 R3){1 R2[(l (1 J20 x (1 R4)]}2
subject to
C. = £ci i
Ri,min

29
input
Figure 2.7 Example of complex system
where the relation between the cost and the reliability for the component
(i) is given by:
ci = kilt?*,
where the k{ and oti are given constants.
2.7 Multiobjective Optimization
In complex engineering design problems there often exist several non-
commensurable criteria that must be simultaneously considered. Such a
situation is formulated as multicriteria or multiobjective optimization prob-
lem in which the designers goal is to optimize several objective functions
simultaneously. Since the objective function in this problem is a vector,
multiobjective optimization is often called vector optimization, while a sin-
gle objective function optimization is referred to as scalar optimization.


30
The engineering design problems are acknowledged to be inherently
multiobjective optimization problems. In general, a system is to be de-
signed to meet several requirements, expressed as criteria of cost, weight,
area, reliability, stiffness, deflection, etc. Then, the problem is a multiob-
jective optimization.
In optimum engineering design problems with a single objective func-
tion, one usually seeks a single solution with little or no attention to the
sensitivities of other objectives that were ignored. This is often felt to
be a serious weakness in the design process because the solution gained is
strongly dependent on the form and various parameters of the problem. In
practice, it is generally difficult to give exactly the constraint functions and
choose suitably a single definite criterion to measure the advantage of the
designed system. Sensitivity analysis has been applied to some extent in
order to lessen this drawback characteristic of scalar optimization, but only
local information in the neighborhood of the optimal solution is obtained
by this technique. On the other hand, multiobjective optimization may
be regarded as a systematic sensitivity analysis of those design objectives
which are considered especially important.
There generally exists no point that is an optimum for all criteria si-
multaneously because the criteria are confliciting. For example, one can
improve reliability at the expense of increasing cost. Thus, in multiobjec-
tive optimization, one cannot speak of an optimum solution point, but of
a satisfying solution. Optimality plays an important role in the scalar
optimization. It allows the designer to restrict their attention to a single
solution or a very small subset of solutions from among much larger set of
feasible solutions. On the other hand, noninferiority or pareto optimum,
introduced by Pareto in 1896^, serves a similar, but less limiting, purpose


31
for multiobjective problems.
The idea of noninferiority arises from the concept of dominance. Non-
inferiority is the term used by some mathematicians; nondominance by
the others; efficiency by computer scientists and statisticians; and pareto
optimum by economists and most design engineers.
Mathematically, vector optimization can be regarded as a generaliza-
tion of scalar optimization, and the solution of pareto optimum for a mul-
tiobjective optimization problem requires the solution of a set of scalar
optimization problems, including those which are obtained by optimizing
every criterion separately. Therefore, it can be concluded that the compu-
tational effort involved in the numerical operation of pareto optimum may
prove to be quite considerable.
A multiobjective optimization problem can be formulated as follows.
Minimize Z(X) = [^(X), z2(X),..., zp(X)\ (20)
subject to
Fi{X) = 0
Gj(X) > 0
and subject to the side constraints
a* < < bk
(i = 1,.. ;N), (21)
(j = I,- ,,M), (22)
(* = 1, (23)
where X = (1? ..., n) is an n-dimensional design vector. The individual
objective functions are denoted by zk(X) (k = 1, ..., p).
Pareto optimum can be defined mathematically in the following way.
A solution, say X*, is noninferior, or pareto optimum, if it is feasible and
if there does not exist a feasible solution, say Y, such that
zk(Y) < zk(X*)
for k = 1 ,...,p,
(24)


32
and
zk(Y) < Zk(X*) for some k. (25)
Because the pareto optimum is not unique, multiobjective optimization
is often time-consuming. In order to lessen the computational and human
effort for multiobjective optimization, it seems preferable, instead of gen-
erating all pareto optima, to concentrate only on a certain subset of them.
Two kinds of procedures have been developed to acheive this goal. One of
them is develop an interactive procedure, where the designer participates
in the process at every pareto optimum achieved in order to determine the
direction and extent of the next step. Recently, numerous multiobjective
optimization methods have been oriented towards interactive on-line use.
The interactive multiobjective optimization usually consists of sequences
of decision phases and computation phases. The other is to formulate mul-
tiobjective optimization as an adaptive process in which genetic algorithms
can be applied. The approach applies genetic operators to evolve from a
population a satisfactory pareto optimum solution.
The further discussion of multiobjective optimization is beyond the
scope of this thesis. Many valuable books t40^44^4^ are available for the
interested readers.
2.8 Mathematical Methods of Optimal Design
Optimization problems have been divided into two large classes: uncon-
strained and constrained optimization. For unconstrained problems, math-
ematical methods of optimization are classified by Edelbaum (1962)^ into
direct methods and indirect methods. Direct methods, or numerical meth-
ods, are those that approach a solution in a stepwise manner (iteratively),
at each step improving the value of objective function. Indirect methods, or


33
analytic methods, are those that attempt to reach the optimum in a single
step, without tests or trials. Generally, according to indirect methods, the
optimization is changed into a problem of solving a system of simultaneous
equations, namely the necessary conditions for the optimality. Iterative
methods comprise a general approach to functional optimization for con-
tinuous variables. On the other hand, some solutions, e.g., saddle points,
are of no interest for the original problem, it is preferable to design iterative
methods based on direct methods in practice.
Direct methods can be classified further into two classes: descent meth-
ods, based on derivatives, and direct search methods. Descent methods
usually depend on the initial solution, and perform successive moves in a
downhill direction. They are powerful when the objective function is uni-
modal in the feasible domain. However, when there exist many peaks in
the domain and there is no information for their areas, descent methods
can often be trapped at a local optimum. In order to avoid this case, some
other methods, e.g., multistart methods, have been incorporated^47^ with
them.
For constrained problems, they are usually transformed into a para-
metric unconstrained problems^48! so that some unconstrained optimization
technique can be applied. Of these transformations the most common are
penalty methods, such as interior point and exterior point forms. Because
exterior point methods approach the optimal solution from the outside
of the possible domain and the final approximate solution does not ex-
actly satisfy the constraint conditions, they are generally not accepted by
most design engineers. On the other hand, interior point methods make
moves in the interior of the feasible domain so that each successive solution
is feasible. Therefore, sequential unconstrained minimization techniques


34
(SUMT), based on the interior point methods, have been widely applied
in engineering design^^^^^.
The development of mathematical programming techniques during the
late 40s and early 50s provided further impetus to the application of op-
timization methods in engineering design. In 1960, Schmitt proposed
coupling the finite element analysis and mathematical programming meth-
ods for structural optimization design. In this approach structural design
is treated as a problem of mathematical extremization of an objective func-
tion in an n dimensional design variable space constructed by such be-
havioral functions as stresses, displacements, frequencies, etc. The search
for an optimum is carried out by methods of linear and non-linear pro-
gramming techniques. Some of examples of mathematical programming
methods are gradient projection, steepest descent, feasible directions and
various unconstrained minimization techniques in conjection with the so-
called penalty function transformations to account for constraints. A com-
plete survey of mathematical programming methods applied to structural
design optimization have been recently given by Arora^. In the following
years, this approach enjoyed great popularity and was applied to a number
of interesting structural design problems. In order to provide the assis-
tance to choose a generally useful algorithm for the optimal design using
mathematical programming methods, some comparative studies have been
made. Colville (1968)t51^ first laid down the foundation of comparative
studies; Eason and Fenton (1974)^ conducted a major study aimed at
engineering design problems; and a comprehensive comparative study was
given by Sandgren (1977)^. Recently, the concept of interactive design
optimization is also introduced by Arora and Tseng (1988)^ to handle the
uncertainty which is present in most computational optimization methods.


35
As optimizers have become more and more ambitious, the limitations of
the mathematical programming methods proved to be too constructive for
practical applications. The elaborate methematical transformations for de-
termining travel directions, the sensitivity of the algorithms to step size and
the initial (starting) solution are only a few of the difficulties encountered
in applying mathematical programming methods to practical problems. To
overcome some of these difficulties several efforts have been made. They
involve the concepts of mathematical approximation and decomposition.
For structural optimization of large-scale structures, the optimality cri-
teria approaches were presented by Venkayy, Khot, and Reddy (1968^,
1973t56]). In the optimality criteria methods, the criteria that the opti-
mum design must satisfy are first derived. The criteria may be intuitive or
based on mathematical formulations with or without approximations. The
algorithm based on the criteria is developed with the objective that the
current design moves towards a design satisfying the criteria. In this sense
the optimality criteria methods fall under the category of indirect methods
of optimization. In mathematical form the optimality criteria are equiva-
lent to the Kuhn-Tucker conditions of non-linear programming. Optimum
design procedures based on optimality criteria methods usually involve two
distinct types of approximations:
1. Those associated with indentifying in advance the critical constraints
and the active (free) variables at the optimum;
2. Those associated with the development of simple recursive redesign
rules.
It has been recently shown by Arora (1980)f57^, Fluery (1982)^, and Bele-
gundu and Arora (1985)^ that the Newtons iteration to solve the non-


36
linear set of necessary conditions is equivalent to solvinga quadratic pro-
gramming subproblem using mathematical programming methods. The
equivalence of the numerical optimality criteria and gradient projection
methods has also been shown by Arora (1980)^.
On the other hand, multilevel optimization methods for the large-
scale structures are developed by Sobieszczanski-Sobieski (1982)t60f and
Sobieszczanski-Sobieski, James, and Dovi (1985)t61^. In multilevel opti-
mization, an optimization problem is decomposed into a set of subproblems
and a coordination problem that preserves coupling between the subprob-
lems in which the decomposition is achieved by separating the structural
element optimization subproblems from the assembled structural optimiza-
tion problem. Each element optimization and optimum sensivity analysis
yields the cross-sectional dimensions that minimize a cumulative measure
of the element constraint violation as a function of the elemental forces and
stiffness. Then, the assembled structural optimization produces the overall
mass and stiffness distributions optimized for minimum total mass subject
to the element forces and stiffness constraints. A general multilevel iter-
ative method for optimization problems have also been recently discussed
by Gelman and Mandel (1990)^.
In optimal engineering design, the following difficulties have been ad-
dressed:
1. There are problems that are difficult to be treated in the common
NLP methods, such as implicit, or nondifferentable function relations.
2. It has been shown that even the best of methods will may fail to
solve or make slow progress toward the solution on a given practical
problem from time to time.


37
3. Global optimization is of special importance in engineering design.
But the current NLP methods either may be trapped at local opti-
mum, e.g., SUMT methods; or is not efficient to handle the problem,
e.g., random search, or multistart methods.
We shall attempt to attack these difficulties using heuristic methods.
In order to handle the complexity of combinatorial optimization, some
heuristic search methods have been recently introduced. For example, the
simulated annealing methods have been widely involved in optimum VLSI
designt33^34]; the application of genetic algorithms to structural optimum
design has been recently presented by Hajela (1990)^. More examples
can be found in the next chapter. Heuristic search methods and their
applications to optimum design of continuous variables are introduced in
the following chapters.


CHAPTER 3
HEURISTIC SEARCH METHODS
3.1 Problem Solving and Search
In order to develop general solution approaches, theory of problem solv-
ing has been widely discussed, especially in artificial intelligence. Simon
(1983)^ has distinguished three distinct ways for representing and think-
ing about solving tasks. One way to view problem solving is as a process of
search. In this way, a description of a desired solution is called a goal, and
the set of possible steps leading from initial conditions to a goal is viewed as
a search space. Then, problem solving is carried out by searching through
the space of possible solutions for ones that satisfy a goal.
The second way to view problem solving is as reasoning. In this way,
a system of logic is postulated that allows us to deduce new statements
from axioms and previously deduced statements. Then solving a problem
consists of accumulating more information (more statements) by inference
until the answer to the problem has been found.
The third way of viewing problem solving is as constraint satisfaction.
In this way, a set of objects and various subsets defined by the constraints
they satisfy are postulated. Then, solving a problem consists of narrow-
ing the original set to a subset or unique object that satisfies all of the
constraints.
Here we are concerned only with the formulation of problem solving
as search. The state space search, or generate-and-test, is an example of


39
state space search. This method uses states and operators to construct a
solution process. The idea behind it is to find a sequence of operators that
can be applied to a starting state until a goal state is reached.
Conventional nonlinear optimization methods are based on the state
space search. They are generally specified as:
1. define an initial state, X(0);
2. let X(i) be the current state;
3. select a direction S(i) and a step length h(i);
4. generate a new state,
X(i + 1) = X(i) + k{i) S(i)] (26)
5. compare the new state with the old ones, store the better one to be
the current state;
6. check if the stopping criteria are satisfied. If so, then stop; otherwise,
go to step 2.
As simple search process is often time-consuming, especially for global op-
timization of continuous variables, some approximate or heuristic concepts
are often necessary for efficient problem solving.
3.2 Heuristics
Heuristics are some rules of thumb introduced solely by intuition and
experience. They can be used as tools to reduce the state space to be
searched. A search that uses a heuristic is called heuristic search. A
common form of heuristic search is hill clmbing. In hill climbing methods,


40
the generated state is compared with the last state, the better one is kept,
and the search process is repeated until no better is found.
Heuristics as a discipline has been strongly influenced by George Polya.
In his well-known book (1945)f65^, Polya gave detailed descriptions of the
various heuristic methods and condensed them to a few general principles.
Since then, a vast number of articles and papers on heuristics have been
published in the literature. A historical review for approaches to heuristics
is given by Groner and Bischof (1983)^. The applications of heuristics to
intelligent search strategies have also been fully studied by Pearl (1984)^.
Weiner (1975)^ indicated that heuristics for problem solving can be
classified as follows:
constructive methods, that would build solutions using heuristic rules,
often in a sequential, deterministic manner;
local transformation methods, that aim to improve existing solutions
by the local search of a well-defined neighborhood in solution space;
decomposition methods, that break a problem in smaller parts;
feature extraction methods, both statistical and combinatorial, that
aim to reduce the size of a problem; such methods are often applied
after several candidate solutions are obtained, where features com-
mon to these solutions removed from the problem;
inductive methods, that solve a large problem by first solving small
embedded problems, and then inductively add to these problems;
approximation methods, that transform an intractable problem into
a tractable one.


41
The technique of problem transformation (approximation) is emphasized
by Weiner which is considered to be worthy of both theoretical and prac-
tical study.
3.3 Iterative Improvement Methods
The general strategy for designing heuristics is the method of iterative
improvement, or hill climbing, which is based on stepwise improvement
on the value of the cost function by exploring neighborhoods. Iterative
improvement is also known as neighborhood transform, local transform or
local search.
The application of an iterative improvement method presupposes the
definition of a state, a cost function and a generation mechanism.
In many optimization problems, a solution is usually specified by some
data structure which can be processed on computers. Such a solution is
called the state. The set of these solutions is referred to the state space or
solution space.
A generation mechanism is a simple prescription to generate a tran-
sition from a state to another one by a small perturbation. It defines a
neighborhood N(i) for each state that can be reached from the current state
(i) in one transition. Therefore, the performance of exploitation of a heuris-
tic search method based on iterative improvement methods is determined
by its defining generation mechanism. The central part of developing an
effective heuristic search method is to define a reasonable generation mech-
anism. The generation mechanism, based on k-optimality by Lin (1965)^,
has been widely applied in the discussion of travelling salesman problems.
Another generation mechanism, based on natural evolution, was introduced
by Holland (1975)^ in which natural selection by survival of the fittest is


42
emulated.
In general, iterative improvement, or hill climbing methods, can be
specified as follows.
1. An initial state and its neighborhood are given.
2. A sequence of iterations is generated, each iteration consisting of a
possible transition from the current state to a state selected from the
neighborhood of the current state.
3. If this neighboring state has a lower cost, the current state is re-
placed by this neighbor; otherwise, another neighbor is selected and
compared for its cost value.
4. When a state is obtained whose cost is no worse than any of its
neighbors, terminates; otherwise, go to step 1.
For function optimization of one continuous variable, hill clmbing can
be simply formulated as follows.
1. An initial point xq in search space is given.
2. For the given step h, consider the moves
3/1 *Co h,
2?2 2/0 h,
and compute their costs f(25i) and f(2J2).
3. If one of these moves has less cost, then save it as 2;0 and go to step
4; otherwise, replace h by the smaller one and go to step 4.
4. If the termination criteria are satisfied, stop; otherwise, go to step 1.


43
In general, for function optimization of several continuous variables, tbe
hill climbing method can be formulated as follows.
1. An initial point Xq is given.
2. For the given step h, consider the moves
X\ Xq + 5,
where s £ S, and
S = {5 | s = (x\ h, 2 h,..., Xk h)}.
3. If one of moves has less cost, save it as Xo and go to step 4; otherwise,
replace h by the smaller one and go to step 4.
4. If termination critera are satisfied, stop; otherwise, go to step 1.
The success of hill climbing methods depends largely on both the initial
solution and the step size of h. On the other hand, it is hard to define
the direction of moves towards the optimum for cases of several variables
because of the geometrical complexity of the search space. Hill climbing
methods have been used as a local search method with the smaller step h
for optimization.
The disadvantages of iterative improvement methods are:
by defintion, the iterative methods can terminate in a local minimum,
there is generally no information as to the amount by which this local
minimum deviates from a global minimum;
the obtained local minimum depends on the initial state, for the
choice of which generally no guidelines are available;


44
in general, it is not possible to give a sharp estimate for the compu-
tation time.
To overcome some of the disadvantages of iterative improvement methods,
the following approaches have been suggested by Laarhoven and Aarts
(1988)[711;
1. execution of the method for a large number of initial solutions. Clearly,
asymptotically under the guarantee that all solutions have been used
as initial solutions such an algorithm (e.g., multistart methods) finds
a global optimum with probability one;
2. use of information gained from previous runs of the algorithm to im-
prove the choice of initial state for next run (this information relates
to the structure of the set of states);
3. introduction of a more complex generation mechanism (or, equiva-
lently, enlargement of the neighborhoods) in order to jump out of
a local minimum corresponding to the simple generation mechanism
properly requires detailed knowledge of the problem itself;
4. acceptance of transitions which correspond to an increase in the cost
function in a limited way (in iterative improvement algorithm only
transitions corresponding to a decrease in cost are accepted).
The first approach is a traditional way to solve global optimization prob-
lems approximately (the iterative improvement method based on Lins 2-
opt strategy is a well-known example). But, this method is still a trial-
and-error procedure. The second approach is usually strongly problem-
dependent. Recently, some heuristic search methods that follow the third


45
and the fourth approaches have been introduced. They are: simulated an-
nealing, genetic algorithms, and tabu search. Simulated annealing, popu-
larized by Kirkpatrick, Gelatt, and Vecchi (1983)^ and by Cerny (1985)^,
mimics the physical annealing process. A detailed discussion of the the-
ory and applications of simulated annealing can be found in Laarhoven
and Aartss book (1988)^. A comprehensive annotated bibliography of
the 300 papers and reports has also been included in the special issue
(1988) ^. Genetic algorithms, introduced by Holland (1975)^, mimics
the process of biological evolution. The excellent book given by Goldberg
(1989) ^ and two volumes of proceedings edited by Grefenstette (1985^,
1987^) provide an introduction of genetic algorithms and their appli-
cations. The overview can also be found in Liepins and Hillards paper
(1989)[781. Tabu search, introduced by Glover (1977^, 1986^), stems
from the general tenets of intelligent problem-solving. A detailed intro-
duction has been given by Glover (1989)t81^. All these approaches have
been surveyed by Glover and Greenberg (1989)^. Genetic algorithms and
tabu search methods, and their applications to globally optimal design will
be discussed in details in the next sections.
3.4 Exploitation and Exploration
As we described previously, optimal design problems can be viewed as
a search process over design space. Point-by-point, or exhausive search
over the entire design space is prohibited for many globally optimal de-
sign problems because of the limitations of computer time and space. For
these kinds of problems some approximate or heuristic search strategy is
necessary. When designing heuristic search methods for global optimiza-
tion, both concepts of exploitation and exploration play important roles.


46
Exploitation, or intensification, means a deep search strategy restricted in
a local region while exploration, or diversification, means a wide search
strategy applied in the entire region. Too little exploration results in poor
locally optimal solutions or the best regions of solution space not being vis-
ited. Too little exploitation is likely to result in the best regions of solution
space being missed. They are two conflicting goals of achieving a heuristic
search method. Therefore, a good heuristic search is a trade-off design
of exploitation and exploration.
Hill climbing and random search are two extremes. Hill climbing is
a typical example of a search strategy that exploits only in the restricted
area, e.g., neighborhood of an initial solution. Because of no exploration,
the final solution can often be trapped in a region of solution space that is
far from the optimal solution. On the other hand, random search is a good
example of a search strategy that is concerned with only wide exploration.
Because of no exploitation and blindly sampling over every region, it is
usually inefficient. Then, a heuristic search algorithm can be designed by
combining hill climbing and random search. The heuristic methods we
will discuss are examples of this idea. In composite genetic algorithms,
exploration and exploitation are simply combined using genetic algorithms
and hill climbing method while in tabu search for global optimization of
continuous variables they are directly achieved with the help of a set of
moves of different step sizes.
3.5 Optimization by Genetic Algorithms
A genetic algorithm is a heuristic search strategy based on the process of
natural selection and natural evolution. Holland first developed the genetic
algorithm and provided its theoretical foundation in his well-known book


47
(1975)f70l. Hollands formulation stems from the human reproduction and
is stimulated further by the fact that, under the natural law of survival of
the fittest, various species evolved so remarkedly well as to adapt to their
enviorment. His breakthrough contribution is the central role played by the
crossover operator as the underlying discovery mechanism, and mutation
as an infrequent operator, used primarily to preserve population diversity.
In fact, early researchers in genetic algorithms(Bledsoe, 1961^)depended
solely on the mutation operator as the genetic search mechanism. However,
search by means of mutation uses information from each individual alone,
and has been shown more recently (DeJong, 1975^) to be inferior to
the crossover-mutation combination. The crossover operator remains a
primary genetic search operator.
Although Holland was not able to prove convergence theorems for ge-
netic algorithms, he did prove that if the choice of search hyperplanes is
viewed as a k-armed bandit problem, genetic algorithms optimally allocate
trials to the hyperplane and, in this sense, they are optimal. Vector eval-
uated genetic algorithm (VEGA) for multiobjective optimization has also
been introduced by Scheffer (1984)^.
Genetic algorithms have been applied in many diverse fields, including
optimum engineering design problems. For example, recursive adaptive
filter is designed by Etter, Hicks, and Cho (1982)t86l. Goldberg (1983)^
gives an example of applications to steady-state and transient optimiza-
tion of gas pipeline. Optimum VLSI circuit layout has been discussed by
Davis and Smith (1985)^. Keyboard configuration design using genetic
algorithm is given by Glover (1985)t8^. Goldberg and Samtani (1986)^
considered the application of genetic algorithms to structural optimization
(plane truss). Aircraft landing strut weight optimization is given by Minga


48
(1986)t91l
In order to introduce the composite genetic algorithms for global op-
timization of continuous variables, some basic concepts and theorems of
genetic algorithms first are briefly reviewed.
3.5.1 Adaptive Process
The concept of adaptation first comes from biology. In that context,
adaptation is defined as any process whereby a structure is progressively
modified to give better performance in its environment. Natural evolution
is a typical example of an adaptive process. Today, adaptive processes have
been widely involved in many other fields: psychology, economics, system
control, artificial intelligence, computational mathematics, and sampling.
The study of adaptive processes is complex. Frequently, nonadditive
interaction makes it impossible to determine the performance of a struc-
ture from a study of its isolated parts. Moreover, possiblities for improved
performance must be achieved through the trade-off design of exploitation
and exploration search strategies. A mathematical framework that extracts
and generalizes this biological process is presented by Holland (1975)^ in
which two of the most important generalizations are introduced: (1) the
concept of a schema as a generalization of an interacting, or coadapted,
set of genes; and (2) the generalization of genetic operators, crossing-over,
inversion, and mutation. The concept of schema makes it possible to disect
and analyze complex nonlinear or epistatic interactions. The gener-
alized operators extend the analysis to studies of learning and optimal
planning. '
3.5.2 Schemata
The concept of a schema (or schemata, plural), generalizes the nonlin-
ear interaction of adaptive processes. It is also called similarity templates.


49
Suppose we can describe a given structure by a finite binary string of
length L, and we wish to describe a particular similarity. For example,
consider the two strings A and B as follows:
A = 101110,
B = 111001.
We notice that the two strings both have ls in the first and third position.
A description to such similarities is to use a wild card, or a dont care,
symbol, say the asterisk (*), in all positions where we are disinterested in
the particular bit value. For example, the similarity in the first position
can be described as:
1 *****.
Likewise, the similarity in the third position may be described with the
breif notation
* 1 * .
Combined similarity is described with *'s in all positions but the first and
third:
1*1***.
Then, these strings with *'s are called the schema of the given structure. It
is clear that these schemata, or similarity templates, do not only name the
strings A and B. The schema 1 * ** describes a subset containing 25 =
32 strings, each with a 1 in the first position. In general, for a binary string
of length L each individual in the population belongs to 2L-1 schemata.
It is easy to see that not all schemata are created equal. Some are more
specific than others. We define the specificity of a schema H (its number of


50
fixed positions) as its order, denoted by o(H), and the distance between a
schemas outermost defining positions its defining length, denoted by d(H).
For example,
o( 1 1 **) = 2,
d(l 1 **) = 3 1 = 2.
3.5.3 Chromosome
In genetic algorithms, one represents the planning strategies, design
structures, or problem solutions as chromosomes. Chromosomes, or geno-
types, are fixed length strings having a finite number of possible values, or
alleles, at each position.
Each chromosome plays two roles. First, it provides a presentation of
what the organism will become. Secondly, it provides the actual mate-
rial that can be transformed to yield new genetic material for the new
generation.
In most literature on genetic algorithms, chromosomes are represented
by means of bit strings lists of 0s and ls. Bit strings have been shown to
be capable of usefully encoding a wide variety of information, and they have
also been shown to be an effective representation mechanism in unexpected
fields, for example, function optimization. The properties of bit string
representations for genetic algorithms have been extensively studied, and
a good deal is known about the genetic operators and parameter values
that work well with them.
Some different types of representations have also been explored, often
in connection with engineering applications of genetic algorithms, such as
ordered lists (for bin-packing), embeded lists (for factory scheduling prob-
lems), variable-element lists (for semiconductor layout), and integer-based


51
multiple-permutation-group representation (for keyboard configuration).
3.5.4 Evaluation Function
When applying genetic algorithms, an evaluation function is defined.
The evaluation function which plays the role of the environment in natural
evolution, is a procedure that measures the fitness of each genotype. The
fitness value assigned to each genotype represents the relative ability of the
genotype to survive and produce offspring.
In practice, there are many possible ways to define an evaluation func-
tion. For optimization problems, the objective or cost functions is often
used easily to the evaluation.
If the value of the evaluation function is given for the genotype i of a
population, say Cj, the fitness of the genotype is given by
Cj
Eic/
One linear scaling procedure for fitness is discussed further by Goldberg
(1989)t7Sh
3.5.5 Population and Reproduction Plans
A population, or gene pool, is a set of a finite number of genotypes that
can evolve from one generation to the next through a reproduction plan. A
genetic algorithm is composed of an initial population and a reproduction
plan. The reproduction plan includes:
1. representing the population of genotypes of a generation by some
data structure;
2. selecting successful genotypes to be used in creating the offsprings of
the next generation;
3. a set of genetic operators (e.g., reproduction, crossover, and muta-
tion) used to create the new offsprings.


52
Usually, the initial population is selected in a random manner. The size of
population is problem-dependent. In practice, it often remains fixed from
generation to generation and is typically between 50 and 200 individuals
or genotypes.
3.5.6 Genetic Operators
For a reproduction plan, there are these genetic operators applied to
the population: reproduction, crossover, and mutation.
Reproduction changes the population by adding copies of genotypes
with above-average fitness or figure of merit. Because the size of the pop-
ulation often is fixed in a reproduction plan, below-average are discarded
in the process. No new genotypes are introduced, but the distribution of
the population is changed. This process increases the percentage of geno-
types of the population that possess the above-average fitness so that the
average fitness, of the population rises towards that of the most-fit existing
genotype. For an optimization problem, a reproduction operator narrows
its application to highly fit individuals or possible solutions.
In addition to this reproduction according to fitness, it is necessary
to generate new, untested genotypes and add them to the population. Oth-
erwise, the population will simply converge on the best one it started with.
Crossover is the primary means of generating plausible new genotypes for
adding to the population. In a simple implementation of crossover, two
genotypes are selected at random from the population. The crossover op-
erator takes some of the alleles from one of the parents and the remainder
from the other, and combines them to produce a complete genotype. In
general, two offsprings are generated from a couple of parent in a crossover
process. When genotypes are represented in terms of strings, crossover
operator takes two randomly selected genotypes as parent, and splits the


53
strings at the same randomly determined point, and then creates two new
genotypes or offsprings by swapping the tail portions of the strings, see
figure 3.1.
parent offsprings
(a) 100 10110 (c) 100 00101
(b) 111 00101 (d) 111 10110
Figure 3.1 Crossover operator
The function of crossover is to rearrange the structure of genotypes of
the population. It does not create any new genotypes with different genetic
materials. This is done by a mutation operator. In a typical implementa-
tion, the mutation operator provides a chance for any allele to be changed
to another randomly chosen value. Since reproduction and crossover only
redistribute existing alleles, the mutation operator plays a role of explo-
ration in genetic search strategies. The rate of mutation is carefully chosen
when it is incorporated into genetic algorithms.
3.5.7 Implicit Parallelism and Building Blocks Hypothesis
Genetic search is mainly based on the following two basic results: im-
plicit parallelism and building blocks hypothesis.
In a population of size M with string length L, the estimated values of
M 2l or 0(M3) schemata are adjusted every generation during a repro-
duction plan. This information is used by the genetic algorithm to increase
or decrease the number of copies of every schema according to whether it
is above or below average fitness. More precisely, if u(t, s) is the average
value of the copies of schema s at generation t, and if n(t, s) is the number


54
of those copies in the population,
u(t + l,s) = ^1^1(1 e)n(i,s), (28)
u
where u is the average fitness of all strings in the population and e is a
very small positive number. This evaluation and management of a large
number of schemata is accomplished by simply manipulations of only M
strings at a time. Holland has given such a phenomenon a special name,
implicit parallelism.
Implicit parallelism means that even though each generation we perform
computational efforts proportional to the size M of the population during
genetic algorithms, we get useful processing of something like M3 schemata
in parallel with no special memory other than the population itself.
In the implementations of genetic algorithms, short, low-order, and
highly fit schemata are of special importance. Holland gave also them a
special name, building blocks. In the early stages of genetic algorithms, a
wide variety of schemata are represented in the gene pool, and the search
is primarily a winnowing process (save the good portion and discard the
bad portion) of low order schemata. The relatively superior schemata
(that survive after crossover and maintain their relative superiority) are
represented with exponential frequency, and the crossover operator is able
to glue together different exemplars of low order building blocks (highly
fit, low order schemata) to form increasingly fit individuals and direct the
search to high order schemata. Such an observation is named further by
Holland as the building blocks hypothesis.
Schema is the core concept of the building blocks hypothesis. In some
sense, schema can be considered to represent the direction of the genetic
search. The genetic algorithms implicitly collect statistics about the aver-
age fitness of competing schemata. Schema fitnesses are estimated by the


55
average fitness of their exemplars. The selection of copies of individuals to
gene pool in direct proportion to their fitness biases the gene pool in the
direction of the more favorably estimated scemata.
3.5.8 Schema Theorem
The schema theorem is a fundamental theorem of genetic algorithms.
It can be stated as follows.
Under fitness proportionate reproduction, simple crossover, and muta-
tion, the expected number of copies m(H, t) of a schema H is bounded by
the following expression:
1) > (29)
where pm and pc ate the mutation and crossover probabilities respectively,
/ is the average fitness of the population, and the factor fjj is the schema
average fitness which can be calculated by the following expression:
f = y -ft-.
(30)
The schema average fitness f(H) is simply the average of the fitness values
of all strings s which currently represent the schema H.
Schema theorem says that above average, short, low-order schemata are
given exponentially increasing numbers of trials in succesive generations
of genetic algorithms. Holland has showed (1975)^ that this is a near-
optimal strategy when the allocation process is viewed as a set of parallel,
overlapped, multi-armed bandit problems.
3.5.9 Genetic Algorithms
The correspondence of natural and genetic algorithms terminology is
summerized in figure 3.2.


56
NATURAL
Chromosome
Gene
Allel
Loeus
Genotype
Phenotype
Epistasis
GENETIC ALGORITHMS
String
Character
Feature value
String position
Structure
Alternative solution
Nonlinear
Figure 3.2 Comparison of natural and genetic algorithms
In the literature there are many variations of the basic genetic algo-
rithm. All of them consist of three basic components:
1. evaluation of individual fitness;
2. formulation of a population or gene pool;
3. a reproduction plan.
The individuals resulting from genetic operators form the next genera-
tions population. The process is iterated until the stop criteria is met.
The basic genetic algorithm can be specified as follows.
1. Select a finite, bounded domain for the search and a representation
that discretizes the search space.
2. Choose a desired population size M and initialize the starting popu-
lation P.
3. Evaluate individuals according to the evaluation function /().
4. If stopping criteria is met, stop. Otherwise, probabilitically select in-
dividuals (according to their fitness dtermined by the function eval-
uation) to populate a gene pool of size M.


57
5. So long as the size of the temporatory population TP is less than M,
randomly choose a parent from the gene pool. With the probability
of its fitness, fill the parent into the temperatory population TP.
With probability pc, choose a coparent, cross parents, and fill the
resulant offsprings into the temperatory population.
6. With (low) probability pm, mutate randomly chosen individuals of
TP. Set P = TP, and return to step 3.
Some notes for this procedure is further addressed by Liepins and Hillard
(1989)I781.
It is possible to have multiple copies of an individual in a population
and /or gene pool.
Historially, one point crossover has been used. More recently, two
point crossover has been shown to be superior.
Mutation helps assure population diversity. It is not the primary
genetic search operator.
The stopping criteria is usually given in terms of a threshold in im-
provement or total number of generations.
The selection of parameter values of genetic algorithms has been widely
discussed. The most authoritative work is Dejong (1975)^. Dejong sug-
gested a mutation rate of 1/pop-size and a crossover rate of approximately
0.6.
3.5.10 Composite Genetic Algorithm
Genetic algorithms are not suited for fine-tuning structures which are
very close to optimal solutions. Instead, the strength of genetic algorithms


58
is quickly locating the high performance regions of vast and complex search
space. Once those regions are located, it may be useful to apply local search
heuristics, for example, hill-climbing, to the high performance structures
evolved by the genetic algorithms. According this idea, composite genetic
algorithms for global optimization of continuous variables is designed.
The composite genetic algorithm consists of a genetic algorithm and a
hill climbing method. It can be formulated as follows.
1. An initial population is given.
2. The population is evolved using genetic algorithms with survival of
fittest reproduction, two-points crossover and mutation.
3. After some generations, a best solution among the current population
is chosen as the initial solution for hill climbing method.
4. The hill climbing method is applied to the neighborhood of the cur-
rent solution.
5. If termination criteria are satisfied, stop; otherwise, go to step 4.
The representation of population is the first step of designing a genetic
algorithm. Usually, the 0-1 string representation is used, but what it repre-
sents is part of the art of genetic algorithm design. Secondly, an evaluation
function is required. In practice, there are many possible ways to define an
evaluation function. For minimization problems, the evaluation function
can be easily defined as follows.
Ci = Cmax - (31)
where is the upper bound of the objective function, and /; is the
objective value of gene (i) in the gene pool or population. Through the


59
evaluation function, the fitness for reproduction can be given by equation
(27).
The plan of survival of fittest reproduction can be achieved by the
following approach.
1. Give initial value of fitness (i = 1, n), compute the partial sums
of fitnesses for all genes in the current population,
sum(i) sum(i 1) + /{ (i = 1,n 1),
where
fiiim(O) = 0.0,
sum(n) = 1.0.
2. Generate a random number on the interval (0, 1). Suppose we have
sum(i 1) < r < sum(i) (0 < i < n).
Then the gene i will be chosen as the gene of the next generation in
reproduction.
3. If the number of genes in the next generation is equal to n, stop;
otherwise, go to step 2.
The reliability and efficiency of composite genetic algorithms will be
analysed in the next chapter with the help of some common test functions.
3.6 Optimization by Tabu Search
Tabu search is a metaheuristic recently developed by Glover(1977^,
1986^) specifically for combinatorial optimization problems. It stems
from the general tenets of intelligent problem-solving. Tabu search is called


60
a metaheuristic because it can be combined with other heuristic procedures
to prevent them from trapping at locally optimal solutions.
The functions of the method are:
1. guide any local transform process that employs a set of moves for
transforming one (or solution state) into another;
2. provide an evaluation function for measuring the attractiveness of
these moves.
The form of the guidance provided by tabu search is highly flexible and
often motivates the creation of new types of moves and evaluation criteria
to take advantage of its ability to be adapted to different problem structures
and strategic goals.
Although tabu search methods were initially introduced for combinato-
rial optimization, some attempts to apply them to globally optimal design
of continuous variables are made here.
3.6.1 Tabu Move and Tabu List
In tabu search, a move is currently not allowed is called a tabu move.
All tabu moves form a list, known as a tabu list. In general, the tabu list
stores the n most recent moves so that these moves will not be immediately
repreated. The purpose of the tabu list is to avoid returning to previous
solution states. This strategy can help a hill climbing heuristic or other
heuristics to escape from a locally optimal solution.
The size n of a tabu list depends on the specific problem. The smaller
the size, the more exploitation; the larger, the more exploration. In general,
a good size of tabu list is determined through some trials of the specific
problems.
The tabu list is updated after each problem iteration with items being


61
added while other items are deleted from the list. When building the tabu
list, several tactics suggested by Glover can be incorporated to improve the
efficiency of this method. They are strategic oscillation, aspiration levels,
intermediate and long term memory.
3.6.2 Strategic Oscillation
Strategic oscillation is a tactic by which a feasible solution is purposely
made infeasible by a series of (destructive) moves using tabu list, and then
the infeasiblity is resolved by making a series of (constructive) moves. The
idea behind it is to induce the exploration of new regions by crossing or
moving from certain boundaries which normally affect the progress of tabu
search. In general, it addresses those situations where feasible solutions
may be confined to narrow regions. Strategic oscillation can be accom-
plished by using multiple tabu lists.
Strategic oscillation provides the following advantages:
allowance for less complex moves requiring less effort to evaluate;
the ability to approach solutions from different directions, especially
in cases where the search is narrowly confined;
discovery of good solutions in perturbed conditions.
3.6.3 Aspiration Levels
Another tactic in tabu search is the use of aspiration levels. This allows
the tabu status of moves to be overriden if the aspiration level is reached.
Let A(s, x) represents an aspiration level dependent on move s and the
current solution x. Glover defined A(s, x) to be the smallest value of cost
c(x) that could be achieved by reversing a previous move from some x to
some s(x) such that
c(s,(x')) < c(s(x)).
(32)


65
1. The objective function is defined in the box P,
P = {X | a < x\ < b, a < £2 < b,..., a < xk < b}.
A set H of steps hi (i=l, 2, n) is defined where
hi b
/12 = fei/10.0,
^3 = ^-2 /10.0,
K = /in_i/10.0.
An initial solution X is given and begin with tabu list T empty. Set
the counter K = 0.
2. A set S of moves is defined as follows
S = {s | s = (xi h^Ti^x2 hi2ri2,...,xk hikrik)},
where h^ E H and r^.s are random numbers on [0, 1].
3. Select the move s(X) E S -T for which we have c(s(X)) < c(X), added
it to the tabu list T, set K = 0, and go to step 4. If for each move s
E S T we have c(s(x)) > c(x) and K is greater than the given large
number, stop; otherwise, go to step 2.
4. Let X =: s(X). If S T is empty, go to step 5; otherwise, go to step
3;


62
An aspiration level is attained if c(s(x)) < A(s, x).
3.6.4 Intermediate Memory and Long-term Memory
Intermediate and long term memory, as a basis of tactics for locally
intensifying and globally diversifying the search (concentrating on the old
and good regions, and moving to new regions), are important for obtain-
ing the near-optimal solutions of large-scale combinatorial optimization
problems. Glover indicated (I989)t81^ that intermediate memory and long
term memory provide an interplay between learning and unlearning.
Intermediate memory involves creating a network of good solutions as a
matrix for generating other good solutions. Long-term memory expands
or diversifies the search for optimal solutions.
3.6.5 Tabu Search
Tabu search can be combined with other heuristics to solve hard op-
timization problems. A typical example given by Glover is a tabu search
based on a hill climbing heuristic.
In terms of tabu search, the hill climbing heuristic is:
select an initial x X; X is a set of possible solutions;
for the set S(x) of moves that can be applied to x, select some s
S'(x) such that for cost function c(x) we have
c(s(x)) < c(x); (33)
let x = s(x) and return to step 2.
Such a hill climbing heuristic will proceed unidirectionally from its starting
point to a local optimum. (For minimization, the direction of hill climbing
is downward.)


63
The chief limitation, of this type of procedure is that the local opti-
mum solution obtained at its stopping point, when no improving moves
are possible, may not be a global optimum solution. Tabu search guides
such a heuristic to continue searching in the absense of improving moves,
and without returning to a previously visted local optimum.
A tabu search based on hill climbing heuristic can be generally described
as follows.
1. Select an initial point x E X and let x* = x. Set the iteration counter
k = 0 and begin with tabu list T empty.
2. If S(x) T is empty, go to step 4; otherwise, set k = k + 1 and select
sk (E S(x) T such that
sk = optimize{s(x) : s 6 S(x) T}.
3. Let x = sk(x). If c(x)< c(*), where x* denotes the solution currently
found, let x* = x.
4. If stopping criteria are satisfied, stop. Otherwise, update T and
return step 2.
Tactics (e.g., strategic oscillation, aspiration levels) can be incorporated
into the simple tabu search.
According to the idea prescribed previously, tabu search method for
optimization of continuous variables is now introduced.
For optimization of one continuous variable, a tabu search is formulated
as follows:
1. The objective function is defined in the interval [a, b]. A set H of
steps hi (i=l, 2, ..., n) is defined, where
b

64
hz hi/10.0,
hz = ^2/10.0,
hn = hn_i/10.0.
An initial solution x is given. Begin with tabu list T empty. Set the
counter K = 0.
2. A set S of moves is defined as:
S = {5 | s = x hiTi | hi G H},
where r{s are the random numbers on [0, 1].
3. Select the move s(x) G S -T for which we have c(s(x))< c(x), added
it to the tabu list T, set K = 0, and go to step 4. If for each move s
G S T we have c(s(x)) > c(x) and K is greater than the given large
number, stop; otherwise, set K = K + 1, and go to step 2.
4. Let x s(x). If S T is empty, go to step 5; otherwise, go to step 3.
5. If termination criteria are satisfied, stop; otherwise, update T empty
and return step 2.
Because random moves are employed, it is unnecessary to include the in-
verse moves of accepted moves in the tabu list T to prevent the cycles as
in Glovers method.
For global optimization of several continuous variables, tabu search can
be extended as follows.


66
5. If termination criteria are satisfied, stop; otherwise, update T empty
and return to step 3.
The reliability and efficiency of tabu search methods for globally op-
timal design will be analysed in the next chapter with the help of some
common test functions. It is shown that tabu search methods are easy to
use, and outperform the composite genetic algorithms for test problems.
Finally, the applications of tabu search methods to some practical design
examples are considered in the chapter 5.


CHAPTER 4
EMPIRICAL STUDIES
In order to compare the different methods in the same standard, a set
of test functions for global optimization were introduced by Dixon and
Szego(1978)t221, and collected in Torn and Zilinskass book (1987)t2Sl. Us-
ing these test functions, some experiments are designed in this chapter to
compare the efficiencies and reliabilities of composite genetic algorithms
and tabu search with random search.
A natural field of applications of global optimization methods is prob-
lems of optimal engineering design. In practice, objective functions can be
expensive to evaluate. For this reason, the number of evaluations has been
chosen as a performance index in the analysis of algorithms. The termina-
tion of the algorithms is made while the indicated relative deviation (RD)
is less than the given e. The relative deviation is defined as follows,
RD = (exact value approximate value)/exact value,
where the exact value is not zero in all cases. In the following discussions,
the relative deviation of objective function value has been used as the
terimination criterion.
4.1 Standard Problems
In our experiments, problems of optimization without behavioral con-
straints are studied. The problems of optimization are distinguished be-


68
tween one variable and several variables because diffient algorithm struc-
tures are involved. In order to simplify the design of computer algorithms,
we consider the problems of optimization in the standard interval, or stan-
dard subspace.
For problems of optimization of one continuous variable, the standard
problem is defined as follows.
Minimize /(x),
where x 6 [0, 16]. We use the four bits of a string for the integer part
of the solution in genetic algorithms, while the other bits are used for the
decimal part. For a general problem of optimization,
minimize /(x),
where x E [a, b], it can be changed into the standard form by using the
variable transformation:
x = a + (b a) x yj 16,
or
y (x a) x 16.0/(6 a),
In cases of several variables, we consider the following standard form of
optimization without behavioral constraints:
minimize /(X),
0 < xi < 16,
0 < x2 < 16,


69
0 < xk < 16.
For the general problem of optimization,
minimize /(X),
Oj ^ Xi ^61,
2 5: *2 ^ ^2>
< A: <
similar transformations can be applied in order to have the standard form.
4.2 One-dimensional Algorithms
Test functions for one-dimensional algorithms are given as follows.
IOe
/i(x) = 5 + sinx -f sin--------(- Inx 0.842:,
3
. . . . 2 a;
/2() = 2 + smx + sin,
3
5
/3() = 6 + ^sin((i +l)x + i),
i=i
/4(x) = 1 + (x + sinx)e~x2,
2.7 < x < 7.5,
3.1 < x < 20.4,
-10 < x < 10,
-10 < x < 10.
They are first transformed into the standard forms. The figures of these
standardized functions (funl, fun2, fun3, and fun4) are shown in figures 4.1
through 4.4. Table 4.1 gives their optimal solutions and function values.
The experiments compare random search, composite genetic algorithms,


70
Figure 4.1 The curve of test function funl
LJ-LJ.


71
Figure 4.2 The curve of test function fun2


72
Figure 4.3 The curve of test function fun3


fun4(x)
73


74
Functions Optimal Solution Function Value
funl 8.33259142 0.39869259
fun2 12.8917414 0.0940388
fun3 2.0720404 1.0899717
fun4 7.45632242 0.17576058
Table 4.1 Test Functions of One Dimension and Optimal Solutions
and tabu search. Random search of the one-dimensional case is formulated
as follows.
1. Give an initial solution x0.
2. Generate a random number r on the interval [0, 1] and perform a
random search,
i=rx (16.0 0.0).
3. If f(i) < f(*o)? save Xi as o and go to step 4; otherwise, go to step
2.
4. If termination criteria are satisfied, stop; otherwise, go to step 2.
The computational results for the given accuracy requirement (e = 10-5)
are given in table 4.2. Convergent trajectories of them are shown in figures
4.5 through 4.8.


75
Methods Functions
funl fun2 fun3 fun4
Genetic Algorithms (e = 10-s) X 8.33522 12.89222 2.07222 7.45622
RD 3.15 xlO-4 3.71 xlO"5 8.69xl0-5 1.36X10"5
f(x) 0.398696 0.0940390 1.089973 0.1757606
RD 9.32xlO"6 2.81xl0_e 2.10 xlO-6 3.23 xl0-y
N 75 81 78 89
Tabu Search (e = 10-5) X 8.33452 12.89243 2.07203 7.45614
RD 2.32 xlO-4 5.37xl0"5 8.69 xlO"7 2.38 xlO"5
f(x) 0.398694 0.0940392 1.089971 0.1757607
RD 5.05xl0"6 4.96 xlO-6 5.07 xlO8 7.00 xlO7
N 26 59 80 56
Random Search (e = 10-s) X 8.33200 12.89242 12.12497 7.45567
RD 7.05 xlO"5 5.32 xlO"5 4.85 8.53 xl0~5
f(x) 0.398692 0.0940392 1.089973 0.1757617
RD 4.99 xlO_Y 4.89 xl0- 1.76xl0_e 6.88xl0-e
N 1232 7444 4881 25126
Table 4.2 Comparisons of Genetic Algorithms, Tabu Search, and
Random Search with the Same Degree of Deviation
(One-dimensional Cases)
4.3 Multiple-dimensional Algorithms
Test functions for multiple-dimensional algorithms are chosen as fol-
lows.
fi(x) = 42 2.1i + o*i + *i*2 42 + 42 + 1-5
d
-5<;<5 (* = 1,2),
/2(#) = [1 (i 2 4" 1)2(19 14! -(- 32 142 -f- 6i2 + 32] x
[30 + (2i 32)2(18 32i + 122 + 482 36i2 + 272)j,
-2 < xi < 2 (* = 1,2),
/a() xi + *2 cosl8i cos 182 + 3,
-l

76
evaluations
Figure 4.5 Convergent trajectories of onedimensional test function funl


77
Figure 4.6 Convergent trajectories of one-dimensional test function fun2


78
2 -
V
o
m o
tn
o
>
0)
QJ
>
4 -
-6
8
0
x
_L
V.
Tabu Search
Genetic Algorithms
Random Search
20
i '
1 L.

_L
*
80
100
40 60
evaluations
Figure 4.7 Convergent trajectories of onedimensional test function fun3


79
Figure 4.8 Convergent trajectories of onedimensional test function fun4


80
2 2
/*(*) = Ylxi/d ~T[cos(xi/'/i) + 2 (d = 200.0),
i=l i=l
-100 < ^ < 100 (* = 1,2).
Table 4.3 gives the optimal solutions and the objective function values
of these test functions.
Functions Optimal Solution Function Value
ffunl (8.143729, 6.859840) 0.4683717177
ffun2 (8.0, 4.0) 3.0
ffun3 (8.0. 8.0) 1.0
ffun4 (8.0, 8.0) 1.0
Table 4.3 Test Functions and Optimal Solutions
Similarly, random search, composite genetic algorithms, and tabu search
are also compared. Random search for multiple-dimensional case is formu-
lated as follows.
1. Give an initial solution X0 .
2. Generate m random numbers, r1,r2,...,rm on the interval [0, 1], and
a random search point Xi = (i., x2, > m),
*i = ri x (16.0 0.0),
x2 = r2 x (16.0 0.0),
= Tm x (16.0 0.0).


81
3. iffpro f(X0), save Aj Xq and go to step 4; otherwise, go to
step 4.
4. If termination criteria are satisfied, stop; otherwise, go to step 2.
The computational results are presented in table 4.4. Convergent trajec-
tories of these methods are shown in figures 4.9 through 4.12.
Methods Functions
ffunl ffun2 ffun3 ffun4
Genetic Algorithms (e = 10-5) 1 8.144367 7.999591 8.000367 8.000201
X2 6.860147 4.000188 7.998147 7.999603
f(x) 0.4683727 3.000004 1.000009 1.0000092
RD 2.16xl0-e 1.53x10" 9.08 xl0 9.47 xl0-
N 1853 1532 1829 7606
Tabu Search (e = 10~5) *i 8.143214 7.998714 7.998234 8.000171
x2 6.859093 3.999903 7.999158 8.000417
f(x) 0.4683734 3.000024 1.000009 1.000009
RD 3.79x10- 8.20x10- 9.73 xl0- 9.26 xl0-
N 669 430 1025 1105
Random Search (e = 10-5) i 8.142729 8.000714 7.999047 7.999999
x2 6.859502 4.000704 7.999708 7.999937
f(x) 0.4683734 3.000014 1.000002 1.0000001
RD 3.62x10- 4.87x10- 2.52x10" 1.56 xlO-7
N 6590640 3676958 11935830 574447707
Table 4.4 Comparisons of Genetic Algorithms, Tabu Search, and
Random Search with the Same Degree of Deviation
(Multi-dimensional Cases)
4.4 Discussions
Through the results of the previous experiments, we have the following
conclusions.
1. A composite genetic algorithm can be applied efficiently to various
problems of global optimization of continuous variables.


82
4
2
i ' 1 1 1 r
Tabu Search
Genetic Algorithms
Random Search
u
d
a
m
I
bfl
O
i
o -
1
-6 -
l
8____i___i___i__i__I___i___i___i___i__|___i__i___i___i--1---1--1---
0 500 1000 1500
evaluations
Figure 4.9 Convergent trajectories of test function ffunl
2000


83
l
ed
V
m
I
O
>
CJ
XJ
-2
5 -4
-6
l:
\
T---'--1--1--1---1--'--'--r
Tabu Search _______
Genetic Algorithms _______
Random Search ____________
i
I
i
8____I___I___I___I__I___I__I___I___I__I___I___I__I___1__i___1___I__L
0 500 1000 1500
evaluations
Figure 4.10 Convergent trajectories of test function ffun2
2000


84
evaluations
Figure 4.11 Convergent trajectories of test function ffun3


85
4
2
T---------1--------1--------1--------1--------1--------1--------1--------1--------1--------1--------1--------1--------I
Tabu Search
Genetic Algorithms
Random Search
6)
as
0
m
1
hfl
O
t________________
m
G
O
£0
>
4)
G
>
co
"g
u
-4
h
-6
8 ____i___I i , . I ____I____i___i__
0 2000 4000 6000
evaluations
Figure 4.12 Convergent trajectories of test function ffun4
T
8000


86
2. The success of the method depends on the size of population and the
number of generations. For the test functions of one variable, the
size is chosen between 20 and 22, while the number of generations is
4. For the test functions of two variables, the size is chosen between
250 and 580, while the number of generations is between 8 and 20.
3. Tabu search is a powerful approach to various problem of global op-
timization of continuous variables. It outperforms the composite ge-
netic algorithms and random search for test problems of continuous
variables.
4. The efficiency of the tabu search methods depends on the number of
elements of the step set H. Both the smaller number and the larger
number result in an inefficient search process. For the test problems,
a reasonable choice of the number can be found between 8 and 14,
with relative derivation 10-5 for both cases of one variable and two
variables.
5. Tabu search is easy to use for optimization of continuous variables
because it needs fewer parameter preparations than other heuris-
tic methods, such as simulated annealing or composite genetic algo-
rithms.


CHAPTER 5
APPLICATION EXAMPLES
5.1 3-bar Pyramid
The optimal design of 3-bar pyramid is considered here. More discus-
sions and their exact solutions of this example are given in Example 2.2.
I. The minimum-weight design subject to a compressive constraint
Let Pa 100 lb and cr0* = 100 psi. We have the exact solution, X =
(1, 0.9428091), with the value f(X) = 1.3333333.
The tabu search method described previously is applied to this problem.
For each move point given by the move set S, the feasibility is first examined
in correspondance to the constraint condition. Then, the values of the
objective function are computed and compared for the possible moves.
The best one is saved. For the given requirement (e = 10-3) of accuracy,
the results of computation are given in table 5.1. The convergent trajectory
of the method is shown in figure 5.1.
Exact Solution (1.0, 0.9428091)
Function Value 1.3333333
Best Solution (0.98952, 0.92823)
Function Value 1.334005
Table 5.1: The Minimum Weight Design of 3-bar Pyramid
Subject to a Compressive Stress Constraint
(after 1160 function evaluations with the number of intervals = 9)


88
Figure 5.1 Convergent trajectory of case I using tabu search


89
II. The minimum weight design subject to a buckling constraint
Suppose P0 = 100 lb, E = 5.6 xlO6 psi, and k = 0.4. Then, we have the
exact solution, X = (0.5, 0.009120694), with the objective function f(X) =
0.010197. Tabu search is applied to this problem, and the results with the
accuracy requirement (e = 10-3) are given in table 5.2. The convergent
trajectory is shown in figure 5.2.
Exact Solution (0.5, 0.009120694)
Function Value 0.010197
Best Solution (0.50738, 0.00909)
Function Value 0.010199
Table 5.2: The Minimum Weight Design of 3-bar Pyramid
Subject to an Element Buckling Constraint
(after 881 evaluations with the number of intervals = 9)
III. The minimum weight design subject to a displacement constraint
Suppose P0 = 100 lb and d^ax = 5 x 10-5. Then, we have the ex-
act solution, X = (1, 0.673435), with the objective function value f(X)
= 0.9523810. Similarly, tabu search is applied, and the results with the
accuracy requirement (e = 10-3) are given in table 5.3. The convergent
trajectory is shown in figure 5.3.
Exact solution (1.0, 0.673435)
Function Value 0.952381
Best Solution (0.98197, 0.67994)
Function Value 0.9529597
Table 5.3: The Minimum Weight Design of 3-bar Pyramid
Subject to a Displacement Constraint
(after 1084 evaluations with the the number of intervals = 9)


90
Figure 5.2 Convergent trajectory of case II using tabu search


Full Text

PAGE 1

HEURISTICS SEARCH METHODS FOR GLOBALLY OPTIMAL DESIGN by Nanfang Hu B.A., Northwestern Polytechnic University, Xian, China, 1981 A thesis submitted to the Faculty of the Graduate School of the University of Colorado in partial fulfillment of the requirements for the degree of Master of Science Department of Mathematics 1990

PAGE 2

I This thesis for Master of Science degree by Nanfang Hu has been approved for the Department of Mathematics by Harvey J. Greenberg Date

PAGE 3

Dedicated to Weiqi, my daughter

PAGE 4

. i Nanfang Hu (M.S., Applied Mathematics) Heuristic search methods for globally optimal design Thesis directed by Professor Harvey J. Greenberg Optimum engineering design problems are usually formulated as non convex optimization problems of continuous variables. Due to the absence of convexity structure, they can have multiple minima, and global optimzation becomes difficult. Traditional methods of optimization, such as penalty methods, can often be trapped at a local optimum. Two new algorithms to solve approximately these problems are introduced. Their reliabilities and effciencies are examined with the help of standard test functions. By the analysis of the implementations, it is seen that tabu search methods are easy to use, and outperform the composite genetic algorithms. In particu-lar, a tabu search method is applied to minimum weight design examples of 3-bar Pyramid, three-bar truss, coil springs, z-section, and channel section. For the channel section, the optimal design using tabu search methods saved 26.14% over the weight of the SUMT method. The form and content of this abstract are approved. I recommended its publication. Signed Harvey J. Greenberg

PAGE 5

Contents CHAPTER 1 INTRODUCTION 1 2 OPTIMAL DESIGN: MODELS AND METHODS 5 2.1 General Formulation . . . . . . 5 2.2 Optimum Design of Mechanical Elements 2.3 Optimum Structural Design 2.4 Optimum VLSI Design 2.5 Optimum Shape Design 2.6 Optimum Reliability Design 2. 7 M ultiobjective Optimization 2.8 Mathematical Methods of Optimal Design 3 HEURISTIC SEARCH METHODS 3.1 Problem Solving and Search 3.2 Heuristics ......... 3.3 Iterative Improvement Methods 3.4 Exploitation and Exploration 3.5 Optimization by Genetic Algorithms 9 11 20 22 26 29 32 38 38 39 41 45 46 3.5.1 Adaptive Process 48 3.5.2 Schemata . 48 3.5.3 Chromosome 50 3.5.4 Evaluation Function 51 3.5.5 Population and Reproduction Plans 51 3.5.6 Genetic Operators . . . . . 52 3.5. 7 Implicit Parallelism and Building Blocks Hypothesis 53

PAGE 6

3.5.8 Schema Theorem 3.5.9 Genetic Algorithms 3.5.10 Composite Genetic Algorithm 3.6 Optimization by Tabu Search . 3.6.1 Tabu Move and Tabu List 3.6.2 Strategic Oscillation Vl 55 55 57 59 60 61 3.6.3 Aspiration Levels . 61 3.6.4 Intermediate Memory and Long-term Memory 62 3.6.5 Tabu Search . . . . . . . . . 62 4 EMPIRICAL STUDIES 4.1 Standard Problems .. 4.2 One-dimensional Algorithms 4.3 Multiple-dimensional Algorithms 4.4 Discussions . . . . .. 5 APPLICATION EXAMPLES 5.1 3-bar Pyramid 5.2 Three-bar Truss 5.3 Coil Springs .. 5.4 Thin-walled Columns REFERENCES 66 66 68 74 80 86 86 91 91 94 98

PAGE 7

List of Figures 2.1 A coil spring . . . . . . 2.2 3-bar Pyramid observed from above 2.3 Three-bar truss ....... 2.4 Z-section and channel section 2.5 Circuit example ...... 2.6( a) Cross section of typical turbine disc 2.6(b )Discretized nonlinear programming model 2.7 A example of complex system 3.1 Crossover operator . . . 3.2 Comparisons of natural and genetic algorithms 4.1 The curve of test function fun1 4.2 The curve of test function fun2 4.3 The curve of test function fun3 The curve of test function fun4 Vll 10 13 16 17 21 24 24 29 53 56 69 70 71 72 4.4 4.5 4.6 4.7 4.8 4.9 Convergent trajectories of one-dimensional test function fun1 75 Convergent trajectories of one-dimensional test function fun2 76 Convergent trajectories of one-dimensional test function fun3 77 Convergent trajectories of one-dimensional test function fun4 78 Convergent trajectories of test function ffun1 81 4.10 Convergent trajectories of test function ffun2 4.11 Convergent trajectories of test function ffun3 4.12 Convergent trajectories of test function ffun4 5.1 Convergent trajectory of case I using tabu search 5.2 Convergent trajectory of case II using tabu search 5.3 Convergent trajectory of case III using tabu search 82 83 84 87 89 90

PAGE 8

Vlll 5.4 Convergent trajectory of optimal design of three-bar truss 92 5.5 Convergent trajectory of optimal design of coil springs 93 5.6 Convergent trajectory of optimal design of Z-section 95 5. 7 Convergent trajectory of optimal design of channel 96

PAGE 9

I II !X List of Tables 4.1 Test functions of one-dimension and optimal solutions . 73 4.2 Comparison of genetic algorithms, tabu search, and random search with the same degree of deviation (one-dimensional cases) 4.3 Test functions and optimal solutions 4.4 Comparisons of genetic algorithms, tabu search, and random search with the same degree of deviation (multi-dimensional cases) 5.1 The minimum weight design of 3-bar Pyramid subject to a 74 79 80 compressive stress constraint . . . . . . . . 86 5.2 The minimum weight design of 3-bar Pyramid subject to an element buckling constraint 5.3 The minimum weight design of 3-bar Pyramid subject to a displacement constraint . . . . . . 5.4 The minimum weight design of three-bar truss 5.5 The minimum weight design of coil springs .. 88 88 91 94 5.6 Comparisons of optimal design using SUMT and tabu search 94 5. 7 Comparisons of optimal design using SUMT and tabu search 97

PAGE 10

X ACKNOWLEDGEMENT I am deeply in debt to Professor H.J. Greenberg for his rigorous ideas, beneficial discussions, and helpful guidance. Much of my work was inspired by the recent developments in combina torial optimization. I am, therefore, thankful to Professor J. Ryan for her lectures in Math Clinic. I am also indebted to Pratt & Whitney, Inc. for its finanical support so that I can have a chance to continue my education. Finally, I have to express my gratitude to my wife and family. Owing to the support and assistance of my wife, Xiaoqing, I can spend much time to pursue my academic goal. To Weiqi, far away in China, I have to say that she, as a sweetheart of her parent, accepted little love from her father.

PAGE 11

i i CHAPTER 1 INTRODUCTION Optimal design is a branch of science applying scientific methods, or/ and technological means in a reasonable way to meet the society's need. Opti mal design problems are first considered in aeronautical engineering. Be cause the weight of the structures has a critical effect on the mission perfor mance and safety of aircrafts, the minimum weight design of basic aircraft components, such as columns and stiffened panels, subject to compressive loads, was initially developed during World War II. Owing to the devel-opment of computer technology, more complex design problems can be effectively analysed. Therefore, the concept of optimal design is more and more addressed in engineering design fields. Today, considerations of lim-ited energy resources, shortage of ecomonic and material resources, strong technological competition, and environmental problems have further in creased the significance of optimal design. Of optimal engineering designs, the most attractive is structural optimization. This is not only because there exist various problems, but also because it has a wide range of applications, including those from civil engineering to aerospace engineering, from machine industry to au tomobile industry, and from off-shore engineering to nuclear engineering. There are plenty of books, reports, papers and publications on struc-tural optimization. The excellent books have been written by Hang and Arora (1979)['], Iyengar and Gupta (1980)[21, Kirsch (1981)[31, Vander plaats (1984)[4], Haftka and Kamat (1985)[s], and Arora (1989)[6]. The

PAGE 12

2 recent developments and signficant contributions have been contruned in several proceedingsi7J.[s],[o]. A vast number of reviews can also be found in the literature. A historical review was given by Schmit (1981)1101. Lev (1981 )1111 provided a rather comprehensive survey of structural optimization over a period of 10 years, 1970 1980. Ashley (1982)1121 focused on the uses of optimization in aerospace engineering. A specilaized survey on the optimization of plates was conducted by Haftka and Prasad (1981)1131. Vanderplaats (1982)1141 stressed the contribution of numerical techniques to the development of structural optimization. All mathematical program ming methods for structural optimization were discussed by Belegundu and Arora (1985)1151. A state-of-art review of recent five years, 19801984, was presented by Levy and Lev (1987)1161. Optimal design problems in engineering are usually formulated as non linear, nonconvex optimization of continuous variables. As a result, it can have multiple local minima. Therefore, global optimization is of special importance for optimal engineering design. In optimization theory, two concepts, global optimization and local optimization,. are distinguished. Global optimization means to solve the optimization problem in a domain given by a real-world problem. Local optimization means to solve locally the optimization in a part of the area, or a neighborhood belonging to this area. Local optimization has been fully studied. Some complete discussions can be found in many books, for example, Jacoby, Kowalik, and Pizzo (1972)1171, Fletcher (19801181, 19811191), and Schwefe! (1986)1201. On the other hand, global optimization is an area which is left to be explored. Some efficient and general methods, such as simplex method, are available solely under linear cases. Historically, global seeking methods relied on

PAGE 13

3 algorithms that could guarantee at best a local optimum. However, in this way, global optimization problems can be solved only in the limited cases, e.g., convex programming. Recently global optimization has attracted much attention. An overview of various methods for global optimization has been given in two vol umes, edited by Dixon and Szego (19751211, 19781221). Globally optimal design is first discussed systematically by Wilde (1978)1921. Constrained global optimization is especially treated in Pardalos and Rosen's monograph (1987)l23l. Hong and Quan (1988)1241 examined carefully the prob lems of integral global optimization. Some recent results have been col lected in Torn and Zilinskas's book (1989)1251. A special issue on global opti mization is also published by Naval Research Logistics (1990, vol. 37). The complexity of global optimization problems has also been studiedi26Li27l. Methods for global optimization are divided into two classes: deter ministic and probabilistic (or stochastic) methods. A procedure of global optimization is generally composed of a series of moves. In determinstic methods, all current information for global optimization problems is pro cessed in a deterministic way to define the next step. In stochastic meth ods, the information is processed in a probabilistic way to determine the next step. An overview for deterministic methods of global optimization is given by Horst (1990)1281. Intervals have been recognized to be excellent tools for handling global optimization problems. Interval arithmetric is introduced by Moore (1966)1291. Some deterministic methods based on in terval arithmetric for global optimization have been developed. Ratschek and Rokne's book (1988)1301 provides a deep and thorough treatment of this kind of method. On the other hand, heuristic search has also been used as an efficient

PAGE 14

4 and general way for solving approximately global optimization problems. Recently, some heuristic search strategies for global optimization, based on iterative improvement methods, have been introduced. They are: simulated annealing (SA), genetic algorithms (GA), and tabu search (TS). These heuristic search methods can be categorized into probabilistic meth ods of global optimization. Their applications to combinatorial optimiza-. tion or optimization of discrete variables have been reported in the literature but their applications to globally optimal design of continuous variables have not been studied. In chapter 2, the models and methods of optimal design are briefly surveyed. Two heuristic search methods for globally optimal design are introduced in chapter 3, and empirical stud ies of them are given in chapter 4. Finally, some applications to practical design examples using tabu search methods are examined in chapter 5.

PAGE 15

CHAPTER 2 OPTIMAL DESIGN: MODELS AND METHODS Mathematical formulation of a design problem is an important step of optimal design. The five prototype mechanical and structural design problems were formulated by Haug and Arora[11. The discussions in details have also been given by Schmit[loJ, Kirsch[3l, and Arora and Thaneder[3'1. In engineering, many kinds of optimal design problems can be involved. They include: optimum design of mechanical elements, optimum structural design, optimum VLSI design, optimum shape design, optimum reliability design, and multiobjective optimization. All of these problems usually are formulated as nonlinear, nonconvex programming (NLP) problems. The process of problem formulation requires identification of design variables, constraints and an objective function. 2.1 General Formulation When a system is designed, it usually is defined by a set of quanti ties, some of which are viewed as variables during the design optimization. Those quantities defining an engineering system that are fixed during the optimization process are called preassigned parameters, and they are not changed by the optimization methods. Those quantities that are not pre assigned are called design variables. The preassigned parameters, together with the design variables, completely describe a design. Quantities are designated as preassigned parameters for a variety of reasons. It may be

PAGE 16

6 that the designer is not free to choose certain parameters, or it may be known from experience that a particular value of the parameter produces good results. Often, by considering some quantities fixed, i.e., invariant during the optimization process, the problem is greatly simplified. Any set of values for the design variables represents a design of the system required. Clearly, some designs are useful or practical, but others might be inadequate or impossible. To exclude such unacceptable designs as candidates for an optimal solution and reduce the time of optimization operations, the design and performance requirements are expressed mathematically in the form of equalities or/and inequalities, which are called constraints, prior to optimization. A design satisfying all the limitations or constraints placed on it is called a feasible design. In design optimization, some measures characterizing the factors that effect the efficiency of the designed system are introduced and referred to as a performance index, or figure of merit (FOM). In general, a figure of merit can be taken as performance, weight, area, volume, cost, reliability, or availiability. The figure of merit can often be associated with some characteristic quantities or design variables of the components of which the designed system is composed. Therefore, it can be viewed generally as a function of design variables, and referred to as an objective function. Sometimes the objective function is also called cost, criterion, or merit function. The selection of an objective function can be one of the most important decisions in the optimal design process. In some situations, an obvious objective function exists. For example, in some cases, the total cost is dominantly the material cost, which is proportional to its weight. Then a design that minimizes cost also minimizes weight. In general, the objective function represents the most important single property of a design, but it

PAGE 17

7 may also represent a weighted sum of a number of properties for multi objective optimization design problems. Weight is the most commonly used objective function of engineering design, especially in structural engineering, due to the fact that it is easily quantified, although most optimization methods are not limited to weight minimization. The weight of a system is often of critical importance, but minimum weight design is not always the least costly. Cost is of wider practical importance than weight, but it is often dif ficult to obtain sufficient data for the construction of a real cost function. A general cost function may include the costs of materials, fabrication, transportation, etc. In addition to the costs involved in the design and manufacture, other factors such as operating and maintenance costs, repair costs, insurance, etc., may be considered. The result may be a "flat" function, which is not sensitive to variations in the design variables. In this case, any feasible design is optimal; however, finding a feasible design is, itself, a difficult problem, which can be formulated as a (nonconvex) global optimization problem to minimizing total deviation from constraint satis faction. From a practical point of view, it is desired to introduce such an objective function that is both sensitive to variations in the design variables and represents the important cost components. In complex engineering optimal design problems, there can exist sev eral non-commensurable criteria that must be simultaneously considered. Such a situation is formulated by a vector, multicriteria, or multi-objective optimization problem in which the designer's goal is to minimize several objective functions simultaneously. For such design problems, the pareto optimum is often introduced. The pareto optimum is such a solution for which there is no way of improving any criterion without worsening at

PAGE 18

8 least one other criterion. In general, the pareto optimum is not umque. One could find a pareto optimum by optimizing any one of the objective functions. Modern systems begin with the collection of pareto optima obtained in this manner of choosing each of the objectives individually. Then, a graphic interface enables design engineers to see the effects of trading off the confliciting objectives. For each trade-off, the system must solve a global optimization problem with a single objective function. A STANDARD NLP MODEL The general problem of optimal engineering design can be stated as one of choosing design variables subjected to constraints related to the design variables of the system, so that the objective function is minimized. That !S1 minimize f(X) subject toN equality constraints and M inequality constraints, F;(X) = 0 Gj(X) :S 0 and subject to the n side constraints, ( i = 1, ... N), (j = 1, ... M), (k=1, ... ,n). (1) (2) (3) (4) Because F; and Gj are usually derived from the functional requirements or behavioral conditions, they are also called functional; or behavioral constraints in engineering. When the functions f(X), F;(X), and Gj(X) are all affine, the optimization problem is called linear programming problem; otherwise, it is

PAGE 19

9 called a nonlinear programming (NLP) problem. In general, optimal de sign problems are NLP problems. 2.2 Optimum Design of Mechanical Elements The design of machine elements is founded on the theories of mechanics and the strength of materials. Optimum design of mechanical elements can be defined as the selection of materials and values for the independent geometrical parameters with the explicit objective of either minimizing an undesirable effect or maximizing a functional requirement, assuring that the design variables satisfy other functional requirements. In designing mechanical elements, material parameters and geometri cal parameters generally are taken as design variables, while functional requirement parameters are preassigned parameters. Optimum design of mechanical elements is usually a nonlinear optimization problem of contin uous variables. It often requires a complex study of functional variations and functional relationships of the elements or designed systems. An excel lent book on optimum design of mechanical elements is written by Johnson (1980)1321. EXAMPLE 2.1 Design of Coil Springs The design of coil springs is encountered widely in practical applica tions. Detailed methods for analyzing and designing such mechanical com ponents have been developed. As one of prototype mechanical and struc tural design probelms, it has been formulated in Haug and Arora's book I']. More discussions of optimization of this structure have also been given by Arora (1989)161. A coil spring loaded in tension or compression (see Fig ure 2.1) is considered here. Our objective is to design a minimum mass spring to carry given loads without material failure while satisfying other

PAGE 20

10 performance requirements. To formulate the problem of designing coil springs, the following nota-tion and data are defined: Number of inactive coils: Q = Applied load : p Shear modulus: G iifinimum spring deflection: D. H' eight density of spring material : i Gravitational constant : g Mass density of material : p Allowable shear stress : Ta = Lower limit on surge wave frequency : wo Limit on the outer diameter : [) = Figure 2.1 A coil spring 2, lOlbs, 1.15 X l07lb/in2 0.5in, 0.285lb/in3 386inf sec2 7.38342 x 10-416 jin4 8.0 X l04lb/in2 100Hz, l.Sin. ,, "' ,_ ,\:. p '.J The three design variables for the problem are defined as d = wne diameter (inches), D = mean coil diameter (inches) and N = number of active coils. Through the analysis of design, the optimum design problem of the coil spring is formulatedi61 as follows. Minimize f = (N + 2)Dd2

PAGE 21

subject to the deflection constraint: D3N 9 -----1.0 > 0 1 71875d4 the shear stress constraint: 9 -10-2 12566d3(D -d) the surge wave frequency constraint: _2_.4_6_ > 0 12566d3 140.54d 9a = D' N -1.0 2: 0; and the diameter constraint: D+d 94 = 1.0 1.5 2: 0. The lower and upper bounds on the design variables are set as follows 0.05 :::; d :::; 0.20, 0.25 :::; D :::; 1.30, 2 :::; N :::; 15. 11 This problem has been discussedf6l using sequential quadratic programming (SQP). The applications of tabu search to this problem, and the comparisons with the Arora's results will be given in chapter 5. 2.3 Optimum Structural Design Optimum structural design is widely formulated as a minimum weight design with stress, displacement, buckling load, natural frequency and design variables bound constraints. The general formulation of the problem is: minimize w(b, z, >.), (5)

PAGE 22

satisfying the finite element equilibrium equations: and the constraints K(b)z = F(b), K(b )1J = >.M(b )'7, h;(b, z, >.) ::; 0 (i=l, ... ,M), 12 (6) (7) (8) where b is ann-dimensional vector of design variables, z is ann-dimensional vector of nodal generalized coordinates, '7 is ann-dimensional eigenvector, A is an eigenvalue, K(b) is an n x n stiffness matrix, M(b) is an n x n mass or geometric stiffness matrix, and F(b) is an n x 1 equivalent loading vector. Optimum structural design is often a nonlinear programming problem of continuous variables. The features of problems of optimum structural design are discussed by Belegundu and Arora[ls], and summarized as follows. 1. The functions f, F;, and Gj are usually implicit in design variables. In general, the objective function depends on the displacement vec tor z which in turn depends on the design variable b through the equilibrium equation. 2. The evaluation of functions and gradients is quite expensive due to their implicit dependence on design variables. 3. The optimal design problem is highly nonlinear and in general non convex. As a result, it can have multiple local minima. Example 2.2 Design of 3-bar Pyramid The optimal design of 3-bar pyramid is considered here. The coordi-nates of the nodes are shown in figure 2.2 and each element has a cross-

PAGE 23

p 13 sectional area = a. A horizontal force p = (0, P0,0) is acting in the top node_ The length of each element is yll + "'/2 (_ ..n,._!.. o) \ 2 1 2' p 0 (O,y,O) Element no. 1 (0,0,1) ( V3,._2. o) 2 1 2 I Figure 2.2 3-bar Pyramid observed from above The weight of the pyramid is given by: w(a,,) = ca)l +!2 L where C=3p, p is the density. Because C !S a constant, the objective function may be expressed as f(X) = x2(l +xi), where x1 x2 represent 1, a, respectively. The same example has also been discussed in the literaturef8l. I. The minimum-weight design subject to a compressive constraint For the 3-bar Pyramid, the compressive stress in element no. 1 is found to be:

PAGE 24

14 Then we have the minimization problem, minimixe f(X) subject to g X = -1 + 2 -o-1 0. ( ) 2Po v 1 =az 3x2 x1 For this problem, the analytical solution can be given as follows. x, = 1, II. The minimum weight design subject to a buckling constraint For the 3-bar Pyramid, the buckling stress in element no. 1 is found to be: 2PoR --1-'--< 3Ea 72 -ka 1 2' -r-7 where E is the modulus of elasticity, and k is the cofficient of buckling. Then, the design problem of minimum weight subject to an element buck ling constraint (in element no. 1) can be formulated as follows. Minimize f(X) subject to 2Po R kx2 g(X) = -3E 1 + 2 2 0. X2 X1 1 +XI For this problem, the analytical solution can be easily obtained as, ( 5VSPo )';2 XJ = 0.5, X2 -BkE III. The minimum weight design subject to a displacement constraint For the 3-bar Pyramid, the displacement constraint of the top node in the y-direction is given by:

PAGE 25

15 Then, the minimum weight design subject to a displacement constraint on the top node in the y-direction is formulated as follows. Minimize f(X) subject to g(X) = 2EPo Vl + :z:i(l + d"'"' :S 0. 3 :z:2 x, (9) For this simple problem, the analytical solution can be given by: 4v'2Po :z:2 = 3Edma'. x, = 1, For the given input data, the comparisons of the exact solutions of optimal designs of 3-bar Pyramid for the three cases with the computational results using tabu search will be given in chapter 5. Example 2.3 Design of Three-bar Truss This example is also one of prototype mechanical and structural design problems given by Haug and Arora (1979)l1l, and has been used by Schmit (1981 )l10l as an example of not fully stressed designs. For the symmetric three-bar planar truss shown in figure 2.3, two distinct loads are considered, i.e, P1 = 20,000 lb acting at an angle of 135" to the axis. Because the truss is symmetric, A1 = A3 and therefore, the two independent design variables are A1 and A2 For the following input data, N lOin, {3, 135, f32 = goo, {33 45, p O.llb/in3 E = 10 x 106lbjin',

PAGE 26

16 y 1 Figure 2.3 Three-bar truss the optimum design problem of the three-bar truss can be formulated as follows. Minimize f(A1 A2 ) = pN(2J2A1 + A2 ) . subject to g1(A1 A 2 ) = 20,000-
PAGE 27

17 This example will be solved using tabu search method in chapter 5. Example 2.4 Design of Thin-walled Columns In aircraft structures and other metal construction, stiffeners which are normally channel and Z-section are widely used. Optimal design of these structures have been fully analyzed and discussed by Iyengarl21. A column of given length L is designed (see figure 2.4 ). The design variables for both the Zand channel section are of flange width b, thickness t, and overall depth h. The material used for the column is aluminium alloy y t y T h Figure 2.4 Z-section and channel section with compressive yield stress 25,000 psi. When both the length of column and material are specified, the minimum weight design is really to optimize the cross-sectional area. Since the flange and web thickness are assumed to be the same for both sections, the design vector X contains three variables:

PAGE 28

18 where x1 = b, x2 = t, x3 = c = hjb. The objective function of our problem is to minimize the structural weight W which is defined by: where p is the density of the material, Ac the area of cross-section of the column, and L the length of the column. For the channel and Z-section, an exact expression for the area Ac can be given by: Ac = 2bt + ht = bt(2 +c). Then, the objective function can be rewriten as: W = x1x2(2 + xa)pL. Because p and L are constants, the objective function may be expressed as: For the designed column, five distinct restrictions which act as constriaints in the choice of design variables. They are: l. the load P1 for which overall buckling of the column occurs; 2. the load P2 for which local buckling of the flange takes place; 3. the load P3 for which local buckling of the web is observed; 4. the load P4 for which there is torsional buckling of the column;

PAGE 29

5. the load P5 for which the yield stress is reached in the column. Then, the constraint functions are given by g; = P;(X)-P 2': 0 (i = 1,2, ... ,5). I. Optimal design of Z-section For the Z-section, the closed-forms for P;(X) have been given[21 by P3(X) = P4(X) Ps(X) rr2 E 3 3 2 2412 x 1 x2[(x3 + 6x 3 + 8) + 12x; + 36xi-16x; + 48xi + 64)1 1 2 ], ( 0.43 )rr2 E (2 x ) 12(1v 2)x1 + 3 rr2E 3(1 v2 ) x1 rr2 E xfx2x;(l + 2x3) + x3) 2 8 + + + x1(8 + + o-yx1x2(2 + x3), 19 where E is the Young's modulus, G the shear modulus, v the Possion's ratio, and o-y the yield stress. II. Optimal design of channels For the channel section, the constraints 92 93 and 95 have the same forms as the Z-section. For the other constraints, we have: rr2 E 3 ( 1 + 2x3 ) --X Xz 3 1 2 + x3 rr2 Exfx2x; ( + 14xi + + 104x; + 232x3 + 336) 24 xi + + + 8x3 + 4 + x3 ) 3 x1(xi + + + 8x3 + 4) -[ { rr2 ( 288 + 128x3 --. 14xj -2412 x 1(xj + + + 8x3 + 4) + x3)' }2 + x1(xi + + + 8x3 + 4) (1.33)rr4 + x3)2 ] 1 ; 2 + (xi + + + 8x3 + 4)

PAGE 30

20 This example has been discussed using sequential unconstrained minimiza tion techniques (SUMT)f21. In chapter 5, the application of tabu search method to this example will be given. 2.4 Optimum VLSI Design Today VLSI design has become one of main engineering design prob lems. In optimum VLSI design, two problems are often involved: placement of cells and routing of interconnections for the cells. For some of digital integrated circuits (IC), the area available for inter connections is strictly limited. If there is not enough interconnection area available for a given cell placement, then a new placement must be gener ated. Then one of goals to optimum VLSI design is to minimize the total chip area occupied by the circuit models and interconnections between the modules. Circuit performance is another major concern in the design of digital integrated circuits. If the delay through a particular signal path is excessive, the circuit may not perform in accordance with logic specifica tion. For integrated circuits, signal delay results in part from the lengths of the interconnections on the chip. Hence, if the maximum signal delay is to be minimized, the length of the interconnections along the critical paths, that is, the signal paths whose delays limit the overall performance of the chip, must be minimized or kept below a certain length. Then, another goal is to minimize the total length of interconnections of a chip. Optimum VLSI design previously described is often formulated as a combinatorial optimization problem of discrete variables. Given N cells on a chip, the number of possible placements is of the order O(N!). Then, the placement problem, in which it is desired to find the placement of minimum total interconnect length, belongs to the class of NP-complete

PAGE 31

21 problems. In fact, most placement and global routing problem in opti mum VLSI design belong to this class. For such a class of problems of this complexity there exists no known polynomial time exact algorithm. Therefore, some approximate or heuristic concepts are necessary to solve this class of problem. Recently some heuristic search methods have been successfully applied to optimum VLSI design. Some results can be found in Sechen's book (1988)f331, and Wong, Leong and Liu's book (1988)[341. Some global search methods, based on Browian motion process, are also recently applied for optimal design of electronic circuitsf351.[36l. Example 2.5 Design of Circuit Consider the design of a circuit of a constant voltage described by Heydemann, Durbin, and Montaronf37l, shown in figure 2.5. The is designed to keep the output voltage Vout constant in spite of variations in the set of technological parameters such as transconductance, threshold voltage, channel width absolute tolerance length absolute tolerance and temperature. Rl out Figure 2.5 Circuit example

PAGE 32

22 The design variables are the channel lengths of the MOS devices, de noted by (x1 ... x5). The following five-dimensional optimization problem can be formulated. Minimize f(x!, ... xs) = (vout-5)' subject to 15 :S Xs :S 150, 15 :::; "' :::; 200, 15 :S x5 :S 200, where Vout is a function of the design variables ... x5 2.5 Optimum Shape Design The determination of shape of two-and three-dimensional elements sub-ject to constraints involving natural frequencies, displacements and stresses defines a typical optimum shape design problem. The general form of the problem can be formulated as subject to minimize w = j In f(z)dO j In F,(z)dO = 0 j In G;(z)dO :S 0 (i=1, ... ,N), (j = 1, ... ,M), (10) (11) (12) where 0 is a domain for the problem, and the state variable z is solution of the boundary value problem (equilibrium equations) for the elastic body.

PAGE 33

23 The functionals f(z ), F;(z ), G;( z) depend on the state variable (z) and represents some measure of stress or displacement. The cost or objective functional can be to minimize volume, weight, area, stress concentration or peak torision rigidity. When finite element methods or sample analysis is applied, this func tional form can be changed into the form of algebraic equations of nodal point coordinates of the finite elements, or the coefficients of various poly nomial samples (splines, Lagrange, Hermite, cubic, etc.). Thus, when the nodal point coordinates or the coefficients of polynomial samples are as design variables, the problem can be formulated by the standard NLP model. A survey of structural shape optimization is given by Haftka and Grandhi (1986)l381. The more discussions can be seen in some recent proceedings[asJ,[oJ. Example 2.6 Design of Rotating Discs The minimum weight design of rotating discs shown in figure 2.6 is considered. In this example, radii r2 and thicknesses u3 u4 u5 are the design variables x: The objective function is w(x) = J.r= 21rprh(r)dr, Tt where r,, rm are the inner radii and outer radii, respectively, pis the density and h( r) is the thickness at a radical distance r from the axis of rotation. For the division r1 r2 rm and the linear approximation to h(r), the

PAGE 34

Hob t r; .,a Figure 2.6(a) Cross section of typical turbine disc. ----'\-_ I ----------'-------------j'H\ / --,, ,, Figure 2.6(b) Discretized nonlinear programming model. 24

PAGE 35

25 weight function can be given by: For this minimum weight design, the behavioural constraints are given as follows. From the analysis of stresses, the rotation stresses within rach ring [r;, r;+l] can be given approximately by: (3 pw2(3 + v) 2 a r2 8 r (3 pw2(1 + 3v) 2 a+ r2 8 r where a, (3 are constants which can be determined by equating the radial displacement and radial load at the the interface of adjacent rings. Similarly the thermal stresses are given by: where/, 5 are constants, and a is the coefficient of linear expansion. The temperature c,b( r) is a prescribed function of r. The resultant stresses are then given by: Therefore, the principal shearing stresses at the given radii r are given by: 1 ' T2 = 2 I (J'T I' 1 r3 = 2 I
PAGE 36

26 Then the maximum principal shearing stress T is given by: Therefore, the constraint condition is given by: T :S; To. In this problem, the behavioural constraints cannot be expressed analytically as functions of design variables. It is clear that the discretized minimization problem is a nonlinear, nonconvex optimization problem of continuous variables. The problem has been discussed using feasible direction methods. The details are referred to Silva's work1411. 2.6 Optimum Reliability Design Nowadays reliability is one of the key design factors when designing complex, critical, and expensive systems. In general, optimum reliabilitybased design can be divided into two basic classes of problems. One is the reliability optimization of network systems, and the other is the reliability optimization of machine elements or systems. For the reliability optimization problems of network systems, there are usually the following two forms: 1) cost minimization subject to the desired level of system reliability, and 2) system reliability maximization under cost constraints. The cost minimization subject to the desired level of system reliability is stated as follows. n Minimize c = L c;(yi) (13) i:::::::l

PAGE 37

I 27 subject to the system reliability constraint n R, = II R;(y;) Ro, (14) i=l where R, is the system reliability, R0 is the specified reliability of system, c; is the cost of the i-th element, Yi is the design variables denoted the number of the i-th component, and n is the number of all elements in the system. System reliability maximization problems under cost constraints is stated as follows. n MaximizeR,= II R;(y;) (15) i=l subject to the following constraints: (k=l, ... ,n), (16) where Pki(Y;) represents consumption of the k-th resource at the i-th ele-ment. Stress-strength inference theory is used to determine the reliability of a mechanical component or system when the component stress-strength probability distributions are known. At this time, the problem concerned becomes a probabilistic design problem. Similarly, the reliability optimization problems of mechanical elements can also be categoried into the following two types of the problems: l) at a specified component reliability, minimize the cost of design, and 2) maximizing component reliability under resource constraints. When mechanical element or system distributions of stress sand strength S are known, say f(s) and F(S), the component reliability is given by a well known equation: R = ;_: f(s)[[' F(S)dS]ds. (17)

PAGE 38

; 28 Then, component or system reliability maximization subject to resource constraints, when stress and strength are independent and normally dis-tributed, is stated as follows. (18) subject to (19) where a, b, c, and d are cost functions, and A0 represents the quantity of available resources. Optimum reliability design is often a nonlinear programming problem of continuous variables. Reliability optimization has attracted more attention. Many papers can be found in some journals, for example, Techno metrics, or IEEE Transations on Reliability. A comprehensive discussion of optimization of system reliability can also be found in the Tillman, Hwang and Kuo's book (1980)[421. Example 2. 7 Reliability Design of Complex Systems Consider a example of reliability design of complex system shown in figure 2.7. Our objective is to design a system of reliability maximization subject to a cost constraint. The problem can be formulated as follows. MaximizeR, = 1 R3[(1 -R,)(l R4)f subject to -(1 R3){1 R2[(1(1-R,) X (1 R.)]} 2 C, = L c; ::; C, i R;,min :S R; :S 1.0,

PAGE 39

29 1 1-input 2 -4 -:l !--<> 4-'-2 ____j output 1 Figure 2. 7 Example of complex system where the relation between the cost and the reliability for the component (i) is given by: where the k; and a; are given constants. 2. 7 Multiobjective Optimization In complex engineering design problems there often exist several non commensurable criteria that must be simultaneously considered. Such a situation is formulated as multicriteria or multiobjective optimization prob lem in which the designer's goal is to optimize several objective functions simultaneously. Since the objective function in this problem is a vector, multiobjective optimization is often called vector optimization, while a sin gle objective function optimization is referred to as scalar optimization.

PAGE 40

30 The engineering design problems are acknowledged to be inherently multiobjective optimization problems. In general, a system is to be de signed to meet several requirements, expressed as criteria of cost, weight, area, reliability, stiffness, deflection, etc. Then, the problem is a multiob jective optimization. In optimum engineering design problems with a single objective func tion, one usually seeks a single solution with little or no attention to the sensitivities of other objectives that were ignored. This is often felt to be a serious weakness in the design process because the solution gained is strongly dependent on the form and various parameters of the problem. In practice, it is generally difficult to give exactly the constraint functions and choose suitably a single definite criterion to measure the advantage of the designed system. Sensitivity analysis has been applied to some extent in order to lessen this drawback characteristic of scalar optimization, but only local information in the neighborhood of the optimal solution is obtained by this technique. On the other hand, multiobjective optimization may be regarded as a systematic sensitivity analysis of those design objectives which are considered especially important. There generally exists no point that is an optimum for all criteria si multaneously because the criteria are confiiciting. For example, one can improve reliability at the expense of increasing cost. Thus, in multiobjec tive optimization, one cannot speak of an optimum solution point, but of a "satisfying" solution. Optimality plays an important role in the scalar optimization. It allows the designer to restrict their attention to a single solution or a very small subset of solutions from among much larger set of feasible solutions. On the other hand, noninferiority or pareto optimum, introduced by Pareto in 1896[431, serves a similar, but less limiting, purpose

PAGE 41

31 for multiobjective problems. The idea of noninferiority arises from the concept of dominance. Non inferiority is the term used by some mathematicians; nondominance by the others; efficiency by computer scientists and statisticians; and pareto optimum by economists and most design engineers. Mathematically, vector optimization can be regarded as a generaliza tion of scalar optimization, and the solution of pareto optimum for a mul tiobjective optimization problem requires the solution of a set of scalar optimization problems, including those which are obtained by optimizing every criterion separately. Therefore, it can be concluded that the computational effort involved in the numerical operation of pareto optimum may prove to be quite considerable. A multiobjective optimization problem can be formulated as follows. Minimize Z(X) = [z1(X), z2(X), ... zp(X)] subject to Fi(X) = 0 Gi(X) ::?: 0 and subject to the side constraints ak :::; Xk :::; bk (i = 1, ... ,N), (j=1, ... ,M), (k=1, ... ,n), (20) (21) (22) (23) where X= (x1 ... xn) is ann-dimensional design vector. The individual objective functions are denoted by zk(X) (k = 1, ... p ). Pareto optimum can be defined mathematically in the following way. A solution, say X*, is noninferior, or pareto optimum, if it is feasible and if there does not exist a feasible solution, say Y, such that fork=l, ... ,p, (24)

PAGE 42

32 and for some k. (25) Because the pareto optimum is not unique, multiobjective optimization is often time-consuming. In order to lessen the computational and human effort for multiobjective optimization, it seems preferable, instead of generating all pareto optima, to concentrate only on a certain subset of them. Two kinds of procedures have been developed to acheive this goal. One of them is develop an interactive procedure, where the designer participates in the process at every pareto optimum achieved in order to determine the direction and extent of the next step. Recently, numerous multiobjective optimization methods have been oriented towards interactive on-line use. The interactive multiobjective optimization usually consists of sequences of decision phases and computation phases. The other is to formulate mul tiobjective optimization as an adaptive process in which genetic algorithms can be applied. The approach applies genetic operators to evolve from a population a satisfactory pareto optimum solution. The further discussion of multiobjective optimization is beyond the scope of this thesis. Many valuable booksloJ,[J.[sJ are available for the interested readers. 2.8 Mathematical Methods of Optimal Design Optimization problems have been divided into two large classes: unconstrained and constrained optimization. For unconstrained problems, mathematical methods of optimization are classified by Edelbaum (1962)1461 into direct methods and indirect methods. Direct methods, or numerical methods, are those that approach a solution in a stepwise manner (iteratively), at each step improving the value of objective function. Indirect methods, or

PAGE 43

33 analytic methods, are those that attempt to reach the optimum in a single step, without tests or trials. Generally, according to indirect methods, the optimization is changed into a problem of solving a system of simultaneous equations, namely the necessary conditions for the optimality. Iterative methods comprise a general approach to functional optimization for con tinuous variables. On the other hand, some solutions, e.g., saddle points, are of no interest for the original problem, it is preferable to design iterative methods based on direct methods in practice. Direct methods can be classified further into two classes: descent meth ods, based on derivatives, and direct search methods. Descent methods usually depend on the initial solution, and perform successive moves in a downhill direction. They are powerful when the objective function is uni modal in the feasible domain. However, when there exist many peaks in the domain and there is no information for their areas, descent methods can often be trapped at a local optimum. In order to avoid this case, some other methods, e.g., multistart methods, have been incorporatedl2Ll7l with them. For constrained problems, they are usually transformed into a parametric unconstrained problemsl48l so that some unconstrained optimization technique can be applied. Of these transformations the most common are penalty methods, such as interior point and exterior point forms. Because exterior point methods approach the optimal solution from the outside of the possible domain and the final approximate solution does not exactly satisfy the constraint conditions, they are generally not accepted by most design engineers. On the other hand, interior point methods make moves in the interior of the feasible domain so that each successive solution is feasible. Therefore, sequential unconstrained minimization techniques

PAGE 44

:; 34 (SUMT), based on the interior point methods, have been widely applied in engineering designf2],[3],[7],[s],f9]. The development of mathematical programming techniques during the late 40's and early 50's provided further impetus to the application of op timization methods in engineering design. In 1960, Schmitf491 proposed coupling the finite element analysis and mathematical programming meth ods for structural optimization design. In this approach structural design is treated as a problem of mathematical extremization of an objective func tion in an 'n' dimensional design variable space constructed by such be havioral functions as stresses, displacements, frequencies, etc. The search for an optimum is carried out by methods of linear and non-linear pro gramming techniques. Some of examples of mathematical programming methods are gradient projection, steepest descent, feasible directions and various unconstrained minimization techniques in conjection with the so called penalty function transformations to account for constraints. A com plete survey of mathematical programming methods applied to structural design optimization have been recently given by Arora1501. In the following years, this approach enjoyed great popularity and was applied to a number of interesting structural design problems. In order to provide the assis tance to choose a generally useful algorithm for the optimal design using mathematical programming methods, some comparative studies have been made. Colville (1968)1511 first laid down the foundation of comparative studies; Eason and Fenton (1974)1521 conducted a major study aimed at engineering design problems; and a comprehensive comparative study was given by Sandgren (1977)fsaJ. Recently, the concept of interactive design optimization is also introduced by Arora and Tseng (1988)fs 4 J to handle the uncertainty which is present in most computational optimization methods.

PAGE 45

35 As optimizers have become more and more ambitious, the limitations of the mathematical programming methods proved to be too constructive for practical applications. The elaborate methematical transformations for de termining travel directions, the sensitivity of the algorithms to step size and the initial (starting) solution are only a few of the difficulties encountered in applying mathematical programming methods to practical problems. To overcome some of these difficulties several efforts have been made. They involve the concepts of mathematical approximation and decomposition. For structural optimization of large-scale structures, the optimality cri teria approaches were presented by Venkayy, Khot, and Reddy (1968l55l, 1973l56l ). In the optimality criteria methods, the criteria that the opti mum design must satisfy are first derived. The criteria may be intuitive or based on mathematical formulations with or without approximations. The algorithm based on the criteria is developed with the objective that the current design moves towards a design satisfying the criteria. In this sense the optimality criteria methods fall under the category of indirect methods of optimization. In mathematical form the optimality criteria are equiva lent to the Kuhn-Tucker conditions of non-linear programming. Optimum design procedures based on optimality criteria methods usually involve two distinct types of approximations: 1. Those associated with indentifying in advance the critical constraints and the active (free) variables at the optimum; 2. Those associated with the development of simple recursive redesign rules. It has been recently shown by Arora (1980)l57l, Fluery (1982)l58l, and Bele gundu and Arora (1985)l59l that the Newton's iteration to solve the non-

PAGE 46

36 linear set of necessary conditions is equivalent to solvinga quadratic pro gramming subproblem using mathematical programming methods. The equivalence of the numerical optimality criteria and gradient projection methods has also been shown by Arora ( 1980 )[571. On the other hand, multilevel optimization methods for the large scale structures are developed by Sobieszczanski-Sobieski (1982)[60] and Sobieszczanski-Sobieski, James, and Dovi (1985)[611. In multilevel opti mization, an optimization problem is decomposed into a set of subproblems and a coordination problem that preserves coupling between the subprob lems in which the decomposition is achieved by separating the structural element optimization subproblems from the assembled structural optimiza tion problem. Each element optimization and optimum sensivity analysis yields the cross-sectional dimensions that minimize a cumulative measure of the element constraint violation as a function of the elemental forces and stiffness. Then, the assembled structural optimization produces the overall mass and stiffness distributions optimized for minimum total mass subject to the element forces and stiffness constraints. A general multilevel iter ative method for optimization problems have also been recently discussed by Gelman and Mandel (1990 )[621. In optimal engineering design, the following difficulties have been ad dressed: 1. There are problems that are difficult to be treated in the common NLP methods, such as implicit, or nondifferentable function relations. 2. It has been shown that even the best of methods will may fail to solve or make slow progress toward the solution on a given practical problem from time to time.

PAGE 47

37 3. Global optimization is of special importance in engineering design. But the current NLP methods either may be trapped at local optimum, e.g., SUMT methods; or is not efficient to handle the problem, e.g., random search, or multistart methods. We shall attempt to attack these difficulties using heuristic methods. In order to handle the complexity of combinatorial optimization, some heuristic search methods have been recently introduced. For example, the simulated annealing methods have been widely involved in optimum VLSI designf33U34l; the application of genetic algorithms to structural optimum design has been recently presented by Hajela (1990)[631. More examples can be found in the next chapter. Heuristic search methods and their applications to optimum design of continuous variables are introduced in the following chapters.

PAGE 48

CHAPTER3 HEURlSTIC SEARCH METHODS 3.1 Problem Solving and Search In order to develop general solution approaches, theory of problem solv ing has been widely discussed, especially in artificial intelligence. Simon (1983 )l641 has distinguished three distinct ways for representing and think ing about solving tasks. One way to view problem solving is as a process of search. In this way, a description of a desired solution is called a goal, and the set of possible steps leading from initial conditions to a goal is viewed as a search space. Then, problem solving is carried out by searching through the space of possible solutions for ones that satisfy a goal. The second way to view problem solving is as reasoning. In this way, a system of logic is postulated that allows us to deduce new statements from axioms and previously deduced statements. Then solving a problem consists of accumulating more information (more statements) by inference until the answer to the problem has been found. The third way of viewing problem solving is as constraint satisfaction. In this way, a set of objects and various subsets defined by the constraints they satisfy are postulated. Then, solving a problem consists of narrow ing the original set to a subset or unique object that satisfies all of the constraints. Here we are concerned only with the formulation of problem solving as search. The state space search, or generate-and-test, is an example of

PAGE 49

I i I 39 state space search. This method uses states and operators to construct a solution process. The idea behind it is to find a sequence of operators that can be applied to a starting state until a goal state is reached. Conventional nonlinear optimization methods are based on the state space search. They are generally specified as: 1. define an initial state, X(O); 2. let X(i) be the current state; 3. select a direction S(i) and a step length h(i); 4. generate a new state, X(i + 1) = X(i) + h(i) S(i); (26) 5. compare the new state with the old ones, store the better one to be the current state; 6. check if the stopping criteria are satisfied. If so, then stop; otherwise, go to step 2. As simple search process is often time-consuming, especially for global optirnization of continuous variables, some approximate or heuristic concepts are often necessary for efficient problem solving. 3.2 Heuristics Heuristics are some rules of thumb introduced solely by intuition and experience. They can be used as tools to reduce the state space to be searched. A search that uses a heuristic is called "heuristic search". A common form of heuristic search is "hill clmbing". In hill climbing methods,

PAGE 50

40 the generated state is compared with the last state, the better one is kept, and the search process is repeated until no better is found. Heuristics as a discipline has been strongly influenced by George Polya. In his well-known book (1945)[651, Polya gave detailed descriptions of the various heuristic methods and condensed them to a few general principles. Since then, a vast number of articles and papers on heuristics have been published in the literature. A historical review for approaches to heuristics is given by Groner and Bischof (1983)f66l. The applications of heuristics to intelligent search strategies have also been fully studied by Pearl (1984)[671. Weiner (1975)[68] indicated that heuristics for problem solving can be classified as follows: constructive methods, that would build solutions using heuristic rules, often in a sequential, deterministic manner; local transformation methods, that aim to improve existing solutions by the local search of a well-defined neighborhood in solution space; decomposition methods, that break a problem in smaller parts; feature extraction methods, both statistical and combinatorial, that aim to reduce the size of a problem; such methods are often applied after several candidate solutions are obtained, where features common to these solutions removed from the problem; inductive methods, that solve a large problem by first solving small embedded problems, and then inductively add to these problems; approximation methods, that transform an intractable problem into a tractable one.

PAGE 51

tl 41 The technique of problem transformation (approximation) is emphasized by Weiner which is considered to be worthy of both theoretical and prac tical study. 3.3 Iterative Improvement Methods The general strategy for designing heuristics is the method of iterative improvement, or hill climbing, which is based on stepwise improvement on the value of the cost function by exploring neighborhoods. Iterative improvement is also known as neighborhood transform, local transform or local search. The application of an iterative improvement method presupposes the definition of a state, a cost function and a generation mechanism. In many optimization problems, a solution is usually specified by some data structure which can be processed on computers. Such a solution is called the state. The set of these solutions is referred to the state space or solution space. A generation mechanism is a simple prescription to generate a tran-sition from a state to another one by a small perturbation. It defines a neighborhood N(i) for each state that can be reached from the qurent state (i) in one transition. Therefore, the performance of exploitation of a heuris tic search method based on iterative improvement methods is determined by its defining generation mechanism. The central part of developing an effective heuristic search method is to define a reasonable generation mech anism. The generation mechanism, based on k-optimality by Lin (1965)1691, has been widely applied in the discussion of travelling salesman problems. Another generation mechanism, based on natural evolution, was introduced by Holland (1975)[701 in which natural selection by survival of the fittest is

PAGE 52

42 emulated. In general, iterative improvement, or hill climbing methods, can be specified as follows. 1. An initial state and its neighborhood are given. 2. A sequence of iterations is generated, each iteration consisting of a possible transition from the current state to a state selected from the neighborhood of the current state. 3. If this neighboring state has a lower cost, the current state is re placed by this neighbor; otherwise, another neighbor is selected and compared for its cost value. 4. When a state is obtained whose cost is no worse than any of its neighbors, terminates; otherwise, go to step 1. For function optimization of one continuous variable, hill clmbing can be simply formulated as follows. 1. An initial point x0 in search space is given. 2. For the given step h, consider the moves x, = Xo + h, X2 = Xo-h, and compute their costs f(x1 ) and f(x2). 3. If one of these moves has less cost, then save it as x0 and go to step 4; otherwise, replace h by the smaller one and go to step 4. 4. If the termination criteria are satisfied, stop; otherwise, go to step 1.

PAGE 53

43 In general, for function optimization of several continuous variables, the hill climbing method can be formulated as follows. 1. An initial point X0 is given. 2. For the given step h, consider the moves X1 = Xo + s, where s E S, and S ={sIs= (x1 h,x2 h, ... ,xkh)}. 3. If one of moves has less cost, save it as X0 and go to step 4; otherwise, replace h by the smaller one and go to step 4. 4. If termination critera are satisfied, stop; otherwise, go to step 1. The success of hill climbing methods depends largely on both the initial solution and the step size of h. On the other hand, it is hard to define the direction of moves towards the optimum for cases of several variables because of the geometrical complexity of the search space. Hill climbing methods have been used as a local search method with the smaller step h for optimization. The disadvantages of iterative improvement methods are: o by defintion, the iterative methods can terminate in a local minimum, there is generally no information as to the amount by which this local minimum deviates from a global minimum; the obtained local minimum depends on the initial state, for the choice of which generally no guidelines are available;

PAGE 54

' 44 in general, it is not possible to give a sharp estimate for the compu tation time. To overcome some of the disadvantages of iterative improvement methods, the following approaches have been suggested by Laarhoven and Aarts (1988)[711: 1. execution of the method for a large number of initial solutions. Clearly, asymptotically-under the guarantee that all solutions have been used as initial solutions-such an algorithm (e.g., multistart methods) finds a global optimum with probability one; 2. use of information gained from previous runs of the algorithm to im prove the choice of initial state for next run (this information relates to the structure of the set of states); 3. introduction of a more complex generation mechanism (or, equiva lently, enlargement of the neighborhoods) in order to "jump out" of a local minimum corresponding to the simple generation mechanism properly requires detailed knowledge of the problem itself; 4. acceptance of transitions which correspond to an increase in the cost function in a limited way (in iterative improvement algorithm only transitions corresponding to a decrease in cost are accepted). The first approach is a traditional way to solve global optimization prob lems approximately (the iterative improvement method based on Lin's 2-opt strategy is a well-known example). But, this method is still a trial and-error procedure. The second approach is usually strongly problem dependent. Recently, some heuristic search methods that follow the third

PAGE 55

45 and the fourth approaches have been introduced. They are: simulated an nealing, genetic algorithms, and tabu search. Simulated annealing, popu larized by Kirkpatrick, Gelatt, and Vecchi (1983)1721 and by Cerny (1985)1731, mimics the physical annealing process. A detailed discussion of the the ory and applications of simulated annealing can be found in Laarhoven and Aarts's book (1988 )1711. A comprehensive annotated bibliography of the 300 papers and reports has also been included in the special issue (1988)1741. Genetic algorithms, introduced by Holland (1975)1701, mimics the process of biological evolution. The excellent book given by Goldberg (1989)1751 and two volumes of proceedings edited by Grefenstette (19851761, 19871771) provide an introduction of genetic algorithms and their appli cations. The overview can also be found in Liepins and Hillard's paper (1989)1781. Tabu search, introduced by Glover (19771791, 19861801), stems from the general tenets of intelligent problem-solving. A detailed intro duction has been given by Glover (1989)1811. All these approaches have been surveyed by Glover and Greenberg ( 1989 )1821. Genetic algorithms and tabu search methods, and their applications to globally optimal design will be discussed in details in the next sections. 3.4 Exploitation and Exploration As we described previously, optimal design problems can be viewed as a search process over design space. Point-by-point, or exhausive search over the entire design space is prohibited for many globally optimal design problems because of the limitations of computer time and space. For these kinds of problems some approximate or heuristic search strategy is necessary. When designing heuristic search methods for global optimiza tion, both concepts of exploitation and exploration play important roles.

PAGE 56

46 Exploitation, or intensification, means a deep search strategy restricted in a local region while exploration, or diversification, means a wide search strategy applied in the entire region. Too little exploration results in poor locally optimal solutions or the best regions of solution space not being vis ited. Too little exploitation is likely to result in the best regions of solution space being missed. They are two conflicting goals of achieving a heuristic search method. Therefore, a "good" heuristic search is a trade-off design of exploitation and exploration. Hill climbing and random search are two extremes. Hill climbing is a typical example of a search strategy that exploits only in the restricted area, e.g., neighborhood of an "initial" solution. Because of no exploration, the final solution can often be trapped in a region of solution space that is far from the optimal solution. On the other hand, random search is a good example of a search strategy that is concerned with only wide exploration. Because of no exploitation and blindly sampling over every region, it is usually inefficient. Then, a heuristic search algorithm can be designed by combining hill climbing and random search. The heuristic methods we will discuss are examples of this idea. In composite genetic algorithms, exploration and exploitation are simply combined using genetic algorithms and hill climbing method while in tabu search for global optimization of continuous variables they are directly achieved with the help of a set of moves of different step sizes. 3.5 Optimization by Genetic Algorithms A genetic algorithm is a heuristic search strategy based on the process of natural selection and natural evolution. Holland first developed the genetic algorithm and provided its theoretical foundation in his well-known book

PAGE 57

47 (1975)i701. Holland's formulation stems from the human reproduction and is stimulated further by the fact that, under the natural law of 'survival of the fittest', various species evolved so remarkedly well as to adapt to their enviorment. His breakthrough contribution is the central role played by the crossover operator as the underlying discovery mechanism, and mutation as an infrequent operator, used primarily to preserve population diversity. In fact, early researchers in genetic algorithms(Bledsoe, 1961i83i)depended solely on the mutation operator as the genetic search mechanism. However, search by means of mutation uses information from each individual alone, and has been shown more recently (DeJong, 1975i84l) to be inferior to the crossover-mutation combination. The crossover operator remains a primary genetic search operator. Although Holland was not able to prove convergence theorems for ge netic algorithms, he did prove that if the choice of search hyperplanes is viewed as a k-armed bandit problem, genetic algorithms optimally allocate trials to the hyperplane and, in this sense, they are optimal. Vector eval uated genetic algorithm (VEGA) for multiobjective optimization has also been introduced by Scheffer (1984)i851. Genetic algorithms have been applied in many diverse fields, including optimum engineering design problems. For example, recursive adaptive filter is designed by Etter, Hicks, and Cho (1982)i861. Goldberg (1983)l871 gives an example of applications to steady-state and transient optimiza tion of gas pipeline. Optimum VLSI circuit layout has been discussed by Davis and Smith (1985)l881. Keyboard configuration design using genetic algorithm is given by Glover (1985)l891. Goldberg and Samtani (1986)l901 considered the application of genetic algorithms to structural optimization (plane truss). Aircraft landing strut weight optimization is given by Minga

PAGE 58

48 (1986)[911. In order to introduce the composite genetic algorithms for global op timization of continuous variables, some basic concepts and theorems of genetic algorithms first are briefly reviewed. 3.5.1 Adaptive Process The concept of adaptation first comes from biology. In that context, adaptation is defined as any process whereby a structure is progressively modified to give better performance in its environment. Natural evolution is a typical example of an adaptive process. Today, adaptive processes have been widely involved in many other fields: psychology, economics, system control, artificial intelligence, computational mathematics, and sampling. The study of adaptive processes is complex. Frequently, nonadditive interaction makes it impossible to determine the performance of a struc ture from a study of its isolated parts. Moreover, possiblities for improved performance must be achieved through the trade-off design of exploitation and exploration search strategies. A mathematical framework that extracts and generalizes this biological process is presented by Holland (1975)f7 in which two of the most important generalizations are introduced: (1) the concept of a schema as a generalization of an interacting, or coadapted, set of genes; and (2) the generalization of genetic operators, crossing-over, inversion, and mutation. The concept of schema makes it possible to disect and analyze complex "nonlinear" or "epistatic" interactions. The gener alized operators extend the analysis to studies of learning and optimal planning. 3.5.2 Schemata The concept of a schema (or schemata, plural), generalizes the "nonlinear" interaction of adaptive processes. It is also called similarity templates.

PAGE 59

49 Suppose we can describe a given structure by a finite binary string of length 1, and we wish to describe a particular similarity. For example, consider the two strings A and B as follows: A= 101110, B = 111001. We notice that the two strings both have 1's in the first and third position. A description to such similarities is to use a 'wild card', or a 'don't care', symbol, say the asterisk ( ), in all positions where we are disinterested in the particular bit value. For example, the similarity in the first position can be described as: Likewise, the similarity in the third position may be described with the breif notation Combined similarity is described with *' s in all positions but the first and third: Then, these strings with *'s are called the schema of the given structure. It is clear that these schemata, or similarity templates, do not only name the strings A and B. The schema 1 * ** describes a subset containing 25 = 32 strings, each with a 1 in the first position. In general, for a binary string of length 1 each individual in the population belongs to 2L -1 schemata. It is easy to see that not all schemata are created equal. Some are more specific than others. We define the specificity of a schema H (its number of

PAGE 60

50 fixed positions) as its order, denoted by o(H), and the distance between a schema's outermost defining positions its defining length, denoted by d(H). For example, o(l*l***) d(l*l***) 3.5.3 Chromosome 2, 31 = 2. In genetic algorithms, one represents the planning strategies, design structures, or problem solutions as chromosomes. Chromosomes, or geno types, are fixed length strings having a finite number of possible values, or alleles, at each position. Each chromosome plays two roles. First, it provides a presentation of what the organism will become. Secondly, it provides the actual mate rial that can be transformed to yield new genetic material for the new generation. In most literature on genetic algorithms, chromosomes are represented by means of bit strings -lists of O's and l's. Bit strings have been shown to be capable of usefully encoding a wide variety of information, and they have also been shown to be an effective representation mechanism in unexpected fields, for example, function optimization. The properties of bit string representations for genetic algorithms have been extensively studied, and a good deal is known about the genetic operators and parameter values that work well with them. Some different types of representations have also been explored, often in connection with engineering applications of genetic algorithms, such as ordered lists (for bin-packing), embeded lists (for factory scheduling prob lems), variable-element lists (for semiconductor layout), and integer-based

PAGE 61

51 multiple-permutation-group representation (for keyboard configuration). 3.5.4 Evaluation Function When applying genetic algorithms, an evaluation function is defined. The evaluation function which plays the role of the environment in natural evolution, is a procedure that measures the fitness of each genotype. The fitness value assigned to each genotype represents the relative ability of the genotype to survive and produce offspring. In practice, there are many possible ways to define an evaluation func tion. For optimization problems, the objective or cost functions is often used easily to the evaluation. If the value of the evaluation function is given for the genotype i of a population, say c;, the fitness of the genotype is given by Ci f;=-. I;; c; (27) One linear scaling procedure for fitness is discussed further by Goldberg (1989)1751. 3.5.5 Population and Reproduction Plans A population, or gene pool, is a set of a finite number of genotypes that can evolve from one generation to the next through a reproduction plan. A genetic algorithm is composed of an initial population and a reproduction plan. The reproduction plan includes: 1. representing the population of genotypes of a generation by some data structure; 2. selecting successful genotypes to be used in creating the offsprings of the next generation; 3. a set of genetic operators (e.g;, reproduction, crossover, and muta tion) used to create the new offsprings.

PAGE 62

52 Usually, the initial population is selected in a random manner. The size of population is problem-dependent. In practice, it often remains fixed from generation to generation and is typically between 50 and 200 individuals or genotypes. 3.5.6 Genetic Operators For a reproduction plan, there are these genetic operators applied to the population: reproduction, crossover, and mutation. Reproduction changes the population by adding copies of genotypes with above-average fitness or figure of merit. Because the size of the pop ulation often is fixed in a reproduction plan, below-average are discarded in the process. No new genotypes are introduced, but the distribution of the population is changed. This process increases the percentage of geno types of the population that possess the above-average fitness so that the average fitness, of the population rises towards that of the most-fit existing genotype. For an optimization problem, a reproduction operator narrows its application to highly fit individuals or possible solutions. In addition to this "reproduction according to fitness", it is necessary to generate new, untested genotypes and add them to the population. Oth erwise, the population will simply converge on the best one it started with. Crossover is the primary means of generating plausible new genotypes for adding to the population. In a simple implementation of crossover, two genotypes are selected at random from the population. The crossover op erator takes some of the alleles from one of the 'parents' and the remainder from the other, and combines them to produce a complete genotype. In general, two offsprings are generated from a couple of parent in a crossover process. When genotypes are represented in terms of strings, crossover operator takes two randomly selected genotypes as parent, and splits the

PAGE 63

--53 strings at the same randomly determined point, and then creates two new genotypes or offsprings by swapping the tail portions of the strings, see figure 3.1. parent (a) 100 101l0 (b) 111 00101 offsprings (c) 100 00101 (d) 111 10110 Figure 3.1 Crossover operator The function of crossover is to rearrange the structure of genotypes of the population. It does not create any new genotypes with different genetic materials. This is done by a mutation operator. In a typical implementation, the mutation operator provides a chance for any allele to be changed to another randomly chosen value. Since reproduction and crossover only redistribute existing alleles, the mutation operator plays a role of explo-ration in genetic search strategies. The rate of mutation is carefully chosen when it is incorporated into genetic algorithms. 3.5. 7 Implicit Parallelism and Building Blocks Hypothesis Genetic search is mainly based on the following two basic results: Im plicit parallelism and building blocks hypothesis. In a population of size M with string length L, the estimated values of M zL or 0( M3 ) schemata are adjusted every generation during a repro duction plan. This information is used by the genetic algorithm to increase or decrease the number of copies of every schema according to whether it is above or below average fitness. More precisely, if u(t, s) is the average value of the copies of schema sat generation t, and ifn(t, s) is the number

PAGE 64

of those copies in the population, u( t, s) u(t+ l,s) = -_-(1-t)n(t,s), u 54 (28) where u is the average fitness of all strings in the population and E 1s a very small positive number. This evaluation and management of a large number of schemata is accomplished by simply manipulations of only M strings at a time. Holland has given such a phenomenon a special name, implicit parallelism. Implicit parallelism means that even though each generation we perform computational efforts proportional to the size M of the population during genetic algorithms, we get useful processing of something like M3 schemata in parallel with no special memory other than the population itself. In the implementations of genetic algorithms, short, low-order, and highly fit schemata are of special importance. Holland gave also them a special name, building blocks. In the early stages of genetic algorithms, a wide variety of schemata are represented in the gene pool, and the search is primarily a winnowing process (save the good portion and discard the bad portion) of low order schemata. The relatively superior schemata (that survive after crossover and maintain their relative superiority) are represented with exponential frequency, and the crossover operator is able to "glue together" different exemplars of! ow order building blocks (highly fit, low order schemata) to form increasingly fit individuals and direct the search to high order schemata. Such an observation is named further by Holland as the building blocks hypothesis. Schema is the core concept of the building blocks hypothesis. In some sense, schema can be considered to represent the 'direction' of the genetic search. The genetic algorithms implicitly collect statistics about the aver age fitness of competing schemata. Schema fitnesses are estimated by the

PAGE 65

I' 55 average fitness of their exemplars. The selection of copies of individuals to gene pool in direct proportion to their fitness biases the gene pool in the direction of the more favorably estimated scemata. 3.5.8 Schema Theorem The schema theorem is a fundamental theorem of genetic algorithms. It can be stated as follows. Under fitness proportionate reproduction, simple crossover, and muta-tion, the expected number of copies m(H, t) of a schema His bounded by the following expression: m(H, t + l) m(H, t/] ( l)(1(29) where P= and Pc are the mutation and crossover probabilities respectively, f is the average fitness of the population, and the factor fH is the schema average fitness which can be calculated by the following expression: !H = L f,(i) i m(H,t) (30) The schema average fitness f(H) is simply the average of the fitness values of all strings s which currently represent the schema H. Schema theorem says that above average, short, low-order schemata are given exponentially increasing numbers of trials in succesive generations of genetic algorithms. Holland has showed (1975)1701 that this is a near optimal strategy when the allocation process is viewed as a set of parallel, overlapped, multi-armed bandit problems. 3.5.9 Genetic Algorithms The correspondence of natural and genetic algorithms terminology is summerized in figure 3.2.

PAGE 66

NATURAL Chromosome Gene Aile! Locus Genotype Phenotype Epistasis GENETIC ALGORITHMS String Character Feature value String position Structure Alternative solution Nonlinear Figure 3.2 Comparison of natural and genetic algorithms 56 In the literature there are many variations of the basic genetic algo-rithm. All of them consist of three basic components: 1. evaluation of individual fitness; 2. formulation of a population or gene pool; 3. a reproduction plan. The individuals resulting from genetic operators form the next generation's population. The process is iterated until the stop criteria is met. The basic genetic algorithm can be specified as follows. 1. Select a finite, bounded domain for the search and a representation that discretizes the search space. 2. Choose a desired population size M and initialize the starting popu-lation P. 3. Evaluate individuals according to the evaluation function f( ). 4. If stopping criteria is met, stop. Otherwise, probabilitically select in dividuals (according to their fitness dtermined by the function evaluation) to populate a gene pool of size M.

PAGE 67

57 5. So long as the size of the temporatory population TP is less than M, randomly choose a parent from the gene pool. With the probability of its fitness. fill the parent into the temperatory population TP. With probability p0 choose a coparent, cross parents, and fill the resulant offsprings into the temperatory population. 6. With (low) probability Pm, mutate randomly chosen individuals of TP. Set P = TP, and return to step 3. Some notes for this procedure is further addressed by Liepins and Hillard (1989 )1781. It is possible to have multiple copies of an individual in a population and /or gene pool. Historially, one point crossover has been used. More recently, two point crossover has been shown to be superior. Mutation helps assure population diversity. It IS not the pnmary genetic search operator. The stopping criteria is usually given in terms of a threshold in im provement or total number of generations. The selection of parameter values of genetic algorithms has been widely discussed. The most authoritative work is Dejong (1975 )f841. Dejong sug gested a mutation rate of 1/pop-size and a crossover rate of approximately 0.6. 3.5.10 Composite Genetic Algorithm Genetic algorithms are not suited for fine-tuning structures which are very close to optimal solutions. Instead, the strength of genetic algorithms

PAGE 68

58 is quickly locating the high performance regions of vast and complex search space. Once those regions are located, it may be useful to apply local search heuristics, for example, hill-climbing, to the high performance structures evolved by the genetic algorithms. According this idea, composite genetic for global optimization of continuous variables is designed. The composite genetic algorithm consists of a genetic algorithm and a hill climbing method. It can be formulated as follows. 1. An initial population is given. 2. The population is evolved using genetic algorithms with "survival of fittest" reproduction, two-points crossover and mutation. 3. After some generations, a best solution among the current population is chosen as the initial solution for hill climbing method. 4. The hill climbing method is applied to the neighborhood of the cur rent solution. 5. If termination criteria are satisfied, stop; otherwise, go to step 4. The representation of population is the first step of designing a genetic algorithm. Usually, the 0-1 string representation is used, but what it repre sents is part of the art of genetic algorithm design. Secondly, an evaluation function is required. In practice, there are many possible ways to define an evaluation function. For minimization problems, the evaluation function can be easily defined as follows. (31) where Cmaz is the upper bound of the objective function, and J; is the objective value of gene (i) in the gene pool or population. Through the

PAGE 69

t 59 evaluation function, the fitness for reproduction can be given by equation (27). The plan of "survival of fittest" reproduction can be achieved by the following approach. 1. Give initial value of fitness fi (i = 1, ... n), compute the partial sums of fitnesses for all genes in the current population, sum(i)=sum(i-1)+!; (i=1, ... ,n-1), where sum(O) = 0.0, sum( n) = 1.0. 2. Generate a random number on the interval (0, 1). Suppose we have sum(i-1) < r :S sum(i) (0 :S i ::S n). Then the gene i will be chosen as the gene of the next generation in reproduction. 3. If the number of genes in the next generation is equal to n, stop; otherwise, go to step 2. The reliability and efficiency of composite genetic algorithms will be analysed in the next chapter with the help of some common test functions. 3.6 Optimization by Tabu Search Tabu search is a metaheuristic recently developed by Glover(1977[79l, 1986[so]) specifically for combinatorial optimization problems. It stems from the general tenets of intelligent problem-solving. Tabu search is called

PAGE 70

60 a metaheuristic because it can be combined with other heuristic procedures to prevent them from trapping at locally optimal solutions. The functions of the method are: 1. guide any local transform process that employs a set of moves for transforming one (or solution state) into another; 2. provide an evaluation function for measuring the attractiveness of these moves. The form of the guidance provided by tabu search is highly flexible and often motivates the creation of new types of moves and evaluation criteria to take advantage of its ability to be adapted to different problem structures and strategic goals. Although tabu search methods were initially introduced for combinato rial optimization, some attempts to apply them to globally optimal design of continuous variables are made here. 3.6.1 Tabu Move and Tabu List In tabu search, a move is currently not allowed is called a tabu move. All tabu moves form a list, known as a tabu list. In general, the tabu list stores then most recent moves so that these moves will not be immediately repreated. The purpose of the tabu list is to avoid returning to previous solution states. This strategy can help a hill climbing heuristic or other heuristics to escape from a locally optimal solution. The size n of a tabu list depends on the specific problem. The smaller the size, the more exploitation; the larger, the more exploration. In general, a "good" size of tabu list is determined through some trials of the specific problems. The tabu list is updated after each problem iteration with items being

PAGE 71

61 added while other items are deleted from the list. When building the tabu list, several tactics suggested by Glover can be incorporated to improve the efficiency of this method. They are strategic oscillation, aspiration levels, intermediate and long term memory. 3.6.2 Strategic Oscillation Strategic oscillation is a tactic by which a feasible solution is purposely made infeasible by a series of (destructive) moves using tabu list, and then the infeasiblity is resolved by making a series of (constructive) moves. The idea behind it is to induce the exploration of new regions by crossing or moving from certain boundaries which normally affect the progress of tabu search. In general, it addresses those situations where feasible solutions may be confined to narrow regions. Strategic oscillation can be accom plished by using multiple tabu lists. Strategic oscillation provides the following advantages: allowance for less complex moves requiring less effort to evaluate; the ability to approach solutions from different directions, especially in cases where the search is narrowly confined; o discovery of good solutions in perturbed conditions. 3.6.3 Aspiration Levels Another tactic in tabu search is the use of aspiration levels. This allows the tabu status of moves to be overriden if the aspiration level is reached. Let A(s, x) represents an aspiration level dependent on move s and the current solution x. Glover defined A(s, x) to be the smallest value of cost c(x') that could be achieved by reversing a previous move from some x' to some s'(x') such that c(s'(x')) < c(s(x)). (32)

PAGE 72

62 An aspiration level is attained if c(s(x)) < A(s, x). 3.6.4 Intermediate Memory and Long-term Memory Intermediate and long term memory, as a basis of tactics for locally intensifying and globally diversifying the search (concentrating on the old and good regions, and moving to new regions), are important for obtain ing the near-optimal solutions of large-scale combinatorial optimization problems. Glover indicated (1989)[Sl] that intermediate memory and long term memory provide an interplay between "learning" and "unlearning". Intermediate memory involves creating a network of good solutions as a matrix for generating other good solutions. Long-term memory expands or diversifies the search for optimal solutions. 3.6.5 Tabu Search Tabu search can be combined with other heuristics to solve hard optimization problems. A typical example given by Glover is a tabu search based on a hill climbing heuristic. In terms of tabu search, the hill climbing heuristic is: select an initial x E X; X is a set of possible solutions; for the set S(x) of moves that can be applied to x, select some s E S( x) such that for cost function c(x) we have c(s(x)) < c(x); (33) let x = s(x) and return to step 2. Such a hill climbing heuristic will proceed unidirectionally from its starting point to a local optimum. (For minimization, the direction of hill climbing is downward.)

PAGE 73

I ; j 63 The chief limitation of this type of procedure is that the local optimum solution obtained at its stopping point, when no improving moves are possible, may not be a global optimum solution. Tabu search guides such a heuristic to continue searching in the absense of improving moves, and without returning to a previously visted local optimum. A tabu search based on hill climbing heuristic can be generally described as follows. 1. Select an initial point x E X and let x* = x. Set the iteration counter k = 0 and begin with tabu list T empty. 2. If S(x) Tis empty, go to step 4; otherwise, set k = k + 1 and select sk E S(x) -T such that sk = optimize{s(x): s E S(x)T}. 3. Let x = sk(x). If c(x)< c(x*), where x* denotes the solution currently found, let x = x. 4. If stopping criteria are satisfied, stop. Otherwise, update T and return step 2. Tactics (e.g., strategic oscillation, aspiration levels) can be incorporated into the simple tabu search. According to the idea prescribed previously, tabu search method for optimization of continuous variables is now introduced. For optimization of one continuous variable, a tabu search is formulated as follows: 1. The objective function is defined in the interval [a, b]. A set H of steps h; (i=l, 2, ... n) is defined, where h1 = b-a,

PAGE 74

h2 = hl/10.0, h3 = h2/10.0, 64 An initial solution xis given. Begin with tabu list T empty. Set the counter K = 0. 2. A set S of moves is defined as: S ={sIs= x h;r; I h; E H}, where r;'s are the random numbers on [0, 1]. 3. Select the move s(x) E S -T for which we have c(s(x))< c(x), added it to the tabu list T, set K = 0, and go to step 4. If for each moves ES-T we have c(s(x)) 2': c(x) and K is greater than the given large number, stop; otherwise, set K = K + 1, and go to step 2. 4. Let x =: s(x). If STis empty, go to step 5; otherwise, go to step 3. 5. If termination criteria are satisfied, stop; otherwise, update T empty and return step 2. Because random moves are employed, it is unnecessary to include the in verse moves of accepted moves in the tabu list T to prevent the cycles as in Glover's method. For global optimization of several continuous variables, tabu search can be extended as follows.

PAGE 75

1. The objective function is defined in the box P, P ={X I a:<::; x1 :<::; b,a :<::; x2 :<::; b, ... ,a :<::; Xk :<::; b}. A set H of steps hi (i=l, 2, ... n) is defined where h1 = b-a, h2 = hl/10.0, h3 = h2(10.0, 65 An initial solution X is given and begin with tabu list T empty. Set the counter K == 0. 2. A set S of moves is defined as follows where hi; E Hand r;;s are random numbers on [0, 1]. 3. Select the move s(X) ES-T for which we have c(s(X)) < c(X), added it to the tabu list T, set K = 0, and go to step 4. If for each move s E ST we have c(s(x)) 2:: c(x) and K is greater than the given large number, stop; otherwise, go to step 2. 4. Let X =: s(X). If S Tis empty, go to step 5; otherwise, go to step 3

PAGE 76

66 5. If termination criteria are satisfied, stop; otherwise, update T empty and return to step 3. The reliability and efficiency of tabu search methods for globally optimal design will be analysed in the next chapter with the help of some common test functions. It is shown that tabu search methods are easy to use, and outperform the composite genetic algorithms for test problems. Finally, the applications of tabu search methods to some practical design examples are considered in the chapter 5.

PAGE 77

CHAPTER 4 EMPIRICAL STUDIES In order to compare the different methods in the same standard, a set of test functions for global optimization were introduced by Dixon and Szego(1978)f22l, and collected in Torn and Zilinskas's book (1987)f251. Using these test functions, some experiments are designed in this chapter to compare the efficiencies and reliabilities of composite genetic algorithms and tabu search with random search. A natural field of applications of global optimization methods is prob lems of optimal engineering design. In practice, objective functions can be expensive to evaluate. For this reason, the number of evaluations has been chosen as a performance index in the analysis of algorithms. The termina tion of the algorithms is made while the indicated relative deviation (RD) is less than the given E. The relative deviation is defined as follows, RD =(exact value-approximate value)jexact value, where the exact value is not zero in all cases. In the following discussions, the relative deviation of objective function value has been used as the terimination criterion. 4.1 Standard Problems In our experiments, problems of optimization without behavioral constraints are studied. The problems of optimization are distinguished be-

PAGE 78

68 tween one variable and several variables because diffient algorithm structures are involved. In order to simplify the design of computer algorithms, we consider the problems of optimization in the standard interval, or standard subspace. For problems of optimization of one continuous variable, the standard problem is defined as follows. Minimize f(x), where x E [0, 16]. We use the four bits of a string for the integer part of the solution in genetic algorithms, while the other bits are used for the decimal part. For a general problem of optimization, where x E [a, b], it can be changed into the standard form by using the variable transformation: x =a+ (b-a) x y/16, or y = (xa) X 16.0/(b-a), In cases of several variables, we consider the following standard form of optimization without behavioral constraints: minimize f(X), 0 :S x, :S 16, 0 :S x2 :S 16,

PAGE 79

69 For the general problem of optimization, minimize f(X), similar transformations can be applied in order to have the standard form. 4.2 One-dimensional Algorithms Test functions for one-dimensional algorithms are given as follows. J,(x) . lOx = 5 + smx + sm3 + lnx -0.84x, 2.7 :S X :S 7.5, J2(x) . 2x = 2+smx+sm3 3.1 :S X :S 20.4, s fs(x) = 6 + L sin((i + l)x + i), -10 :S X :S 10, i=l !4{x) = 1 + ( x + sinx )e-"2 -10 :":: X :":: 10. They are first transformed into the standard forms. The figures of these standardized functions (fun1, fun2, fun3, and fun4) are shown in figures 4.1 through 4.4. Table 4.1 gives their optimal solutions and function values. The experiments compare random search, composite genetic algorithms,

PAGE 80

70 5 2 1 0 X Figure 4.1 The curve of test function fun!

PAGE 81

69 For the general problem of optimization, minimize f(X), similar transformations can be applied in order to have the standard form. 4.2 One-dimensional Algorithms Test functions for one-dimensional algorithms are given as follows. 5 . lOx l 0 8 + smx + sm-3 + nx 4x, 2.7 :5c X :5c 7.5, 2 . 2x +smx+sm3 3.1 :5c X :5c 20.4, 5 j3(x) 6 + L sin((i + l)x + i), -10 :5c X :5c 10, f<( x) = 1 + ( x + sinx )e-x', -10 So X :S: lQ. They are first transformed into the standard forms. The figures of these standardized functions (funl, fun2, fun3, and fun4) are shown in figures 4.1 through 4.4. Table 4.1 gives their optimal solutions and function values. The experiments compare random search, composite genetic algorithms,

PAGE 82

70 5 -j j 1 4 -j ] -; _J 3 I --:1 >< J -a ;:! "1 ..... \ j 2 I "l -i _J I 1 --=i -j I "1 I l 0 J I 0 5 10 15 20 X Figure 4.1 The curve of test function fun1

PAGE 83

71 4 I I I I I I I I I l >< N 2 I -1 c I I :;l I ' I l I i "i \ I I 0 I 0 5 10 15 20 X Figure 4.2 The curve of test function fun2

PAGE 84

72 10 : i 8 -i l i I I I i I 4 l i ' i 2-J i l i I ' ____j 0 5 10 15 20 X Figure 4.3 The curve of test function fun3

PAGE 85

73 2.0 1.5 0.5 0 5 X Figure 4.4 The curve of test function fun4 I

PAGE 86

74 Functions Optimal Solution Function Value fun1 8.33259142 0.39869259 fun2 12.8917414 0.0940388 fun3 2.0720404 1.0899717 fun4 7.45632242 0.17576058 Table 4.1 Test Functions of One Dimension and Optimal Solutions and tabu search. Random search of the one-dimensional case is formulated as follows. 1. Give an initial solution x0 2. Generate a random number r on the interval [0, 1] and perform a random search, x1 = r x (16.0-0.0). 3. Iff(x1 ) < f(x0), save x1 as x0 and go to step 4; otherwise, go to step 2. 4. If termination criteria are satisfied, stop; otherwise, go to step 2. The computational results for the given accuracy requirement ( E = 10-s) are given in table 4.2. Convergent trajectories of them are shown in figures 4.5 through 4.8.

PAGE 87

75 Functions Methods funl fun2 fun3 fun4 X 8.33522 12.89222 2.07222 7.45622 Genetic RD 3.15 x10 3.71 xlO 5 8.69 X 10 .5 1.36 X 10 ., Algorithms f(x) 0.398696 0.0940390 1.089973 0.1757606 ( = 10-5) RD 9.32 X 10 -o 2.81 X 10 -o 2.10 X 10 -o 3.23 X 10 N 75 81 78 89 X 8.33452 12.89243 2.07203 7.45614 Tabu RD 2.32 X 10 - 5.37x 10 _, 8.69 X 10 I 2.38 x1o-' Search f(x) 0.398694 0.0940392 1.089971 0.1757607 (=10-5) RD 5.05 X 10 -o 4.96 x 1o-ti 5.07 X 10 -o 7.00 X 10 I N 26 59 80 56 X 8.33200 12.89242 12.12497 7.45567 Random RD 7.05 X 10 _, 5.32 X 10 -o 4.85 8.53 X 10 ., Search f(x) 0.398692 0.0940392 1.089973 0.1757617 ( = 10-5) RD 4.99 x10 I 4.89 X 10 -o 1.76x1o-o 6.88 X 10 -o N 1232 7444 4881 25126 Table 4.2 Comparisons of Genetic Algorithms, Tabu Search, and Random Search with the Same Degree of Deviation (One-dimensional Cases) 4.3 Multiple-dimensional Algorithms Test functions for multiple-dimensional algorithms are chosen as fol lows. f1(x) -2.lxi + + x1x2 + 4xi + 1.5 -5 ::; Xi ::; 5 (i=1,2), f2(x) [1 + (x1 + x2 + 1)2(19 -14x1 + 3xi -14x2 + 6x1x2 + x [30 + (2x1 3x2 ) 2(1832x1 + 12xi + 48x2 -36x1x2 + 27x2)], -2 < x < 2 -'( i = 1, 2), fa(x) xi+ x;cosl8x1 cos18x, + 3, -1 ::0 Xi ::0 1 (i = 1,2),

PAGE 88

76 4 I ' -Tabu Search Genetic Algorithms ------2Random Search ----" 0 I ., -I "" --"\ .0:: -----. \ "' ----------\--I 0 -2 r-:;J I r--, '\ ., \ I > :;J -4 r--" -6 I--8 I I J 0 20 40 60 80 100 evaluations Figure 4.5 Convergent trajectories of one-dimensional test funclion funl

PAGE 89

77 4 I I I Tabu Search Genetic Algorithms ------21Random Search -----\ -,_, -;; \ _____ Q ol" I "' ---------------0 -0 -2 I-:;:; ;;: "0 -> -4 I-:;:; \ o; -6 I-. -6 I I I 0 20 40 60 60 !00 evaluations Figure 4.6 Convergent trajectories of one-dimensional test function fun2

PAGE 90

00 c 4 o!:3 -2 !- :: -o 0 :; -4 -8 0 Figure 4.7 I \ ______ \c___ I Tabu Search Genetic Algorithms Random Search -, L. \ \ 1 L---,---1 ' I 20 40 60 80 eval ua tiona Convergent trajectories of one-dimensional test function 78 ---100 fun3

PAGE 91

-;;; u "' I "" 0 "' c .s ;: "' > :;:: 'ii ... 4 2 0 -2 -4 -6 0 20 Tabu Search Genetic Algorithms Random Search evaluations \ I I 79 Figure 4.8 Convergent trajectories of one-dimensional test function fun4

PAGE 92

80 2 2 ''A/dIT cos(xJJi) + 2 (d = 200.0), i=l i=l -100 :::; X; :::; 100 (i=1,2). Table 4.3 gives the optimal solutions and the objective function values of these test functions. Functions Optimal Solution Function Value ffun1 (8.143729, 6.859840) 0.4683717177 ffun2 (8.0, 4.0) 3.0 ffun3 (8.0, 8.0) 1.0 ffun4 (8.0, 8.0) 1.0 Table 4.3 Test Functions and Optimal Solutions Similarly, random search, composite genetic algorithms, and tabu search are also compared. Random search for multiple-dimensional case is formu-!a ted as follows. 1. Give an initial solution X0 2. Generate m random numbers, r1 r2 Tm on the interval [0, 1], and a random search point xl = (xl,x2, ... ,xm), x1 = r1 x (16.0-0.0), X 2 = T2 X (16.0-0.0), Xm = Tm X (16.0-0.0).

PAGE 93

81 3. If f(XJ) < f(X0), save X1 as X0 and go to step 4; otherwise, go to step 4. 4. If termination criteria are satisfied, stop; otherwise, go to step 2. The computational results are presented in table 4.4. Convergent trajec tories of these methods are shown in figures 4.9 through 4.12. Functions -Methods ffun1 ffun2 ffun3 ffun4 x, 8.144367 7.999591 8.000367 8.000201 Genetic x2 6.860147 4.000188 7.998147 7.999603 Algorithms f(x) 0.4683727 3.000004 1.000009 1.0000092 (E=l0-5 ) RD 2.16 X 10 '6 1.53 X 10 =lr 9.08 X 10 '6 9.47 x10'6 N 1853 1532 1829 7606 x, 8.143214 7.998714 7.998234 8.000171 Tabu x2 6.859093 3.999903 7.999158 8.000417 Search f(x) 0.4683734 3.000024 1.000009 1.000009 (E=l0-5 ) RD 3.79x10-6 8.20 X 10-6 9.73 x10-6 9.26 x 10-6 N 669 430 1025 1105 x, 8.142729 8.000714 7.999047 7.999999 Random x2 6.859502 4.000704 7.999708 7.999937 Search f(x) 0.4683734 3.000014 1.000002 1.0000001 (E = lQ-5 ) RD 3.62 X 10-6 4.87 X 10-6 2.52 X 10-6 1.56 x1o7 N 6590640 3676958 11935830 574447707 Table 4.4 Comparisons of Genetic Algorithms, Tabu Search, and Random Search with the Same Degree of Deviation (Multi-dimensional Cases) 4.4 Discussions Through the results of the previous experiments, we have the following conclusions. 1. A composite genetic algorithm can be applied efficiently to various problems of global optimization of continuous variables.

PAGE 94

i I 4 I d :. ---------------0 m c 0 -21-:;J d > -c :5 -4-d -6I Tabu Search Genetic Algorithms Random Search I 82 -2000 0 500 1000 1500 evaluations Figure 4.9 Convergent trajectories of test function ffun 1

PAGE 95

" 4 2 0 :3 -2 ;;: " -4 " -6 0 I---Tabu Search Genetic Algorithms Random Search ------------L-------evaluations Figure 4.10 Convergent trajectories of test function ffun2 83

PAGE 96

';;; u "' I "" .3 m 0 :;:; > -o > :;:; I, 4 or=s--2 >-4 1-6 >-8 0 --' -----I 500 I I Tabu Search Genetic Algorithms Random Search ------I I 1000 1500 evaluations Figure 4.11 Convergent trajectories of test function 84 --'-\ l --2000 ffun3

PAGE 97

d u m I "' 0 4 2 0 -6 0 ,_ --l Tabu Search Genetic Algorithms Random Search -------.,_-----------2000 evaluations Figure 4.12 Convergent trajectories of test !unction ffun4 85

PAGE 98

86 2. The success of the method depends on the size of population and the number of generations. For the test functions of one variable, the size is chosen between 20 and 22, while the number of generations is 4. For the test functions of two variables, the size is chosen between 250 and 580, while the number of generations is between 8 and 20. 3. Tabu search is a powerful approach to various problem of global optimization of continuous variables. It outperforms the composite genetic algorithms and random search for test problems of continuous variables. 4. The efficiency of the tabu search methods depends on the number of elements of the step set H. Both the smaller number and the larger number result in an inefficient search process. For the test problems, a reasonable choice of the number can be found between 8 and 14, with relative derivation 10-5 for both cases of one variable and two variables. 5. Tabu search is easy to use for optimization of continuous variables because it needs fewer parameter preparations than other heuristic methods, such as simulated annealing or composite genetic algorithms.

PAGE 99

CHAPTER 5 APPLICATION EXAMPLES 5.1 3-bar Pyramid The optimal design of 3-bar pyramid is considered here. More discussions and their exact solutions of this example are given in Example 2.2. I. The minimum-weight design subject to a compressive constraint Let P0 = 100 lb and
PAGE 100

88 2 I I -1 -I -01. -;;; 0 m I "" ..s c -.s r--2 :: "' > ::J --4 1-6 I I ' 0 200 400 600 BOO !000 1200 evaluations Figure 5.1 Convergent trajectory of case I using tabu search

PAGE 101

89 II. The minimum weight design subject to a buckling constraint Suppose P0 = 100 lb, E = 5.6 x 106 psi, and k = 0.4. Then, we have the exact solution, X = (0.5, 0.009120694), with the objective function f(X) = 0.010197. Tabu search is applied to this problem, and the results with the accuracy requirement ( E = 10-3 ) are given in table 5.2. The convergent trajectory is shown in figure 5.2. Exact Solution (0.5, 0.009120694) Function Value 0.010197 Best Solution (0.50738, 0.00909) Function Value 0.010199 Table 5.2: The Minimum Weight Design of 3-bar Pyramid Subject to an Element Buckling Constraint (after 881 evaluations with the number of intervals= 9) III. The minimum weight design subject to a displacement constraint Suppose Po = 100 lb and amaz = 5 X 10-5 Then, we have the ex act solution, X = (1, 0.673435), with the objective function value f(X) = 0.9523810. Similarly, tabu search is applied, and the results with the accuracy requirement ( E = 10-3 ) are given in table 5.3. The convergent trajectory is shown in figure 5.3. Exact solution (1.0, 0.673435) Function Value 0.952381 Best Solution (0.98197, 0.67994) Function Value 0.9529597 Table 5.3: The Minimum Weight Design of 3-bar Pyramid Subject to a Displacement Constraint (after 1084 evaluations with the the number of intervals = 9)

PAGE 102

90 2 L I I I I Ol-!: '--, = 0 "' I .. _g 0 :;:; -2 :-= ;;: l v ., v I .:: = v ... -4 -0 200 400 600 800 1000 1200 evaluations Figure 5.2 Convergent trajectory of case II using tabu search

PAGE 103

91 2 I I I I o-" o; 0 "' I .. 0 0 -:;:; -2 "' > :;:; .. -4 10 200 400 600 600 1000 1200 evaluations Figure 5.3 Convergent trajectory of case III using tabu search

PAGE 104

92 5.2 Three-bar Truss The optimal design of a three-bar truss is considered here. More discussions of this example have been given in Example 2.3. For the initial solu tion (1.0, 1.0), the FORTRAN program of the minimum design of three-bar truss using the tabu search method was run 5000 cycles and 50000 cycles in double precision on the Sequent B21 computer. The results are given in table 5.4. The convergent trajectory is shown in figure 5.4. Best Solution ( 5000) (0.80199, 0.371839) Function Value 2.640211 Best Solution ( 50000) (0. 788076, 0.40995) Function Value 2.638966 Table 5.4: The Minimum Weight Design of Three-bar Truss 5.3 Coil Springs The optimal design of coil springs is considred here. More discussions of this example were given in Example 2.1. For the initial feasible design (0.1, 1.2, 5), the FORTRAN program of the minimum weight design of coil springs using the tabu search method was run 5000 cycles and 50000 cycles in double precision on the Sequent B21 computer. The results are given in table 5.5, and compared with Arora's results from the sequential quadratic programming (SQP) methods with small constraint violations. It can be seen that the designs obtained from different methods differ slightly. The convergent trajectory of tabu search method is shown in figure 5.5.

PAGE 105

93 4.0 I I 3.5 u 3.0 u .a 0 2.5 i2. 0 0 1000 2000 3000 4000 5000 evaluatiOns Figure 5.4 Convergent trajectory of optimal design of three-bar truss

PAGE 106

94 0.!0 T I I 0.08 --0 0.06 -:;:; 0 " > :;:; 0 ;:; 0.04 0 0.02-l 0.00 0 1000 2000 3000 4000 5000 evaluations Figure 5.5 Convergent trajectory of optimal design of coil springs

PAGE 107

95 Methods d D N f(.) CPU( sec.) SQP 0.051699 0.35695 11.289 0.0126787 TS (5000) 0.052587 0.37853 10.1346 0.0127026 13.244 TS (50000) 0.052590 0.37860 10.13038 0.0127019 181.07 Table 5.5 The Minimum Weight Design of Coil Springs 5.4 Thin-walled Columns The optimal design of thin-walled columns is considered here. More discussions of this problem have been addressed in Example 2.4. For the optimal design of a Z-section using tabu search, the FORTRAN program was run in double precision on the Sequent B21 computer. For the following set of input data, P = 1000/b, L = 25in, uy = 25,000psi, E = 107psi, v = 0.3, and the initial solution X = (1.5, 0.05, 1.5), the computational results with 5000 cycles and 50000 cyclyes are compared with Iyengar's results from SUMT methods in table 5.6. Convergent trajectory of the algorithm is shown in figure 5.6. Methods xl x2 xa f(X) CPU(sec.) SUMT 0.90683 0.04021 1.5897 0.13089 TS (5000) 0.91597 0.04037 1.5605 0.13090 3.162 TS (50000) 0.91059 0.04037 1.5605 0.13090 43.176 Table 5.6 Comparisons of Optimal Design Using SUMT and Tabu Search For the optimal design of channel section using tabu search, the similar FORTRAN program is run in double precision on the Sequent B21 com puter with the same input data, where the initial solution is chosen as X = (2.0, 0.1, 1.0). The computational results with 5000 cycles and 50000 cycles are compared with Iyengar's result from SUMT methods in table 5.7. Convergent trajectory of the tabu search is shown in figure 5.7.

PAGE 108

I I' ;/ 0 :;:; Q 0.4 ....... 0.2 > :;:; Q ;; 0 96 I I T 0.0 0 1000 2000 3000 4000 5000 evaluations Figure 5.6 Convergent trajectory of optimal design of Z-section

PAGE 109

97 0.8 I 0.6 -" 0 :;J Q '-; 0.4 -I -\._ Q ;:;' 0 0.2 f---evaluations Figure 5.7 Convergent trajectory of optimal design of channels i I

PAGE 110

98 Methods "'1 "'2 "'3 f(X) CPU(sec.) SUMT 0.7013 0.04680 1.6452 0.11966 TS (5000) 0.8385 0.04523 0.3313 0.08843 4.615 TS (50000) 0.8384 0.04524 0.3299 0.08838 55.893 Table 5.7 Comparisons of Optimal Design Using SUMT and Tabu Search It can be observed that the optimal design of channel section using tabu search saved 26.14% over the weight of using the Iyengar's SUMT method.

PAGE 111

. REFERENCES [1] Haug, E.J., and J.S. Arora, Applied Optimal Design: Mechanical and Structural Systems, Wiley-Interscience, New York, 1979. [2] Iyengar, N.G.R., and S.K. Gupta, Programming Methods in Structural Design, John Wiley & Sons, New York, 1980. [3] Kirsch, U., Optimum Structural Design: Concepts, Methods, and Ap plications, McGraw Hill Book Co., New York, 1981. [4] Vanderplaats, G.N., Numerical Optimization Techniques for Engineer ing Design, McGraw-Hill, New York, 1984. [5] Haftka, R.T ., and M.P. Kamat, Elements of Structural Optimization, Martinus Nijhoff, Amsterdam, 1985. [6] Arora, J.S., Introduction to Optimu Design, McGraw-Hill Book Co., New York, 1989. [7] Gallagher, R.H., and O.C. Zienkiewicz (Eds.), Optimum Structural De sign: Theory and Applications, Wiley, New York, 1973. [8] Morris, A.J., (Ed.), Foundations of Structural Optimization: A Unified Approach, Wiley, New York, 1982. [9] Atrek, E., R.H. Gallagher, K.M. Ragsdell, and O.C. Zienkiewicz (Eds.), New Directions in Optimum Structural Design, Wiley, New York, 1984. [10] Schmit, L.A., "Structural synthesis: Its genesis and developments," AIAA Journal, 19(10), pp. 1249-1263 (1981). [ll] Lev, O.E. (Ed.), Structural Optimization: Recent Developments and Applications, ASCE, New York, 1981. [12] Ashley, H., "On making things the best-aeronautical uses of optimiza tion," Journal of Aircraft, 19, pp. 5-28 (1982).

PAGE 112

I 100 [13] Haftka, R. T ., and B. Prasad, "Optimum structural design with plate blending elements: A survey," AIAA Journal, 19(4) (1981). [14] Vanderplaats, G.N., "Structural optimization: Past, present and future," AIAA Journal, 20, pp. 992-1000 (1982). [15] Belegundu, A.D., and J.S. Arora," A study of mathematical program ming methods for structural optimization, part I: Theory; part II: numer ical aspects," International Journal of Numerical Methods in Engineering, 21, pp. 1583-1623 (1985). [16] Levy, R., and O.E. Lev, "Recent developments in structural optimiza tion," Journal of Structural Engineering, ASCE, 113(9), pp. 1939-1962, (1987). [17] Jacoby, S.L.S., J.S. Kowalik, and J.T. Pizzo, Iterative Methods for Nonlinear Optimization Problems, Prentice-Hall, Inc., Englewood Cliffs, NJ, 1972. [18] Fletcher, R., Practical Methods of Optimization, Vol. I, John Wi ley&Sons, New York, 1980. [19] Fletcher, R., Practical Methods of Optimization, Vol. II, John Wi ley&Sons, New York, 1981. [20] Schwefel, H.-P., Numerical Optimization of Computer Models, John Wiley & Sons, New York, 1986. [21] Dixon, L.C.W., and G.P. Szego, Toward Global Optimization, North Holland Publishing Company, New York, 1975. [22] Dixon, L.C.W., and G.P. Szego, Toward Global Optimization 2, North Holland Publishing Company, New York, 1978. [23] Pardalos, P.M., and J.B. Rosen, Constrained Global Optimization: Al gorithms and Applications, SpringerVerlag, New York, 1987.

PAGE 113

101 [24] Hong, C.S., and Z. Quan, Integral Global Optimization, Springer Verlag, New York, 1988. [25] Torn, A., and A. Zilinskas, Global Optimization, Springer-Verlag, New York, 1989. [26] Pardalos, P.M., and G. Schnitger, "Checking local optimality in constrainted quadratic programming is NP-hard," Operations Research Let ters, 7(1) (1988). [27] Murty, K.G., and S.N. Kabadi, "Some NP-complete problems in quadratic and nonlinear programming," Mathematical Programming, 39, pp. 117-129 ( 1987). [28] Horst, R., "Deterministic methods in constrainted global optimization: some recent advances and new fields of application," Naval Research Lo gistics, 37, pp. 433-471 (1990). [29] Moore, R.E., Interval Analysis, Prentice-Hall, Englewood Cliffs, NJ, 1966. [30] Ratschek, H., and J. Rokne, New Computer Methods for Global Opti mization, Ellis Horwood Limited, Chichester, 1988. [31] Arora, J.S., and P.B. Thaneder, "Computational methods for optimum design of large complex systems," Computational Mechanics, 1, pp. 221-242 (1986). [32] Johnson, R.C., Optimum Design of Mechanical Elements, John Wiley & Sons, Inc., New York, 1980. [33] Sechen, C., VLSI Placement and Global Routing Using Simulated An nealing, Kluwer Acamedic Publishers, Dordrecht, Netherlands, 1988. [34] Wong, D.F., H.W. Leong, and C.L. Liu, Simulated Annealing for VLSI Design, Kluwer Acamedic Publishers, Dordrecht, Netherlands, 1988.

PAGE 114

102 [35] Groch, A., L.M. Vidigal, and S.W. Director, "A new global optimization method for electronic circuit design," IEEE Transations on Circuits and Systems, CAS-32, no. 2, pp. 160-169, (1985). [36] Stuckman, B.E., "A global search methods for optimizing nonlinear systems," IEEE transations on Systems, Man, and Cybernetics, 18(6), pp. 965-977 (1988). [37] Heydemann, M., F. Durbin, and J. Montaron, "A tentative approach towards global optimization in the DC case," IEEE Proceedings of Inter national Symposium on Circuits and Systems, pp. 847-850 (1981). [38] Haftka, R.T., and R.V. Grandhi, "Structural shape optimization: A survey," Computational Methods in Applied Mechanical Engineering, 57, pp. 91-106 (1986). [39] Gero, J.S. (Ed.), Design Optimization, Academic Press, Orlando, FL, 1985. [40] Bennett, J.A., and M.E. Botikin (Eds.), The Optimum Shape: Auto mated Structural Design, Plenum Press, New York, 1986. [41] Silva, B.M.E.De, "The application of nonlinear programming to the automated minimum weight design of rotating discs," in: Fletcher (Ed.), Optimization, Academic Press, pp.115-150 (1969). [42] Tillman, F.A., C.L. Hwang, and W. Kuo, Optimization of Systems Reliability, Marcel Dekker, Inc., 1980. [43] Pareto, V., Cours d'Economie Politique, F. Rouge, Lausanne, Switzerland, 1896. [44] Osyczka, A., Multicriterion Optimization in Engineering, Martinus Ni jhoff, Amsterdam, 1985. [45] Steuerm, R.E., Multiple Criteria Computation, and Application, John Wiley & Sons, New York, 1985.

PAGE 115

103 [46] Edelbaum, T.N., "The theory of minma and maxima," in: Leitmann (Ed.), Optimization Techniques with Applications to Aerospace Systems, Academic Press, New York, pp. 1-32 (1962). [47] Robbins, T.C., A Penalty-Function Approach to Radiation-Pattern Synthesis, Doctoral Dissertation, Southern Methodist University (1969). [48] Bellmore, M., H.J. Greenberg, and J.J. Jarvis, "Generalized penalty function concepts in mathematical optimization," Operations Research, 18 pp. 229-252 (1970). [49] Schmit, L.A., "Structural design by systematic synthesis," Proceedings of 2nd Conference on Electronic Computation, ASCE, New York, pp. 105122, (1960). [50] Arora, J.S., "Computational design optimization: A review and future directions," to appear in International Journal of Structural Safety, Jan uary 1989., (1989). [51] Colville, A.R., "A comparative study on nonlinear programming codes," IBM New York Scientific Center Report No. 320-291,9, (1968). [52] Eason, E.D, and R.G. Fenton, "A comparison of numerical optimiza tion methods for engineering design," Journal of Engineering for Industry, Transations of ASME Series B, 95(2), pp. 196-200, (1974). [53] Sandgren, E., The Utility of Nonlinear Programming Algorithms, Doc toral Dissertation, Purdue University (1977). [54] Arora, J .S., and C.H. Tseng, "Interactive design optimization," Engi neering Optimization, 13. pp. 173-188 (1988). [55] Venkayya, V.B., N.S. Khot, and V.S. Reddy, "Optimization of struc tures based on the study of strain energy distribution," AFFDL-TR-68150, (1968).

PAGE 116

104 [56] Venkayya, V.B., N.S. Khot, and V.S. Reddy," Application of optimality criteria approaches on automated design of large practical structures," AGARD Conference Proceedings No. 123, Chapter 3 (1973). [57} Arora, J.S., "Analysis of optimality criteria and gradient projection methods for optimal structural design," Journal of Computer Methods Ap plied to Mechanicasl Engineering, 23, pp. 185-213 (1980). [58} Fleury, C., "Reconciliation of mathematical programming and opti mality criteria methods," In: Morris, A.J. (Ed.), Foundations of Structural Optimization: A Unified Approach, New York, Wiley, 1982. [59] Belegundu, A.D., and J.S. Arora," A study of mathematical program ming methods for structural optimization, part I: Theory; part II: Numer ical aspects," International Journal of Numerical Methods in Engineering, 20, pp. 803-816, (1985). [60} Sobieszczanski-Sobieski, J., "A linear decomposition method for large optimization problems: Bluesprint for development," NASA TM 83248 (1982). [61} Sobieszczanski-Sobieski, J., B.B. James, and A.R. Dovi, "Structural optimization by multilevel decomposition," AIAA Journal, 23(11), pp. 1775-1782, (1985). [62} Gelman, G., and J. Mandel, "On the multilevel iterative methods for optimization," Mathematical Programming, 48, pp. 1-17, (1990). [63} Hajela, P., "Genetic search-an approach to the nonconvex optimiza tion problem," AIAA Journal, 28(7), pp. 1205-1210, (1990). [64} Simon, H.A., "Search and reasoning in problem solving," m: Pearl, J., and Gaschnig, (Eds.), Search and Heuristics, North-Holland Publishing Company, New York, 1983.

PAGE 117

I 105 [65] Polya, G., How to Solve It, Princeton University Press, Princeton, NJ, 1945. [66] Groner, R.M., and W.F. Bischof, Methods of Heuristics, Lawrence Erl baum Associates, Inc., London, 1983. [67] Pearl, J ., Heuristics: Intelligent Search Strategies for Computer Prob lem Solving, Addison-Wesley Publishing Company, Reading, MA 1984. [68] Weiner, P., "Heuristics," Networks, 5, pp.lOl-103 (1975). [69] Lin, S., "Computer Solutions of the Travelling-Salesman Problem," Bell Systems Technology Journal, Vol. 44, pp. 2245-2269 (1965). [70] Holland, J.H., Adaptation in Natural and Artificial Systems, The Uni versity of Michigan Press, Ann Arbor, 1975. [71] Laarhoven, Van P.J.M., and E.H.L. Aarts, Simulated Annealing: The ory and Applications, Kluwer Acamedic Publications, Dordrecht, Nether lands, 1988. [72] Kirkpatrick, K., C.D. Gelatt, Jr., and M.P. Vecchi, "Optimization by Simulated Annealing," Science, 220( 4598), pp. 671-680 (1983). [73] Cerny, V., "Thermodynamical approach to the travelling salesman problem: an efficient simulated algorithm," Journal of Optimization The ory and Its Applications, 45(1) (1985). [74] Johnson, M.E., (Ed.), Simulated Annealing {SA)& Optimization: Mod ern Algorithms with VLSI, Optimal Design&Missile Defense Applications, American Journal of Mathematical and Management Sciences, 8(3&4) 1988. [75] Goldberg, D.E., Genetic Algorithms in Search, Optimization, and Ma chine Learning, Addison-Wesley, New York, 1989. [76] Grefenstette, J.J. (Ed.), Proceedings of Conference on Genetic Algo rithms and Their Applications, Lawrence Erlbaum, Hillsdale, NJ, 1985.

PAGE 118

106 [77] Grefenstette, J.J. (Ed.), Genetic Algorithms and Their Applications: Proceedings of Second International Conference on Genetic Algorithms, Lawrence Erlbaum Associates, 1987. [78] Liepins, G.E., and M.R. Hillard, "Genetic algorithms and applica tions," Annals of Operations Research, 21, pp.31-58 (1989). [79] Glover, F., "Heuristics for integer programming using surrogate con straints," Decision Sciences, 8, pp. 156-166 (1977). [80] Glover, F., "Future paths for integer programming and links to arti ficial intelligence," Computers and Operations Research, 13, pp. 533-549 (1986). [81] Glover, F., "Tabu search, Part 1 and 2," ORSA Journal on Computing (1989 ). [82] Glover, F., and H.J. Greenberg, "New approaches for heuristic search: A bilateral linkage with artificial intelligence," European Journal of Oper ational Research, 39, pp. 119-130 (1989). [83] Bledsoe, W.W., "The use of biological concepts in the analytical study of systems," Paper presented at the ORSATIMS National Meeting, San Franciso, CA. (1961). [84] Dejong, K.A., An Analysis of the Behavior of a Class of Genetic Adap tive Systems, Doctoral Disseration, University of Michigan (1975). [85] Scheffer, J.D., "Vector evaluated genetic algorithm for multiobjective optimization," in: Grefenstette, J.J. (Ed.), Proceedings of International Conference on Genetic Algorithms and Their Applications (1984). [86] Etter, D.M, M.J. Hicks, and K.H. Cho, "Recursive adaptive filter design using an adaptive genetic algorithm," Proceedings of IEEE Inter national Conference on Acoustics, Speech and Signal Processing, 2, pp. 635-638 (1982).

PAGE 119

107 [87] Goldberg, D.E., Computer-aided Gas Pipeline Operation Using Ge netic Algorithms and Rule Learning, Doctoral Dissertation, University of Michigan (1983). [88] Davis, 1., and D. Smith," Adaptive design for layout synthesis," Texas Instruments Internal Report, Dallas: Texas Instrument, (1985). [89] Glover, F., "Solving a complex keyboard configuration problem through generalized adaptive search," in: Davis, L. (Ed.), Genetic Algorithms and Simulated Annealing, pp. 12-31, London, Pitman (1985). [90] Goldberg, D.E., and M.P. Samtani, "Engineering optimization via ge netic algorithms," Proceedings of the Ninth Conference on Electronic Com putation, pp. 471-482 (1986). [91] Minga, A.K., "Genetic algorithms in aerospace design," Paper pre sented the AIAA Southeastern Regional Student Conference, Huntsville, AL, (1986). [92] Wilde, D.J., Globally Optimal Design, John Wiley & Sons, New York, 1978.