Citation
Chaotic annealing

Material Information

Title:
Chaotic annealing: the foundation, development, and performance of a hybrid optimization algorithm
Creator:
Rinaldi, Dino Carmen
Publication Date:
Language:
English
Physical Description:
vi, 94 leaves : illustrations ; 29 cm

Thesis/Dissertation Information

Degree:
Master's (Master of Science)
Degree Grantor:
University of Colorado Denver
Degree Divisions:
Department of Electrical Engineering, CU Denver
Degree Disciplines:
Electrical engineering

Subjects

Subjects / Keywords:
Simulated annealing (Mathematics) ( lcsh )
Combinatorial optimization ( lcsh )
Combinatorial optimization ( fast )
Simulated annealing (Mathematics) ( fast )
Genre:
bibliography ( marcgt )
theses ( marcgt )
non-fiction ( marcgt )

Notes

Bibliography:
Includes bibliographical references.
General Note:
Submitted in partial fulfillment of the requirements for the degree, Master of Science, Department of Electrical Engineering.
Statement of Responsibility:
by Dino Carmen Rinaldi.

Record Information

Source Institution:
University of Colorado Denver
Holding Location:
Auraria Library
Rights Management:
All applicable rights reserved by the source institution and holding location.
Resource Identifier:
25360496 ( OCLC )
ocm25360496
Classification:
LD1190.E54 1991m .R56 ( lcc )

Full Text
CHAOTIC ANNEALING:
THE FOUNDATION, DEVELOPMENT, AND PERFORMANCE
OF A HYBRID OPTIMIZATION ALGORITHM
by
DINO CARMEN RINALDI
S.B., Massachusetts Institute of Technology, 1981
M.B.A., University of West Florida, 1983
A thesis submitted to the
Faculty of the Graduate School of the
University of Colorado in partial fulfillment
of the requirements for the degree of
Master of Science
Department of Electrical Engineering


This thesis for the Master of Science degree by
Dino Carmen Rinaldi
has been approved for the
Department of
Electrical Engineering
by
[signature and date illegible]


Rinaldi, Dino Carmen (M.S., Electrical Engineering)
Chaotic Annealing: The Foundation, Development, and Performance of a Hybrid
Optimization Algorithm
Thesis directed by Assistant Professor Jay A. Rothman
Determining the optimal solutions to combinatorial optimization problems is a
critical and nontrivial challenge associated with many branches of science and
engineering. Over the past decade, an algorithm known as simulated annealing has
generated widespread interest because, by accepting periodic transitions to less-
desirable system states, it attempts to parallel nature's mechanism for converging to
perfect crystalline ("optimal") structures. In this manner, unlike other traditional
algorithms, it will, in the limit, avoid getting trapped in locally optimal configurations
and will thereby converge on globally optimal solutions.
Simulated annealing suffers from a significant liability, however: its
neighborhood transition mechanism for searching the solution space and converging on
the optimal solution is far too inefficient for many real-world applications. Chaotic
annealing provides an alternate implementation of the simulated annealing algorithm,
one which is not limited to simple and time-consuming neighborhood transitions. The
inclusion of chaos provides a pseudo-random function which maps jumps from the
current system state to potential new states, thereby allowing the algorithm to tunnel
through high energy (cost) barriers which simulated annealing normally would not be
able to overcome. Chaotic annealing thus appears to provide improved optimization
performance over that of traditional simulated annealing.
This abstract accurately represents the content of the candidate's thesis. I recommend
its publication.
Signed


Dedications
To Boots, a remarkable woman who first introduced me to Chaos, for your patience,
persistence, and love. HMLC eternal!!
To Matka and Babbo, with much love. Thank you for everything. You are the best.
Thank you, sir. I hug you tightly and kiss you!!
And to Brodoc, with everlasting admiration!!
Without each of you none of this would have been possible.


Table of Contents
Chapter
1. Introduction..............................................1
2. Technical Basis...........................................3
Simulated Annealing...................................3
Combinatorial Optimization.......................3
Statistical Physics..............................6
The Model for Simulated Annealing................8
Chaos................................................13
Time Evolution and the Definition of Chaos......13
Characteristics of Chaos........................15
Chaotic Equations: The Logistic Function........17
Notes................................................20
3. Chaotic Annealing........................................22
Concepts and Algorithm Development...................22
Summary of Experimental Results......................28
Conclusions and Recommendations......................36
Notes................................................44


Bibliography.....................................................45
Appendices.......................................................47
A. Logistic Function Number Generation......................48
B. Distribution of the Chaotic Mapping Function.............49
C. Chaotic and Simulated Annealing Program Listings.........50
D. Tabulation of Selected Experimental Results..............57


Chapter 1: Introduction
Virtually every function can be viewed as the free energy of some system and
thus studying and imitating how nature reaches a minimum during the annealing
process should yield optimization algorithms...The result is the tool called
"simulated annealing," which, since its inception in 1982, has become a
remarkably powerful tool in global optimization.
-- Laarhoven and Aarts, p.v.
Chaos introduces an interface between determinism and randomness...Truly
deterministic description of chaotic dynamics requires infinite precision in the
choice of initial conditions and, thus, is a scientific chimera. Therefore, chaos
introduces a fundamental uncertainty which is more general than Heisenberg's
uncertainty in quantum mechanics: it concerns also macroscopic dynamics and
restricts the available information for every variable whereas quantum
mechanical uncertainty operates on canonically conjugate pairs of variables
only.
-- Schuster, p. 1.
The genesis of this paper on "chaotic annealing" derives from a variety of
sources. My interest in two rather distinct mathematical topics -- simulated annealing
and chaos theory -- served as a primary impetus. This was supplemented by my work
in the field of mission planning, which is concerned with the optimal placement of an
extremely large number of varied "tasks" onto multiple, conflicting, and often
oversubscribed "resources." In mission planning -- and certainly other fields -- there
seems to be an ever-continuing search for more powerful, faster, more generally
applicable, or more global algorithms which are capable of solving large and complex
(often NP-complete) optimization problems. Simulated annealing employs one such
algorithm, and it is this algorithm which, when combined with elements of chaos,
serves as the central topic of this paper.


The purpose of this work, then, is first to provide an understanding of the
generalized simulated annealing algorithm (including its physical analog) and of the
relevant theoretical and practical aspects of chaos. Once this mathematical and
conceptual basis has been established, the concept of chaotic annealing (or, the
application of a chaotic function as the pseudo-random mapping engine for the
simulated annealing process) may be introduced and developed. The rationale for this
attempt to develop such a hybrid optimization algorithm is rooted in the fact that,
however effective simulated annealing may be at converging to an optimal solution, it is
nevertheless an extremely slow process. Practical or operational planning systems
often do not have the luxury of time which is necessary to accommodate the successive
neighborhood search process employed by simulated annealing. Chaotic annealing
represents an attempt to mitigate this limitation of an otherwise exceptional algorithm.
The "Technical Basis" portion of this paper (Chapter 2) provides the
developmental discussions of the relevant aspects of both simulated annealing and
chaos theory. The "Chaotic Annealing" chapter (Chapter 3) then combines these two
fundamental concepts, and includes experimental results and associated comments.
Appendices containing more detailed data and information follow the bibliography.


Chapter 2: Technical Basis
Simulated Annealing.
Combinatorial Optimization. Research into the subject of combinatorial
optimization tends to concentrate on the development of efficient techniques for finding
minimum or maximum values of a function of very many independent variables. This
function, usually called the cost or objective function, represents a quantitative measure
of the "goodness" of some complex system, the cost function naturally being dependent
upon the detailed configuration of the many parts of the system. Garey and Johnson1
have formally defined combinatorial optimization problems in the following manner:
A combinatorial optimization problem π is either a minimization problem or a
maximization problem and consists of the following three parts:
(1) a set D_π of instances;
(2) for each instance I ∈ D_π, a finite set S_π(I) of candidate solutions;
and
(3) a function m_π that assigns to each instance I ∈ D_π and each
candidate solution c ∈ S_π(I) a positive rational number m_π(I,c),
called the solution value for c.
If π is a minimization [maximization] problem, then an optimal solution for an
instance I ∈ D_π is a candidate solution c* ∈ S_π(I) such that, for all c ∈ S_π(I),
m_π(I,c*) ≤ m_π(I,c) [m_π(I,c*) ≥ m_π(I,c)].
Unfortunately, interesting and realistic combinatorial optimization problems
typically belong to a class of problems known as NP-complete (nondeterministic
polynomial time complete), in which all known exact methods for determining an
optimal solution require a computing effort that increases exponentially with the
problem's size.2 In practice, then, many large-scale combinatorial optimization
problems cannot be solved to optimality because the search for an optimum requires
prohibitive amounts of computation time. It therefore becomes necessary to use some
form of approximation algorithm or heuristic for which there can usually be no
guarantee that the solution found by the algorithm is optimal, but for which polynomial
bounds on the computation time can be given. Algorithms such as branch-and-bound,
A*, and iterative-deepening A* fit this category.3,4
There are many examples of NP-complete combinatorial optimization problems,
particularly in the areas of scheduling and design (such as optimal component
packaging, chip placement, wire routing, delay reduction, and the "knapsack
problem"). But perhaps the most celebrated example is the traveling salesman problem:
given a list of N cities and a means of calculating the cost of traveling between any two
cities, one must plan the salesman's route, which must pass through each city only
once and finally return to the starting point, minimizing the total cost. By Garey and
Johnson's definition, the set of all permutations of the list of cities constitutes the range
of candidate solutions S_π(I), and an optimal solution of the problem corresponds to the
permutation which reflects the minimum cost route.
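To make the candidate-solution and cost-function machinery concrete, the following is a minimal Python sketch (the city coordinates, the Euclidean-distance cost, and all names are illustrative assumptions, not drawn from the thesis):

import math

def tour_cost(cities, tour):
    # Total cost of visiting the cities in 'tour' order and returning
    # to the start; here cost is taken to be Euclidean distance.
    total = 0.0
    for k in range(len(tour)):
        x1, y1 = cities[tour[k]]
        x2, y2 = cities[tour[(k + 1) % len(tour)]]   # wrap back to start
        total += math.hypot(x2 - x1, y2 - y1)
    return total

# Each permutation of range(len(cities)) is one candidate solution;
# an optimal solution is a permutation of minimum tour_cost.
cities = [(0, 0), (3, 0), (3, 4), (0, 4)]
print(tour_cost(cities, [0, 1, 2, 3]))   # 14.0 for this rectangle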
Laarhoven and Aarts5 have formalized such problems as a variable pair (R,C),
where R is the finite or countably infinite set of configurations and C is a cost
function, C: R → ℝ, which assigns a real number to each configuration. C is defined
such that the lower its value, the better the corresponding configuration. The problem
then is to find a configuration for which C takes on its minimum possible value, i.e., an
optimal configuration i_0 satisfying

    C_opt = C(i_0) = min_{i ∈ R} C(i)

where C_opt is the optimum (minimal) cost.
Since the nature of the problem with which we are concerned precludes the use
of an exhaustive search algorithm to find the globally optimum configuration (or
solution), three alternative strategies can be employed: "divide and conquer," iterative
improvement, and heuristic filtering. In divide and conquer, the problem is divided
into subproblems of manageable size, and the subproblems are solved and then
recombined into a solution for the original problem. Of course, there are the attendant
difficulties associated with properly subdividing the original problem and of
recombining the subsolutions to obtain the final solution. Neither of these are
necessarily trivial or straightforward, and the result is usually a less-than-optimal
solution.
Iterative improvement algorithms begin with the system to be optimized in a
known configuration (how this initial configuration is achieved is not important at this
point). A standard rearrangement operation or sequence of iterations is then
applied to each element of the system in turn, until a rearranged configuration which
improves the cost function is discovered. The rearranged, lower-cost configuration
then becomes the new working configuration of the system, and the process is
continued until no further improvements can be found. Note that the rearrangement
operation is only a "neighborhood" search in that it defines a neighborhood R_i for each
configuration i, consisting of all configurations that can be reached from i in one
transition.
A third optimization strategy (which might be considered a subset of iterative
improvement) involves reducing the problem search space with some form of heuristic
filtering and then invoking a modified branch-and-bound search. It is the filtering and
resultant problem reduction which are of primary importance to this strategy although
the precise filtering method is highly problem dependent. This hybrid scheme is
employed in a variety of both operational and prototypic large-scale mission planning
systems.
Normally, the iterative process only accepts steps (or rearrangements) which
lead "downhill" to a lower net value of the cost function. The process therefore can
possibly get stuck in a local, rather than a global, optimum, and the local minimum
obtained is dependent upon the initial configuration of the system. In an attempt to
avoid this possibility, the iterative process is carried out several times, each from a
different random initial configuration. The best result is then saved and implemented.
Simulated annealing incorporates many aspects of iterative improvement, but
avoids the disadvantages inherent to traditional iterative techniques by adopting a
randomization mechanism: in a limited way it accepts transitions (or rearrangements)
which correspond to an increase in the cost function. There is thus a close analogy to
statistical physics -- specifically, the annealing of solids -- which is the direction this
paper shall now turn. Incidentally, simulated annealing is also known by such names
as probabilistic hill-climbing, statistical cooling, and stochastic relaxation.6,7
Statistical Physics.8,9 Statistical physics concerns itself with methods for
analyzing aggregate properties of large numbers of atoms to be found in samples of
gaseous, liquid, or solid matter. Because the number of atoms is on the order of 10^23
per cubic centimeter, only the most probable behavior of the system in thermal
equilibrium at any given temperature is observed in experiments. This can be
characterized by the average and small fluctuations about the average behavior of the
system, when the average is taken over the ensemble of identical systems.
By assuming that the mechanisms of a physical many-particle system are
compatible with a statistical ensemble and by accepting ergodicity (i.e., that the time
average of a mechanical quantity of the system under macroscopic equilibrium equals
the corresponding ensemble average), then many useful results may be derived. For
example, at thermal equilibrium the probability that the system is in a macroscopic state
i with energy Ei is given by the Boltzmann distribution:
    P[E = E_i] = Z(T)^-1 exp(-E_i / k_b T)

where T = system temperature
      k_b = Boltzmann constant
      Z(T) = partition function (normalization factor)
           = Σ_i exp(-E_i / k_b T).
From this equation note that as the temperature decreases, the distribution
concentrates on states with the lowest energy until, when the temperature approaches
zero, only the minimum energy states have a probability of occurrence.
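As a quick numerical illustration of this concentration effect, consider a Python sketch with an arbitrary three-level system and k_b absorbed into the temperature (both assumptions made here for brevity):

import math

def boltzmann(energies, T):
    # P[E_i] = exp(-E_i / T) / Z(T), with k_b folded into T.
    weights = [math.exp(-E / T) for E in energies]
    Z = sum(weights)                      # partition function
    return [w / Z for w in weights]

levels = [0.0, 1.0, 2.0]                  # arbitrary energy levels
for T in (10.0, 1.0, 0.1):
    print(T, [round(p, 4) for p in boltzmann(levels, T)])

# As T falls, the probability mass concentrates on the E = 0 ground state.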
In reality, however, low temperature is not a sufficient condition for finding
globally minimum energy states of matter. Most substances, if heated sufficiently,
assume a liquid phase in which the component particles assume a randomly varying
configuration over a wide region of configuration space. When cooled, the substance
freezes into a solid phase in which its configuration is constrained to a very small
region about a stable point at which the energy has a local minimum. The frozen
configuration may be either amorphous (with a highly irregular, jumbled packing),
crystalline (with a highly periodic structure), or something between these extremes.
The key to achieving the lowest-energy low-temperature state of a material is
careful annealing: first melting the substance, then lowering the temperature slowly,
spending a long time at each successive temperature in the vicinity of the freezing point.
If the cooling is too rapid (if the substance is not permitted to reach thermal equilibrium
at each temperature), defects can be "frozen" into the solid, and metastable amorphous
structures will occur rather than low-energy crystalline lattice structures. But if cooling
is sufficiently gradual, nature (which presumably does not employ an iterative
improvement method) does not allow the crystal formation process to get mired in a
local energy minimum -- careful annealing, by whatever physical process, results in a
globally optimal state.


Metropolis10 has simulated the evolution to thermal equilibrium of a solid for a
fixed temperature T by an algorithm in which controlled probabilistic uphill steps are
incorporated into the search for an optimal (lowest energy) crystalline solid state. In
each step of the algorithm, an atom is given a small random displacement and the
resulting change, ΔE, in the energy of the system is computed. If ΔE < 0, the
displacement is accepted, and the configuration with the displaced atom is used as the
starting point of the next step. The case of ΔE > 0 is treated probabilistically: the
probability that the configuration is accepted is

    P[ΔE] = exp(-ΔE / k_b T).

Random numbers distributed uniformly in the interval (0,1) are selected and
compared with P[ΔE]. If the selected random number is less than P[ΔE], the new
higher-energy configuration is retained; if not, the original configuration is used as the
foundation for the next iteration. By repeating this basic step many times, the thermal
motion of atoms in a heat bath at temperature T is simulated. The system will
eventually settle into thermal equilibrium (that is, after a large number of perturbations
the probability distribution of the states approaches the Boltzmann distribution) and the
temperature can then be lowered in the next incremental step of freezing the substance.
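The acceptance rule at the heart of this procedure is compact enough to state directly in code; a sketch (the function name is mine, and k_b is again folded into T):

import math
import random

def metropolis_accept(delta_E, T):
    # Accept a perturbation outright if it lowers the energy;
    # otherwise accept it with probability exp(-delta_E / T).
    if delta_E < 0:
        return True
    return random.random() < math.exp(-delta_E / T)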
The Model for Simulated Annealing. With the brief discussion that preceded, it
is now possible to consider the application of simulated annealing to practical
combinatorial optimization problems. A qualitative understanding of the algorithm
should now be beginning to emerge. Just as in statistical physics where "nature"
allows limited random atomic reconfigurations to higher energy levels as a solid is
being cooled (in order to avoid becoming trapped in amorphic locally-optimal states),
the simulated annealing algorithm intends to evade local minima and a dependence on
initial conditions by accepting probabilistic transitions corresponding to an increase in
the cost function.
An analog to statistical physics' Boltzmann distribution is defined as
    P[configuration = i] = q_i(t) = Q(t)^-1 exp(-C(i) / t)
with Q(t) being the new normalization factor. Using the cost function in place of
energy and defining configurations by a set of parameters {x_i}, it is possible to
generate a population of configurations of a given optimization problem at some
effective temperature t. "Temperature" is now simply a control parameter with the same
units as the cost function.
The simulated annealing process consists of first "melting" the system to be
optimized at a high effective temperature (again, this is the parameter t), then
progressively lowering the temperature by slow stages until the system "freezes" and
no further changes occur. As with traditional annealing, at each temperature the
algorithm must be allowed to proceed long enough for the system to reach a steady state
-- a process wherein given a configuration i, another configuration j can be obtained by
choosing at random an element from the neighborhood of i. If C equals the cost
function (this corresponds to energy) and t equals the control parameter (temperature)
and
    ΔC_ij = C(j) - C(i)
then
    P[new configuration = j] = 1,                 if ΔC_ij ≤ 0
                               exp(-ΔC_ij / t),   if ΔC_ij > 0.


[Note: An alternative acceptance probability function, the so-called one-dimensional
Cauchy distribution where P = t / (t^2 + ΔC_ij^2), is presented by Lo and Bavarian11].
In all cases, the annealing process must begin with an initial temperature which
is sufficiently high to ensure acceptance of the first few generated states. The sequence
of control parameters and the number of rearrangements of the {x_i} attempted in order
to reach equilibrium at each control parameter setting can be considered an annealing
schedule. Three possible schedules are defined by the following equations:
    t = t_0 / (1 + log(p)) + t_min
    t = a^p t_0 + t_min
    t = t_0 / (1 + p) + t_min

where p is the number of iterations so far, t_min is the minimum required temperature,
and a is a constant between 0 and 1.12
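Each schedule can be written as a function of the iteration count p; a sketch in Python (the parameter values t_0 = 100, t_min = 0.01, and a = 0.95 are illustrative assumptions):

import math

t0, t_min, a = 100.0, 0.01, 0.95           # illustrative values

def logarithmic(p):
    return t0 / (1 + math.log(p)) + t_min  # valid for p >= 1

def geometric(p):
    return a ** p * t0 + t_min

def inverse_linear(p):
    return t0 / (1 + p) + t_min

for p in (1, 10, 100):
    print(p, logarithmic(p), geometric(p), inverse_linear(p))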
The annealing process is terminated for some small predetermined value of t =
t_min, and the final "frozen" configuration is accepted as the solution to the original
problem. Note that simulated annealing differs from iterative improvement in that the
procedure need not get stuck since transitions out of a local optimum are always
possible at nonzero "temperature."
There appear to be several subtle variations in the practical implementation of
the simulated annealing algorithm,13,14,15 but the general process is best summarized
succinctly by Lo and Bavarian:
Step 1: Choose an initial state randomly and assign an initial temperature.
Step 2: Generate a new state over sample size at temperature t.
Step 3: Calculate the new energy state.
Step 4: Compare the difference of the energy of the new state and the old
state. If the new state energy is less than the previous one, then
accept the new state, else accept new state only if it satisfies a certain
probability.
Step 5: Decrease the temperature.
Step 6: If the energy (or temperature) is less than a certain value then stop,
else go to Step 2.16
When, in Step 4, the new energy state is greater than that which preceded, the
following procedure is employed: random numbers distributed uniformly in the
interval (0,1) are selected and compared with P[new configuration = j] = exp(-ΔC_ij / t); if
the selected random number is less than the probability, the new state is accepted. Note
again that the iterative state-generation process of Steps 2-4 must be given a sufficient
(currently unspecified) amount of time to proceed -- that is, "thermal" equilibrium must
be achieved.
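Putting Steps 1 through 6 together, the loop can be sketched as follows (the problem-specific pieces -- random_state, neighbor, and energy -- are stubs the reader must supply, and the geometric schedule and its parameters are assumptions carried over from above):

import math
import random

def simulated_annealing(random_state, neighbor, energy,
                        t0=100.0, t_min=0.01, a=0.95):
    state = random_state()                  # Step 1: random initial state
    E = energy(state)
    p = 0
    while True:
        t = a ** p * t0 + t_min             # Step 5: temperature schedule
        if t <= 2 * t_min:                  # Step 6: stop near t_min
            return state, E
        candidate = neighbor(state)         # Step 2: generate a new state
        E_new = energy(candidate)           # Step 3: compute its energy
        dE = E_new - E                      # Step 4: compare energies
        if dE < 0 or random.random() < math.exp(-dE / t):
            state, E = candidate, E_new     # accept downhill moves always,
        p += 1                              # uphill moves probabilistically

Note that equilibrium at each temperature is not explicitly enforced here; a fuller implementation would hold t fixed for some number of transitions before cooling.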
Given the fact that simulated annealing is, in simple terms, a process which
attempts to transform via a single transition a given configuration i into one of its
neighbors (a member of R_i), it is best described in terms of Markov processes -- since
the future (the new configuration) is dependent only upon the present (the current
configuration) and is independent of the past. From such a perspective, Laarhoven and
Aarts17 provide the theoretical underpinnings for simulated annealing by modeling the
annealing algorithm in the following way: given (1) that a Markov chain is described
by the set of conditional probabilities P_ij(k-1,k) for each pair of outcomes (i,j) [where
P_ij(k-1,k) is the probability that the outcome of the kth trial is j, given that the outcome
at the (k-1)th trial is i], and (2) that a_i(k) equals the probability of outcome i at the kth
trial, then

    a_i(k) = Σ_ℓ a_ℓ(k-1) P_ℓi(k-1,k),    k = 1, 2, 3, ...

where ℓ ranges over all possible outcomes. If X(k) = the outcome of the kth trial, then

    P_ij(k-1,k) = P[X(k) = j | X(k-1) = i]

and

    a_i(k) = P[X(k) = i].
In simulated annealing, the conditional probability Pij(k-l,k) represents the probability
that the kth transition is from configuration i to j, and X(k) is the configuration which
results after k transitions.
The important point here is to derive a proof that simulated annealing will, in
reality, produce a global minimum; that is, if after a number of transitions (k) the
following relation holds:

    P[X(k) ∈ R_opt] = 1

where R_opt is the set of globally minimum configurations.
A distinction can be made according to whether the Markov chain is
homogeneous or inhomogeneous (that is determined by the conditional probability's
dependence on k), and Laarhoven and Aarts18 devote over twenty pages of text to
discussions of asymptotic convergence results for both the homogeneous and
inhomogeneous cases. With a deft combination of handwaving and rigor they show
that (for instance, in the homogeneous case) convergence can be proven by deriving
    lim_{t→0} ( lim_{k→∞} P[X(k) ∈ R_opt] ) = 1
and then proceed to ensure the existence of the stationary distribution defined as the
vector q whose ith component is given by

    q_i = lim_{k→∞} P[X(k) = i | X(0) = j]
(the convergence proof for the homogeneous simulated annealing case depends on the
existence of q).
With the discussion which preceded, it is now possible to implement and test a
standard simulated annealing optimization program (in order to solve a particular
problem, three items must still be defined: a set of configurations, a generation
mechanism for transitions, and a cost function). But for the purposes of this paper
chaotic annealing there remains an entirely different topic to be introduced and
discussed: chaos. Therefore, for the remainder of this chapter the topic of annealing
(simulated or otherwise) shall be temporarily set aside. Chapter 3 will resume the
discussion with the novelty of chaos interposed.
Chaos.
Time Evolution and the Definition of Chaos. There are many disciplines in
science and mathematics which are concerned with the modeling, prediction, analysis,
and manipulation of the time evolution of both physical and theoretical dynamic
systems. From whatever derivation or for whatever purpose, this time evolution can
generally be characterized in a variety of ways: as a purely random process (e.g., coin
tossing), as an entirely deterministic process (e.g., orbital dynamics), as a deterministic
process which is subject to random fluctuation (e.g., Brownian motion), or as
seemingly random behavior in deterministic systems which do not have external
stochastic sources.
It is the latter category of time-evolutionary processes which interests those who
study chaotic phenomena. That is, if apparently random (pseudo-random) behavior is
observed in any system without the application of external stochastic forces, and if
system output is sensitive to initial conditions while certain global characteristics are
simultaneously independent of those same conditions, then chaos may indeed be
present. The existence of chaos is being recognized in an increasing number and
variety of systems, but the precise nature of the phenomenon is apparently still not well
understood. The literature is replete with attempted definitions (some more successful
than others) of chaos. For example:
From Parker and Chua:
From a practical point of view chaos can be defined as ... bounded steady-state
behavior that is not an equilibrium point, not periodic, and not quasi-periodic....
The limit set for chaotic behavior is not a simple geometrical object
like a circle or a torus, but is related to fractals and Cantor sets.19
From Ruelle:
If an infinitesimal change δx(0) is made to [an] initial condition, there will be at
time t a corresponding change δx(t). We say that we have sensitive dependence
on initial conditions if δx(t) grows exponentially with t:

    |δx(t)| ~ |δx(0)| e^(λt),

where λ > 0 (λ is called a Liapunov exponent).
It is not hard to produce examples of sensitive dependence on initial conditions
where x(0) is either an unstable fixed point or is on an unstable periodic orbit.
This fits well with the commonplace observation that small causes may have
large effects. What is not so obvious is that in many cases, for all (or almost
all) initial conditions x(0), and almost all δx(0), we have exponential growth of
δx(t). This is what is now called chaos. Chaos is thus the prevalence of
sensitive dependence on initial conditions, whatever the initial condition is.20
From Schuster:
[Chaos] refers to the somewhat counterintuitive mixing behavior that is possible
when everywhere locally parallel hair lines (trajectories) are bent into a recurrent
bundle in 3-space. A taffy puller placed onto a synchronously rotating lazy
susan provides the simplest example.


When looked at from a somewhat greater distance, chaos can be said to reflect a
mathematical discovery...that bijections across dimensions are possible,
although only at the price of discontinuity. The stepwise weaving in time of
such a discontinuous, yet lower-dimensional, representation of an originally
higher-dimensional set (cross section) is precisely what is being observed in
chaos. A deterministic trajectory keeps shuttling back and forth busily until, in
the limit, any two originally arbitrarily close points of a transversal square arrive
at completely different places along a transversal line, in the simplest case.21
Although at this time there does not appear to be any generally accepted,
rigorous, or necessarily clear definition of chaos, a considerable body of literature
exists which explores not only its definition, but also its implication, existence,
application, and nature. Investigators and theoreticians are evidently having a field day.
This paper, however, has a specific purpose: the practical application of chaotic
phenomena to the optimization process. It will therefore avoid discussions of the
historical development of chaos theory from Lorenz to the present, or any analysis or
attempted description of such concepts as attractors, homoclinic bifurcations, period
doubling, fractals, noodle maps, hyperbolic toral automorphisms, information
dimension, snap-back repellors, or Douady's rabbit. While certainly interesting and
fascinating, it is too easy to get lost in such arcane topics, especially since they are not
fundamentally germane to this discussion.
What are critical in this context are those characteristics of chaos which can be
clearly understood and exploited for the practical purpose of applying them to the
problem of combinatorial optimization. Therefore, whatever the true or precise
definition of chaos may be, this paper will now turn to the recognized attributes of
chaos primarily its apparent "randomness" which is driven by its sensitive
dependence on initial conditions.
Characteristics of Chaos. In discussing the "turbulent" properties of a
numerically computed solution to Lorenz's meteorological modeling equations,
Sparrow lists four simple properties which, to varying degrees, are generally
descriptive of most chaotic systems (by plotting the time-dependent solution to the
equations in 3-space, a looping trajectory, or orbit, soon becomes evident and it is
this figure, now known as the Lorenz Attractor, to which the following description
refers).
1. The trajectory... is not periodic.
2. The figure does not appear to show a transient phenomenon. However long
we continue the numerical integration the trajectory continues to wind around
and around, first on one side, then on the other, without ever settling down to
either periodic or stationary behavior.
3. The general form of the figure does not depend at all on our choice of initial
conditions ... or on our choice of an integrating routine. Anything from a first-
order Euler method with sufficiently small step size to a sophisticated multi-
order integrating technique will produce a similar picture.
4. The details of the figure depend crucially on both factors
mentioned ... above. The exact sequence of loops which the trajectory makes is
extremely sensitive to both changes in initial conditions and changes in the
integrating routine. As a consequence of this, it is not possible to predict the
details of how the trajectory will develop over anything other than a very short
time interval.22
As will be seen in the following chapter, the concept and development of
chaotic annealing depend entirely upon the latter property, known as sensitive
dependence on initial conditions. Devaney states that "... a chaotic map possesses three
ingredients: unpredictability, indecomposability, and an element of regularity. A
chaotic system is unpredictable because of [its] sensitive dependence on initial
conditions."23 Unpredictability is equated with randomness, and it is this behavior
which chaotic annealing exploits. [Note that one source states that it is the fractal nature
(infinite layering) of chaos which causes sensitive dependence on initial conditions to
occur.24]
Simply put, sensitive dependence on initial conditions means that tiny
differences in input can quickly become overwhelming differences in output. Given
precisely the same starting point, a chaotic system will evolve in exactly the same
manner each time it is allowed to proceed -- but given a slightly different starting point
(due to approximation, error, measurement inaccuracy, noise, etc.), a chaotic system
will evolve to a dramatically -- some say exponentially -- divergent result. Reality and
the limitations of technology ensure that this is invariably the case. For systems and
studies such as electronic circuits, chemical kinetics, national economic growth cycles,
fibrillation in a beating heart, population dynamics, or the stability of engineering
structures, this effect can prove catastrophic.25,26,27 In the context of this paper,
however, sensitive dependence on initial conditions is the single most important
characteristic of chaos -- one to be exploited rather than avoided.
For the moment it should suffice to say that chaotic annealing requires a source
which injects a new measure of randomness into the traditional simulated annealing
process. Chaos effectively provides this source as a direct result of sensitive
dependence on initial conditions because nearby trajectories quickly and continually
spread apart in a chaotic system until, for all practical purposes, they are uncorrelated.
The cumulative effect of [chaos] ... is to rapidly obliterate knowledge of the state
of the dynamical system if the knowledge is initially less than perfect. Suppose
for the sake of argument that population sizes are not restricted to whole
numbers, but any values in a continuum. Consider an initial population size in
binary representation
    101101000101????...
In this example, the first 12 bits are known, with the remainder uncertain. In
the second generation, we can expect the population will be accurate to only 11
bits. After 12 generations, the population size will depend purely on the initial
uncertainty! Since the uncertain bits could have any values, the population after
12 generations is indeed as random as a coin toss.28
Chaotic Equations: The Logistic Function.29,30 From the classic Lorenz
equations to certain instances of Newton's method for finding the roots of a polynomial
there are a wide variety of equations (and their associated systems) which can display
unpredictable and bizarre -- a.k.a. chaotic -- behavior. Since the purpose of this paper
is to exploit chaotic phenomena (and not merely to investigate and note chaotic behavior
in various dynamic systems), it is clearly important to identify a reliable "source" of
chaos. In chaotic annealing, chaos will be employed as a randomizing mechanism
which alters the traditional simulated annealing process; to that end chaos must be made
to manifest itself whenever the chaotic annealing process requires it to do so. The key
is to select an appropriate function and then, in a controlled manner, to apply the result
of successive iterations of that function:
    x_0, f(x_0), f(f(x_0)), f(f(f(x_0))), ...
An "appropriate" function is defined as one whose iterative time evolution neither
converges nor settles into a periodic or quasi-periodic cycle, and which presents no
discernible output pattern. Depending upon the choice of x_0, that time evolution will
also develop in dramatically different ways. This, again, is chaos.
An excellent derivation of a chaotic equation is provided by population biology,
which is concerned with the long-term development of the population of selected
species. Accounting for such factors as fertility, ecosystem capacity, predators, food
availability, and other environmental conditions, models can be developed which
attempt to predict or describe population fluctuations. For example, the difference
equation
    P_(n+1) = k P_n (L - P_n)
provides a simple model for successive generations of a species (population = P) which
are subject to a carrying limit L (k is a proportionality constant which can be used to
reflect birth rate). This equation clearly defines a set of deterministic rules which
specify the next future state (population), given exact knowledge of the current state.
Thus, by iterating the equation, known as the logistic function, a prediction of any
number of future populations may be made. What is not so clear is that, depending
upon the values selected for the function's parameters, pathological time evolution
(chaos) may arise. [The qualitative property which is most likely to lead to such erratic
behavior is dissipation. In biological systems dissipation arises from many factors,
including overpopulation and predation; in physical systems factors such as friction,
diffusion, and viscosity are dissipative.31]
Appendix A details the time evolution of a variety of specific logistic and other
functions. For instance, the logistic function f(x) = 4x(1 - x) produces dramatically
different sequences of iterates depending upon the initial selection of x (this is a case of
sensitive dependence on initial conditions). But iterating the function f(x) = 3.839x (1 -
x) converges to a periodic sequence of three numbers no matter what the initial input
may be (0 < x < 1 in both cases). The first function, f(x) = 4x(1 - x), is a clear and
common example of an equation through which chaos manifests itself, and will thus be
the function used to develop and test the concept of chaotic annealing in Chapter 3.
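A few lines of Python make this contrast visible (a sketch; the two starting values are arbitrary):

def logistic(x):
    return 4.0 * x * (1.0 - x)     # chaotic on 0 < x < 1

x, y = 0.56, 0.56000001            # nearly identical starting points
for step in range(1, 41):
    x, y = logistic(x), logistic(y)
    if step % 10 == 0:
        print(step, round(x, 6), round(y, 6))

# After a few dozen iterations the two trajectories are, for all practical
# purposes, uncorrelated; iterating 3.839*x*(1 - x) instead would settle
# into the same 3-cycle from either starting point.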


Notes
1. Michael R. Garey and David S. Johnson. Computers and Intractability: A Guide
to the Theory of NP-Completeness. W.H. Freeman and Company, New York,
1979, p. 123.
2. Ibid., pp. 1-14.
3. Udi Manber. Introduction to Algorithms: A Creative Approach. Addison-Wesley
Publishing Company, Reading, MA, 1989, pp. 358-363.
4. Nils Nilsson. Principles of Artificial Intelligence. Tioga Publishing Company,
Palo Alto, CA, 1980, pp. 76-84.
5. P.J.M. van Laarhoven and E.H.L. Aarts. Simulated Annealing: Theory and
Applications. D. Reidel Publishing Company, Dordrecht, 1987, p. 2.
6. Ibid., pp. 5 and 14.
7. Fure-Ching Jeng and John W. Woods. "Simulated Annealing in Compound
Gaussian Random Fields," IEEE Transactions on Information Theory, Vol. 36,
No. 1, January 1990, p. 94.
8. Laarhoven and Aarts, op. cit., pp. 7-9.
9. August Witt. "Lecture Notes: Introduction to Solid State Chemistry (3.091)."
Massachusetts Institute of Technology, Cambridge, MA, Fall 1977.
10. Laarhoven and Aarts, op. cit., p. 4.
11. Zhen-Ping Lo and B. Bavarian. "Optimization of Job Scheduling on Parallel
Machines by Simulated Annealing," Department of Electrical and Computer
Engineering, University of California at Irvine, (Undated), p. 4.
12. Ibid., p. 4.
13. Ibid.
14. Laarhoven and Aarts, op. cit.
15. Jeng and Woods, op. cit.
16. Lo and Bavarian, op. cit., pp. 3-4.
17. Laarhoven and Aarts, op. cit., pp. 12-13.
18. Ibid., pp. 13-38.
19. Thomas S. Parker and Leon O. Chua. "Chaos: A Tutorial for Engineers,"
Proceedings of the IEEE, Vol. 75, No. 8, August 1987, p. 987.
20. D. Ruelle. "Deterministic Chaos: The Science and the Fiction," Proceedings of
the Royal Society of London, Mathematical and Physical Sciences, Vol. 427, No.
1873, 8 February 1990, p. 242.


21. P. Schuster, (ed). "Stochastic Phenomena and Chaotic Behavior in Complex
Systems," Proceedings of the Fourth Meeting of the UNESCO Working Group
on Systems Analysis, Flattnitz, Kärnten, Austria, 6-10 June 1983. Springer-
Verlag, New York, 1984, p. 30.
22. Colin Sparrow. The Lorenz Equations: Bifurcations, Chaos, and Strange
Attractors. Springer-Verlag, New York, 1982, p. 3.
23. Robert Devaney. An Introduction to Chaotic Dynamical Systems. Addison-
Wesley Publishing Company, Redwood City, CA, 1987, p. 50.
24. Allan McRobie and Michael Thompson. "Chaos, Catastrophes, and
Engineering," New Scientist, Vol. 126, No. 1720, 9 June 1990, p. 45.
25. Leon O. Chua and Rabinder N. Madan. "Sights and Sounds of Chaos," IEEE
Circuits and Devices Magazine, January 1988, pp. 3-13.
26. McRobie and Thompson, op. cit., pp. 41-46.
27. H. Bruce Stewart. "The Geometry of Chaos," Brookhaven Lecture Series,
Number 209. Brookhaven National Laboratory, Upton, NY. 14 November
1984, p. 1.
28. Ibid., p. 3.
29. Devaney, op. cit., pp. 2-7, 31-39, 50-51.
30. Stewart, op. cit., pp. 4-7.
31. Ibid., p. 4.


Chapter 3: Chaotic Annealing
Concepts and Algorithm Development.
Simulated annealing is an optimization process which, through a succession of
neighborhood searches, seeks to settle into an optimum system configuration. In the
limit (as time or the number of transitions tend to infinity) this process will encounter
every energy state and will produce an optimal solution to even extremely complex
combinatorial optimization problems. But as in most other interesting and important
instances in science and engineering, time tends to infinity rather slowly -- and
simulated annealing, which must allow a system to reach "thermal" equilibrium at each
progressively lower control "temperature," is consequently not what would be
considered a fast algorithm.* In fact, in the summary chapter of their book on
simulated annealing, Laarhoven and Aarts recognize this severe limitation of the
algorithm:
.. .the simulated annealing algorithm is a generally applicable, flexible, robust
and easy-to-implement algorithm that is able to obtain near-optimal solutions for
a wide range of application problems. However, [... a considerable price has to
be paid in terms of long computation time: CPU times of more than 84 hours
on a VAX 11/780 computer are reported for large placement problems ...] and
in a number of cases valuable tailored algorithms are available that can be
executed in far less computation time. For those problems where no tailored
algorithms are available, we consider simulated annealing to be a powerful
optimization tool.1
* An additional significant point is that if the simulated annealing algorithm is allowed to proceed for
enough time or through enough state transitions, then, in the limit, essentially all potential solution
configurations will be sampled. But in that case the process has devolved into an exhaustive search
process, and therefore begs the question: "Why bother with annealing?"

Practical scientific and engineering problems typically cannot afford to wait
even 84 hours (let alone until infinity finally arrives) before a solution is presented. In
fact, in many cases the problems with which engineers must deal cannot be modeled
rigorously, so any solution (derived by whatever optimization or heuristic method) will
at best be only an approximation to an optimal condition. Therefore, iteration between
human users and the computer is generally necessary. Rapid, or at least reasonable,
algorithm processing times are thus highly valued (and extremely rare if the algorithm is
simultaneously very effective).
In general, systems to be optimized can be characterized as having multiple
locally-stable equilibria. Simulated annealing represents a significant advance beyond
pure iterative improvement algorithms because it accepts periodic random perturbations
out of these local minima. Through time, the iterative simulated annealing process thus
migrates through the defined solution space and slowly focuses on the optimal solution.
But herein lies the essential limitation of simulated annealing, and the reason why it is
such a costly algorithm: neighborhood search. By definition, the simulated annealing
process begins with the system to be optimized in an arbitrary configuration or energy
state. If the universe of solutions -- all possible configurations of the system -- can be
represented by a matrix, then the probability of beginning the optimization process at or
near the optimal solution becomes vanishingly small for large (i.e., realistic) problems.
But simulated annealing proceeds only by neighborhood search, and therefore at each
stage of the annealing schedule it must, with unit steps, slowly make its way toward the
elusive global minimum.
Recognizing the inherent power of simulated annealing, the question, then, is
whether the algorithm can be modified to make it proceed more expeditiously. It is at
this point that the value of chaos becomes evident. Chapter 2 stressed the pseudo-
random attribute of chaos, which is derived from the fact that arbitrarily small changes
in initial condition result in arbitrarily large and unpredictable changes in a chaotic
system's trajectory (or output). Stated differently, in a chaotic system any initial
uncertainty leads to long-term unpredictability. In the case of chaotic annealing, that
initial uncertainty is the energy state corresponding to the current configuration of the
system to be optimized. With current energy as the input to a chaotic equation, the
output will be an entirely unpredictable (random) number -- a number which can be
used as a pointer to another possible system configuration, one which is not necessarily
in the neighborhood of the current configuration.
Consider, for instance, a problem whose solution space consists of n = 30
possible configurations. Each of these configurations is represented by a unique
position in a matrix of size n (the positions are numbered 0 to 29), and each has an
associated energy state. Assume that the current state is represented by matrix point 15.
Given that point, simulated annealing conducts its energy comparison and transition
algorithm only with respect to those points which are contiguous to point 15 (see
Figure 1). Significantly, chaotic annealing is not restricted to any particular
neighborhood, since the algorithm is free to jump from the current configuration to any
position in the solution space (see Figure 2).
By applying this concept to the technical basis which was provided in Chapter
2, a chaotic annealing algorithm may now be outlined:
Step 1: Begin the optimization process with the system in an
arbitrarily selected initial configuration (state).
Step 2: Calculate the energy of the given state.
Step 3: Chaotically identify a new point in the solution space and
calculate the energy of the new state.
Step 4: Determine ΔE = E_new - E_old.
Step 5: If ΔE < 0 then accept the new state, else accept the new state
only if it satisfies a certain probability.
Step 6: Iterate this process through progressively lower temperatures
until a pre-designated minimum is reached.
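Expressed in code, this outline is the simulated annealing loop of Chapter 2 with only the state-generation step changed; a sketch (chaotic_jump, itself sketched after the mapping example below, replaces the neighborhood move, and the schedule parameters remain illustrative assumptions):

import math
import random

def chaotic_annealing(initial_state, chaotic_jump, energy,
                      t0=100.0, t_min=0.01, a=0.95):
    state = initial_state                   # Step 1
    E = energy(state)                       # Step 2
    p = 0
    while True:
        t = a ** p * t0 + t_min             # Step 6: cooling schedule
        if t <= 2 * t_min:
            return state, E
        candidate = chaotic_jump(state)     # Step 3: jump anywhere in space
        E_new = energy(candidate)
        dE = E_new - E                      # Step 4
        if dE < 0 or random.random() < math.exp(-dE / t):
            state, E = candidate, E_new     # Step 5: acceptance rule
        p += 1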
[Figure: a 6 x 6 grid of matrix positions 0-29; arrows mark transitions from point 15 to its neighbors.]

Figure 1: Simulated Annealing. Point 15 represents the
current system configuration at time t_x. The arrows
indicate possible transition paths to the next state (t_x+1).
Transitions occur only between neighbors.

[Figure: a 6 x 6 grid of matrix positions 0-29.]

Figure 2: Chaotic Annealing. Point 15 represents the
current system configuration at time t_x. At t_x+1 the
system may find itself at any point in the solution space.


Note that this procedure is identical to that of the traditional simulated annealing
algorithm, except for Step 3. In this step, instead of looking at a randomly selected
neighboring point, chaotic annealing may consider any possible point in the solution
space. This mapping is achieved with the logistic function (the chaotic source) in the
following manner:
1. Normalize the energy of the current state to a number 0 < x < 1, and
input that value into the logistic function f(x) = 4x(1 - x). Iterate
until the function becomes chaotic.
2. Calculate mod n of the (renormalized) chaotic output (again assume
that the solution space contains n possibilities -- for example, there
are n permutations in the traveling salesman problem, only one of
which is the optimal permutation). This result will map to a unique
new system state in the solution matrix.
For example, let there exist fifteen possible states (n = 15) of a particular system. The
current state is at matrix point 6, and the (normalized) energy associated with that state
is 0.56 (see Figure 3). After submitting x = 0.56 to the logistic function and iterating
fifteen times, the chaotic output equals 0.0884. [Note: Fifteen iterations were selected
arbitrarily only for the purpose of this example.] The renormalized output is 884, and
since 884 mod 15 = 14, matrix point 14 is therefore the next state for which an energy
(E_new) must be calculated. With
this pseudo-random mapping mechanism, chaotic annealing is thus free to
(probabilistically) consider any possible configuration in the problem's solution space.
Recalling the physical analogy for the simulated annealing algorithm (i.e., the cooling
and eventual settling of a solid into its "optimal" low-energy crystalline lattice
structure), perhaps it is not unreasonable to suggest that the extended analog to chaotic
annealing is quantum tunneling, in which atomic particles effectively burrow through
high energy barriers to achieve a new state.
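A sketch of this mapping in Python follows. The 10^4 scale factor used to renormalize the output is inferred from the worked example (0.0884 renormalizes to 884, and 884 mod 15 = 14); exact iterate values depend on floating-point precision, so this sketch will not reproduce the thesis run bit for bit:

def logistic(x):
    return 4.0 * x * (1.0 - x)

def chaotic_map_to_state(energy_norm, n, iterations=15):
    # Map a normalized energy 0 < x < 1 to a matrix position 0..n-1:
    # iterate the logistic function, renormalize the output to an
    # integer (scale factor assumed), and take mod n.
    x = energy_norm
    for _ in range(iterations):
        x = logistic(x)
    return int(x * 10_000) % n

new_point = chaotic_map_to_state(0.56, n=15)   # a position in 0..14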
[Figure: a grid of matrix positions 0-14; an arrow marks a jump from point 6 to point 14.]

Figure 3: Tunneling Through the Solution Space. In
this particular instance, chaotic annealing jumps from the
current state (point 6) to a new state (point 14) which is
not in the neighborhood of point 6.
It is important to recognize the impact that the effective distribution of this
chaotic mapping function can have on the outcome of the optimization process. If the
distribution is not relatively flat (that is, if each point in the matrix is not, on average,
selected approximately the same number of times), then an undue bias may be
introduced in the chaotic annealing process. This bias may then concentrate all or most
of the algorithm's search activity in one region of the matrix or across certain high-
probability points. A flat distribution is therefore highly desirable (unless, of course,
prior knowledge of the problem in question reveals that the optimal solution is probably
in a particular region of the solution space; then a generally skewed distribution which
points most often to this region may be preferable). Appendix B addresses this issue
from the perspective of the matrices which were used in the test cases for this paper.
As outlined above, the chaotic annealing algorithm may be implemented as a
computer program and then tested against a variety of data. Appendix C provides a
listing of one possible implementation of the algorithm, and, for comparison, an
additional listing of one implementation of a traditional simulated annealing algorithm.
These two programs have been applied to a variety of data sets, and the results are
presented and contrasted in the following section.


Incidentally, in the implementation of these algorithms (both simulated and
chaotic annealing), the annealing schedule which was chosen was t = a^p t_0 + t_min (see
Lo and Bavarian2). In addition, since a key component of chaotic annealing involves
mapping a transition from the current system configuration to a potentially new system
configuration, the actual mechanism for calculating that transition is clearly critical. For
the test cases discussed in the following section, the technique involved iterating the
logistic function twenty times plus the number of the matrix position corresponding to
the current state (n = 0 to 35 in most test cases) plus a small random number. In this
manner the output of the logistic function -- and therefore the new transition -- is
chaotic (pseudo-random) and is uncorrelated with the preceding transitions, but is
nevertheless also partially a function of the current state. The random number which is
added to the number of iterations also serves to prevent the algorithm from repeating a
particular transition if it happens to return to an already-visited matrix point.
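That transition mechanism might be sketched as follows (the renormalization scale factor and the range of the small random addend are assumptions; the thesis does not specify them here):

import random

def transition(current_pos, current_energy_norm, n):
    # Iterate the logistic function 20 + (current matrix position) +
    # (a small random number) times, then map the chaotic output to
    # a matrix position 0..n-1.
    iterations = 20 + current_pos + random.randint(0, 5)   # assumed range
    x = current_energy_norm
    for _ in range(iterations):
        x = 4.0 * x * (1.0 - x)
    return int(x * 10_000) % n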
Summary of Experimental Results.
Often, in realistic optimization problems, the solution space (or, in this paper,
the matrix which corresponds to all possible configurations of the system) can be
characterized as displaying some form of pattern. That is, some trend may manifest
itself across the matrix. For instance, there may exist a general smooth descent toward
a single minimum state, or there may be many localized regions (neighborhoods) of
minima or maxima. Conversely, the solution space may reflect an entirely random
distribution of energy states. For this reason, tests comparing the respective
performances of chaotic and simulated annealing were conducted initially against five
different deliberately-constructed solution spaces: Planar, Gradient, Deep_Wells,
Barrier, and Random. Figure 4 depicts each of these as matrices of energy states, each
energy corresponding to a possible system configuration.


Planar
0.3 0.3 0.3 0.3 0.3 0.3
0.3 0.3 0.3 0.3 0.3 0.3
0.3 0.3 0.1 0.3 0.3 0.3
0.3 0.3 0.3 0.3 0.3 0.3
0.3 0.3 0.3 0.3 0.3 0.3
0.3 0.3 0.3 0.3 0.3 0.3
Gradient
0.6 0.5 0.4 0.3 0.2 0.1
0.6 0.5 0.4 0.3 0.2 0.2
0.6 0.5 0.4 0.3 0.3 0.3
0.6 0.5 0.4 0.4 0.4 0.4
0.6 0.5 0.5 0.5 0.5 0.5
0.6 0.6 0.6 0.6 0.6 0.6
Deep_Wells
0.9 0.9 0.2 0.9 0.9 0.9
0.9 0.9 0.9 0.9 0.9 0.9
0.3 0.9 0.9 0.9 0.4 0.9
0.9 0.9 0.9 0.9 0.9 0.9
0.9 0.9 0.9 0.9 0.9 0.9
0.9 0.9 0.9 0.1 0.9 0.9
Barrier
0.4 0.2 0.9 0.9 0.3 0.2
0.5 0.3 0.9 0.9 0.6 0.4
0.9 0.9 0.9 0.9 0.9 0.9
0.9 0.9 0.9 0.9 0.9 0.9
0.8 0.7 0.9 0.9 0.5 0.3
0.3 0.3 0.9 0.9 0.1 0.6
Random
0.913924 0.348721 0.821505 0.187676 0.527701 0.263328
0.226258 0.767283 0.540483 0.534554 0.580722 0.492628
0.569488 0.927739 0.217199 0.512075 0.975056 0.493052
0.749735 0.0883754 0.467121 0.815091 0.0310712 0.917721
0.632113 0.0300077 0.0268785 0.483891 0.76579 0.712305
0.483099 0.198115 0.530175 0.533953 0.770352 0.35417
Figure 4: Five Test Cases. Each matrix represents a
particular optimization problem topology. The simulated
annealing and chaotic annealing algorithms were both
tested against each of these matrices five times.


The Planar and Gradient test cases represent problems which should be
relatively simple for simulated annealing to solve. The Planar case presents a uniformly
horizontal surface (E = 0.3) with one anomalous lower-energy "pit" (E = 0.1), and the
Gradient case presents a smoothly curved surface which flows in the direction of a
single minimum (E = 0.1). Because of its successive neighborhood search process,
simulated annealing should haphazardly find the minimum in the Planar case and
should tend generally toward the minimum in the Gradient case. These tests were
therefore established as benchmarks for evaluating the performance of chaotic annealing
because, prior to testing, it was not clear how that algorithm's tunneling capability
would manage in a solution space whose continuous features would presumably be
unwittingly apparent to simulated annealing.
The test cases named Deep_Wells and Barrier served the opposite purpose.
Deep_Wells represents a solution space characterized by a few isolated minima (E =
0.1, 0.2, 0.3, 0.4) surrounded by a plane of extremely high energy (E = 0.9).
Similarly, Barrier constitutes a system of four low energy regions separated by walls of
high energy (E = 0.9). In theory, simulated annealing should not perform well in these
cases because, once it finds a single or regional minimum, it will not be able to break
through these intentionally impenetrable energy walls. Even if the annealing schedule
is sufficiently gradual so that, at high temperatures, periodic energy increases are
accepted, a dramatic difference between energy values will render such acceptances
infrequent. In large (realistic) systems, simulated annealing will thus be overwhelmed.
Chaotic annealing, however, should theoretically not fall into, and from that point on be
confined to, these local minimum energy wells: its tunneling capability is designed to
permit jumps through such high energy barriers.
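To make the barrier argument concrete, the following minimal sketch (written for this transcription, not part of the original thesis; it borrows the acceptance test and its factor-of-1000 energy scaling from the Appendix C listings) evaluates the probability that simulated annealing accepts the climb from a low energy region (E = 0.3) onto a Barrier wall (E = 0.9):

#include <cstdio>
#include <cmath>

int main() {
    double dE = 0.9 - 0.3;                  // climb from a low energy region onto the wall
    double temps[] = { 100.0, 30.0, 10.0 }; // representative temperatures from the schedule
    for (int i = 0; i < 3; i++) {
        // acceptance probability, as in the Appendix C listings:
        // P = exp(-dE * 1000 / t)
        double p = exp(-dE * 1000.0 / temps[i]);
        printf("t = %6.2f   P(accept climb) = %g\n", temps[i], p);
    }
    return 0;
}

Even at the initial temperature of 100 the climb is accepted only about once in four hundred attempts, and by t = 10 the probability has fallen to roughly 10^-26; hence simulated annealing stays confined to whichever low energy region it first enters.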
Finally, the Random test case represents an unordered matrix of energy states
with one minimum (E = 0.0268785). It is not clear a priori which algorithm, simulated
or chaotic annealing, is better suited to finding the solution for this system.


Two important points should be evident from the matrices in Figure 4. First,
the matrices are small (6 x 6), and could therefore be construed as being generally
unrepresentative or unrealistic. Second, the matrices are already populated with energy
values, even though, in reality, those values would be unknown (after all, if the energy
corresponding to every possible state is known, then the only algorithm needed is one
which can efficiently sort numbers). These observations are accurate; nevertheless the
size and nature of the test cases are intentional. It is clearly computationally demanding
to determine the optimal solution to a realistic problem, so, for the purposes of this
work, only small scale test cases are employed to evaluate the respective performances
of simulated and chaotic annealing. Proof-of-concept is the intent here. In addition,
because the method for calculating the energy associated with any one system state is
problem dependent, rather straightforward, and identical for both algorithms, it is
essentially immaterial for this investigation. Not having to calculate energy following
each potential transition significantly accelerates both algorithms' performance and, at
this level, in no way detracts from the experimental results.
The simulated annealing and chaotic annealing algorithms were both run against
the five test cases five times (for a total of 25 tests for each algorithm). Following these
tests, a single performance trial for both algorithms was conducted against a randomly
generated 100 x 100 matrix. A selection of raw output from these experiments is
included in Appendix D. The data are tabulated and discussed below.
An obvious initial measure of an algorithm's performance may be drawn from a
simple count of the number of times it converges to a global minimum. This count is
provided in Table 1.


Table 1: Simulated Annealing versus Chaotic Annealing.

                      Simulated Annealing            Chaotic Annealing
                   # Times    # Times Global      # Times    # Times Global
Test Case          Tested     Minimum Found       Tested     Minimum Found
Planar                5             5                5             5
Gradient              5             5                5             5
Deep_Wells            5             1                5             5
Barrier               5             1                5             0
Random                5             0                5             2
Total                25            12               25            17
Percent Effectiveness             48%                            68%

A simple count of the number of times each algorithm converges to a global
minimum indicates that, at least for this set of tests, chaotic annealing is the
more effective algorithm.

From this data it is clear that simulated annealing did, as expected, perform well
on the benchmark test cases (Planar and Gradient) and poorly on the difficult test cases
(Deep_Wells and Barrier). In fact, in the single instances in which simulated annealing
converged to the global minimum in the Deep_Wells and Barrier systems, the randomly
selected initial condition had already placed the algorithm near the final (correct)
solution. In all other cases, simulated annealing either fell into the wrong deep well and
was unable to move out, or simply could not penetrate the high energy barrier and
thereby remained helplessly confined to the wrong low energy region.
Like simulated annealing, chaotic annealing performed well on the simple
benchmark test cases. But it also performed remarkably well in the Deep_Wells case
regardless of its initial condition -- in all instances it converged to the global minimum
(E = 0.1), even though in two instances it temporarily occupied a deep local minimum
(E = 0.2). This is the minimum which was, on three occasions, lethal to simulated
annealing. For some reason, Barrier confounded chaotic annealing; the algorithm never
converged to the global minimum of this test case.


The Random test case proved interesting. Out of five tests, simulated annealing
never converged to the global minimum; chaotic annealing did so twice. In four of the
five tests, however, simulated annealing did find the global minimum in the course of
the annealing schedule, often bouncing back and forth between the correct answer and
the one which was eventually chosen. This effect also occurred once with chaotic
annealing.
In terms of effectiveness, after 25 tests simulated annealing selected the global
minimum 48% of the time compared to 68% for chaotic annealing. After removing the
easier, less realistic test cases (Planar and Gradient), the numbers become 13% for
simulated annealing and 47% for chaotic annealing, although these numbers would
presumably diverge as the number of test cases and the size of the matrices are increased.
If, however, the global minimum is not reached by either algorithm, the
question, then, is which algorithm selected the lower (as opposed to lowest) final
energy. These data are tabulated in Table 2.
Table 2: Non-Optimal Solutions.

                 # of Times Global Minimum       Number of Times Lower Value Selected
Test Case        Not Found by Either Algorithm   Simulated    Chaotic     Both (Tie)
Planar                       0                      N/A          N/A         N/A
Gradient                     0                      N/A          N/A         N/A
Deep_Wells                   0                      N/A          N/A         N/A
Barrier                      4                       0            2           2
Random                       3                       3            0           0
A more comprehensive understanding of algorithmic performance may be
gained by answering the question: "If a global minimum was not found, which
algorithm terminated on a better solution?"


Table 2 indicates that out of the four Barrier test cases in which a global
minimum was not found, chaotic annealing selected the lower value twice, and both
simulated annealing and chaotic annealing converged to the lower value an additional
two times. And even though chaotic annealing settled into the global minimum twice in
the Random test case, in the remaining three test instances simulated annealing selected
the lower value. [Note that in each test instance both algorithms began their respective
transition processes at the same matrix point.]
By combining the data in Tables 1 and 2, a composite performance score can be
determined. Table 3 thus answers the following question: "Taking into account all test
cases, both those with globally optimal and those with locally optimal results, which
algorithm more reliably converges to the lower value?"
Table 3: Combined Relative Performance.

                        Number of Times Best Solution Found
Test Case        Simulated Annealing    Chaotic Annealing    Tie
Planar                    0                     0              5
Gradient                  0                     0              5
Deep_Wells                0                     5              0
Barrier                   1                     2              2
Random                    3                     2              0
Total                     4                     9             12
Percent Relative
Effectiveness            16%                   36%            48%
By merging the data in Tables 1 and 2, greater insight into algorithmic
performance may be derived. This table accounts for both absolute and relative
optimal selections by each algorithm, and therefore indicates which algorithm
performed better across all test instances.


Awarding ties to both algorithms, simulated annealing found the lower energy
64% of the time, compared to 84% for chaotic annealing. Excluding the easier test
cases results in effectiveness comparisons of 40% for simulated annealing and 73% for
chaotic annealing. As might be expected, timing for these small test cases was
negligible for both algorithms: on the order of fractions of a second for the algorithms
to terminate.
One large-scale test was run in addition to the five test cases discussed above,
and involved applying both algorithms to a randomly generated 100 x 100 matrix.
Again, the results proved interesting. Both algorithms were allowed 10,000 iterations
(potential transitions) at each temperature level of the annealing schedule. Chaotic
annealing took approximately nine minutes to terminate; simulated annealing took
approximately one minute. Chaotic annealing selected the sixth lowest energy in a
10,000 element matrix (E_selected = 0.00134643 compared to E_min = 0.000310136);
simulated annealing's selection was not even within the lowest 125 energy values
despite over 570 pages of output (E_selected = 0.0145141). Chaotic annealing's selection
of a final system configuration was far superior to that of simulated annealing.
However, the fact that simulated annealing was almost an order of magnitude faster
than chaotic annealing is a matter of concern, especially since algorithmic efficiency
was an initial rationale for investigating potential enhancements to the simulated
annealing algorithm. Given the same number of iterations at each temperature of the
annealing schedule, chaotic annealing will be consistently slower than simulated
annealing because of the large number of floating point calculations required to generate
successive iterates of the logistic equation. The implication of this fact is treated in the
following section.


Conclusions and Recommendations.
The literature is rife with techniques for determining the optimal solutions to a
vast number of both realistic and theoretical combinatorial optimization problems.
Since exhaustive search is typically too inefficient for this category of problems,
algorithms which employ divide and conquer, iterative improvement, or heuristic
filtering mechanisms are common. Some customized algorithms are highly problem-
specific, while others, at least in theory, attempt to be broadly applicable. In order
to be effective, however, these algorithms must all adequately face the essential nature
of combinatorial problems: intractability, complexity, and exponential growth in the
number of possible solutions.
Simulated annealing offers a dramatically different approach to optimization,
one which is derived from a model of nature's own path to optimality. By representing
a problem as a system to be "heated" and then progressively "cooled," this algorithm
attempts to parallel the process of annealing a solid into a perfect crystalline structure.
Yet despite its promise as a robust and general optimization algorithm, simulated
annealing suffers from a significant liability: its efficiency. One of the premises of
this paper is that, because it depends on a large number of successive neighborhood
searches to reach a system-wide optimum, the simulated annealing process is too slow
to meet the requirements of a considerable number of real-world optimization problems.
For this reason, chaotic annealing was proposed as an enhanced (more efficient)
algorithm.
Chapter 2 introduced the two fundamental components of chaotic annealing:
simulated annealing and chaos. Then, in this chapter, an explanation and derivation of
a chaotic annealing algorithm were presented. A simulated annealing and a chaotic
annealing algorithm were both implemented and tested against five small and one large
test case. In each instance, both algorithms began their respective search and transition
processes at the same point in the system solution space.


Naturally, given the small number and simplicity of the test cases which were
considered, the results presented in the preceding section are necessarily inconclusive.
Yet the implication of the pattern which seems to have developed, albeit with limited
data, is compelling. Tentative conclusions which may be drawn center around two
performance parameters which are critical in judging the value of any particular
algorithm: an algorithm's effectiveness at achieving an optimal or near-optimal solution
and the time it takes to do so. Under a variety of test conditions, chaotic annealing
appeared to outperform simulated annealing in terms of achieving an optimal (or at least
better) solution. The anticipated value of tunneling, of not being restricted to
neighborhood transitions, therefore seems to have been validated in practice. [In
general, then, algorithmic effectiveness is partially a function of the characteristics or
topology of the problem in question.] In addition to its performance on simple problem
sets, chaotic annealing appears to be particularly appropriate for problems with many
isolated minima, since it will not necessarily get stuck on the wrong side of high energy
barriers. This characteristic is especially important in problems with extremely large
search spaces, and chaotic annealing thus shows promise as being more effective
against a wider variety of topologies than simulated annealing.
It also appears that the performance of both algorithms would be significantly
enhanced by implementing a "memory" function. The experimental data show that in
the course of searching for a new state and progressively lowering the system
temperature, both simulated and chaotic annealing often happen upon the correct system
solution but continue beyond it and terminate on a lesser solution. By storing the single
lowest energy state encountered, and then returning to that state if it is lower than the
state occupied at process termination, each algorithm would finish with a solution at
least as good as any it visited. Table 4 indicates how performance would have been improved if both
algorithms had had a memory capability during the test cases which were run for this
paper.
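A minimal sketch of the proposed memory function follows (illustrative only; the structure and function names are not from the Appendix C listings, and the sample energies are taken from the Random.2 printouts of Appendix D, where both algorithms oscillate between [4,1] and [4,2] before terminating):

#include <cstdio>
#include <cfloat>

struct Posit { int row; int col; };

struct Memory { Posit best; double best_e; };

// record a visited state if it is the lowest energy seen so far
void remember(Memory &m, Posit p, double e) {
    if (e < m.best_e) { m.best_e = e; m.best = p; }
}

int main() {
    Memory m = { { 0, 0 }, DBL_MAX };       // no state seen yet
    // a fragment of the Random.2 walk: the global minimum is visited mid-run
    Posit visited[] = { { 4, 1 }, { 4, 2 }, { 4, 1 } };
    double energy[] = { 0.0300077, 0.0268785, 0.0300077 };
    for (int i = 0; i < 3; i++) remember(m, visited[i], energy[i]);
    // on termination, return to the stored state if it beats the final one
    printf("best state: [%d,%d]  E = %g\n", m.best.row, m.best.col, m.best_e);
    return 0;
}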


Table 4: Annealing With Memory.

                         Number of Times Global Minimum Found
                   Simulated Annealing           Chaotic Annealing
Test Case       As-Implemented  With Memory   As-Implemented  With Memory
Planar                5              5              5              5
Gradient              5              5              5              5
Deep_Wells            1              1              5              5
Barrier               1              1              0              3
Random                0              4              2              3
Total                12             16             17             21
Percent
Effectiveness        48%            64%            68%            84%
A memory function can significantly improve the operational performance of
both simulated and chaotic annealing.
The second key performance parameter, timing, seems to be somewhat less
straightforward. In all of the test cases for this paper, simulated annealing and chaotic
annealing were both restricted to the same number of iterations at each temperature.
This being the case, simulated annealing is significantly faster than chaotic annealing
due, as mentioned earlier, to the latter algorithm's need to perform many floating point
calculations. But it appears that, to arrive at a particular solution (optimal or merely
lower), chaotic annealing requires significantly fewer processing steps than simulated
annealing. Globally, then, chaotic annealing may be construed as being more efficient
because the annealing process may be terminated sooner while still maintaining a high
likelihood of achieving an optimal or near-optimal solution.


If nothing else, the results of the series of experiments presented in this paper
recommend that additional investigations into the applicability and effectiveness of
chaotic (or some other form of improved) annealing be conducted. To that end, the
following recommendations are intended to suggest further avenues of research into
and alternative implementations of both the simulated and chaotic annealing algorithms:
1. Implement and test both algorithms with memory.
2. Timing tests: Perform rigorous timing versus optimality
comparisons of chaotic and simulated annealing. Use large test
cases. Limit the number of iterations -- for instance, allow chaotic
annealing only half the iterations that simulated annealing is allowed.
Compare results. For the same "degree of optimality" how long
must chaotic versus simulated annealing be run?
3. Experiment with different annealing schedules.3,4 Determine the
effect of varying the values of a, t_init, and t_min in the annealing
schedule equation (see the sketch following this list). Determine the
effect of modifying the precision of the annealing schedule.
4. Determine the effect of different chaos generators (different chaotic
equations will have different distributions; how will this impact
algorithmic effectiveness?). Also evaluate the effect of altering the
number of iterations of the chaotic equation. Are the successive
tunneling transitions of chaotic annealing highly correlated?
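As a starting point for the third recommendation, the following sketch counts the temperature levels produced by the schedule used in the Appendix C listings, t_k = a^k * t_init + t_min. The constants a = 0.2, t_init = 100, and t_min = 10 are the values in the listings; the alternate values of a are assumptions chosen for comparison:

#include <cstdio>
#include <cmath>

int main() {
    double t_init = 100.0, t_min = 10.0, delta = 1e-8;
    double alphas[] = { 0.2, 0.5, 0.9 };    // a = 0.2 is the value in the listings
    for (int j = 0; j < 3; j++) {
        double a = alphas[j];
        double t = t_init;
        int levels = 0;
        while (t > t_min + delta) {         // same stopping rule as Appendix C
            levels++;
            t = pow(a, (double)levels) * t_init + t_min;
        }
        printf("a = %.1f  ->  %d cooling steps\n", a, levels);
    }
    return 0;
}

Since 100 transition attempts are made at every temperature level, the choice of a directly scales the total running time of either algorithm.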
At the conclusion of their book on simulated annealing, Laarhoven and Aarts expand on
some of these recommendations:
[I]t is desirable to make a profound comparison between different
implementations of simulated annealing, i.e. with different cooling schedules,
and to compare the best implementation...with tailored heuristics for a large set
of combinatorial optimization problems.


[G]iven a particular instance of a combinatorial optimization problem and a
particular implementation of the algorithm, it would be of great help to have a
probabilistic measure for the deviation of the solution, obtained by the
algorithm, from a globally minimal configuration.5
In addition to these recommendations which are intended to further understand
and improve both algorithms, a number of hybrid algorithms should be implemented
and tested. In particular, four potentially interesting mixtures of chaotic and simulated
annealing may be posed:
1. Apply chaotic annealing until the system is lowered to a certain
predesignated temperature, then switch to simulated annealing.
2. Apply chaotic annealing until a predetermined temperature is
achieved or a certain number of local minima are found, then spawn
multiple parallel simulated annealing (or even limited exhaustive
search) processes at each local minimum. Select the lowest result.
3. Turbo Annealing: Apply simulated annealing, except for a periodic
(or random) application of chaotic annealing. [Note: this may be the
"Fast Simulated Annealing" algorithm to which Lo and Bavarian
allude.6]
4. Apply chaotic or simulated annealing depending upon the value of
two random numbers. That is, suppose two random numbers (r1
and r2) are generated, and an acceptance probability function is
defined as P = exp(-ΔE/t). Then accept a chaotic jump if P > r2,
accept a simulated annealing neighborhood transition if r1 < P < r2,
and do not accept a transition if P < r1 (see the sketch after this list).
5. Implement these four algorithms with memory.
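The decision rule of the fourth hybrid can be sketched as follows (illustrative only: the ordering r1 < r2 is imposed here by sorting the two draws, a detail the description above leaves open, and the energy and temperature passed in main are arbitrary test values):

#include <cstdio>
#include <cstdlib>
#include <cmath>

enum Move { NO_MOVE = 0, NEIGHBOR = 1, CHAOTIC_JUMP = 2 };

// one decision of the proposed hybrid, using P = exp(-dE/t) from the text
Move choose_move(double dE, double t) {
    double r1 = drand48();
    double r2 = drand48();
    if (r1 > r2) { double tmp = r1; r1 = r2; r2 = tmp; }  // enforce r1 < r2
    double P = exp(-dE / t);
    if (P > r2) return CHAOTIC_JUMP;        // accept a chaotic jump
    if (P > r1) return NEIGHBOR;            // accept a neighborhood transition
    return NO_MOVE;                         // do not accept a transition
}

int main() {
    srand48(12345);
    int count[3] = { 0, 0, 0 };
    for (int k = 0; k < 100000; k++) count[choose_move(1.0, 1.0)]++;
    printf("none: %d  neighbor: %d  chaotic: %d\n",
           count[NO_MOVE], count[NEIGHBOR], count[CHAOTIC_JUMP]);
    return 0;
}

The higher the acceptance probability, the more often the algorithm tunnels; as the temperature falls, chaotic jumps give way to neighborhood transitions and, finally, to no transitions at all.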
Finally, it should be said that the original purpose of this paper was to provide
an avenue for investigation into two quite distinct topics of inquiry: simulated
annealing and chaos. By combining these two concepts, chaotic annealing provided an


expedient instrument for achieving this goal, which has been met. However, as can be
expected, the initial intent of this work was soon eclipsed by a compelling question: Is
it possible to actually improve the performance of simulated annealing? It seemed that
chaotic annealing might offer such a possibility.
As mentioned above, after implementing and testing both simulated and chaotic
annealing, and after analyzing and contrasting the limited results of the tests, it appears
reasonable to note that chaotic annealing may, in fact, be superior to simulated
annealing at least under certain circumstances or when posed with particular problem
topologies. Naturally, there remain many more tests to be conducted. But the greater,
more general question remains: excluding the notion of chaos, can simulated annealing
be improved? The answer to that question seems to be an unqualified "yes" from two
notable perspectives.
First, this paper introduced the concept of tunneling, a concept which, in both
theory and practice, appears to be quite an effective tool with which to enhance the
annealing process. As a pseudo-random number generator, chaos (or, more precisely,
the logistic function) provided a convenient mechanism with which to map a transition
anywhere in an optimization problem's solution matrix and thereby achieve the
tunneling effect. But chaos was selected as a mapping engine only because it was one
of the educational goals of this paper, and other faster random mapping mechanisms are
clearly available. The most straightforward implementation of tunneling would be to
simply employ the output of a random number generator in the same manner as that of
the logistic function. That is, use mod n of a truly random number (instead of the
chaotic output) to point to a new configuration in the solution matrix. This approach
avoids the inefficient iterative floating point calculations which chaos generation
requires, and the result would be a significantly faster algorithm which is
philosophically identical to chaotic annealing.
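A sketch of that replacement follows (illustrative; the function name is hypothetical). A single call to a uniform generator stands in for the many logistic-map iterations per transition, while preserving the ability to jump to any cell of the solution matrix:

#include <cstdio>
#include <cstdlib>

// one tunneling transition using a uniform random draw in place of the
// iterated logistic function
int random_jump(int n_rows, int n_cols, int current) {
    int n_max = n_rows * n_cols;
    int n_posit = (int)(lrand48() % n_max);      // mod n of a random number
    while (n_posit == current)                   // if new position == old, draw again
        n_posit = (int)(lrand48() % n_max);
    return n_posit;    // row = n_posit / n_cols, col = n_posit % n_cols
}

int main() {
    srand48(1);
    for (int k = 0; k < 5; k++)
        printf("jump to cell %d\n", random_jump(6, 6, 14));
    return 0;
}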


Second, there is the question of what larger concept the notion of tunneling is
addressing. The primary shortcoming of simulated annealing lies in its inherent
inability to transition beyond the neighborhood of the current system state. By making
no other modifications to the simulated annealing algorithm than implementing the
tunneling effect, chaotic annealing provides a means of expanding the neighborhood
search of the traditional annealing algorithm. Expanding the neighborhood is thus the
central issue of this paper's search for an improvement to simulated annealing, and
tunneling appears to be but one approach to addressing this issue. Another approach
which should be investigated involves warping the problem's solution space and
applying the standard simulated annealing algorithm. For example, the simplest
transformation of the solution space would involve wrapping the two-dimensional
solution matrix into a three-dimensional torus (connecting the left and right, and upper
and lower sides of the matrix, respectively). In such a manner, simulated annealing
would not be limited by an edge or a corner of the solution space. Beyond a toroidal
solid, other more severe distortions and complex shapes can be theorized until, in the
limit, a shape is attained in which all points in the solution space are within one
transition of (i.e., in the neighborhood of) all other points in that space. The effect
would be similar to that provided by the "wormholes" of astrophysics, and would
result in essentially the same global transition capability as that provided by chaotic
annealing.
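A sketch of the toroidal wrapping follows (illustrative; names are hypothetical). Offsets that run off one edge of the matrix re-enter on the opposite edge, so every candidate neighbor is in range and the boundary-validity loop in the sar.cc listing of Appendix C would become unnecessary:

#include <cstdio>

struct Posit { int row; int col; };

// compute a neighbor on the torus with modular arithmetic
Posit torus_neighbor(Posit p, int d_row, int d_col, int n_rows, int n_cols) {
    Posit q;
    q.row = (p.row + d_row + n_rows) % n_rows;   // wrap top and bottom edges
    q.col = (p.col + d_col + n_cols) % n_cols;   // wrap left and right edges
    return q;
}

int main() {
    Posit corner = { 0, 0 };
    Posit up = torus_neighbor(corner, -1, 0, 6, 6);   // moves to row 5
    printf("[0,0] moved up -> [%d,%d]\n", up.row, up.col);
    return 0;
}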
This work has thus come full circle. It began with an interest in the topics of
simulated annealing and chaos theory. That interest led to the notion of chaotic
annealing, which, in turn, led to the question of whether the performance of the
simulated annealing algorithm could be enhanced. Experimental results of tests against
both simulated and chaotic annealing reveal that chaotic annealing, or at least the
characteristic capabilities of chaotic annealing, benefits from inherent advantages over
traditional simulated annealing. But with minor alterations to the algorithm these


advantages can be transferred to simulated annealing itself, with a concomitant
improvement in performance and without a dependence on chaos. Whether or not
chaotic annealing is ever employed as it is presented in this paper, it has suggested
many fertile and encouraging avenues of research in the continuing quest for improved
combinatorial optimization performance.


Notes
1. P.J.M. van Laarhoven and E.H.L. Aarts. Simulated Annealing: Theory and
Applications. D. Reidel Publishing Company, Dordrecht, 1987, pp. 155-156.
2. Zhen-Ping Lo and B. Bavarian. "Optimization of Job Scheduling on Parallel
Machines by Simulated Annealing," Department of Electrical and Computer
Engineering, University of California at Irvine, (Undated), pp. 4-5.
3. Ibid.
4. Laarhoven and Aarts, op. cit., pp. 59-71.
5. Ibid., p. 156.
6. Lo and Bavarian, op. cit., p. 3.


Bibliography
Simulated Annealing.
Jeng, Fure-Ching, and Woods, John W. "Simulated Annealing in Compound Gaussian
Random Fields," IEEE Transactions on Information Theory, Vol. 36, No. 1,
January 1990, pp. 94-107.
Laarhoven, P.J.M. van, and Aarts, E.H.L. Simulated Annealing: Theory and
Applications. D. Reidel Publishing Company, Dordrecht, 1987.
Lo, Zhen-Ping, and Bavarian, B. "Optimization of Job Scheduling on Parallel
Machines by Simulated Annealing," Department of Electrical and Computer
Engineering, University of California at Irvine, (Undated), pp. 1-13.
Press, William H., et al. Numerical Recipes: The Art of Scientific Computing.
Cambridge University Press, New York, 1986, pp. 326-334.
Chaos.
Chua, Leon O., and Madan, Rabinder N. "Sights and Sounds of Chaos," IEEE
Circuits and Devices Magazine, January 1988, pp. 3-13.
Culioli, J.C., Protopopescu, V., and Mann, R.C. "Chaotic Behavior in a New Class
of Parallel Optimization Algorithms." Submitted to appear in the Proceedings of the
Seventh Symposium on Energy Engineering Sciences, June 19-21, 1989,
Argonne, IL, pp. 1-10.
Devaney, Robert. An Introduction to Chaotic Dynamical Systems. Addison-Wesley
Publishing Company, Redwood City, CA, 1987.
McRobie, Allan, and Thompson, Michael. "Chaos, Catastrophes, and Engineering,"
New Scientist, Vol. 126, No. 1720, 9 June 1990, pp. 41-46.
Parker, Thomas S., and Chua, Leon O. "Chaos: A Tutorial for Engineers,"
Proceedings of the IEEE, Vol. 75, No. 8, August 1987, pp. 982-1008.
Ruelle, D. "Deterministic Chaos: The Science and the Fiction," Proceedings of the
Royal Society of London, Mathematical and Physical Sciences, Vol. 427, No.
1873, 8 February 1990, pp. 241-248.
Schuster, P. (ed). "Stochastic Phenomena and Chaotic Behavior in Complex
Systems," Proceedings of the Fourth Meeting of the UNESCO Working Group on
Systems Analysis, Flattnitz, Kärnten, Austria, 6-10 June 1983. Springer-Verlag,
New York, 1984, pp. 1-27, 30-51, 67-97, 106-115, and 142-159.
Sparrow, Colin. The Lorenz Equations: Bifurcations, Chaos, and Strange Attractors.
Springer-Verlag, New York, 1982.
Stewart, H. Bruce. "The Geometry of Chaos," Brookhaven Lecture Series, Number
209. Brookhaven National Laboratory, Upton, NY, 14 November 1984, pp. 1-26.


Combinatorial Optimization, NP-Completeness, and Algorithms.
Garey, Michael R., and Johnson, David S. Computers and Intractability: A Guide to
the Theory of NP-Completeness. W.H. Freeman and Company, New York, 1979.
Machtey, Michael, and Young, Paul. An Introduction to the General Theory of
Algorithms. The Computer Science Library, Theory of Computation Series,
Elsevier North Holland, Inc., New York, 1978.
Manber, Udi. Introduction to Algorithms: A Creative Approach. Addison-Wesley
Publishing Company, Reading, MA, 1989.
Nilsson, Nils. Principles of Artificial Intelligence. Tioga Publishing Company, Palo
Alto, CA, 1980.
Schoppers, Marcel J. "On A* as a Special Case of Ordered Search." University of
Illinois at Urbana-Champaign, Department of Computer Science, (Undated), pp. 1-3.
General.
Bender, Carl M., and Orszag, Steven A. Advanced Mathematical Methods for
Scientists and Engineers. International Series in Pure and Applied Mathematics,
McGraw-Hill Book Company, New York, 1978.
Witt, August. "Lecture Notes: Introduction to Solid State Chemistry (3.091)."
Massachusetts Institute of Technology, Cambridge, MA, Fall, 1977.


Appendices


Appendix A: Logistic Function Number Generation
The following table presents a comparison of the iterative outputs of the logistic
function [f(x) = 4x (1-x)] and other functions [f(x) = 3.839x (1-x) and f(x) = cos(x)].
No matter what the initial input may be (as long as 0 < x < 1) and no matter how many
iterations occur, the logistic function continues to generate a sequence of pseudo-
random numbers. In contrast, the second equation converges to a repeating pattern of
three numbers, and the third equation converges on the single value x ≈ 0.739085. (A
short program for reproducing these sequences follows the table.)
Iteration Number    f(x) = 4x(1-x) [x0 = 0.6]    f(x) = 3.839x(1-x) [x0 = 0.6]    f(x) = cos(x) [x0 = 0.6]
0 0.96 0.92136 0.825336
1 0.1536 0.278158 0.67831
2 0.520028 0.770817 0.778634
3 0.998385 0.67819 0.711874
4 0.00640774 0.837855 0.757139
5 0.0254667 0.521543 0.726804
6 0.0992726 0.957968 0.747302
7 0.35767 0.154577 0.733525
8 0.918969 0.501693 0.742819
9 0.29786 0.959739 0.736565
10 0.836557 0.148339 0.74078
11 0.546917 0.484999 0.737942
12 0.991195 0.958886 0.739855
13 0.034909 0.151347 0.738567
14 0.134761 0.493086 0.739434
15 0.466403 0.959566 0.73885
16 0.995485 0.148948 0.739244
17 0.0179785 0.486641 0.738978
18 0.0706211 0.959065 0.739157
19 0.262535 0.150717 0.739037
20 0.774441 0.491397 0.739118
21 0.698727 0.959466 0.739063
22 0.84203 0.149303 0.7391
23 0.532063 0.487598 0.739075
24 0.995888 0.959159 0.739092
25 0.0163812 0.150383 0.739081
26 0.0644512 0.490502 0.739088
27 0.241189 0.959404 0.739083
28 0.732068 0.149522 0.739087
29 0.784578 0.488188 0.739084
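The following sketch (written for this transcription, not part of the original thesis) reproduces the three sequences above; its first rows match the table exactly:

#include <cstdio>
#include <cmath>

int main() {
    double a = 0.6, b = 0.6, c = 0.6;       // x0 = 0.6 for all three functions
    for (int i = 0; i < 30; i++) {
        a = 4.0 * a * (1.0 - a);            // f(x) = 4x(1-x): remains pseudo-random
        b = 3.839 * b * (1.0 - b);          // f(x) = 3.839x(1-x): settles into a 3-cycle
        c = cos(c);                         // f(x) = cos(x): converges to 0.739085...
        printf("%2d  %10.6f  %10.6f  %10.6f\n", i, a, b, c);
    }
    return 0;
}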


Appendix B: Distribution of the Chaotic Mapping Function
The distribution of the chaotic mapping function is critical to the effective
operation of the annealing algorithm because, if the distribution is not flat, it will tend to
concentrate the algorithm's activity instead of spreading it evenly across the system
solution space. As outlined in Chapter 3, the mapping process is dependent upon,
among other things, the current energy value and the size of the solution matrix, so
each instance of mapping will necessarily have a different distribution. Nevertheless, a
simple test can help to assure that each distribution tends to be flat.
The logistic function was iterated 10,000 times for three different input energy
values (x0 = 0.2, 0.6, and 0.962), the mapping function (mod n) was employed at each
iteration, and the progressive selections of each point in a 6 x 6 matrix were binned. In
all three tests the resulting distributions were essentially flat. The outcome of the test
with x0 = 0.6 is plotted in the following histogram.
[Histogram omitted in this transcription. X-axis: Matrix Points; the 10,000 selections are spread essentially flat across the 36 matrix points.]
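The binning test can be sketched as follows (illustrative; this simplified version iterates the map once per sample and omits the dependence on the current position that the Chapter 3 mapping includes, so exact counts will differ from the histogram above):

#include <cstdio>

int main() {
    const int n_max = 36;                        // 6 x 6 solution matrix
    int bins[n_max] = { 0 };
    double x = 0.6;                              // x0 = 0.6
    for (int k = 0; k < 10000; k++) {
        x = 4.0 * x * (1.0 - x);                 // logistic iterate
        int cell = ((int)(x * 100000.0)) % n_max; // mod n mapping, as in car.cc
        bins[cell]++;
    }
    for (int c = 0; c < n_max; c++)
        printf("cell %2d: %d\n", c, bins[c]);    // roughly 10000/36 ≈ 278 each if flat
    return 0;
}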


Appendix C: Chaotic and Simulated Annealing Program Listings
The following two program listings represent the functional implementations of
the chaotic and simulated annealing algorithms which were used to generate the test data
presented in Appendix D. The programs were written in C++ and run on a Sun
SPARCstation.


Chaotic Annealing


car.cc          Wed Oct 16 18:52:57 1991

#include <iostream.h>
#include <fstream.h>
#include <math.h>

static const int n_rows = 6;
static const int n_cols = 6;

main(int argc, char** argv)
{
    double mtrx[n_rows][n_cols];
    struct posit {
        int row;
        int col;
    };

    cout << "Chaotic Annealing" << endl;
    cout << "Input File: " << argv[1] << endl << endl;

    ifstream i;
    i.open(argv[1], ios::nocreate);              // open database
    if (!i) {
        cerr << "Bleh!! cannot open file !!" << endl;
        exit(1);
    }

    // load data: start position and random seed, then the energy matrix
    int row, col;
    long seed;
    i >> row;
    i >> col;
    i >> seed;
    for (int ix = 0; ix < n_rows; ix++) {
        for (int iy = 0; iy < n_cols; iy++) {
            i >> mtrx[ix][iy];
            cout << mtrx[ix][iy] << " ";
        }
        cout << endl;
    }
    cout << endl;
    i.close();

    srand48(seed);                               // seed random number generation
    cout << "starting at [" << row << "][" << col << "]" << endl;
    cout << "Temperature" << "\t" << "Position" << "\t" << "Energy" << endl;

    // initialize variables
    double next;
    double t_init = 100.0;
    double t_min = 10.0;
    double delta = .00000001;
    double t_current = t_init;
    double c;
    int counter = 0;
    int indx;
    int n;
    double a = .2;
    double logist;
    struct posit current;
    current.row = row;
    current.col = col;
    int n_r;
    int n_c;
    int n_posit;
    int n_max = n_rows * n_cols;
    int n_rnd;

    while (t_current > t_min + delta) {
        for (int ix = 0; ix < 100; ix++) {       // iterate at each temp
            next = drand48();
            n = current.row * n_cols + current.col;
            n_rnd = (int)(next * 10);

            // iterate the logistic equation, seeded with the current energy,
            // to generate a chaotic jump destination
            logist = mtrx[current.row][current.col];
            for (indx = 0; indx < 20 + n + n_rnd; indx++) {
                logist = 4 * logist * (1 - logist);
            }
            logist = logist * 100000.0;
            n_posit = ((int)logist) % n_max;

            // if new position == old, do it again!
            while (n_posit == n) {
                logist = logist / 100000.0;
                logist = 4 * logist * (1 - logist);
                logist = logist * 100000.0;
                n_posit = ((int)logist) % n_max;
            }
            n_r = n_posit / n_cols;
            n_c = n_posit % n_cols;

            c = mtrx[n_r][n_c] - mtrx[current.row][current.col];
            if ((c <= 0) || (exp(-c * 1000 / t_current) > next)) {     // take??
                current.row = n_r;
                current.col = n_c;
                cout << "\t" << t_current << "\t\t[" << current.row << ",";
                cout << current.col << "]\t\t" << mtrx[current.row][current.col] << endl;
            }
        }
        counter++;
        t_current = (pow(a, (double)counter) * t_init) + t_min;       // lower temp.
    }
}


Simulated Annealing


sar.cc          Wed Oct 16 18:49:45 1991

#include <iostream.h>
#include <fstream.h>
#include <math.h>

static const int n_rows = 6;
static const int n_cols = 6;

main(int argc, char** argv)
{
    double mtrx[n_rows][n_cols];
    struct posit {
        int row;
        int col;
    };

    cout << "Simulated Annealing" << endl;
    cout << "Input File: " << argv[1] << endl << endl;

    ifstream i;
    i.open(argv[1], ios::nocreate);              // open database
    if (!i) {
        cerr << "Bleh!! cannot open file !!" << endl;
        exit(1);
    }

    // load data: start position and random seed, then the energy matrix
    int row, col;
    long seed;
    i >> row;
    i >> col;
    i >> seed;
    for (int ix = 0; ix < n_rows; ix++) {
        for (int iy = 0; iy < n_cols; iy++) {
            i >> mtrx[ix][iy];
            cout << mtrx[ix][iy] << " ";
        }
        cout << endl;
    }
    cout << endl;
    i.close();

    srand48(seed);                               // seed random number generation
    cout << "starting at [" << row << "][" << col << "]" << endl;
    cout << "Temperature" << "\t" << "Position" << "\t" << "Energy" << endl;

    // initialize variables
    int valid = 0;
    double next;
    double t_init = 100.0;
    double t_min = 10.0;
    double delta = .00000001;
    double t_current = t_init;
    double c;
    int counter = 0;
    int indx;
    double a = .2;
    struct posit adj[4];
    struct posit current;
    current.row = row;
    current.col = col;

    while (t_current > t_min + delta) {
        for (int ix = 0; ix < 100; ix++) {       // iterate at each temp
            adj[0].row = current.row - 1;        // position above
            adj[0].col = current.col;
            adj[1].row = current.row + 1;        // position below
            adj[1].col = current.col;
            adj[2].row = current.row;            // position left
            adj[2].col = current.col - 1;
            adj[3].row = current.row;            // position right
            adj[3].col = current.col + 1;

            valid = 0;
            while (!valid) {
                next = drand48();                // randomly select a neighbor
                indx = (int)(next * 1000) % 4;
                if ((adj[indx].row >= 0 && adj[indx].col >= 0) &&
                    (adj[indx].row < n_rows && adj[indx].col < n_cols)) valid = 1;
            }

            c = mtrx[adj[indx].row][adj[indx].col] -
                mtrx[current.row][current.col];
            if ((c <= 0) || (exp(-c * 1000 / t_current) > next)) {     // take??
                current.row = adj[indx].row;
                current.col = adj[indx].col;
                cout << "\t" << t_current << "\t\t[" << current.row << ",";
                cout << current.col << "]\t\t" << mtrx[current.row][current.col] << endl;
            }
        }
        counter++;
        t_current = (pow(a, (double)counter) * t_init) + t_min;       // lower temp.
    }
}


Appendix D: Tabulation of Selected Experimental Results
Due to the quantity of output products, this section does not contain
comprehensive results from all of the tests run against simulated and chaotic annealing.
One sample printout from each test case (Planar, Gradient, Barrier, and Random) has
been selected for both algorithms. In addition, all five instances of the Deep_Wells test
case are presented for both algorithms. Finally, the data from the single large (100 x
100) test case are presented, although only the first and last pages of the simulated
annealing output are included.
Each printout indicates the algorithm, test case ("input file"), and, except for the
large test case, the actual solution matrix. This information is followed by the initial
system configuration and a listing of each subsequent "take" (or, new state acceptance)
at each temperature in the annealing schedule. In the "position" column note how
simulated annealing moves only from neighbor to neighbor, while chaotic annealing
jumps throughout the solution space.


Test Case: Planar


car.Planar.4          Wed Oct 16 19:25:44 1991          1
Chaotic Annealing
Input File: Planar.4
0.3 0.3 0.3 0.3 0.3 0.3
0.3 0.3 0.3 0.3 0.3 0.3
0.3 0.3 0.1 0.3 0.3 0.3
0.3 0.3 0.3 0.3 0.3 0.3
0.3 0.3 0.3 0.3 0.3 0.3
0.3 0.3 0.3 0.3 0.3 0.3
starting at [5][0]
Temperature    Position    Energy
100    [1,0]    0.3
100    [3,0]    0.3
100    [1,3]    0.3
100    [0,4]    0.3
100    [2,0]    0.3
100    [2,3]    0.3
100    [3,1]    0.3
100    [2,5]    0.3
100    [4,3]    0.3
100    [3,5]    0.3
100    [0,2]    0.3
100    [1,3]    0.3
100    [2,5]    0.3
100    [3,1]    0.3
100    [2,5]    0.3
100    [2,3]    0.3
100    [1,3]    0.3
100    [2,5]    0.3
100    [4,0]    0.3
100    [5,3]    0.3
100    [0,2]    0.3
100    [4,2]    0.3
100    [1,1]    0.3
100    [5,2]    0.3
100    [5,1]    0.3
100    [5,3]    0.3
100    [1,0]    0.3
100    [2,0]    0.3
100    [0,4]    0.3
100    [2,5]    0.3
100    [1,3]    0.3
100    [0,4]    0.3
100    [5,0]    0.3
100    [1,2]    0.3
100    [3,0]    0.3
100    [2,5]    0.3
100    [4,0]    0.3
100    [3,5]    0.3
100    [0,4]    0.3
100    [3,0]    0.3
100    [4,0]    0.3
100    [4,3]    0.3
100    [5,1]    0.3
100    [0,4]    0.3
100    [2,0]    0.3
100    [1,3]    0.3
100    [5,2]    0.3
100    [1,0]    0.3
100    [2,5]    0.3
100    [4,0]    0.3
100    [4,4]    0.3
100    [0,2]    0.3


car.Planar.4          Wed Oct 16 19:25:44 1991          2
[The remainder of the chaotic annealing Planar.4 printout is illegible in the source scan.]

sar.Planar.4          Wed Oct 16 19:25:27 1991          1-2
[The first two pages of the simulated annealing Planar.4 printout are illegible in the source scan.]
sar.Planar.4          Wed Oct 16 19:25:27 1991          3
30 [position illegible] 0.3
30 [position illegible] 0.3
30 [position illegible] 0.3
30 [2,5] 0.3
30 [2,4] 0.3
30 [3,4] 0.3
30 [3,3] 0.3
30 [3,2] 0.3
30 [3,3] 0.3
30 [3,2] 0.3
30 [3,1] 0.3
30 [2,1] 0.3
30 [2,0] 0.3
30 [2,1] 0.3
30 [1,1] 0.3
30 [2,1] 0.3
30 [1,1] 0.3
30 [1,2] 0.3
30 [1,3] 0.3
30 [1,2] 0.3
30 [1,1] 0.3
30 [1,0] 0.3
30 [1,1] 0.3
30 [1,2] 0.3
30 [0,2] 0.3
30 [0,3] 0.3
30 [0,4] 0.3
30 [1,4] 0.3
30 [2,4] 0.3
30 [3,4] 0.3
30 [3,3] 0.3
30 [2,3] 0.3
30 [3,3] 0.3
30 [3,4] 0.3
30 [4,4] 0.3
30 [4,5] 0.3
30 [5,5] 0.3
30 [5,4] 0.3
30 [4,4] 0.3
30 [5,4] 0.3
30 [5,5] 0.3
30 [5,4] 0.3
30 [4,4] 0.3
30 [4,5] 0.3
30 [5,5] 0.3
30 [4,5] 0.3
30 [4,4] 0.3
30 [4,5] 0.3
30 [4,4] 0.3
30 [4,5] 0.3
30 [4,4] 0.3
30 [3,4] 0.3
30 [2,4] 0.3
30 [1,4] 0.3
30 [1,3] 0.3
30 [1,2] 0.3
30 [0,2] 0.3
30 [0,1] 0.3
30 [0,2] 0.3
30 [0,1] 0.3
30 [1,1] 0.3
30 [0,1] 0.3
30 [1,1] 0.3
30 [2,1] 0.3


sar.Planar.4          Wed Oct 16 19:25:27 1991          4
30 [3,1] 0.3
30 [3,2] 0.3
30 [2,2] 0.1


Test Case: Gradient


car.Gradient.3          Wed Oct 16 19:25:40 1991          1
Chaotic Annealing
Input File: Gradient.3
0.6 0.5 0.4 0.3 0.2 0.1
0.6 0.5 0.4 0.3 0.2 0.2
0.6 0.5 0.4 0.3 0.3 0.3
0.6 0.5 0.4 0.4 0.4 0.4
0.6 0.5 0.5 0.5 0.5 0.5
0.6 0.6 0.6 0.6 0.6 0.6
starting at [2][2]
Temperature    Position    Energy
100    [0,2]    0.4
100    [5,2]    0.6
100    [3,1]    0.5
100    [0,0]    0.6
100    [1,2]    0.4
100    [5,4]    0.6
100    [1,0]    0.6
100    [0,1]    0.5
100    [0,0]    0.6
100    [5,4]    0.6
100    [1,0]    0.6
100    [3,3]    0.4
100    [2,1]    0.5
100    [0,0]    0.6
100    [1,2]    0.4
100    [0,2]    0.4
100    [5,2]    0.6
100    [5,5]    0.6
100    [0,3]    0.3
100    [1,3]    0.3
100    [2,5]    0.3
100    [1,3]    0.3
100    [2,5]    0.3
100    [1,3]    0.3
100    [0,4]    0.2
100    [1,4]    0.2
100    [0,4]    0.2
100    [0,5]    0.1
100    [1,4]    0.2
100    [1,5]    0.2
100    [0,4]    0.2
100    [1,5]    0.2
100    [0,4]    0.2
100    [1,5]    0.2
30     [0,4]    0.2
30     [0,5]    0.1


sar.Gradient.3          Wed Oct 16 19:23:24 1991
[This simulated annealing printout is illegible in the source scan.]

Test Case: Deep_Wells


car.Deep_Wells.1          Wed Oct 16 19:52:26 1991          1
Chaotic Annealing
Input File: Deep_Wells
0.9 0.9 0.2 0.9 0.9 0.9
0.9 0.9 0.9 0.9 0.9 0.9
0.3 0.9 0.9 0.9 0.4 0.9
0.9 0.9 0.9 0.9 0.9 0.9
0.9 0.9 0.9 0.9 0.9 0.9
0.9 0.9 0.9 0.1 0.9 0.9
starting at [2][3]
Temperature    Position    Energy
100    [3,0]    0.9
100    [3,1]    0.9
100    [4,2]    0.9
100    [3,5]    0.9
100    [0,0]    0.9
100    [2,2]    0.9
100    [4,2]    0.9
100    [0,0]    0.9
100    [2,2]    0.9
100    [5,4]    0.9
100    [2,2]    0.9
100    [3,3]    0.9
100    [1,4]    0.9
100    [3,0]    0.9
100    [0,0]    0.9
100    [5,2]    0.9
100    [4,3]    0.9
100    [5,3]    0.1


car.Deep_Wells.2          Wed Oct 16 19:25:36 1991          1
Chaotic Annealing
Input File: Deep_Wells.2
0.9 0.9 0.2 0.9 0.9 0.9
0.9 0.9 0.9 0.9 0.9 0.9
0.3 0.9 0.9 0.9 0.4 0.9
0.9 0.9 0.9 0.9 0.9 0.9
0.9 0.9 0.9 0.9 0.9 0.9
0.9 0.9 0.9 0.1 0.9 0.9
starting at [4][4]
Temperature    Position    Energy
100    [0,4]    0.9
100    [4,4]    0.9
100    [3,5]    0.9
100    [3,3]    0.9
100    [3,0]    0.9
100    [5,3]    0.1


car.Deep_Wells.3          Wed Oct 16 19:25:36 1991          1
Chaotic Annealing
Input File: Deep_Wells.3
0.9 0.9 0.2 0.9 0.9 0.9
0.9 0.9 0.9 0.9 0.9 0.9
0.3 0.9 0.9 0.9 0.4 0.9
0.9 0.9 0.9 0.9 0.9 0.9
0.9 0.9 0.9 0.9 0.9 0.9
0.9 0.9 0.9 0.1 0.9 0.9
starting at [2][2]
Temperature    Position    Energy
100    [0,2]    0.2
100    [0,1]    0.9
100    [0,2]    0.2
100    [5,3]    0.1


car.Deep_Wells.4          Wed Oct 16 19:25:37 1991          1
Chaotic Annealing
Input File: Deep_Wells.4
0.9 0.9 0.2 0.9 0.9 0.9
0.9 0.9 0.9 0.9 0.9 0.9
0.3 0.9 0.9 0.9 0.4 0.9
0.9 0.9 0.9 0.9 0.9 0.9
0.9 0.9 0.9 0.9 0.9 0.9
0.9 0.9 0.9 0.1 0.9 0.9
starting at [5][0]
Temperature    Position    Energy
100    [2,2]    0.9
100    [3,1]    0.9
100    [4,2]    0.9
100    [3,5]    0.9
100    [4,2]    0.9
100    [3,5]    0.9
100    [4,2]    0.9
100    [3,5]    0.9
100    [4,2]    0.9
100    [1,4]    0.9
100    [2,1]    0.9
100    [4,2]    0.9
100    [3,5]    0.9
100    [5,3]    0.1


car.Deep_Wells.5          Wed Oct 16 19:25:38 1991          1
Chaotic Annealing
Input File: Deep_Wells.5
0.9 0.9 0.2 0.9 0.9 0.9
0.9 0.9 0.9 0.9 0.9 0.9
0.3 0.9 0.9 0.9 0.4 0.9
0.9 0.9 0.9 0.9 0.9 0.9
0.9 0.9 0.9 0.9 0.9 0.9
0.9 0.9 0.9 0.1 0.9 0.9
starting at [0][2]
Temperature    Position    Energy
100    [5,3]    0.1


[The first simulated annealing Deep_Wells printout (presumably sar.Deep_Wells.1) is illegible in the source scan.]


sar.Deep_Wells.2          Wed Oct 16 19:25:20 1991          1
Simulated Annealing
Input File: Deep_Wells.2
0.9 0.9 0.2 0.9 0.9 0.9
0.9 0.9 0.9 0.9 0.9 0.9
0.3 0.9 0.9 0.9 0.4 0.9
0.9 0.9 0.9 0.9 0.9 0.9
0.9 0.9 0.9 0.9 0.9 0.9
0.9 0.9 0.9 0.1 0.9 0.9
starting at [4][4]
Temperature    Position    Energy
100    [5,4]    0.9
100    [5,3]    0.1


sar.Deep_Wells.3          Wed Oct 16 19:25:21 1991          1
Simulated Annealing
Input File: Deep_Wells.3
0.9 0.9 0.2 0.9 0.9 0.9
0.9 0.9 0.9 0.9 0.9 0.9
0.3 0.9 0.9 0.9 0.4 0.9
0.9 0.9 0.9 0.9 0.9 0.9
0.9 0.9 0.9 0.9 0.9 0.9
0.9 0.9 0.9 0.1 0.9 0.9
starting at [2][2]
Temperature    Position    Energy
100    [1,2]    0.9
100    [0,2]    0.2


sar.Deep_Wells.4          Wed Oct 16 19:25:22 1991
[This simulated annealing printout is illegible in the source scan.]


sar.Deep_Wells.5          Wed Oct 16 19:25:22 1991          1
Simulated Annealing
Input File: Deep_Wells.5
0.9 0.9 0.2 0.9 0.9 0.9
0.9 0.9 0.9 0.9 0.9 0.9
0.3 0.9 0.9 0.9 0.4 0.9
0.9 0.9 0.9 0.9 0.9 0.9
0.9 0.9 0.9 0.9 0.9 0.9
0.9 0.9 0.9 0.1 0.9 0.9
starting at [0][2]
Temperature    Position    Energy


Test Case: Barrier


car.Barrier.3          Wed Oct 16 19:25:33 1991          1
Chaotic Annealing
Input File: Barrier.3
0.4 0.2 0.9 0.9 0.3 0.2
0.5 0.3 0.9 0.9 0.6 0.4
0.9 0.9 0.9 0.9 0.9 0.9
0.9 0.9 0.9 0.9 0.9 0.9
0.8 0.7 0.9 0.9 0.5 0.3
0.3 0.3 0.9 0.9 0.1 0.6
starting at [2][2]
Temperature    Position    Energy
100    [0,2]    0.9
100    [4,4]    0.5
100    [0,0]    0.4
100    [5,4]    0.1
100    [0,4]    0.3
100    [0,1]    0.2
100    [5,2]    0.9
100    [3,5]    0.9
100    [4,2]    0.9
100    [5,1]    0.3
100    [0,4]    0.3
100    [0,1]    0.2


sar.Barrier.3          Wed Oct 16 19:25:17 1991          1
Simulated Annealing
Input File: Barrier.3
0.4 0.2 0.9 0.9 0.3 0.2
0.5 0.3 0.9 0.9 0.6 0.4
0.9 0.9 0.9 0.9 0.9 0.9
0.9 0.9 0.9 0.9 0.9 0.9
0.8 0.7 0.9 0.9 0.5 0.3
0.3 0.3 0.9 0.9 0.1 0.6
starting at [2][2]
Temperature    Position    Energy
100    [1,2]    0.9
100    [0,2]    0.9
100    [0,1]    0.2
100    [0,0]    0.4
100    [0,1]    0.2
100    [1,1]    0.3
100    [0,1]    0.2
100    [1,1]    0.3
100    [1,0]    0.5
100    [1,1]    0.3
100    [0,1]    0.2
100    [1,1]    0.3
100    [0,1]    0.2
100    [1,1]    0.3
100    [0,1]    0.2
100    [1,1]    0.3
100    [0,1]    0.2
100    [1,1]    0.3
100    [0,1]    0.2


Test Case: Random


car.Random.2
Wed Oct 16 19:25:29 1991
1
Chaotic Annealing
Input File: Random.2
0.913924 0.348721 0.821505 0.187676 0.527701 0.263328
0.226258 0.767283 0.540483 0.534554 0.580722 0.492628
0.569488 0.927739 0.217199 0.512075 0.975056 0.493052
0.749735 0.0883754 0.467121 0.815091 0.0310712 0.917721
0.632113 0.0300077 0.0268785 0.483891 0.76579 0.712305
0.483099 0.198115 0.530175 0.533953 0.770352 0.35417
starting at [4][4]
Temperature Position Energy
100 [5,1] 0.198115
100 [3,1] 0.0883754
100 [4,2] 0.0268785
100 [3,1] 0.0883754
100 [4,2] 0.0268785
100 [3,1] 0.0883754
100 [4,2] 0.0268785
100 [3,1] 0.0883754
100 [4,2] 0.0268785
30 [3,1] 0.0883754
30 [4,2] 0.0268785
30 [3,1] 0.0883754
30 [4,2] 0.0268785
30 [3,1] 0.0883754
30 [4,2] 0.0268785
30 [3,1] 0.0883754
14 [4,2] 0.0268785
10.16 [3,1] 0.0883754
10.16 [4,2] 0.0268785
10 [3,1] 0.0883754
10 [4,2] 0.0268785
10 [3,1] 0.0883754
10 [4,2] 0.0268785


jar.Random. 2
Wed Oct 16 19:25:13 1991
1
Simulated Annealing
Input File: Random.2
0.913924 0.348721 0.821505 0.187676 0.527701 0.263328
0.226258 0.767283 0.540483 0.534554 0.580722 0.492628
0.569488 0.927739 0.217199 0.512075 0.975056 0.493052
0.749735 0.0883754 0.467121 0.815091 0.0310712 0.917721
0.632113 0.0300077 0.0268785 0.483891 0.76579 0.712305
0.483099 0.198115 0.530175 0.533953 0.770352 0.35417
starting at [4][4]
Temperature Position Energy
100 [5,4] 0.770352
100 [5,3] 0.533953
100 [5,2] 0.530175
100 [5,1] 0.198115
100 [5,0] 0.483099
100 [5,1] 0.198115
100 [4,1] 0.0300077
100 [4,2] 0.0268785
100 [4,1] 0.0300077
100 [5,1] 0.198115
100 [5,2] 0.530175
100 [5,1] 0.198115
100 [4,1] 0.0300077
100 [3,1] 0.0883754
100 [4,1] 0.0300077
100 [3,1] 0.0883754
100 [4,1] 0.0300077
100 [3,1] 0.0883754
100 [4,1] 0.0300077
100 [3,1] 0.0883754
100 [4,1] 0.0300077
100 [5,1] 0.198115
100 [4,1] 0.0300077
100 [3,1] 0.0883754
100 [4,1] 0.0300077
100 [4,2] 0.0268785
100 [4,1] 0.0300077
100 [4,2] 0.0268785
100 [4,1] 0.0300077
100 [3,1] 0.0883754
100 [4,1] 0.0300077
100 [3,1] 0.0883754
100 [4,1] 0.0300077
100 [3,1] 0.0883754
100 [4,1] 0.0300077
100 [3,1] 0.0883754
100 [4,1] 0.0300077
100 [4,2] 0.0268785
30 [4,1] 0.0300077
30 [4,2] 0.0268785
30 [4,1] 0.0300077
30 [4,2] 0.0268785
30 [4,1] 0.0300077
30 [4,2] 0.0268785
30 [4,1] 0.0300077
30 [4,2] 0.0268785
30 [4,1] 0.0300077
30 [4,2] 0.0268785
30 [4,1] 0.0300077
30 [4,2] 0.0268785
30 [4,1] 0.0300077
30 [4,2] 0.0268785


sar.Random.2
Wed Oct 16 19:25:13 1991 2
30 [4,1] 0.0300077
30 [4,2] 0.0268785
30 [4,1] 0.0300077
14 [4,2] 0.0268785
14 [4,1] 0.0300077
14 [4,2] 0.0268785
14 [4,1] 0.0300077
14 [4,2] 0.0268785
14 [4,1] 0.0300077
14 [4,2] 0.0268785
14 [4,1] 0.0300077
14 [4,2] 0.0268785
14 [4,1] 0.0300077
14 [4,2] 0.0268785
14 [4,1] 0.0300077
14 [4,2] 0.0268785
14 [4,1] 0.0300077
14 [4,2] 0.0268785
14 [4,1] 0.0300077
14 [4,2] 0.0268785
14 [4,1] 0.0300077
14 [4,2] 0.0268785
14 [4,1] 0.0300077
10.8 [4,2] 0.0268785
10.8 [4,1] 0.0300077
10.8 [4,2] 0.0268785
10.8 [4,1] 0.0300077
10.8 [4,2] 0.0268785
10.8 [4,1] 0.0300077
10.8 [4,2] 0.0268785
10.8 [4,1] 0.0300077
10.8 [4,2] 0.0268785
10.8 [4,1] 0.0300077
10.8 [4,2] 0.0268785
10.8 [4,1] 0.0300077
10.8 [4,2] 0.0268785
10.8 [4,1] 0.0300077
10.8 [4,2] 0.0268785
10.8 [4,1] 0.0300077
10.16 [4,2] 0.0268785
10.16 [4,1] 0.0300077
10.16 [4,2] 0.0268785
10.16 [4,1] 0.0300077
10.16 [4,2] 0.0268785
10.16 [4,1] 0.0300077
10.16 [4,2] 0.0268785
10.16 [4,1] 0.0300077
10.16 [4,2] 0.0268785
10.16 [4,1] 0.0300077
10.16 [4,2] 0.0268785
10.16 [4,1] 0.0300077
10.16 [4,2] 0.0268785
10.16 [4,1] 0.0300077
10.16 [4,2] 0.0268785
10.16 [4,1] 0.0300077
10.032 [4,2] 0.0268785
10.032 [4,1] 0.0300077
10.032 [4,2] 0.0268785
10.032 [4,1] 0.0300077
10.032 [4,2] 0.0268785
10.032 [4,1] 0.0300077
10.032 [4,2] 0.0268785
10.032 [4,1] 0.0300077
10.032 [4,2] 0.0268785


sar.Random.2          Wed Oct 16 19:25:13 1991          3
10.032 [4,1] 0.0300077
10.032 [4,2] 0.0268785
10.032 [4,1] 0.0300077
10.032 [4,2] 0.0268785
10.032 [4,1] 0.0300077
10.032 [4,2] 0.0268785
10.032 [4,1] 0.0300077
10.032 [4,2] 0.0268785
10.0064 [4,1] 0.0300077
10.0064 [4,2] 0.0268785
10.0064 [4,1] 0.0300077
10.0064 [4,2] 0.0268785
10.0064 [4,1] 0.0300077
10.0064 [4,2] 0.0268785
10.0064 [4,1] 0.0300077
10.0064 [4,2] 0.0268785
10.0064 [4,1] 0.0300077
10.0064 [4,2] 0.0268785
10.0064 [4,1] 0.0300077
10.0064 [4,2] 0.0268785
10.0064 [4,1] 0.0300077
10.0064 [4,2] 0.0268785
10.0064 [4,1] 0.0300077
10.0064 [4,2] 0.0268785
10.0064 [4,1] 0.0300077
10.0064 [4,2] 0.0268785
10.0013 [4,1] 0.0300077
10.0013 [4,2] 0.0268785
10.0013 [4,1] 0.0300077
10.0013 [4,2] 0.0268785
10.0013 [4,1] 0.0300077
10.0013 [4,2] 0.0268785
10.0013 [4,1] 0.0300077
10.0013 [4,2] 0.0268785
10.0013 [4,1] 0.0300077
10.0013 [4,2] 0.0268785
10.0013 [4,1] 0.0300077
10.0013 [4,2] 0.0268785
10.0013 [4,1] 0.0300077
10.0013 [4,2] 0.0268785
10.0013 [4,1] 0.0300077
10.0013 [4,2] 0.0268785
10.0013 [4,1] 0.0300077
10.0013 [4,2] 0.0268785
10.0013 [4,1] 0.0300077
10.0013 [4,2] 0.0268785
10.0013 [4,1] 0.0300077
10.0013 [4,2] 0.0268785
10.0013 [4,1] 0.0300077
10.0013 [4,2] 0.0268785
10.0013 [4,1] 0.0300077
10.0013 [4,2] 0.0268785
10.0003 [4,1] 0.0300077
10.0003 [4,2] 0.0268785
10.0003 [4,1] 0.0300077
10.0003 [4,2] 0.0268785
10.0003 [4,1] 0.0300077
10.0003 [4,2] 0.0268785
10.0003 [4,1] 0.0300077
10.0003 [4,2] 0.0268785
10.0003 [4,1] 0.0300077
10.0003 [4,2] 0.0268785
10.0003 [4,1] 0.0300077
10.0003 [4,2] 0.0268785


sar.Random.2          Wed Oct 16 19:25:13 1991          (continued)
[The remaining pages of this printout, apparently continuing the oscillation between [4,1] and [4,2] as the temperature approaches 10, are illegible in the source scan.]


Test Case: Large (100 x 100) Random


lcar.LR.1          Wed Oct 16 20:54:17 1991          1
Chaotic Annealing
Input File: LargeRandom
starting at [50][50]
Temperature    Position    Energy
100    [21,5]     0.445972
100    [23,34]    0.187036
100    [56,22]    0.188112
100    [58,8]     0.246369
100    [35,83]    0.0257906
100    [1,28]     0.979458
100    [14,52]    0.922975
100    [52,6]     0.611582
100    [29,13]    0.481102
100    [65,49]    0.361476
100    [97,54]    0.102722
100    [43,21]    0.130998
100    [76,6]     0.066132
100    [81,91]    0.12104
100    [90,86]    0.17034
100    [35,50]    0.259044
100    [13,83]    0.0737675
100    [84,73]    0.212232
100    [5,93]     0.0482694
100    [25,44]    0.0462051
100    [99,33]    0.159781
100    [44,82]    0.137083
100    [66,9]     0.190252
100    [99,92]    0.313383
100    [11,73]    0.0159165
100    [61,41]    0.22034
100    [13,89]    0.149786
100    [1,30]     0.346586
100    [1,74]     0.0612043
100    [25,80]    0.0542816
100    [18,92]    0.0415741
100    [20,16]    0.0769786
100    [21,55]    0.253782
100    [33,54]    0.171921
100    [25,30]    0.246139
100    [48,14]    0.233077
100    [0,8]      0.155935
100    [48,96]    0.00134643


lsar.LR.1          Wed Oct 16 21:02:49 1991          1
Simulated Annealing
Input File: LargeRandom
starting at [50][50]
Temperature    Position    Energy
100 [51,50] 0.912663
100 [51,51] 0.339182
100 [50,51] 0.536128
100 [51,51] 0.339182
100 [50,51] 0.536128
100 [51,51] 0.339182
100 [51,52] 0.52686
100 [51,51] 0.339182
100 [51,52] 0.52686
100 [51,51] 0.339182
100 [50,51] 0.536128
100 [50,50] 0.884589
100 [49,50] 0.0836566
100 [49,49] 0.220331
100 [49, 50] 0.0836566
100 [49,49] 0.220331
100 [48,49] 0.0145141
100 [48,48] 0.00962524
100 [48,49] 0.0145141
100 [48,48] 0.00962524
100 [48,49] 0.0145141
100 [48,50] 0.38762
100 [49, 50] 0.0836566
100 [49,49] 0.220331
100 [48,49] 0.0145141
100 [48,48] 0.00962524
100 [47,48] 0.0577258
100 [48,48] 0.00962524
100 [47,48] 0.0577258
100 [46, 48] 0.00496464
100 [47,48] 0.0577258
100 [47,47] 0.185876
100 [46, 47] 0.197532
100 [46,48] 0.00496464
100 [46, 47] 0.197532
100 [46,48] 0.00496464
100 [47,48] 0.0577258
100 [47,47] 0.185876
100 [47,48] 0.0577258
100 [48,48] 0.00962524
100 [48,49] 0.0145141
100 [49,49] 0.220331
100 [49,50] 0.0836566
100 [49,49] 0.220331
100 [49,50] 0.0836566
100 [48,50] 0.38762
100 [49, 50] 0.0836566
100 [49, 49] 0.220331
100 [49,50] 0.0836566
100 [49,49] 0.220331
100 [48, 49] 0.0145141
100 [48,48] 0.00962524
100 [47, 48] 0.0577258
100 [48,48] 0.00962524
100 [48,49] 0.0145141
100 [48,48] 0.00962524
100 [47,48] 0.0577258
100 [48,48] 0.00962524


lsar.LR.1          Wed Oct 16 21:02:49 1991          (final page)
10 [48,49] 0.0145141
10 [48,48] 0.00962524
10 [48,49] 0.0145141
10 [48,48] 0.00962524
10 [48,49] 0.0145141
10 [48,48] 0.00962524
10 [48,49] 0.0145141
10 [48,48] 0.00962524
10 [48,49] 0.0145141
10 [48,48] 0.00962524
10 [48,49] 0.0145141
10 [48,48] 0.00962524
10 [48,49] 0.0145141
10 [48,48] 0.00962524
10 [48,49] 0.0145141
10 [48,48] 0.00962524
10 [48,49] 0.0145141
10 [48,48] 0.00962524
10 [48,49] 0.0145141