An ant colony approach to the bandwidth packing problem

Material Information

An ant colony approach to the bandwidth packing problem
Smith, Aaron W
Publication Date:
Physical Description:
47 leaves : ; 28 cm


Subjects / Keywords:
Routers (Computer networks) ( lcsh )
Data compression (Telecommunication) ( lcsh )
Telecommunication systems ( lcsh )
Integer programming ( lcsh )
Data compression (Telecommunication) ( fast )
Integer programming ( fast )
Routers (Computer networks) ( fast )
Telecommunication systems ( fast )
bibliography ( marcgt )
theses ( marcgt )
non-fiction ( marcgt )


Includes bibliographical references (leaves 44-45).
General Note:
Department of Mathematical and Statistical Sciences
Statement of Responsibility:
by Aaron W. Smith.

Record Information

Source Institution:
University of Colorado Denver
Holding Location:
Auraria Library
Rights Management:
All applicable rights reserved by the source institution and holding location.
Resource Identifier:
45210563 ( OCLC )
LD1190.L622 2000m .S55 ( lcc )

Full Text
Aaron W. Smith
B.S., Eastern Oregon University, 1996
A thesis submitted to the
University of Colorado at Denver
in partial fulfillment
of the requirements for the degree of
Master of Science
Applied Mathematics

This thesis for the Master of Science
degree by
Aaron W. Smith
has been approved

Smith, Aaron W. (M.S., Applied Mathematics)
An Ant Colony Approach to the Bandwidth Packing Problem
Thesis directed by Professor Harvey J. Greenberg
This thesis presents an Ant Colony Optimization (ACO) approach to the Band-
width Packing Problem (BWP). BWP is a combinatorially difficult problem
arising in the area of telecommunications involving selecting and routing a
set of calls in a capacitated network. BWP consists of finding an assignment
which maximizes profit while satisfying network bandwidth constraints. Sev-
eral approaches to BWP are described including an implementation of ACO.
ACO is a metaheuristic based on the foraging behavior of ant colonies utiliz-
ing both local and historic data to develop solutions to the problem at hand.
A description of the algorithm as well as empirical results are given. These
results compare favorably to those found by other methods.
This abstract accurately represents the content of the candidate's thesis. I
recommend its publication.

Special thanks to Harvey Greenberg for all of his effort and guidance. Also
thanks to Tony Cox and Mark Parker for their assistance.

Figures............................................................. vi
Tables............................................................. vii
1 Introduction.................................................... 1
2 The Bandwidth Packing Problem................................... 3
3 The Metaheuristic Approach .................................... 12
3.1 Genetic Algorithms............................................. 13
3.2 Tabu Search.................................................... 17
3.2.1 Tabu Search I............................................... 20
3.2.2 Tabu Search II.............................................. 21
4 Ant Colony Optimization........................................ 24
4.1 Evolution of Parameters ....................................... 32
5 Conclusions.................................................... 43
Glossary of Symbols and Notation.................................... 46

2.1 Simple Network.................................................. 4
3.1 Roulette Wheel Parent Selection................................ 15
3.2 Uniform Order-Based Crossover.................................. 16
3.3 Scramble Sublist Mutation...................................... 17
3.4 Simple Neighborhood Search .................................... 18
3.5 Network for Neighborhood Search............................... 19
4.1 Ants Finding The Shortest Path................................. 24
4.2 Ant Colony Algorithm........................................... 26
4.3 Network for Test Problem ...................................... 34
4.4 Variation of m................................................. 37
4.5 Variation of α................................................. 38
4.6 Variation of β................................................. 39
4.7 Variation of (1 − ρ)........................................... 40
4.8 Variation of q0 ............................................... 41
4.9 Variation of a................................................. 42

2.1 Simple Call Table............................................. 5
2.2 Arc Definitions............................................... 5
2.3 Call Table for IP............................................. 9
3.1 Call Table for Neighborhood Search .......................... 19
4.1 Comparison Results........................................... 33
4.2 Call Table for Test Problem.................................. 35

1. Introduction
Communications networks are designed for the transfer of information. This
transfer of information must be done efficiently so as to maximize the po-
tential of the network. Telecommunications networks need to efficiently route
calls. These calls are generally requested at unknown times and have unknown
bandwidth demands. The task of routing calls becomes increasingly difficult
as the bandwidth demands of the calls begin to reach or exceed the bandwidth
capacity of the network. Decisions must then be made as to which calls to
route and which calls to ignore. In a centrally controlled network, call requests
are received by a control center which then makes the decision as to whether
or not to place the call as well as how to route the calls it decides to place.
This process can be done at discrete intervals using the current state of the
network and the current call request queue. The results can be implemented
and the next state of the network and call queue evaluated. In a non-centrally
controlled distributed network, each network node receives call requests and
must make the placement and routing decisions.
This thesis considers only the centrally controlled case. The non-centrally
controlled case has also been studied [2, 3]. For our case, the network state,
as far as the amount of available bandwidth of each edge, as well as the set
of call requests, is assumed to be known. The network is defined by its node
and edge sets, with each edge having a limited bandwidth capacity and a cost
per unit bandwidth used. Each call is defined by its start node, destination
node, bandwidth requirement, and revenue associated with placing the call.
When attempting to route the calls through the network, two related questions
arise that the control center must address: How does one select a subset of
calls to place? and once selected, How does one route those calls to maximize
profit while satisfying the network bandwidth constraints? Besides a general
integer programming approach [20, 21], two metaheuristic approaches have
been reported with varied success: tabu search [16,1] and genetic algorithm [8]
techniques. This thesis introduces another metaheuristic approach based on
ant colony optimization. While ant colony optimization is based on route
finding problems, this thesis shows how the ant colony optimization paradigm
can be used to find good solutions to the bandwidth packing problem.
The remainder of this thesis is organized as follows: chapter 2 introduces

the bandwidth packing problem analytically, giving two integer programming
approaches to finding an optimal solution; chapter 3 introduces two meta-
heuristic approaches, one using genetic algorithms, and the other, tabu search;
chapter 4 introduces ant colony optimization, describes an implementation as
applied to the bandwidth packing problem, and gives empirical support for
various parameters used to drive the algorithm.

2. The Bandwidth Packing Problem
In this chapter, the bandwidth packing problem (BWP) is formulated as an
integer program (IP). The objective is to maximize total profit, subject to
constraints that require calls to be routed using feasible paths and constraints
that prevent the bandwidth capacities of each edge from being exceeded. BWP
is defined by the following data:
b_e = the bandwidth capacity of edge e;
c_e = the cost for each unit of bandwidth used on edge e;
v_i = the revenue associated with placing call i;
r_i = the bandwidth requirement for call i;
s_i = the start node of call i;
t_i = the terminal node of call i;
N = the set of network nodes;
E = the set of network edges; and
Q = the set of calls.
An IP formulation that does not rely on the paths being generated before-
hand has not, to our knowledge, been reported previously in the literature. A
so-called node formulation must include constraints that force the placement of
calls using feasible routes. To construct these routing constraints, the notion
of a call's direction across an edge is needed. To accomplish this, we define
two arcs for each edge in the network, one for each direction. We use e⁺ and
e⁻ to denote the two arcs associated with edge e. The additional arc notation is:
Ē = the set of all arcs = {e⁺ | e ∈ E} ∪ {e⁻ | e ∈ E};
H(j) = the set of arcs that terminate at node j; and
T(j) = the set of arcs that originate from node j.
The decision variables under this formulation are:
x_ie = 1 if call i uses edge e, 0 otherwise;
x_ia = 1 if call i uses arc a, 0 otherwise; and
y_i = 1 if call i is routed, 0 otherwise.
The IP can now be formulated as follows:
maximize  ∑_{i∈Q} v_i y_i − ∑_{i∈Q} ∑_{e∈E} c_e r_i x_ie   (2.4)

subject to

∑_{a∈T(j)} x_ia − ∑_{a∈H(j)} x_ia = { y_i if j = s_i;  −y_i if j = t_i;  0 if j ≠ s_i, t_i },  i ∈ Q, j ∈ N,  (2.5)

∑_{i∈Q} x_ie r_i ≤ b_e,  e ∈ E,  (2.6)

x_{ie⁺} + x_{ie⁻} = x_ie,  i ∈ Q, e ∈ E,  (2.7)

x_ie, x_ia, y_i ∈ {0, 1},  e ∈ E, a ∈ Ē, i ∈ Q.  (2.8)
To see how this formulation solves BWP, each constraint is examined using
the network described in Figure 2.1 together with the call list described in
Table 2.1. Table 2.2 illustrates the induced arcs.
Figure 2.1: Simple Network
The objective function (2.4) yields the profit for a given routing schedule:
the revenue of all calls placed minus the cost for each edge used to place those
calls. For our simple example, the objective function is the following:
200 × y_1 + 100 × y_2 − (x_{11} × 10 × 2 + x_{12} × 10 × 1 + ... + x_{23} × 20 × 1).  (2.9)

Call From To Demand Revenue
1 1 2 10 200
2 1 2 20 100
Table 2.1: Simple Call Table
Arc Start Node Terminal Node
1⁺ 1 2
1⁻ 2 1
2⁺ 2 3
2⁻ 3 2
3⁺ 3 1
3⁻ 1 3
Table 2.2: Arc Definitions
The routing constraint (2.5) requires calls to be placed using a feasible
path. This is done by creating a path from s_i to t_i. The start node, s_i, for each
placed call must have one more arc leaving it than entering it. Analogously,
the destination node, t_i, for each placed call must have one more arc entering
it than leaving it. All other nodes in the network must be balanced, or have
an equal number of arcs entering and leaving them. In the context of our
example, looking at call 1 and node 3, we have

(x_{1,2⁻} + x_{1,3⁺}) − (x_{1,2⁺} + x_{1,3⁻}) = 0,  (2.10)

where T(3) = {2⁻, 3⁺} and H(3) = {2⁺, 3⁻}. Thus, if call 1 uses arc 2⁻, it
must also use one of the arcs 2⁺ or 3⁻. This constraint does not explicitly
prevent the occurrence of multiple disjoint loops. These so-called sub-tours,
however, would not affect our problem. If a sub-tour exists containing edges
with positive costs associated with them, the objective prevents the sub-tour
from being generated. If ce = 0 for all edges in the sub-tour, only the band-
width capacities would be affected. If another profitable call could be placed
using this bandwidth, the objective causes the placement of the profitable
call instead of generating the sub-tour. Thus, the only case in which sub-tours
would be generated is when they have no effect on the objective value, in which
case we can simply use the path starting from the start node and ending at
the terminal node and ignore the sub-tours.

The network bandwidth constraint (2.6) prevents the network capacity
from being exceeded. A network's capacity is defined in terms of the bandwidth
capacities for each of its edges. Each edge, e, has a finite bandwidth capacity,
be, that limits the sum of the bandwidth of all calls using it. The sum of all
calls using each edge must not exceed this bandwidth capacity.
Looking at edge 1, the sum of the bandwidth used by calls 1 and 2 must
not exceed 20, or
(x_{11} × 10 + x_{21} × 20) ≤ 20.  (2.11)
Thus, call 1 or 2 can be placed using edge 1, but not both.
Constraint (2.7) forces the variable associated with edge e to be connected
to its associated arcs. Call 1 using arc 1⁺ or 1⁻ would cause x_{11} to reflect that
placement. The integrality constraint (2.8) prevents the partial placement
and splitting of calls.
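To see the formulation in action, the toy instance can be checked by brute-force enumeration. The sketch below is illustrative Python, not code from the thesis; the capacities b_2 = b_3 = 10 are assumptions (the text states only b_1 = 20), chosen so the instance behaves as described in the text, where call 2 fits only on edge 1.

```python
from itertools import product

# Edge data for Figure 2.1: edge -> (cost c_e, capacity b_e).
# Only b_1 = 20 is stated in the text; b_2 = b_3 = 10 are assumed here.
EDGES = {1: (2, 20), 2: (1, 10), 3: (1, 10)}
# Calls of Table 2.1: call -> (demand r_i, revenue v_i).
CALLS = {1: (10, 200), 2: (20, 100)}
# Routing options between nodes 1 and 2: skip the call, edge 1, or edges 2 and 3.
OPTIONS = [None, (1,), (2, 3)]

def profit(routing):
    """Objective (2.4) for one call -> path assignment; None if (2.6) is violated."""
    used = {e: 0 for e in EDGES}
    total = 0
    for call, path in routing.items():
        if path is None:
            continue
        r, v = CALLS[call]
        total += v
        for e in path:
            used[e] += r
            total -= EDGES[e][0] * r
    if any(used[e] > EDGES[e][1] for e in EDGES):
        return None  # a bandwidth capacity is exceeded
    return total

feasible = (profit({1: p1, 2: p2}) for p1, p2 in product(OPTIONS, OPTIONS))
best = max(p for p in feasible if p is not None)
```

Enumeration gives best = 240, achieved by routing call 1 on edges 2 and 3 and call 2 on edge 1.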
This formulation, while theoretically valid, has limited practical signifi-
cance when it comes to actually solving an instance of BWP. This is due to
the sheer size of the problem. For the simple example above, the routing
constraint (2.5) yields |Q| × |N| = 6 equations; the network capacity con-
straint (2.6) yields |E| = 3 inequalities; constraint (2.7) yields |Q| × |E| = 6
equations; and there are |Q| × |E| + |Q| × 2|E| + |Q| = 20 variables, all of
which are binary. Thus, even this trivial example requires 15 constraints and
20 variables to solve. As the size of the network and the number of calls in
the call table increase, the number of constraints and the number of variables
increase rapidly. In problem instances where |N|, |E|, and |Q| are not small, it
is impractical to solve BWP directly. We look at two IP approaches to BWP
that attempt to overcome the size of the problem.
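The counts quoted above follow directly from the formulation; a small illustrative helper (not from the thesis) makes the arithmetic explicit:

```python
def ip_size(n_calls, n_nodes, n_edges):
    """Constraint and variable counts for the node formulation (2.4)-(2.8)."""
    routing = n_calls * n_nodes           # one equation (2.5) per call and node
    capacity = n_edges                    # one inequality (2.6) per edge
    linking = n_calls * n_edges           # one equation (2.7) per call and edge
    variables = (n_calls * n_edges        # x_ie
                 + n_calls * 2 * n_edges  # x_ia, two arcs per edge
                 + n_calls)               # y_i
    return routing + capacity + linking, variables
```

For the example instance, ip_size(2, 3, 3) returns (15, 20), the figures above.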
The first, by Parker and Ryan [21], uses column generation and branch-
and-bound techniques. They use a path formulation of the problem in their
approach. Two variables are used in this formulation:
x_ip = 1 if call i uses path p, 0 otherwise; and
y_i = 1 if call i is routed, 0 otherwise.

Also, let δ_ep be 1 if edge e is in path p and 0 otherwise. With P_i being the
set of feasible paths for call i, BWP can be formulated with the following IP:

maximize  ∑_{i∈Q} v_i y_i − ∑_{e∈E} c_e ∑_{i∈Q} ∑_{p∈P_i} δ_ep r_i x_ip   (2.14)

subject to

∑_{p∈P_i} x_ip = y_i,  i ∈ Q,  (2.15)

∑_{i∈Q} ∑_{p∈P_i} δ_ep r_i x_ip ≤ b_e,  e ∈ E,  (2.16)

x_ip, y_i ∈ {0, 1},  p ∈ P_i, i ∈ Q.  (2.17)
In words, the IP is to maximize revenue minus cost while placing each
call using at most one path and satisfying the bandwidth capacities for each
edge in the network. This formulation differs from the first IP formulation
in that it does not have a routing constraint. It relies on the set P_i being
generated a priori. This, in and of itself, is a difficult computational problem.
Removing this process from the IP problem and handling it using an efficient
route generating algorithm significantly reduces the size of the problem.
The number of variables x_ip can be very large if the number of feasible
paths for each call is large. The optimal solution, however, contains only one
path for each call placed. Solving the problem considering only these paths
would generate the same solution, but the problem to be solved would be
greatly reduced. Unfortunately, knowing which paths to omit is impossible
until the solution is found. However, the paths that are very costly, or that
contain a large number of edges, are less likely to be included in an optimal
solution. Solving a problem related to the problem above, one that considers
only paths that are likely to be included in the optimal solution, could be
used to generate approximations to the original problem. This solution can be
tested, using linear programming techniques, as to whether it solves the orig-
inal problem. If it does not, an iterative process of adding paths and solving
related sub-problems can be performed until it does. This is the motivation
behind column generation. Column generation is a linear programming tech-
nique used to solve problems with a large number of columns (variables) that
can be generated (added) as needed.
The first step in this approach to BWP is to solve the linear relaxation of
the original IP, denoted LP, using column generation. This column generation
implementation is done by solving a related sub-problem of LP, denoted LP',

by substituting S_i for P_i for each call i, where S_i ⊆ P_i. At the first iteration,
S_i = ∅. Paths are added to each S_i after each iteration as described below.
By linear programming duality theory, the solution to LP' solves LP if,
and only if, the solution is dual feasible. After solving an instance of LP',
feasibility is checked for each dual constraint. The dual constraints associated
with y_i are satisfied by the fact that these constraints are included in the dual
of LP'. Thus, only the dual constraints associated with the paths in P_i and
not in S_i need to be considered for each call i. Let z_i and w_e be the dual
variables associated with the two sets of constraints in LP, respectively. The
dual constraints are then

z_i + r_i ∑_{e∈E} δ_ep w_e ≤ r_i ∑_{e∈E} δ_ep c_e,  i ∈ Q, p ∈ P_i.  (2.18)

This can be expressed equivalently as

∑_{e∈E} δ_ep (c_e − w_e) ≥ z_i / r_i,  i ∈ Q, p ∈ P_i.  (2.19)
Feasibility needs to be verified for each dual constraint associated with each
x_ip not included in LP'. However, if the inequality is verified for the shortest
path not in S_i, relative to the weights (c_e − w_e), all other paths also satisfy
this constraint. By assumption, c_e > 0, and by duality, w_e ≤ 0, which yields
(c_e − w_e) > 0, so Dijkstra's algorithm can be used to find the shortest path. If
a shortest path for some call i does not satisfy the dual constraint, the path is
then added to S_i and LP' is resolved. When all dual constraints are satisfied,
the solution to LP' solves LP and branch-and-bound can then be applied to
generate a solution to the original IP.
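The pricing step of this column generation scheme can be sketched as follows. This is a hedged reconstruction, not the thesis's code: the data shapes are invented for illustration, the duals w_e and z_i are taken as given, and Dijkstra's algorithm is applied under the nonnegative reduced weights (c_e − w_e) that the text argues for.

```python
import heapq

def dijkstra(adj, src, dst):
    """Shortest path from src to dst; adj maps node -> [(neighbor, weight, edge_id)].
    Assumes nonnegative weights, as the text argues for (c_e - w_e)."""
    dist = {src: 0.0}
    prev = {}
    heap = [(0.0, src)]
    while heap:
        d, u = heapq.heappop(heap)
        if d > dist.get(u, float("inf")):
            continue
        for v, w, eid in adj[u]:
            nd = d + w
            if nd < dist.get(v, float("inf")):
                dist[v] = nd
                prev[v] = (u, eid)
                heapq.heappush(heap, (nd, v))
    path, node = [], dst
    while node != src:
        node, eid = prev[node]
        path.append(eid)
    return dist[dst], list(reversed(path))

def price_out(calls, edges, w, z):
    """One pricing pass: return the (call, path) columns whose dual constraint
    sum_e delta_ep (c_e - w_e) >= z_i / r_i is violated by the shortest path."""
    adj = {}
    for eid, (u, v, c) in edges.items():
        adj.setdefault(u, []).append((v, c - w[eid], eid))
        adj.setdefault(v, []).append((u, c - w[eid], eid))
    violated = []
    for i, (s, t, r) in calls.items():
        length, path = dijkstra(adj, s, t)
        if length < z[i] / r:  # dual constraint (2.19) fails: add this column
            violated.append((i, path))
    return violated

# Illustrative data: the triangle network, with all duals zero, so the
# reduced weights are just the edge costs (2, 1, 1).
EDGES = {1: (1, 2, 2), 2: (2, 3, 1), 3: (1, 3, 1)}  # edge: (u, v, c_e)
CALLS = {1: (1, 2, 10), 2: (1, 2, 20), 3: (2, 3, 5)}  # call: (s_i, t_i, r_i)
violated = price_out(CALLS, EDGES, {e: 0 for e in EDGES}, {i: 0 for i in CALLS})
```

With all duals zero every shortest-path length is positive, so price_out returns no violated columns and LP' solves LP, as in the worked example below.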
In a typical branch-and-bound approach, a non-integer x_ip would be chosen
to branch on by solving the two problems associated with forcing x_ip = 0 and
x_ip = 1, respectively. Forcing x_ip = 1 in this problem corresponds to placing
call i using path p. Equivalently, we can remove call i from the call list and
remove its associated bandwidth requirement from all edges contained in path
p. This creates no problems and the column generation technique described
above can be used to solve this new problem. Effecting x_ip = 0 by removing it
from S_i, on the other hand, creates problems because of the column generation
process used to solve LP. The removal of the column associated with path p by
removing it from S_i would be counteracted by the column generation process
immediately adding the column back. To avoid this, an alternate approach of

creating multiple branches at each step, one for each edge in path p, is used.
Each new branch redefines the problem by removing all paths in P_i that use
the edge associated with the branch. This prevents the path from being added
to S_i because it simply does not exist in P_i for the new problem.
To illustrate this IP approach, consider the problem defined by the network
described by Figure 2.1 and the call table defined by Table 2.3.
Call From To Demand Revenue
1 1 2 10 200
2 1 2 20 100
3 2 3 5 100
Table 2.3: Call Table for IP
Our decision variables are the following:
x_11 refers to placing call 1 using edge 1;
x_12 refers to placing call 1 using edges 2 and 3;
x_21 refers to placing call 2 using edge 1;
x_31 refers to placing call 3 using edge 2;
x_32 refers to placing call 3 using edges 1 and 3;
y_1 refers to call 1 being placed;
y_2 refers to call 2 being placed; and
y_3 refers to call 3 being placed.
At the first step,
y_1 = 0  (2.20)
y_2 = 0  (2.21)
y_3 = 0.  (2.22)
Feasibility now needs to be checked for the dual constraints. This corre-
sponds to verifying that the length of the shortest path, relative to the weights
(c_e − w_e), for each call, is less than the ratio z_i/r_i. The edge weights are:

Edge  (c_e − w_e)
1     2 − 20 = −18
2     1 − 10 = −9
3     1 − 10 = −9
All of the edge weights are negative so the length of any path must also
be negative. The ratio z_i/r_i = 0 for all i, so none of the dual constraints are
satisfied. The variables corresponding to the shortest path for each call are
x_12, x_21, and x_32. These paths are added to the appropriate S_i and the new
LP' solved. Solving yields:
x_12 = 1  (2.23)
x_21 = 1  (2.24)
x_32 = 0  (2.25)
y_1 = 1  (2.26)
y_2 = 1  (2.27)
y_3 = 0.  (2.28)
There are two variables whose associated dual constraints need to be veri-
fied: x_11 and x_31. The new edge weights are:
Edge  (c_e − w_e)
1     2 − 0 = 2
2     1 − 0 = 1
3     1 − 0 = 1
The length of any path must be positive and z_i = 0 for all i. Thus, all
paths satisfy the dual constraints and we have found the optimal solution to
the linear relaxation. This solution is also integer, so we have found the solution
to BWP. If there had been a non-integer variable at this stage, say x_32 = 1/2,
we would branch on x_32 = 1 or x_32 = 0. The branch corresponding to x_32 = 1
corresponds to placing call 3 using edges 1 and 3. This is done by removing
call 3 from the call table and subtracting its demand, r_3 = 5, from the capacities
of edges 1 and 3. Two branches are implemented to effect x_32 = 0, one for each
edge in the path. The first of these branches would remove all paths for call 3
that use edge 1 from P_3. The second branch would remove all paths from P_3
that use edge 3. Again, it is important that the paths are removed from P_3
and not S_3, as the
column generation phase would counteract the removal.

The second integer programming approach, taken by Park et al. [20], is
similar to the approach described above in that it uses the same formulation
and column generation technique, but it uses more complicated techniques
to satisfy the integrality constraints. I briefly describe these techniques and
provide references for a more detailed explanation.
The Park-Kang-Park algorithm begins by solving LP using the column
generation technique described above. Minimal cover inequalities that are
violated by the solution to LP are then generated. These minimal cover inequalities are
valid inequalities that are added to LP to remove the incumbent fractional
solution from the feasible region while retaining the integral solution to the
original IP problem. For explicit details on how these minimal cover inequali-
ties are generated, refer to Park et al. [20]. For a complete discussion on cov-
erings, minimal cover inequalities and valid inequalities, refer to Nemhauser
and Wolsey [18]. This new augmented linear problem is referred to as ALP.
ALP is then solved using column generation. Note that new dual variables
now exist for the added minimal cover constraints. When ALP is solved, new
minimal cover inequalities may result and are added to ALP. This process of
augmenting then solving ALP is continued until no minimal cover inequalities
can be found. If this solution is integral, it also solves BWP. If not, branch-
and-bound is implemented to satisfy the integrality constraints. If this integer
solution yields the same objective value as the solution to ALP, the integer
solution is optimal. If not, a branch-and-cut procedure is implemented either
to prove the branch-and-bound result is optimal or to provide the optimal
solution. This branch-and-cut routine is similar to branch-and-bound except
that it uses strong cutting-planes and column generation to solve the subprob-
lem associated with each node. Describing the cut used is beyond the scope
of this thesis and I refer readers interested in seeing this cutting process to
the original paper by Park et al. [20]. The problem of the column generation
step adding the column removed by the cutting-plane step is handled by stor-
ing variables that keep track of whether or not the path associated with that
column has been previously generated. If it has, the column associated with
the shortest path that has not been previously generated is added. After this
final step, the solution to BWP will have been found.

3. The Metaheuristic Approach
As a first approach in moving from IP techniques, which can produce optimal
solutions, to approximation methods, which may not, the idea of generating
lists in which to place the calls is a natural direction to take. A method that
produces solutions in this manner is considered an order-based heuristic. One
order-based heuristic is to order the calls based on the ratio of each call's
revenue to its bandwidth requirement and sequentially attempt to place each
call in the list. This would tend to place the calls with high revenues relative
to the cost of their placement.
After a list is generated, a method to choose the routing for each call
must be constructed. This routing heuristic could range from considering only
the shortest path to using an order-based heuristic to order the feasible paths
for each call. The more elaborate the heuristic, the more costly its imple-
mentation, with the hope that better solutions are generated. The greedy
order-based heuristic for calls, combined with the shortest-path-only routing
heuristic, generates a solution with very little effort but generally with low profit
as well. For any heuristic to be capable of generating the optimal solution
for any problem instance, it must be capable of generating all possible call
orderings as well as allowing for each call to be placed using any of its feasible
paths. Heuristics that neglect certain paths by considering only a subset of
a call's feasible paths run the risk of eliminating the optimal solution from the
searchable region.
Consider again the problem described by Figure 2.1 and Table 2.1. An
optimal solution is achieved by placing call 1 using edges 2 and 3, and call 2
using edge 1, with a profit of 240. Generating a call ordering using the revenue
to bandwidth requirement ratio described above yields the ordering (1, 2).
This ordering, along with the shortest-path-only path heuristic, generates a
solution to place call 1 using edge 1. After call 1 has been placed, call 2 cannot
be placed due to its bandwidth requirements and our profit is 180. No ordering
can produce the optimal solution using the shortest-path-only heuristic. No
heuristic that excludes the routing of call 1 using edges 2 and 3 can find the
optimal solution. Although it is conjectured that the ratio of the best solution
generated by a shortest path permutation to the optimal solution goes to one
with probability one [8], problems of any size can be generated where the best

permutation schedule performs arbitrarily poorly when only the shortest paths
are considered. Approximation methods that have been applied to BWP are
now considered.
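The greedy failure described above can be sketched in Python. This is an illustrative reconstruction, not code from the thesis: the capacities b_2 = b_3 = 10 are assumed (the text gives only b_1 = 20) so that call 2 fits only on edge 1, and ties between equally costly paths are broken toward fewer edges so that call 1 lands on edge 1, as in the text.

```python
# Edge data for Figure 2.1; b_2 = b_3 = 10 are assumed (only b_1 = 20 is stated).
EDGES = {1: (2, 20), 2: (1, 10), 3: (1, 10)}   # edge: (cost c_e, capacity b_e)
CALLS = {1: (10, 200), 2: (20, 100)}           # call: (demand r_i, revenue v_i)
PATHS = [(1,), (2, 3)]                         # feasible paths between nodes 1 and 2

def greedy_profit():
    """Order calls by v_i / r_i and place each on its cheapest feasible path.
    Ties between equally costly paths go to the path with fewer edges (an
    assumption; the text only says call 1 ends up on edge 1)."""
    order = sorted(CALLS, key=lambda i: CALLS[i][1] / CALLS[i][0], reverse=True)
    cap = {e: b for e, (c, b) in EDGES.items()}
    total = 0
    for i in order:
        r, v = CALLS[i]
        feasible = [p for p in PATHS if all(cap[e] >= r for e in p)]
        if not feasible:
            continue  # the call is skipped
        path = min(feasible, key=lambda p: (sum(EDGES[e][0] for e in p), len(p)))
        for e in path:
            cap[e] -= r
        total += v - r * sum(EDGES[e][0] for e in path)
    return total
```

Running greedy_profit() gives 180: call 1 takes edge 1, leaving too little bandwidth for call 2, exactly the shortfall against the optimum of 240 described above.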
The bandwidth packing problem is a member of a class of problems that are
considered NP-complete. The term NP-complete comes from computational
complexity theory, which was invented by Edmonds with the first major theo-
rem by Cook [6]. This theory categorizes problems in terms of the complexity
of algorithms used to solve them. It is conjectured that any algorithm de-
signed to solve an NP-complete problem, including BWP, requires a number
of computational steps that grows exponentially with the size of the problem.
Increasing the number of calls in the call list or increasing the size of the
network to support the calls, increases both the number of variables and the
number of constraints in either IP formulation of BWP. Thus, both approaches
described above, as with all algorithms designed to find the optimal solution to
BWP, require an exponential amount of time, in terms of the problem size, to
guarantee that an optimal solution is found. In cases where the optimal solution
is not needed, and a trade-off between computation time and quality of solu-
tion can be made, metaheuristics can be employed. Metaheuristics comprise
a class of approximation methods that apply to combinatorial optimization
problems, such as BWP, when exact algorithms are impractical.
Definition 1 [19] A metaheuristic is an iterative generation process that guides
a subordinate heuristic by combining intelligently different concepts for ex-
ploring and exploiting the search spaces using learning strategies to structure
information in order to find efficiently near-optimal solutions.
Two metaheuristic approaches to BWP are reported in the literature. The
next two sections present a brief introduction to each metaheuristic, followed
by a description of the implementation as applied to BWP.
3.1 Genetic Algorithms
A Genetic Algorithm (GA) is a metaheuristic that mimics the observed ability
of life to adapt successfully to its environment through generational evolution.
John Holland, the field's inventor, took this observed ability and turned it into
the basis for this highly successful metaheuristic. Life, through its reproductive
mechanisms, is able to change over generations to adapt to its environment.

While this process is not fully understood, some features that are generally
accepted and form the basis for GA are as follows:
Living beings are based on encoded chromosomes. Evolution takes place
on these encoded structures.
Natural selection is the process by which the chromosomes that develop
into successful members of the population pass on the traits that made
them successful, while less successful members are unable to pass on theirs.
Reproduction is the mechanism through which natural selection oper-
ates. Children possess chromosomes based on the recombination or
crossover of the chromosomes of their parents. Mutation can occur, where
pieces of the child's chromosome are generated randomly instead of de-
rived from its parents.
Evolution has no explicit memory of past generations.
At the most basic level, GA takes a population of chromosomes and evolves
them through a process of natural selection. Each chromosome represents an
encoded solution to the problem at hand with a certain fitness. This fitness
is used by the algorithm to determine the probability that a chromosome will
reproduce, and consequently pass on the traits that contributed to its high level
of fitness. The hope is that a relatively nondescript initial population evolves
into a population of encoded solutions that yield competitive solutions to
the problem being solved.
Each GA is defined by its construction of chromosomes, a metric for evalu-
ating a chromosome's fitness, and the means by which chromosomes reproduce
to form subsequent populations. There are no hard and fast rules for defining
these characteristics, and two different definitions could yield two drastically
different results. While many facets of GA exist, this section describes only
the mechanisms used to solve the bandwidth packing problem as described
in [8].
The chromosome used in this application of GA is simply an ordering of
the calls. This ordering is used to generate solutions to BWP by attempting to
place each call sequentially in the call list using the shortest feasible path for
that call. In this context, the shortest path is the path that is the least costly

1. Sum the fitnesses of all population members, call it F.
2. Generate a uniform random number, n, between 0 and F.
3. The first population member whose fitness, added to the fitnesses
of all preceding population members, exceeds n is used to parent
the next generation.
Figure 3.1: Roulette Wheel Parent Selection
in terms of the sum of the costs on its edges. If there is enough bandwidth to
support the placement of the call, it is placed; if not, it is skipped and the next
call in the list is attempted, until the end of the list is reached.
A chromosome's fitness is used to determine whether or not it is used to
parent the next generation. This GA implementation assigns fitnesses to each
chromosome in the following way. For a given parameter π, a list of fitness
values is generated of the form (1000, 1000π, 1000π², ...). These values are
assigned to the schedules so that the schedule that generates the greatest profit
receives the greatest fitness and the schedule associated with the least profit
is assigned the least fitness. These fitness values determine the probability
that a schedule is used to generate offspring in the next generation by means
of roulette wheel parent selection, which is described in Figure 3.1. As an
example of this selection process, consider a population of 10 members with
π = 0.9. The sum of the fitness values is approximately 6,513. If the number
2,319 is randomly generated, the third most fit chromosome would be chosen
(1000 + 1000 × 0.9 + 1000 × 0.9² = 2710 > 2319).
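The roulette wheel procedure of Figure 3.1 can be sketched as follows (the function name and driver code are illustrative, not part of the thesis):

```python
import random

def roulette_select(fitnesses, n=None):
    """Roulette wheel parent selection (Figure 3.1): draw n uniformly in
    [0, F) and return the first member whose cumulative fitness exceeds n."""
    F = sum(fitnesses)
    if n is None:
        n = random.uniform(0, F)
    cumulative = 0.0
    for i, f in enumerate(fitnesses):
        cumulative += f
        if cumulative > n:
            return i
    return len(fitnesses) - 1

# The example from the text: 10 members, pi = 0.9, random draw n = 2319.
fitnesses = [1000 * 0.9 ** k for k in range(10)]
print(round(sum(fitnesses)))               # 6513
print(roulette_select(fitnesses, n=2319))  # 2, i.e., the third most fit member
```

Members with larger fitness values occupy a wider slice of the cumulative range, so they are proportionally more likely to be selected.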
Once the parents have been chosen, the children are generated by means of
one of the two reproduction mechanisms used in this GA. The two reproduction
mechanisms are uniform order-based crossover and scramble sublist mutation,
both of which are described shortly. To decide which mechanism is applied,
each is assigned a fitness as a parameter. This fitness is used in the same way
a chromosome's fitness is used in the parent selection phase. A roulette wheel
selection scheme is utilized to determine the mechanism to use. The fitness
values can be fixed over the course of the run or can be adjusted dynamically.
This implementation employs a dynamic scheme.

1. Generate a bit string, at random, that is the same length as the
two parents. For BWP this is the number of calls in the list.
2. For each position in the bit string containing a 1, the first child
is identical to the first parent.
3. For each position in the bit string containing a 0, fill in the
remaining calls in the order those calls appear in parent two.
4. Repeat steps 2 and 3 to generate child two, reversing the roles of
the parents.
Figure 3.2: Uniform Order-Based Crossover
Uniform order-based crossover is designed to create children with traits
similar to their parents' traits. The idea is that traits that tend to create
better solutions are passed on to future generations while maintaining enough
diversity that each generation also differs from its predecessors. Uniform order-
based crossover achieves this as described in Figure 3.2.
As an example, consider the two parents:
Parent 1: 1 2 3 4 5 6 7 8 9 10
Parent 2: 10 8 6 4 2 9 7 5 3 1
Suppose, at step 1, the following bit string is generated:
Bit String: 1 0 0 0 1 1 0 1 1 0
Then step 2, for both children, produces
Child 1: 1 _ _ _ 5 6 _ 8 9 _
Child 2: 10 _ _ _ 2 9 _ 5 3 _
Step 3, for both children, produces
Child 1: 1 10 4 2 5 6 7 8 9 3
Child 2: 10 1 4 6 2 9 7 5 3 8
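Assuming the steps of Figure 3.2 are as described, the crossover can be sketched in Python (the function name is mine); the worked example above is reproduced:

```python
import random

def uniform_order_crossover(p1, p2, bits=None):
    """Figure 3.2: positions marked 1 are copied from the first parent; the
    remaining positions are filled with the missing calls in the order those
    calls appear in the second parent. Roles are reversed for child two."""
    if bits is None:
        bits = [random.randint(0, 1) for _ in p1]

    def make_child(keep_from, fill_from):
        child = [keep_from[i] if bit else None for i, bit in enumerate(bits)]
        missing = [c for c in fill_from if c not in child]
        it = iter(missing)
        return [c if c is not None else next(it) for c in child]

    return make_child(p1, p2), make_child(p2, p1)

p1 = [1, 2, 3, 4, 5, 6, 7, 8, 9, 10]
p2 = [10, 8, 6, 4, 2, 9, 7, 5, 3, 1]
bits = [1, 0, 0, 0, 1, 1, 0, 1, 1, 0]
c1, c2 = uniform_order_crossover(p1, p2, bits)
print(c1)  # [1, 10, 4, 2, 5, 6, 7, 8, 9, 3]
print(c2)  # [10, 1, 4, 6, 2, 9, 7, 5, 3, 8]
```

Note that both children remain valid permutations of the call list, which the placement procedure requires.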
The order in which calls are placed is crucial to the quality of a solution
to BWP. Uniform order-based crossover attempts to capture pieces of the

1. Pick two elements of the parent chromosome.
2. Randomly permute all of the calls between the elements chosen.
Figure 3.3: Scramble Sublist Mutation
ordering from both parents to create their children.
Scramble sublist mutation is designed to introduce an element of random-
ness into the population. This may help prevent the algorithm from converging
to suboptimal solutions by generating populations that would be unlikely to be
generated by the order-based crossover mechanism. While uniform order-based
crossover attempts to retain characteristics of previous generations, scramble
sublist mutation is designed to disrupt this process. This is done as described
in Figure 3.3.
Again, because order is essential to the quality of a solution to BWP,
scramble sublist mutation is designed to disrupt the ordering contained in
the chromosome. This may help to search new areas of the search space and
prevent the convergence to a suboptimal solution.
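A minimal sketch of Figure 3.3 (the function name is hypothetical):

```python
import random

def scramble_sublist(chromosome, i=None, j=None):
    """Figure 3.3: pick two positions and randomly permute the calls
    between them, leaving the rest of the ordering intact."""
    n = len(chromosome)
    if i is None or j is None:
        i, j = sorted(random.sample(range(n), 2))
    child = list(chromosome)
    middle = child[i:j + 1]
    random.shuffle(middle)
    child[i:j + 1] = middle
    return child

parent = [1, 2, 3, 4, 5, 6, 7, 8]
print(scramble_sublist(parent, i=2, j=5))  # calls in positions 2..5 permuted
```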
3.2 Tabu Search
Tabu Search (TS) is a metaheuristic that uses adaptive memory to guide its
underlying heuristic. This adaptive memory is used to encourage the heuristic
to search in some areas of the solution space while discouraging the heuristic's
search in others. The goal is to spend more effort searching promising areas,
while not excluding seemingly unpromising areas altogether. A brief introduc-
tion and explanation of the features utilized by two TS approaches to BWP
follows. For a complete description of TS refer to [14] by Glover and Laguna,
or to [12] and [13] by Glover.
Tabu search is a neighborhood search heuristic. The idea behind any neigh-
borhood search heuristic is to iterate from solution to solution considering only
solutions that are neighbors of the current solution at each iteration. The met-
ric which determines whether or not two solutions are neighbors is the basis
for any neighborhood search. A neighborhood for BWP could be as follows:
given a call list, x, a neighborhood of x, denoted N(x), could be the set of

1. Generate a permutation of the calls, call it x.
2. Calculate the profit of x by attempting to place each call using
its shortest path. If the network can support the call's bandwidth
requirement, place the call; if not, move to the next call in the list.
3. Build N(x) by generating all solutions created by switching the
first call in x with all other calls in the list.
4. Compute the profit associated with each element of N(x). Set x
to be the element with the greatest profit.
5. Repeat until 10 successive iterations have not resulted in an
improvement over all of their predecessors.
Figure 3.4: Simple Neighborhood Search
solutions constructed from x by switching the first call in x with any other
call in the list. Defining N(x) in this manner yields a subset of the solution
space, at any given solution, of size |Q| − 1. This greatly reduces the number
of solutions that the heuristic must evaluate before it makes its next move.
This definition also has the property that every permutation can be reached
by any other permutation given enough neighborhoods through which to tran-
sition. For BWP, a simple neighborhood search heuristic could be defined as
described in Figure 3.4.
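As a sketch of the neighborhood used by Figure 3.4 (swapping the first call with each other call; the function name is mine):

```python
def neighborhood(x):
    """N(x): every call list obtained by switching the first call in x
    with one of the other calls, giving |Q| - 1 neighbors."""
    neighbors = []
    for i in range(1, len(x)):
        y = list(x)
        y[0], y[i] = y[i], y[0]
        neighbors.append(y)
    return neighbors

# For example:
print(neighborhood([1, 2, 3, 4]))
# [[2, 1, 3, 4], [3, 2, 1, 4], [4, 2, 3, 1]]
```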
Notice that there is nothing to preclude this heuristic from visiting the
same solutions over and over again. Thus, there is motivation to diversify
the solutions in some way by forcing them into new regions of the solution
space. To see how this might be achieved, consider Figure 3.5 and the calls in
Table 3.1.
If we let x = (1, 2, 3, 4), N(x) becomes {(2, 1, 3, 4), (3, 2, 1, 4), (4, 2, 3, 1)}.
Every element in N(x) yields a profit of 210, the best of the set. Our heuristic
might well choose (2, 1, 3, 4) as its next move. N(x) now becomes {(1, 2, 3, 4),
(3, 1, 2, 4), (4, 1, 3, 2)}. Nothing would prevent our heuristic from simply choosing
its next move to be (1, 2, 3, 4) and we would be back where we started. If our
heuristic could remember that (1, 2, 3, 4) had already been visited, it might

Figure 3.5: Network for Neighborhood Search
Call From To Demand Revenue
1 1 2 5 150
2 1 2 5 150
3 1 3 1 16
4 1 2 5 150
Table 3.1: Call Table for Neighborhood Search
choose (3,1,2,4) as its next move. This would cause the optimal solution
of (4, 1, 2, 3) to show up in the next N(x). This method of remembering
past solutions is called explicit memory. Explicit memory can create immense
demands on storage because entire solutions must be remembered. A more
efficient scheme is to remember defining characteristics of visited solutions.
This is called attributive memory. For example, if we had simply remembered
that the calls in the first and second positions had been swapped, and that
the swap should not be repeated, the same result of moving to a new neighbor
would have been achieved. TS keeps a list, known as the tabu list, to prevent
areas of the solution space that have recently been visited from being revisited.
By using attributive memory instead of explicit, some solutions that have
not been visited also end up as tabu. If these are optimal, or would lead
to an optimal solution, our memory has backfired on us. TS provides two
mechanisms to overcome this.
The first of these two mechanisms is tabu tenure. Each element of the tabu
list is resident for only some specified number of moves, its tabu tenure. When
this number of moves has been performed, the element is removed from the
tabu list and the heuristic is free to make those moves again. This allows TS
to visit regions of the solution space more than once. While the solutions in
the region are the same, the structure of TS may be different with each visit

because its tabu list may be different. This may cause different solutions to
be generated each time a region is visited.
The second mechanism used by TS is the use of aspiration criteria. This
simply provides TS the ability to override the tabu status of a move if it
meets some specified criteria. These criteria can be anything from achieving the
best solution so far to making large impacts on the TS structure. The only
aspiration criterion used by the two TS implementations below is achieving the
best solution so far.
There are many facets to TS not included here. I merely introduce the
aspects that aid in the understanding of the TS implementations described in
the following two sections.
3.2.1 Tabu Search I
The approach taken by Anderson et al. [1] uses a permutation of calls to
represent each solution. To evaluate a given solution, the first call in the list is
placed using its shortest feasible path. The available bandwidth is updated for
each edge used to place the call by subtracting the bandwidth requirement for
the call. An attempt to place each subsequent call is made using the shortest
feasible path. This TS implementation considers only instances of BWP where
the cost of using bandwidth on each edge is negligible, that is, c_e = 0 for all
e ∈ E. Under this constraint, the notion of the shortest feasible path is simply
the path with the fewest edges instead of the path with the lowest cost.
There are three types of moves defined for this implementation. Each has
its own notion of a neighborhood. The first type of move is a partial 2-opt,
done by considering all the permutations generated by swapping the first call
with every other call. When a move of this type is made, it becomes tabu to
repeat the swap until its tabu tenure has expired.
When 10 moves are performed in succession without an improvement in
objective value, a cut move is performed. This is done by defining N(x) to be
the lists formed by placing the calls 1 through i after the calls i + 1 through
|Q|. A move of this type erases the tabu list.
If no cut move produces an improvement in objective value, a full 2-opt is
performed. Notice that a swap of position i with j is equivalent to swapping
i with 1 then 1 with j. Thus, we can update the tabu list as was described
above for the partial 2-opt move.

3.2.2 Tabu Search II
The second TS implementation, by Laguna and Glover [16], is a more complex
implementation in that it uses dynamic tabu tenure values and two types of
memory: long-term and short-term. These new features are described below.
This implementation starts by generating a specified number, denoted k_i,
of shortest feasible paths for each call i. A parameter, K, is used as an upper
bound for this number. The length of a path p, denoted l_p, is defined

l_p = Σ_{e∈E_p} c_e,    (3.1)

where E_p is the set of edges in path p. The calls are then ordered in decreasing
order of the ratio of their revenue to their bandwidth demand (v_i/r_i). An
attempt to place each call is made starting with the first call and its shortest
path down to the last call and its longest path. This constitutes the first
solution. A move, (i, h), indicates call i moved from its current path, g, to
path h. Once a move is made, TS prevents call i from being assigned to path
g in the near future by placing move (i, g) on the tabu list. Each call's path
list is augmented with a null path, used to indicate that a call is not placed.
At each iteration, all non-tabu moves are considered and the move with
the greatest value is chosen. The value of a move is a function of local network
information and the contents of its long term memory structures. The local
information component of a move's value is

V(i, h) = r_i (l_g − l_h),    (3.2)

where r_i is the call's bandwidth demand. Looking closer at V(i, h), notice that
it does not consider revenue, v_i, as a part of the move's value. It considers only
the bandwidth costs associated with the feasible paths. The null path, however,
has no cost and therefore always has the greatest local component. When
V(i, h) is employed, it encourages
the use of the null path for all placed calls. The only way for a call to be
placed is if the null path is contained on the tabu list and thus cannot be
selected. This would seem to discourage the placement of calls and seems
contrary to the objective of maximizing profit, which can be generated only
by the placement of calls. It is not clear to the author how this method would
yield positive results without handling the local component of the null path's
move value as a special case. Using V(i, null) = 0 would seem an intuitive

value as V(i, h) is a measure of how a call's bandwidth requirement affects the
state of the network. Not placing a call has no effect on the network capacities
so 0 would be a logical choice.
While the short-term memory is affected by the recency of a move, the
long-term memory is affected by the frequency of a move. Each move has
an associated value, u(i,h), that keeps track of how many times that move
has been made. If a move has been made many times over the course of the
run, it is penalized to create diversity. Combining V(i, h) and u(i, h) gives us
a move's penalized move value, denoted PV(i, h), which is how TS selects the
next move to make. PV(i, h) is defined:

PV(i, h) = V(i, h),                  if V(i, h) > 0;
PV(i, h) = V(i, h) − θ × u(i, h),    otherwise,    (3.3)

where θ is a parameter.
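Assuming the local move value V(i, h) = r_i (l_g − l_h) described above, where r_i is the call's bandwidth demand and l_g, l_h are the path lengths, the penalized move value can be sketched as (function and argument names are illustrative):

```python
def penalized_move_value(r_i, l_g, l_h, u, theta):
    """Penalized move value: V = r_i * (l_g - l_h) when positive,
    otherwise V minus the frequency penalty theta * u(i, h)."""
    V = r_i * (l_g - l_h)
    return V if V > 0 else V - theta * u

# A demand-10 call moving from a path of length 2 to the null path (length 0):
print(penalized_move_value(10, 2, 0, u=0, theta=0))  # 20
# A demand-20 call moving from the null path onto a path of length 2:
print(penalized_move_value(20, 0, 2, u=0, theta=0))  # -40
```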
A move's tabu tenure in this implementation is dynamic, varying among
three values: short, medium, and long. The values used for each setting are k_i,
1.5k_i, and 2k_i. This requires that each member of the tabu list keep track of the
number of iterations required before it can be removed from the tabu list. TS
iterates through the three values by the sequence {k_i, 1.5k_i, k_i, 2k_i, 1.5k_i, 2k_i}.
A change in tabu tenure is done at the number of moves equal to twice the
value of the current tabu tenure.
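Under the stated cycle {k_i, 1.5k_i, k_i, 2k_i, 1.5k_i, 2k_i}, with each setting held for twice its own value in moves, the tenure in effect at each move can be sketched as (the helper name is hypothetical):

```python
def tenure_schedule(k, total_moves):
    """Tenure value in effect at each of the first total_moves moves;
    each setting lasts for int(2 * tenure) moves before advancing."""
    cycle = [k, 1.5 * k, k, 2 * k, 1.5 * k, 2 * k]
    schedule, idx = [], 0
    while len(schedule) < total_moves:
        tenure = cycle[idx % len(cycle)]
        schedule.extend([tenure] * int(2 * tenure))
        idx += 1
    return schedule[:total_moves]

print(tenure_schedule(2, 10))  # [2, 2, 2, 2, 3.0, 3.0, 3.0, 3.0, 3.0, 3.0]
```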
Consider the simple example described by Figure 2.1 and Table 2.1. For
this simple example, we have 3 paths for call 1 and 2 paths for call 2. Use
p10 to denote the null path for call 1, p11 to denote the route using edge 1 for
call 1, p12 to denote the route using edges 2 and 3 for call 1, p20 to denote the
null path for call 2, and p21 to denote the route using edge 1 for call 2. A
solution to this problem consists of a list of length 2, consisting of the routing
information for each call.
Consider the solution (p11, p20). This represents the solution of placing call
1 using path p11 and not placing call 2. From this solution, there are 2 moves
that could be made. Move (1, p12), which switches the routing of call 1 to
p12, and move (1, p10), which results in call 1 not being placed. Call 2 has no
feasible moves at this stage. The value of each move is as follows:

PV(1, p10) = 10 × (2 − 0) = 20    (3.4)
PV(1, p12) = 10 × (2 − 2) = 0.    (3.5)

Choosing the greater of the two values, move (1, p10) is made. After each move
is made the tabu list and tenure values must be updated. Only one move has
been made at this stage so neither of the calls' tabu tenure values need to be
changed. Move (1, p11) is placed on the tabu list for 2 moves.
We now have two non-tabu moves to evaluate: (1, p12) and (2, p21). The
penalized move values for each of these moves are

PV(1, p12) = 10 × (2 − 2) = 0,    (3.6)
PV(2, p21) = 20 × (0 − 2) − 0 = −40.    (3.7)

Move (1, p12) has the greatest penalized move value, and the tabu move
(1, p11) does not meet the aspiration criteria, so move (1, p12) is made. We
have now reached 2 times the tabu tenure for call 2 which requires it to be
changed to 1.5 × k2 = 1.5. Non-integer values, however, are not allowed. We
keep the tabu tenure at 1 but will wait 2 × 1.5 = 3 moves before changing it
to 2 × k2 = 2. Move (1, p10) is placed on the tabu list for 2 moves. The state
of the tabu list is now

Tabu move    Moves remaining
(1, p11)     1
(1, p10)     2
There is only one non-tabu move available at this stage: (2, p21). By
default, its penalized move value of −40 is the best move value and the move
is made. The current solution (p12, p21) is the optimal solution for this problem.
TS has no way of determining this, however, and will continue to search for
better solutions until some termination criterion is reached.
Both TS implementations for BWP are reported to generate competitive
solutions in reasonable amounts of time. The second implementation contributes
support for the use of long term memory in addition to short term memory,
as well as the use of a frequency function.

4. Ant Colony Optimization
Ant Colony Optimization (ACO) is a recent addition to metaheuristics. It
has been applied successfully to many combinatorially difficult problems, in-
cluding: the traveling salesman problem [9], the sequential ordering prob-
lem [10], the quadratic assignment problem [17], the vehicle routing prob-
lem [4], scheduling problems [5], and graph coloring [7]. Ant Colony Opti-
mization is motivated by the observed ability of ants to find the shortest path
from their nest to a food source. This is done by means of laying a scent,
called pheromone, as they walk. Once a path is established, ants can follow
this pheromone trail to and from the food source. The establishment of the
shortest path is described in Figure 4.1, taken from [9].
Figure 4.1: Ants Finding The Shortest Path
In a), the ants have an established path between points A and E. b) depicts
an obstacle being placed in the path. The ants are now forced to choose a
direction. The ants, having no information as to which direction is better,
choose left or right with equal probability. The ants that choose right arrive
at point E considerably before the ants that choose left. As a consequence,
the ants that chose right arrive back at point B before those that chose left.

This begins to bias the amount of pheromone on the shorter of the two paths,
increasing the chances that future ants choose the shorter path. As time
progresses, the bias increases until nearly all of the ants choose the shorter
path. It is important to note that the pheromone level increases the likelihood
of ants choosing a particular path but does not necessarily imply that they
will. There are still occasional ants that stray from the established path. This
has an important implication on the paradigm: as solutions are found, we
want to keep searching for new paths to keep from converging to suboptimal
solutions.
This paradigm can be applied to any problem where solutions can be ob-
tained by generating a decision path. Virtual ants begin at some starting
state and make decisions, creating a solution path until some terminal state
is reached. The ants lay pheromone, proportional to the merit of the solution
found, on the decisions they used to encourage future ants to make similar deci-
sions. Pheromone is not our virtual ants' only factor in making their decisions.
A fitness value for each decision is also incorporated based on local information
in the network. At any given state where a decision must be made, the ant
generates probabilities for each decision based on the pheromone and fitness
of that decision. The problem instance dictates the definition of pheromone
and fitness.
The BWP decision path must consist of selecting which calls to place and
which routes to use when placing the selected calls. This yields two levels
of decisions to which ACO can be applied. The first level, selecting calls,
is done by generating an ordering in which to attempt to place the calls.
The second level, selecting the routing, is done by generating an ordering of
feasible paths for the call being placed. Each ant generates a call ordering
and a route ordering for each call. For a given network state, this ant decision
path generates a unique solution. An outline of the algorithm is as described
in Figure 4.2.
The algorithm's first step, finding k_i shortest paths, attempts to find K
feasible routes for call i, where K is a parameter. K feasible paths may not
exist for each call i, so k_i is used to designate the number of paths actually
found with an upper bound of K. This route finding stage can be done using
any route finding algorithm. The implementation used to generate the data in
§4.1 uses a breadth first search. As a side note, the routes are stored in a linked
list that grows as routes are generated then shrinks as they are completed.
For dense networks the number of routes for a given call can be large; up

Outline of ACO as applied to BWP
Step 1. Find k_i feasible routings.
Step 2. Generate a call ordering for each ant.
Step 3. Generate a path ordering for each call in each ant's call list.
Step 4. Attempt to place each call on each ant's call list with each path
in its path list.
Step 5. Save the best ant's call and path lists.
Step 6. Update pheromone levels on all calls and paths.
Step 7. Repeat Steps 1-6 until some stop criterion has been met.
Figure 4.2: Ant Colony Algorithm
to (n − 2)! for complete networks. Thus, there are potentially many routes
to store, requiring a great deal of memory. For this reason there is a cutoff
at 30,000 entries in the linked list. If this number is reached, the current
set of feasible routings is used. For large problems, a different route finding
algorithm would need to be used. The test problems used for comparisons
done in §4.1 are of the order n ∈ [14, 192], |Q| ∈ [20, 93], and |E| ∈ [16, 210].
With these problems, generating 30,000 unfinished paths rarely occurs.
Ants generate the orderings of the calls and routings using a probability
function. This function depends on pheromone and fitness values and must be
delineated in the context of BWP. Pheromone, generally speaking, gives an ant
information about how a decision has performed in past solutions. Decisions
that lead to good solutions should accumulate more pheromone than those that
produce poor solutions, thus biasing future ants' decisions toward decisions that
yield good solutions. We must also prevent the pheromone level from becoming
too high, even on good decisions, to encourage ants to explore new areas of
the solution space. This corresponds to evaporation of the pheromone trail.
Let τ(d) denote the pheromone level on decision d. After each iteration,

τ(d) is updated by adding Δτ(d) to (1 − ρ) × τ(d), where ρ is a parameter
corresponding to pheromone evaporation. Δτ(d) is based on the merit of the
solution found by the ants that made decision d. Note that d can be a decision
of selecting a specific call or a decision of selecting a specific routing for a call.
To distinguish between the two, let τ_call(i) denote the pheromone accumulated
on call i and τ_path(i, p) denote the pheromone accumulated on path p for call
i. We use τ with no subscript when referring to both.
Let A_ip be the set of ants that placed call i using path p and A_i = ∪_p A_ip.
A_i corresponds to the set of ants that placed call i using any path. The added
pheromone after each iteration becomes

Δτ_call(i) = Σ_{a∈A_i} 1/(z* − z_a + 1)    (4.1)

for calls, and

Δτ_path(i, p) = Σ_{a∈A_ip} 1/(z* − z_a + 1)    (4.2)

for routes, where z* is the best profit so far and z_a is the profit generated by
ant a. Notice that Δτ is always positive and less than or equal to the number
of ants. This keeps τ from becoming too large and dominating the decision
process. The constant 1 is added to the denominator to prevent the possibility
of dividing by 0.
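The evaporation-plus-deposit rule for a single decision can be sketched as follows (function and argument names are mine; z_ants holds the profits of the ants that made the decision):

```python
def update_pheromone(tau, rho, z_best, z_ants):
    """One iteration's update: tau <- (1 - rho) * tau + sum of
    1 / (z* - z_a + 1) over the ants that made this decision."""
    delta = sum(1.0 / (z_best - z_a + 1) for z_a in z_ants)
    return (1 - rho) * tau + delta

# An ant whose profit matches the best so far deposits exactly 1:
print(round(update_pheromone(0.0001, 0.3, 180, [180]), 5))  # 1.00007
```

Each ant's deposit is at most 1, so the total added pheromone never exceeds the number of ants that made the decision.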
A decision's fitness is designed to encourage ants to make decisions that are
likely to generate better solutions. For example, a call with a high revenue and
a low bandwidth requirement is more likely to be included in a good solution
than a call with a low revenue and a high bandwidth requirement. Using this
greedy bias on the ants' decisions greatly reduces the time it takes for ants to
find good solutions. A decision's fitness is denoted by η(d) with the distinction
between calls and paths handled in the same manner as was handled in the
notation of τ(d). This heuristic reasoning motivates the fitness for call i to be
defined as follows:
η_call(i) = [(v_i/r_i) − min_{j∈Q}(v_j/r_j) + 1] / [max_{j∈Q}(v_j/r_j) − min_{j∈Q}(v_j/r_j) + 2].    (4.3)

The constants, 1 and 2, are arbitrary and are simply used to satisfy 0 <
η_call(i) < 1.
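A sketch of this scaling (the function name is hypothetical; ratios maps each call i to v_i/r_i):

```python
def call_fitness(i, ratios):
    """eta_call(i): the call's revenue/demand ratio, shifted and scaled by
    the constants 1 and 2 so the result lies strictly between 0 and 1."""
    lo, hi = min(ratios.values()), max(ratios.values())
    return (ratios[i] - lo + 1) / (hi - lo + 2)

# Ratios from the worked example later in the chapter: 200/10 and 100/20.
ratios = {1: 20.0, 2: 5.0}
print(call_fitness(1, ratios))  # 16/17, about 0.941
print(call_fitness(2, ratios))  # 1/17, about 0.059
```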

Paths also need fitness values. Paths that cost less to use tend to generate
more profit and should therefore be biased to be included in the solution. There
are problem instances, however, when the cost of using an edge is negligible.
In this case, paths with fewer edges should be biased as they leave more room
for other calls to be placed. Both cases are defined as follows:
η_path(p) = 1/|E_p|    when neglecting edge costs,
η_path(p) = 1/(Σ_{e∈E_p} c_e)    otherwise.    (4.4)

We assume c_e > 0, so 0 < η_path(p) ≤ 1. This prevents η_path(p) from being
too large and dominating the decision process.
With the above definitions it is now possible to construct the probability
functions the ants use to generate their call and path lists. We denote ant a's
call list with C_a and its path list for call i with L_ai. Ψ(a, d) is used to denote
the probability of ant a making decision d, with the distinction between calls
and paths made as above. Define Ψ_call(a, i) to be the probability of ant a
choosing call i to be the next call placed on C_a as

Ψ_call(a, i) = τ_call(i)^α η_call(i)^β / Σ_{u∈Q, u∉C_a} τ_call(u)^α η_call(u)^β    for i ∉ C_a,
Ψ_call(a, i) = 0    otherwise,    (4.5)

where α and β are positive parameters.
At iteration one, no ants will have had the opportunity to leave pheromone
on any of the calls. To prevent division by zero, τ_call(i) is initialized to 0.0001
for each call i. Notice that C_a changes after each call is added to it. This
in turn affects Ψ_call(a, i), so the probabilities must be regenerated at each
selection. After the probabilities have been generated, a call is selected based
on these probabilities and added to C_a. This process is repeated until C_a
contains all of the calls.
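One ant's construction of its call list C_a can be sketched as follows (names are illustrative; tau and eta map call ids to pheromone and fitness values):

```python
import random

def build_call_list(tau, eta, alpha, beta):
    """Repeatedly compute the selection weights tau^alpha * eta^beta over
    the not-yet-chosen calls and draw the next call proportionally."""
    remaining = list(tau)
    order = []
    while remaining:
        weights = [tau[i] ** alpha * eta[i] ** beta for i in remaining]
        pick = random.choices(remaining, weights=weights)[0]
        order.append(pick)
        remaining.remove(pick)
    return order

tau = {1: 0.0001, 2: 0.0001}
eta = {1: 16 / 17, 2: 1 / 17}
print(build_call_list(tau, eta, alpha=0.25, beta=5))  # almost always [1, 2]
```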
The probability function that the ants use to choose paths is defined in a
similar manner. For each call in the ant's call list, an associated path list is
generated. The probability that ant a chooses path p to be the next path in
L_ai is

Ψ_path(a, i, p) = τ_path(i, p)^α η_path(p)^β / Σ_{q∉L_ai} τ_path(i, q)^α η_path(q)^β    for p ∉ L_ai,
Ψ_path(a, i, p) = 0    otherwise.    (4.6)

Again we initialize τ_path(i, p) to 0.0001 for each path.
Metaheuristics are designed to find high quality solutions in a reasonable
amount of time. In an attempt to improve the rate at which good solutions
are found, two factors are commonly incorporated.
The first involves forcing ants to make their decisions based solely
on fitness values. If all ants make their decisions greedily at every state, the
algorithm generates poor solutions and, in all but contrived problem instances,
never reaches an optimum. If ants make greedy decisions probabilistically, the
solutions tend to be higher in profit at all levels of effort. This factor is based
on a parameter, q0 ∈ (0, 1). Before an ant makes a decision, a uniform random
number between 0 and 1 is generated. If the number is less than q0, the ant
makes a greedy decision; otherwise the above definition of Ψ(a, d) is used.
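The q0 rule can be sketched as (names hypothetical; psi holds the probabilities Ψ(a, d) and fitness the values η(d)):

```python
import random

def choose(decisions, psi, fitness, q0):
    """With probability q0 take the fittest decision greedily; otherwise
    sample a decision in proportion to its probability psi[d]."""
    if random.random() < q0:
        return max(decisions, key=lambda d: fitness[d])
    return random.choices(decisions, weights=[psi[d] for d in decisions])[0]

fitness = {'a': 0.1, 'b': 0.9}
psi = {'a': 1.0, 'b': 0.0}
print(choose(['a', 'b'], psi, fitness, q0=1.0))  # b: the greedy branch
```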
The second factor involves adding elite ants, σ of them, to the problem.
Elite ants simulate ants using the best known solution. This creates extra
pheromone on those decisions used to generate the best solution and therefore
encourages future ants to stay closer to them. Since good solutions tend to
include similar decisions, this positively affects both the objective value and
the rate at which the objective value improves. Computationally, elite ants are
very inexpensive because they do not cost anything in the solution generation
process. They simply affect Δτ in the pheromone update stage.
To demonstrate ACO, we walk through one iteration using Figure 2.1 and
Table 2.1. The following parameter settings are used: q0 = 0.05, α = 0.25,
β = 5, σ = 2, ρ = 0.3, and K = 15. We also use only 1 ant.
Step 1. Find k_i feasible paths.
This step attempts to generate K feasible paths for each call. For our
simple example, we have 2 paths for call 1 and 1 path for call 2. Use p11 to
denote the route using edge 1 for call 1, p12 to denote the route using edges 2
and 3 for call 1, and p21 to denote the route using edge 1 for call 2.
Step 2. Generate a call ordering for each ant.

The first step in generating C_a is to calculate each call's fitness and
pheromone levels. These are used to calculate the probabilities and, subsequently,
generate the orderings. The fitness values for each of our calls are

η_call(1) = (200/10 − 5 + 1) / (20 − 5 + 2) = 16/17 ≈ 0.941    (4.7)
η_call(2) = (100/20 − 5 + 1) / (20 − 5 + 2) = 1/17 ≈ 0.059.    (4.8)

Since this is the first iteration, τ_call(1) and τ_call(2) are both at their initial
values of 0.0001. We can now calculate the probabilities for both calls.
Ψ_call(a, 1) = (0.0001^0.25 × 0.941^5) / (0.0001^0.25 × 0.941^5 + 0.0001^0.25 × 0.059^5) ≈ 99.9999%    (4.9)
Ψ_call(a, 2) = (0.0001^0.25 × 0.059^5) / (0.0001^0.25 × 0.941^5 + 0.0001^0.25 × 0.059^5) ≈ 0.0001%    (4.10)
Once the probabilities have been calculated, a random number between 0
and 1 is generated to determine if a simple greedy decision is made. With
q0 = 0.05, a number between 0 and 0.05 would place call 1 on C_a. A number
between 0.05 and 1 would require the use of Ψ_call(a, 1) and Ψ_call(a, 2). This
would most likely result in the placement of call 1 on C_a. Suppose that call 1
is chosen as the first call on C_a. This leaves only call 2 to be placed and it is
added to C_a with probability 1.
Step 3. Generate a path ordering for each call in each ant's call list.
Before an ordering can be generated, each path's fitness and pheromone
levels need to be calculated. The fitness values for each of our paths are

η_path(p11) = 1/2,    (4.11)
η_path(p12) = 1/(1 + 1) = 1/2,    (4.12)
η_path(p21) = 1/2.    (4.13)
Again, this is the first iteration and the pheromone levels are at their initial
values of 0.0001. The path probabilities the ant uses are
Ψ_path(a, 1, p11) = (0.0001^0.25 × 0.5^5) / (0.0001^0.25 × 0.5^5 + 0.0001^0.25 × 0.5^5) = 50%    (4.14)
Ψ_path(a, 1, p12) = (0.0001^0.25 × 0.5^5) / (0.0001^0.25 × 0.5^5 + 0.0001^0.25 × 0.5^5) = 50%    (4.15)
Ψ_path(a, 2, p21) = (0.0001^0.25 × 0.5^5) / (0.0001^0.25 × 0.5^5) = 100%    (4.16)
As above, a random number must be generated to determine if a greedy
decision should be made. A number between 0 and q0 would normally result
in placing the path with the highest value of Ψ_path. In this case, for call
1, the values are the same so Ψ_path is used. Suppose that call 1 generates
L_a1 = (p11, p12) and L_a2 = (p21).
In summary, we now have C_a = (1, 2), L_a1 = (p11, p12), and L_a2 = (p21).
These lists are used to generate a solution in the next step.
Step 4. Attempt to place each call on each ant's call list with each path in
its path list.
This step generates the solutions. We iterate through each ant's lists
sequentially, attempting to place each call. The first call, call 1, can be placed
using its first path, p11. Once the call is placed, the next call in C_a is
considered. In this example, call 2 is the next call in C_a. Call 2 has only one feasible
path and, after the placement of call 1, p21 cannot support the bandwidth
demand of call 2. Thus, call 2 cannot be placed. The solution of placing call
1 using path p11 and not placing call 2 yields a profit of 180.
Step 5. Save the best ants call and path lists.
After all of the ants have generated solutions, each solution is checked
against the best solution so far. In our example, it is the first iteration so the
best profit so far is 0. Our ant generated a profit of 180 so its lists are saved.
Step 6. Update pheromone levels on all calls and paths.
The decisions made by each ant must now be rewarded based on the quality
of the solutions they obtained. The σ elite ants also come into play at this
stage. Adding the pheromone from our ant, plus the additional 2 elite ants
using the best solution so far, yields new pheromone levels of
τ_call(1) = (1 − 0.3) × 0.0001 + (1 + 2) × 1/(180 − 180 + 1) = 3.00007    (4.17)
τ_call(2) = (1 − 0.3) × 0.0001 + 0 = 0.00007    (4.18)
τ_path(1, p11) = (1 − 0.3) × 0.0001 + (1 + 2) × 1/(180 − 180 + 1) = 3.00007    (4.19)

τ_path(1, p12) = (1 − 0.3) × 0.0001 + 0 = 0.00007    (4.20)
τ_path(2, p21) = (1 − 0.3) × 0.0001 + 0 = 0.00007    (4.21)
This process is repeated until the stop criterion is reached.
4.1 Evolution of Parameters
This algorithm is driven by several parameters. The following is a discussion
of each parameter, how it affects the algorithm, and what values tended to
produce good results for the problems tested. Each parameter, except K,
was examined by varying the parameter under discussion and comparing the
results to a base case as applied to a problem representative of the problems
used for testing. This base case consisted of: q0 = 0.05, α = 0.25, β = 5,
m = 4|Q|, σ = 2|Q|, ρ = 0.3, and K = 15. Various values for K were not
explored because the best value for K seemed particularly problem dependent.
Problems can be generated that produce poor performance results for any
value of K. The problems tested rarely contained calls that had more than 15
possible paths to choose from. Setting K = 15 simply eliminates its effect on
the algorithm for the test problems.
Fifteen problems were tested for which the best solutions generated by
both the GA implementation and the Laguna-Glover TS implementation were
known. These data served as a baseline from which some comparisons can be
drawn. The computational effort required to generate those solutions, however,
is not known, so only the best solutions can be compared. The ACO algorithm
matched or exceeded the results in 13 of the 15 problems. These results are
summarized in Table 4.1.
The following parameter analysis was done on one of the fifteen problems
for which previous results are known. The problem used consisted of a network
of 27 nodes, 93 calls, and 37 edges and is described by Figure 4.3 and Table 4.2.
The maximum number of paths for any call was 15, with a mean of 3.86. This
problem was chosen because it is representative of a typical network among the
problems tested and produced results that varied from run to run. Some of the
test problems were easy in the sense that the algorithm converged to the best
known solution almost without fail, and did so with very little computational
effort under virtually any parameter settings. This test problem was sensitive
to parameter modifications and tended to take longer to converge than most

{n,q) GA Tabu ACO
(14, 35) 6580 6580 6580
(24, 68) 7060 7270 7270
(29, 70) 27710 27340 28250
(18, 58) 16210 16190 16210
(19, 47) 7770 7790 7790
(27, 93) 18900 18550 18910
(23, 93) 13800 13810 13860
(28, 41) 8770 8770 8770
(24, 87) 21360 21340 21250
(19, 41) 7640 7640 7640
(14, 23) 6110 6110 6110
(27, 81) 13330 13290 13290
(29, 52) 9020 9020 9020
(20, 46) 7900 7900 7900
(192, 20) 2070 2070 2090
Table 4.1: Comparison Results
of the problems tested. One hundred tests were run with each parameter
setting and the average was used for comparison.
To compare computational results, a metric is needed for the amount of
work in a given run. Counting iterations is not sufficient because different
parameter settings require different amounts of work to complete an iteration.
A parameter setting that uses 100 ants takes 100 times as much work and
produces 100 times as many solutions as a setting that uses only one ant.
Thus, the idea of a one-ant cycle is introduced. A one-ant cycle is the amount
of effort it takes for one ant to generate one solution. For this implementation,
this consists of generating a call ordering, generating a path ordering for each
call, and performing a call placement phase. This is not an exact quantity, as
the call placement phase can require a different number of computations for
different ants. This difference is minimal for sparse networks, however, and is
ignored. All fifteen of the problems tested had an average number of paths in
the neighborhood of 1 to 5, so the work expended for each ant was very similar.
Each run was given 200 one-ant cycles before being cut off. This translates to
66 iterations for 3 ants per call and 40 iterations for 5 ants per call.

Figure 4.3: Network for Test Problem

Call From To Demand Revenue
1 1 3 10 380
2 1 4 5 190
3 1 7 30 940
4 1 9 20 760
5 1 19 15 650
6 1 20 15 590
7 1 23 25 770
8 1 24 15 530
9 1 26 10 420
10 1 27 10 420
11 2 8 25 1130
12 2 10 25 890
13 2 13 5 170
14 2 20 5 170
15 2 25 10 360
16 3 6 10 300
17 3 9 5 190
18 3 20 10 360
19 4 6 10 480
20 4 12 5 230
21 4 16 10 400
22 4 19 10 480
23 5 8 10 300
24 5 10 5 170
25 5 17 25 1230
26 5 18 10 380
27 5 21 15 590
28 5 22 10 360
29 5 23 10 300
30 6 8 5 230
31 6 10 10 440
32 6 17 10 380
33 6 27 10 480
34 7 8 15 570
35 7 9 10 460
36 7 11 15 570
37 7 18 10 340
38 7 19 10 300
39 7 20 15 450
40 7 22 5 190
41 7 23 20 620
42 8 10 10 420
43 8 17 15 630
44 8 22 30 1020
45 8 23 25 1110
46 9 11 20 920
47 9 12 15 650
48 9 13 10 480
49 9 14 5 230
50 9 16 20 900
51 9 18 5 210
52 9 22 5 230
53 9 23 25 930
54 9 25 10 340
55 10 12 5 170
56 10 16 5 170
57 10 17 25 830
58 10 21 25 1010
59 10 26 25 950
60 11 13 5 170
61 11 15 15 470
62 11 19 30 900
63 11 20 30 1240
64 11 24 30 1160
65 11 25 20 920
66 11 26 5 230
67 12 13 15 650
68 12 16 5 230
69 12 22 20 640
70 12 26 10 440
71 13 24 5 230
72 14 26 5 150
73 15 17 5 210
74 15 19 10 320
75 15 21 10 340
76 16 17 20 900
77 16 23 10 440
78 16 26 30 980
79 17 18 10 380
80 17 20 25 770
81 17 21 5 170
82 17 25 15 450
83 18 20 5 150
84 19 24 15 530
85 20 21 20 780
86 20 23 10 440
87 20 25 5 210
88 21 23 15 650
89 22 23 15 470
90 22 24 30 1180
91 22 26 20 820
92 23 26 5 170
93 23 27 10 420
Table 4.2: Call Table for Test Problem

The first parameter is the number of ants, m. A small number of ants can
search only a small subset of the solution space before pheromone is added.
This causes misleading information in the pheromone trail. Because of this,
ants tend to converge on suboptimal solutions and never venture far from
them. Large values for m, on the other hand, generate a large set of solutions
before pheromone levels grow. This causes the pheromone trail to provide more
accurate information as to how a particular decision affects the solution. On
the down side, each ant can be costly if the problem is large. A balance must
be found as to how many ants are adequate to find a good solution without
becoming too costly computationally. In general, tests using a small number
of ants found better solutions early on but were eventually outperformed by
runs that used more ants. The average values over 100 runs for m set at 3,
4, and 5 are displayed in Figure 4.4. Notice that smaller values of m tend
to perform better early on (fewer than 50 one-ant cycles), while larger values
tend to perform better in the long term. When the test was stopped after
its allotted 200 one-ant cycles, m = 4 was the best performing value. Larger
values exceeded these results when more computational time was allowed.
The basis of ACO is its probability function, Ψ(a, d). This function governs
the ants' decisions and, thus, how they build their solutions. The two factors
that drive this function, τ(d) and η(d), are controlled by means of two
parameters, α and β, respectively.
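In its usual random-proportional form, the probability of a candidate decision is its pheromone level raised to α times its fitness raised to β, normalized over the candidate set. The following is a minimal sketch with illustrative names, not the thesis's actual code:

```python
def decision_probabilities(candidates, tau, eta, alpha, beta):
    """Psi(a, d): normalized tau(d)**alpha * eta(d)**beta over candidates."""
    weights = {d: tau[d] ** alpha * eta[d] ** beta for d in candidates}
    total = sum(weights.values())
    return {d: w / total for d, w in weights.items()}

tau = {"d1": 0.0001, "d2": 0.0001}   # equal pheromone, as early in a run
eta = {"d1": 2.0, "d2": 1.0}         # d1 is locally twice as attractive
probs = decision_probabilities(["d1", "d2"], tau, eta, alpha=0.25, beta=5)
# With equal pheromone, the ratio is driven entirely by eta**beta,
# i.e. 2**5 : 1**5 = 32 : 1 in favor of d1.
```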
The parameter α is the exponent of τ(d) in Ψ(a, d). As τ(d) tends to be
small, values of α > 1 significantly reduce the effect the pheromone has on
the decision process. This causes ACO to become nothing more than a greedy
algorithm, using only local network information when constructing solutions.
The results for the problems tested showed higher profits for α = 0.25.
The results for α set at 0.15, 0.25, and 0.4 are shown in Figure 4.5. α = 0.4
significantly dominates both of the other parameter settings for about the first
100 one-ant cycles or so, at which point α = 0.25 surpasses it. This type of
behavior leads me to think that a dynamic value for α may prove successful.
For example, interpolating between 0.4 and 0.25 over the course of the run
would have the advantage of finding good solutions early but exploring more
as time proceeds.
The parameter β is the exponent of η(d) in Ψ(a, d). Early tests for various
values of β showed drastically different results. As a result, a value for
β that changed over the course of the run was explored. The value used for
the exponent of η(d) was interpolated from 0 to the parameter value β over
the course of the run. The results of such a dynamic β produced significant

Figure 4.4: Variation of m

Figure 4.5: Variation of α (profit vs. one-ant cycles for α = 0.15, 0.25, 0.4)

Figure 4.6: Variation of β
improvements not only in profits but also in the size of the stability region for
the parameter. Figure 4.6 shows the test results for β values of 0, 5, and 10.
These settings are drastically different, considering they are used as exponents,
yet the profits yielded by both non-zero values were in the same neighborhood.
β = 0 removes the fitness aspect from Ψ(a, d) completely and performs very
poorly. This is encouraging in that it validates the use of η(d) in the ants'
decisions.
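The dynamic schedule described above amounts to a simple linear interpolation; the following is a sketch with assumed names:

```python
def dynamic_beta(cycle, total_cycles, beta_target):
    """Exponent applied to eta(d), ramped linearly from 0 at the start
    of the run up to beta_target at the final one-ant cycle."""
    return beta_target * cycle / total_cycles

# Over a 200-cycle run with beta = 5:
assert dynamic_beta(0, 200, 5) == 0.0     # fitness ignored at the start
assert dynamic_beta(100, 200, 5) == 2.5   # halfway through the run
assert dynamic_beta(200, 200, 5) == 5.0   # full fitness weighting at the end
```

The same schedule could be reversed or retargeted, e.g. for the α interpolation from 0.4 to 0.25 suggested earlier.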
The pheromone trail, τ(d), has two forces affecting it: accumulation and
evaporation. Accumulation is controlled by Δτ(d), which is driven by the
quality of solution the decision has helped to generate. Evaporation is controlled
by the parameter (1 − ρ). This evaporation factor is used to prevent the level

Figure 4.7: Variation of (1 − ρ)
of pheromone on decisions from becoming too large, dominating the decision
process and discouraging the ants from searching previously unexplored regions
of the solution space. A large evaporation factor prevents this. Too much
evaporation, however, disguises the pheromone trail and fails to provide the
ants with information as to how a decision has performed historically. The
results for (1 − ρ) values of 0.8, 0.7, and 0.6 are displayed in Figure 4.7. The
results for these three settings are all similar in the early iterations. As time
progresses and pheromone has been allowed to accumulate, the effect of
(1 − ρ) becomes significant.
The results of the test problems showed improvements when a greedy aspect
was added. The parameter q0 is used to control this greedy component.

Figure 4.8: Variation of q0
q0 = 0 provides no greedy aspect, while q0 = 1 makes the ant's decision purely
greedy. Results for values of q0 are shown in Figure 4.8. Values of q0 much
greater than zero did not perform well. Even small values improved
performance only slightly. This may warrant discarding this parameter
altogether.
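The greedy component is typically realized with a pseudo-random-proportional rule: with probability q0 the ant takes the best-weighted decision outright; otherwise it samples proportionally to the Ψ(a, d) weights. A sketch, with illustrative names:

```python
import random

def choose_decision(candidates, weight, q0, rng=random):
    """With probability q0, exploit: pick the highest-weight decision.
    Otherwise, explore: sample proportionally to the weights."""
    if rng.random() < q0:
        return max(candidates, key=weight)
    total = sum(weight(d) for d in candidates)
    r = rng.uniform(0.0, total)
    acc = 0.0
    for d in candidates:
        acc += weight(d)
        if acc >= r:
            return d
    return candidates[-1]   # guard against floating-point round-off

# q0 = 1 is purely greedy: the highest-weight decision is always chosen.
w = {"d1": 2.0, "d2": 1.0}
assert choose_decision(["d1", "d2"], w.get, q0=1.0) == "d1"
```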
The number of elite ants, σ, is used to encourage the decisions used to
generate the best solution. Tests using elite ants were observed to strongly
dominate tests in which elite ants were not used. They generated better
solutions at every stage of the tests, as long as the number of elite ants was
not so great as to cause all of the ants to make the same decisions. Results
for σ set at 0, 2, and 4 are shown in

Figure 4.9: Variation of σ
Figure 4.9. Notice how drastically performance increases when elite ants are
used. Both non-zero settings strongly dominate the solutions generated by
using no elite ants at all.

5. Conclusions
This thesis has presented the bandwidth packing problem and several ap-
proaches that have been applied to it. In addition, we have presented a new
approach using Ant Colony Optimization. BWP can be formulated as an in-
teger program but becomes too large to solve directly. Metaheuristics have
become an increasingly accepted approach to solving problems that are not
suited to direct methods. Genetic algorithms and tabu search have both been
applied to BWP. ACO is a novel approach to solving combinatorially difficult
problems ranging from the traveling salesman problem to adaptive routing in
communications networks. Fifteen problems were tested for which the best
solutions generated by GA and the Laguna-Glover TS implementation were
known. The ACO results performed well against these other approaches,
matching or exceeding the results in 13 of the 15 problems. Computational
comparisons in terms of processor time were not available, however.
The ACO approach taken here differs from standard ACO implementa-
tions in that it uses a dynamic value for β instead of the usual static param-
eter value. This approach proved very successful for this implementation and
warrants further study on other ACO applications. Test results also suggested
that dynamic values for other parameters may be beneficial. The ACO
metaphor could be extended further by allowing the values of the parameters
to change probabilistically based on past performance. This would allow
the heuristic to adapt to the specific problem instance it is solving.

References

[1] C. Anderson, K. Jones, M. Parker, and J. Ryan. Path assignment for call
routing: An application of tabu search. Annals of Operations Research,
41:301-312, 1993.
[2] M. Ball, A. Vakhutinsky, P. Chimento, L. Gun, and T. Tedijanto. Dis-
tributed call rerouting in multiclass broadband networks. Journal of Net-
work and Systems Management, 5(4):381-404, 1995.
[3] M.O. Ball and A. Vakhutinsky. Call rerouting in an ATM environment.
Technical research report, Center for Satellite and Hybrid Communication
Networks, University of Maryland, College Park, MD, 1995.
[4] B. Bullnheimer, R.F. Hartl, and C. Strauss. An improved ant system
algorithm for the vehicle routing problem. Technical Research Report
POM-10/97, Institute of Management Science, University of Vienna, Vi-
enna, Austria, 1997. To appear in the Annals of Operations Research.
[5] A. Colorni, M. Dorigo, V. Maniezzo, and M. Trubian. Ant system for
job-shop scheduling. Belgian Journal of Operations Research, Statistics
and Computer Science (JORBEL), 34:39-53, 1994.
[6] S.A. Cook. The complexity of theorem-proving procedures. Proceedings
of the 3rd Annual ACM Symposium on the Theory of Computing, pages
151-158, 1971.
[7] D. Costa and A. Hertz. Ants can colour graphs. Journal of the Operational
Research Society, 48:295-305, 1997.
[8] L.A. Cox, L. Davis, and Y. Qiu. Handbook of Genetic Algorithms. Van
Nostrand Reinhold, New York, NY, 1991.
[9] M. Dorigo, V. Maniezzo, and A. Colorni. The ant system: Optimization
by a colony of cooperating agents. IEEE Transactions on Systems, Man,
and Cybernetics, Part B, 26(1):29-41, 1996.
[10] L.M. Gambardella and M. Dorigo. HAS-SOP: An hybrid ant system for
the sequential ordering problem. Technical Research Report 11-97, IDSIA,
Lugano, CH, 1997. To appear in the INFORMS Journal on Computing.
[11] M.R. Garey and D.S. Johnson. Computers and Intractability: A Guide to
the Theory of NP-Completeness. W.H. Freeman and Company, New York,
NY, 1979.
[12] F. Glover. Tabu search, part I. ORSA Journal on Computing, 1:190-206,
1989.
[13] F. Glover. Tabu search, part II. ORSA Journal on Computing, 2:4-32,
1990.
[14] F. Glover and M. Laguna. Tabu Search. Kluwer Academic Publishers,
Norwell, MA, 1997.
[15] H.J. Greenberg. Mathematical Programming Glossary. World Wide Web.
[16] M. Laguna and F. Glover. Bandwidth packing: A tabu search approach.
Management Science, 39(4):492-500, 1993.
[17] V. Maniezzo, A. Colorni, and M. Dorigo. The ant system applied to the
quadratic assignment problem. Technical Research Report IRIDIA/94-28,
Universite Libre de Bruxelles, Belgium, 1994.
[18] G.L. Nemhauser and L.A. Wolsey. Integer and Combinatorial Optimiza-
tion. John Wiley and Sons, New York, NY, 1988.
[19] I.H. Osman and J.P. Kelly. Meta-Heuristics: Theory and Applications.
Kluwer Academic Publishers, Norwell, MA, 1996.
[20] K. Park, S. Kang, and S. Park. An integer programming approach to
the bandwidth packing problem. Management Science, 42(9):1277-1291,
1996.
[21] M. Parker and J. Ryan. A column generation algorithm for bandwidth
packing. Telecommunication Systems, 2:185-195, 1994.

Glossary of Symbols and Notation

b_e        The bandwidth capacity of edge e
v_e        The cost for each unit of bandwidth used on edge e
r_i        The revenue associated with placing call i
d_i        The bandwidth requirement for call i
s_i        The start node for call i
t_i        The terminal node of call i
N          The set of network nodes
Q          The set of calls
P_i        The set of feasible paths for call i
E          The set of network edges
A          The set of network arcs
T(j)       The set of arcs terminating at node j
O(j)       The set of arcs originating from node j
N(x)       The set of solutions neighboring x
K          The maximum number of feasible paths to search for
l_p        The cost of path p
E_p        The set of edges in path p
V(i, h)    The value of move (i, h)
h_i        The number of feasible paths considered for call i
u(i, h)    The number of times move (i, h) has been made
V'(i, h)   The penalized move value for move (i, h)
τ(d)       The pheromone on decision d
Δτ(d)      The amount of pheromone added to decision d
(1 − ρ)    The evaporation factor
A_ip       The set of ants that placed call i using path p
A_i        The set of ants that placed call i
η(d)       The fitness of decision d
Ψ(a, d)    The probability associated with ant a making decision d

α          The parameter that determines the level to which
           pheromone governs an ant's decision
β          The parameter that determines the level to which fitness
           governs an ant's decision
q0         The parameter that determines the greedy component of
           an ant's decisions
σ          The number of elite ants
m          The number of ants