
Citation 
 Permanent Link:
 http://digital.auraria.edu/AA00002177/00001
Material Information
 Title:
 Heuristics for the weighted early/tardy problem
 Creator:
 Xu, Jiang
 Place of Publication:
 Denver, CO
 Publisher:
 University of Colorado Denver
 Publication Date:
 1999
 Language:
 English
 Physical Description:
 vii, 81 leaves : illustrations (some color) ; 28 cm
Subjects
 Subjects / Keywords:
Heuristic programming -- Evaluation ( lcsh )
Genetic algorithms ( lcsh )
Scheduling ( lcsh )
 Genre:
 bibliography ( marcgt )
theses ( marcgt ) nonfiction ( marcgt )
Notes
 Bibliography:
Includes bibliographical references (leaves 80-81).
 Thesis:
 Computer science
 General Note:
 Department of Computer Science and Engineering
 Statement of Responsibility:
 by Jiang Xu.
Record Information
 Source Institution:
 University of Colorado Denver
 Holding Location:
 Auraria Library
 Rights Management:
 All applicable rights reserved by the source institution and holding location.
 Resource Identifier:
 42612531 ( OCLC )
ocm42612531
 Classification:
 LD1190.E52 1999m .X8 ( lcc )

HEURISTICS
FOR THE WEIGHTED EARLY/TARDY PROBLEM
by
Jiang XU
B.S., Shanghai Jiao Tong University, 1997
A thesis submitted to the
University of Colorado at Denver
in partial fulfillment
of the requirements for the degree of
Master of Science
Computer Science
1999
This thesis for the Master of Science
degree by
Jiang XU
has been approved
by
Ross McConnell
Jiang XU (M.S., Computer Science)
Heuristics for the Weighted Early/Tardy problem
Thesis directed by Associate Professor William J. Wolfe
ABSTRACT
This thesis presents an analysis of the weighted early/tardy problem and an
evaluation of several heuristics, which are EDD (earliest due date), WSPT (weighted
shortest processing time), WLPT (weighted longest processing time), Pairwise
Interchange and Genetic Algorithm. The EDD, WSPT and WLPT are simple
heuristics that provide the optimal solutions only under very restrictive conditions.
The goal of using heuristics is to find near optimal solutions in an efficient manner.
The heuristics were tested on a multitude of problem instances with up to twenty jobs
per problem and a wide range of due date distributions. The heuristics were also
tested against problems with only eight jobs so that they could be compared with
the optimal solutions, which are computed by exhaustive search. The results of the experiments
indicate:
1. The performance of the Pairwise Interchange heuristic is strongly
dependent on the initial sequence.
2. The Pairwise Interchange heuristic performs better than the simple
heuristics (EDD, WSPT, WLPT).
3. The Pairwise Interchange using EDD as the initial sequence performs
much better than the other Pairwise Interchanges when the due dates
are spread out enough over the time line.
4. The Genetic Algorithm can provide a better solution than the Pairwise
Interchange heuristic, but normally takes a longer period of time.
5. With enough generations, the solutions provided by the Genetic
Algorithm using different recombination strategies are the same or
very close to one another.
6. With a small number of generations, the Genetic Algorithm using Q
crossover performs much better than the PMX and Uniform
crossovers when the due dates are spread out enough over the time line.
CONTENTS
Chapter
1. Introduction..............................................................1
2. Problem Statement.........................................................7
3. Optimal Slide Algorithm..................................................13
4. Simple heuristics........................................................17
5. Pairwise Interchange.....................................................23
5.1 Experiment: Comparison of Simple heuristic and Pairwise Interchange.....29
5.2 Experiment: Comparison of the PI using different initial sequence......35
5.3 Experiment: Comparison of the heuristics and the optimal solutions.....40
6. Genetic Algorithm........................................................43
6.1 Other recombination strategies..........................................50
6.2 Experiment: Comparison of Genetic Algorithm and Pairwise Interchange...53
6.3 Experiment: Comparison of GA using different recombination strategies..60
6.4 Experiment: Comparison of Genetic Algorithm and Optimal solutions......75
7. Conclusion...............................................................78
References..................................................................80
FIGURES
Figure
2.0 The problem model.........................................................8
2.1 The due date distribution factor..........................................9
3.1 Example of the implementation of the Optimal Slide Algorithm.............15
4.1 Example of the worst case of the WSPT and WLPT heuristic.................21
5.1 Average Q of the WSPT and the PI using WSPT as the initial sequence......32
5.2 Average Q of the WLPT and the PI using WLPT as the initial sequence......33
5.3 Average Q of the EDD and the PI using EDD as the initial sequence........34
5.4 Average Q of the PI using different initial sequence.....................38
5.5 Average improvement by using PI procedure on different initial sequence....39
5.6 Average Q of the heuristics and the optimal solutions....................42
6.0 Q crossover..............................................................47
6.1 PMX crossover............................................................51
6.2 Uniform crossover........................................................52
6.3 Average Q value of the GA and the PI.....................................58
6.4 Average difference between the PI and the GA.............................59
6.5 Average Q value of the GA with 1000 generations..........................63
6.6 Average Q value of the GA with 1500 generations..........................64
6.7 Average Q value of the GA with 5000 generations..........................65
6.8 Relationship between Final solutions and Generations (ddd factor =0)....67
6.9 Relationship between Final solutions and Generations (ddd factor =50)...68
6.10 Relationship between Final solutions and Generations (ddd factor =100)..71
6.11 Relationship between Final solutions and Generations (ddd factor =150)..72
6.12 Relationship between Final solutions and Generations (ddd factor =200)..73
6.13 Average Q value of GA with 50 Generations...............................74
6.14 Average Q value of the GA and the optimal solution......................77
1.
Introduction
Scheduling problems occur in many economic activities. In Industrial
Engineering, Management Science, Operations Research and in many other fields,
there are usually a great number of tasks that need to be accomplished with a limited
amount of resources in short periods of time. Scheduling is the process of organizing,
choosing and timing tasks along with the necessary resources to carry out all the
activities needed to produce the desired output at the specified time [8]. One always
strives to achieve an optimal schedule, which will result in completion of one's tasks
in the timeliest manner. The optimal schedule usually results in maximum savings of
time and money. However, a typical scheduling problem can comprise several
different and concurrent goals, which compete for a finite amount of resources.
Combining these goals and resources often leads to exponentially growing numbers
of possible solutions. Therefore, it can be difficult or impossible to find the exact
solutions in polynomial time. If an optimal schedule is not achievable in a reasonable
amount of time, then an approximate solution must be found to minimize the
penalties. Heuristic methods can be used to offer near optimal solutions. They
usually take a small amount of time while providing good solutions. Most heuristics
are combined with appropriate variable abstractions such as earliest due
date, weighted shortest processing time and weighted longest processing time, and the
heuristics will be very effective under specific conditions. On the other hand, such
abstractions may constrain the performance and make the heuristics less flexible.
Until fairly recently, scheduling problems focused on single performance
measures. That is, the problems involved penalty functions that are nondecreasing
functions of completion times. These measures were referred to as
regular measures, which meant that earlier was always better. In many articles,
authors called such regular measures mean flowtime, mean lateness or mean
tardiness. The time spent by a job in the system is defined as its flowtime. For job i,
its flowtime is Fi = Ci - ri, where Ci is its completion time and ri is its ready time (= 0 for the
static case). For a problem with n jobs, the mean flowtime is F = (F1 + F2 + ...
+ Fn)/n. The job lateness is defined as Li = Ci - di, where Ci is the job completion time
and di is the job due date. For a problem with n jobs, the mean lateness is defined as
L = (L1 + L2 + ... + Ln)/n. The job tardiness is defined as Ti = max{0, Li}, where Li is
the job lateness. For a problem with n jobs, the mean tardiness is T = (T1 + T2
+ ... + Tn)/n. The mean tardiness criterion has been a standard way to measure
conformance to due dates, but it ignores the consequences of jobs completing early.
Baker discussed such measures in his book [19] and provided many optimal and
efficient algorithms.
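As a concrete illustration of the regular measures defined above, the following sketch (not from the thesis; the function names are ours) computes the mean flowtime, mean lateness and mean tardiness for a small static instance:

```python
# Illustrative sketch of the regular measures defined above.
def mean_flowtime(C, r):
    # Flowtime of job i is F_i = C_i - r_i.
    return sum(c - ri for c, ri in zip(C, r)) / len(C)

def mean_lateness(C, d):
    # Lateness of job i is L_i = C_i - d_i (negative when early).
    return sum(c - di for c, di in zip(C, d)) / len(C)

def mean_tardiness(C, d):
    # Tardiness of job i is T_i = max{0, L_i}.
    return sum(max(0, c - di) for c, di in zip(C, d)) / len(C)

# Three jobs in the static case (all ready times = 0):
C, r, d = [2, 5, 9], [0, 0, 0], [3, 4, 10]
F = mean_flowtime(C, r)    # 16/3
L = mean_lateness(C, d)    # -1/3 (early completions pull the mean down)
T = mean_tardiness(C, d)   # 1/3 (only job 2 is tardy, by 1)
```

Note how the mean lateness can be negative while the mean tardiness never is, which is why tardiness became the standard regular measure of due-date conformance.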
However, penalties are also associated with products that are completed early.
For example, in Industry, if the products are finished early and cannot be shipped
before their due dates, they can still cost money to be held in finished goods
inventories until their due dates. Similar to the long-range concerns of Japanese just-in-
time (JIT) production methods, the earliness should be discouraged as well as
tardiness. Both earliness and tardiness penalties create nonregular performance
measures. The importance of the nonregular measure is that the set of nondelay
schedules does not constitute a dominant set [3]. A nonregular measure may be
improved by inserting job idle time; thus, the search for an optimal solution is no
longer restricted to a finite set such as the nondelay schedules [3]. This in turn leads
to new developments in the field of scheduling research.
In this thesis, the problem of scheduling jobs to minimize the total earliness
and tardiness penalties by using heuristics is discussed. Several heuristics will be
introduced, which are EDD (earliest due date), WSPT (weighted shortest processing
time), WLPT (weighted longest processing time), Pairwise Interchange and Genetic
Algorithm. The Pairwise Interchange and the Genetic Algorithm will be emphasized
in this thesis. The above heuristics are analyzed and are compared to each other by
using a large number of experiments. The Pairwise Interchange and the Genetic
Algorithm heuristic are analyzed to discover the factors that will significantly affect
their performance. These studies are based on the results of the experiments below.
There are many publications that discuss the Weighted Early/Tardy
scheduling problem. Some of these works address specific cases, which are
associated with limited conditions and provide the algorithms that can produce the
optimal solutions. Some of them also consider problems with broad conditions and
provide the heuristics for all conditions. Baker reviewed the state of the Early/Tardy
problem in his article [1]. He introduced the E/T model and described many
variations of the problem. Based on a model that contains one machine and a
common due date he added many different features such as distinct due dates and
parallel machines. He also listed many different job parameters and various penalty
functions. A special case of the problem was discussed in article [10], in which all
weights are equal and the jobs have one common due date. Kanet [10] presented an
efficient procedure for minimizing the maximum penalty when the common due date
is greater than the sum of all the jobs' processing times. The algorithm is very simple
and fast, and Kanet also provided a proof of it. In article [18], Garey proved
that the problem of finding solutions of minimum penalty is NP-Complete, and
provided an optimal algorithm for the fixed job sequence case, which runs in
O(N log N) time. The time complexity of an optimal timing algorithm is an important
factor in solving the problem, because it may have to run many times before
an optimal solution is found. Other authors have also given various heuristics with Linear
Programming and Dynamic Programming solutions for the problem. In article [3],
Fry presented a heuristic based on the Adjacent Pairwise Interchange algorithm to
minimize mean absolute lateness (MAL). Using Fry's algorithm, the optimal timing
can be determined through a linear program. In article [2], Yano analyzed the special
case where job weights are proportional to the processing time. He developed and
compared several heuristic procedures and the experimental data of the study showed
that heuristics perform well. In article [16], Peng Si Ow investigated seven heuristics,
including a variation of Beam Search, and then ran tests on 1440 problems to
compare these heuristics and their performance.
The next section will provide a brief problem statement and then will discuss
the complex aspects of the problem along with exploring the more moderate portions
of the problem.
Then in section 3, the Optimal Slide Algorithm will be introduced, which can
provide the optimal solution for a fixed job sequence. The Optimal Slide Algorithm
is used in all the heuristics to give the exact Q value for a given job sequence.
In section 4, some simple heuristics will be introduced: WSPT,
WLPT and EDD. These simple heuristics can give the optimal solutions under
certain conditions that are described below.
Section 5 will introduce the Pairwise Interchange heuristic. A description will
be given of the Pairwise Interchange procedure and it will be analyzed by the results
of the experiments. It will then be compared to the simple heuristics that are
discussed in section 4 to show that the Pairwise Interchange procedure can greatly
improve the simple heuristics (EDD, WSPT, and WLPT).
In section 6, the Genetic Algorithm will be introduced. As in section 5, a
description of the Genetic Algorithm is given and the results of the experiments are
then analyzed. A comparison of the Genetic Algorithm with the Pairwise Interchange
is given to illustrate that the Genetic Algorithm can achieve better results.
A summary of these heuristics is given in the last section.
2.
Problem Statement
The problem, which is studied in this thesis, is a single machine
nonpreemptive Weighted Early/Tardy scheduling problem with the following
parameters:
The number of jobs:           N
A set of jobs:                {JOBi | 1 <= i <= N}
Release times:                all jobs are available at time = 0
Processing times:             Pi (1 <= Pi <= Pmax)
Due dates:                    Di (1 <= Di <= Dmax)
Due date distribution factor: x; all due dates are random over [0, x]
Earliness weights:            WEi > 0
Tardiness weights:            WTi > 0
Completion times:             Ci
Penalty function:             Q = SUM of qi over all jobs, where
    qi = WEi |Ci - Di| if Ci < Di, which means JOBi is finished early,
    qi = WTi |Ci - Di| if Ci > Di, which means JOBi is finished late,
    qi = 0 if Ci = Di, which means JOBi is finished on time.
Time line:                    [0, Tmax] where Tmax = N(2Pmax - 1)
Figure 2.0: The problem model. Each job i has a due date (Di), a completion
time (Ci), an earliness weight (WEi) and a tardiness weight (WTi). If the job is
finished on time, the penalty is zero. If the job is finished before its due
date, the penalty is WEi |Ci - Di|. If the job is finished after its due date,
the penalty is WTi |Ci - Di|. In this thesis, the problem is simplified so that all
the earliness weights are equal to the tardiness weights and all of them are equal
to one.
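The penalty function above can be sketched in code as follows (an illustrative rendering of the model, not the thesis's implementation; the function name is ours, and unit weights reproduce the simplified problem):

```python
def total_penalty(C, D, wE, wT):
    """Q = sum of q_i over all jobs, per the model above (sketch)."""
    Q = 0
    for c, d, we, wt in zip(C, D, wE, wT):
        if c < d:            # finished early
            Q += we * (d - c)
        elif c > d:          # finished late
            Q += wt * (c - d)
        # finished on time: q_i = 0
    return Q

# Five jobs with unit weights, as in the simplified problem:
C = [2, 5, 11, 15, 18]; D = [1, 4, 12, 15, 16]
Q = total_penalty(C, D, [1] * 5, [1] * 5)   # 1 + 1 + 1 + 0 + 2 = 5
```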
The due date distribution factor is used in the experiment model. The due
date distribution factor is defined to represent the % of the time line used for random
due date placement. All the due dates are in the range [0, x], where x is the due date
distribution factor. For a due date distribution factor of 200, x = 200, which means
the due dates are random over the entire time horizon from 0 to 200. For a due date
distribution factor of 0, x = 0, which means all the due dates are equal to 0.
Figure 2.1: The due date distribution factor. The due date distribution factor is
used in the experimental model and ranges from 0 to 200. For each ddd factor x
(0 <= x <= 200), each job's due date is randomly generated over [0, x]. When
x = 0, all the due dates are equal to 0.
Since the average processing time is kept the same, decreasing the due date
distribution factor intensifies the level of conflict between the jobs, making the
problem more difficult. In the experiments below, the due dates are randomly
generated, so it is unlikely for the due dates to be equally spaced; in fact,
distinct jobs may have the same due date.
In the problem model, the earliness weights are treated like storage fees
imposed on products that cannot be shipped on time. In contrast, tardiness weights
are treated like penalty costs for products that cannot be finished on time.
Nonpreemption means that once a job is started it cannot be interrupted and must be
executed to its completion, but idle time is allowed between jobs. The time line
([0, Tmax]) means that all jobs must be finished before Tmax. The length of the time
line (Tmax = N(2Pmax - 1)) was chosen to ensure that all jobs can fit on the schedule
(with some room to spare).
In this thesis, the Weighted Early/Tardy problem is simplified, in which the
earliness weights are equal to the tardiness weights and all of them are equal to one.
The objective is to find optimal solutions that minimize the total penalty, which is
represented by the Q value.
The problem has a complex component and a more moderate one. The
moderate element is to achieve the best start time for each job in a given job
sequence. Garey [18] presented an optimal algorithm for the fixed job sequence that
can be done in O(N log N) time. A variation of Garey's algorithm is implemented in
this thesis; it is called the Optimal Slide Algorithm.
The complex component of the problem is to find the optimal job sequence,
which means to find the ordering of the jobs on a time line, which can provide the
lowest cumulative penalty. Since there are N jobs in the problem, the total number of
the distinct solutions to the nonpreemptive scheduling problem is therefore N!, which
is the number of the different permutations of N elements. An exhaustive search will
guarantee that the optimal solution for any early/tardy problem will be found. In an
exhaustive search, each possible sequence's Q value is scored and the sequence with
the lowest Q value is absolutely the optimal solution. For a problem that is small, an
exhaustive search will quickly find the optimal solution. For example, when N = 6, it
is only necessary to compare 720 sequences. However, with a larger size problem
there will be much greater difficulties using the exhaustive search method. Suppose
N = 20; then the number of distinct solutions is 20! = 2.432902008177e+18. This
means that the exhaustive search method will evaluate a very large number of
sequences and it will therefore take the computer an exponentially longer run time to
find the optimal solution. Using a computer with 200 MHz CPU, it may take 386
years to get the optimal solution by using the exhaustive search. When N = 25, there
are 1.551121004333e+25 possible sequences, so the run time for a problem with
25 jobs is 6,375,600 times greater than for a problem with 20 jobs. Suppose
there is a computer A1 that can calculate the optimal solution for the problem with
20 jobs in an acceptable time. However, one would need a much faster computer
B1, whose speed is 6,375,600 times that of A1, to solve the problem that
has 25 jobs in the same acceptable amount of time as the original 20-job problem.
Therefore, it is clear that the general problem has an exponentially increasing number
of possible sequences to consider, and that achieving an acceptable solution time will
be a major factor to consider, if even achievable at all. Thus, the exhaustive search
method is an inefficient algorithm.
There is a large class of problems similar to the scheduling problem that is
previously discussed. One example would be the traveling salesman problem (TSP).
These problems are called NP-complete problems [11], and none of them have
known polynomial-time solutions. If one could find a polynomial-time algorithm to
solve one of these problems, one would then have a polynomial-time algorithm for every
problem in this class. However, it is not known whether such an algorithmic
solution exists for these types of problems. Therefore, until now only
exhaustive search methods guarantee that the optimal solutions for these types of
problems will be found. Nevertheless, exhaustive search is an inefficient use of the
computer due to the amount of time expended on the calculation. Therefore, it is
more efficient to use a heuristic method to achieve approximations to the optimal
solutions in an acceptable time frame.
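The exhaustive search just described can be sketched as follows (a minimal illustration of ours; the toy back-to-back scoring function stands in for the thesis's Optimal Slide timing step):

```python
from itertools import permutations

def exhaustive_search(jobs, score):
    """Score all N! sequences; guaranteed optimal but exponential in N."""
    best_seq, best_q = None, float("inf")
    for seq in permutations(jobs):
        q = score(seq)
        if q < best_q:
            best_seq, best_q = seq, q
    return best_seq, best_q

# Toy scoring of our own: run jobs back to back from t = 0 and charge
# unit earliness/tardiness penalties (not the thesis's optimal timing).
def make_score(p, d):
    def score(seq):
        t, q = 0, 0
        for i in seq:
            t += p[i]
            q += abs(t - d[i])
        return q
    return score

p = {0: 2, 1: 3, 2: 5}; d = {0: 2, 1: 10, 2: 7}
seq, q = exhaustive_search(list(p), make_score(p, d))   # -> (0, 2, 1) with q = 0
```

Even this tiny driver makes the combinatorial explosion visible: the loop body runs N! times, which is why the thesis only uses exhaustive search on eight-job problems.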
3. The Optimal Slide Algorithm
The Optimal Slide Algorithm is used to give an optimal solution for a fixed
job sequence. The algorithm is based on the same idea as Garey's algorithm in [18]. The
difference is that Garey's algorithm works like a "shift back" procedure, whereas this one
works like a "slide ahead."
The procedure of the Optimal Slide Algorithm is as follows:
At the beginning, all the jobs are shifted to the far left on the time line
and no idle time is inserted between jobs. Thus JOB 1 starts at 0, and JOB i starts
at P1 + P2 + ... + Pi-1. Let each job's start time be Si and its completion time Ci,
where Ci = Si + Pi. A special weight is used for the Optimal Slide Algorithm:
If Di > Ci then Wi = +WEi (JOBi is early)
If Di <= Ci then Wi = -WTi (JOBi is on time or late)
Now, a cluster has been created with all jobs in it. The first job in the cluster is
JOB 1 and the last job is JOB N. Each job's weight is updated using the rules above.
After updating the weights, one begins to add the weights of the jobs in the
cluster. Two cases may occur when the weights are added:
Case 1: The current sum is negative or equal to zero and the weight of the next
job in the cluster is positive. That is, W1 + ... + Wk <= 0 and Wk+1 > 0, summing
over the jobs in the current cluster. If these conditions are met, the current cluster is
broken into two parts. The first part, which includes all jobs up to and including
JOB k, is left in place and will not be used again. The second part, which includes
all jobs from JOB k+1 on, is then used as a new cluster.
Case 2: The sum of all weights in the cluster is positive. For example, when
M jobs are in the cluster, W1 + ... + WM > 0. When this case occurs, the entire
cluster is shifted to the right. The amount shifted is the minimum difference between
the due date and the completion time over the early jobs in the cluster,
Min(Di - Ci) taken over the jobs with Di > Ci.
After sliding the cluster (case 2) or starting a new cluster (case 1), the process
begins again on the current cluster: the weights are updated and the summation
procedure is repeated, until the sum of the weights of all jobs in the cluster is not
positive and case 1 does not occur during the summation procedure.
Figure 3.1 shows an example of the Optimal Slide Algorithm. There are five
jobs in the example. All the earliness weights and the tardiness weights are equal to
1. The processing time and the due date of each job is shown in the following table:
                  Job1  Job2  Job3  Job4  Job5
Processing time     2     3     5     4     3
Due date            1     4    12    15    16
[Figure 3.1 diagram: the five jobs drawn as boxes packed on the time line, each
marked with its special weight (+1 or -1) above the box and its due date (d1 - d5)
below; the three panels correspond to the three steps described below.]
Step 1: Shift all jobs to the far left on the time line. No idle time is inserted.
Mark each job with a special weight, which is calculated as:
If Di > Ci then Wi = +WEi
If Di <= Ci then Wi = -WTi
The cluster includes all jobs (Job1 - Job5).

Step 2: Because the sum of the first two jobs' weights is negative
(w1 + w2 = -1 - 1 = -2) and Job3's weight is positive (w3 = +1), the cluster is
broken. The first two jobs are left in their positions and the new cluster includes
the remaining jobs. Calculate and mark the special weight of each job in the new
cluster.

Step 3: Because the sum of all weights in the new cluster is positive
(+1 + 1 - 1 = +1), the entire cluster is shifted to the right. The amount shifted is
the minimum difference between the due date and the completion time
(min = d4 - c4 = 1). After sliding the cluster, recalculate the weight of each job
in the cluster. Since the sum of all weights in the cluster is now negative
(w3 + w4 + w5 = +1 - 1 - 1 = -1), the algorithm stops, and this is the optimal
solution for this fixed sequence.
Figure 3.1: Example of the implementation of the Optimal Slide Algorithm.
Each job is marked with a special weight above the job box. All the earliness
weights and the tardiness weights are equal to 1. The processing time and the
due date of each job are shown in the table above.
Like Garey's algorithm, the Optimal Slide Algorithm can be done in
O(N log N) time.
The algorithm is like a train moving on the time line; at each stop, it may
leave some carriages behind and continue with the rest.
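One possible rendering of the procedure in code is sketched below. This is our own sketch, written quadratically for clarity rather than in the O(N log N) form the thesis implements; the weight signs follow the convention recovered from Figure 3.1 (+WEi for early jobs, -WTi otherwise).

```python
def optimal_slide(P, D, wE, wT):
    """Sketch of the cluster/slide procedure described above (quadratic
    for clarity; the thesis version, like Garey's, is O(N log N))."""
    n = len(P)
    # Step 1: pack all jobs to the far left, no idle time inserted.
    C, t = [], 0
    for p in P:
        t += p
        C.append(t)
    start = 0                                  # first job of the current cluster
    while start < n:
        # Special weights: +wE if strictly early, -wT if on time or late.
        W = [wE[i] if D[i] > C[i] else -wT[i] for i in range(n)]
        # Case 1: a non-positive prefix sum followed by a positive weight
        # breaks the cluster; the prefix is left in place for good.
        s, split = 0, None
        for k in range(start, n - 1):
            s += W[k]
            if s <= 0 and W[k + 1] > 0:
                split = k + 1
                break
        if split is not None:
            start = split
            continue
        # Case 2: the whole cluster sums positive -> slide right by the
        # smallest early slack min(D_i - C_i) over early jobs in the cluster.
        if sum(W[start:]) > 0:
            shift = min(D[i] - C[i] for i in range(start, n) if D[i] > C[i])
            for i in range(start, n):
                C[i] += shift
        else:
            break                              # not positive, no break point: done
    return C

# Worked example from Figure 3.1 (unit weights):
P = [2, 3, 5, 4, 3]; D = [1, 4, 12, 15, 16]
C = optimal_slide(P, D, [1] * 5, [1] * 5)      # -> [2, 5, 11, 15, 18]
```

On the Figure 3.1 instance this reproduces the hand trace: jobs 1 and 2 are left at completions 2 and 5, and the cluster of jobs 3-5 slides right by one unit, giving Q = 1 + 1 + 1 + 0 + 2 = 5.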
4. Simple Heuristics
There are many simple and fast heuristics, which can provide optimal
solutions under certain conditions, but can be very weak under certain other
conditions. Three simple heuristics are introduced here, which are EDD (earliest due
date), WSPT (weighted shortest processing time), and WLPT (weighted longest
processing time). They are used for comparison purposes with the Pairwise
Interchange and the Genetic Algorithm, which are presented in the following
sections. In the definitions below, the Optimal Slide Algorithm is applied to each
of the heuristics to provide the optimal timing for the resulting job sequence.
EDD (earliest due date): The EDD heuristic sequences the jobs on the time line
according to the rule:
Di <= Di+1
The EDD heuristic is optimal when the due dates are sufficiently spaced that
each job can be placed at its due date without conflict. The proof is obvious. For the
static weighted tardiness problem (Twt = SUM of WTi max{0, Ci - Di}), if any sequence can
produce Twt = 0, EDD is optimal [8]. In the static problem of minimizing the
maximum lateness, there is an optimal policy satisfying the EDD rule [8].
Minimizing the maximum lateness is also very important if all jobs are going to be
needed for the same project somewhere downstream. Using the EDD rule helps to
prevent the embarrassment of very large tardiness. However, as discussed in [8], the
EDD rule is often weaker with respect to average tardiness. Since the EDD rule
minimizes the maximum job tardiness, it should do better for lightly loaded job
sequences; that is, the EDD heuristic can give very good results for low-tardiness
problems with wide due date ranges.
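A minimal sketch of the EDD rule (the function name is ours; ties between equal due dates may be broken arbitrarily):

```python
def edd_sequence(jobs, due):
    # EDD: order jobs by nondecreasing due date; sorting is O(N log N).
    return sorted(jobs, key=lambda i: due[i])

due = {0: 9, 1: 3, 2: 6}
order = edd_sequence([0, 1, 2], due)   # -> [1, 2, 0]
```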
WSPT (weighted shortest processing time): The WSPT heuristic sequences
the jobs on the time line according to the rule:
WTi / Pi >= WTi+1 / Pi+1
In the experiments, all the tardiness weights are the same, so the rule is
simplified as:
Pi <= Pi+1
The WSPT heuristic is optimal when the due dates are such that all jobs must
be late. The WSPT heuristic can provide the optimal solutions for such problems in
which all due dates are equal to zero or smaller than the shortest job processing time.
The proof is shown below:
Theorem: In the Weighted Early/Tardy problem, when all the due dates are
smaller than the shortest processing time, the WSPT heuristic can provide the optimal
solutions.
Proof: Assume there are N jobs in the problem; each job has a processing time pi, a
due date di, an earliness weight wEi and a tardiness weight wTi, with di < min(pi) (i = 1, 2, ..., N).
All the jobs are late, so the earliness weights are not used; and since all the jobs are
late, no idle time is inserted between jobs.
Suppose there is an optimal solution S, which is not the WSPT sequence.
That is, somewhere in the sequence there must exist an adjacent pair of jobs, JOB i
and JOB i+1, such that pi > pi+1. The Q value of the sequence S is:
Q = (p1 - d1) wT1 + (p1 + p2 - d2) wT2 + ... + (p1 + p2 + ... + pN - dN) wTN
  = p1 (wT1 + wT2 + ... + wTN) + p2 (wT2 + wT3 + ... + wTN) + ... + pN wTN
    - (d1 wT1 + d2 wT2 + ... + dN wTN)
Construct a new sequence S' by changing the positions of the two jobs, JOB i
and JOB i+1.
Now the Q value of the new sequence S' is:
Q' = p1 (wT1 + wT2 + ... + wTN) + ... + pi+1 (wTi + wTi+1 + ... + wTN) + pi
(wTi+1 + wTi+2 + ... + wTN) + ... + pN wTN - (d1 wT1 + d2 wT2 + ... + dN wTN)
For the sequence S, the Q value is:
Q = p1 (wT1 + wT2 + ... + wTN) + ... + pi (wTi + wTi+1 + ... + wTN) + pi+1
(wTi+1 + wTi+2 + ... + wTN) + ... + pN wTN - (d1 wT1 + d2 wT2 + ... + dN wTN)
Q' - Q = pi+1 (wTi + wTi+1 + ... + wTN) + pi (wTi+1 + wTi+2 + ... + wTN) - pi
(wTi + wTi+1 + ... + wTN) - pi+1 (wTi+1 + wTi+2 + ... + wTN)
  = pi+1 wTi - pi wTi
  = (pi+1 - pi) wTi
Since pi > pi+1, Q' - Q < 0.
Therefore, changing the positions of JOBi and JOBi+1 will decrease the Q value.
This contradicts the assumption that sequence S is an optimal solution. Thus,
for any sequence that is not the WSPT sequence, it can be improved by interchanging
an adjacent pair of jobs, which do not follow the WSPT rule. Therefore, it follows
that the WSPT sequence itself must be optimal.
For the static Twt problem, if in a sequence it is not possible to make any job
nontardy, then Twt is minimized by WSPT [8].
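The exchange argument above can also be checked by brute force on a small all-late instance (an illustrative sketch of ours, not the thesis's experiment code; with equal weights, WSPT reduces to shortest-processing-time first):

```python
from itertools import permutations

def wspt_order(P, wT):
    # WSPT: sort by nonincreasing wT_i / P_i (equivalently ascending P_i / wT_i).
    return sorted(range(len(P)), key=lambda i: P[i] / wT[i])

def tardy_cost(seq, P, D, wT):
    # All due dates lie below the shortest processing time, so every job is
    # tardy and jobs run back to back from t = 0 with no inserted idle time.
    t, q = 0, 0
    for i in seq:
        t += P[i]
        q += wT[i] * (t - D[i])
    return q

# Brute-force check of the theorem on a small all-late instance:
P, D, wT = [4, 2, 7, 3], [0, 1, 1, 0], [1, 1, 1, 1]
best = min(tardy_cost(list(s), P, D, wT) for s in permutations(range(len(P))))
wspt = tardy_cost(wspt_order(P, wT), P, D, wT)   # equals best
```

The due-date term of Q is the same for every sequence, so only the weighted completion times matter, which is exactly what the interchange argument exploits.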
WLPT (weighted longest processing time): The WLPT heuristic sequences the
jobs on the time line according to the rule:
WEi / Pi <= WEi+1 / Pi+1
Because all the earliness weights are the same in the experiments, the
rule is simplified as:
Pi >= Pi+1
The WLPT heuristic is optimal if all the jobs must be early, such as when all
due dates are equal to Tmax = N(2Pmax - 1), where Pmax is the maximum of the Pi;
then WLPT provides the optimal solution. The proof is similar to the proof used for
WSPT. For the nonregular static problems, if WLPT results in a schedule that does
not have any tardy jobs, then this sequence will be optimal among schedules with no
idle time [8].
The WLPT heuristic can be very weak when the WSPT heuristic provides a good
solution, since the WLPT heuristic can be viewed as the reverse of the WSPT
heuristic. Therefore, in cases where the WSPT heuristic provides the optimal
solution, the WLPT heuristic will provide the worst. On the other hand, when
the WLPT heuristic gives the optimal solution, the WSPT heuristic will provide the
worst. Figure 4.1 shows an example of when the WSPT and WLPT heuristics
provide the optimal solutions.
Case 1: When all due dates are equal to Tmax = 85, WLPT provides the optimal
solution while WSPT has the worst performance:
WLPT: Q = 46
WSPT: Q = 74
Case 2: When all due dates are equal to 0, WSPT provides the optimal solution while
WLPT has the worst performance:
WLPT: Q = 104
WSPT: Q = 76
Tmax = 85
Figure 4.1: Example of the worst case of the WSPT and WLPT heuristic. There
are five jobs and the length of time line is 85 (T = 85); each job is marked with its
processing time. When all due dates are equal to 0, the WSPT heuristic provides
the optimal solution where the WLPT heuristic has the worst performance. On
the other hand, when all due dates are equal to T, the WLPT heuristic provides
the optimal solution and the WSPT heuristic has its worst case.
The EDD heuristic can help to prevent lateness factors that are very large.
Since the EDD heuristic lowers the maximum tardiness, it performs better on a
lightly loaded job sequence. The WSPT heuristic performs better when all the jobs
must be late; therefore, it works better with heavily loaded job sequences. The
WLPT heuristic is the reverse of the WSPT heuristic and performs well when all
the jobs are required to be completed early. The implementation of these simple
heuristics has two parts: a sorting part and a timing part. Using the Heapsort
algorithm, the sorting part can be done in O(N lg N) time; using the Optimal Slide
Algorithm, the timing part can be done in O(N lg N) time. Therefore, all three
simple heuristics can be completed in O(N lg N) time.
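As a minimal illustration, the sorting part of the three simple heuristics can be sketched in Python. The representation of a job as a (processing time, due date) tuple and the equal weights are assumptions for this sketch; the Optimal Slide timing step is not reproduced here:

```python
# Sketch of the sorting part of the three simple heuristics.
# Each job is a (processing_time, due_date) tuple; weights are
# assumed equal, as in the thesis experiments.

def edd(jobs):
    # Earliest due date first
    return sorted(jobs, key=lambda j: j[1])

def wspt(jobs):
    # Shortest processing time first (equal weights)
    return sorted(jobs, key=lambda j: j[0])

def wlpt(jobs):
    # Longest processing time first: the reverse of WSPT
    return sorted(jobs, key=lambda j: j[0], reverse=True)

jobs = [(15, 52), (12, 42), (30, 54)]
print(edd(jobs)[0])   # (12, 42): earliest due date first
print(wlpt(jobs)[0])  # (30, 54): longest processing time first
```

Python's built-in sort runs in O(N lg N), matching the Heapsort bound quoted above.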
5. Pairwise Interchange
Having introduced several basic heuristics, the Pairwise Interchange
heuristic is now presented for creating better schedules: schedules that have an
acceptably low Q value.
The Pairwise Interchange heuristic is a neighborhood search technique. It is a
simple technique and normally requires only a small amount of time to execute.
The intuitive idea behind the Pairwise Interchange heuristic is that by swapping
pairs of jobs until no available swap improves the schedule, the result may be
the optimal solution.
The following is a brief statement of the Pairwise Interchange heuristic.
INITIALIZE
1. Initialize the system, which generates the initial job sequence.
The initial job sequence can be EDD (earliest due date rule), WSPT
(weighted shortest processing time), WLPT (weighted longest processing
time) or any other sequences such as RAN (randomly generated
sequence).
2. Use the Optimal Slide Algorithm on the initial job sequence to get the Q
value. Assume the resulting sequence is the current best solution and that
the Q value is the current best Q value.
3. Set the Circulation Control = True.
CYCLE
4. If the Circulation Control is True, then begin the Pairwise Interchange
procedure.
Consider swapping each job with each of the other jobs in the current best
solution until all possible swaps are done. Then, use the Optimal Slide
algorithm on each new sequence that resulted from the swapping to get the
Q value for each new sequence.
5. If a sequence has a Q value lower than the current best Q value, replace
the current best solution with this sequence and replace the current best Q
value with the new Q value. Then, go back to step 4. If more than one
sequence has Q value lower than the current best Q value, pick the
sequence with the lowest Q value.
If no such sequence exists whose Q value is lower than the best Q value,
set the Circulation control = False.
6. If the Circulation control is False, quit the CYCLE and print the current
best solution and the current best Q. This is the result.
Since continually swapping arbitrary pairs of jobs in the sequence can lead to all
possible sequences, the worst case of the Pairwise Interchange may need O(N!) time
to reach a solution.
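The CYCLE above can be sketched as follows. The cost function here is a stand-in (total absolute lateness with no inserted idle time), because the Optimal Slide Algorithm is not reproduced in this sketch; its Q values will therefore differ from those in the thesis figures:

```python
from itertools import combinations

def q_value(seq):
    # Stand-in cost: total absolute lateness with no inserted idle
    # time. The thesis computes Q with the Optimal Slide Algorithm;
    # that step is assumed, not reproduced here.
    t, q = 0, 0
    for p, d in seq:  # each job is a (processing_time, due_date) tuple
        t += p
        q += abs(t - d)
    return q

def pairwise_interchange(seq):
    best, best_q = list(seq), q_value(seq)
    while True:  # Circulation Control
        # Step 4: evaluate every possible swap of the current best
        cands = []
        for i, j in combinations(range(len(best)), 2):
            cand = list(best)
            cand[i], cand[j] = cand[j], cand[i]
            cands.append((q_value(cand), cand))
        # Step 5: keep the lowest-Q sequence only if it improves
        q, cand = min(cands, key=lambda c: c[0])
        if q >= best_q:
            return best, best_q  # step 6: no improving swap exists
        best, best_q = cand, q
```

Note that each cycle evaluates all swaps and takes the single best one, rather than accepting the first improving swap, which matches the procedure stated in step 5.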
The Pairwise Interchange heuristic that is studied in this thesis is different
from the Adjacent Pairwise Interchange (API), which was introduced by Fry [3]. It is
also different from the Pairwise Interchange procedure that was introduced by Yano [2].
In article [3], the Adjacent Pairwise Interchange procedure states that if two adjacent
jobs (i, j) exist such that interchanging i and j results in a reduction of total absolute
lateness, the switch is then made and another pair is considered. The evaluation of
each pair of jobs for switching requires the solution of a linear program [3]. In
article [2], Yano's Pairwise Interchange procedure is used to improve the heuristic
solutions. Yano's Pairwise Interchange procedure requires that each job swap with
each of the other jobs in the sequence until a swap produces a better schedule.
All three heuristics (API, Yano's PI and the PI discussed in this thesis) are types
of neighborhood search strategies, but there are still differences among them.
The Adjacent Pairwise Interchange only considers swapping the positions of
sequential neighbors, whereas the Pairwise Interchange discussed in this thesis
considers all possible job swaps. In addition, Fry's API uses linear programming,
whereas the Pairwise Interchange studied in this thesis uses dynamic programming to
consider swapping arbitrary jobs over all of the sequence. Yano's Pairwise
Interchange procedure swaps each job with each of the other jobs in the sequence,
and as soon as such a swap produces a better schedule the process stops.
However, the Pairwise Interchange studied here considers swapping each job with
each of the other jobs in the sequence until all possible swaps are done. Then, a
better schedule is picked and the process continues running until no
better schedule is generated. This strategy gives more chances to find a better
schedule during each cycle and therefore in some cases may perform somewhat better
than Yano's Pairwise Interchange.
The following experiment model is used to investigate the quality of the
Pairwise Interchange heuristic for problems with up to twenty jobs. The experiment
model is also used for later simulations of the Genetic Algorithm heuristic.
The problem size is twenty jobs (N = 20), and the processing time for
each job is randomly generated between 1 and 10, so that 1 <= Pi <= 10. The due dates
are chosen uniformly over the interval [0, x], where x varies from 0 to 200. For this
purpose, a due date distribution factor (ddd factor), defined earlier (described in
section 2: Problem Statement), represents the percentage of the time line used for
random due date placement. For each due date distribution factor, one hundred
Early/Tardy problems are randomly generated, so that the average performance of
the heuristics can be studied.
In sections 5.1, 5.2 and 5.3, different experimental results will be presented.
The experimental results indicate that, given a sequence, the Pairwise
Interchange procedure can produce a better solution than the initial sequence. The
nature of the Pairwise Interchange is simple; it is a kind of exchange method, which
can also be viewed as a search strategy. It is better than simple heuristics such
as EDD, WSPT and WLPT because it provides more opportunities to improve the
schedule by interchanging jobs in a given schedule. However, the experimental
results also indicate that the performance of the Pairwise Interchange is strongly
dependent on the initial sequence. In the experiments, EDD is shown to be a good
initial sequence when the due date distribution factor is large. Conversely, WSPT
proves to be a good initial sequence when the due date distribution factor is small.
One thing must be mentioned here. Although the experimental results indicate
that the Pairwise Interchange may greatly improve a given schedule and generally
performs well on small problems, even when the problem size is as small as N = 3 it
may still not produce the optimal solution. For example, consider the following
three jobs:

                    Job 1   Job 2   Job 3
  Due date            52      42      54
  Processing time     15      12      30
If the Pairwise Interchange uses EDD or WSPT as the initial sequence, then
the optimal solution will not be found. The initial sequence generated by EDD or
WSPT is 2 -> 1 -> 3; after the Pairwise Interchange procedure, the solution is still
2 -> 1 -> 3, the same as the initial sequence. However, the optimal solution is
3 -> 2 -> 1. This is because:
1. The sum of all the processing times is greater than the largest due date, so
there will be no idle time inserted between any two jobs. In this
example, the sum of all the processing times is 57 and the largest due date
is 54. Therefore, no job will be finished later than 57. The due dates of
JOB 1 and JOB 3 are close to 57, so if JOB 1 or JOB 3 is put in the last
position of the sequence, the total penalty may be reduced.
2. The processing time of JOB 3 is much longer than that of any of the other
jobs, and is even greater than the sum of the other two processing times.
Therefore, if JOB 3 is put in the first position of the sequence, the total
penalty may be greatly reduced.
3. Also, if JOB 3 is put in the first position of the sequence and JOB 2 is put
just after JOB 3, JOB 2 is finished exactly on time (at 30 + 12 = 42), which
makes the penalty for JOB 2 equal to 0.
However, if the EDD or the WSPT is chosen as the initial sequence, the
first cycle of the swapping procedure produces the following sequences: 1 -> 2 -> 3,
3 -> 1 -> 2 and 2 -> 3 -> 1; none of them has a Q value lower than the initial
sequence. Therefore, the Pairwise Interchange reaches equilibrium quickly and
presents the initial sequence as the solution, and the optimal solution 3 -> 2 -> 1
can never be achieved.
5.1 Experiment: Comparison of Simple Heuristic
and Pairwise Interchange
In the experiments of the Pairwise Interchange heuristic, a number of different
initial sequences for the heuristic were tested to see how the initial sequence would
affect the results. EDD, WSPT, WLPT and a Random sequence were chosen as the
initial sequences for the experiments. As mentioned before, EDD, WSPT and WLPT
are simple, fast heuristics that can provide optimal solutions under certain conditions.
In the experiments, these simple heuristics are first used for comparison purposes to
see how the Pairwise Interchange procedure can improve the results of the simple
heuristics.
Figure 5.1 shows the comparison of the WSPT heuristic and the Pairwise
Interchange using WSPT as the initial sequence. The average improvement made by
using the Pairwise Interchange procedure on the WSPT heuristic is about 15%. When
the due date distribution factor is very small, the Pairwise Interchange does not
improve much on the WSPT heuristic. This is because WSPT gives better solutions
when all the due dates are on a very tight time line. In particular, when all the due
dates are zero or are smaller than the shortest processing time, the WSPT heuristic
gives an optimal solution, and in that case the improvement is exactly 0. However,
when the due date distribution factor becomes larger, the Pairwise Interchange
procedure improves WSPT immensely. This is because WSPT does not perform as
well when the due dates are spread over a larger range.
Figure 5.2 shows the comparison of the WLPT heuristic and the Pairwise
Interchange using WLPT as the initial sequence. The results indicate that the
Pairwise Interchange procedure greatly improves the WLPT at any given due date
distribution factor, and the average improvement is about 18%. This is because in the
experiments the due dates are random over [0, x], but the WLPT heuristic only
performs well when all the jobs are intended to be finished early. The WLPT heuristic
can give the optimal solution when all the due dates are equal to Tmax, where Tmax =
N(2Pmax - 1) and Pmax = maximum{ Pi }; therefore, when all the due dates are very
tight on the given time line and over the interval [x, Tmax] where x is close to Tmax,
WLPT should perform much better.
Figure 5.3 is the comparison of the EDD heuristic and the Pairwise
Interchange using EDD as the initial sequence. The results indicate that when the due
date distribution factor is small, the Pairwise Interchange procedure improves the
EDD more than when the due date distribution factor is large. This is
because when the due date distribution factor is large, the due dates are spread over a
wide range on the time line; therefore, more chances are created for the occurrence of
the low-tardiness problem. The EDD heuristic can produce better solutions for the
low-tardiness problems when a wide range of due dates occurs. In particular, when
the due dates have sufficient space between them, so that each job can be placed at its
scheduled due date, the EDD heuristic will give a 0 penalty. Therefore, the
average improvement achieved by using the Pairwise Interchange procedure on the EDD
is about 9%, much smaller than the improvement on the WSPT and the WLPT.
Pairwise (Using WSPT as initial sequence) vs WSPT
Figure 5.1: Average Q value of the WSPT and the Pairwise Interchange using WSPT as the initial sequence. For each
problem, randomly generate 20 jobs. For each job, the processing time is generated between 1 and 10. The due date
distribution factor is between 0 and 200. For each due date distribution factor, randomly generate 100 problems and get
the average Q. The average improvement made by using the Pairwise Interchange procedure on the WSPT heuristic is
about 15%. When the due date distribution factor is very small, the Pairwise Interchange does not improve much
compared to the WSPT heuristic. Especially when the due date distribution factor is 0 or 1, the improvement is 0. This
is because WSPT can provide the optimal solution when all due dates are smaller than the shortest processing time.
Figure 5.2: Average Q value of the WLPT and the Pairwise Interchange using WLPT as the initial sequence. For each
problem, randomly generate 20 jobs. For each job, the processing time is generated between 1 and 10. The due date
distribution factor is between 0 and 200. For each due date distribution factor, randomly generate 100 problems and get
the average Q. The Pairwise Interchange procedure greatly improves the WLPT at any given due date distribution factor,
and the average improvement is about 18%.
Figure 5.3: Average Q value of the EDD and the Pairwise Interchange using EDD as the initial sequence on each due date
distribution factor. For each problem, randomly generate 20 jobs. For each job, the processing time is generated between
1 and 10. The due date distribution factor is between 0 and 200. For each due date distribution factor, randomly generate
100 problems and get the average Q. The average improvement by using Pairwise Interchange on the EDD is about 9%,
much smaller than the improvement on the WSPT and the WLPT. But when the due date distribution factor is small, the
Pairwise Interchange procedure improves the EDD much more than when the due date distribution factor is large. This is
because the EDD heuristic can give much better solutions for the low tardiness problem when a wide range of due dates
occurs.
5.2 Experiment: Comparison of the PI Using
Different Initial Sequence
Figure 5.4 is the comparison of the Pairwise Interchange heuristics using
different initial sequences. The results indicate that the different initial sequences
affect the final solutions significantly. A randomly generated initial sequence (RAN)
is added to the experiments. The randomly generated initial sequence is utilized to
provide a purely random sequence that is not biased by any job attributes, so that the
performance of the Pairwise Interchange can be studied on arbitrary cases.
The results show that by using different initial sequences, the final solutions can be
noticeably different. The Pairwise Interchange that uses EDD as the initial sequence
provides better solutions for a majority of the due date distribution factors.
Nevertheless, it performs much worse when all due dates are tight and are not spread
out enough to allow jobs to be placed at their correct due dates on the time line. On
the contrary, when the Pairwise Interchange using EDD as the initial sequence does not
perform well, the Pairwise Interchange using WSPT as the initial sequence will
provide a better set of solutions. In the experiment, when the due date distribution
factor is between 0 and 53 (ddd factor: [0, 53]), the Pairwise Interchange using EDD
as the initial sequence averaged 19.5% greater than the Pairwise Interchange using
WSPT as the initial sequence. However, when the due date distribution factor is
between 54 and 200 (ddd factor: [54, 200]), the Pairwise Interchange using EDD as
the initial sequence averaged 69.8% lower than the Pairwise Interchange using WSPT
as the initial sequence. When the due date distribution factor becomes large, the
Pairwise Interchange using WSPT, WLPT or RAN as the initial sequence all arrive at
much the same solutions. These solutions are much worse than those obtained using
EDD as the initial sequence. Some of these solutions provided by the Pairwise
Interchange using WSPT, WLPT or RAN as the initial sequence cannot achieve an
acceptably low Q value. The Pairwise Interchange using WLPT as the initial sequence
is even worse than the Pairwise Interchange using RAN as the initial sequence when
the due date distribution factor is small. In light of the experiments performed, it is
clear that the solutions provided by the Pairwise Interchange are strongly dependent
on the initial sequence. EDD is a strong initial sequence for most of the cases,
especially when all the due dates are spread over a sufficiently wide range.
Figure 5.5 shows the average improvement obtained by using the Pairwise Interchange
procedure on the different initial sequences. The improvement is measured as a
percentage, which is calculated as:
% improvement = ((initial sequence - PI) / initial sequence) * 100%
where the initial sequence is WSPT, WLPT or EDD.
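As a small worked example of this formula (the Q values below are hypothetical, chosen only to illustrate the arithmetic):

```python
def improvement(initial_q, pi_q):
    # Percent improvement of the Pairwise Interchange result over
    # its initial sequence, per the formula above.
    return (initial_q - pi_q) / initial_q * 100.0

# Hypothetical Q values: the initial sequence alone gives Q = 100,
# and PI starting from it lowers the cost to Q = 85.
print(improvement(100, 85))  # 15.0
```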
In Figure 5.5, the results indicate that the improvement of the Pairwise
Interchange using EDD as the initial sequence is relatively stable over the entire
range of due date distribution factors and averages about 8.7%. The improvement of
the Pairwise Interchange using WLPT or WSPT as the initial sequence is stable when
the due date distribution factor is more than 90 and averages about 20%. However,
when the due date distribution factor is less than 90, the improvement of the Pairwise
Interchange using WSPT or WLPT as the initial sequence increases quickly as the
due date distribution factor becomes larger. Especially for the WSPT, the
improvement increases significantly and then plateaus at 20% as the due date
distribution factor goes from 0 to 90. This is because WSPT performs well when
the due date distribution factor is small. In particular, when all due dates are
smaller than the shortest processing time, WSPT provides the optimal
solutions, but when the due date distribution factor becomes larger, WSPT
degrades very quickly. Therefore, as the WSPT rapidly degrades, the improvements
achieved by using the Pairwise Interchange procedure become greater as the due date
distribution factor becomes larger. After a plateau is reached (ddd factor around 90,
improvement around 20%), the improvement remains relatively stable.
Pairwise Interchange using different initial sequences
(chart series: WSPT, WLPT, EDD, Random)
Figure 5.4: Average Q value of the Pairwise Interchange using different initial sequences. For each problem, randomly
generate 20 jobs. For each job, the processing time is generated between 1 and 10. The due date distribution factor is
between 0 and 200. For each due date distribution factor, randomly generate 100 problems and get the average Q. There
are four different initial sequences: WSPT, WLPT, EDD and RAN. The Pairwise Interchange that uses EDD as the
initial sequence provides better solutions for the majority of the due date distribution factors. Nevertheless, it performs
much worse when all due dates are tight on the time line. On the contrary, the Pairwise Interchange using WSPT as the
initial sequence performs much better when all the due dates are very tight. When the due date distribution factor is between 0
and 53, PI using EDD as the initial sequence averaged 19.5% greater than the PI using WSPT as the initial sequence.
When the due date distribution factor is between 54 and 200, PI using EDD as the initial sequence averaged 69.8% lower
than the PI using WSPT as the initial sequence.
Figure 5.5: Average improvement obtained by using the Pairwise Interchange procedure on different initial sequences. There are
three different initial sequences: EDD, WSPT and WLPT. The average improvement by using the PI procedure on
the WSPT is 14.89%; the average improvement by using the PI procedure on the WLPT is 17.89%; the average improvement
by using the PI procedure on the EDD is 8.66%. The improvement is calculated as % improvement = ((initial sequence - PI) /
initial sequence) * 100%. The improvement of PI using EDD as the initial sequence is relatively stable over the whole range
of due date distribution factors. The improvement of the PI using WSPT and WLPT as the initial sequence increases
significantly and plateaus at 20% as the due date distribution factor goes from 0 to 90.
5.3 Experiment: Comparison of the Heuristics
and the Optimal Solutions
Figure 5.6 illustrates the comparison of the heuristics and the optimal
solutions for problems of moderate size. In the experimental model, N = 8, the
processing time is randomly generated between 1 and 10, the due date distribution
factor is between 0 and 100, and for each due date distribution factor, 50 problems
are randomly generated.
Table 1.0 shows the total number of the cases in which the heuristics find the
optimal solutions for the problems.
Heuristic                                  Cases with optimal solution   Test cases
WSPT                                                  338                   5050
WLPT                                                    0                   5050
EDD                                                  1265                   5050
PI using WSPT as the initial sequence                 609                   5050
PI using WLPT as the initial sequence                   5                   5050
PI using EDD as the initial sequence                 2415                   5050

Table 1.0: Comparison of the heuristics and the optimal solutions
In the experiment, the Pairwise Interchange using EDD as the initial sequence
found the optimal solutions in 2415 cases of the 5050 problem sets. Overall, the
Pairwise Interchange using EDD averaged 9% greater than the optimal
solution. In Table 1.0, the results also indicate that the number of optimal solutions
found by the Pairwise Interchange using EDD as the initial sequence is nearly twice
that of the EDD heuristic, and is nearly half of the 5050 problems. However, in
Figure 5.6, the Pairwise Interchange using EDD as the initial sequence performs
much worse when the due date distribution factor is small. When the due date
distribution factor is between 0 and 23, the Pairwise Interchange using EDD as the
initial sequence averaged 9.4% greater than the Pairwise Interchange using WSPT as
the initial sequence, and averaged 14.8% greater than the optimal solution. The
Pairwise Interchange using WSPT as the initial sequence finds all optimal solutions
when the due date distribution factor is 0, 1 or 2. When the due date distribution
factor is between 24 and 100, the Pairwise Interchange using EDD as the initial
sequence averaged 79.7% lower than the Pairwise Interchange using WSPT as the
initial sequence, and averaged 6.8% greater than the optimal solution.
The heuristics vs. the optimal solutions
(chart series: optimal, PI using EDD, EDD, WSPT, WLPT, PI using WSPT, PI using WLPT)
Figure 5.6: Average Q value of the heuristics and the optimal solutions at each due date distribution factor. For each
problem, randomly generate 8 jobs. For each job, the processing time is generated between 1 and 10. The due date
distribution factor is between 0 and 100. For each due date distribution factor, randomly generate 50 problems and get
the average Q. The heuristics are WSPT, WLPT, EDD, and PI using WSPT, WLPT and EDD as the initial sequence. The PI
using EDD as the initial sequence found the optimal solutions in 2415 cases of the 5050 problem sets; overall, it averaged
15.39% greater than the optimal.
6. Genetic Algorithm
A Genetic Algorithm is a search strategy based on the science of natural
selection. It is akin to the natural evolutionary process, in which the creature that is
most adaptable to its environment will tend to survive. In nature, the ability of a
creature to adapt to its changing environment will be crucial for its survival.
Individual characteristics are defined by an individuals genes, which are grouped
into chromosomes. New characteristics will be produced by the evolution process.
Genetic Algorithms were first introduced by John Holland in 1975, when he
used a computer to simulate the adaptive behavior of natural selection. Holland also
determined that crossover functions are the primary methods for generating new
material and that mutation is only used to preserve diversity.
To use a Genetic Algorithm to solve scheduling problems, we first need a
population of solutions, which is like a group of creatures in nature. In each
generation, two members are picked from the population as parents, and a new
solution (the child) is generated by taking some features (genes) from each parent and
mixing them. The worst member of the population is replaced with the child, and
the process continues into the next generation. The best solution (the fittest
individual) survives until the end of all generations.
The Genetic Algorithm that is studied in this thesis includes three parts:
Selection, Recombination, and Mutation. The population is set to the number of jobs
and the initial members of the population are then randomly generated. Before each
generation, the Q value of each member in the population is calculated by using the
Optimal Slide algorithm; and the members are then sorted by the Q value. The
sequence with the lowest Q value will be put at the top of the population.
The Selection part is used to pick the parents. In the Genetic Algorithm studied
in this thesis, one parent is selected from the top half of the population, which
means that this parent has a lower Q value. The other parent is picked from
the whole population, which means this parent may not be very
good if it is picked from the bottom half of the population.
The Recombination part is used to generate the new solutions (children) by
taking features from the parents. Many operations can be used in this part, such as
PMX crossover and Uniform crossover. In particular, this thesis focuses on an
operation called the Q-crossover method. The Q-crossover method takes the best
features from each parent and mixes them to generate a child. That is, the Q value of
each job in each parent sequence is calculated; for each job, the copy with the
lowest Q value is chosen and its position is recorded. Then, a new
sequence (the child) is generated by recombining these recorded jobs.
The Mutation part is used to avoid having the same sequences appear in the
population. If the Mutation part were not used in the Genetic Algorithm, the
members of the population could become identical with each succeeding generation.
Therefore, the Mutation part must be used to maintain differences in the population,
which gives more choices in each generation.
The following steps describe the procedure for the Genetic Algorithm:
INITIALIZE:
1. Randomly generate N sequences for the population where N is the
number of the jobs.
2. For each member in the population, calculate the Q value of the
sequence by the Optimal Slide Algorithm. If the Recombination part
uses Q-crossover, the Q value of each job in the sequence must also
be calculated.
CYCLE:
3. Sort the members of the population by Q value and keep the member
with the lowest Q value at the top. Therefore, the sequence at the top
of the population is the current best solution of the Genetic Algorithm.
4. Randomly pick one member from the top half of the population and
pick the other one from the whole population. These two sequences
are the parents that will be used to generate the new child. (Selection
Part)
5. Q-crossover is used to generate a child from the two parents. To
create the child, compare each job's Q value in each parent and record
the position of the job that has the lower Q value. After applying this
method to all jobs, the child is created by recombining the
jobs according to their recorded positions. Other methods could
be used here to create the child; in this thesis, Uniform crossover and
PMX crossover are applied for comparison purposes. These
comparisons will be presented later. (Recombination Part)
6. Calculate the Q value of the child sequence and compute the Q value
of each job in the sequence. Then, replace the member that is at the
bottom of the population with the child.
7. After replacing, if there are two sequences that are exactly the same,
swap two random jobs in one of the sequences, until there are no
sequences in the population that are exactly the same sequences.
(Mutation Part)
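The seven steps above can be sketched as one loop. This is a minimal sketch: the q_value and crossover functions are passed in as parameters, since the Optimal Slide Algorithm and the Q-crossover are described elsewhere in the thesis and not reproduced here:

```python
import random

def genetic_algorithm(jobs, q_value, crossover, generations=1000):
    # q_value(seq) scores a sequence (Optimal Slide in the thesis);
    # crossover(mother, father) builds a child sequence.
    n = len(jobs)
    # Step 1: population of N randomly generated sequences
    pop = [random.sample(jobs, n) for _ in range(n)]
    for _ in range(generations):
        pop.sort(key=q_value)                    # step 3: best on top
        mother = random.choice(pop[:n // 2])     # step 4: top half
        father = random.choice(pop)              # step 4: anyone
        child = crossover(mother, father)        # step 5
        pop[-1] = child                          # step 6: drop worst
        # Step 7 (Mutation): break up duplicate sequences
        seen = set()
        for seq in pop:
            while tuple(seq) in seen:
                i, j = random.sample(range(n), 2)
                seq[i], seq[j] = seq[j], seq[i]
            seen.add(tuple(seq))
    return min(pop, key=q_value)
```

With any cost function and crossover plugged in, the loop returns the fittest sequence found after the given number of generations.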
Figure 6.0 shows an example of the Q-crossover process.
Q-crossover

Q value:  10  6 11  0 18  5  3  3  9  0
Mother:    8  3  2  1  4  7  6  5  0  9
Father:    3  4  9  8  7  6  2  1  0  5
Q value:  11  7 20  5  7 12  3  8  8  6

For each job, select the copy with the lower Q value.
Then recombine the chosen jobs in recorded-position order: 3 4 1 8 7 6 2 5 0 9
Finally the child is: 3 4 1 8 7 6 2 5 0 9
Figure 6.0: Q-crossover method. The two parents are marked with each
job's cost (Q) below the job box. If a job has the lower cost when compared with
the same job in the other parent, then this job is chosen and its position is
recorded. The child is created by alternating between the parental sequences and
using each job where it was chosen.
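The Figure 6.0 example can be reproduced with a short sketch. One detail is inferred from the figure rather than stated in the text: when two chosen jobs share a recorded position, the mother's job is placed first:

```python
def q_crossover(mother, father, mother_q, father_q):
    # mother/father are job sequences; mother_q/father_q hold the
    # per-position job costs (Q values) shown in Figure 6.0.
    chosen = []
    for job in mother:
        mp, fp = mother.index(job), father.index(job)
        if mother_q[mp] <= father_q[fp]:
            chosen.append((mp, 0, job))  # mother's copy is cheaper
        else:
            chosen.append((fp, 1, job))  # father's copy is cheaper
    # Recombine by recorded position; ties put the mother's job first
    chosen.sort()
    return [job for _, _, job in chosen]

mother = [8, 3, 2, 1, 4, 7, 6, 5, 0, 9]
mother_q = [10, 6, 11, 0, 18, 5, 3, 3, 9, 0]
father = [3, 4, 9, 8, 7, 6, 2, 1, 0, 5]
father_q = [11, 7, 20, 5, 7, 12, 3, 8, 8, 6]
print(q_crossover(mother, father, mother_q, father_q))
# [3, 4, 1, 8, 7, 6, 2, 5, 0, 9] -- the child of Figure 6.0
```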
In each generation of the Genetic Algorithm, sorting the population can
be done in O(N lg N) time using the Heapsort algorithm. The Selection part can be
completed in constant time. In the Recombination part, the Q-crossover method
includes two steps. The first step is to compare the Q value of each job in the mother
sequence with the same job in the father sequence and mark the one that has the lower
Q value. This step can be completed in O(N lg N) using a Fibonacci Heap. The
second step of the Q-crossover method is to create the child by alternating between
the parental sequences and choosing each job where it is marked. This step can be done
in O(N). Therefore, the Q-crossover method can be completed in O(N lg N + N), which
is O(N lg N). Getting the Q value of the new sequence (child) takes
O(N lg N) using the Optimal Slide Algorithm. The Mutation part can be completed
in O(N + lg N). Therefore, for the Genetic Algorithm using the Q-crossover method,
each generation can be completed in O(N lg N). Supposing there are M generations, the
Genetic Algorithm using the Q-crossover method can be completed in O(MN lg N) time.
The Recombination part strongly affects the quality of the Genetic
Algorithm. The Q-crossover operation is derived from the idea that Whitely [7] used
for the TSP problem. Whitely uses the edge recombination operation to
generate the child's tour, which is a sequence of cities decided by comparing the
weights of the edges between cities suggested by the two parents. There are some
other types of crossover operations. In this thesis, the Q-crossover will be
compared with the PMX crossover and the Uniform crossover. The processes of
those two crossover functions will be presented in the next section. The results of the
experiments indicate that the Q-crossover is better than the PMX crossover and the
Uniform crossover when the due date distribution factor is large. However, when the
due date distribution factor is small, the PMX crossover and the Uniform crossover
perform a little better than the Q-crossover. The results of the experiments also
indicate that with enough generations, all three Recombination strategies can provide
the same, or very nearly the same, solutions. The details will be given later.
The Selection part also affects the performance of the algorithm. Just as
the initial sequence strongly affects the Pairwise Interchange algorithm, the
members that are picked to be the parents strongly affect each generation.
This effect can be reduced by a large number of generations: with enough
generations, the effect of the Selection part can always be greatly reduced.
There are also many mutation strategies, and the one used in this thesis is a
simple, expeditious one. Several different methods were tried and produced similar
results. As with the selection of parents, the effect of the Mutation part is also
reduced by a large number of generations.
6.1 Other Recombination Strategies
As discussed in the previous section, the Recombination part greatly affects the performance of the Genetic Algorithm. The primary recombination strategy uses the crossover function. There have been other strategies that used mutation as the primary operation to introduce a new member (child). However, Holland determined that the crossover function was the primary operation for introducing new members and that mutation should only be used to preserve diversity. This thesis focuses on the Q-crossover and compares it with two other crossover functions: PMX crossover and Uniform crossover. The following descriptions show the processes of these two crossover functions.
PMX crossover: PMX crossover starts by randomly choosing two crossover points to build a window over both parental sequences. For each pair of jobs in the window (the two jobs occupy the same sequential position), first find the job in the mother sequence that matches the one in the father sequence; then change the positions of the jobs in the mother sequence so that the job is aligned with the same job in the father sequence (in the window).
The figure below shows the PMX crossover procedure:

PMX crossover (the window covers positions 4 through 8):

Mother: 8 3 2 | 1 4 7 6 5 | 0 9
Father: 3 4 9 | 8 7 6 2 1 | 0 5

For the first pair in the window, <1,8>, swap the positions of jobs 1 and 8 in the mother sequence to get 1 3 2 8 4 7 6 5 0 9, and keep going. Finally, the child is 5 3 4 8 7 6 2 1 0 9.

Figure 6.1: PMX crossover. First, randomly choose two crossover points to build a window over both parental sequences. Then, for each pair of jobs in the window (one from the mother sequence, one from the father sequence), the job in the father sequence serves as the reference directing the swap of jobs in the mother sequence.
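The swap-driven PMX procedure above can be sketched in Python. This is a sketch following the description given here; the window endpoints are passed in so the Figure 6.1 example is reproducible, and the `pos` index is an implementation convenience, not something the thesis specifies.

```python
import random

def pmx_crossover(mother, father, lo=None, hi=None):
    """Swap-based PMX variant described above: for each window position,
    swap the mother's copy of the father's job into that position.
    `lo`/`hi` are 0-indexed window bounds, chosen at random if omitted."""
    child = list(mother)
    n = len(child)
    if lo is None:
        lo, hi = sorted(random.sample(range(n), 2))
    pos = {job: i for i, job in enumerate(child)}  # job -> current index
    for i in range(lo, hi + 1):
        j = pos[father[i]]              # where the father's job sits now
        if j != i:
            child[i], child[j] = child[j], child[i]
            pos[child[j]] = j           # keep the index up to date
            pos[child[i]] = i
    return child

# The example of Figure 6.1: window over 0-indexed positions 3..7.
child = pmx_crossover([8, 3, 2, 1, 4, 7, 6, 5, 0, 9],
                      [3, 4, 9, 8, 7, 6, 2, 1, 0, 5], 3, 7)
print(child)  # [5, 3, 4, 8, 7, 6, 2, 1, 0, 9]
```

Because every step is a swap within one sequence, the child is always a valid permutation of the jobs.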
In the PMX crossover, the process of finding a specific job in the mother sequence can be done in O(lg N) by using a Fibonacci Heap. Changing the positions of two specific jobs can be done in O(1). Since the maximum window size is N, the maximum number of jobs that must be found in a sequence is N, which can be completed in O(N lg N). Also, there will be at most N pairs of jobs to be interchanged, which can be completed in O(N). Therefore, the PMX crossover can be completed in O(N lg N + N) = O(N lg N) time, the same as Q-crossover. Thus, the Genetic Algorithm using PMX crossover can be completed in O(MN lg N).
Uniform crossover: Uniform crossover considers a window covering the entire sequence. For each pair of jobs in the window, a random variable called the Checker is generated. If the variable is greater than 0.5 (Checker > 0.5), then swap the two jobs in the mother sequence, just as in the PMX crossover. After this process is completed on all pairs of jobs, the new sequence is the child (of course, the original mother sequence is still kept). The variable acts much like a decision-maker: the child's features are decided by this variable. The figure below shows the procedure of Uniform crossover.
Uniform Crossover (the window covers the entire sequence):

Mother:  8   3   2   1   4   7   6   5   0   9
Father:  3   4   9   8   7   6   2   1   0   5
Checker: 0.2 0.7 0.4 0.9 0.3 0.2 0.8 0.1 0.3 0.6

For the first pair in the window, <8,3>, Checker < 0.5, so the jobs are not swapped in the mother sequence; for <3,4>, Checker > 0.5, so they are. Finally, the child is 1 4 6 8 3 7 2 9 0 5.

Figure 6.2: Uniform crossover. Below each pair of jobs is a Checker value, randomly generated between 0 and 1. If the Checker for a pair is greater than 0.5, the job in the father sequence serves as the reference directing the swap of jobs in the mother sequence.
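The same swap machinery gives a sketch of the Uniform crossover just described. The Checker vector is a parameter here so the Figure 6.2 example can be replayed; in normal use it would be drawn at random.

```python
import random

def uniform_crossover(mother, father, checkers=None):
    """Uniform crossover as described above: the window is the whole
    sequence, and a pair is swapped in the mother copy only when its
    Checker exceeds 0.5."""
    child = list(mother)
    n = len(child)
    if checkers is None:
        checkers = [random.random() for _ in range(n)]
    pos = {job: i for i, job in enumerate(child)}  # job -> current index
    for i in range(n):
        if checkers[i] > 0.5:
            j = pos[father[i]]
            if j != i:
                child[i], child[j] = child[j], child[i]
                pos[child[j]] = j
                pos[child[i]] = i
    return child

# The example of Figure 6.2:
child = uniform_crossover([8, 3, 2, 1, 4, 7, 6, 5, 0, 9],
                          [3, 4, 9, 8, 7, 6, 2, 1, 0, 5],
                          [0.2, 0.7, 0.4, 0.9, 0.3, 0.2, 0.8, 0.1, 0.3, 0.6])
print(child)  # [1, 4, 6, 8, 3, 7, 2, 9, 0, 5]
```

The only difference from the PMX sketch is that the window spans all positions and each swap is gated by the per-position Checker.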
In the Uniform crossover, the process of finding a specific job in the mother sequence can be completed in O(lg N) by using a Fibonacci Heap. Changing the positions of two specific jobs can be completed in O(1). Since there may be up to N pairs of jobs to be interchanged, the Uniform crossover can be completed in O(N lg N + N) = O(N lg N) time, the same as Q-crossover. Thus, the Genetic Algorithm using Uniform crossover can be completed in O(MN lg N).
6.2 Experiment: Comparison of Genetic Algorithm
and Pairwise Interchange
To study the average performance of the Genetic Algorithm, a large number of experiments were run. As in the experiment model used for the Pairwise Interchange, the due date distribution factor is between 0 and 200. For each due date distribution factor, 100 scheduling problems are randomly generated, and the Genetic Algorithm and the Pairwise Interchange are applied to find the solutions. The Pairwise Interchange algorithm is used here for comparison purposes; it uses EDD and WSPT as the initial sequences. For the Genetic Algorithm, Q-crossover is chosen as the Recombination strategy to compare with the Pairwise Interchange. The Genetic Algorithm is run for 5000 generations.
Figure 6.3 shows the average Q value of the Genetic Algorithm and the Pairwise Interchange at each due date distribution factor. The results show that the Genetic Algorithm performs better than the Pairwise Interchange, no matter which initial sequence is used by the Pairwise Interchange. When the due date distribution factor is small, the difference between the Genetic Algorithm and the Pairwise Interchange using EDD as the initial sequence is very large; the Genetic Algorithm gives a much lower Q value. This is a consequence of using EDD as the initial sequence of the Pairwise Interchange. As discussed before, the Pairwise Interchange using EDD as the initial sequence does not perform as well when all due dates are very tight, and is much worse
than the Pairwise Interchange using WSPT as the initial sequence. However, when
the due date distribution factor is small, the Genetic Algorithm still performs better than the Pairwise Interchange using WSPT as the initial sequence, although the difference between them is not very large. This is because the Pairwise Interchange using WSPT as the initial sequence always performs well when all due dates are very tight and small. The Genetic Algorithm provides better solutions than the Pairwise Interchange using WSPT as the initial sequence on most of the due date distribution
factors. But when the due date distribution factor is 0, 1, or 2, the Pairwise Interchange using WSPT as the initial sequence is better than the Genetic Algorithm: the Genetic Algorithm averaged 0.03% worse than the Pairwise Interchange using WSPT as the initial sequence at those factors. This is because WSPT gives the optimal solutions when all due dates are smaller than the smallest processing time. In the experiments, the smallest processing time is 1, so when the due date distribution factor is 0 or 1, WSPT gives the optimal solutions, and so does the Pairwise Interchange using WSPT as the initial sequence. When the due date distribution factor is very small, such as 2, the Pairwise Interchange using WSPT as the initial sequence still has a good chance of reaching the optimal solutions with a minimal number of job swaps. As mentioned in the small-size experiments with the Pairwise Interchange, the Pairwise Interchange using WSPT as the initial sequence found all optimal solutions when the due date distribution factor is 0, 1, or 2. When the due date distribution factor becomes large, the
difference between the Genetic Algorithm and the Pairwise Interchange using EDD as the initial sequence becomes smaller, while the difference between the Genetic Algorithm and the Pairwise Interchange using WSPT as the initial sequence becomes larger. Although the difference between the Genetic Algorithm and the Pairwise Interchange using EDD as the initial sequence can be very small, the Genetic Algorithm still gives a lower Q value at every due date distribution factor.
Figure 6.4 shows the difference between the Genetic Algorithm and the Pairwise Interchange. The difference is measured as a percentage, calculated as:
% difference = (PI - GA) / PI * 100%.
The results indicate that the Genetic Algorithm is almost always much better than the Pairwise Interchange. The average difference between the Genetic Algorithm and the Pairwise Interchange using EDD as the initial sequence is about 17.5%. When the due date distribution factor is small, the difference is very large, around 20%; the maximum difference between the Genetic Algorithm and the Pairwise Interchange using EDD as the initial sequence is 26%. As the due date distribution factor becomes larger, the performance of the Pairwise Interchange using EDD as the initial sequence becomes much better. But the Genetic Algorithm still beats the
Pairwise Interchange by at least 6%, even when the due date distribution factor is close to the maximum value (ddd factor = 200). The average difference between the Genetic Algorithm and the Pairwise Interchange using WSPT as the initial sequence is 57.82%. When the due date distribution factor is 0, 1, or 2, the difference between the Genetic Algorithm and the Pairwise Interchange using WSPT as the initial sequence is about 0.03% (PI is lower than GA). However, when the due date distribution factor is between 3 and 200, the difference is about 59.3%; the Genetic Algorithm performs much better than the Pairwise Interchange using WSPT as the initial sequence.
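As a quick check of the percentage formula, a one-line helper (a hypothetical illustration, not part of the thesis code):

```python
def pct_difference(pi_q, ga_q):
    """% difference = (PI - GA) / PI * 100, as defined above.
    Positive values mean the GA found a lower (better) Q value."""
    return (pi_q - ga_q) / pi_q * 100.0

# e.g. an average PI Q of 200 against a GA Q of 165 is a 17.5% improvement
print(pct_difference(200.0, 165.0))  # 17.5
```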
The Genetic Algorithm also has the advantage that it can be run for as long as one wishes, possibly achieving better results. However, the problem is that one does not really know how long the program must run to achieve a better solution. This is because the Selection, Recombination, and Mutation parts all affect the quality of each generation, although this effect can be reduced by a larger number of generations.
One big disadvantage of the Genetic Algorithm is that it takes more time to get a better solution, even hundreds of times longer than the Pairwise Interchange.
GA vs Pairwise Interchange
Figure 6.3: Average Q value of the Pairwise Interchange and the Genetic Algorithm. For each problem, 20 jobs are randomly generated. For each job, the processing time is generated between 1 and 10. The due date distribution factor is between 0 and 200. For each due date distribution factor, 100 problems are randomly generated and the average Q is taken. The Pairwise Interchange uses EDD and WSPT as the initial sequence. The Genetic Algorithm uses Q-crossover as the recombination strategy. The Genetic Algorithm performs much better than the Pairwise Interchange. The average difference between the Genetic Algorithm and the Pairwise Interchange using EDD as the initial sequence is about 17.5%; the average difference between the Genetic Algorithm and the Pairwise Interchange using WSPT as the initial sequence is 57.82%. However, when the due date distribution factor is 0, 1, or 2, the PI using WSPT as the initial sequence is 0.03% lower than the GA.
Difference between Genetic Algorithm and Pairwise Interchange
Figure 6.4: Average difference between the Pairwise Interchange and the Genetic Algorithm at each due date distribution factor (legend: difference between GA and PI using WSPT; difference between GA and PI using EDD). The Pairwise Interchange uses EDD and WSPT as the initial sequence. Over all due date distribution factors, the average difference between the GA and the PI using EDD as the initial sequence is 17.5%, and the average difference between the GA and the PI using WSPT as the initial sequence is 57.82%. The difference is calculated as: % difference = (PI - GA) / PI * 100%.
6.3 Experiment: Comparison of the Genetic Algorithm
Using Different Recombination Strategies
Figure 6.5, Figure 6.6 and Figure 6.7 show the comparison of the Genetic Algorithm using different Recombination strategies. Three Recombination strategies are compared in this thesis: Q-crossover, Uniform crossover and PMX crossover. The processes of these three crossover functions are described in sections 6.0 and 6.1.
The three Recombination strategies are investigated through a large number of simulations. The same experiment model used previously is applied here: 20 jobs for each problem, with the due date distribution factor between 0 and 200. For each job sequence, the Genetic Algorithm is applied with the same Selection and Mutation strategies but different Recombination strategies. The results are studied mainly from two perspectives:
1. How good a solution can the strategy provide? That is, which Recombination strategy used by the Genetic Algorithm gives the lowest Q value for the sequence.
2. How fast can the strategy reach an acceptable solution? That is, which Recombination strategy leads to solutions that improve fastest as the run continues.
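The experiment model above (20 jobs per problem, processing times between 1 and 10, a due date distribution factor) can be sketched as a problem generator. The uniform due-date draw over [0, ddd factor] and the weight range are assumptions made here for illustration; this section does not state how due dates or weights are drawn.

```python
import random

def random_problem(n_jobs=20, ddd_factor=100, seed=None):
    """Generate one random early/tardy instance under the assumptions
    stated above: integer processing times in [1, 10], due dates drawn
    uniformly from [0, ddd_factor] (assumed), and placeholder integer
    weights in [1, 10]."""
    rng = random.Random(seed)
    return [{"p": rng.randint(1, 10),          # processing time
             "d": rng.uniform(0, ddd_factor),  # due date (assumed uniform)
             "w": rng.randint(1, 10)}          # weight (placeholder)
            for _ in range(n_jobs)]
```

Each crossover strategy would then be run on the same generated instances, so that any difference in average Q reflects the Recombination strategy alone.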
Figure 6.5 shows the average Q value generated by the Genetic Algorithm using the three different Recombination procedures with 1000 generations. The results indicate that when the due date distribution factor is small (ddd factor < 100), the three Genetic Algorithms give the same or very nearly the same solutions. The Genetic Algorithm using Uniform crossover or PMX crossover is a little better than the Genetic Algorithm using Q-crossover; each provides a slightly lower Q value. When the due date distribution factor is between 0 and 80, the Genetic Algorithm using PMX crossover averaged 0.61% lower than the Genetic Algorithm using Q-crossover. When the due date distribution factor is between 0 and 96, the Genetic Algorithm using Uniform crossover averaged 1.49% lower than the Genetic Algorithm using Q-crossover. However, when the due date distribution factor becomes larger (ddd factor > 100), the Genetic Algorithm using Q-crossover works well and is much better than the other two Genetic Algorithms. When the due date distribution factor is between 81 and 200, the Genetic Algorithm using PMX crossover averaged 24.86% greater than the Genetic Algorithm using Q-crossover. When the due date distribution factor is between 97 and 200, the Genetic Algorithm using Uniform crossover averaged 11.48% greater than the Genetic Algorithm using Q-crossover.
Figure 6.6 and Figure 6.7 show the average Q value generated by the three Genetic Algorithms with 1500 and 5000 generations. The results indicate that as the number of generations grows, the solutions provided by the three Genetic Algorithms become much closer. With enough generations (5000), the three Genetic Algorithms provide the same or very nearly the same solutions at all due date distribution factors. In Figure 6.7, the difference in average Q value between the Genetic Algorithms is hard to see, especially when ddd factor > 100; however, this difference is clear when there are only 1000 generations (Figure 6.5). After 5000 generations, the average difference between the Genetic Algorithm using Q-crossover and the Genetic Algorithm using Uniform crossover is 0.24%, and the average difference between the Genetic Algorithm using Q-crossover and the Genetic Algorithm using PMX crossover is 0.57%.
Genetic Algorithm with 1000 generations
Figure 6.5: Average Q of the Genetic Algorithms. The Genetic Algorithm uses different recombination strategies: Q-crossover, Uniform operator and PMX crossover. The Genetic Algorithm stopped after 1000 generations. For each problem, there are 20 randomly generated jobs. For each job, the processing time is randomly generated between 1 and 10. The due date distribution factor is between 0 and 200. For each ddd factor, there are 100 randomly generated problems. When the due date distribution factor is between 0 and 80, the Genetic Algorithm using PMX crossover averaged 0.61% lower than the Genetic Algorithm using Q-crossover. When the due date distribution factor is between 0 and 96, the Genetic Algorithm using Uniform crossover averaged 1.49% lower than the Genetic Algorithm using Q-crossover. However, when the due date distribution factor is between 81 and 200, the Genetic Algorithm using PMX crossover averaged 24.86% greater than the Genetic Algorithm using Q-crossover. When the due date distribution factor is between 97 and 200, the Genetic Algorithm using Uniform crossover averaged 11.48% greater than the Genetic Algorithm using Q-crossover.
Genetic Algorithm with 1500 generations
Figure 6.6: Average Q value of the Genetic Algorithms (x-axis: due date distribution factor). The Genetic Algorithm uses different recombination strategies: Q-crossover, Uniform operator and PMX crossover. The Genetic Algorithm stopped after 1500 generations. For each problem, there are 20 randomly generated jobs. For each job, the processing time is randomly generated between 1 and 10. The due date distribution factor is between 0 and 200. For each ddd factor, there are 100 randomly generated problems. The difference between the Genetic Algorithms is much smaller than the difference in Figure 6.5.
Genetic Algorithm with 5000 generations (legend: Q-crossover, Uniform operator, PMX crossover)
Figure 6.7: Average Q value of the Genetic Algorithms. The Genetic Algorithm uses different recombination strategies: Q-crossover, Uniform operator and PMX crossover. The Genetic Algorithm stopped after 5000 generations. For each problem, there are 20 randomly generated jobs. For each job, the processing time is randomly generated between 1 and 10. The due date distribution factor is between 0 and 200. For each ddd factor, there are 100 randomly generated problems. After 5000 generations, the results provided by the three Genetic Algorithms are very close. The average difference between the Genetic Algorithm using Q-crossover and the Genetic Algorithm using Uniform crossover is 0.24%; the average difference between the Genetic Algorithm using Q-crossover and the Genetic Algorithm using PMX crossover is 0.57%.
Figure 6.8, Figure 6.9, Figure 6.10, Figure 6.11, and Figure 6.12 show the performance of the Genetic Algorithms using the different Recombination procedures on specific cases as the number of generations grows. These figures are based on the study of a large amount of data; they represent the average relationship between solutions and the number of generations in the experiments.
Figure 6.8 and Figure 6.9 show the results when the due date distribution factor is 0 and 50. The results show that when the due date distribution factor is very small, Q-crossover does not work very well: it needs more generations to produce a better solution, and its improvement is more gradual than that of the other two. Uniform crossover is better than Q-crossover when all due dates are very tight on the time line; it provides a better solution more quickly, and its improvement is faster. Uniform crossover also works a little better than the PMX crossover. The performance of the PMX crossover is close to the Q-crossover when the due date distribution factor is small. Especially when all due dates are very tight, such as when ddd factor < 30, it works a little better than the Q-crossover; that is, the improvement of the PMX crossover is always faster than that of the Q-crossover when ddd factor < 30.
A specific case when ddd factor = 0, N = 20, up to 20000 generations
Figure 6.8: The relationship between final solutions and the number of generations (x-axis: generations). The Genetic Algorithm uses different recombination strategies: Q-crossover, Uniform operator and PMX crossover. The Genetic Algorithm stopped after 20000 generations. For this specific problem, there are 20 randomly generated jobs. For each job, the processing time is randomly generated between 1 and 10. The due date distribution factor is 0. Uniform crossover is better than the Q-crossover when all due dates are 0. It provides a better solution more quickly than the other two crossovers; its improvement is fast.
A specific case when ddd factor = 50, N = 20, up to 20000 generations
Figure 6.9: The relationship between final solutions and the number of generations (x-axis: generations). The Genetic Algorithm uses different recombination strategies: Q-crossover, Uniform operator and PMX crossover. The Genetic Algorithm stopped after 20000 generations. For this specific problem, there are 20 randomly generated jobs. For each job, the processing time is randomly generated between 1 and 10. The due date distribution factor is 50. As in Figure 6.8, the Uniform crossover performs better when the due date distribution factor is small. However, the improvement of the PMX crossover is always faster than that of the Q-crossover when ddd factor < 30.
Figure 6.10 shows the results when the due date distribution factor is 100. The results show that the performance of Q-crossover has improved considerably; it reaches a better solution much more quickly. Although Uniform crossover still reaches a lower Q value with fewer generations than Q-crossover, Q-crossover provides lower Q values when the number of generations is very small, such as fewer than 500. The PMX crossover becomes worse at this due date distribution factor, requiring a few more generations to get a better solution.
Figure 6.11 and Figure 6.12 show the results when the due date distribution factor is 150 and 200. The results show that Q-crossover is significantly better when all due dates are spread out enough on the time line: it gives a better solution with a very small number of generations. Uniform crossover and PMX crossover become much worse; they require many more generations than Q-crossover to achieve an acceptable solution. In addition, PMX crossover is always worse than the Uniform crossover.
The experiments show that as the due dates spread over a wider range, the performance of Q-crossover improves much faster, while the performance of Uniform crossover and PMX crossover falls behind and becomes much worse. Figure 6.13 compares the three Genetic Algorithms with only 50 generations. The results also indicate that when the due date distribution factor is large, Q-crossover always gives a better solution than PMX crossover and Uniform crossover when there are only a small number of generations. Moreover, PMX crossover is even worse than the Uniform crossover.
So when all due dates are tight on the time line, Uniform crossover and PMX crossover give a much better solution with a smaller number of generations, while Q-crossover needs more generations to reach the same solution. However, when all due dates are spread out enough over the time line, Q-crossover gives a much better solution with a very small number of generations, while Uniform crossover and PMX crossover take much longer to reach the same solutions. Nevertheless, it must be emphasized that with enough generations, Q-crossover, Uniform crossover and PMX crossover all give the same or very nearly the same solutions, and the solutions are all very good.
A specific case when ddd factor = 100, N = 20, up to 20000 generations
Figure 6.10: The relationship between final solutions and the number of generations (legend: Q-crossover, Uniform operator, PMX crossover; x-axis: generations). The Genetic Algorithm uses different recombination strategies: Q-crossover, Uniform operator and PMX crossover. The Genetic Algorithm stopped after 20000 generations. For this specific problem, there are 20 randomly generated jobs. For each job, the processing time is randomly generated between 1 and 10. The due date distribution factor is 100. The performance of the Q-crossover has greatly improved; it reaches a better solution much more quickly. However, Uniform crossover and PMX crossover still provide lower Q values with fewer generations than Q-crossover.
A specific case when ddd factor = 150, N = 20, up to 20000 generations
Figure 6.11: The relationship between final solutions and the number of generations (legend: Q-crossover, Uniform operator, PMX crossover). The Genetic Algorithm uses different recombination strategies: Q-crossover, Uniform operator and PMX crossover. The Genetic Algorithm stopped after 20000 generations. For this specific problem, there are 20 randomly generated jobs. For each job, the processing time is randomly generated between 1 and 10. The due date distribution factor is 150. When the due date distribution factor is large, the Q-crossover is significantly better: it provides good solutions with far fewer generations than the other two crossovers.
Figure 6.12: The relationship between final solutions and the number of generations. The Genetic Algorithm uses different recombination strategies: Q-crossover, Uniform operator and PMX crossover. The Genetic Algorithm stopped after 20000 generations. For this specific problem, there are 20 randomly generated jobs. For each job, the processing time is randomly generated between 1 and 10. The due date distribution factor is 200. As in Figure 6.11, the Q-crossover performs much better than the Uniform crossover and PMX crossover when the due dates are spread enough on the time line.
Genetic Algorithm with 50 generations (legend: Q-crossover, Uniform operator, PMX crossover; x-axis: due date distribution factor)
Figure 6.13: Average Q of the Genetic Algorithms. The Genetic Algorithm uses different recombination strategies: Q-crossover, Uniform operator and PMX crossover. The Genetic Algorithm stopped after 50 generations. For each problem, there are 20 randomly generated jobs. For each job, the processing time is randomly generated between 1 and 10. The due date distribution factor is between 0 and 200. For each ddd factor, there are 100 randomly generated problems. When there are only a small number of generations, Q-crossover always performs much better than the Uniform crossover and PMX crossover. Especially when the due date distribution factor is large, the Q-crossover provides much better solutions.
6.4 Experiment: Comparison of Genetic Algorithm
and Optimal Solutions
The Genetic Algorithm is also tested on problems of small size to compare with the optimal solutions, which are obtained by exhaustive search. The same experiment model used for the Pairwise Interchange at the small problem size is applied: N = 8; the due date distribution factor is between 0 and 100; processing time is randomly generated between 1 and 10; and for each due date distribution factor, 50 problems are randomly generated. Each Genetic Algorithm is run for 6000 generations. Figure 6.14 shows the comparison of the Genetic Algorithm and the optimal solutions. Table 2.0 shows the number of cases in which the Genetic Algorithm found the optimal solution.
Heuristic                      Cases where the GA found the optimal solution    Total test cases
GA using Q-crossover           5047                                             5050
GA using Uniform crossover     5047                                             5050
GA using PMX crossover         5046                                             5050
Table 2.0
The results are impressive. With enough generations, all the Genetic Algorithms found the optimal solutions in most cases. In only three cases, Q-crossover averaged 2.7% greater than the optimal, and Uniform crossover averaged 2.8% greater than the optimal. In only four cases, PMX crossover averaged 5.7% greater than the optimal solutions; in one of those four cases, PMX crossover was 18.7% greater than the optimal. As the results show, the quality of the Genetic Algorithm is significantly better at the smaller problem size, given enough generations.
Figure 6.14: Average Q value of the Genetic Algorithm and the optimal solution. For each problem, 8 jobs are randomly generated. For each job, the processing time is generated between 1 and 10. The due date distribution factor is between 0 and 100. For each due date distribution factor, 50 problems are randomly generated and the average Q is taken. The GA uses three different recombination strategies: Q-crossover, Uniform operator and PMX crossover. With enough generations, the three genetic algorithms found nearly all the optimal solutions of the 5050 problems. In only 3 cases, the GA using Q-crossover averaged 2.7% greater than the optimal solutions, and the GA using Uniform crossover averaged 2.8% greater than the optimal. In only 4 cases, the GA using PMX crossover averaged 5.7% greater than the optimal solutions.
7. Conclusions
In this thesis, an overview of the Early/Tardy scheduling problem is given. Two heuristics are introduced in depth: the Pairwise Interchange and the Genetic Algorithm. The Pairwise Interchange heuristic is simple and fast and can produce acceptable solutions. However, the performance of the Pairwise Interchange is strongly dependent on the initial sequence. Among the initial sequences compared, the Pairwise Interchange using EDD as the initial sequence provides better solutions, especially when all due dates are spread out enough on the time line. The Pairwise Interchange using WSPT as the initial sequence performs much better when all due dates are tightly grouped and small. The study also indicates that even at the minimum problem size, three jobs, the Pairwise Interchange may fail to give the optimal solution because of the initial sequence.
The Genetic Algorithm always takes a long time, but it provides much better solutions than the Pairwise Interchange. Three parts of the Genetic Algorithm affect its quality: Selection, Recombination, and Mutation. Recombination affects the quality of the algorithm more than the other two parts. Three Recombination strategies are introduced: Q-crossover, PMX crossover and Uniform crossover. The experiments indicate that with enough generations, the three strategies all give better solutions than the Pairwise Interchange, and the solutions are very close to the same. Therefore, in measuring the performance of the Genetic Algorithm, greater focus should be kept on the speed of improvement of the solution: how fast the Genetic Algorithm can give an acceptable solution. Q-crossover achieves good solutions much more quickly when the due date distribution factor is large; PMX crossover and Uniform crossover produce good solutions quickly when the due date distribution factor is very small.
The choice between these heuristics for a specific problem will depend on the result of the following model.
Assume L represents the performance improvement for each time increment.
Assume CostQ represents the cost saved for each performance increment.
Assume CostT represents the cost incurred for each time increment.
The model is F = L * CostQ / CostT.
If F is large, performance matters more than time, and the Genetic Algorithm should be preferred. If F is small, especially smaller than 1, time matters more, which suggests the Pairwise Interchange should be chosen.
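As an illustration, the model above reduces to a one-line decision rule. The function name and the use of F > 1 as the threshold follow the text's suggestion; this is a sketch, not part of the thesis code.

```python
def choose_heuristic(L, cost_q, cost_t):
    """Decision model above: F = L * CostQ / CostT.  Prefer the Genetic
    Algorithm when F > 1 (performance dominates the cost of extra run
    time), otherwise the Pairwise Interchange (time dominates)."""
    F = L * cost_q / cost_t
    return "Genetic Algorithm" if F > 1 else "Pairwise Interchange"

print(choose_heuristic(2.0, 3.0, 1.0))  # Genetic Algorithm (F = 6)
print(choose_heuristic(0.5, 1.0, 2.0))  # Pairwise Interchange (F = 0.25)
```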
REFERENCES
[1] Kenneth R. Baker and Gary D. Scudder, "Sequencing with earliness and tardiness penalties: a review," Operations Research, January-February 1990.
[2] Candace Arai Yano, "Algorithms for a class of single-machine weighted tardiness and earliness problems," European Journal of Operational Research, 1991.
[3] Timothy D. Fry, Ronald D. Armstrong and L. Drew Rosen, "Single machine scheduling to minimize mean absolute lateness: a heuristic solution," Computers & Operations Research, 1990.
[4] Morton, T. E., Pentico, D. W., Heuristic Scheduling Systems, John Wiley and Sons, 1993.
[5] William J. Wolfe, Stephen E. Sorensen, "Three scheduling algorithms applied to the earth observing systems domain," manuscript, 1998.
[6] William J. Wolfe, et al., "Empirical study of heuristics for the single machine early/tardy problem," manuscript, 1998.
[7] Whitley, D., T. Starkweather, and D. Fuquay, "Scheduling problems and the Traveling Salesman: the genetic edge recombination operator," Proc. Third Int'l Conference on Genetic Algorithms and their Applications, J. D. Schaeffer, ed., Morgan Kaufmann, 1989.
[8] G. Syswerda, J. Palmucci, "The application of genetic algorithms to resource scheduling," Fourth International Conference on Genetic Algorithms, 1991.
[9] Zweben, M., Fox, M., Intelligent Scheduling, Morgan Kaufmann Publishers, 1994.
[10] Kanet, J., "Minimizing variation of flowtime in single machine systems," Management Science, 1981.
[11] Garey, M., Johnson, D., Computers and Intractability, W. H. Freeman and Company, 1979.
[12] L. Davis, ed., Handbook of Genetic Algorithms, Van Nostrand Reinhold, New York, 1991.
[13] Fry, T., G. Leong and T. Rakes, "Single machine scheduling: a comparison of two solution procedures," OMEGA, 1987.
[14] Coffman, Computer and Job-Shop Scheduling Theory, John Wiley & Sons, 1976.
[15] Uckun, S., et al., "Managing genetic search in job shop scheduling," IEEE Expert, October 1993.
[16] Peng Si Ow and Thomas E. Morton, "The single machine early/tardy problem," Management Science, 1989.
[17] John J. Kanet, "Minimizing the average deviation of job completion times about a common due date," Naval Research Logistics Quarterly, December 1981.
[18] Michael R. Garey, Robert E. Tarjan and Gordon T. Wilfong, "One-processor scheduling with symmetric earliness and tardiness penalties," Mathematics of Operations Research, May 1988.
[19] Kenneth R. Baker, Introduction to Sequencing and Scheduling, John Wiley & Sons, 1974.
The goal of using heuristics is to find near-optimal solutions in an efficient manner. The heuristics were tested on a multitude of problem instances with up to twenty jobs per problem and a wide range of due date distributions. The heuristics were also tested against problems with only eight jobs so that they could be compared with the optimal solutions, which were found by exhaustive search. The results of the experiments indicate:

1. The performance of the Pairwise Interchange heuristic is strongly dependent on the initial sequence.
2. The Pairwise Interchange heuristic performs better than the simple heuristics (EDD, WSPT, WLPT).

3. The Pairwise Interchange using EDD as the initial sequence performs much better than the other Pairwise Interchanges when all due dates are spread out enough over the time line.

4. The Genetic Algorithm can provide a better solution than the Pairwise Interchange heuristic, but normally takes a longer period of time.

5. With enough generations, the solutions provided by the Genetic Algorithm using different recombination strategies are the same as each other, or very close to the same.

6. With a small number of generations, the Genetic Algorithm using Q-crossover performs much better than the PMX and Uniform crossover when all due dates are spread out enough over the time line.

This abstract accurately represents the content of the candidate's thesis. I recommend its publication.
CONTENTS

Chapter
1. Introduction .......................................................... 1
2. Problem Statement ..................................................... 7
3. Optimal Slide Algorithm .............................................. 13
4. Simple Heuristics .................................................... 17
5. Pairwise Interchange ................................................. 23
5.1 Experiment: Comparison of Simple Heuristics and Pairwise Interchange . 29
5.2 Experiment: Comparison of the PI using different initial sequences ... 35
5.3 Experiment: Comparison of the heuristics and the optimal solutions ... 40
6. Genetic Algorithm .................................................... 43
6.1 Other recombination strategies ...................................... 50
6.2 Experiment: Comparison of Genetic Algorithm and Pairwise Interchange . 53
6.3 Experiment: Comparison of GA using different recombination strategies  60
6.4 Experiment: Comparison of Genetic Algorithm and Optimal solutions .... 75
7. Conclusion ........................................................... 78
References .............................................................. 80
FIGURES

Figure
2.0 The problem model .................................................... 8
2.1 The due date distribution factor ..................................... 9
3.1 Example of the implementation of the Optimal Slide Algorithm ........ 15
4.1 Example of the worst case of the WSPT and WLPT heuristics ........... 21
5.1 Average Q of the WSPT and the PI using WSPT as the initial sequence . 32
5.2 Average Q of the WLPT and the PI using WLPT as the initial sequence . 33
5.3 Average Q of the EDD and the PI using EDD as the initial sequence ... 34
5.4 Average Q of the PI using different initial sequences ............... 38
5.5 Average improvement by using the PI procedure on different initial sequences ... 39
5.6 Average Q of the heuristics and the optimal solutions ............... 42
6.0 Q-crossover ......................................................... 47
6.1 PMX crossover ....................................................... 51
6.2 Uniform crossover ................................................... 52
6.3 Average Q value of the GA and the PI ................................ 58
6.4 Average difference between the PI and the GA ........................ 59
6.5 Average Q value of the GA with 1000 generations ..................... 63
6.6 Average Q value of the GA with 1500 generations ..................... 64
6.7 Average Q value of the GA with 5000 generations ..................... 65
6.8 Relationship between Final solutions and Generations (ddd factor = 0) ... 67
6.9 Relationship between Final solutions and Generations (ddd factor = 50) .. 68
6.10 Relationship between Final solutions and Generations (ddd factor = 100)  71
6.11 Relationship between Final solutions and Generations (ddd factor = 150)  72
6.12 Relationship between Final solutions and Generations (ddd factor = 200)  73
6.13 Average Q value of GA with 50 Generations .............................. 74
6.14 Average Q value of the GA and the optimal solution ..................... 77
1. Introduction

Scheduling problems occur in many economic activities. In Industrial Engineering, Management Science, Operations Research and many other fields, there are usually a great number of tasks that need to be accomplished with a limited amount of resources in short periods of time. Scheduling is the process of organizing, choosing and timing tasks, along with the necessary resources, to carry out all the activities needed to produce the desired output at the specified time [8]. One always strives to achieve an optimal schedule, which will result in completion of one's tasks in the timeliest manner. The optimal schedule usually results in maximum savings of time and money. However, a typical scheduling problem can be comprised of several different and concurrent goals, which have a finite amount of resources available. Combining these goals and resources often leads to exponentially growing numbers of possible solutions. Therefore, it can be difficult or impossible to find the exact solutions in polynomial time. If an optimal schedule is not achievable in a reasonable amount of time, then an approximate solution must be found to minimize the penalties.

Heuristic methods can be used to offer near-optimal solutions. They usually take a minor amount of time, while providing good solutions. Most heuristics are combined with appropriate variable abstractions such as earliest due date, weighted shortest processing time and weighted longest processing time, and the
heuristics will be very effective under specific conditions. On the other hand, such abstractions may constrain the performance and make the heuristics less flexible.

Until fairly recently, scheduling problems focused on single performance measures. That is, the problems involved penalty functions that are nondecreasing functions of increasing completion times. These measures were referred to as "regular" measures, which meant that earlier was always better. In many articles, authors called such regular measures "mean flowtime", "mean lateness" or "mean tardiness". The time spent by a job in the system is defined as its flowtime. For job i, the flowtime is Fi = Ci − ri, where Ci is its completion time and ri is its ready time (= 0 in the static case). For a problem with n jobs, the mean flowtime is F = (F1 + F2 + ... + Fn)/n. The job lateness is defined as Li = Ci − di, where Ci is the job completion time and di is the job due date. For a problem with n jobs, the mean lateness is defined as L = (L1 + L2 + ... + Ln)/n. The job tardiness is defined as Ti = max{0, Li}, where Li is the job lateness. For a problem with n jobs, the mean tardiness is T = (T1 + T2 + ... + Tn)/n. The mean tardiness criterion has been a standard way to measure conformance to due dates, but it ignores the consequences of jobs completing early. Baker discussed such measures in his book [19] and provided many optimal and efficient algorithms.

However, penalties are also associated with products that are completed early. For example, in industry, if products are finished early and cannot be shipped before their due dates, they can still cost money to be held in finished goods
inventories until their due dates. Similar to Japanese long-range concerns in just-in-time (JIT) production methods, earliness should be discouraged as well as tardiness. Both earliness and tardiness penalties create nonregular performance measures. The importance of a nonregular measure is that the set of nondelay schedules does not constitute a dominant set [3]. A nonregular measure may be improved by inserting job idle time; thus, the search for an optimal solution is no longer restricted to a finite set such as the nondelay schedules [3]. This in turn leads to new developments in the field of scheduling research.

In this thesis, the problem of scheduling jobs to minimize the total earliness and tardiness penalties by using heuristics is discussed. Several heuristics will be introduced: EDD (earliest due date), WSPT (weighted shortest processing time), WLPT (weighted longest processing time), Pairwise Interchange and the Genetic Algorithm. The Pairwise Interchange and the Genetic Algorithm will be emphasized in this thesis. The above heuristics are analyzed and compared to each other using a large number of experiments. The Pairwise Interchange and the Genetic Algorithm heuristics are analyzed to discover the factors that significantly affect their performance. These studies are based on the results of the experiments below.

There are many publications which discuss the Weighted Early/Tardy scheduling problem. Some of these works address specific cases, which are
associated with limited conditions, and provide algorithms that can produce the optimal solutions. Some of them also consider problems with broad conditions and provide heuristics for all conditions. Baker reviewed the state of the Early/Tardy problem in his article [1]. He introduced the E/T model and described many variations of the problem. Based on a model that contains one machine and a common due date, he added many different features such as distinct due dates and parallel machines. He also listed many different job parameters and various penalty functions. A special case of the problem was discussed in article [10], in which all weights are equal and the jobs have one common due date. Kanet [10] presented an efficient procedure for minimizing the maximum penalty when the common due date is greater than the sum of all the jobs' processing times. The algorithm is very simple and fast. Kanet also provided a proof of the algorithm. In article [18], Garey proved that the problem of finding solutions of minimum penalty is NP-complete, and provided an optimal algorithm for the fixed job sequence case, which can be done in O(N log N) time. The time complexity of an optimal timing algorithm is an important factor in solving the problem, because it may have to run for a long time to find an optimal solution. Other authors have also given various heuristics with Linear Programming and Dynamic Programming solutions for the problem. In article [3], Fry presented a heuristic based on the Adjacent Pairwise Interchange algorithm to minimize mean absolute lateness (MAL). Using Fry's algorithm, the optimal timing can be determined through a linear program. In article [2], Yano analyzed the special
case where job weights are proportional to the processing times, and developed and compared several heuristic procedures; the experimental data of the study showed that the heuristics perform well. In article [16], Peng Si Ow investigated seven heuristics, including a variation of Beam Search, and then ran tests on 1440 problems to compare these heuristics and their performance.

The next section will provide a brief problem statement and then discuss the complex aspects of the problem, along with exploring its more moderate portions. Then, in section 3, the Optimal Slide Algorithm will be introduced, which can provide the optimal solution for a fixed job sequence. The Optimal Slide Algorithm is used in all the heuristics to give the accurate Q value for a given job sequence. In section 4, some simple heuristics will be introduced: WSPT, WLPT and EDD. These simple heuristics can give the optimal solutions under certain conditions that are described below.

Section 5 will introduce the Pairwise Interchange heuristic. A description of the Pairwise Interchange procedure will be given, and it will be analyzed using the results of the experiments. It will then be compared to the simple heuristics discussed in section 4 to show that the Pairwise Interchange procedure can greatly improve the simple heuristics (EDD, WSPT, and WLPT).
In section 6, the Genetic Algorithm will be introduced. As in section 5, a description of the Genetic Algorithm is given and the results of the experiments are then analyzed. A comparison of the Genetic Algorithm with the Pairwise Interchange is given to illustrate that the Genetic Algorithm can achieve better results.

A summary of these heuristics is given in the last section.
2. Problem Statement

The problem studied in this thesis is a single machine, nonpreemptive Weighted Early/Tardy scheduling problem with the following parameters:

The number of jobs: N
A set of jobs: {JOB i | i = 1, ..., N}
Release times: all jobs available at time = 0
Processing times: Pi (Pi ≤ Pmax)
Due dates: Di (0 ≤ Di ≤ Dmax)
Due date distribution factor: x; all due dates are random over [0, x]
Earliness weights: WEi
Tardiness weights: WTi
Completion times: Ci
Penalty function: Q = Σi qi, where
    qi = WEi |Ci − Di| if Ci < Di, which means JOB i is finished early,
    qi = WTi |Ci − Di| if Ci > Di, which means JOB i is finished late,
    qi = 0 if Ci = Di, which means JOB i is finished on time.
Time line: [0, Tmax], where Tmax = N(2Pmax − 1).

Figure 2.0: The problem model. For each job i, there is a due date (Di), a completion time (Ci), an earliness weight (WEi) and a tardiness weight (WTi). If the job is finished on time, the penalty is zero. If the job is finished before its due date, the penalty is WEi |Ci − Di|. If the job is finished after its due date, the penalty is WTi |Ci − Di|. In this thesis, the problem is simplified so that all the earliness weights are equal to the tardiness weights, and all of them are equal to one.

The due date distribution factor is used in the experimental model. The due date distribution factor is defined to represent the percentage of the time line used for random due date placement. All the due dates are in the range [0, x], where x is the due date distribution factor. For a due date distribution factor of 200, x = 200, which means
the due dates are random over the entire time horizon from 0 to 200. For a due date distribution factor of 0, x = 0, which means all the due dates are equal to 0.

Figure 2.1: The due date distribution factor. The due date distribution factor is used in the experimental model and ranges from 0 to 200. For each ddd factor x, a job's due date is randomly generated over [0, x]. When x = 0, all the due dates are equal to 0.

Since the average processing time is kept the same when the due date distribution factor is decreased, the level of conflict between the jobs intensifies. This makes the problem more difficult. In the experiments below, the due dates are randomly generated, so it is unlikely for the due dates to be equally spaced. In fact, distinct jobs may have the same due date.
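As a concrete illustration, the penalty function Q from the problem statement can be sketched in Python. This is a hypothetical helper, not code from the thesis; setting WEi = WTi = 1 gives the simplified case used in the experiments:

```python
def total_penalty(jobs, completions):
    """Q = sum of q_i, where q_i = WE_i*|C_i - D_i| if JOB i finishes early
    (C_i < D_i), q_i = WT_i*|C_i - D_i| if it finishes late (C_i > D_i),
    and q_i = 0 if it finishes exactly on time.

    jobs: list of (D_i, WE_i, WT_i) triples; completions: list of C_i."""
    Q = 0
    for (D, wE, wT), C in zip(jobs, completions):
        if C < D:            # finished early: storage-fee penalty
            Q += wE * (D - C)
        elif C > D:          # finished late: tardiness penalty
            Q += wT * (C - D)
    return Q
```

For example, with unit weights, due dates (1, 4, 12, 15, 16) and completion times (2, 5, 11, 15, 18), the individual penalties are 1, 1, 1, 0 and 2, so Q = 5.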
In the problem model, the earliness weights are treated like storage fees imposed on products which cannot be shipped on time. In contrast, tardiness weights are treated like the penalty cost for products that cannot be finished on time. Nonpreemption means that once a job is started it cannot be interrupted and must be executed to its completion, but idle time is allowed between jobs. The time line [0, Tmax] means that all jobs must be finished before Tmax. The length of the time line, Tmax = N(2Pmax − 1), was chosen to ensure that all jobs would make the schedule (with some room to spare). In this thesis, the Weighted Early/Tardy problem is simplified so that the earliness weights are equal to the tardiness weights, and all of them are equal to one. The objective is to find optimal solutions that minimize the total penalty, which is represented by the Q value.

The problem has a complex component and a more moderate one. The moderate element is to find the best start time for each job in a given job sequence. Garey [18] presented an optimal algorithm for the fixed job sequence that can be done in O(N log N) time. A variation of Garey's algorithm is implemented in this thesis; it is called the Optimal Slide Algorithm.

The complex component of the problem is to find the optimal job sequence, that is, the ordering of the jobs on the time line that provides the lowest cumulative penalty. Since there are N jobs in the problem, the total number of distinct solutions to the nonpreemptive scheduling problem is N!, which
is the number of different permutations of N elements. An exhaustive search guarantees that the optimal solution for any early/tardy problem will be found. In an exhaustive search, each possible sequence's Q value is scored, and the sequence with the lowest Q value is the optimal solution. For a small problem, an exhaustive search will quickly find the optimal solution. For example, when N = 6, it is only necessary to compare 720 sequences. However, with a larger problem there will be much greater difficulty using the exhaustive search method. Suppose N = 20; the number of distinct solutions is 20! = 2.432902008177 × 10^18. This means that the exhaustive search method will evaluate a very large number of sequences, and it will therefore take the computer an exponentially longer run time to find the optimal solution. Using a computer with a 200 MHz CPU, it may take 386 years to get the optimal solution by exhaustive search. When N = 25, there are 1.551121004333 × 10^25 possible sequences, and the run time for problems with 25 jobs is therefore 6,375,600 times greater than for a problem with 20 jobs. Suppose there is a computer "A1" that can calculate the optimal solution for the problem with 20 jobs in an acceptable time. One would need a much faster computer "B1", whose speed is 6,375,600 times that of "A1", to solve the problem with 25 jobs in the same acceptable amount of time as the original 20-job problem. Therefore, it is clear that the general problem has an exponentially increasing number of possible sequences to consider, and achieving an acceptable solution time will
be a major factor to consider, if an optimal solution is even achievable at all. Thus, the exhaustive search method is an inefficient algorithm.

There is a large class of problems similar to the scheduling problem discussed previously. One example is the traveling salesman problem (TSP). These problems are called "NP-complete" problems [11], and none of them has a known polynomial-time solution. If one can find a polynomial-time algorithm to solve one of these problems, one will then have found a polynomial-time algorithm for every problem in this class. However, it is not known whether such an algorithmic solution exists for these types of problems. Therefore, until now, only exhaustive search methods guarantee that the optimal solutions for these types of problems will be found. Nevertheless, exhaustive search is an ineffective use of the computer due to the amount of time expended on the calculation. It is therefore more efficient to use a heuristic method to achieve approximations to the optimal solutions in an acceptable time frame.
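The exhaustive search described above can be sketched in a few lines of Python. This is an illustrative sketch, not the thesis code; `evaluate` stands in for the Q scoring that the Optimal Slide Algorithm would provide:

```python
from itertools import permutations

def exhaustive_search(jobs, evaluate):
    """Score every one of the N! job orderings and return the lowest-Q one.
    `evaluate` maps a job sequence to its Q value (e.g. via optimal timing)."""
    best_seq, best_q = None, float("inf")
    for seq in permutations(jobs):      # all N! candidate sequences
        q = evaluate(seq)
        if q < best_q:
            best_seq, best_q = seq, q
    return best_seq, best_q
```

This is practical only for small N: N = 6 requires 720 evaluations, while N = 20 would require about 2.4 × 10^18, which motivates the heuristics studied here.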
3. The Optimal Slide Algorithm

The Optimal Slide Algorithm is used to give an optimal solution for a fixed job sequence. The algorithm comes from the same idea as Garey's algorithm in [18]. The difference is that Garey's algorithm looks like a "shift back" algorithm, where this one looks like a "slide ahead". The procedure of the Optimal Slide Algorithm is as follows:

At the beginning, all the jobs are shifted to the furthest left on the time line, and no idle time is inserted between jobs. Then, as JOB 1 starts at 0, JOB i starts at Σ Pk (k = 1, ..., i−1). Assume each job's start time is Si and completion time is Ci, where Ci = Si + Pi. A special weight is used for the Optimal Slide Algorithm:

If Di > Ci then Wi = +WEi
If Di ≤ Ci then Wi = −WTi

Now a cluster has been created with all jobs in it. The first job in the cluster is JOB 1 and the last job is JOB N. Each job's weight is updated using the rules above. After updating the weights, one begins to add the weights of the jobs in the cluster. Two cases may occur when the weights are added:

Case 1: The current sum is negative or equal to zero and the weight of the next job in the cluster is positive. That is, ΣWi (i = 1, ..., k) ≤ 0 and Wk+1 > 0. If these conditions are met, the current cluster is broken into two parts. The first part will
not be used, including all jobs before JOB k+1. The second part is then used as a new cluster, which includes all jobs after JOB k.

Case 2: The sum of all weights in the cluster is positive. For example, when M jobs are in the cluster, ΣWi (i = 1, ..., M) > 0. When this case occurs, the entire cluster is shifted to the right. The amount shifted is the minimum difference between the completion time and the due date over the jobs in the cluster (min |Ci − Di|, i = 1, ..., M).

After sliding the cluster (Case 2) or creating a new cluster (Case 1), the process is started over again on the newly created cluster. The weights are updated and the summation procedure is repeated until the sum of the weights of all jobs in the cluster is negative, without Case 1 occurring during the summation procedure.

Figure 3.1 shows an example of the Optimal Slide Algorithm. There are five jobs in the example. All the earliness weights and the tardiness weights are equal to 1. The processing time and the due date of each job are shown in the following table:

                 Job 1   Job 2   Job 3   Job 4   Job 5
Processing time    2       3       5       4       3
Due date           1       4      12      15      16
Figure 3.1: Example of the implementation of the Optimal Slide Algorithm. Each job is marked with a special weight above the job box. All the earliness weights and the tardiness weights are equal to 1. The processing times and due dates of the jobs are shown in the table above.

Step 1: Shift all jobs to the furthest left on the time line, with no idle time inserted. Mark each job with a special weight, calculated as: if Di > Ci then Wi = +WEi; if Di ≤ Ci then Wi = −WTi. The cluster includes all jobs (Job 1 to Job 5), with weights −1, −1, +1, +1, −1.

Step 2: Because the sum of the first two jobs' weights is negative (w1 + w2 = −1 − 1 = −2) and Job 3's weight is positive (w3 = +1), the cluster is broken. The first two jobs are left in their positions, and the new cluster includes the remaining jobs (Job 3 to Job 5).

Step 3: Because the sum of all weights in the new cluster is positive (+1 + 1 − 1 = +1), the entire cluster is shifted to the right. The amount shifted is the minimum difference between the completion time and the due date (min = d4 − c4 = 1). After sliding the cluster, the weight of each job in the cluster is recalculated. Since the sum of all weights in the cluster is now negative (w3 + w4 + w5 = +1 − 1 − 1 = −1), the algorithm stops, and this is the optimal solution for the fixed sequence.
As with Garey's algorithm, the Optimal Slide Algorithm can be done in O(N log N) time. The algorithm is like a train moving on the time line; at each stop, it may leave some carriages behind and continue with the rest.
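The procedure above can be sketched directly in Python. This is a simplified, quadratic-time illustration of the described cluster-and-slide steps, not the thesis's O(N log N) implementation; as one assumption, the slide amount is taken over the early jobs only, which is where a weight next changes sign:

```python
def optimal_slide(jobs):
    """Sketch of the Optimal Slide Algorithm for a fixed job sequence.
    jobs: list of (P, D, wE, wT). Returns the list of optimal start times."""
    # Step 1: pack all jobs to the far left with no inserted idle time.
    start, t = [], 0
    for P, D, wE, wT in jobs:
        start.append(t)
        t += P
    first = 0                              # index of the cluster's first job
    while first < len(jobs):
        # Special weight: +wE if the job is early (D > C), -wT otherwise.
        w = []
        for i in range(first, len(jobs)):
            P, D, wE, wT = jobs[i]
            w.append(wE if D > start[i] + P else -wT)
        # Case 1: a prefix sum <= 0 followed by a positive weight breaks the
        # cluster; the prefix stays put and a new cluster begins after it.
        s, broke = 0, False
        for k, wk in enumerate(w):
            s += wk
            if s <= 0 and k + 1 < len(w) and w[k + 1] > 0:
                first += k + 1
                broke = True
                break
        if broke:
            continue
        # Case 2: total positive -> slide the whole cluster right until the
        # nearest early job reaches its due date, then re-examine.
        if sum(w) > 0:
            delta = min(jobs[i][1] - (start[i] + jobs[i][0])
                        for i in range(first, len(jobs)) if w[i - first] > 0)
            for i in range(first, len(jobs)):
                start[i] += delta
        else:
            break                          # sum <= 0 and no break: done
    return start
```

On the five-job example of Figure 3.1 (processing times 2, 3, 5, 4, 3; due dates 1, 4, 12, 15, 16; unit weights), this sketch returns start times 0, 2, 6, 11, 15, matching the figure.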
4. Simple Heuristics

There are many simple and fast heuristics which can provide optimal solutions under certain conditions, but can be very weak under others. Three simple heuristics are introduced here: EDD (earliest due date), WSPT (weighted shortest processing time), and WLPT (weighted longest processing time). They are used for comparison purposes with the Pairwise Interchange and the Genetic Algorithm, which are presented in the following sections. In the definitions below, the Optimal Slide Algorithm is applied to each of the heuristics to provide the optimal timing for the resulting job sequence.

EDD (earliest due date): The EDD heuristic sequences the jobs on the time line according to the rule: Di ≤ Di+1.

The EDD heuristic is optimal when the due dates are sufficiently spaced that each job can be placed at its due date without conflict; the proof is obvious. For the static weighted tardiness problem (Twt = ΣWTi(Ci − Di)), if any sequence can produce Twt = 0, EDD is optimal [8]. In the static problem of minimizing the maximum lateness, there is an optimal policy satisfying the EDD rule [8]. Minimizing the maximum lateness is also very important if all jobs are going to be needed for the same project somewhere downstream. Using the EDD rule helps to
prevent the embarrassment of very large tardiness. However, as discussed in [8], the EDD rule is often weaker with respect to average tardiness. Since the EDD rule minimizes the maximum job tardiness, it should do better for lightly loaded job sequences, which means that the EDD heuristic can give very good results for low-tardiness problems with wide due date ranges.

WSPT (weighted shortest processing time): The WSPT heuristic sequences the jobs on the time line according to the rule: WTi/Pi ≥ WTi+1/Pi+1. In the experiments, all the tardiness weights are the same, so the rule simplifies to: Pi ≤ Pi+1.

The WSPT heuristic is optimal when the due dates are such that all jobs must be late. The WSPT heuristic can provide the optimal solutions for problems in which all due dates are equal to zero or smaller than the shortest job processing time. The proof is shown below:

Theorem: In the Weighted Early/Tardy problem, when all the due dates are smaller than the shortest processing time, the WSPT heuristic provides the optimal solutions.

Proof: Assume there are N jobs in the problem; each job has a processing time pi, due date di, earliness weight WEi and tardiness weight WTi, with di ≤ min(pi) (i = 1, 2, ..., N).
All the jobs are late, so the earliness weights are not used. Since all the jobs are late, no idle time is inserted between jobs. Suppose there is an optimal solution S which is not a WSPT sequence. Then somewhere in the sequence there must exist an adjacent pair of jobs, JOB i and JOB i+1, that violates the WSPT rule; with equal tardiness weights this means pi > pi+1. The Q value of the sequence S is:

Q = (p1 − d1)·wT1 + (p1 + p2 − d2)·wT2 + ... + (p1 + p2 + ... + pN − dN)·wTN
  = p1·(wT1 + wT2 + ... + wTN) + p2·(wT2 + wT3 + ... + wTN) + ... + pN·wTN − (d1·wT1 + d2·wT2 + ... + dN·wTN)

Construct a new sequence S' by interchanging the positions of the two jobs JOB i and JOB i+1. In S', the term for the job now in position i is pi+1·(wTi + wTi+1 + ... + wTN), and the term for the job now in position i+1 is pi·(wTi + wTi+2 + ... + wTN); all other terms are unchanged. In S, the corresponding terms are pi·(wTi + wTi+1 + ... + wTN) and pi+1·(wTi+1 + wTi+2 + ... + wTN). Subtracting,

Q' − Q = pi+1·wTi − pi·wTi+1

Since the pair violates the WSPT rule (wTi/pi < wTi+1/pi+1), this difference is negative; in the equal-weight case it reduces to (pi+1 − pi)·wT, which is negative since pi > pi+1. So Q' − Q < 0.
Therefore, changing the positions of JOB i and JOB i+1 decreases Q. This is a contradiction, since the sequence S was assumed to be an optimal solution. Thus, any sequence that is not a WSPT sequence can be improved by interchanging an adjacent pair of jobs that does not follow the WSPT rule. Therefore, the WSPT sequence itself must be optimal.

For the static Twt problem, if in a sequence it is not possible to make any job nontardy, then Twt is minimized by WSPT [8].

WLPT (weighted longest processing time): The WLPT heuristic sequences the jobs on the time line according to the rule: WEi/Pi ≤ WEi+1/Pi+1. Because all the earliness weights are the same in the experiments, the rule simplifies to: Pi ≥ Pi+1.

The WLPT heuristic is optimal if all the jobs must be early, such as when all due dates are equal to Tmax = N(2Pmax − 1), where Pmax = maxi{Pi}; then WLPT provides the optimal solution. The proof is similar to the proof used for WSPT. For nonregular static problems, if WLPT results in a schedule that does not have any tardy jobs, then this sequence will be optimal among schedules with no idle time [8].

The WLPT heuristic can be very weak when the WSPT provides a good solution, since the WLPT heuristic can be looked upon as the reverse of the WSPT
heuristic. Therefore, in cases where the WSPT heuristic provides the optimal solution, the WLPT heuristic provides the worst; on the other hand, when the WLPT heuristic gives the optimal solution, the WSPT heuristic provides the worst. Figure 4.1 shows an example of when the WSPT and WLPT heuristics provide the optimal solutions.

Case 1: when all due dates are equal to Tmax = 85, WLPT (Q = 46) provides the optimal solution and WSPT (Q = 74) has the worst performance. Case 2: when all due dates are equal to 0, WSPT (Q = 76) provides the optimal solution and WLPT (Q = 104) has the worst performance.

Figure 4.1: Example of the worst cases of the WSPT and WLPT heuristics. There are five jobs, with processing times 3, 5, 6, 7 and 9, and the length of the time line is 85 (T = 85); each job is marked with its processing time. When all due dates are equal to 0, the WSPT heuristic provides the optimal solution and the WLPT heuristic has the worst performance. On the other hand, when all due dates are equal to T, the WLPT heuristic provides the optimal solution and the WSPT heuristic has its worst case.
The EDD heuristic helps to prevent large lateness penalties. Since the EDD heuristic lowers the maximum tardiness, it performs better on a lightly loaded job sequence. The WSPT heuristic performs better when all the jobs must be late; therefore, it works better with heavily loaded job sequences. The WLPT heuristic is the reverse of the WSPT heuristic and performs well when all the jobs are required to be completed early.

The implementation of these simple heuristics has two parts: a sorting part and a timing part. Using the Heapsort algorithm, the sorting part can be done in O(N lg N) time; using the Optimal Slide Algorithm, the timing part can be done in O(N lg N) time. Therefore, all three simple heuristics can be completed in O(N lg N) time.
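The three sorting rules can be sketched as follows. This is a sketch with field names of my choosing, not the thesis's code; ties are left to the sort's stability, and the timing part (the Optimal Slide Algorithm) is not reproduced here.

```python
from typing import List, NamedTuple

class Job(NamedTuple):
    p: int    # processing time
    d: int    # due date
    w_e: int  # earliness weight
    w_t: int  # tardiness weight

def edd(jobs: List[Job]) -> List[Job]:
    # Earliest due date first.
    return sorted(jobs, key=lambda j: j.d)

def wspt(jobs: List[Job]) -> List[Job]:
    # Weighted shortest processing time: p_i / w_Ti nondecreasing.
    return sorted(jobs, key=lambda j: j.p / j.w_t)

def wlpt(jobs: List[Job]) -> List[Job]:
    # Weighted longest processing time: w_Ei / p_i nondecreasing.
    return sorted(jobs, key=lambda j: j.w_e / j.p)
```

With all weights equal, WSPT reduces to shortest-processing-time order and WLPT to its reverse, as used in the experiments.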
5. Pairwise Interchange

Having introduced several basic heuristics, the Pairwise Interchange heuristic is now presented for creating better schedules, i.e., schedules with an acceptably low Q value. The Pairwise Interchange heuristic is a neighborhood search technique. It is a simple technique and normally requires only a small amount of execution time. The intuitive idea behind the Pairwise Interchange heuristic is that by swapping pairs of jobs until no remaining swap can improve the schedule, the result may be the optimal solution. The following is a brief statement of the Pairwise Interchange heuristic.

INITIALIZE
1. Initialize the system, which generates the initial job sequence. The initial job sequence can be EDD (earliest due date), WSPT (weighted shortest processing time), WLPT (weighted longest processing time) or any other sequence, such as RAN (a randomly generated sequence).
2. Use the Optimal Slide Algorithm on the initial job sequence to get its Q value. Take the resulting sequence as the current best solution and its Q value as the current best Q value.
3. Set Circulation Control = True.
CYCLE
4. If Circulation Control is True, begin the Pairwise Interchange procedure. Consider swapping each job with each of the other jobs in the current best solution until all possible swaps are done. Then use the Optimal Slide Algorithm on each new sequence that resulted from the swapping to get its Q value.
5. If a sequence has a Q value lower than the current best Q value, replace the current best solution with this sequence and the current best Q value with the new Q value; then go back to step 4. If more than one sequence has a Q value lower than the current best Q value, pick the sequence with the lowest Q value. If no sequence has a Q value lower than the current best Q value, set Circulation Control = False.
6. If Circulation Control is False, quit the CYCLE and print the current best solution and the current best Q value. This is the result.

Since repeatedly swapping arbitrary pairs of jobs in the sequence can lead to all possible sequences, the worst case of the Pairwise Interchange may need O(N!) time to reach a solution.
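The CYCLE above is a steepest-descent loop and can be sketched as follows. The q_value callback stands in for the Optimal Slide Algorithm, which is not reproduced here.

```python
from itertools import combinations

def pairwise_interchange(seq, q_value):
    # seq: initial job sequence; q_value: maps a sequence to its Q value
    # (the thesis evaluates each candidate with the Optimal Slide Algorithm).
    best, best_q = list(seq), q_value(seq)
    while True:
        # Step 4: generate every pairwise swap of the current best solution.
        candidates = []
        for i, j in combinations(range(len(best)), 2):
            cand = list(best)
            cand[i], cand[j] = cand[j], cand[i]
            candidates.append((q_value(cand), cand))
        q, cand = min(candidates, key=lambda t: t[0])
        # Step 5: keep the lowest-Q swap if it improves; otherwise stop (step 6).
        if q < best_q:
            best, best_q = cand, q
        else:
            return best, best_q
```

Each cycle evaluates all N(N-1)/2 swaps before moving, which is what distinguishes this procedure from Yano's first-improvement variant discussed below.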
The Pairwise Interchange heuristic studied in this thesis is different from the Adjacent Pairwise Interchange (API) introduced by Fry [3]. It is also different from the Pairwise Interchange procedure introduced by Yano [2]. In article [3], the Adjacent Pairwise Interchange procedure states that if two adjacent jobs (i, j) exist such that interchanging i and j results in a reduction of total absolute lateness, the switch is made and another pair is considered. The evaluation of each pair of jobs for switching requires the solution of a linear program [3]. In article [2], Yano's Pairwise Interchange procedure is used to improve the heuristic solutions; it swaps each job with each of the other jobs in the sequence until a swap produces a better schedule. All three heuristics (API, Yano's PI and the PI discussed in this thesis) are types of neighborhood search strategies, but there are differences between them. The Adjacent Pairwise Interchange only considers swapping the positions of sequential neighbors, whereas the Pairwise Interchange discussed in this thesis considers all possible job swaps. In addition, Fry's API uses linear programming, whereas the Pairwise Interchange studied in this thesis uses dynamic programming and considers swapping arbitrary jobs over the whole sequence. Yano's Pairwise Interchange procedure swaps each job with each of the other jobs in the sequence and stops as soon as such a swap produces a better schedule. However, the Pairwise Interchange studied here considers swapping each job with each of the other jobs in the sequence until all possible swaps are done. Then, a
better schedule is picked and the process continues until no better schedule is generated. This strategy gives more chances to find a better schedule during each cycle and therefore in some cases may perform somewhat better than Yano's Pairwise Interchange.

The following experiment model is used to investigate the quality of the Pairwise Interchange heuristic for problems with up to twenty jobs. The experiment model is also used for later simulations of the Genetic Algorithm heuristic. The problem size is twenty jobs (N = 20), and the processing time for each job is randomly generated between 1 and 10, so that 1 <= p_i <= 10. The due dates are chosen uniformly over the interval [0, x], where x varies from 0 to 200. For this purpose, a "due date distribution factor" (ddd factor) was defined earlier (described in section 2: Problem Statement) to represent a percentage of the time line used for random due date placement. For each due date distribution factor, one hundred Early/Tardy problems are randomly generated, so that the average performance of the heuristics can be studied. In sections 5.1, 5.2 and 5.3, different experimental results will be presented.

The experimental results indicate that, given a sequence, the Pairwise Interchange procedure can produce a better solution than the initial sequence. The nature of the Pairwise Interchange is simple; it is a kind of exchange method, which can also be looked upon as a search strategy. It is better than simple heuristics such
as EDD, WSPT and WLPT, as it provides more opportunities to improve the schedule by interchanging jobs in a given schedule. However, the experimental results also indicate that the performance of the Pairwise Interchange is strongly dependent on the initial sequence. In the experiments, EDD is shown to be a good initial sequence when the due date distribution factor is large. Conversely, WSPT proves to be a good initial sequence when the due date distribution factor is small.

One thing must be mentioned here. Although the experimental results indicate that the Pairwise Interchange may greatly improve a given schedule and always performs well on small problem sizes, even when the problem size is N = 3 it may still fail to produce the optimal solution. For example, consider the three jobs described below:

                    Job 1   Job 2   Job 3
Due date             52      42      54
Processing time      15      12      30
WE = WT               1       1       1

If the Pairwise Interchange uses EDD or WSPT as the initial sequence, then the optimal solution will not be found. The initial sequence generated by EDD or
WSPT is 2 1 3; after the Pairwise Interchange procedure, the solution is still 2 1 3, the same as the initial sequence. However, the optimal solution is 3 2 1. This is because:

1. The sum of all the processing times is greater than the largest due date, so no idle time is inserted between any two jobs. In this example, the sum of all the processing times is 57 and the largest due date is 54; therefore, no job will finish later than 57. The due dates of JOB 1 and JOB 3 are close to 57, so if JOB 1 or JOB 3 is put in the last position of the sequence, the total penalty may be reduced.

2. The processing time of JOB 3 is much longer than that of any other job, and is even greater than the sum of the other two processing times. Therefore, if JOB 3 is put in the first position of the sequence, the total penalty may be greatly reduced.

3. Also, if JOB 3 is put in the first position of the sequence and JOB 2 is put immediately after JOB 3, then JOB 2 finishes exactly on time, so the penalty for JOB 2 equals 0.

However, if EDD or WSPT is chosen as the initial sequence, the first cycle of the swapping procedure produces the sequences 1 2 3, 3 1 2 and 2 3 1, none of which has a Q value lower than the initial sequence. Therefore, the Pairwise Interchange reaches equilibrium quickly and presents the
initial sequence as the solution, but the optimal solution 3 2 1 can never be reached.

5.1 Experiment: Comparison of Simple Heuristics and Pairwise Interchange

In the experiments on the Pairwise Interchange heuristic, a number of different initial sequences were tested to see how the initial sequence would affect the results. EDD, WSPT, WLPT and a random sequence were chosen as the initial sequences for the experiments. As mentioned before, EDD, WSPT and WLPT are simple, fast heuristics that can provide optimal solutions under certain conditions. In the experiments, these simple heuristics are first used for comparison purposes, to see how much the Pairwise Interchange procedure can improve their results.

Figure 5.1 shows the comparison of the WSPT heuristic and the Pairwise Interchange using WSPT as the initial sequence. The average improvement made by using the Pairwise Interchange procedure on the WSPT heuristic is about 15%. When the due date distribution factor is very small, the Pairwise Interchange does not improve much over the WSPT heuristic. This is because WSPT gives better solutions when all the due dates are on a very tight time line. In particular, when all the due dates are zero or are smaller than the shortest processing time, the
WSPT heuristic gives an optimal solution, and in that case the improvement is exactly 0. However, when the due date distribution factor becomes larger, the Pairwise Interchange procedure improves WSPT immensely. This is because WSPT does not perform as well when the due dates are spread over a larger range.

Figure 5.2 shows the comparison of the WLPT heuristic and the Pairwise Interchange using WLPT as the initial sequence. The results indicate that the Pairwise Interchange procedure greatly improves the WLPT at every due date distribution factor, and the average improvement is about 18%. This is because in the experiments the due dates are random over [0, x], but the WLPT heuristic only performs well when all the jobs are intended to be finished early. The WLPT heuristic can give the optimal solution when all the due dates are equal to Tmax, where Tmax = N(2Pmax - 1) and Pmax = maximum { p_i }; therefore, when all the due dates are very tight on the given time line and lie over the interval [x, Tmax], where x is close to Tmax, WLPT should perform much better.

Figure 5.3 is the comparison of the EDD heuristic and the Pairwise Interchange using EDD as the initial sequence. The results indicate that when the due date distribution factor is small, the Pairwise Interchange procedure improves the EDD somewhat more than when the due date distribution factor is large. This is because when the due date distribution factor is large, the due dates are spread over a wide
range on the time line; therefore, more chances are created for the occurrence of the low-tardiness problem. The EDD heuristic can produce better solutions for the low-tardiness problems when a wide range of due dates occurs. In particular, when the due dates have sufficient space between them, so that each job can be placed at its scheduled due date, the EDD heuristic gives a penalty of 0. Therefore, the average improvement achieved by using the Pairwise Interchange procedure on the EDD is about 9%, much smaller than the improvement on the WSPT and the WLPT.
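The three-job counterexample given at the beginning of this chapter can be verified by brute force. The sketch below replaces the Optimal Slide Algorithm with an exhaustive search over integer start times (sufficient here because all the data are integers), and confirms that 2 1 3 is a local optimum under pairwise swaps while 3 2 1 is the global optimum.

```python
from itertools import permutations

P = {1: 15, 2: 12, 3: 30}   # processing times from the counterexample
D = {1: 52, 2: 42, 3: 54}   # due dates; all weights equal 1

def best_q(order):
    # Minimum Q over integer start times (idle time allowed); with
    # integer data an integer-valued optimum exists in this range.
    a, b, _ = order
    best = None
    for s1 in range(0, 56):
        for s2 in range(s1 + P[a], 57):
            for s3 in range(s2 + P[b], 58):
                q = sum(abs(s + P[j] - D[j])        # w_E = w_T = 1
                        for j, s in zip(order, (s1, s2, s3)))
                if best is None or q < best:
                    best = q
    return best
```

With these data the search returns 33 for the sequence 2 1 3, while every sequence reachable from it by a single swap is strictly worse, and the global optimum 3 2 1 scores 29.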
Figure 5.1: Average Q value of the WSPT and the Pairwise Interchange using WSPT as the initial sequence. For each problem, 20 jobs are randomly generated. For each job, the processing time is generated between 1 and 10. The due date distribution factor is between 0 and 200. For each due date distribution factor, 100 problems are randomly generated and the average Q is taken. The average improvement made by using the Pairwise Interchange procedure on the WSPT heuristic is about 15%. When the due date distribution factor is very small, the Pairwise Interchange does not improve much over the WSPT heuristic; in particular, when the due date distribution factor is 0 or 1, the improvement is 0. This is because WSPT provides the optimal solution when all due dates are smaller than the shortest processing time.
Figure 5.2: Average Q value of the WLPT and the Pairwise Interchange using WLPT as the initial sequence. For each problem, 20 jobs are randomly generated. For each job, the processing time is generated between 1 and 10. The due date distribution factor is between 0 and 200. For each due date distribution factor, 100 problems are randomly generated and the average Q is taken. The Pairwise Interchange procedure greatly improves the WLPT at every due date distribution factor, and the average improvement is about 18%.
Figure 5.3: Average Q value of the EDD and the Pairwise Interchange using EDD as the initial sequence, for each due date distribution factor. For each problem, 20 jobs are randomly generated. For each job, the processing time is generated between 1 and 10. The due date distribution factor is between 0 and 200. For each due date distribution factor, 100 problems are randomly generated and the average Q is taken. The average improvement by using the Pairwise Interchange on the EDD is about 9%, much smaller than the improvement on the WSPT and the WLPT. But when the due date distribution factor is small, the Pairwise Interchange procedure improves the EDD much more than when the due date distribution factor is large. This is because the EDD heuristic can give much better solutions for the low-tardiness problem when a wide range of due dates occurs.
5.2 Experiment: Comparison of the PI Using Different Initial Sequences

Figure 5.4 is the comparison of the Pairwise Interchange heuristics using different initial sequences. The results indicate that the different initial sequences affect the final solutions significantly. A randomly generated initial sequence (RAN) is added to the experiments. The randomly generated initial sequence provides a purely random sequence that is not biased by any job attributes, so that the performance of the Pairwise Interchange can be studied on arbitrary cases. The results show that by using different initial sequences, the final solutions can be noticeably different. The Pairwise Interchange that uses EDD as the initial sequence provides better solutions for the majority of the due date distribution factors. Nevertheless, it performs much worse when all due dates are tight and are not spread out enough to allow each job to be placed at its due date on the time line. On the contrary, where the Pairwise Interchange using EDD as the initial sequence does not perform well, the Pairwise Interchange using WSPT as the initial sequence provides better solutions. In the experiment, when the due date distribution factor is between 0 and 53 (ddd factor: [0, 53]), the Pairwise Interchange using EDD as the initial sequence averaged 19.5% greater than the Pairwise Interchange using WSPT as the initial sequence. However, when the due date distribution factor is between 54 and 200 (ddd factor: [54, 200]), the Pairwise Interchange using EDD as
the initial sequence averaged 69.8% lower than the Pairwise Interchange using WSPT as the initial sequence. When the due date distribution factor becomes large, the Pairwise Interchange using WSPT, WLPT or RAN as the initial sequence all arrive at much the same solutions. These solutions are much worse than those using EDD as the initial sequence, and some of them cannot achieve an acceptably low Q value. The Pairwise Interchange using WLPT as the initial sequence is even worse than the Pairwise Interchange using RAN as the initial sequence when the due date distribution factor is small. In light of the experiments performed, it is clear that the solutions provided by the Pairwise Interchange are strongly dependent on the initial sequence. EDD is a strong initial sequence for most of the cases, especially when all the due dates are spread sufficiently over a wide enough range.

Figure 5.5 shows the average improvement made by using the Pairwise Interchange procedure on the different initial sequences. The improvement is measured as a percentage, calculated as:

% improvement = ((initial sequence - PI) / initial sequence) * 100%

where the initial sequence is WSPT, WLPT or EDD. In Figure 5.5, the results indicate that the improvement of the Pairwise Interchange using EDD as the initial sequence is relatively stable over the entire range of due date distribution factors and averages about 8.7%. The improvement of
the Pairwise Interchange using WLPT or WSPT as the initial sequence is stable when the due date distribution factor is more than 90, and it averages about 20%. However, when the due date distribution factor is less than 90, the improvement of the Pairwise Interchange using WSPT or WLPT as the initial sequence quickly increases as the due date distribution factor becomes larger. For the WSPT especially, the improvement increases significantly and then plateaus at 20% as the due date distribution factor goes from 0 to 90. This is because WSPT performs well when the due date distribution factor is small. In particular, when all due dates are smaller than the shortest processing time, WSPT provides the optimal solutions; but when the due date distribution factor becomes larger, WSPT degrades very quickly. Therefore, as WSPT rapidly degrades, the improvement achieved by using the Pairwise Interchange procedure grows as the due date distribution factor becomes larger. After reaching a plateau (ddd factor around 90, improvement around 20%), the improvement becomes relatively stable.
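The improvement metric can be written down directly. This is a trivial helper of my own; the Q values in the usage check are illustrative numbers, not the thesis's data.

```python
def improvement_pct(initial_q: float, pi_q: float) -> float:
    # Percentage improvement of the Pairwise Interchange result over
    # its initial sequence: (initial - PI) / initial * 100%.
    return (initial_q - pi_q) / initial_q * 100.0
```

For example, an initial Q of 1000 reduced to 851 is a 14.9% improvement.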
Figure 5.4: Average Q value of the Pairwise Interchange using different initial sequences. For each problem, 20 jobs are randomly generated. For each job, the processing time is generated between 1 and 10. The due date distribution factor is between 0 and 200. For each due date distribution factor, 100 problems are randomly generated and the average Q is taken. There are four different initial sequences: WSPT, WLPT, EDD and RAN. The Pairwise Interchange that uses EDD as the initial sequence provides better solutions for the majority of the due date distribution factors. Nevertheless, it performs much worse when all due dates are tight on the time line. On the contrary, the Pairwise Interchange using WSPT as the initial sequence performs much better when all the due dates are very tight. When the due date distribution factor is between 0 and 53, the PI using EDD as the initial sequence averaged 19.5% greater than the PI using WSPT as the initial sequence. When the due date distribution factor is between 54 and 200, the PI using EDD as the initial sequence averaged 69.8% lower than the PI using WSPT as the initial sequence.
Figure 5.5: Average improvement made by using the Pairwise Interchange procedure on different initial sequences. There are three different initial sequences: EDD, WSPT and WLPT. The average improvement by using the PI procedure on the WSPT is 14.89%; on the WLPT, 17.89%; on the EDD, 8.66%. The improvement is calculated as % improvement = (initial sequence - PI) / initial sequence * 100%. The improvement of the PI using EDD as the initial sequence is relatively stable over the whole range of due date distribution factors. The improvement of the PI using WSPT or WLPT as the initial sequence increases significantly and plateaus at 20% as the due date distribution factor goes from 0 to 90.
5.3 Experiment: Comparison of the Heuristics and the Optimal Solutions

Figure 5.6 illustrates the comparison of the heuristics and the optimal solutions for problems of moderate size. In the experimental model, N = 8, the processing time is randomly generated between 1 and 10, and the due date distribution factor is between 0 and 100; for each due date distribution factor, 50 problems are randomly generated. Table 1.0 shows the total number of cases in which each heuristic finds the optimal solution.

Heuristic                                 Optimal solutions found   Test cases
WSPT                                      338                       5050
WLPT                                      0                         5050
EDD                                       1265                      5050
PI using WSPT as the initial sequence     609                       5050
PI using WLPT as the initial sequence     5                         5050
PI using EDD as the initial sequence      2415                      5050

Table 1.0: Comparison of the heuristics and the optimal solutions

In the experiment, the Pairwise Interchange using EDD as the initial sequence found the optimal solutions in 2415 of the 5050 problem sets. Overall, the
Pairwise Interchange using EDD as the initial sequence averaged 9% greater than the optimal solution. In Table 1.0, the results also indicate that the number of optimal solutions found by the Pairwise Interchange using EDD as the initial sequence is nearly twice that of the EDD heuristic, and is nearly half of the 5050 problems. However, as Figure 5.6 shows, the Pairwise Interchange using EDD as the initial sequence performs much worse when the due date distribution factor is small. When the due date distribution factor is between 0 and 23, the Pairwise Interchange using EDD as the initial sequence averaged 9.4% greater than the Pairwise Interchange using WSPT as the initial sequence, and averaged 14.8% greater than the optimal solution. The Pairwise Interchange using WSPT as the initial sequence finds all optimal solutions when the due date distribution factor is 0, 1 or 2. When the due date distribution factor is between 24 and 100, the Pairwise Interchange using EDD as the initial sequence averaged 79.7% lower than the Pairwise Interchange using WSPT as the initial sequence, and averaged 6.8% greater than the optimal solution.
Figure 5.6: Average Q value of the heuristics and the optimal solutions for each due date distribution factor. For each problem, 8 jobs are randomly generated. For each job, the processing time is generated between 1 and 10. The due date distribution factor is between 0 and 100. For each due date distribution factor, 50 problems are randomly generated and the average Q is taken. The heuristics are WSPT, WLPT, EDD, and the PI using WSPT, WLPT and EDD as the initial sequence. The PI using EDD as the initial sequence found the optimal solutions in 2415 of the 5050 problem sets; overall, it averaged 15.39% greater than the optimal.
6. Genetic Algorithm

A Genetic Algorithm is a search strategy based on the science of natural selection. It is akin to the natural evolutionary process, in which the creature most adaptable to its environment tends to survive. In nature, the ability of a creature to adapt to its changing environment is crucial for its survival. Individual characteristics are defined by an individual's genes, which are grouped into chromosomes. New characteristics are produced by the evolutionary process. Genetic Algorithms were first introduced by John Holland in 1975, when he used a computer to simulate the adaptive behavior of natural selection. Holland also determined that crossover is the primary method for generating new material, while mutation is only used to preserve diversity.

To use Genetic Algorithms to solve scheduling problems, we first need a population of solutions, which is like a group of creatures in nature. In each generation, two members are picked from the population as parents, and a new solution (child) is generated by taking some features (genes) from each parent and mixing them. The worst member of the population is replaced with the child, and the process continues into the next generation. The best solution (fittest individual) survives until the end of all generations.
The Genetic Algorithm studied in this thesis includes three parts: Selection, Recombination and Mutation. The population size is set to the number of jobs, and the initial members of the population are randomly generated. Before each generation, the Q value of each member of the population is calculated with the Optimal Slide Algorithm, and the members are sorted by Q value; the sequence with the lowest Q value is put at the top of the population.

The Selection part picks the parents. In the Genetic Algorithm studied in this thesis, one parent is selected from the top half of the population, which means this parent should have a lower Q value. The other parent is picked from the whole population, which means this parent may not be very good if it comes from the bottom half of the population.

The Recombination part generates the new solutions (children) by taking features from the parents. Many operations can be used in this part, such as PMX crossover and Uniform crossover. This thesis focuses in particular on an operation called the Q-crossover method. The Q-crossover method takes the best features from each parent and mixes them to generate a child. That is, the Q value of each job in each parent sequence is calculated; for each job, the occurrence with the lower Q value is chosen and its position is recorded. Then a new sequence (child) is generated by recombining these recorded jobs.

The Mutation part is used to avoid having identical sequences in the population. If the Mutation part were not used in the Genetic Algorithm, the
members of the population could become identical over the generations. Therefore, the Mutation part must be used to maintain differences in the population, which gives more choices in each generation. The following steps describe the procedure of the Genetic Algorithm:

INITIALIZE:
1. Randomly generate N sequences for the population, where N is the number of jobs.
2. For each member of the population, calculate the Q value of the sequence with the Optimal Slide Algorithm. If the Recombination part uses Q-crossover, the Q value of each job in the sequence must also be calculated.

CYCLE:
3. Sort the members of the population by Q value and keep the member with the lowest Q value at the top. The sequence at the top of the population is the current best solution of the Genetic Algorithm.
4. Randomly pick one member from the top half of the population and the other from the whole population. These two sequences are the parents that will be used to generate the new child. (Selection part)
5. Q-crossover is used to generate a child from the two parents. To create the child, compare each job's Q value in each parent and record
the position of the job that has the lower Q value. After applying this method to all jobs, the child is created by recombining the jobs according to their recorded positions. Other methods could be used here to create the child; in this thesis, Uniform crossover and PMX crossover are applied for comparison purposes, and these comparisons are presented later. (Recombination part)
6. Calculate the Q value of the child sequence and compute the Q value of each job in the sequence. Then replace the member at the bottom of the population with the child.
7. After replacing, if two sequences are exactly the same, swap two random jobs in one of them; repeat until no two sequences in the population are exactly the same. (Mutation part)

Figure 6.0 shows an example of the Q-crossover process.
Figure 6.0: The Q-crossover method. The two parents are marked with each job's cost (Q) below the job box. If a job has a lower cost than the same job in the other parent, that job is chosen and its position is recorded. The child is created by alternating between the parental sequences and taking each job from the parent in which it was chosen. In this example, the resulting child is: 3 4 1 8 7 6 2 5 0 9.

In each generation of the Genetic Algorithm, sorting the population can be done in O(N lg N) time with the Heapsort algorithm. The Selection part can be completed in constant time. In the Recombination part, the Q-crossover method has two steps. The first step compares the Q value of each job in the mother sequence with the same job in the father sequence and marks the one with the lower Q value; this step can be completed in O(N lg N) using a Fibonacci heap. The second step creates the child by alternating between the parental sequences and choosing each job where it is marked; this step can be done in
PAGE 55
O(N). Therefore, the Qcrossover method can be completed in O(NlgN + N), which is O(NlgN). To get the Qvalue of the new sequence (child), the process will take O(NlgN) by using Optimal Slide Algorithm. The Mutation part could be completed in O(N+lgN). Therefore, for the Genetic Algorithm using Qcrossover methods, each generation can be completed in O(NlgN). Suppose there are M generations, the Genetic Algorithm using Qcrossover method can be completed in O(MNlgN) time. The recombination part will strongly affect the quality of the Genetic Algorithm. The Qcrossover operation is derived from the idea that is used in the TSP problem by Whitely [7]. Whitely use the "edge recombination" operation to generate the child's tour, which is a sequence of cities decided by comparing the weights of the edges between cities suggested by the two parents. There are some other types of "crossover" operations. In this thesis, the Qcrossover will be compared with the PMX crossover and the Uniform crossover. The processes of those two crossover functions will be presented in the next section. The results of the experiments indicate that the Qcrossover is better than the PMX crossover and the Uniform crossover when the due date distribution factor is large. However, when the due date distribution factor is small, the PMX crossover and the Uniform crossover perform a little better than the Qcrossover. The results of the experiments also indicate that with enough generations, all three Recombination strategies can provide the same or very close to the same solutions. The details of this will be given later. 48
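To make the recombination step concrete, here is a minimal Python sketch of the Q-crossover described above. The `job_cost(seq, pos)` callable is an assumed stand-in for the per-job Q contribution (the thesis computes it from the schedule), and breaking position ties in mother order is also an assumption, since the text does not specify tie handling.

```python
def q_crossover(mother, father, job_cost):
    """Create one child from two parent sequences (Q-crossover sketch).

    mother, father: permutations (lists) of the same job ids.
    job_cost(seq, pos): assumed per-job cost of the job at position
    pos in sequence seq -- a placeholder for the thesis's Q value.
    """
    chosen = []
    for job in mother:
        pm = mother.index(job)           # job's position in the mother
        pf = father.index(job)           # job's position in the father
        # Keep the copy of the job whose parent gives it the lower cost,
        # recording the position it held in that parent.
        if job_cost(mother, pm) <= job_cost(father, pf):
            chosen.append((pm, job))
        else:
            chosen.append((pf, job))
    # Recombine by recorded position order; ties keep mother order
    # (an assumption -- the thesis does not specify tie handling).
    chosen.sort(key=lambda t: t[0])
    return [job for _, job in chosen]
```

Whatever cost function is plugged in, the child is always a permutation of the parents' jobs, so it can be evaluated by the same Q routine as any other member of the population.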
The Selection Part also affects the performance of the algorithm. Just as the initial sequence strongly affects the Pairwise Interchange algorithm, the members picked to be the parents strongly affect each generation. This effect can be reduced by a large number of generations; with enough generations, the effect of the Selection Part can always be greatly reduced. There are also many mutation strategies; the one used in this thesis is a simple, expeditious one. Several different methods were tried and produced similar results. As with the process of selecting parents, the effect of the Mutation Part is also reduced by a large number of generations.
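The duplicate-elimination mutation described above (swap two random jobs in a copy until no two population members are identical) can be sketched as:

```python
import random

def enforce_diversity(population, rng=random):
    """Mutation step sketch: whenever a sequence duplicates one already
    seen, swap two random jobs in it until it is unique."""
    seen = set()
    for seq in population:
        while tuple(seq) in seen:
            i, j = rng.sample(range(len(seq)), 2)
            seq[i], seq[j] = seq[j], seq[i]   # random pairwise swap
        seen.add(tuple(seq))
    return population
```

Each swap preserves the permutation property, so every member remains a valid job sequence.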
6.1 Other Recombination Strategies

As discussed in the previous section, the Recombination Part greatly affects the performance of the Genetic Algorithm. The primary recombination strategy uses the crossover function. There have been many other strategies that used mutation as the primary operation to introduce a new member (child); however, Holland determined that the crossover function should be the primary operation for introducing new members and that mutation should only be used to preserve diversity. The Q-crossover is the focus of this thesis and is compared with two other crossover functions: PMX crossover and Uniform crossover. The following shows the processes of these two crossover functions.

PMX crossover: PMX crossover starts by randomly choosing two crossover points to build a window for both parental sequences. For each pair of jobs in the window (the two jobs are in the same sequential position), first find the job in the mother sequence that matches the one in the father sequence; then change the position of the jobs in the mother sequence so that it is aligned with the same job in the father sequence (in the window). The figure below shows the PMX crossover procedure:
[Figure 6.1 diagram:
Mother: 8 3 2 1 4 7 6 5 0 9
Father: 3 4 9 8 7 6 2 1 0 5
For the first pair in the window <1, 8>, swap their positions in the mother sequence to get 1 3 2 8 4 7 6 5 0 9, and keep going. Finally the child is: 5 3 4 8 7 6 2 1 0 9.]

Figure 6.1: PMX crossover. First, randomly choose two crossover points to build a window for both parental sequences. Then, for each pair of jobs in the window (one from the mother sequence, one from the father sequence), use the job in the father sequence as the reference to direct the swap of jobs in the mother sequence.

In the PMX crossover, finding a specific job in the mother sequence can be done in O(lg N) using a Fibonacci heap, and changing the position of two specific jobs can be done in O(1). Since the maximum window size is N, at most N jobs need to be found in a sequence, which takes O(N lg N), and at most N pairs of jobs need to be interchanged, which takes O(N). Therefore, the PMX crossover can be completed in O(N lg N + N) = O(N lg N) time, the same as the Q-crossover, and the Genetic Algorithm using PMX crossover can be completed in O(M N lg N).
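A minimal sketch of this PMX variant (father-directed swaps inside a window on a copy of the mother) might look like the following; the half-open window convention `[lo, hi)` is an implementation choice, not something the thesis specifies.

```python
def pmx_crossover(mother, father, lo, hi):
    """PMX sketch: for each window position p in [lo, hi), swap jobs in
    a copy of the mother so that position p matches the father."""
    child = list(mother)
    pos = {job: p for p, job in enumerate(child)}  # job -> position
    for p in range(lo, hi):
        wanted = father[p]
        q = pos[wanted]                     # where the wanted job sits now
        child[p], child[q] = child[q], child[p]
        pos[child[q]] = q                   # keep the index map up to date
        pos[child[p]] = p
    return child
```

Using the position map keeps each lookup O(1), so the whole window costs O(N) in this sketch; the O(N lg N) bound quoted above assumes the Fibonacci-heap lookup instead.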
Uniform crossover: Uniform crossover considers a window covering the entire sequence. For each pair of jobs in the window, a random variable called the "checker" is generated. If the checker is at least 0.5 (checker ≥ 0.5), the two jobs are swapped in the mother sequence, exactly as in the PMX crossover. After this process is completed on all pairs of jobs, the new sequence is the child (the original mother sequence is, of course, still kept). The checker acts much like a decision maker: the child's features are decided by this variable. The figure below shows the procedure of Uniform crossover.

[Figure 6.2 diagram:
Mother: 8 3 2 1 4 7 6 5 0 9
Father: 3 4 9 8 7 6 2 1 0 5
Checker: 0.2 0.7 0.4 0.9 0.3 0.2 0.8 0.1 0.3 0.6
For the first pair in the window <8, 3>, because the checker < 0.5, their positions in the mother sequence are not swapped; but for <3, 4> they are. Finally the child is: 1 4 6 8 3 7 2 9 0 5.]

Figure 6.2: Uniform crossover. There is a checker value below each pair of jobs, randomly generated between 0 and 1. If the checker ≥ 0.5 for a pair of jobs, use the job in the father sequence as the reference to direct the swap of jobs in the mother sequence.

In the Uniform crossover, finding a specific job in the mother sequence can be completed in O(lg N) using a Fibonacci heap, and changing the position of two specific jobs can be completed in O(1). Since there may be N pairs of jobs to interchange, the Uniform crossover can be completed in O(N lg N + N) = O(N lg N) time, the same as the Q-crossover. Thus, the Genetic Algorithm using Uniform crossover can be completed in O(M N lg N).
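Uniform crossover is the same father-directed swap applied at every position where a random checker reaches 0.5; a sketch, with the checker drawn per position as described above:

```python
import random

def uniform_crossover(mother, father, rng=random):
    """Uniform crossover sketch: a checker >= 0.5 at position p forces
    a copy of the mother to take the father's job at p via a swap."""
    child = list(mother)
    pos = {job: p for p, job in enumerate(child)}
    for p in range(len(child)):
        if rng.random() >= 0.5:             # the "checker" decision
            wanted = father[p]
            q = pos[wanted]
            child[p], child[q] = child[q], child[p]
            pos[child[q]] = q
            pos[child[p]] = p
    return child
```

With a checker that is always high the child converges to the father; with one that is always low it stays the mother, which matches the decision-maker role described above.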
6.2 Experiment: Comparison of Genetic Algorithm and Pairwise Interchange

To study the average performance of the Genetic Algorithm, a large number of experiments were run. As in the experiment model used for the Pairwise Interchange, the due date distribution factor is between 0 and 200. For each due date distribution factor, 100 scheduling problems are randomly generated, and the Genetic Algorithm and the Pairwise Interchange are applied to find the solutions. The Pairwise Interchange algorithms used here for comparison purposes use EDD and WSPT as the initial sequences. For the Genetic Algorithm, Q-crossover is chosen as the Recombination strategy, and 5000 generations are applied to obtain each solution.

Figure 6.3 shows the average Q value of the Genetic Algorithm and the Pairwise Interchange at each due date distribution factor. The results show that the Genetic Algorithm performs better than the Pairwise Interchange, no matter which initial sequence the Pairwise Interchange uses. When the due date distribution factor is small, the difference between the Genetic Algorithm and the Pairwise Interchange using EDD as the initial sequence is very large; the Genetic Algorithm gives a much lower Q value. This is a consequence of using EDD as the initial sequence of the Pairwise
Interchange. As discussed before, the Pairwise Interchange using EDD as the initial sequence does not perform as well when all due dates are very tight, and is much worse than the Pairwise Interchange using WSPT as the initial sequence. When the due date distribution factor is small, the Genetic Algorithm still performs better than the Pairwise Interchange using WSPT as the initial sequence, but the difference is not very large, because the Pairwise Interchange using WSPT as the initial sequence always performs well when all due dates are very tight and small. The Genetic Algorithm provides better solutions than the Pairwise Interchange using WSPT as the initial sequence at most due date distribution factors. But when the due date distribution factor is 0, 1, or 2, the Pairwise Interchange using WSPT as the initial sequence is better than the Genetic Algorithm: the Genetic Algorithm averaged 0.03% worse there. This is because WSPT gives the optimal solution when all due dates are smaller than the smallest processing time. In the experiments, the smallest processing time is 1, so when the due date distribution factor is 0 or 1, WSPT gives the optimal solution, and so does the Pairwise Interchange using WSPT as the initial sequence. When the due date distribution factor is very small, such as 2, the Pairwise Interchange using WSPT as the initial sequence still has a greater chance of reaching the optimal solution using a minimal number of job swaps. As mentioned in the experiments
with problems of small size in the Pairwise Interchange, the Pairwise Interchange using WSPT as the initial sequence found all optimal solutions when the due date distribution factor is 0, 1, or 2. As the due date distribution factor becomes large, the difference between the Genetic Algorithm and the Pairwise Interchange using EDD as the initial sequence becomes smaller; on the contrary, the difference between the Genetic Algorithm and the Pairwise Interchange using WSPT as the initial sequence becomes larger. Although the difference between the Genetic Algorithm and the Pairwise Interchange using EDD as the initial sequence can be very small, the Genetic Algorithm still gives a lower Q value at every due date distribution factor.

Figure 6.4 shows the difference between the Genetic Algorithm and the Pairwise Interchange. The difference is measured in percent, calculated as: % difference = (PI − GA) / PI × 100%. The results indicate that the Genetic Algorithm is always much better than the Pairwise Interchange. The average difference between the Genetic Algorithm and the Pairwise Interchange using EDD as the initial sequence is about 17.5%. When the due date distribution factor is small, the difference is very large, around 20%, with a maximum of 26%. As the due date distribution factor becomes larger, the performance of the Pairwise Interchange using EDD as the initial sequence becomes much better, but the Genetic Algorithm still beats the
Pairwise Interchange by at least 6%, even when the due date distribution factor approaches the maximum value (ddd factor = 200). The average difference between the Genetic Algorithm and the Pairwise Interchange using WSPT as the initial sequence is 57.82%. When the due date distribution factor is 0, 1, or 2, the difference is about 0.03% (PI is lower than GA); however, when the due date distribution factor is between 3 and 200, the difference is about 59.3%. The Genetic Algorithm performs much better than the Pairwise Interchange using WSPT as the initial sequence.

The Genetic Algorithm also has the advantage that it can be run for as long as one wishes, possibly achieving better results. The problem is that one does not really know how long the program must run to achieve a better solution, because the Selection Part, Recombination Part, and Mutation Part all affect the quality of each generation, although this effect can be reduced by a larger number of generations. One big disadvantage of the Genetic Algorithm is that it takes more time to get a better solution, even hundreds of times longer than the Pairwise Interchange.
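The experiment model used throughout this section (20 jobs per problem, processing times in [1, 10], due dates controlled by the distribution factor) can be sketched as a generator. The dictionary field names and the absence of early/tardy weights are assumptions; this chunk of the thesis does not restate the weight distribution.

```python
import random

def make_problem(n_jobs=20, ddd_factor=100, rng=random):
    """One random early/tardy instance in the experiment model (sketch)."""
    return [{"id": j,
             "p": rng.randint(1, 10),         # processing time in [1, 10]
             "d": rng.randint(0, ddd_factor)  # due date spread by ddd factor
            } for j in range(n_jobs)]
```

Generating 100 such instances per ddd factor from 0 to 200 reproduces the shape of the experiment, with the cost model supplied separately.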
[Figure 6.3 chart: average Q value (0–1200) versus due date distribution factor (0–200) for GA, PI using WSPT, and PI using EDD.]

Figure 6.3: Average Q value of the Pairwise Interchange and the Genetic Algorithm. For each problem, 20 jobs are randomly generated; for each job, the processing time is generated between 1 and 10. The due date distribution factor is between 0 and 200; for each due date distribution factor, 100 problems are randomly generated and the average Q is taken. The Pairwise Interchange uses EDD and WSPT as the initial sequences; the Genetic Algorithm uses Q-crossover as the recombination strategy. The Genetic Algorithm performs much better than the Pairwise Interchange. The average difference between the Genetic Algorithm and the Pairwise Interchange using EDD as the initial sequence is about 17.5%; using WSPT, 57.82%. However, when the due date distribution factor is 0, 1, or 2, the PI using WSPT as the initial sequence is 0.03% lower than the GA.
[Figure 6.4 chart: % difference (0–100) versus due date distribution factor for GA vs. PI using WSPT and GA vs. PI using EDD.]

Figure 6.4: Average difference between the Pairwise Interchange and the Genetic Algorithm at each due date distribution factor. The Pairwise Interchange uses EDD and WSPT as the initial sequences. Over all due date distribution factors, the average difference between the Genetic Algorithm and the PI using EDD as the initial sequence is 17.5%, and between the GA and the PI using WSPT as the initial sequence is 57.82%. The difference is calculated as: % difference = (PI − GA) / PI × 100%.
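The % difference metric used in Figure 6.4 and in the text is a one-liner; positive values mean the Genetic Algorithm found a lower (better) average Q than the Pairwise Interchange:

```python
def pct_difference(pi_q, ga_q):
    """% difference = (PI - GA) / PI * 100, as defined above."""
    return (pi_q - ga_q) / pi_q * 100.0
```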
6.3 Experiment: Comparison of the Genetic Algorithm Using Different Recombination Strategies

Figure 6.5, Figure 6.6, and Figure 6.7 show the comparison of the Genetic Algorithm using different Recombination strategies. Three Recombination strategies are compared in this thesis: Q-crossover, Uniform crossover, and PMX crossover. The processes of these three crossover functions are described in sections 6.0 and 6.1. The three Recombination strategies are investigated through a large number of simulations. The same experiment model used previously is applied here: 20 jobs per problem and a due date distribution factor between 0 and 200. For each job sequence, the Genetic Algorithm is applied with the same Selection and Mutation strategies but different Recombination strategies. The results are studied mainly from two perspectives: 1. How good a solution can the strategy provide? That is, which Recombination strategy gives the lowest Q value for the sequence. 2. How fast can the strategy reach an acceptable solution? That is, which Recombination strategy improves its solutions fastest as the run continues.
Figure 6.5 shows the average Q value generated by the Genetic Algorithm using the three different Recombination procedures with 1000 generations. The results indicate that when the due date distribution factor is small (ddd factor < 100), the three Genetic Algorithms give the same or very nearly the same solutions; the Genetic Algorithm using Uniform crossover or PMX crossover is a little better than the one using Q-crossover, providing a slightly lower Q value. When the due date distribution factor is between 0 and 80, the Genetic Algorithm using PMX crossover averaged 0.61% lower than the Genetic Algorithm using Q-crossover; when the factor is between 0 and 96, the Genetic Algorithm using Uniform crossover averaged 1.49% lower. However, when the due date distribution factor becomes larger (ddd factor > 100), the Genetic Algorithm using Q-crossover works well and is much better than the other two. When the due date distribution factor is between 81 and 200, the Genetic Algorithm using PMX crossover averaged 24.86% greater than the Genetic Algorithm using Q-crossover; when the factor is between 97 and 200, the Genetic Algorithm using Uniform crossover averaged 11.48% greater.

Figure 6.6 and Figure 6.7 show the average Q value generated by the three Genetic Algorithms with 1500 and 5000 generations. The results indicate that with the
number of generations growing, the solutions provided by the three Genetic Algorithms become much closer. With enough generations (5000), the three Genetic Algorithms provide the same or very nearly the same solutions at all due date distribution factors. In Figure 6.7, the difference in average Q value between the Genetic Algorithms is hard to find, especially when ddd factor > 100; this difference is clear when there are only 1000 generations (Figure 6.5). After 5000 generations, the average difference between the Genetic Algorithm using Q-crossover and the one using Uniform crossover is 0.24%, and between the Q-crossover and PMX-crossover versions is 0.57%.
[Figure 6.5 chart: average Q value versus due date distribution factor for GAs with 1000 generations using Q-crossover, Uniform operator, and PMX crossover.]

Figure 6.5: Average Q of the Genetic Algorithms. The Genetic Algorithm uses different recombination strategies: Q-crossover, Uniform operator, and PMX crossover, stopped after 1000 generations. For each problem, there are 20 randomly generated jobs; for each job, the processing time is randomly generated between 1 and 10. The due date distribution factor is between 0 and 200; for each ddd factor, there are 100 randomly generated problems. When the due date distribution factor is between 0 and 80, the Genetic Algorithm using PMX crossover averaged 0.61% lower than the Genetic Algorithm using Q-crossover; between 0 and 96, the Genetic Algorithm using Uniform crossover averaged 1.49% lower. However, when the due date distribution factor is between 81 and 200, the Genetic Algorithm using PMX crossover averaged 24.86% greater than the Genetic Algorithm using Q-crossover, and between 97 and 200, the Genetic Algorithm using Uniform crossover averaged 11.48% greater.
Figure 6.6: Average Q value of the Genetic Algorithms. The Genetic Algorithm uses different recombination strategies: Q-crossover, Uniform operator, and PMX crossover, stopped after 1500 generations. For each problem, there are 20 randomly generated jobs; for each job, the processing time is randomly generated between 1 and 10. The due date distribution factor is between 0 and 200; for each ddd factor, there are 100 randomly generated problems. The difference between the Genetic Algorithms is much smaller than the difference in Figure 6.5.
[Figure 6.7 chart: average Q value versus due date distribution factor for GAs with 5000 generations using Q-crossover, Uniform operator, and PMX crossover.]

Figure 6.7: Average Q value of the Genetic Algorithms. The Genetic Algorithm uses different recombination strategies: Q-crossover, Uniform operator, and PMX crossover, stopped after 5000 generations. For each problem, there are 20 randomly generated jobs; for each job, the processing time is randomly generated between 1 and 10. The due date distribution factor is between 0 and 200; for each ddd factor, there are 100 randomly generated problems. After 5000 generations, the results provided by the three Genetic Algorithms are very close: the average difference between the Genetic Algorithm using Q-crossover and the one using Uniform crossover is 0.24%, and between the Q-crossover and PMX-crossover versions is 0.57%.
Figure 6.8, Figure 6.9, Figure 6.10, Figure 6.11, and Figure 6.12 show the performance of the Genetic Algorithms using different Recombination procedures on specific cases as the number of generations grows. These figures are based on the study of a large amount of data and represent the average relationship between solutions and the number of generations in the experiments. Figure 6.8 and Figure 6.9 show the results when the due date distribution factor is 0 and 50. The results show that when the due date distribution factor is very small, Q-crossover does not work very well: it needs more generations to produce a better solution, and the improvement is more gradual than for the other two. Uniform crossover is better than Q-crossover when all due dates are very tight on the time line; it provides a better solution more quickly, and the improvement is faster. Uniform crossover also works a little better than the PMX crossover. The performance of the PMX crossover is close to that of the Q-crossover when the due date distribution factor is small; in particular, when all due dates are very tight (ddd factor < 30), it works a little better than the Q-crossover. That is, the improvement of the PMX crossover is always faster than the Q-crossover when ddd factor < 30.
[Figure 6.8 chart: Q value (roughly 1080–1140) versus generations (up to 20000) for a specific case with ddd factor = 0, N = 20.]

Figure 6.8: The relationship between final solutions and the number of generations. The Genetic Algorithm uses different recombination strategies: Q-crossover, Uniform operator, and PMX crossover, stopped after 20000 generations. For this specific problem, there are 20 randomly generated jobs; for each job, the processing time is randomly generated between 1 and 10. The due date distribution factor is 0. Uniform crossover is better than the Q-crossover when all due dates are 0; it provides a better solution more quickly than the other two crossovers, and the improvement is fast.
[Figure 6.9 chart: Q value (roughly 340–440) versus generations (up to 20000) for a specific case with ddd factor = 50, N = 20.]

Figure 6.9: The relationship between final solutions and the number of generations. The Genetic Algorithm uses different recombination strategies: Q-crossover, Uniform operator, and PMX crossover, stopped after 20000 generations. For this specific problem, there are 20 randomly generated jobs; for each job, the processing time is randomly generated between 1 and 10. The due date distribution factor is 50. As in Figure 6.8, the Uniform crossover performs better when the due date distribution factor is small. However, the improvement of the PMX crossover is always faster than the Q-crossover when ddd factor < 30.
Figure 6.10 shows the results when the due date distribution factor is 100. The performance of Q-crossover has improved greatly: it gives a better solution much more quickly. Although Uniform crossover still reaches a lower Q value with fewer generations than Q-crossover, Q-crossover provides lower Q values when the number of generations is very small, such as fewer than 500. The PMX crossover becomes worse at this due date distribution factor, requiring a few more generations to get a better solution.

Figure 6.11 and Figure 6.12 show the results when the due date distribution factor is 150 and 200. The results show that Q-crossover is significantly better when all due dates are spread out enough on the time line: it gives a better solution with a very small number of generations. Uniform crossover and PMX crossover become much worse, requiring many more generations than Q-crossover to achieve an acceptable solution. In addition, PMX crossover is always worse than the Uniform crossover. The experiments show that as the due dates spread over a wider range, the performance of Q-crossover improves much faster, while the performance of Uniform crossover and PMX crossover both fall behind and become much worse. Figure 6.13 shows the comparison of the three Genetic Algorithms with only 50 generations. The results also indicate that when the due date distribution factor is large, Q-crossover always
can give a better solution than PMX crossover and Uniform crossover when there are only a small number of generations; moreover, PMX crossover is even worse than the Uniform crossover. So when all due dates are tight on the time line, Uniform crossover and PMX crossover give a much better solution with a smaller number of generations, where Q-crossover needs more generations to get the same solution. However, when all due dates are spread out enough over the time line, Q-crossover gives a much better solution with a very small number of generations; on the contrary, Uniform crossover and PMX crossover take much longer to reach the same solutions. Nevertheless, it must be emphasized that with enough generations, Q-crossover, Uniform crossover, and PMX crossover all give the same or very nearly the same solutions, and the solutions are all very good.
[Figure 6.10 chart: Q value (roughly 160–340) versus generations (up to 20000) for a specific case with ddd factor = 100, N = 20.]

Figure 6.10: The relationship between final solutions and the number of generations. The Genetic Algorithm uses different recombination strategies: Q-crossover, Uniform operator, and PMX crossover, stopped after 20000 generations. For this specific problem, there are 20 randomly generated jobs; for each job, the processing time is randomly generated between 1 and 10. The due date distribution factor is 100. The performance of the Q-crossover has greatly improved; it gives a better solution much more quickly. However, Uniform crossover and PMX crossover still reach lower Q values with fewer generations than Q-crossover.
[Figure 6.11 chart: Q value (roughly 120–270) versus generations (up to 20000) for a specific case with ddd factor = 150, N = 20.]

Figure 6.11: The relationship between final solutions and the number of generations. The Genetic Algorithm uses different recombination strategies: Q-crossover, Uniform operator, and PMX crossover, stopped after 20000 generations. For this specific problem, there are 20 randomly generated jobs; for each job, the processing time is randomly generated between 1 and 10. The due date distribution factor is 150. When the due date distribution factor is large, the Q-crossover is significantly better: it provides good solutions with many fewer generations than the other two crossovers.
[Figure 6.12 chart: Q value (roughly 50–400) versus generations (up to 20000) for a specific case with ddd factor = 200, N = 20, comparing Q-crossover, Uniform operator, and PMX crossover.]
[Figure 6.13 chart: average Q value versus due date distribution factor for GAs with 50 generations using Q-crossover, Uniform operator, and PMX crossover.]

Figure 6.13: Average Q of the Genetic Algorithms. The Genetic Algorithm uses different recombination strategies: Q-crossover, Uniform operator, and PMX crossover, stopped after 50 generations. For each problem, there are 20 randomly generated jobs; for each job, the processing time is randomly generated between 1 and 10. The due date distribution factor is between 0 and 200; for each ddd factor, there are 100 randomly generated problems. When there are only a small number of generations, Q-crossover always performs much better than the Uniform crossover and PMX crossover; especially when the due date distribution factor is large, the Q-crossover provides much better solutions.
6.4 Experiment: Comparison of Genetic Algorithm and Optimal Solutions

The Genetic Algorithm is also tested on problems of small size to compare with the optimal solutions. The same experiment model used for the Pairwise Interchange on small problems is applied: N = 8; the due date distribution factor is between 0 and 100; processing times are randomly generated between 1 and 10; and for each due date distribution factor, 50 problems are randomly generated. Each Genetic Algorithm is run for 6000 generations. Figure 6.14 shows the comparison of the Genetic Algorithm and the optimal solutions. Table 2.0 shows the number of cases in which the Genetic Algorithm found the optimal solutions.

Heuristics                      Cases where the GA found     Number of
                                the optimal solution         test cases
GA using Q-crossover            5047                         5050
GA using Uniform crossover      5047                         5050
GA using PMX crossover          5046                         5050

Table 2.0

The results are impressive. With enough generations, all the Genetic Algorithms found the optimal solutions in most cases. In only three cases, Q-crossover averaged 2.7% greater than the optimal, and Uniform crossover averaged 2.8% greater than the optimal. In only four cases, PMX crossover averaged 5.7% greater than the optimal solutions; in one of those four cases, PMX crossover is 18.7% greater than the optimal. As the results show, the quality of the Genetic Algorithm is excellent for the smaller problem size given enough generations.
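For N = 8 the optimum in Table 2.0 can be found by exhaustive enumeration, since 8! = 40,320 sequences are cheap to evaluate. `schedule_cost` is an assumed evaluator for a sequence's Q value:

```python
from itertools import permutations

def optimal_q(jobs, schedule_cost):
    """Exhaustive optimum (sketch): try every sequence, keep lowest Q."""
    return min(schedule_cost(list(seq)) for seq in permutations(jobs))
```

Comparing each heuristic's Q against this baseline gives the percentages reported above.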
[Figure 6.14 chart: average Q value versus due date distribution factor (0–100) for the GA using Q-crossover, Uniform operator, and PMX crossover, compared with the optimal solution.]

Figure 6.14: Average Q value of the Genetic Algorithm and the optimal solution. For each problem, 8 jobs are randomly generated; for each job, the processing time is generated between 1 and 10. The due date distribution factor is between 0 and 100; for each due date distribution factor, 50 problems are randomly generated and the average Q is taken. The GA uses three different recombination strategies: Q-crossover, Uniform operator, and PMX crossover. With enough generations, the three genetic algorithms found nearly all the optimal solutions of the 5050 problems. In only 3 cases, the GA using Q-crossover averaged 2.7% greater than the optimal solutions, and the GA using Uniform crossover averaged 2.8% greater than the optimal. In only 4 cases, the GA using PMX crossover averaged 5.7% greater than the optimal solutions.
7. Conclusions

In this thesis, an overview of the Early/Tardy scheduling problem is given, and two heuristics are examined in depth: the Pairwise Interchange and the Genetic Algorithm.

The Pairwise Interchange heuristic is simple and fast and can create acceptable solutions. However, its performance depends strongly on the initial sequence. Among the initial sequences compared, the Pairwise Interchange using EDD as the initial sequence consistently provides better solutions, especially when the due dates are spread out along the time line. The Pairwise Interchange using WSPT as the initial sequence performs much better when the due dates are tightly grouped and small. The study also indicates that even at the minimum problem size of three jobs, the Pairwise Interchange may fail to give the optimal solution because of the initial sequence.

The Genetic Algorithm always takes much longer, but provides much better solutions than the Pairwise Interchange. Three parts of the Genetic Algorithm affect solution quality: Selection, Recombination, and Mutation, and Recombination affects the quality more than the other two. Three Recombination strategies are introduced: Q crossover, PMX crossover, and Uniform crossover. The experiments indicate that with enough generations the three strategies give better solutions than the Pairwise Interchange, and their solutions are very close to one another. Therefore, in measuring the performance of the Genetic Algorithm, greater focus should be placed on the speed of improvement: how fast can the Genetic Algorithm produce an acceptable solution? Q crossover reaches good solutions much more quickly when the due date distribution factor is large; PMX crossover and Uniform crossover produce good solutions quickly when the due date distribution factor is very small.

The choice between these heuristics for a given problem depends on the following model. Let L represent the performance improvement per time increment, CostQ the cost saved per performance increment, and CostT the cost incurred per time increment. The model is

    F = (L × CostQ) / CostT

If F is much larger than 1, performance matters more than time, and the Genetic Algorithm should be preferred. If F is much smaller, especially smaller than 1, time matters more, which suggests the Pairwise Interchange should be chosen.
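The selection model above can be made concrete with a small sketch, under the reconstructed reading F = (L × CostQ) / CostT. The function name, the threshold at 1, and all numbers below are illustrative assumptions, not values from the thesis.

```python
def choose_heuristic(L, cost_q, cost_t):
    """Pick a heuristic from F = (L * cost_q) / cost_t.

    L      : solution-quality gain per extra unit of running time
    cost_q : money saved per unit of quality gained
    cost_t : money charged per unit of running time
    """
    f = (L * cost_q) / cost_t
    # F >> 1: quality dominates, so the slower Genetic Algorithm pays off.
    # F < 1 : time dominates, so the fast Pairwise Interchange is preferred.
    choice = "genetic algorithm" if f > 1 else "pairwise interchange"
    return choice, f

print(choose_heuristic(L=0.5, cost_q=10.0, cost_t=2.0))   # F = 2.5 → GA
print(choose_heuristic(L=0.1, cost_q=2.0, cost_t=1.0))    # F = 0.2 → PI
```

In practice L would be estimated empirically, for example from the slope of the GA's best-cost curve over its generations.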