Value and variable selection for a distributed job shop scheduler implementation

Material Information

Value and variable selection for a distributed job shop scheduler implementation
McSwain, Jonathan
Publication Date:
Physical Description:
iv, 89 leaves : illustrations ; 29 cm


Subjects / Keywords:
Production scheduling -- Data processing ( lcsh )
Production management ( lcsh )
Production management ( fast )
Production scheduling -- Data processing ( fast )
bibliography ( marcgt )
theses ( marcgt )
non-fiction ( marcgt )


Includes bibliographical references (leaves 87-89).
General Note:
Submitted in partial fulfillment of the requirements for the degree, Master of Science, Computer Science.
General Note:
Department of Computer Science and Engineering
Statement of Responsibility:
by Jonathan McSwain.

Record Information

Source Institution:
University of Colorado Denver
Holding Location:
Auraria Library
Rights Management:
All applicable rights reserved by the source institution and holding location.
Resource Identifier:
35326435 ( OCLC )
LD1190.E52 1995m .M37 ( lcc )

Full Text
B.S., James Madison University, 1979
A thesis submitted to the
University of Colorado at Denver
in partial fulfillment
of the requirements for the degree of
Master of Science
Computer Science
Jonathan McSwain

This thesis for the Master of Science
degree by
Jonathan McSwain
has been approved
Thomas Altman
Boris Stilman

McSwain, Jonathan Ross (M.S., Computer Science)
Value and Variable Selection for a Distributed Job
Shop Scheduler Implementation
Thesis directed by Professor William J. Wolfe
Heuristic techniques that generate quality schedules for
the job shop domain require the use of complete knowledge
of the problem to be scheduled. This complete knowledge
describes in detail the overall scheduling environment and
all the constraints that must be satisfied by all
components of a successful schedule. With this detailed
knowledge, it is possible to perform efficient search for a
quality scheduling solution.
A distributed job shop scheduler is an attractive
architecture, allowing the use of parallel
execution, the segmentation of scheduling knowledge into an
object-oriented software architecture, and the "virtual
enterprise" business model. All these qualities are a
benefit of decentralizing knowledge of the scheduling problem.
A trade-off is required for a distributed implementation of
a heuristic scheduler. The cost of communicating detailed
descriptions of the complete scheduling problem across a
network to distributed scheduling nodes must be weighed
against the schedule quality gained.

This thesis explores the conflicting domains of centralized
and distributed heuristic scheduling. An implementation is
described and its performance is compared to benchmarks.
This abstract accurately represents the contents of the
candidate's thesis. I recommend its publication.

1. Introduction
1.1 Background
1.1.1 The Job Shop
1.1.2 Constraints and Quality
1.2 Motivation
1.2.1 Constraint-Based Heuristic Schedulers
1.2.2 Distributed Scheduling
1.3 Thesis Outline
2. Constraint-Based Schedulers and Constraint Logic Programming
2.1 Constraint Based Heuristic Scheduler History
2.2 Micro-Boss Features
2.2.1 Resource Demand Profiles
2.2.2 Consistency Enforcement
2.2.3 Start Time Selection
2.3 Constraint Logic Programming
2.3.1 Constraint Logic Programming Introduction
2.3.2 CLP As Applied to Scheduling
3. Distributed Schedulers
3.1 CORTES
3.2 DAS
3.3 Distributed Scheduler Analysis
4. Solution
4.1 DHS Overview
4.2 Strategic Layer
4.3 Strategic and Operational Layer Interaction
4.4 Operational Layer
5. Results
5.1 DHS Results
5.2 DHS and Micro-Boss
5.3 DHS Traffic Requirements
5.4 Summary
A. Debug Output
B. Code

1. Introduction
This section will give an overview of the problem domain
and will present an outline of this thesis.
1.1 Background
1.1.1 The Job Shop
The job shop scheduling problem is in essence the problem
of selecting among alternate production routings and
assigning tasks to resources in time (Zweben & Fox, 1994).
Jobs are introduced into a manufacturing environment. Each
job is unique from the other jobs, and has a unique
production routing through the job shop. For each step of
the job routing (for each task of a job), resources are
required. Resources can be in the form of machines,
operators, tools, or anything else required to perform the
task. One goal of the job shop scheduler is to assign
tasks to specific times on specific resources. While
making these assignments, numerous constraints must be
satisfied. These constraints represent such diverse
elements as enterprise policy, production routing
precedence, and shop floor events.
Resource capacities are finite. Jobs have a due date at
which all of a job's tasks must be completed, and a release
date prior to which no work can occur on a job's tasks.
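The entities just described can be sketched as simple data structures. The following Python sketch is illustrative only; the names (`Task`, `Job`) and the discrete time model are assumptions, not part of the thesis implementation:

```python
from dataclasses import dataclass, field

@dataclass
class Task:
    """One step of a job's production routing."""
    name: str
    resource: str   # machine, operator, tool, etc. required by this task
    duration: int   # processing time in discrete time units

@dataclass
class Job:
    """A job with a fixed routing, a release date, and a due date."""
    name: str
    release: int    # no work on the job's tasks may start before this time
    due: int        # all of the job's tasks must finish by this time
    tasks: list = field(default_factory=list)   # tasks in precedence order

# A tiny two-job problem: each job visits two resources in a fixed order.
j1 = Job("J1", release=0, due=10,
         tasks=[Task("J1.1", "mill", 3), Task("J1.2", "lathe", 2)])
j2 = Job("J2", release=2, due=12,
         tasks=[Task("J2.1", "lathe", 4), Task("J2.2", "mill", 1)])
```

The scheduler's output would then be an assignment of each task to a start time on its resource, subject to the precedence, release, due date, and capacity constraints described above.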
The job shop environment is considered to be the simplest
non-trivial manufacturing scheduling environment (Johnston
& Minton, 1994). Even when reduced for academic studies,
the problem is generally considered to be NP-Hard in
complexity (Garey & Johnson, 1979). The job shop problems
solved by this thesis project and other academic scheduling
research projects such as Micro-Boss (Sadeh, 1994) severely
limit the number and types of constraints modeled, thereby
allowing the researcher to focus on key aspects, at the
expense of real-world applicability.

1.1.2 Constraints and Quality
Schedule quality is judged in terms of its ability to
satisfy schedule constraints. Constraints can be hard, in
that a job that cannot be scheduled while satisfying hard
constraints is not put on the schedule (Sadeh, 1994).
Constraints can be soft, in that the scheduler works to
satisfy the soft constraints; however, if it cannot, the job
is scheduled anyway, as best as possible (Sadeh, 1994). Soft
constraints are often referred to as scheduling preferences.
Hard scheduling constraints often used for judging the
quality of academic schedulers are routing precedence, work
release date and resource capacity. Soft scheduling
constraints often used are the sum of the work time
required after a job's due date, or lateness, and the sum of
the work in progress (WIP), where WIP is the sum of a
scheduled job's production and idle time between its
release and due date (Sadeh, 1994). These values are
frequently used to benchmark academic scheduler performance
(Sadeh & Fox, 1991).
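Both metrics can be computed directly from a completed schedule. The sketch below uses a hypothetical representation (a dict mapping task names to (start, end) intervals) and one plausible reading of the WIP definition above; it is not the benchmark code itself:

```python
def tardy_work(schedule, jobs):
    """Sum of processing time falling after each job's due date.
    `schedule` maps task name -> (start, end); `jobs` is a list of
    dicts with 'release', 'due', and 'tasks' keys (hypothetical)."""
    total = 0
    for job in jobs:
        for t in job["tasks"]:
            start, end = schedule[t]
            # only the portion of the task running past the due date counts
            total += max(0, end - max(start, job["due"]))
    return total

def wip(schedule, jobs):
    """Time from each job's release to its last task's completion,
    i.e. production plus idle time (one plausible reading)."""
    return sum(max(schedule[t][1] for t in job["tasks"]) - job["release"]
               for job in jobs)

jobs = [{"release": 0, "due": 8, "tasks": ["a", "b"]}]
schedule = {"a": (0, 3), "b": (5, 10)}   # task b runs 2 units past the due date
print(tardy_work(schedule, jobs))  # -> 2
print(wip(schedule, jobs))         # -> 10
```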
Scheduling constraints used for real-world manufacturing
applications can be much more diverse, and are often
conflicting (Hadavi, 1992; Fox, 1994). Many industrial
applications make no attempt to optimize values such as the
amount of tardy work or WIP. They are merely happy to find
a schedule that satisfies all the enterprise's hard
constraints. Examples of these constraints include
selecting among a disjunction of heterogeneous qualified
resources, scheduling tasks so that they don't overlap
union-mandated breaks (LePape, 1994), control of resource
set ups, batch splitting and combining operations (Hadavi,
1994), or, in a shipyard application, the phase of the moon
(for tides).
1.2 Motivation
This section will outline the concepts that influenced this
thesis project. The two primary influences are constraint
based scheduling and distributed scheduler architectures.

1.2.1 Constraint-Based Heuristic Schedulers
One of the distinguishing features of recent heuristic
schedulers is their emphasis on a complete view of the
scheduling problem. By "complete" it is meant that as the
scheduler makes assignments, the totality of the scheduling
situation is weighed. When assigning a task to a resource
and a time, the scheduler weighs the impact on all
unscheduled tasks and on all resources at all times. This
has also been called "periscoping" (Smith, 1994), and it can
thought of as a form of search look-ahead (Dechter, 1992).
Prior to Eliyahu Goldratt's OPT manufacturing scheduling
system, job-shop schedulers were characterized by use of
priority dispatch rules of the greedy type. Since these
rules lacked any representation of a complete view of the
job-shop operation, they were ineffective at scheduling the
resources most in demand in a particular enterprise (Sadeh,
1991). These resources are called the bottleneck
resources. Simulation schedulers are similarly based on
dispatch rules, with results
judged not even approximately optimal (Morton & Pentico,
1993), and are considered passive scheduling tools rather
than active: simulation schedulers help you see how
a manufacturing enterprise operates, but the schedules
they produce are not useful for production (Hadavi, 1994).
Simulation schedulers can be characterized by always
concentrating on the situation at time t, as opposed to a
complete, overall picture.
The most recent constraint-based heuristic AI schedulers
use Constraint Logic Programming (CLP) techniques to create
complete detailed models of the scheduling problem, in its
entirety, allowing the scheduler to see the overall state
of the problem, and the exact result of each scheduling
assignment. Constraint-based heuristic schedulers have
been shown to out-perform dispatch methods (Sadeh, 1994).
Sadeh's Micro-Boss represents the current state of the art
in constraint-based heuristic scheduling, and will be
examined in relation to CLP concepts.

1.2.2 Distributed Scheduling
The rationale for a distributed architecture includes the
idea that any non-trivial enterprise is distributed in the
geographical, logical, or temporal sense (Burke & Prosser,
1994). Geographic or logical distribution of the enterprise
prompts the scheduling function to be likewise distributed.
The constraints of an enterprise are best decomposed so
that available, local expertise can be applied (Burke &
Prosser, 1994). Also, it is becoming more typical for many
separate companies to form temporary alliances (virtual
enterprises) to allow production of complex products such
as automobiles.
Given distributed or virtual enterprises, there is a need
for distributed scheduling. The goal for a distributed
scheduler might be for an enterprise to represent its
local scheduling constraints with enough detail to schedule
and manage its operation, while representing the
activities of distributed or sub-contracted organizations
in a much more general way. The reasons for this generality
are the amount of bandwidth required to make
distributed real-time scheduling reactions, the need to
allow distributed entities local control and knowledge, and
proprietary concerns regarding a sub-contractor's operations.
The nodes of a distributed or virtual enterprise, working
on the same jobs or sharing the same resources, must
communicate and coordinate with each other to produce a
cohesive schedule of operations. Relevant research on the
distributed scheduling problem includes DAS (Burke &
Prosser, 1994), ReDS (Hadavi, 1994; Hadavi et al., 1992),
EXPLICIT (Gomes, Tate & Thomas, 1994), and CORTES (Sycara
et al., 1991).
In conclusion, constraint based schedulers such as Micro-
Boss perform very well on test suites by virtue of the
amount of knowledge used for a complete representation of
the scheduling problem. Every detail of every task,
resource and constraint is known. However the elements
comprising any non-trivial manufacturing operation are

likely to be separated by geography and/or proprietary
concerns, preventing use of complete knowledge in making a
schedule. Therefore the techniques used to make the best
scheduling decisions and the reality of distributed
manufacturing are at odds with each other.
This thesis provides an answer to the question: what is a
good combination of detailed scheduling knowledge and
abstracted scheduling knowledge that will allow the
generation of good schedules by distributed enterprises?
1.3 Thesis Outline
In chapter two current CSP solutions to the job shop
problem will be reviewed. The emphasis will be on the
complete problem information required by the scheduler to
efficiently make schedules that are good according to
selected metrics. An association will be made between
constraint-based heuristic schedulers and CLP languages to
provide a basic model for these schedulers.
In chapter three current distributed schedulers will be
surveyed. In particular, the architectures used by
distributed schedulers to segment scheduling knowledge to
distributed nodes will be outlined.
In chapter four a mixture of constraint-based heuristic job
shop scheduling techniques and distributed scheduling
abstractions will be presented. These techniques will
provide a compromise between the high-fidelity knowledge
representation of a constraint-based heuristic job shop
scheduler and the knowledge segmentation required for a
distributed scheduler.
In chapter five results of an implementation of this
distributed CSP scheduler will be presented. The scheduler
was run against a suite of test problems defined by Norman
Sadeh (Sadeh, ftp). A comparison and explanation of the
results will be presented.

2. Constraint-Based Schedulers and Constraint Logic Programming
This chapter will describe constraint-based heuristic
scheduling techniques. These techniques have been
demonstrated to produce better results than any dispatch
method. This chapter will also view constraint scheduling
techniques in the light of CLP theory and practice. From
this we will derive a description of the significant
features of a constraint based scheduler. These features
will form the basis for Distributed Heuristic Scheduler
(DHS), the distributed constraint based scheduler described
by this thesis.
2.1 Constraint Based Heuristic Scheduler History
Norman Sadeh's Micro-Boss produced better results
scheduling a suite of job shop problems than did a suite of
dispatch methods and the OPIS scheduler (Sadeh, 1994). It
is possible to view the Micro-Boss approach as the
culmination of a series of CMU schedulers. As such it is
illuminating to briefly consider some history.
Eliyahu Goldratt and Jeff Cox recognized that within any
manufacturing organization operating at capacity there are,
at any point in time, resources that are oversubscribed.
These resources form a bottleneck to production. They
constrain the productivity of the enterprise, as products
cannot be manufactured faster than the bottlenecks can
operate. The bottlenecks of an enterprise are the primary
constraints to increasing production (Goldratt & Cox,
1984).
About the same time Goldratt published his theory of how
shifting bottlenecks ultimately dictate manufacturing
performance, the first CMU heuristic AI scheduler ISIS was
completed and documented. ISIS was the first attempt to
formulate and operationalize the view of scheduling as a
heuristic, constraint-directed activity (Smith, 1994).

Analysis of schedules produced by ISIS version 1 revealed
that the scheduler's poor performance was due to poor
utilization of bottleneck resources (Fox, 1994). Once a
bottleneck resource was recognized, and the scheduling
search process redefined to identify and schedule the
bottleneck first, the total number of jobs scheduled tardy
went from 65 of 85 jobs using ISIS-1 to 14 of 85 jobs using
ISIS-2 (Fox, 1994).
This was important because the focus of scheduling was
shifted away from FIFO scheduling, scheduling one job from
start to finish and scheduling each job's tasks in
precedence order, to identifying critical operations within
a job and satisfying the constraints associated with those
critical operations first (Fox, 1994). Instead of
selecting tasks to schedule in a FIFO order, ISIS-2 stepped
back and viewed the complete problem, identified a
bottleneck resource, made assignments for that resource,
and only then scheduled the remaining tasks in their FIFO
What was documented by Goldratt and not implemented in ISIS
is that constraints are not static. Decisions to optimize
the operation of a bottleneck resource can lead to the
creation of new bottlenecks within the enterprise (Goldratt
& Cox, 1984; Smith, 1994). Stephen Smith's OPIS recognized
this. OPIS monitored and analyzed the characteristics of
the evolving structure of the solution constraints (or
bottlenecks) to control schedule generation (Smith, 1994).
After assignments were made to the most critical resource,
the demand for all other resources was gauged. If
scheduling decisions created new critically constrained
resources, the problem solving focus was shifted to
satisfying those constraints. After this process, the
remaining assignments were made on a job by job FIFO basis.
Norman Sadeh took the approach of OPIS and refined it for
Micro-Boss. OPIS was limited to scheduling all of a
bottleneck, or at least a large portion of it, before
shifting focus to a new bottleneck (Sadeh, 1994). Micro-
Boss considers the structure of the scheduling problem

before each scheduling assignment. If the most constrained
resource of the scheduling problem changed with each
scheduling assignment (and Micro-Boss detected the change),
Micro-Boss would shift its attention to solving that part
of the problem.
2.2 Micro-Boss Features
This section will examine three major features of Micro-
Boss used by this thesis project. They are resource demand
profiles, consistency enforcement, and start time selection.
2.2.1 Resource Demand Profiles
Micro-Boss uses resource demand profiles to indicate which
task needs to be selected next for scheduling. These
demand maps are constructed at the outset of scheduling and
thereafter refreshed periodically. At any time during the
scheduling process the maps of resource demand can be
viewed to see the most contended for resource, and the task
that most requires the resource. This task is selected to
be scheduled next (Sadeh, 1994).
To construct resource demand profiles, each unscheduled
task creates an individual demand profile. This is an
array of values, one array element for each time increment
between the schedule's origin and latest completion time.
For each time increment between a task's earliest start time
and latest possible completion time, the corresponding
array element is incremented with a value weighted by
inventory and tardiness costs. These individual demands
are added together by resource to form resource demand profiles.
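The construction just described might be sketched as follows. The field names and the uniform per-task weighting are illustrative simplifications of the inventory- and tardiness-weighted demand Sadeh describes:

```python
def demand_profiles(tasks, horizon):
    """Aggregate resource demand profiles, a simplified sketch of the
    Micro-Boss idea. Each unscheduled task spreads a unit of weighted
    demand uniformly over the window [est, lct) in which it could run.
    `tasks` is a list of dicts with 'resource', 'est', 'lct', and
    'weight' keys (hypothetical field names)."""
    profiles = {}
    for t in tasks:
        prof = profiles.setdefault(t["resource"], [0.0] * horizon)
        window = t["lct"] - t["est"]
        for i in range(t["est"], t["lct"]):
            # a tighter window concentrates demand into fewer increments
            prof[i] += t["weight"] / window
    return profiles

tasks = [
    {"resource": "mill", "est": 0, "lct": 4, "weight": 1.0},
    {"resource": "mill", "est": 2, "lct": 4, "weight": 1.0},
]
profiles = demand_profiles(tasks, horizon=6)
print(profiles["mill"])  # -> [0.25, 0.25, 0.75, 0.75, 0.0, 0.0]
```

The peak of a resource's profile identifies the most contended-for time region, and the task contributing most to that peak is the candidate to schedule next.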
2.2.2 Consistency Enforcement
Note that only unscheduled tasks are used to construct
resource demand profiles. Once a task is assigned to a
start time, that time becomes unavailable for subsequent
scheduling. This unavailability is modeled using

assignment consistency enforcement.
Each task is assigned an earliest and latest possible start
time. Every time a task is assigned a resource and time
value, all unassigned tasks are examined to see if their
resource and potential start time assignments collide with
those of the scheduled task. If they do, the potential
start time assignments of the unscheduled tasks are
modified to exclude the period just assigned to the
scheduled task (Sadeh, 1994). This modified set of start
times is then represented on the demand profile.
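This form of consistency enforcement is essentially forward checking on start-time domains. A minimal sketch, with hypothetical field names:

```python
def enforce_consistency(assigned, unscheduled):
    """Forward-checking sketch: after an assignment of
    (resource, start, end), remove colliding start times from every
    unscheduled task needing the same resource."""
    res, a_start, a_end = assigned
    for t in unscheduled:
        if t["resource"] != res:
            continue
        # keep only start times whose execution would not overlap
        # the period just assigned to the scheduled task
        t["starts"] = [s for s in t["starts"]
                       if s + t["duration"] <= a_start or s >= a_end]
    return unscheduled

unscheduled = [{"resource": "mill", "duration": 2, "starts": [0, 1, 2, 3, 4]}]
enforce_consistency(("mill", 2, 4), unscheduled)
print(unscheduled[0]["starts"])  # -> [0, 4]
```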
Using consistency enforcing techniques, the search space of
the scheduling solution is kept as small as possible. A
particular class of search dead-ends, trying to schedule a
task into an already assigned slot, is prevented. And
perhaps most significantly, the effects of scheduling
decisions are seen as soon as possible. If a series of
scheduling decisions limit an unscheduled task to only one
possible assignment, this limitation can be determined
before that last assignment is taken by another task.
Propagation of the effects of scheduling decisions to all
other unassigned tasks allows the most constrained task to
be identified, hopefully so that its constraints can be
satisfied. The effects of scheduling decisions are
represented on the complete problem. The system can
periscope above the most immediate operation to see
critical constraints anywhere within the scheduling problem.
2.2.3 Start Time Selection
Micro-Boss uses look-ahead heuristics to assign a task to a
specific start time. The goal of this look-ahead
assessment is to produce schedules optimized towards
minimum tardiness, minimum inventory cost, and to prevent
the generation of dead-end search states, i.e. tasks with
no possible schedule assignments (Sadeh & Fox, 1991).
Many tasks, after being selected to be scheduled next, have
a variety of possible start times that will satisfy all

hard constraints and not violate the job due date. Given a
domain of possible start times, Micro-Boss selects the
start time that best satisfies the quality measures for the
schedule: in this case, tardiness and WIP. A start time is
selected that will not constrain subsequent tasks in the
job to be scheduled late, and will minimize the task's
inventory cost or WIP (Sadeh, 1994). A final criterion for
start time selection is a measure of the survivability of
the start time assignment. Survivability measures an
assignment's chances of not being undone by subsequent
scheduling decisions. The survivability measure attempts
to predict the outcome of the search, without actually
performing the search (Sadeh & Fox, 1994).
Using these measures, the best scheduling assignment is
discovered for each task.
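A greatly simplified stand-in for this value selection step: from a task's remaining start times, choose the one minimizing a weighted tardiness-plus-inventory cost. The cost model and weights here are illustrative assumptions, not Micro-Boss's actual formulas, and survivability is omitted:

```python
def select_start(starts, duration, due, tardy_w=1.0, inv_w=0.1):
    """Pick the start time minimizing a weighted tardiness plus
    inventory cost (a toy look-ahead value selection)."""
    def cost(s):
        end = s + duration
        tardy = max(0, end - due)   # work pushed past the due date
        held = max(0, due - end)    # finishing early means holding inventory
        return tardy_w * tardy + inv_w * held
    return min(starts, key=cost)

# Starting at 2 finishes just before the due date: no tardiness,
# minimal inventory holding time.
print(select_start([0, 2, 5], duration=3, due=6))  # -> 2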
2.3 Constraint Logic Programming
This section will present selected CLP features that
parallel constraint-based heuristic scheduler methods.
2.3.1 Constraint Logic Programming Introduction
CLP is a refinement of the logic programming paradigm.
Logic programming is best represented by the language
Prolog. Logic programming provides a means for the
separation of computer programs into competence and
performance components (also called logic and control
components). The competence component describes factual
information and the relationships between facts, while the
performance component handles the strategy and control of
manipulating and combining the facts and their
relationships (Fruhwirth et al., 1993). The competence
component describes the problem to be solved, the
performance component describes how to find a solution.
Logic programming is considered to be limited by its search
strategy. Depth-first search results in a generate and
test approach (Fruhwirth et al., 1993). Each variable in
a logic program has a domain of values. Not every value in

the variable domain is consistent with a solution. As each
variable is selected and instantiated with a value from
its domain, it must be tested to see if this instantiation
is consistent. If the instantiation is inconsistent with
the stated facts, that instantiation is abandoned, the
system backtracks, and another instantiation is tried.
CLP has benefited from taking logic programming and
enhancing it with constraint solving mechanisms.
Constraint techniques function to improve the search and
backtracking behavior of a logic programming system by
pruning the search tree in an a priori way rather than
using constraints as a source of tests for "generate and
test" search. Inconsistent solutions are detected as soon
as possible (LePape, 1994). A key feature is the
integration of a deterministic process (constraint
propagation) with a non-deterministic process (search).
The CLP paradigm is sometimes called "constrain and
generate" (Fruhwirth et. al., 1993).
The data items that describe the problem domain are called
variables. The word "variable" is somewhat overloaded by
CLP relative to procedural programming languages. Instead
of referring to an item of storage, a CLP variable has
associated with it a domain of values. The bounds of these
values can be set at variable declaration time:
constrainedInteger x(1, 10);
This pseudocode describes a CLP variable x, of type
integer, with a domain of [1,2,...,9,10] (LePape, 1994).
When a variable is set to a single value from its domain it
is said to be "instantiated". An instantiation is said to
be "legal" or "locally consistent" if it satisfies all
relevant constraints imposed on it. A legal instantiation
of all the variables of the problem forms a solution
(Dechter, 1992). Variables can be instantiated during
variable declaration, the competence phase, or the control phase.
The domain of a variable can be reduced by constraint

propagation. While executing the competence phase of a CLP
program, the domains of variables can be constrained
against single values using unary constraints:
x < 8
This indicates that the domain of x is now [1,2,...,6,7].
Variables can also be constrained against other variables
using composite constraints:
constrainedInteger x(1,10), y(0,100);
y < x;
This would cause the domain of y to be [0,1,...,8,9]. Each
time a relationship is stated, the domains of related
variables are examined and modified as needed. Each time
the domain of one of the constrained variables is modified,
the domains of all related constrained variables are
examined and possibly modified to maintain consistency.
Constraint propagation of this type fits a data-driven model of computation.
If the competence phase of the program causes a variable's
domain to become empty, this indicates that no solution
exists for the problem. Note that this failure occurs
prior to any search.
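The behavior described above, domain reduction at constraint-posting time and failure detection before any search, can be sketched in Python. The `Var` and `post_less_than` names are hypothetical; real CLP systems propagate far more generally:

```python
class Var:
    """A CLP-style variable with an explicit finite domain, mirroring
    the constrainedInteger pseudocode (a minimal sketch)."""
    def __init__(self, lo, hi):
        self.domain = set(range(lo, hi + 1))

def post_less_than(y, x):
    """Post y < x and propagate: y must lie below max(x), and x above
    min(y). Returns False if a domain empties (no solution exists)."""
    y.domain = {v for v in y.domain if v < max(x.domain)}
    x.domain = {v for v in x.domain if v > min(y.domain)} if y.domain else set()
    return bool(y.domain) and bool(x.domain)

x, y = Var(1, 10), Var(0, 100)
post_less_than(y, x)
print(sorted(y.domain))  # prints [0, 1, 2, 3, 4, 5, 6, 7, 8, 9]

# An inconsistent problem fails at posting time, before any search:
print(post_less_than(Var(5, 10), Var(1, 3)))  # -> False
```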
In some limited problems a solution can be determined from
the competence phase without any search needing to be
performed. In this instance constraint propagation is
complete. Due to problem complexity, constraint
propagation is often incomplete, requiring search to find a
solution (LePape, 1994).
Search occurs while executing the control portion of a CLP
program. This procedure involves the selection of a
variable from the problem domain, and is appropriately
called variable selection. A single value is selected from
the domain of the variable according to a value selection
criterion. This value selection instantiates the variable.
Each such instantiation forms a distinct state of the

search. Upon each variable instantiation, further
constraint propagation occurs to propagate the implications
of the instantiation. Should this propagation or
subsequent search cause one or more variables to have empty
domains, a dead-end has been found. Backtracking restores
the search state and another value from the variable's
domain is selected. If no other untried value exists, the
search fails as having no solution.
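The cycle of variable selection, value selection, propagation, and chronological backtracking can be sketched as a small constrain-and-generate solver. This is a toy illustration under assumed data structures, not a production CLP engine:

```python
def propagate(domains, constraints):
    """Remove values with no supporting value in a related variable's
    domain; return False as soon as any domain empties (a dead end)."""
    changed = True
    while changed:
        changed = False
        for a, b, pred in constraints:
            for va in set(domains[a]):
                if not any(pred(va, vb) for vb in domains[b]):
                    domains[a].discard(va)
                    changed = True
            if not domains[a]:
                return False
    return True

def search(domains, constraints):
    """Constrain-and-generate: smallest-domain variable selection,
    values tried in ascending order, propagation after each choice,
    chronological backtracking on dead ends."""
    if not propagate(domains, constraints):
        return None                                   # backtrack
    if all(len(d) == 1 for d in domains.values()):
        return {v: min(d) for v, d in domains.items()}
    var = min((v for v in domains if len(domains[v]) > 1),
              key=lambda v: len(domains[v]))          # variable selection
    for val in sorted(domains[var]):                  # value selection
        trial = {v: set(d) for v, d in domains.items()}
        trial[var] = {val}                            # instantiate
        result = search(trial, constraints)
        if result is not None:
            return result
    return None                                       # no untried value left

# x < y < z over {1,2,3}; each constraint is posted in both directions.
cons = [("x", "y", lambda a, b: a < b), ("y", "x", lambda a, b: a > b),
        ("y", "z", lambda a, b: a < b), ("z", "y", lambda a, b: a > b)]
doms = {"x": {1, 2, 3}, "y": {1, 2, 3}, "z": {1, 2, 3}}
print(search(doms, cons))  # -> {'x': 1, 'y': 2, 'z': 3}
```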
During search, look-ahead techniques can be used to aid
value selection. These techniques attempt to assess the
success of instantiating a variable with a specific value.
The measure of success can be boolean or can be a value
returned from a suitability function. If the results are
disadvantageous, another value from the domain can be
instantiated and tested (Dechter, 1992).
2.3.2 CLP As Applied to Scheduling
To summarize, some primary features of constraint logic
programming are:
a priori constraint definition and propagation,
search using variable and value heuristics,
propagation of constraints during search,
search look ahead.
These features map exceedingly well to the features of Micro-Boss.
Constraint definition and propagation prior to search plays
a small but crucial part in Micro-Boss, in the form of
propagating earliest and latest start times amongst all
tasks of a job, based on the release and due date of the
job. See section 4.2 for these formulae. The result is
that each task has a domain of start times that do not

violate job precedence constraints.
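This pre-search propagation amounts to a head/tail computation over each job's routing. A sketch, assuming fixed task durations and a simple chain routing (not the thesis's actual formulae):

```python
def start_time_windows(release, due, durations):
    """Propagate a job's release and due date through its routing:
    earliest start = release plus all predecessor durations;
    latest start = due date minus this task's and all successor
    durations. `durations` lists task processing times in
    precedence order."""
    est, lst = [], []
    head = release
    for d in durations:
        est.append(head)
        head += d                  # predecessors push the earliest start out
    tail = due
    for d in reversed(durations):
        tail -= d                  # successors pull the latest start in
        lst.append(tail)
    return est, list(reversed(lst))

est, lst = start_time_windows(release=0, due=10, durations=[3, 2, 4])
print(est)  # -> [0, 3, 5]
print(lst)  # -> [1, 4, 6]
```

Each task's start-time domain is then the interval [est, lst]; any value outside it would violate a precedence, release, or due date constraint.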
While propagation of these values sounds somewhat trivial
now, it apparently was something of a revelation to the
ISIS-2 programmers (Fox, 1994), and facilitated the
response of the scheduling engine to bottleneck resources.
In CLP systems, a priori constraint definition is termed a
"consistency technique". Noting that the full range of
each variable's domain can be potentially huge, consistency
techniques serve to effectively rule out many inconsistent
domain values at a very early stage of the solution
procedure. When search is started, only consistent domain
values will be proposed (Fruhwirth et al., 1993).
In larger real world systems, a priori constraint
propagation is even more important. For example, possible
task execution times can be further reduced by propagation
of enterprise calendars. If a task requires an attendant
throughout its execution, this constraint precludes start
times such that the task will coincide with a required work
break such as lunch or afternoon break (LePape, 1994).
Real world constraints and a priori constraint propagation
function to reduce solution search space prior to search.
Search in Micro-Boss is an iterative process of variable
selection, and then value selection from the selected
variable's domain. Heuristics specialized for the Micro-
Boss job shop environment are used to make these
selections. These domain specific heuristics are designed
to minimize the amount of backtracking and to produce
schedules that rank highly on Micro-Boss's schedule scoring
criteria (Sadeh & Fox, 1991).
In CLP the process of variable and value selection during
search is also directly addressed (Dechter, 1992). The
process of variable selection is identified as the problem
aspect on which to make the next assumption about the
solution, and the process of value selection is identified
as the assumption to be made. General guidelines for
variable selection are to choose the variable that has the
largest number of constraints associated with it, or the
variable with the smallest domain of values (Fruhwirth
et al., 1993). Value selection guidelines include
selecting the value that least constrains the rest of the
solution (Dechter, 1992).
These strategies provide a nutshell description of Micro-
Boss's variable and value selection, with consistency
enforcement and resource demand profiles providing the
knowledge required to decide which variable is the most
constrained. Variable selection is accomplished by
identifying the most constrained resource or bottleneck.
Value selection is accomplished using weighted cost
measures for tardiness and inventory and using assignment
survivability measures.
As variables are instantiated during the search process,
there is an opportunity for further consistency enforcement
through constraint propagation. Each value selection is an
assumption about the solution. The assumptions allow
pruning inconsistent solutions from the search space, as
such inconsistent solutions are created by the value
selection process (Fruhwirth et al., 1993).
This process is mirrored in Micro-Boss's update of resource
demand profiles and consistency enforcement. Updates of
these profiles occur every n variable selections (due to
their computational complexity). Inconsistent assignments
are removed from the domain of unscheduled task start times
with each scheduling assignment (Sadeh, 1994).
Another feature of Micro-Boss and CLP, and in fact logic
programming, is search backtracking. Constraint
propagation before search does not rule out all
inconsistent instantiations. If the selection of a value
later produces an inconsistent solution, the search
procedure must later undo that selection and make a new one
(Fruhwirth et al., 1993). CLP most often mentions
chronological backtracking, where the most recent decision
is undone. Micro-Boss again has a more complicated and
more effective scheme (Sadeh, Sycara & Xiong, 1994).
One final commonality between CLP and Micro-Boss is search
look-ahead (Dechter, 1992; Sadeh, 1994). This is a
restatement of value selection criteria. Before a value is
assigned to a selected variable, a number of values in the
variable's domain are gauged to see which one will give the
best results. The value that gives the best expected
results is the value instantiated.
The concept of search look-ahead seems somewhat vague in
the CLP scheme. Any sort of heuristic applied to value or
variable selection could be viewed as look-ahead, given
that those heuristics serve to identify and resolve the
most difficult aspects of search.
(It should be noted that documenting Micro-Boss's
grounding in CLP theory is not an attempt to diminish
Micro-Boss's sophistication. Space and focus do not permit
the review of the many complex, specialized and efficient
algorithms used by Micro-Boss to implement CLP concepts.)

3. Distributed Schedulers
This chapter will review the characteristics of some
distributed manufacturing scheduling applications. The
goal is to identify crucial features for a distributed
scheduling architecture.
First, a distributed version of Micro-Boss called CORTES
will be reviewed. Next, the Distributed Asynchronous
Scheduler (DAS) will be reviewed. Finally, a brief analysis
will be presented.
CORTES researchers claim that although factory scheduling
has been studied in depth by operations research and
artificial intelligence researchers, at the time of this
paper's 1991 publication almost no one had researched
distributed scheduling. They state that the need for a
distributed scheduler is due to the inherently distributed
nature of a manufacturing enterprise. This
decentralization requires the corresponding
decentralization of scheduling and control mechanisms
(Sycara et al., 1991).
CORTES was primarily an academic exercise, in that no known
publications discuss actually using this system to schedule
an enterprise. To its credit, CORTES avoided some easy
reductions in problem scope that would have made the
architecture perhaps less applicable to real-world
problems. For example, in CORTES each scheduling agent has
the exclusive responsibility of scheduling a set of jobs.
While it would simplify the problem for each agent to also
have exclusive control over a set of resources, where all
other agents must coordinate with agent-1 to use agent-1's
resources, CORTES allows all agents equal access to all
resources. Agents schedule as peers. To this author, this
arrangement seems closer to real-world situations.
Enterprise policy frequently disallows a group from having

exclusive control over a set of resources. The most
prominent example of purely hierarchic organizations would
be military operations.
Similarly, it would be convenient to have the sum total of
an enterprise's scheduling information centrally
aggregated. CORTES takes a cellular automata approach by
restricting each agent to only the scheduling information
required for that agent's jobs. CORTES has no central data
repository. CORTES attempts to build a globally optimal
schedule using agents that can only make locally optimal
decisions (Sycara et al., 1991).
To a considerable degree the scheduling techniques of
CORTES are the same as those of Micro-Boss. CORTES also
maps well to the features of CLP. In order to distribute
Micro-Boss as CORTES, several concerns had to be addressed.
The information shared by CORTES distributed scheduling
agents has lower fidelity than the information found in a
single scheduling agent. Communication of scheduling
information is limited by communications bandwidth. The
information communicated requires summarization and
abstraction (Sycara et al., 1991).
The information that is communicated tends to become
obsolete very fast. If scheduling information is shared
too infrequently, agents make bad or inconsistent
scheduling decisions. If it is shared too frequently,
again communications bandwidth becomes a problem. CORTES's
strategy is, first of all, to minimize backtracking.
Micro-Boss's variable and value selection heuristics serve
this strategy. Also, when backtracking must be performed,
CORTES uses a backtracking algorithm developed for
uncertain, multi-agent scheduling that is much more
efficient than chronological backtracking. Briefly, CORTES
limits its agents to undoing only their own reservations,
not those of any other agent. All tasks responsible for the
inconsistent schedule are identified and removed from the
schedule prior to any new values being selected for any
of them.

A heuristic is used to guide an agent's decision of when to
share scheduling information. Briefly, the heuristic says
an agent should distribute its scheduling profiles whenever
there has been some "significant" change, and can request
updates from other agents when many reservations have been
made on resources that are perceived as bottlenecks for all
agents.
The system was run with a small configuration, two agents
scheduling between 40 and 100 activities. The most
significant result for this thesis is that when scheduling
information was shared as often as possible, the two agent
configuration was capable of scheduling as efficiently as a
one agent test configuration. As the frequency of
information exchange was lowered, the two-agent
configuration made worse schedules, with more backtracking
(Sycara et al., 1991).
3.2 DAS
While CORTES was structured as a network of cooperating
peer schedulers, Burke and Prosser's DAS (for Distributed
Asynchronous Scheduler) is composed of a hierarchy of
autonomous intelligent agents arranged in three tiers.
This structure is also used by EXPLICIT (Gomes, Tate &
Thomas, 1994). At the top tier is a strategic agent, in
the middle are tactical agents, and at the bottom are
operational agents (Burke & Prosser, 1994). Initially DAS
attempted to implement a peer democracy between its
agents. Like CORTES, each agent had equal access to all
resources. Experience showed that this would not work for
DAS. Furthermore, Burke and Prosser demonstrate
theoretically that if two agents have equal access to a
resource, one or both of them will always be in a state of
ignorance about the real state of the resource's
commitments (Burke & Prosser, 1994).
DAS is termed "loosely coupled", in that the ratio of
communication to computation is low. This is considered a
bit of a paradox, given that scheduling is usually treated
as a tightly coupled problem: changes to one scheduling

assignment can have significant effects on many other
scheduling assignments. DAS handles this paradox by having
each agent exhibit both reactive and predictive behavior
(Burke & Prosser, 1994).
Taking the DAS hierarchy from the bottom, operational
agents represent single resources. Each operational agent
has a schedule for its resource, and each one represents
the specific knowledge that will best schedule that
resource. For example, one resource might be best
scheduled Just-In-Time (JIT), another might be best
scheduled earliest, and yet another might be able to offer
great savings through resource set-up optimizations.
A tactical agent has exclusive control over a group of
operational agents. As each task is assigned to a tactical
agent from the strategic agent, the tactical agent assigns
the task to an operational resource. If the assigned
operational resource is not able to perform the task within
the given time constraints, the tactical agent either
selects another operational agent in its domain, or if
none exist, returns the task to the strategic agent. Again,
the tactical agent has specific information on how best to
assign work to the operational agents under its control.
The strategic agent accepts new jobs. It then decides how
best to decompose the jobs with respect to scheduling. The
strategic agent might decide to schedule a whole job in
task precedence order, or it might assign tasks from a
group of jobs in arbitrary job and precedence order.
DAS is used in production at an aluminum sheet
manufacturer. An average scheduling load is 100 jobs each
with 4 steps.
DAS has many important features not found in the usual
academic research schedulers. One is the recognition that
not just one value and variable selection strategy will
always work best. An enterprise is a conglomeration of
diverse scheduling strategies. Instead of representing
these strategies in a monolithic fashion, scheduling

information is best distributed to the scheduling agents
that require a certain strategy. This is similar to a
basic object-oriented design goal: represent knowledge
where it is used.
Although DAS appears to be more of a classic AI contract
net, with agents negotiating with each other and logging
historical events from which to "learn", it also conforms
to the CLP methodology. Strategic agents perform variable
selection duties by deciding which job or task to assign
first to tactical agents. All agents perform value
selection. The strategic agent decides which tactical
agent, or which group of resources, a task is to be
assigned to. A tactical agent makes a similar assignment,
selecting from a group of operational agents to decide
which resource will perform the task. Operational agents
perform value selection by assigning a start time to the
task.
DAS addresses the subject of constraint propagation. One
of its modules is the Constraint Maintenance System (CMS).
CMS is responsible for constraint propagation during
search. The version of DAS that is the subject of (Burke &
Prosser, 1994) propagates constraints exhaustively
throughout the entire plan. While ensuring that
constraints are propagated correctly to all affected nodes,
Burke and Prosser express concern that this redundant
strategy may not scale well to larger problems. A
more recent version of DAS with a more efficient CMS has
been coded that propagates constraints only until a node
unaffected by the new constraint is found. This saves on
communications bandwidth, however it runs the risk of
failing to detect inconsistencies through incomplete
constraint propagation, and adds the cost of additional
computation to test constraints during propagation.
3.3 Distributed Scheduler Analysis
CORTES and DAS stand in sharp contrast to each other. Some
of their most significant differences are:

Model of cooperation,
Data sharing,
Data hiding,
Installation base.
First of all, CORTES uses a democratic, peer-to-peer model,
while DAS uses a hierarchic model. In CORTES, the effort
of individual agents to achieve a local optimal schedule is
expected to result in a globally optimal schedule. In DAS,
commands come from the top of the hierarchy and errors flow
back up. It should be again noted that DAS attempted a
peer-to-peer model and failed (Burke & Prosser, 1994;
Sycara et al., 1991).
CORTES relies on all scheduling agents having a copy of all
other agents' demand profiles for resources that are shared
to or from the agent. DAS requires that the agents within
the scheduler share information hierarchically only, and
most often only in event of a scheduling failure.
There seems to be very little room for data hiding in
CORTES. Agents can make scheduling decisions for resources
that are not their prime responsibility. If an enterprise
had resource specific scheduling strategies, those
strategies would have to be shared with every agent wishing
to share another's resource. DAS on the other hand is
structured specifically to allow individual resources
specialized knowledge (Burke & Prosser, 1994; Sycara et
al., 1991).
Finally, CORTES was an academic exercise, while DAS is in
use scheduling a sheet aluminum factory.

4. Solution
This section will describe a distributed constraint based
job shop scheduler implementation, DHS. First will be a
description of the design goals.
In general, the goals for DHS are to create a scheduler
that is based on CLP techniques, along with selected
features from the distribution models of DAS and CORTES.
Specifically, the goal is an architecture that:
Is based on CLP concepts such as variable and value
selection strategies and constraint propagation, both
prior to and during search,
Allows for diverse variable and value selection
strategies, based on the specifics of the agent
performing the selection,
Supports distributed scheduling,
Conserves communications bandwidth by avoiding contract
net style negotiation.
These goals came about based on a combination of academic
and real world concerns. Academically, there is an
opportunity to produce excellent job shop schedulers using
selected academic techniques. On the real world side, a
system such as Micro-Boss, while providing some of the best
scheduling solutions with respect to two measurements,
ignores many real world concerns such as the fact that some
resources have other value selection concerns beyond
meeting due dates. Along with the need for diverse
scheduling strategies is the need for distributed
scheduling. That said, DHS bears far more resemblance to
an academic scheduler than to a manufacturing scheduler.
Part of the reason for this is that DHS was developed as
an academic project. Also, to gauge DHS performance a set
of test cases was required. The test cases selected

determined much of DHS's capabilities.
DHS was written in 1569 lines of C++. Test cases were
taken from Norman Sadeh's Micro-Boss test suite. This is a
set of 60 scheduling experiments, each with 5 resources and
10 jobs of 5 tasks each. Each job has a linear process
routing, and each task has only one designated required
resource. The test cases vary in the range of job due
dates and release dates, and in the number of resources
that are heavily subscribed bottleneck resources. Some
test cases have one bottleneck, others have two (Sadeh &
Fox, 1991).
Jobs were rated on their tardiness and inventory cost. For
each implementation of DHS, the scheduler was run against
the test suite and aggregate values were calculated:
average tardiness, average inventory, and the standard
deviation for both of these values. See section 5.1 for
final results of these values.
Detailed evaluation of DHS performance was accomplished
using Gantt charts for the five resources and detailed text
print statements generated during the scheduler's operation.
See Appendix B for debug information from DHS. Typically,
with each version of the scheduler, the test cases with the
worst results would be examined in detail.
4.1 DHS Overview
DHS most closely resembles DAS in overall architecture.
Instead of a three tiered architecture, two layers were
built, a strategic and an operational layer. Since the
test cases use only five resources and tasks are given only
one suitable resource assignment, DHS has no need for a
layer subsetting these five resources into more manageable
groups.
The strategic layer performs the variable selection
activity, and performs constraint propagation prior to and
during search. The operational level performs value
selection, in the form of assigning a start time to a task.

Communication between the two layers consists of passing a
task to a resource with its scheduling constraints: in
this case, earliest and latest start times, and task
duration. After scheduling, the operational level returns,
essentially, the start time assigned to the task. This
value also represents failure of the operational level,
should there be no available start time for the task within
its constraints.
Note that while DHS was designed to test distributed,
asynchronous scheduling, it in fact was implemented as a
single process on a single machine. As such, the knowledge
representation and knowledge used by different components
that were designed to be distributed were carefully
monitored. Knowledge created and used on one "node" was
not used on another. Use of object oriented design was
very helpful in this. Although one never knows the issues
of a design until one has implemented it, the goal of this
project, to design a scheduler that communicates relatively
little data and still makes good schedules, is served by
this implementation.
4.2 Strategic Layer
This section will describe the mechanisms used to implement
a strategic layer for DHS.
The DHS strategic layer was allocated two primary
functions: to perform constraint propagation before and
during search, and to perform variable selection in the
form of selecting which tasks to schedule first.
The work to schedule in DHS is comprised of a series of
jobs. Each job is comprised of a series of n tasks t_i.
Each task t_i within a job has a single predecessor task
t_{i-1} and successor task t_{i+1} (except for the first
and last tasks of the series). Each task has a duration
d_i.
Constraint propagation in DHS is not very complex. Before
search begins, earliest start times (est) and latest start
times (lst) are computed for each task within a job, based
on the job's work release date and due date:
t_i est = release time + Σ_{j=0}^{i-1} d_j
t_i lst = due time − Σ_{j=i}^{n-1} d_j
As search is performed and tasks are scheduled, the est and
lst values are maintained by the strategic layer. That is,
prior to search, t_{i+1} est's relationship to t_i est is:
t_{i+1} est = t_i est + d_i
Once t_i has been scheduled, it is given an actual start
time (ast) between its est and lst. t_{i+1} est's
relationship to t_i ast becomes:
t_{i+1} est = t_i ast + d_i
Unless t_i ast is the same as t_i est, the value of t_{i+1}
est has just gotten larger, and the domain of t_{i+1} start
times smaller. In fact, if t_i is scheduled JIT, such that
t_i ast is the same as t_i lst, suddenly t_{i+1} has only
one possible start time, t_{i+1} lst.
By establishing and maintaining these constraints, the
domains of the variables that make up the scheduling CLP
search space are maintained.
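The est/lst computation and its maintenance during search can be sketched in C++ (the language DHS was written in). The Task structure and function names below are illustrative assumptions for this thesis's description, not code taken from the DHS source:

```cpp
#include <cassert>
#include <cstddef>
#include <vector>

// Illustrative sketch of DHS-style consistency enforcement for one job.
struct Task {
    int duration;
    int est;   // earliest start time
    int lst;   // latest start time
    int ast;   // actual start time, -1 while unscheduled
};

// Before search: est sweeps forward from the release date,
// lst sweeps backward from the due date.
void initWindows(std::vector<Task>& job, int release, int due) {
    int t = release;
    for (auto& task : job) { task.est = t; t += task.duration; task.ast = -1; }
    t = due;
    for (auto it = job.rbegin(); it != job.rend(); ++it) {
        t -= it->duration;
        it->lst = t;
    }
}

// During search: scheduling task i at ast tightens the est of its successors.
void propagate(std::vector<Task>& job, std::size_t i, int ast) {
    job[i].ast = ast;
    int t = ast + job[i].duration;
    for (std::size_t j = i + 1; j < job.size(); ++j) {
        if (job[j].est < t) job[j].est = t;
        t = job[j].est + job[j].duration;
    }
}
```

As in the text, scheduling a task later than its est shrinks the start-time domains of every successor in the same job.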
Maintaining the tasks' relationships to the other tasks
within their respective jobs is not enough. Each task has
a relationship with tasks within other jobs that require
the same resource at the same time. Some mechanism is
required to track this relationship.
Tracking inter-job task relationships in DHS is similar to
Micro-Boss demand profiles (Sadeh, 1994). For each
resource, an array of floating point variables (dp) is
allocated. Each element represents a time slot between the
earliest job release time and the latest job completion
date. Each task j represents its individual demand for a
resource on dp using a count of the number of feasible
start times (fs) it has left:
dp_i = n (1 / fs_j) WSPT
where i ranges between t_j est and t_j lst + d_j, fs_j is
the number of time values between est and lst where a task
could be successfully scheduled (bounded to be never less
than 1), and n is the number of possible t_j executions
that could occur on that dp_i. See figure 4.2.1. Additionally,
this value was weighted by its weighted shortest processing
time (WSPT) (Vepsalainen and Morton, 1987), where WSPT is:
v_k / d_kj
where v_k is the weighted tardiness cost of job k, and d_kj
is the duration of task j of job k. The value of WSPT was
bounded using trial and error experiments to a minimum of
0.9 and a maximum of 1.1 of the basic 1 / fs_j value. It
served to decrease average lateness by about 20% when run
on all experiments. See table 4.2. See section 5.1 for a
description of performance criteria.
            Avg.      Avg.       Std. Dev.  Std. Dev.
            Lateness  Inventory  Lateness   Inventory  Backtracks  States
WSPT         7.98      119.07      9.15       10.58       4.28      54.28
No WSPT      9.72      117.95     11.42       10.74       4.07      54.07
Table 4.2: DHS test results with and without WSPT resource
demand weighting
If a task has one or zero start times, its value
represented on the demand curve is one. Like other
consistency enforcing techniques, these demand profiles
were set once prior to search, and then corrected as each
task was scheduled.
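A task's contribution to a resource demand profile, as described above, can be sketched in C++. This is a simplified illustration, assuming the WSPT weighting factor is omitted (i.e., treated as 1); the function name is hypothetical:

```cpp
#include <algorithm>
#include <cassert>
#include <cmath>
#include <vector>

// Illustrative sketch: a task with earliest start est, latest start lst and
// duration d has fs = lst - est + 1 feasible start times (bounded below by
// 1); each feasible execution adds 1/fs demand to every time slot it would
// occupy. Slot demand therefore equals n(1/fs), where n is the number of
// feasible executions overlapping that slot. WSPT weighting is omitted.
void addDemand(std::vector<double>& profile, int est, int lst, int d) {
    int fs = std::max(1, lst - est + 1);
    for (int start = est; start <= lst; ++start)
        for (int t = start; t < start + d; ++t)
            profile[t] += 1.0 / fs;
}
```

For a task with 5 feasible starts, the slots that every feasible execution would occupy accumulate a demand of 1.0, while slots touched by only one execution carry 0.2, matching the shape sketched in figure 4.2.1.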

[Figure 4.2.1: five overlapping candidate start windows,
Start 1 through Start 5, for a single task.]
Figure 4.2.1: Calculation of n. The value of dp_i is the
same for each start; however, after accumulating the values
for a task with 5 start times, the demand map shows that
the times between 4 and 9, with a value of 5(dp_i), are
more in demand by this task than the end times.
Figures 4.2.2(a) and 4.2.2(b) show demand curves for a
resource that is not a bottleneck for the job. This
represents the aggregated demand of 10 tasks on this
resource. In 4.2.2(a), at state 0, no jobs have been
scheduled and the demand is smooth, and never more than
around .85 in value.
[Figure 4.2.2(a): Non-Bottleneck Demand, State 0]

[Figure 4.2.2(b): Non-Bottleneck Demand, State 15]
Figure 4.2.2: (a) and (b) show the demand curves for a
non-bottleneck resource. (a) shows the demand prior to the
scheduling of any tasks, (b) after 15 scheduling
assignments.
Figure 4.2.2(b) shows the same resource after 15 scheduling
assignments have occurred, two of which were made to
this resource. One assignment was scheduled to start at
time 0 with a duration of 11, the other to start at time
11, with a duration of 6. The increase in demand near time
106, compared to 4.2.2(a), can be attributed to the
remaining unscheduled tasks' demand increasing as they are
squeezed into a smaller set of possible start times by the
two scheduled tasks.
Figures 4.2.3(a) and 4.2.3(b) show demand curves for a
resource that is a bottleneck for the job. These graphs
also represent the aggregated demand of 10 tasks on this
resource. Note that the aggregated demand value of
approximately one is higher and sustained longer than in
figure 4.2.2(a), a non-bottleneck resource. This is due to
the increased amount of work required from the bottleneck.

Figure 4.2.3: (a) and (b) show the demand curves for a
bottleneck resource. (a) shows the demand prior to the
scheduling of any tasks, (b) after 15 scheduling
assignments.
Figure 4.2.3(b) shows the demand for the bottleneck after
15 scheduling assignments have occurred, of which 7 were
made to this resource. This demonstrates DHS's preference
for scheduling the bottleneck resource first.
Resource demand profiles and fs, the count of feasible
start times, provide a job-shop specific implementation of
the CLP variable selection heuristic, "pick the variable
with the smallest domain first" (Fruhwirth et al., 1993).

This is the function of variable selection: of all
unscheduled tasks in all jobs, select the task that has the
greatest need to be scheduled next. In this case, the task
with the greatest need is identified two ways: the task
that has the fewest feasible start times due to assignments
of tasks within the same job, and the task with the fewest
choices for assignment due to a highly contended resource.
The variable selection algorithm within DHS has two tiers.
First, every time a task is scheduled, the remaining
unscheduled tasks i within the job are examined to see if
their domain of start times falls below a threshold:
fs_i < d_i
If so, task_i is immediately scheduled. This heuristic was
based on the observation that a JIT assignment constrains
all subsequent tasks to only one start time.
Once the queue of tasks selected by the first tier are
scheduled, the second variable selection tier is used.
This tier is based on the resource demand profiles. Each
profile maintains the value and index of its greatest
demand. The task making the greatest contribution to the
highest demand time slot of the highest demanded resource
gets scheduled next.
As each task t_i is scheduled, its representation on the
demand profile is changed to a value of one, from t_i ast
to t_i ast + d_i. This tends to cause any tasks that have a
use for that time period to be scheduled sooner.
As task scheduling activities cause changes in unscheduled
tasks' est and lst values, their demand values are replaced
on the profiles to accurately maintain a representation of
their demand.
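The two-tier variable selection described above can be sketched in C++. This is a simplified illustration: tier 2's "greatest contribution to the highest demand slot of the highest demanded resource" is summarized here as a precomputed per-task figure, and the Cand structure and function name are hypothetical:

```cpp
#include <cassert>
#include <cstddef>
#include <vector>

// Illustrative sketch of DHS's two-tier variable selection.
// Tier 1: a task whose count of feasible start times has fallen below its
//         duration (fs < d) is urgent and is selected immediately.
// Tier 2: otherwise select the task contributing the most demand at the
//         peak slot of the most contended resource.
struct Cand {
    int fs;            // feasible start times remaining
    int d;             // duration
    double peakDemand; // this task's contribution at the busiest slot
};

// Returns the index of the task to schedule next, or -1 if none remain.
int selectVariable(const std::vector<Cand>& tasks) {
    int best = -1;
    double bestDemand = -1.0;
    for (std::size_t i = 0; i < tasks.size(); ++i) {
        if (tasks[i].fs < tasks[i].d)          // tier 1: urgent task wins
            return static_cast<int>(i);
        if (tasks[i].peakDemand > bestDemand) { // tier 2: most contended
            bestDemand = tasks[i].peakDemand;
            best = static_cast<int>(i);
        }
    }
    return best;
}
```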
Note that another part of strategic layer operation for
more realistic schedulers would be for variable selection
to pick a resource for each task. A real-world scheduler

should be able to choose from a disjunction of resources
with differing capabilities. Since the test suite
sidesteps this issue by assigning only one suitable
resource to each task, this part of the problem was not
modeled by DHS.
DHS lacks sophisticated backtracking. Backtracking only
occurs within a single job, and DHS considers due dates as
soft rather than hard constraints. Precedence and resource
assignment are hard constraints for DHS. If there is no
start time assignment that allows satisfaction of
precedence, all scheduled tasks of the job are unscheduled.
If there is no start time assignment that does not create
a late job completion, DHS goes ahead and schedules jobs to
be late. DHS has no way to unschedule and reschedule
arbitrary jobs based on another job's lateness.
4.3 Strategic and Operational Layer Interaction
The stated design goals for a distributed scheduler are
realized in the interaction between DHS's operational and
strategic layer. Many opportunities to produce excellent
schedules are lost due to the design of distribution and
limited bandwidth between DHS's operational and strategic
layers.
When a task is selected to be scheduled by the strategic
layer, the task object itself is communicated to the
appropriate operational layer. The task object contains
est and 1st values, among others.
After the task is assigned a start date by the operational
layer, the assigned start time is communicated back to the
strategic layer. The strategic layer model of the task is
updated and the constraints are propagated to other tasks
within that job, to the resource demand profile, and
indirectly to all unscheduled tasks that require that
resource.
When performing value selection in DHS, the operational
layers are at a disadvantage compared to Micro-Boss. In

Micro-Boss complex heuristics based on texture maps and
detailed knowledge of the overall scheduling problem are
used to select a start time that is most likely to not
cause lateness, or excessive inventory, or to require
backtracking. In DHS, this knowledge is unavailable to the
operational layers, as a design decision to preserve
bandwidth. In DHS the only overall scheduling information
that is communicated to the operational layer is a single
"suggested start value" created by the strategic layer
based on texture maps. If the operational layer cannot
schedule the task at its suggested start time, it must
blindly select another start time.
In this way communications bandwidth is conserved, the
strategic and operational layers are much more loosely
coupled in that the interface does not require complex
demand profile data to be exchanged, and of course schedule
quality as measured by tardiness and inventory cost
suffers.
4.4 Operational Layer
Each instance of the operational layer represents a
resource in DHS. The operational layer is responsible for
value selection: in this case, selecting a start time
value for tasks.
The process of start time selection is simple. First, the
suggested start time calculated by the strategic layer is
tried. Should this fail, ideally because there is no
feasible start time for the task, all start values between
the task's est and lst are tried. If there still is no
start time assigned, the operational layer attempts to
schedule the task at successively later start times.
Should the task be scheduled after its lst, it is up to the
strategic layer to propagate this event through the tasks
in the same job, and to unschedule any tasks in that task's
job that have already been scheduled.
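This try-the-hint, then-the-window, then-later policy can be sketched in C++. The busy-slot model of a resource and the function names are assumptions made for illustration, not details of the DHS implementation:

```cpp
#include <cassert>
#include <vector>

// Illustrative sketch of the operational layer's value selection.
// 'busy' marks the slots already reserved on this resource; the horizon is
// the end of the profile. Returns the chosen start, or -1 if none fits.
bool fits(const std::vector<bool>& busy, int start, int d) {
    if (start < 0 || start + d > static_cast<int>(busy.size())) return false;
    for (int t = start; t < start + d; ++t)
        if (busy[t]) return false;
    return true;
}

int selectStart(const std::vector<bool>& busy, int suggested,
                int est, int lst, int d) {
    if (fits(busy, suggested, d)) return suggested; // try the strategic hint
    for (int s = est; s <= lst; ++s)                // then the [est, lst] window
        if (fits(busy, s, d)) return s;
    int horizon = static_cast<int>(busy.size()) - d;
    for (int s = lst + 1; s <= horizon; ++s)        // then late (tardy) starts
        if (fits(busy, s, d)) return s;
    return -1;
}
```

A start later than lst corresponds to the case the text describes, where the strategic layer must propagate the lateness through the rest of the job.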
The rationale for the structure of DHS is to allow the
different operational layers to be distributed
computationally, and for each operational layer to
represent scheduling knowledge and goals unique to that
resource. As such, it is likely that the suggested start
time would be taken with a grain of salt, because the
strategic layer would not share the operational layer's
unique knowledge. Due to the test suite's reductions in
problem complexity, unique operational layer value
assignment strategies were not implemented or tested in
DHS.

5. Results
This section will present the results of DHS scheduling
performance. Basic performance will be measured in terms
of job tardiness and inventory costs. These measures will
be compared when possible to those of Micro-Boss (Sadeh,
1991). Calculated measures of network traffic required by
a distributed implementation of DHS will be presented.
5.1 DHS Results
This section will present details of DHS performance. DHS
was developed and benchmarked with a suite of scheduling
experiments (Sadeh, ftp). This suite used 6 experiment
groups. They varied on the distribution of release and due
dates, and on the number of resources in heavy demand, or
bottlenecks (Sadeh, 1991). The experiment groups are:
1. All release and due dates are the same, one bottleneck
resource,
2. All release and due dates are the same, two bottleneck
resources,
3. Release and due dates vary with a narrow distribution,
one bottleneck resource,
4. Release and due dates vary with a narrow distribution,
two bottleneck resources,
5. Release and due dates vary with a wide distribution, one
bottleneck resource,
6. Release and due dates vary with a wide distribution, two
bottleneck resources.
DHS performance was gauged based on:
Average lateness across each experimental group, and
standard deviation of each experiment's lateness cost

within its experimental group,
Average inventory cost across each experimental group,
and standard deviation of each experiment's inventory
cost within its experimental group,
Number of backtracks required.
Average lateness was calculated based on a job's due date
dd, its completion date c, and its weighted tardiness cost
v_j taken from the test data. For job j, lateness l_j is:
l_j = v_j × max(0, c_j − dd_j)
and as expected average lateness al is based on the number
of jobs scheduled js:
al = (Σ_j l_j) / js
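The lateness measure can be sketched directly in C++; the JobResult structure and function name are illustrative assumptions:

```cpp
#include <algorithm>
#include <cassert>
#include <vector>

// Illustrative sketch of the lateness measures above: per-job weighted
// lateness l_j = v_j * max(0, c_j - dd_j), averaged over all js jobs.
struct JobResult { double v; int c; int dd; }; // weight, completion, due date

double averageLateness(const std::vector<JobResult>& jobs) {
    double total = 0.0;
    for (const auto& j : jobs)
        total += j.v * std::max(0, j.c - j.dd);
    return total / jobs.size();
}
```

Note that an early completion contributes zero rather than a negative value, so earliness never offsets another job's tardiness.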
Figure 5.1 depicts the average lateness for DHS on the 6
experiment groups. The behavior of DHS is as expected,
considering that the experiment groups with 2 bottlenecks
(2, 4 and 6) should be more intricate to schedule, and when
using a scheduler lacking sophisticated backtracking, more
tightly constrained and more difficult to schedule without
lateness. The exception to this is groups 1 and 2, where
the one bottleneck group proved more difficult to schedule
with no lateness.

Figure 5.1: Average lateness of the 6 experiment groups
The standard deviation of scheduler lateness sdl for each
group g of experiments is calculated using al. For n
experiments within each group, each experiment with m jobs,
the sdl for a group is:
sdl_g = (Σ_m |l_m − al_g|) / n
Figure 5.2 displays the standard deviation of experiment
group lateness. It matches the topography of the average
lateness chart.
Figure 5.2: Standard deviation of lateness for each of the
6 program groups

Inventory cost represents the marginal holding costs and
raw material interest costs for jobs. It represents the
flowtime of the job, exclusive of the sum of the durations
of the tasks. It is desirable for this value to be as
close to 0 as possible. Each task has an inventory figure
associated with it. In the case of these experiments, only
the first task had an inventory figure and it was always 1.
This means that inventory cost was always equal to the
duration of the job plus the delay time between tasks.
Therefore inventory in for a job j with k tasks was:
in_j = c_j − ast_j − Σ_k d_k
Average inventory ai for a group g of jobs was calculated
as:
ai_g = (Σ_j in_j) / js
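The per-job inventory figure can likewise be sketched in C++; the function name and parameters are illustrative assumptions:

```cpp
#include <cassert>
#include <numeric>
#include <vector>

// Illustrative sketch of the inventory measure used here: job flowtime
// (completion minus first-task start) minus the total work content, i.e.
// the idle time the job spends waiting between tasks.
int inventoryCost(int completion, int firstStart,
                  const std::vector<int>& durations) {
    int work = std::accumulate(durations.begin(), durations.end(), 0);
    return completion - firstStart - work;
}
```

A job scheduled with no gaps between its tasks therefore incurs zero inventory cost.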
Figure 5.3 shows the average inventory costs for the 6
experiment groups. Inventory costs for the 1 bottleneck
groups are quite uniform, as are those for the 2 bottleneck
groups. Given that the average duration of 2 bottleneck
jobs was 45.3 and the average duration of 1 bottleneck jobs
was 40.34, the observed difference in average inventory of
around 15 can only be accounted for by DHS's difficulty in
efficiently handling the more complex problems.

Figure 5.3: Average inventory costs beyond those incurred
by job duration.
The standard deviation of scheduler inventory sda for each
group g of experiments is calculated using ai. For n
experiments within each group, each experiment with m jobs,
the sda for a group is:
sda_g = (Σ_n Σ_m Abs (in_m - ai_g)) / n
Figure 5.4 displays the sda values for each of the 6
program groups. With the exception of group 3, the one
bottleneck experiment groups were scheduled with more
consistent inventory costs than the two bottleneck groups.
This seems consistent with the greater difficulty DHS
has with the two bottleneck groups generally.

Figure 5.4: Standard deviation of experiment inventory
costs beyond those incurred by job duration.
The final measure of DHS performance was backtracking
behavior. In DHS, backtracking occurred only when the
existing schedule forced precedence constraints to be
broken. That is, if task i could not be scheduled such
that its finish time was less than the already scheduled
task i+1's start time, task i was scheduled at its best
possible time and task i+1 was removed from the schedule.
After constraint propagation to reflect the start time of
task i, task i+1 was rescheduled. This sequence constitutes
one instance of backtracking.
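The backtracking sequence just described can be sketched with simplified stand-in types (hypothetical; the real DHS Task class carries much more state):

```cpp
#include <cassert>

// Stand-in for the DHS Task class; only start time and duration matter here.
struct SimpleTask { int start; int duration; };

// Schedule task i at its best possible time; if its finish now overlaps
// the already scheduled task i+1, unschedule task i+1 and reschedule it
// after task i. Returns the number of backtracks caused (0 or 1).
int scheduleWithPrecedence(SimpleTask& ti, SimpleTask& tnext, int bestStart)
{
    ti.start = bestStart;
    if (ti.start + ti.duration > tnext.start) {   // precedence broken
        tnext.start = ti.start + ti.duration;     // reschedule task i+1
        return 1;                                 // one backtrack instance
    }
    return 0;
}
```
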
Figure 5.5 again clearly shows the difficult time DHS had
with 2 bottleneck jobs. All 1 bottleneck experiment groups
were scheduled with less than 2 backtracks on average,
while 2 bottleneck experiment groups were scheduled with
more than 6 backtracks on average.
Were DHS designed to backtrack in an attempt to eliminate
tardiness or to optimize inventory costs, the number of
backtracks required would of course be higher.

Figure 5.5: Average number of backtrack operations per
experiment group.
5.1 DHS and Micro-Boss
Although DHS and the version of Micro-Boss documented in
(Sadeh, 1991) share the same test case suite and similar
variable selection mechanisms, direct comparison of their
performance unfortunately remains somewhat a question of
apples and oranges.
Sadeh considers each of his 60 test cases "experiments",
where an experiment is a group of 10 jobs to be scheduled.
Micro-Boss of (Sadeh, 1991) was gauged on 3 performance
measures:
1. Search efficiency: the ratio of the number of tasks to
be scheduled over the total number of search states that
were explored. When there is no backtracking this value
should be equal to 1.
2. Number of experiments solved in less than 1,000 search
states each, where an experiment is one of the 60 test
cases.
3. Average CPU time per operation, in seconds.
Of these measures, 1 and part of 2 are the most relevant.
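Measure 1 can be computed directly from counters DHS already prints. The sketch below assumes, as the "Backed"/"schedCalls" counters in the debug output suggest, that states explored equals tasks scheduled plus backtracks:

```cpp
#include <cassert>

// Search efficiency (measure 1): tasks scheduled divided by search
// states explored. 1.0 means no backtracking occurred. Assumes states
// explored = tasks + backtracks, per the DHS counters.
double searchEfficiency(int tasks, int backtracks)
{
    return static_cast<double>(tasks) / (tasks + backtracks);
}
```

For the debug run shown in the appendix (50 tasks, 10 backtracks, 60 scheduling calls) this gives roughly 0.83.
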

All of DHS's solutions were generated in far less than 1000
search states, due to DHS's simplicity, unfortunately not
its efficiency. Given the difference between the (Sadeh,
1991) Micro-Boss implementation in a Lisp variant and DHS's
implementation in C++, there is little use in comparing CPU
times.
Micro-Boss' best search efficiency was 85%; DHS's was 92%.
Due to lack of an extensive backtracking scheme, DHS solved
only 28 of 60 experiments in the sense used in (Sadeh,
1991), where solving an experiment is defined as scheduling
all jobs within an experiment with no lateness. Micro-Boss
solved 52 of 60 experiments.
In DHS's defense, it solved 553 of the 600 jobs in all the
experiments with no lateness, or 92%. This is much better
than DHS's performance according to Sadeh's criteria, 47%.
It should also be noted that DHS's performance is
consistent with the CORTES experiments. As communication of
scheduling information decreased, CORTES made worse
schedules. DHS matches this result with its low-fidelity
demand profiles.
What this shows is that Micro-Boss is a very efficient,
effective scheduling system. It also shows that DHS is
capable of fair performance. Part of the drop in
performance comes from resources performing start time
selection with much less information than Micro-Boss has.
The rest comes from DHS's relative lack of sophisticated
heuristic methods, such as the backtracking system of
(Sadeh, Sycara & Xiong, 1994).
5.2 DHS Traffic Requirements
Each call to schedule a task in DHS requires 3 parameters
to be transferred. On invocation, a task structure of 44
bytes and a suggested start time of 4 bytes are
transmitted. On return from scheduling, a task structure of
44 bytes is returned, for a total of 92 bytes or 736 bits
per scheduling of a task on a resource.

DHS's average number of scheduling calls is 54 per test
set, over the 60 test sets. This is 39,744 bits on average
for a test set. Given a network rate of 10 megabits per
second (Mbps), DHS allows roughly 252 test sets per second
to be scheduled across a network, assuming the applications
could use all 10 Mbps of a CSMA/CD LAN (a generous
assumption).
Given that DHS requires around 2 seconds to schedule one
experiment on a Sparcstation 10, network communication is
clearly not a bottleneck.
It is interesting to compare this figure with an
implementation that transmits updated texture maps to the
resources with each scheduling action. This is a similar
architecture to that used by CORTES for resource demand
between scheduling agents (Sycara et al., 1991). The size
of DHS's texture maps is 1208 bytes, or 9664 bits. This
value plus the 736 bits of task information sums to 10400
bits. This times 54 average scheduling operations per test
set yields 561600 bits per test set on average. This would
allow the scheduling of 18 average test sets per second.
This is a factor of 14 increase in bandwidth demand. While
still feasible on a LAN at DHS's execution rate, the
severe increase in demand for communications bandwidth is
clear.
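The traffic arithmetic of this section can be reproduced in a few lines (constants are taken from the text; testSetsPerSecond is an illustrative helper, not part of DHS):

```cpp
#include <cassert>

// Section 5.2 message sizes and throughput arithmetic.
const int TASK_BYTES = 44;        // task structure, each direction
const int START_BYTES = 4;        // suggested start time, invocation only
const int CALL_BITS = (TASK_BYTES + START_BYTES + TASK_BYTES) * 8;  // 736
const int TEXMAP_BITS = 9664;     // texture map payload per call (variant)
const int CALLS_PER_SET = 54;     // average scheduling calls per test set
const double LAN_BPS = 10e6;      // 10 Mbps LAN, assumed fully available

// Test sets schedulable per second for a given per-call payload.
double testSetsPerSecond(int bitsPerCall)
{
    return LAN_BPS / (bitsPerCall * CALLS_PER_SET);
}
```

Evaluating this for 736 bits per call yields the roughly 252 test sets per second quoted above; adding the texture map payload drops throughput to just under 18.
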
5.3 Summary
Despite DHS's fair performance on benchmarks, I consider
the program to be a success. It demonstrates that fair
constraint-based heuristic scheduling performance can be
achieved under severe bandwidth limitations, and without
transmitting extensive proprietary data between two members
of a virtual enterprise. The results of this research are
currently being incorporated directly into a system design
for a manufacturing process representation / scheduling /
execution / simulation / analysis system for the Advanced
Research Projects Agency.

A. Debug Output
Table B-1: Textual DHS debug output. Lines beginning with
"Task" indicate a task selected to be scheduled next.
Lines beginning with "caching" indicate that the task
selected to be scheduled next is forcing other tasks within
the same job to have a very small set of possible start
dates (referred to as tier 1 variable selection). Lines
beginning with "backtracking" indicate that a task's final
scheduled start time is invalidating precedence constraints
among other scheduled tasks within the same job. Finally
the inventory and lateness values are printed for the
completed schedule.
caching Task 3 Rsc 4 Job 8 Start -1
caching Task 4 Rsc 1 Job 8 Start -1
Task 2 Rsc 2 Job 8 Start 124
caching Task 3 Rsc 4 Job 8 Start -1
Task 4 Rsc 1 Job 8 Start 143
Task 3 Rsc 4 Job 8 Start 140
caching Task 0 Rsc 0 Job 5 Start -1
caching Task 1 Rsc 3 Job 5 Start -1
Task 2 Rsc 2 Job 5 Start 20
caching Task 0 Rsc 0 Job 5 Start -1
Task 1 Rsc 3 Job 5 Start 11
Task 0 Rsc 0 Job 5 Start 0
Task 2 Rsc 2 Job 6 Start 31
caching Task 0 Rsc 0 Job 6 Start -1
Task 1 Rsc 3 Job 6 Start 6
backtracking: Task 1 Rsc 3 Job 6 Start 6
Task 0 Rsc 0 Job 6 Start 11
caching Task 4 Rsc 1 Job 4 Start -1
Task 2 Rsc 2 Job 4 Start 109
caching Task 3 Rsc 4 Job 4 Start -1
Task 4 Rsc 1 Job 4 Start 133
Task 3 Rsc 4 Job 4 Start 127
Task 2 Rsc 2 Job 7 Start 46

Task 2 Rsc 2 Job 0 Start 60
Task 2 Rsc 2 Job 9 Start 73
Task 2 Rsc 2 Job 1 Start 85
Task 2 Rsc 2 Job 2 Start 97
Task 1 Rsc 3 Job 6 Start 21
caching Task 0 Rsc 0 Job 3 Start -1
caching Task 1 Rsc 4 Job 3 Start -1
Task 2 Rsc 2 Job 3 Start 8
caching Task 0 Rsc 0 Job 3 Start -1
Task 1 Rsc 4 Job 3 Start 3
backtracking: Task 1 Rsc 4 Job 3 Start 3
backtracking: Task 2 Rsc 2 Job 3 Start 8
caching Task 2 Rsc 2 Job 3 Start -1
Task 0 Rsc 0 Job 3 Start 17
caching Task 3 Rsc 1 Job 3 Start -1
caching Task 4 Rsc 3 Job 3 Start -1
Task 2 Rsc 2 Job 3 Start 140
caching Task 3 Rsc 1 Job 3 Start -1
Task 4 Rsc 3 Job 3 Start 158
backtracking: Task 4 Rsc 3 Job 3 Start 158
caching Task 4 Rsc 3 Job 3 Start -1
Task 3 Rsc 1 Job 3 Start 149
Task 4 Rsc 3 Job 3 Start 159
Task 1 Rsc 3 Job 7 Start 26
Task 3 Rsc 1 Job 2 Start 108
Task 3 Rsc 1 Job 1 Start 97
Task 0 Rsc 3 Job 0 Start 0
Task 3 Rsc 1 Job 9 Start 85
Task 4 Rsc 4 Job 1 Start 106
caching Task 3 Rsc 0 Job 0 Start -1
Task 4 Rsc 1 Job 0 Start 92
Task 3 Rsc 0 Job 0 Start 81
caching Task 1 Rsc 0 Job 1 Start -1
Task 0 Rsc 3 Job 1 Start 70
backtracking: Task 2 Rsc 2 Job 1 Start 85
backtracking: Task 3 Rsc 1 Job 1 Start 97
backtracking: Task 4 Rsc 4 Job 1 Start 106
caching Task 2 Rsc 2 Job 1 Start -1
caching Task 3 Rsc 1 Job 1 Start -1
caching Task 4 Rsc 4 Job 1 Start -1
Task 1 Rsc 0 Job 1 Start 92

caching Task 2 Rsc 2 Job 1 Start -1
caching Task 3 Rsc 1 Job 1 Start -1
Task 4 Rsc 4 Job 1 Start 143
caching Task 2 Rsc 2 Job 1 Start -1
Task 3 Rsc 1 Job 1 Start 119
backtracking: Task 3 Rsc 1 Job 1 Start 119
backtracking: Task 4 Rsc 4 Job 1 Start 143
caching Task 3 Rsc 1 Job 1 Start -1
caching Task 4 Rsc 4 Job 1 Start -1
Task 2 Rsc 2 Job 1 Start 148
caching Task 3 Rsc 1 Job 1 Start -1
Task 4 Rsc 4 Job 1 Start 169
Task 3 Rsc 1 Job 1 Start 160
Task 1 Rsc 3 Job 2 Start 87
Task 3 Rsc 0 Job 7 Start 109
Task 4 Rsc 0 Job 9 Start 99
Task 1 Rsc 3 Job 8 Start 98
Task 4 Rsc 4 Job 7 Start 145
caching Task 3 Rsc 4 Job 6 Start -1
Task 4 Rsc 1 Job 6 Start 53
Task 3 Rsc 4 Job 6 Start 46
Task 1 Rsc 4 Job 0 Start 6
Task 0 Rsc 4 Job 2 Start 0
caching Task 0 Rsc 3 Job 4 Start -1
Task 1 Rsc 0 Job 4 Start 20
backtracking: Task 1 Rsc 0 Job 4 Start 20
Task 0 Rsc 3 Job 4 Start 34
Task 1 Rsc 0 Job 4 Start 40
Task 0 Rsc 0 Job 8 Start 20
Task 0 Rsc 4 Job 9 Start 16
Task 1 Rsc 3 Job 9 Start 40
caching Task 4 Rsc 1 Job 5 Start -1
Task 3 Rsc 4 Job 5 Start 135
Task 4 Rsc 1 Job 5 Start 169
Task 1 Rsc 4 Job 3 Start 20
Task 4 Rsc 0 Job 2 Start 120
Task 0 Rsc 1 Job 7 Start 0
Job 0 Tardiness 0 Inventory 87

Job 1 Tardiness 270 Inventory 52
Job 2 Tardiness 0 Inventory 46
Job 3 Tardiness 60 Inventory 56
Job 4 Tardiness 0 Inventory 32
Job 5 Tardiness 230 Inventory 119
Job 6 Tardiness 0 Inventory 73
Job 7 Tardiness 0 Inventory 62
Job 8 Tardiness 0 Inventory 25
Job 9 Tardiness 0 Inventory 40
Total Tardiness 560 Inventory 592 Avg Tard 56.00 Avg Inv 59.20 Backed 10 schedCalls 60

Figure B-1: Screendump of DHS debug tool. This Gantt chart display shows resources
0-4 on the vertical axis, and time on the horizontal axis. Each of 10 jobs has a different
color (gray scale on this reproduction). On the darker jobs, the lower left corner shows a
j:t legend, where j is the job number and t is the task of that job. The heaviest vertical line
towards the right is the latest due date. One and one-half tasks of job 4 (indicated with
dark legends) extend beyond the latest completion date. This is due to task 4:2's
unfortunate late placement on bottleneck resource 2.

B. Code
FILE: defs.h
#ifndef DEFS_H
#define DEFS_H
#define MAX_TASKS 10
// gen specific to CSC 5816
#define NTIME 300
extern "C" {
void exit();
}
#endif

FILE: domain.h
#include "defs.h"
#include "utils.h"
class Domain {
int jobCnt, rscCnt, backtracked;
Job *job[MAX_JOBS];
Resource *rsc[MAX_RESOURCES];
TexMap *texMap[MAX_RESOURCES];
List *critCache;
List *tasksL[MAX_RESOURCES];
void texMapJobs (),
propagate (Task *),
cacheCritTasks (Job *),
notifyTasks (int);
Task *findMaxTask ();
float calcWSPT (Task *);
void read (int, char *[]),
verify (),
unschedule (Task *),
removeTex (Task *),
profile (Task *),
optForward (),
moveTask (Task *, int);
};

FILE: job.h
#include "defs.h"
#define MAX_PREDS 5
#define MAX_JOBS 160
class Job;
class Task;
class Job {
struct prec_str {
Task *task;
struct prec_str *preds[MAX_PREDS], *succesor;
int predCnt;
} prec[MAX_TASKS];
int taskCnt, job, tardinessFactor, dueDate, releaseDate;
struct prec_str *getLastTask ();
void recursBackSched (struct prec_str *, int);
void recursFrontSched (struct prec_str *);
Job (char *, FILE *);
void frontSched (),
backSched (),
verify (),
propagateStarts (),
dump (),
remove ();
int costTard (),
costInv ();
Task *getTask (int which) { return prec[which].task; };
int getTaskCnt () { return taskCnt; };
int getReleaseDate () { return releaseDate; };
int getJobNum () { return job; };
int getTard () { return tardinessFactor; };
};

FILE: resource.h
#include "defs.h"
#include "utils.h"
class Task;
class Resource {
Task *timeLine[NTIME];
int pass (int, int),
propose (Task *, int);
void assign (Task *, int);
Resource ();
int schedule (Task *, int),
countStarts (Task *),
optForward ();
void unschedule (Task *);
};

FILE: task.h
#include "defs.h"
//#include "comm_strs.h"
class Job;
class Resource;
class Task {
int rscType, duration, invFactor, opNum, nextTask,
earliestStart, latestStart, actualStart, freeSlots;
Job *myJob;
Resource *myRsc;
Task (char *, Job *);
int getRscType () { return rscType; };
int getNext () { return nextTask; };
int getDuration () { return duration; };
int getInv () { return invFactor; };
int getFreeSlots () { return freeSlots; };
int getID () { return opNum; };
void setRsc (Resource *rsc) { myRsc = rsc; };
Job *getJob () { return myJob; };
int costTard (),
costInv (),
getEarliestStart (),
getLatestStart (),
isScheduledP (),
getFrom (),
getTo ();
void setEarliestStart (int),
setLatestStart (int when),
setFreeSlots (),
setActualStart (int),
taskPrint (),
prettyDump (),
notifyRscChange ();
};

FILE: texmap.h
#include "defs.h"
class TexLL {
TexLL *next, *prev;
Task *task;
float val;
int removed;
TexLL (TexLL *, Task *, float);
friend class TexLLHead;
};
class TexLLHead {
TexLL *head, *tail, *max;
int cnt;
float total;
void removeLink (TexLL *),
findMaxTask (),
sumTasks ();
TexLLHead (Task *, float);
void add (Task *, float),
removeMaxTask (),
removeTask (Task *),
dump (),
dumpTasks ();
Task *getMaxTask ();
float getMaxVal ();
float getTotal () { return total; };
int getCnt () { return cnt; };
int hasUnschedTaskP ();
};
class TexMap {
TexLLHead *tex[NTIME];
int maxTime; // index of center of most complicated texture
TexMap ();
void addToMap (Task *, int, float),
removeTask (Task *),
dumpVals (),
dumpTasks ();
Task *max (float *),
*getMaxTask ();
float getMaxVal ();
int getBest (Task *);
};

FILE: utils.h
#ifndef UTILS
#define UTILS
class Item {
Item *next;
void *item;
Item (void *);
void *getItem () { return item; };
Item *getNext () { return next; };
void setNext (Item *that) { next = that; };
};
class List {
Item *head, *tail, *ptr;
List ();
void rewind () { ptr = head; };
void advance (),
push (void *),
exPush (void *),
add (void *);
void *pop (),
*getItem ();
};
#endif

#include "task.h"
#include "job.h"
#include "resource.h"
#include "texmap.h"
#include "domain.h"
Domain domain;
main (int argc, char *argv[])
{ (argc, argv);
domain.initJobs ();
domain.procSched ();
domain.verify ();
// domain.dumpSched ();
}

#include "task.h"
#include "job.h"
#include "resource.h"
#include "texmap.h"
#include "domain.h"
#define DOM_LEN 80
int sched=0;
Domain::read (int argc, char *argv[])
{
FILE *fp;
char buf[DOM_LEN];
int i, j, needed;
backtracked = 0;
critCache = new List ();
for (i=0; i<MAX_RESOURCES; i++)
tasksL[i] = new List ();
if (argc < 2) {
printf ("Need a domain to load\n");
exit ();
}
if ((fp=fopen (argv[1], "r")) == NULL) {
printf ("can't open %s\n", argv[1]);
exit ();
}
while (fgets (buf, DOM_LEN, fp) != NULL)
job[jobCnt++] = new Job (buf, fp);
// create resources
for (i=0; i<jobCnt; i++)
for (j=0; j<job[i]->getTaskCnt (); j++) {
needed = (job[i]->getTask (j))->getRscType ();
if (rsc[needed] == 0) {
texMap[needed] = new TexMap ();
rsc[needed] = new Resource ();
rscCnt++;
}
if (rscCnt == MAX_RESOURCES)

for (i=0; i<jobCnt; i++)
for (j=0; j<job[i]->getTaskCnt (); j++) {
Task *task = job[i]->getTask (j);
task->setRsc (rsc[task->getRscType ()]);
tasksL[task->getRscType ()]->push (task);
}
}
Domain::initJobs ()
{
int i, j;
Task *task;
for (i=0; i<jobCnt; i++)
job[i]->backSched ();
for (i=0; i<jobCnt; i++)
job[i]->frontSched ();
for (i=0; i<jobCnt; i++)
for (j=0; j<job[i]->getTaskCnt (); j++) {
task = job[i]->getTask (j);
task->setFreeSlots ();
}
}
Domain::verify ()
{
int i;
for (i=0; i<jobCnt; i++)
job[i]->verify ();
}
Domain::notifyTasks (int which)
{
Task *task;
while ((task = (Task *) tasksL[which]->getItem ()) != NULL) {
task->notifyRscChange ();
}
}
Domain::costJobs ()
{
int i, tard, inv,
tsum=0, isum=0, cmplted=0;
for (i=0; i<jobCnt; i++) {
printf ("Job %3d Tardiness %d\tInventory %d\n",
job[i]->getJobNum (), (tard=job[i]->costTard ()),
(inv=job[i]->costInv ()));
if (tard < 99999) {
tsum += tard;
isum += inv;
cmplted++;
}
}
printf ("\nTotal Tardiness %d Inventory %d Avg Tard %.2f Avg Inv %.2f Backed %d schedCalls %d\n",
tsum, isum, (float) tsum/cmplted, (float) isum/cmplted, backtracked, sched);
printf ("*********************\n");
}
Domain::dumpSched ()
{
int i;
printf ("Rsc %d Jobs %d\n", rscCnt, jobCnt);
for (i=0; i<jobCnt; i++) {
printf ("Job %d ", job[i]->getJobNum ());
job[i]->dump ();
putchar ('\n');
}
}
Domain::texMapJobs ()
{
int i, j;
for (i=0; i<jobCnt; i++)
for (j=0; j<job[i]->getTaskCnt (); j++)
profile (job[i]->getTask (j));
}
Domain::calcWSPT (Task *task)
{
#define WSPTBOUND 0.9
float val = (task->getJob ())->getTard () / (float) task->getDuration ();
if (val < WSPTBOUND)
return WSPTBOUND;
else if (val > 2.0 * WSPTBOUND)
return 2.0 * WSPTBOUND;
return val;
}
Domain::profile (Task *task)
{
float prof[NTIME], unit, WSPT;
int earliest, latest, duration,
rsc = task->getRscType ();
int k, l;
if (!task->isScheduledP ()) {
earliest = task->getEarliestStart ();
latest = task->getLatestStart ();
duration = task->getDuration ();
WSPT = calcWSPT (task);
unit = (1.0 / (float) task->getFreeSlots ()) * WSPT;
// exchange for memclr, bcopy, some reasonable block operation
for (k=0; k<NTIME; k++)
prof[k] = 0.0;
for (k=0; k<=latest-earliest; k++)
for (l=k; l<k+duration; l++)
prof[l] += unit;
for (k=0, l=earliest; l<=latest+duration-1; k++, l++)
texMap[rsc]->addToMap (task, l, prof[k]);
}
else
for (l=task->getFrom (); l<=task->getTo (); l++)
texMap[rsc]->addToMap (task, l, 1.0);
}
Domain::optForward ()
{
int i, madeachange;
do {
madeachange = 0;
for (i=0; i<rscCnt; i++)
if (rsc[i]->optForward () == 1)
madeachange = 1;
} while (madeachange == 1);
}
Task *
Domain::findMaxTask ()
{
int i;
Task *task=NULL;
int maxNdx = -1;
float score, maxScore = 0.0;
if ((task = (Task *) critCache->pop ()) != NULL); else {
for (i=0; i<rscCnt; i++) {
score = texMap[i]->getMaxVal ();
if (score > maxScore) {
maxScore = score;
maxNdx = i;
}
}
if (maxNdx > -1) // -1 indicates all tasks are scheduled
task = texMap[maxNdx]->getMaxTask ();
}
return task;
}
Domain::cacheCritTasks (Job *job)
{
Task *task;
for (int j=0; j<job->getTaskCnt (); j++) {
task = job->getTask (j);
if (!task->isScheduledP () && task->getFreeSlots () <
((float) task->getDuration () * 1.0)) {
critCache->exPush (task);
printf ("caching "); task->taskPrint ();
}
}
}
Domain::procSched ()
{
Task *task;
int rscNdx;
texMapJobs ();
while ((task = findMaxTask ()) != NULL) {
rscNdx = task->getRscType ();
texMap[rscNdx]->removeTask (task);
if (rsc[rscNdx]->schedule (task, texMap[rscNdx]->getBest (task)))
sched++;
notifyTasks (rscNdx);
propagate (task);
cacheCritTasks (task->getJob ());
}
}
Domain::moveTask (Task *task, int toWhen)
{
int rscNdx = task->getRscType ();
rsc[rscNdx]->unschedule (task);
task->setActualStart (UNSCHEDULED);
rsc[rscNdx]->schedule (task, toWhen);
propagate (task);
}
Domain::propagate (Task *task)
{
profile (task); // add already scheduled task back to profile
(task->getJob ())->propagateStarts ();
}
Domain::unschedule (Task *task)
{
rsc[task->getRscType ()]->unschedule (task);
printf ("backtracking: "); task->taskPrint ();
task->setActualStart (UNSCHEDULED);
backtracked++;
}
Domain::removeTex (Task *task)
{
texMap[task->getRscType ()]->removeTask (task);
}

#include "job.h"
#include "task.h"
#include "texmap.h" // for domain.h
#include "domain.h"
#define JOB_LEN 80
extern Domain domain;
Job::Job (char *jobStr, FILE *fp)
{
char buf[JOB_LEN];
int i, next;
// i value not used, just soaking it up
sscanf (jobStr, "%d%d%d%d%d",
&job, &tardinessFactor, &i, &dueDate, &releaseDate);
taskCnt = 0;
while (fgets (buf, JOB_LEN, fp) != NULL && strlen (buf) > 1) {
prec[taskCnt].task = new Task (buf, this);
prec[taskCnt].predCnt = 0;
taskCnt++;
}
for (i=0; i<taskCnt; i++) {
next = prec[i].task->getNext (); // next same as task index
if (next > 0) {
prec[next].preds[prec[next].predCnt++] = &prec[i];
prec[i].succesor = &prec[next];
} else
prec[i].succesor = NULL; // last in line
}
}
Job::recursBackSched (struct prec_str *thisPrec, int due)
{
int j,
latest = due - thisPrec->task->getDuration ();
thisPrec->task->setLatestStart (latest);
for (j=thisPrec->predCnt-1; j>=0; j--)
recursBackSched (thisPrec->preds[j], latest);
}
Job::backSched ()
{
recursBackSched (getLastTask (), dueDate);
}
Job::recursFrontSched (struct prec_str *thisPrec)
{
int j, max;
struct prec_str *pred;
// those without preds have earliest start from taskInit
if (thisPrec->predCnt != 0) {
max = 0;
for (j=0; j<thisPrec->predCnt; j++) {
pred = thisPrec->preds[j];
if (pred->task->getEarliestStart () + pred->task->getDuration () > max)
max = pred->task->getEarliestStart () + pred->task->getDuration ();
}
thisPrec->task->setEarliestStart (max);
}
if (thisPrec->succesor != NULL)
recursFrontSched (thisPrec->succesor);
}
Job::frontSched ()
{
int i;
for (i=0; i<taskCnt; i++)
if (prec[i].predCnt == 0)
recursFrontSched (&prec[i]);
}
struct Job::prec_str *
Job::getLastTask ()
{ return &prec[taskCnt-1]; }
Job::verify ()
{
int i;
for (i=0; i<taskCnt; i++)
if (prec[i].succesor != 0)
if (prec[i].task->getTo () >= prec[i].succesor->task->getFrom ()) {
fprintf (stderr, "Job: Bad sequence job %d task %d\n", job, i);
dump ();
exit ();
}
}
Job::dump ()
{
int i;
//printf ("Tardiness %d,\n", tardinessFactor);
for (i=0; i<taskCnt; i++) {
printf ("\tTask %d ", i);
prec[i].task->prettyDump ();
}
}
Job::costTard ()
{
Task *lastTask = prec[taskCnt-1].task;
int tardiness = lastTask->getTo () - dueDate;
if (tardiness < 0)
tardiness = 0;
if (lastTask->isScheduledP ())
return tardiness * tardinessFactor;
return 99999;
}
Job::costInv ()
{
int recoveryDate;
int i, sum=0, dur=0;
Task *lastTask = prec[taskCnt-1].task;
for (i=0; i<taskCnt; i++)
dur += prec[i].task->getDuration ();
if (lastTask->isScheduledP ()) {
recoveryDate = (dueDate > lastTask->getTo ()) ?
dueDate : lastTask->getTo ();
for (i=0; i<taskCnt; i++)
sum += prec[i].task->getInv () * (recoveryDate -
prec[i].task->getFrom ());
return sum - dur;
}
return 99999;
}
Job::propagateStarts ()
{
int i, eStart, lStart, dur,
changed = 1;
Task *suc, *task, *pred;
while (changed) {
changed = 0;
for (i=0; i<taskCnt; i++) {
suc = NULL;
if (prec[i].succesor)
suc = prec[i].succesor->task;
pred = NULL;
if (prec[i].predCnt)
pred = prec[i].preds[0]->task;
task = prec[i].task;
dur = task->getDuration ();
eStart = task->getEarliestStart ();
lStart = task->getLatestStart ();
// good place for optimization: minimize texture calls
// have to handle texture here, before start change, for efficiency
if (pred) {
if (eStart != pred->getFrom () + pred->getDuration ()) {
domain.removeTex (task);
// task will set latestStart too
task->setEarliestStart (pred->getFrom () + pred->getDuration ());
domain.profile (task);
changed = 1;
}
}
if (suc) {
if (lStart + dur != suc->getTo () - suc->getDuration () + 1) {
domain.removeTex (task);
task->setLatestStart (suc->getTo () - suc->getDuration () - dur + 1);
domain.profile (task);
changed = 1;
}
}
}
}
}
Job::remove ()
{
int i;
printf ("Job::remove: removing job %d\n", job);
for (i=0; i<taskCnt; i++)
if (prec[i].task->isScheduledP ()) {
domain.removeTex (prec[i].task);
domain.unschedule (prec[i].task);
}
}

#include "task.h"
#include "texmap.h" // for domain.h
#include "domain.h"
#include "resource.h"
extern Domain domain;
Resource::Resource ()
{
int i;
for (i=0; i<NTIME; i++)
timeLine[i] = NULL;
}
Resource::pass (int start, int duration)
{
int OK = 1, i;
for (i=start; i<start+duration; i++)
if (timeLine[i] != NULL)
OK = 0;
return OK;
}
Resource::countStarts (Task *task)
{
int i,
starts = 0,
duration = task->getDuration ();
for (i=task->getEarliestStart (); i<=task->getLatestStart (); i++)
if (pass (i, duration) == 1)
starts++;
return starts;
}

Resource::assign (Task *task, int start)
{
int i;
task->setActualStart (start);
for (i=task->getFrom (); i<=task->getTo (); i++) {
if (timeLine[i] != NULL)
printf ("Resource::schedule: hosing reservation\n");
timeLine[i] = task;
}
}
// try for best start, then back up till open slot is found
Resource::schedule (Task *task, int bestStart)
{
int duration = task->getDuration (),
tryStart, bound, scheduled = 0;
if (pass (bestStart, duration) == 1) {
assign (task, bestStart);
scheduled = 1;
}
if (!scheduled) { // make it later than bestStart
tryStart = bestStart + 1;
bound = task->getLatestStart ();
while (!scheduled && tryStart <= bound) {
if (pass (tryStart, duration) == 1) {
assign (task, tryStart);
scheduled = 1;
}
tryStart++;
}
}
if (!scheduled) { // make it earlier than bestStart
tryStart = bestStart - 1;
bound = task->getEarliestStart ();
while (!scheduled && tryStart >= bound) {
if (pass (tryStart, duration) == 1) {
assign (task, tryStart);
scheduled = 1;
}
tryStart--;
}
}
if (!scheduled) { // make it late
tryStart = task->getLatestStart () + 1;
bound = NTIME - task->getDuration ();
while (!scheduled && tryStart <= bound) {
if (pass (tryStart, duration) == 1) {
assign (task, tryStart);
scheduled = 1;
}
tryStart++;
}
}
return scheduled;
}
Resource::unschedule (Task *task)
{
int i;
for (i=task->getFrom (); i<=task->getTo (); i++) {
if (timeLine[i] != task)
printf ("Resource::unschedule: hosing up\n");
timeLine[i] = NULL;
}
}
Resource::propose (Task *task, int incr)
{
int i;
for (i = task->getFrom ()+incr; i<=task->getTo ()+incr; i++)
if (timeLine[i] != task && timeLine[i] != 0)
return 0;
return 1;
}
Resource::optForward ()
{
Task *task=NULL;
int i, newStart, incr,
madeachange = 0;
for (i=0; i<NTIME; i++)
if (timeLine[i] != NULL && timeLine[i] != task) {
incr = 1;
task = timeLine[i];
while (propose (task, incr) == 1 &&
task->getFrom ()+incr <= task->getLatestStart ())
incr++;
if (--incr > 0) {
newStart = task->getFrom () + incr;
domain.moveTask (task, newStart);
madeachange = 1;
}
}
return madeachange;
}

#include "task.h"
#include "job.h"
#include "texmap.h"
#include "domain.h"
#include "resource.h"
extern Domain domain;
Task::Task (char *taskStr, Job *job)
{
earliestStart = job->getReleaseDate ();
latestStart = 0;
freeSlots = 0; // real initial value set in job when propagating start times
// -1 == not scheduled, or tail
actualStart = nextTask = -1;
myJob = job;
sscanf (taskStr, "%d%d%d%d%d", &opNum, &invFactor,
&rscType, &duration, &nextTask);
}
Task::setEarliestStart (int when)
{
earliestStart = when;
setFreeSlots ();
if (actualStart > -1 && earliestStart > actualStart) {
domain.unschedule (this);
setActualStart (UNSCHEDULED);
}
//_domain.profile (this);
//_myJob->propagateStarts ();
//_reserveSoonest ();
/*_if (proposedStart > -1 && earliestStart > proposedStart) {
setProposed (earliestStart, earliestStart+duration-1);
//myJob->propagateStarts (); don't think this is needed
} */
if (earliestStart > latestStart)
setLatestStart (earliestStart);
}
Task::setLatestStart (int when)
{
latestStart = when;
setFreeSlots ();
}
Task::setActualStart (int when)
{
// if (when > earliestStart)
// setEarliestStart (when);
// myJob->propagateStarts ();
actualStart = when;
}
Task::prettyDump ()
{
printf (" Resource %d Duration %d InventoryFactor %d", rscType,
duration, invFactor);
printf (" From %d To %d\n", actualStart, actualStart+duration);
printf ("\t\tearliest %d latest %d\n", earliestStart, latestStart);
}
Task::taskPrint ()
{
printf ("Task %d Rsc %d Job %d Start %d\n",
getID (), getRscType (), myJob->getJobNum (), actualStart);
}
Task::isScheduledP ()
{
if (actualStart == -1)
return 0;
return 1;
}
Task::getFrom ()
{
if (actualStart == -1)
return earliestStart;
return actualStart;
}
Task::getTo ()
{
if (actualStart == -1)
return latestStart + duration-1;
return actualStart + duration-1;
}
Task::getLatestStart ()
{
// if (actualStart == -1)
return latestStart;
// else
// return actualStart;
}
Task::getEarliestStart ()
{
// if (actualStart == -1)
return earliestStart;
// else
// return actualStart;
}
Task::setFreeSlots ()
{
int slots = myRsc->countStarts (this);
if (slots)
freeSlots = slots;
else
freeSlots = 1; // guarantee it's next to be scheduled
}

Task::notifyRscChange ()
{
setFreeSlots ();
if (!isScheduledP ()) {
domain.removeTex (this);
domain.profile (this);
}
}

#include "task.h"
#include "texmap.h"
TexMap::TexMap ()
{
int i;
for (i=0; i<NTIME; i++)
tex[i] = NULL;
maxTime = -1; // lame initial value?
// need bcopy or bitblt!!
}
TexMap::addToMap (Task *task, int ndx, float val)
{
if (tex[ndx] == NULL)
tex[ndx] = new TexLLHead (task, val);
else
tex[ndx]->add (task, val);
// semi-lame way to track area of greatest texture
if (maxTime == -1)
maxTime = ndx;
else if (tex[ndx]->getTotal () > tex[maxTime]->getTotal ())
maxTime = ndx;
}

Task *
TexMap::getMaxTask ()
{
if (maxTime == -1)
return NULL;
return tex[maxTime]->getMaxTask ();
}
TexMap::removeTask (Task *task)
{
float maxTotal=0.0;
int i;
for (i=task->getFrom (); i<=task->getTo (); i++) {
tex[i]->removeTask (task); // also finds new max task
if (tex[i]->getCnt () == 0) {
delete tex[i];
tex[i] = NULL;
}
}
for (i=0; i<NTIME; i++)
if (tex[i] && tex[i]->getTotal () > maxTotal) {
maxTotal = tex[i]->getTotal ();
maxTime = i;
}
if (!tex[maxTime])
maxTime = -1;
}
TexMap::dumpVals ()
{
int zeros=1;
for (int i=0; i<NTIME; i++)
if (tex[i]) {
zeros = 0;
printf ("%d:: ", i);
tex[i]->dump ();
putchar ('\n');
}
else if (!zeros) {
printf ("------\n");
zeros = 1;
}
}
TexMap::dumpTasks ()
{
int zeros=1;
for (int i=0; i<NTIME; i++)
if (tex[i]) {
zeros = 0;
printf ("%d:: ", i);
tex[i]->dumpTasks ();
putchar ('\n');
}
else if (!zeros) {
printf ("------\n");
zeros = 1;
}
}

TexMap::getMaxVal ()
{
if (maxTime != -1)
return tex[maxTime]->getTotal ();
return 0.0;
}
TexMap::getBest (Task *task)
{
int minNdx = task->getLatestStart ();
int i, j;
float rangeTotal,
minVal = 100000.0;
for (i=task->getEarliestStart (); i<=task->getLatestStart (); i++) {
rangeTotal = 0.0;
for (j=i; j<i+task->getDuration (); j++) {
if (tex[j]) {
// already scheduled
if (tex[j]->getCnt () == 1 && tex[j]->getTotal () == 0.0)
rangeTotal += 100000.0;
else
rangeTotal += tex[j]->getTotal ();
}
}
if (rangeTotal < minVal) {
minVal = rangeTotal;
minNdx = i;
}
}
return minNdx;
}
// linked list implementation for texture
TexLLHead::TexLLHead (Task *task, float nval)
{
head = tail = new TexLL (NULL, task, nval);
cnt = 1;
if (!task->isScheduledP ()) {
max = head;
total = nval;
}
else {
max = NULL;
total = 0.0;
}
}
// assumes new entries always go to end of list
TexLL::TexLL (TexLL *nprev, Task *ntask, float nval) :
task (ntask), val (nval), prev (nprev)
{
next = NULL;
removed = 0;
}
TexLLHead::add (Task *ntask, float nval)
{
TexLL *newTail = new TexLL (tail, ntask, nval);
tail->next = newTail;
tail = newTail;
cnt++;
if (hasUnschedTaskP ())
sumTasks ();
// semi-lame way to track greatest contributer to texture
if (!ntask->isScheduledP ())
if (max == NULL || nval > max->val)
max = tail;
}
TexLLHead::sumTasks ()
{
TexLL *that = head;
total = 0.0;
while (that != NULL) {
total += that->val;
that = that->next;
}
}
TexLLHead::hasUnschedTaskP ()
{
TexLL *that = head;
while (that != NULL) {
if (!that->task->isScheduledP ())
return 1;
that = that->next;
}
return 0;
}
Task *
TexLLHead::getMaxTask ()
{
if (max == NULL)
return NULL;
if (max->task->isScheduledP ())
printf ("getMaxTask: returning scheduled work\n");
return max->task;
}
TexLLHead::getMaxVal ()
{
if (max != NULL)
return max->val;
return 0.0;
}
TexLLHead::removeTask (Task *task)
{
TexLL *that = head, *next;
int notfound=1;
while (that != NULL) {
next = that->next;
if (that->task == task) {
removeLink (that);
notfound = 0;
if (!max || task == max->task)
findMaxTask ();
}
that = next;
}
if (notfound)
printf ("notfound\n");
}

TexLLHead::removeLink (TexLL *that)
{
if (that->prev != NULL)
that->prev->next = that->next;
else
head = that->next;
if (that->next != NULL)
that->next->prev = that->prev;
else
tail = that->prev;
cnt--;
total -= that->val;
delete that;
}
TexLLHead::findMaxTask ()
{
TexLL *that=head;
float maxVal = 0.0;
int foundAUnsched = 0;
max = NULL;
while (that != NULL) {
if (!that->task->isScheduledP () && that->val > maxVal) {
maxVal = that->val;
max = that;
foundAUnsched = 1;
}
that = that->next;
}
if (!foundAUnsched)
total = 0.0;
}
TexLLHead::removeMaxTask ()
{
removeLink (max);
findMaxTask ();
}
TexLLHead::dump ()
{
TexLL *entry=head;
printf ("%.2f: ", total);
while (entry != NULL) {
printf ("%.2f ", entry->val);
entry = entry->next;
}
}
TexLLHead::dumpTasks ()
{
TexLL *entry=head;
while (entry != NULL) {
printf ("%x ", entry->task);
entry = entry->next;
}
}

#include "utils.h"
/***********************************************************/
Item::Item (void *nitem)
{
item = nitem;
next = NULL;
}
List::List ()
{
head = tail = ptr = NULL;
}
List::push (void *nitem)
{
Item *that = new Item (nitem);
if (head == NULL)
tail = that;
that->setNext (head);
head = that;
}
List::exPush (void *nitem)
{
Item *that = head;
while (that != NULL) {
if (that->getItem () == nitem)
return;
that = that->getNext ();
}
push (nitem);
}
List::add (void *nitem)
{
if (tail) {
Item *that = new Item (nitem);
tail->setNext (that);
tail = that;
}
else
push (nitem);
}
void *
List::pop ()
{
if (head != NULL) {
void *item = head->getItem ();
Item *popped = head;
head = head->getNext ();
if (head == NULL)
tail = NULL;
delete popped;
return item;
}
return NULL;
}
List::advance ()
{
if (ptr != NULL)
ptr = ptr->getNext ();
}
void *
List::getItem ()
{
if (ptr != NULL)
return ptr->getItem ();
return NULL;
}

Burke, P., P. Prosser. 1994. The Distributed Asynchronous
Scheduler. Intelligent Scheduling, ed. M. Zweben, M. S.
Fox, Morgan Kaufmann, San Francisco CA.
Dechter, R. 1992. Constraint Networks. Encyclopedia of
Artificial Intelligence, second edition, pp. 276-285, John
Wiley & Sons.
Fox, M. S. 1994. ISIS: A Retrospective. Intelligent
Scheduling, ed. M. Zweben, M. S. Fox, Morgan Kaufmann, San
Francisco CA.
Fruhwirth, T., A. Herold, V. Kuchenhoff, T. Le Provost, P.
Lim, E. Monfroy, M. Wallace. 1993. Constraint Logic
Programming: An Informal Introduction. Technical Report
ECRC-93-5, European Computer-Industry Research Centre.
Garey, M. R., D. S. Johnson. 1979. Computers and
Intractability: A Guide to the Theory of NP-Completeness.
W. H. Freeman and Co., New York.
Goldratt, E. M., J. Cox. 1984. The Goal. North River Press,
Great Barrington MA.
Gomes, C. P., A. Tate, L. Thomas. 1994. A Distributed
Scheduling Framework. Proceedings of the Sixth
International Conference on Tools with Artificial
Intelligence, pp. 49-55, New Orleans, Louisiana, IEEE
Computer Society Press.
Hadavi, K. 1994. ReDS: A Real-Time Production Scheduling
System from Conception to Practice. Intelligent
Scheduling, ed. M. Zweben, M. S. Fox, Morgan Kaufmann, San
Francisco CA.
Hadavi, K., W. Hsu, T. Chen, C. Lee. 1992. An Architecture
for Real-Time Distributed Scheduling. AI Magazine, pp. 46-
56, Fall 1992.

Johnston, M. D., S. Minton. 1994. Analyzing a Heuristic
Strategy for Constraint Satisfaction and Scheduling.
Intelligent Scheduling, ed. M. Zweben, M. S. Fox, Morgan
Kaufmann, San Francisco CA.
Le Pape, C. 1994. Using a Constraint-Based Scheduling
Library to Solve a Specific Scheduling Problem.
Proceedings of the AAAI-SIGMAN Workshop on Artificial
Intelligence Approaches to Modeling and Scheduling
Manufacturing Processes.
Morton, T. E., D. W. Pentico. 1993. Heuristic Scheduling
Systems. John Wiley & Sons, New York.
Sadeh, N. 1991. Look-ahead Techniques for Micro-
opportunistic Job Shop Scheduling. Ph.D. Thesis, School of
Computer Science, Carnegie Mellon University.
Sadeh, N., M. S. Fox. 1991. Variable and Value Ordering
Heuristics for Hard Constraint Satisfaction Problems: An
Application to Job Shop Scheduling. Technical Report
CMU-RI-TR-91-23, The Robotics Institute, Carnegie Mellon
University.
Sadeh, N., K. Sycara, Y. Xiong. 1994. Backtracking
Techniques for the Job Shop Scheduling Constraint
Satisfaction Problem. Technical Report CMU-RI-TR-94-31,
The Robotics Institute, Carnegie Mellon University.
Sadeh, N. 1994. Micro-Opportunistic Scheduling: The Micro-
Boss Factory Scheduler. Intelligent Scheduling, ed. M.
Zweben, M. S. Fox, Morgan Kaufmann, San Francisco CA.
Sadeh, N. ftp. Test data available from "ftp", in
Smith, S. F. 1994. OPIS: A Methodology and Architecture for
Reactive Scheduling. Intelligent Scheduling, ed. M.
Zweben, M. S. Fox, Morgan Kaufmann, San Francisco CA.
Sycara, K. P., S. F. Roth, N. Sadeh, M. S. Fox. 1991.
Resource Allocation in Distributed Factory Scheduling.
IEEE Expert, February 1991.
Vepsalainen, A. P. J., T. E. Morton. 1987. Priority Rules
for Job Shops with Weighted Tardiness Costs. Management
Science, Vol. 33, No. 8, August 1987.
Zweben, M., M. S. Fox, eds. 1994. Intelligent Scheduling,
p. ix, Morgan Kaufmann, San Francisco CA.