Citation
Resource allocation across geographically distributed domains

Material Information

Title:
Resource allocation across geographically distributed domains: an application in the emergency services industry
Creator:
Hunter, Tara K
Publication Date:
1997
Language:
English
Physical Description:
viii, 85 leaves : illustrations ; 29 cm

Subjects

Subjects / Keywords:
CADS (Computer system) ( lcsh )
Fire engines -- Dispatching ( lcsh )
Assistance in emergencies ( lcsh )
Assistance in emergencies ( fast )
CADS (Computer system) ( fast )
Fire engines -- Dispatching ( fast )
Genre:
bibliography ( marcgt )
theses ( marcgt )
non-fiction ( marcgt )

Notes

Bibliography:
Includes bibliographical references (leaves 83-85).
General Note:
Submitted in partial fulfillment of the requirements for the degree, Master of Science, Computer Science.
General Note:
Department of Computer Science and Engineering
Statement of Responsibility:
by Tara K. Hunter.

Record Information

Source Institution:
University of Colorado Denver
Rights Management:
All applicable rights reserved by the source institution and holding location.
Resource Identifier:
37887112 ( OCLC )
ocm37887112
Classification:
LD1190.E52 1997m .H86 ( lcc )

Full Text
Resource Allocation Across Geographically
Distributed Domains:
An Application in the Emergency Services Industry
by
Tara K. Hunter
B.A., University of Colorado at Denver, 1989
A thesis submitted to the
University of Colorado at Denver
in partial fulfillment
of the requirements for the degree of
Master of Science
Computer Science
1997


This thesis for the Master of Science
degree by
Tara K. Hunter
has been approved
by
William Wolfe
Gita Alaghband
Jay Rothman
Date


Hunter, Tara K. (M.S., Computer Science)
Resource Allocation Across Geographically Distributed Domains:
An Application in the Emergency Services Industry
Thesis directed by Professor William Wolfe
ABSTRACT
Emergency service industries, such as fire departments, face significant
problems in managing geographically distributed resources. Tactical
issues address the immediate response to a request for assistance. Strategic
issues address the placement of resources such that the greatest overall benefit
can be realized. This includes facility location as well as the assignment of
resources to each facility.
Typically, a response has been based on a look-up table that assumes all
available units are in their assigned stations. In reality, the units are often
elsewhere (e.g., doing inspections, training, investigating other incidents). In a
situation where seconds can mean the difference between life and death, it's
important to know exactly which units are closest to a given incident so the
response can be as effective as possible.
Recent technological advances allow for the automatic location of vehicles
through the use of the Department of Defense's Global Positioning System
(GPS). This, combined with transportation network analysis tools and
geographic information system digital mapping concepts, opens the way for a
solution paradigm based on up-to-date location information.
This thesis explores resource distribution issues in the context of a Computer
Aided Dispatch system. The focus is on response recommendation and
resource reallocation during periods of intense demand within one or more
areas of the system.


This abstract accurately represents the content of the candidate's thesis. I
recommend its publication.


DEDICATION
This thesis is dedicated to my son Erik, who, at five, demonstrates wisdom
beyond my years.


ACKNOWLEDGEMENTS
I would like to thank the following people, without whom this endeavor would
never have succeeded.
Dr. William Wolfe for his wealth of knowledge, and his interest and enthusiasm
in this project.
Kent Phelps for his wealth of technical expertise and his unwavering support
and belief in me.
Chief Charles Burdick of the Littleton Fire Department for his undying patience
and ever open door. Without his vision and insight, this project would never
have even been proposed.
The dedicated men and women in the South Metro Fire Protection District
dispatch center for sharing their insights into not only the current system and its
shortcomings, but also their understanding of how things should be.


CONTENTS
Chapter
1 Introduction 1
1.1 Motivation 1
1.2 Problem Overview 2
1.2.1 The Current System 2
1.2.2 Requirements and Constraints of the Proposed System 4
2 Current Technology 7
2.1 Geographic Information Systems 7
2.2 Global Positioning System and Automatic Vehicle Locators 8
3. Resource Allocation and Deployment Models - Literature Review 10
3.1 Overview 10
3.2 Normative Models 12
3.2.1 Location Criteria in Objective Functions 12
3.2.2 Discrete Location-Allocation Models 13
3.3 Descriptive (Evaluative) Models 18
3.3.1 Simulation Models 18
3.3.2 The Hypercube Queuing Model 19
3.3.3 A Parametric Solution Model 20
3.4 Heuristics 20
3.4.1 Linear Programming Methods 20
3.4.2 The Rand Institute Study on Fire Department Deployment Analysis 20


4. Solution Formulations: The South Metro Fire Protection District
Computer Aided Dispatch System ....................................... 25
4.1 Response Recommendations: The Dynamic Allocation of Resources .... 25
4.1.1 The Data Models ................................................ 26
4.1.2 Finding the Closest Units: The Transportation Network .......... 27
4.1.3 Path Finding Algorithms: Dijkstra's and Grassfire .............. 29
4.2 Move-up: Static Resource Allocation .............................. 33
4.2.1 Criteria ....................................................... 33
4.2.2 The Solution Approach .......................................... 34
4.2.3 Finding the p-Median of the Ranked Coverage Matrix ............. 39
4.3 Testing the Solution: A Simulation Model ......................... 40
Appendix
A. Glossary .......................................................... 44
B. Source Code ....................................................... 45
References ........................................................... 83


1 Introduction
1.1 Motivation
Just as with any industry, the emergency services industry strives to optimize
its "profits" (in this case, the profit is the quality of service to the community)
within the various constraints on its abilities. The goal of a fire department is to
minimize property damage, injuries and loss of life subject to budgetary
constraints.
The amount of time firefighters spend actually servicing calls for assistance is
relatively small.1 Due to this, the traditional solution to assigning resources to
incidents has been based on the assumption that most of the time fire
companies are in their stations. Each station is assigned a district of
responsibility based on the experience and knowledge of the domain experts.
Response recommendations come from a look-up table that was painstakingly
derived by senior fire department personnel analyzing the system and deciding
which regions were closest to which stations, not only for the first responding
units, but also for the case when multiple units are needed or when the first
responding unit is unavailable.
During periods of high demand, portions of the system may become
uncovered, i.e., there may be areas that cannot be reached within an
acceptable amount of time. In this situation, the fire department will seek to
lower the risk of system failure by relocating available units into the deficient
areas2. It is typically left up to the employees in the dispatch center to recognize
the need for such a relocation and to determine which units should be moved.
This approach is deficient in the following ways. First, firefighters may spend a
relatively small portion of their time actually responding to incidents, but, this
does not imply that they are always at their assigned stations. A significant
amount of time is spent off site performing inspections, in training classes and
investigating previous incidents. Second, with respect to the need for a
redistribution of resources, the times when the need is most likely to arise are
the very times when dispatch personnel are the busiest and are least likely to
1 See, for instance, Hendrick, et al (1972): actual work time for
pumper companies in the Denver fire department was found to be between 2 and 4 percent of total
work time.
2 This is referred to as move-up.


take the time to analyze the state of the system and recommend a relocation.
Dispatchers end up shouldering a great deal of responsibility for tracking and
determining assignment of resources.
In the past 25 years much work has been done in the field of spatial analysis
and distribution of resources in geographically distributed domains. Combining
these concepts with the graphical user interface and relational database design
concepts of geographic information systems allows for a more appropriate
solution approach to these problems based on the current state of an ever-changing
system. Recent advances in the areas of Global Positioning Systems
(GPS) and Automatic Vehicle Locators (AVL) supply the means to relieve
dispatchers of a great deal of unnecessary responsibility.
1.2 Problem Overview
This research was conducted in conjunction with the design and implementation
of a computer aided dispatch (CAD) system for the South Metro Fire Protection
District (SMFPD) dispatch center, located in Littleton, Colorado.3 This paper will
address the response recommendation and the resource reallocation portions of
the system.4 Discussed will be both the formal
models and solutions of the problem domain, as found in the literature, and a
model of the specific problem presented by the SMFPD.
1.2.1 The Current System
The current CAD system utilized by SMFPD is a static system based on a set of
look up tables. The entire 500 square mile district has been divided into
approximately 174 square mile geographic cells called basic geographic units
(BGUs). The Geofile contains all address ranges within each BGU and a
pointer to an entry in the runcard table. The runcards contain details about
which units should be dispatched for a given incident type.
3 A special thanks goes to Fire Chief Charles Burdick who served as the main interface
and primary contact for domain knowledge associated with this project.
4 A response recommendation presents a list to the dispatcher that contains units
considered to be the most desirable to respond to a particular incident. The response
reallocation portion, more commonly referred to as "move-up", will scan the system for
areas deficient in coverage and make recommendations about redistribution of the units
that are available to protect against future incidents.


This system has two major shortcomings. One is the frustration with the overall
abstraction and poor design of the data model; the other is that a static model is
being used to model a dynamic system. A more dynamic and robust solution is
being sought through the use of currently available technology.
The current data model is based on units to be dispatched to a given incident
type. At first glance, this seems to be a reasonable approach. But in practice,
this model is too rigid and restrictive. The most glaring example of this is
the case where a single unit can serve the functionality of two or more logical
types of units. For instance, for a car fire, the typical response is to send an
engine and a rescue (paramedic) unit, but Littleton has one engine (currently
housed at station 15) that can serve as both. A model of the system that is
based on units would send an unnecessary unit to an incident whose response
is out of station 15. Initially, management seemed to think response should be
determined by station, but this turns out to be too static also, as firefighters don't
always stay home. They are involved in ongoing activities such as training and
inspections which have them out of the station on a regular basis. In addition,
once an incident has been handled, the unit will go back in service (is available
to respond to another incident) while still performing mop up activities at the site
of the current incident. A second frustration with the data model centers around
the fact that the database is a flat file system; changes in resources have a
ripple effect that causes a maintenance nightmare. A more appropriate data
model will be presented later.
A fire protection system is often viewed and modeled as a basically static
system (for instance Toregas, et al (1971) assume "...that each facility has
response capability at all times."), but in reality, it is anything but static. When
not responding to incidents, firefighters are often out of quarters for training
purposes, performing routine inspections of businesses and other facilities,
investigating incidents, etc. Every change in the system alters the
appropriateness of a given response, and might indicate a need for a system
reconfiguration.
The current CAD doesn't have the ability to track the location of resources or to
make recommendations surrounding the relocation of those resources.
Response recommendations are based on the assumption that available
resources are always in their home locations when not out responding to a call.
Move-up is accomplished based on the gut feel of the dispatchers on duty. This
system is inherently problematic in that the times when a need for a move-up
is likely to occur are the very times when the dispatchers are the busiest and
are least able to consider evaluating system needs. Computer assistance for
move-ups includes monitoring the system, making suggestions for when a
relocation should occur and suggestions for redistribution of resources to


maintain minimal coverage standards and make the best use of available
resources.
1.2.2 Requirements and Constraints of the Proposed System
Dynamic allocation of resources has two major areas of consideration. The first
is related to the management of various types of incidents: what resources are
needed in response to a given incident type. This must take into account not
only the current state of the system but also possible future states; there is
always a trade-off between conservation of resources and having needed
resources available as quickly as possible. These types of decisions are made
on a general level and are statically fixed for each type of incident. However,
since reality never can be fully outlined, the judgment of the dispatcher as to
what ultimately is needed for a given call is the bottom line on resource
allocation. The dispatchers are the domain experts, and while they are given
guidance and direction by management and policies, they ultimately make the
decisions about the demands of a particular call. In a realm where seconds can
make the difference between life and death, good decisions need to be made
quickly. The best use for a computer system is to provide as much information
as possible in a quick and comprehensible manner, as a way of supporting the
decision-making process.
The intent is to build a system that tracks all available resources and makes
recommendations for the allocation and distribution of those resources based
on the current state of the system. The vision is to eventually equip all units with
AVL (automatic vehicle locators), thus making available the exact
latitude/longitude location of all mobile resources at any given moment. This
data is to be used in determining the most appropriate response
recommendations. As resources are allocated, the system will be responsible
for detecting the need for a redistribution of available resources and making
high quality suggestions regarding allocation of those resources.
Since this is an approach that is conceptually different from that which the
departments have operated under in the past, it is important that the system
make recommendations that are believable by the domain experts. Any
unreasonable recommendations will lead to a distrust of the system. The ability
to trace the logic behind recommendations will help with trust and belief issues,
as well as assist in tracking down problems with the system or faulty data.


The effectiveness of a fire protection district is difficult to quantify.5 Ultimately,
the objective is to maximize the usage of available resources in such a way as
to minimize property damage and the loss of life. These metrics are the
consequence of the activities of the department and are directly related to its
policies and procedures, but their variance under differing policy conditions
cannot be directly observed. For this reason, internal measures of
performance, primarily response times of each arriving unit, are used as a proxy
measure to judge the merit of a given policy for static distribution as well as
dynamic allocation of resources. Typically, response time is the primary metric
used (Mirchandani and Reilly, 1987).
The Littleton fire department has determined the maximum acceptable first unit
response time measure to be about 4 minutes from the time the call is first
taken to the time that the first unit arrives on scene. This value is based on
various factors such as the amount of time the brain can go without oxygen
before permanent loss of functioning. Minimizing the number of responses that
exceed the 4-minute limit is the primary requirement for the system.
In addition to basic coverage standards, the issues surrounding the relocation
of available vehicles need to take into account the work load of the various
stations. Stations with heavier work loads are more likely to need the presence
of available resources than those that are more often idle. If possible,
reconfiguration suggestions should take into account the distance to be traveled
and the inconvenience of the firefighters involved. With 24-hour shifts,
firefighters to some extent live in their assigned stations, and moves to other
quarters are an inconvenience; thus the fewer moves that can be made to
accomplish the objectives of the problem, the better. Longer distance moves
are less preferred than shorter ones: the farther the move, the longer the
unprotected area goes without adequate coverage.
SMFPD maintains a number of agreements for automatic and mutual aid both
within and outside of the constituent fire departments.6 In a worst case
scenario, a requirement of the system would be to determine system
configuration of the entire metro Denver area including even possibly outlying
5 The difficulty of defining objectives that reflect the true metrics of a fire protection
system is discussed throughout the literature. See, for instance, Hendrick, et al (1975)
and Walker, Chaiken and Ignall (1979).
6 Automatic aid agreements allow for the automatic sharing of resources among the fire
departments and are usually limited to a geographical area within a certain distance of
the shared boundaries of the departments. With mutual aid agreements, the needed
resources are requested through the dispatch center of the requested department and
are granted or denied based on the current demand within their system.


areas such as Boulder and Castle Rock, as would be needed in the case of a
large scale disaster.
On a practical level, there is always the cost constraint. In this case the biggest
concern seems to be the cost of acquiring high quality data. Many hours have
already been invested in the currently used geofile. To make use of this file is, if for
no other reason, at least politically advisable. Even with the use of this file, a
large amount of data is still needed to make this a truly dynamic system.
Definition of the transportation network complete with lat/long coordinates is
required.
As an additional constraint, this is a real-time system; any calculations must
either happen very quickly or must be accomplished prior to the time they are
needed. Any visual interfaces (this is especially a concern with respect to
mapping interfaces) should not impede the dispatch process.


2 Current Technology
2.1 Geographic Information Systems
Geographic Information Systems are computer systems that encapsulate a
number of tools needed for computer analysis of geographic data. In the most
generic sense, they are maps represented and stored in a database. The
system consists of three major parts: a relational database structure to store the
map features and attributes of these features; a set of standard analysis tools to
assist with exploration and assimilation of associated data; and a graphical
interface to display the map and analysis results. They are useful for analyzing
objects and phenomena within spatially distributed domains (Aronoff, 1989).
Traditionally, in the fire services industry, the allocation of resources to incidents
has had a rather static solution, not because the problem is static but because
the tools and technology needed to address the problem dynamically were not
available. As technology has advanced, the horsepower and memory needed to
manage the data necessary for addressing this as a dynamic problem are now
available to the vast majority of the population. Gains in abstraction of spatial
domains using geographic information systems (GIS), the modeling tools
encapsulated by these systems and recent cost effective access to global
positioning technology have made the time ripe for the movement into dynamic
solutions to dispatch policies based entirely on the current state of the system.
Transportation networks have been used as an abstraction to study traffic flow
problems for quite some time and analysis algorithms are well established
(Lupien, Moreland and Dangermond, 1987). Street segments are represented
as arcs and intersections and interchanges as nodes in a graph. Smaller
networks can be represented with adjacency structures; larger problems, as are
typically encountered in real-life analysis, need a database structure to maintain
needed information. The concepts and methodologies found in vector based
GIS prove to be powerful tools in analysis of transportation networks.
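To make the abstraction concrete, the following minimal sketch (in Python, with hypothetical node identifiers, street names and travel times that are not drawn from the SMFPD data) shows one way an adjacency structure for such a network might be held in memory:

from collections import defaultdict

class TransportationNetwork:
    """Intersections are nodes; street segments are arcs weighted by travel time."""

    def __init__(self):
        self.adjacency = defaultdict(list)   # node -> list of (neighbor, seconds, street name)
        self.coordinates = {}                # node -> (latitude, longitude)

    def add_node(self, node_id, lat, lon):
        self.coordinates[node_id] = (lat, lon)

    def add_arc(self, a, b, travel_seconds, street_name, two_way=True):
        self.adjacency[a].append((b, travel_seconds, street_name))
        if two_way:
            self.adjacency[b].append((a, travel_seconds, street_name))

# Hypothetical intersections and a connecting street segment.
net = TransportationNetwork()
net.add_node("N1", 39.613, -105.016)
net.add_node("N2", 39.615, -105.016)
net.add_arc("N1", "N2", travel_seconds=35, street_name="S Broadway")
print(net.adjacency["N1"])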
GIS data is stored in either a raster format or a vector format. The raster format
stores data as cells or pixels in a grid. Each cell is assigned attributes to
represent a set area on a map (i.e. each pixel represents one square meter and
may be assigned values such as "lake" or "road"). Raster formatting is usually
used in environmental and agricultural applications where the maps represent
more continuous data such as terrain features. In vector format, maps are
stored as a collection of objects. The database is comprised of tables of map


objects: lines or arcs, nodes, vertices, and polygons. Attributes such as street
names, type of street, or description of a polygon (e.g., a lake) are attached to
the objects themselves, as opposed to the cells in a grid that are associated with
the object as with raster data. Vector formats are usually used when the
mapping application deals with entities that are readily represented as lines,
points and areas, such as street systems or communications networks. Raster
formatting offers the advantage of faster display capabilities.
The core database layout in a vector-based GIS includes a table of nodes
representing intersections, a table of arcs representing street segments and
perhaps a table of polygon topology representing bounded areas such as parks
and lakes.
The relational nature of the database facilitates the storage of a number of
different 'layers' of data about the geographic area. Attributes about the network
such as street names and demographics are stored in different tables which can
be combined and manipulated. Analysis results are then written to yet another
data layer. This type of data layout is quite useful when analyzing objectives
with multiple and often conflicting constraints. Constraints can be represented
within different data layers which are then selected and combined in a quick and
efficient manner.
Both formats have their advantages according to the type of application being
considered. Neither format is entirely appropriate for the current application.
The vector format is clearly the choice for representing a transportation network.
The difficulty with vector-based GIS for use in a real-time system is the speed of
the graphic display. Every point and arc in the network is drawn as an individual
object, making the display of maps unacceptably slow. This difficulty can be
overcome through the use of the vector abstraction for the underlying
calculations, with graphical presentation being accomplished through the use of
a database of bitmaps of various resolutions. Graphic display of calculation
results is drawn on an overlay on top of the bitmap, using a translation from
lat/long coordinates to pixels.
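As a simple illustration of the overlay translation just described, the following sketch assumes a purely linear mapping between a bitmap's lat/long bounding box and its pixel dimensions; the bounding box and coordinates are hypothetical, and a production system would account for map projection:

def latlon_to_pixel(lat, lon, bbox, width_px, height_px):
    """Map a lat/long coordinate onto a bitmap whose bounding box is (min_lat, min_lon, max_lat, max_lon)."""
    min_lat, min_lon, max_lat, max_lon = bbox
    x = (lon - min_lon) / (max_lon - min_lon) * width_px
    y = (max_lat - lat) / (max_lat - min_lat) * height_px   # pixel y grows downward
    return int(round(x)), int(round(y))

# Hypothetical bitmap covering part of the district.
print(latlon_to_pixel(39.60, -104.95, (39.50, -105.10, 39.70, -104.80), 1024, 768))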
2.2 Global Positioning System and Automatic Vehicle Locators
The Global Positioning System (GPS) is a constellation of 24 satellites
belonging to the Department of Defense. Automatic Vehicle Locator units (AVL)
use signals received from at least three of these satellites to calculate their own
latitude/longitude position. This position is then rebroadcast to be picked up by a
module tied in to the dispatch system. This technology relieves the dispatcher


from having to manually track the location of vehicles that are not in quarters. It
also allows the CAD system to have up-to-date information about the
location of all mobile resources in the system, ensuring the best possible
response recommendations.
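A minimal sketch of the dispatch-side bookkeeping this enables is shown below; the update interface and field names are assumptions for illustration and do not reflect the actual SMFPD module:

import time

class UnitLocationTable:
    """Dispatch-side table of the most recent AVL-reported position for each unit."""

    def __init__(self):
        self.positions = {}   # unit_id -> (latitude, longitude, time received)

    def update(self, unit_id, lat, lon, received=None):
        self.positions[unit_id] = (lat, lon, received if received is not None else time.time())

    def current_position(self, unit_id, max_age_seconds=120):
        """Return the last reported position if it is recent enough, otherwise None."""
        record = self.positions.get(unit_id)
        if record is None or time.time() - record[2] > max_age_seconds:
            return None
        return record[0], record[1]

# Hypothetical report from an engine that is out of quarters.
table = UnitLocationTable()
table.update("E15", 39.6133, -105.0166)
print(table.current_position("E15"))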


3. Resource Allocation and Deployment Models -
Literature Review
3.1 Overview
The issues surrounding the deployment of geographically distributed resources
can be divided into those which by nature are strategic and those which are
more tactical (Ball and Feng, 1993). Strategic issues involve planning decisions
about where to locate emergency services stations. Tactical issues are
concerned with the actual allocation of available resources to current static
locations, response allocation to incidents and the redistribution of resources
when the coverage in a portion of the system is found to be deficient. The
problem defined here is most concerned with the latter: tactical approaches to
the problems of allocating available resources such that workload is distributed
and service is maximized throughout the system. Many concepts and solution
approaches are common to both types of issues and thus will be discussed, but
the major emphasis will be placed on dispatch policies and the reallocation of
available resources. The problem addressed here is not that of optimal site
location but rather optimal allocation of resources to existing sites. The second
problem is, in its essence, the same as the first but with the additional
constraint that only existing sites be considered for inclusion in the problem set.
We are assigning units to a subset of all stations as opposed to assigning
stations to a subset of all nodes.
Allocation of mobile resources to static sites is an ongoing problem. Even
assuming that the stations were well located when they were built, the demand
configuration of any system changes over time. As neighborhoods age, their
demographics change and the emergency services needs change along with
them. While it may not be feasible to relocate stations, the allocation of
resources to existing sites can have a significant impact on the service to the
system as a whole.
First addressed are the various types of criteria typically used in the selection of
optimal locations. Then, two major groups of models will be considered:
normative models, which seek to find an optimal or ideal state to which real
systems aspire, and descriptive models, which attempt to model the real system


itself (Scott, 1971). Finally, as these models are, in general, computationally
infeasible,7 section 3.4 will discuss various heuristic solutions to the problem.
The following table lists the definitions of the various symbols used:

j : possible location of a facility
i : location of (user) node i
y_j : location variable, equal to 0 if no facility is established at point j and 1 if a facility is established at point j
x_i : coverage variable, equal to 1 if the user node at site i is covered by a facility within the maximum acceptable distance, zero otherwise
x_ij : equal to 1 if the facility at location j serves site i
a_ij : indicates whether point i can be served by a unit at site j without violating constraints
s : maximum acceptable response time or distance
N_i : the set of nodes that are located within s of node i
d_ij : the response time or distance from any node j to node i
t : maximum travel time standard
D_i : expected demand at node i
p : the number of facilities to be located

7 The p-center and p-median problems have been proven to be NP-hard (Kariv and
Hakimi, 1979) and Cooper (1964) has shown exponential computation times for the
general location-allocation problem.


3.2 Normative Models
3.2.1 Location Criteria in Objective Functions
The predominant criteria used in the selection of optimal site locations are the
median objective and the center objective. Both were first introduced by Hakimi
in 1964. The median of a graph is the point on the network where the total
(weighted) distance of vertices from that point is at a minimum. As an objective,
it is an efficiency measure reflective of total system wide travel time. The center
objective is an effectiveness or equity measure that looks at the travel time of
the most distant source-destination pair. It seeks the point that minimizes the
maximum (weighted) distance to any other point on the network. The center
objective is considered to be a 'coverage' measure and is thus typically used in
locating emergency service facilities (Halpern and Maimon, 1983).
A number of other measures have been considered independently and in
combination. Halpern and Maimon (1983) studied the antagonism between four
locational measures: the median and center measures as defined above, and
the variance and Lorenz measures. The variance measure is the sum of the
(weighted) squared differences between the distance of a vertex from the facility
and the average distance of all vertices from the facility. It is considered to be a
dispersion measure. The Lorenz measure is an equity measure that compares
the distance of the p nearest vertices to the total distance of all vertices. The
vertices and distances can be weighted.
Halpern and Maimon measured the effect of choosing one measure over
another by choosing an optimal location using one objective and calculating
the loss from their optimal values that the other objective functions then show. They
found that the Lorenz measure located a center point that was significantly
separated, in the physical sense and in the value of the objective function, from
the results that would be obtained using another measure. The choice to use
either the center or variance measures for the objective function yielded the
smallest changes in the objective function results of the other measures. These
points also seemed to be generally located close to each other in space.
Moderate losses in the value of the center and variance measures were found
when the median objective was the optimizing factor.
It seemed to be almost universal in reading the literature that an equity criterion
is used as the primary criterion for location of emergency services type
resources. For instance, Toregas et al (1971), in their set covering model for
facility location, use the standard of having at least one fire fighting company
within x minutes of every geographic unit in the coverage district.
The Rand study for the New York City Fire Department8 used hierarchical
criteria in decisions regarding relocation of fire companies. The primary criterion
was to ensure that at least one of the closest three engine companies and at
least one of the closest two ladder companies are available to every geographic
unit. This criterion implicitly reflects the status quo of the department, including
distribution of workload. A secondary criterion was incorporated in the cost
function used to decide which companies should relocate. This function
implicitly includes values reflective of both efficiency and work load criteria
(expected value of the response time within the affected areas) (Walker,
Chaiken and Ignall, 1979).
The Rand study allowed the equity criterion to be implicitly defined by using the
requirement that for every demand point in the system, at least one of the
closest 3 engine houses and one of the closest 2 ladder houses contain an
available company. Considering the computer resources and tools available to
them and the demographics of the region they were dealing with (a densely
populated urban area as opposed to the more diversely populated, sometimes
rural areas included in the SMFPD), this was probably a reasonable solution.
However, this policy includes assumptions that maintain a status quo and that
are not necessarily appropriate to a different problem domain. In a rapidly
growing area such as the northern portions of Douglas County that are in the
SMFPD, It may be more critical that certain stations are occupied in order to
maintain a minimal level of coverage.
3.2.2 Discrete Location-Allocation Models
3.2.2.1 The P-median and P-center Problems
The classic p-median problem belongs to a class of problems termed minisum
problems. These models locate p facilities so as to minimize the sum of travel
times or distances within the system. The basic formulation is as follows:
8 This study is an extremely well documented case study of a very similar problem
domain to that being presented here. More is presented about it in section 3.4.2.


Minimize:

    z = Σ_{j=1}^{n} Σ_{i=1}^{m} d_ij x_ij                                        (1)

Subject to:

    Σ_{j=1}^{n} y_j ≤ p                                                          (2)

    Σ_{j=1}^{n} x_ij = 1          (i = 1, 2, ..., m)                             (3)

    x_ij ≤ y_j                    (i = 1, 2, ..., m; j = 1, 2, ..., n)           (4)

    y_j, x_ij ∈ {0, 1}            (i = 1, 2, ..., m; j = 1, 2, ..., n)           (5)

Constraint (2) ensures there are no more than p facilities located. Constraint (3) assures
that zone i is served, and constraint (4) assures that zone i is assigned to be served
by facility j only if there is a facility established there. Constraint (5) makes y_j
and x_ij zero-one integer variables.
The p-median problem uses a median criterion for facility location, which may
result in some areas having less than the minimum standard of coverage. The
p-center problem addresses this issue for the emergency services industry.
Dubbed a 'minimax' strategy, the p-center formulation seeks to minimize the
maximum cost that any demand point may incur (Mirchandani and Reilly, 1987).
The problem can be stated as:

Minimize:   Z

Subject to:   Z ≥ d_ij x_ij          (i = 1, 2, ..., m; j = 1, 2, ..., n)

and constraints (2)-(5) listed above.
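To make the two objectives concrete, the following brute-force sketch evaluates every combination of p candidate sites against a small, hypothetical travel-time matrix; as discussed in section 3.4, such enumeration is practical only for very small instances:

from itertools import combinations

def p_median(d, demand, p):
    """Choose p candidate sites minimizing total demand-weighted distance to the nearest open site."""
    sites = range(len(d[0]))
    best = None
    for chosen in combinations(sites, p):
        cost = sum(w * min(row[j] for j in chosen) for row, w in zip(d, demand))
        if best is None or cost < best[0]:
            best = (cost, chosen)
    return best

def p_center(d, p):
    """Choose p candidate sites minimizing the maximum distance from any node to its nearest open site."""
    sites = range(len(d[0]))
    best = None
    for chosen in combinations(sites, p):
        worst = max(min(row[j] for j in chosen) for row in d)
        if best is None or worst < best[0]:
            best = (worst, chosen)
    return best

# Hypothetical instance: 4 demand nodes (rows) and 3 candidate sites (columns).
d = [[2, 9, 7],
     [8, 3, 6],
     [5, 4, 1],
     [7, 8, 2]]
demand = [10, 4, 6, 3]
print(p_median(d, demand, 2))   # (total weighted distance, chosen sites)
print(p_center(d, 2))           # (worst-case distance, chosen sites)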
Berman, Larson and Chiu (1985) embed the 1-median problem in a general
queuing context. Instead of seeking to minimize just the average travel time of
the system, they seek to minimize the average cost of response which includes
costs related to delays in queue and/or the cost of refusing service (referral to


another server). The model utilizes stochastic travel and service times thus
incorporating temporal as well as spatial uncertainties in facility location.
3.2.2.2 The Set Covering Model
Set covering formulations seek to minimize the number of sites or stations
required to ensure that no service area is more than a pre-specified distance or
travel time from a station. Toregas, Swain, ReVelle and Bergman (1971) were
among the first to view emergency service facility siting as a set covering
problem. In the basic formulation of the model, the problem may be written as
follows:
Minimize:   z = Σ_{j=1}^{n} y_j                                              (1)

Subject to:   Σ_{j ∈ N_i} y_j ≥ 1          (i = 1, 2, ..., n)                (2)

              y_j ∈ {0, 1}                 (j = 1, 2, ..., n)                (3)

Equation (1) is the objective function, which seeks to minimize the total number
of facilities used, subject to the constraints of (2) and (3). Constraint (2) requires
that at least one service facility j be located in each N_i. Constraint (3)
makes y_j a zero-one integer variable; fractional values for y_j are not
acceptable.
Toregas et al (1971) used a linear program with an added single cut constraint
to obtain integer solutions. The problem is first solved without the cut and a
number of servers m is obtained where generally m is non-integer. The problem
is then solved again with the addition of the cut:
    Σ_{j=1}^{n} y_j ≥ [m] + 1,   where [m] is the integer part of m.
Addition of this cut does not guarantee convergence to integer solutions for the
number of facilities to be located or for their locations (as illustrated by Daskin
and Stern (1981)); however, the computational experience of the authors was
excellent. Solutions for up to 50 nodes in over 150 runs always yielded integer
solutions.
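A small numerical sketch of this procedure is shown below, using an off-the-shelf LP solver on a hypothetical coverage matrix; it is an illustration of the approach, not the authors' original code:

import numpy as np
from scipy.optimize import linprog

# Hypothetical coverage matrix: A[i][j] = 1 if candidate site j lies within the
# acceptable distance s of demand node i (three nodes, three candidate sites).
A = np.array([[1, 1, 0],
              [0, 1, 1],
              [1, 0, 1]])
c = np.ones(A.shape[1])                      # minimize the number of facilities

def solve(extra_A=None, extra_b=None):
    A_ub, b_ub = -A, -np.ones(A.shape[0])    # coverage: A y >= 1  ->  -A y <= -1
    if extra_A is not None:
        A_ub = np.vstack([A_ub, extra_A])
        b_ub = np.append(b_ub, extra_b)
    return linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(0, 1)] * A.shape[1])

res = solve()
m = res.fun                                  # LP relaxation value (1.5 here, fractional)
if not np.isclose(m, round(m)):
    # Re-solve with the single cut: sum(y) >= [m] + 1.
    cut_row = -np.ones((1, A.shape[1]))
    res = solve(cut_row, -(np.floor(m) + 1))
print(np.round(res.x, 3), res.fun)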
Many variants to this original solution have been proposed and employed.
Two of the more important ones are the maximum covering and maximum
expected covering location problems.
3.2.2.3 The MCL and MECL Problems
The maximum covering location problem (Church and ReVelle, 1974) seeks to
maximize the amount of demand covered in the situation where all points
cannot be covered using the maximal travel time constraint due to budgetary
constraints. This model takes into account the demand level of each node or
zone in the network and maximizes the amount of demand served within the
system. The problem is formulated as follows:

Maximize:   z = Σ_{i=1}^{m} D_i x_i

Subject to:   Σ_{j ∈ N_i} y_j ≥ x_i          (i = 1, 2, ..., m)

              Σ_{j=1}^{n} y_j = p

              y_j ∈ {0, 1}                   (j = 1, 2, ..., n)

              x_i ∈ {0, 1}                   (i = 1, 2, ..., m)
By solving this problem for various values of p, a cost effectiveness curve can
be obtained indicating the trade off between benefit and cost. Church and
ReVelle also suggested addition of mandatory coverage constraints where a
distance of T (T > S) is a hard constraint reflecting the maximum distance any
user node could be from a server. The solution then selects the optimal
configuration of p facilities that maximizes the number of nodes covered within
S of their closest facility. In effect, this combines the p-median and set covering
problems.
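As an illustration only (not the solution method used by Church and ReVelle), a simple greedy rule for the maximal covering objective can be sketched as follows, with hypothetical demands and coverage sets:

def greedy_max_cover(covers, demand, p):
    """Open p sites greedily, each time adding the site that covers the most additional demand."""
    open_sites, covered = [], set()
    for _ in range(p):
        candidates = [j for j in range(len(covers)) if j not in open_sites]
        best = max(candidates, key=lambda j: sum(demand[i] for i in covers[j] - covered))
        open_sites.append(best)
        covered |= covers[best]
    return open_sites, sum(demand[i] for i in covered)

# Hypothetical data: covers[j] = demand nodes within distance s of candidate site j.
covers = [{0, 1}, {1, 2, 3}, {3, 4}, {0, 4}]
demand = [10, 4, 6, 3, 8]
print(greedy_max_cover(covers, demand, p=2))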
The maximum expected covering model was introduced by Daskin (1982). The
model accounts for vehicle busy periods within a covering model framework. It
is an attempt to adjust the maximum covering model for some of the
shortcomings that analytic models seem to have over the more complex
queuing and simulation models.
The basic set covering model assumes that all servers in the system are
available at all times and that the closest server is the one allocated to a point
or zone. Especially given that many alternate optimal solutions often exist for
any given problem formulation (Daskin, 1987), it is often desirable to model
other objectives of the system, such as the need to consider interdistrict
responses in the case when the primary server for a district is unavailable, or
giving preference to existing sites in analysis of optimal facility location.
Hierarchical models incorporate secondary objectives in the objective function
to account for subordinate criteria.
3.2.2.4 Hierarchical Models
Hendrick et al. (1974) in a study conducted for the city of Denver, Colorado in
the early seventies, used a hierarchical objective function to evaluate locations
for possible new fire stations while giving preference to existing stations. The
purpose of the study was to evaluate the current configuration of stations and
make recommendations on ways to improve the system. The cost of building a
new facility is significant, but the resultant savings might outweigh the costs if
the movement of one or more facilities would result in the ability to close others
without loss of service to the rest of the system. To accommodate these
considerations, they came up with the following formulation for the objective
function:

Minimize:   z = (1 - ε) Σ_{j=1}^{r} y_j + Σ_{j=r+1}^{n} y_j

subject to the same constraints as the basic set covering model, where ε is a
positive constant less than 1/n and existing stations are indexed from 1 to r.
This objective function favors existing sites while also allowing for the
possibility of their replacement.


Kolesar and Walker (1972) used a hierarchical model in their solution formulation
for addressing relocation in the New York City fire department. Their primary
objective was an equity measure ensuring minimum coverage; the secondary objective was
an efficiency criterion aimed at minimizing total expected response time. They
used the requirement that at least one of the closest 3 engine companies and
one of the closest 2 ladder companies be available to all response areas of the
city. This used the current system configuration to implicitly define the equity
criteria. With the equity concerns taken care of, they then focused on efficiency
for deciding which companies should be moved when a relocation need has
been identified. More will be presented about this study in section 3.4.2.
Another relevant secondary objective addresses the issue of backup coverage
in the system. These models seek to maximize the number of zones that are
covered by more than one facility within the specified constraints. Daskin and
Stern (1981) use a hierarchical model that extends the set covering model to
address this secondary objective. Hogan and ReVelle (1986) propose two
models, one of which extends Daskin and Stern's model to include consideration
of the population (assuming population implies demand) of the covered zones.
The other model does the same for the maximal covering location problem.
3.3 Descriptive (Evaluative) Models
The models discussed so far have been prescriptive optimization models.
Simulation and queuing models are descriptive models which "...allow for a
richness of detail that is generally lacking in such deterministic optimization
models as the set covering and maximum covering models" (Daskin, 1982).
The added detail also means they are significantly more complex and
somewhat more difficult to understand. The two kinds of models can be viewed
as complementary and, as is discussed below, are often used in conjunction to
generate the best possible solutions.
3.3.1 Simulation Models
Simulation models are often used for making final location decisions once a
handful of potential sites have been identified or for testing location
configurations arrived at through other means against an existing configuration.
The Rand Institute Study (see section 3.4.2) used a simulation model to
compare different policies and methods for locating, relocating and dispatching
fire-fighting units. The model design and preliminary results are outlined in
Carter and Ignall (1970). The model consists of three parts: an incident
generator, a simulation program and an analysis program. The incident
generator takes historical data about incidents in New York City into account
and generates random incidents based on this distribution. The simulation
program tracks response time distribution and system state (unit availability)
characteristics over time. The post-simulation analysis program provides
statistical analyses of the simulation run and compares a given run with other
runs.
The Rand simulation program was modified and used by Hendrick, et al (1975)
in their study of the deployment of fire fighting resources in Denver in the '70s.
The simulation was used to evaluate and compare configurations arrived at
through the use of a set covering optimization model and the informed judgment
of the team of officials from the Denver Fire Department that were involved in
the study.
3.3.2 The Hypercube Queuing Model
This model, presented by Larson in 1974, uses a queuing theory approach to
analyze vehicle location and response district design. Larson views the state
space of the system as an N-dimensional unit hypercube; each vertex of the
hypercube corresponds to a particular combination of busy units. He develops
an algorithm for obtaining the steady-state probabilities of the system which
exploits many of the properties of the hypercube model. The performance
measures the model tracks include mean travel time, workload imbalance and
the number of interdistrict dispatches; the measures are tracked for each unit as
well as system-wide. The approach returns data about a particular
location/districting scheme and is intended to be used interactively by a system
planner with knowledge of the political and social constraints of the system. Due
to storage and execution time requirements, the algorithm is stated to be
practical for the placement of only about 11 or 12 units.
The hypercube model is an M/M/n queuing model; its tractability is dependent
on Markovian assumptions which require that each server have an exponential
service time. M/G/n type queuing models seem to be intractable for locating n
service facilities in a system where service times and queuing delays depend on
the location of the service facility (Berman, et al, 1985). The service time
distribution in the emergency services industry is typically not exponential.
Walker, Chaiken and Ignall (1979) show that for a small fire the probability for
service time lasting more than ten minutes is nearly 1.


3.3.3 A Parametric Solution Model
The Parametric allocation model developed by Rider (1975) was an outcome of
the Rand project in the 1970s. The model was designed to allocate resources,
not locate them. It takes the total number of fire companies to be deployed and
allocates them to various regions of the city. It uses a tradeoff parameter that
allows the user to configure the emphasis placed on the three major types of
objectives to be met: equalization of workloads in all regions of the city,
minimization of citywide travel times, or equalization of average travel times for
each region.
The model can be used in conjunction with a siting model to optimally locate the
companies allocated to each region.
3.4 Heuristics
The p-median, p-center and set covering models have all been proven to be NP-
hard (Hansen, Peeters, and Thisse, 1983). An optimal solution would require
consideration of all possible permutations of facility locations, an impossible feat
for all but the smallest problems. Thus, heuristic methods for solution are
justified.
3.4.1 Linear Programming Methods
Perhaps the most commonly employed heuristic methods are linear
programming techniques. The exact techniques are beyond the scope of this
paper. The interested reader is referred to Chvatal (1983). While linear
programming solution techniques are technically not polynomial time algorithms
(due to a small number of pathological problem formulations which have been
shown to be exponential in their running times), in practice, their running times
are very good, especially when the number of constraints and variables is small
(i.e. in the tens to hundreds as opposed to in the thousands).
3.4.2 The Rand Institute Study on Fire Department Deployment Analysis
In the early 70's, the city of New York and the Department of Housing and
Urban Development funded a significant study of the fire fighting operations in


New York City. Central to the study was analysis of allocation of fire fighting
resources. Studies were conducted that, among other things, addressed static
allocation (districting) and location of firehouses as well as dynamic allocation of
fire companies in terms of dispatch policies and relocation schemes during
periods of intense demand in concentrated areas of the geographic system.
(Walker, Chaiken and Ignall, 1979)
Relocation analysis involves determining when a relocation is needed (based on
some criteria), and then, determining which fire fighting company or companies
should be moved to fill the areas lacking in coverage.
In the relocation analysis, the project team initially proceeded with the
assumption that there was an optimal arrangement for m fire fighting companies
to be arranged into n locations. This approach turned out to be too
computationally expensive for the needed real time system. Their second
approach was to develop special purpose fast algorithms that produced near
optimal solutions. They then used the results to rank the fire stations in the
system according to desirability of having a company located in them and used
this list in the relocation algorithm. This approach turned out to have a number
of difficulties, not the least of which was the fact that the same companies
repeatedly were the ones recommended to relocate (regardless of the location
of the deficiency) due to their low ranking on the desirability list.
The final approach involved a heuristic solution that broke the problem into 4
parts. The solution at many levels leveraged the concept of the "response
neighborhood" of a station or set of stations. The response neighborhood (RN)
is defined as "a collection of alarm boxes that have the same unordered pair of
ladders (or engines) as first-due and second-due" (p. 649, Walker, Chaiken
and Ignall, 1979). Using this concept strictly limited the number of geographic
areas that needed to be considered by any calculations.
These are the stages of the solution design:
Stage 1 involves determining the need for a relocation. In this stage, the equity
criterion mentioned above (3.2.1) is used to find areas in the system that are
lacking the minimum coverage standard of having at least one of the closest 3
engine companies and at least one of the closest 2 ladder companies available
to it. The amount of time that these companies are anticipated to be unavailable
is considered in the analysis and no relocation recommendation is presented
unless the expected time for the deficiency is considerable.
Stage 2 involves a set covering problem, picking the stations to be filled to
ensure coverage of the system while minimizing the number of moves.


A greedy heuristic was used to generate sets of empty houses to be filled. The
first set is found by selecting empty houses in the order that would fill the largest
number of uncovered regions. The first house covers the largest number of
uncovered areas, the second house would be the one that covers the largest
number of the remaining uncovered regions, etc. until all uncovered regions are
covered. Successive sets are generated by picking first the house that would
cover the jth largest number of uncovered regions, with j = 2, 3, ..., P (P is arbitrary,
up to the number of empty houses; the larger P is, the more likely the optimal
solution will be found), and then proceeding as above to find the rest of the set.
The set with the smallest number of moves is passed on to stage 3. In the case
of ties, stage 3 is solved for all of the tied sets and the lowest total cost set is
passed on to stage 4.
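A sketch of this stage 2 heuristic, with hypothetical station names and coverage sets (the response neighborhoods each empty house would cover), might look like the following:

def candidate_fill_sets(covers, uncovered, P):
    """Generate up to P greedy sets of empty houses whose filling restores coverage."""
    houses = list(covers)
    ranked = sorted(houses, key=lambda h: len(covers[h] & uncovered), reverse=True)
    sets_found = []
    for j in range(min(P, len(ranked))):
        chosen = [ranked[j]]                       # vary the first house picked
        remaining = uncovered - covers[ranked[j]]
        while remaining:
            best = max(houses, key=lambda h: len(covers[h] & remaining))
            if not covers[best] & remaining:
                break                              # remaining areas cannot be covered
            chosen.append(best)
            remaining -= covers[best]
        sets_found.append(chosen)
    # The smallest set (fewest moves) is the one passed on to stage 3.
    return min(sets_found, key=len), sets_found

# Hypothetical empty houses and the uncovered response neighborhoods each would cover.
covers = {"Station 11": {"RN1", "RN2"},
          "Station 14": {"RN2", "RN3"},
          "Station 17": {"RN3", "RN4"}}
print(candidate_fill_sets(covers, uncovered={"RN1", "RN2", "RN3", "RN4"}, P=2))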
In stage 3, all available companies are evaluated to find the best candidates for
relocation. A cost function is used to generate a ranked list of candidates for
relocation into each of the empty houses selected in stage 2. Since the
minimum coverage standard is covered by the equity standard used in stage 1,
the criterion used for selecting the units to be moved is one of efficiency. The
relocation that minimizes the total expected travel time to incidents in the
affected regions is sought. The problem has the general formulation of a
transportation problem with additional constraints to ensure that none of the
moved companies will leave uncovered areas:
Minimize:

    z = Σ_{j=1}^{M} Σ_{i=M+1}^{M+N} c_ij x_ij                                                    (1)

Subject to:

    Σ_{i=M+1}^{M+N} x_ij = 1            (j = 1, 2, ..., M)                                       (2)

    Σ_{j=0}^{M} x_ij = 1                (i = M+1, M+2, ..., M+N)                                 (3)

    Σ_{i=M+1}^{M+N} a_ik x_i0 + Σ_{i=M+1}^{M+N} Σ_{j=1}^{M} a_jk x_ij ≥ 1     (k = 1, 2, ..., L)  (4)

    x_ij ∈ {0, 1}                       for all i, j                                             (5)
Where M is the number of empty houses selected by stage 2 that are to be
filled, companies i = M+1, ..., M+N are the available companies, a_ik = 1 if available
company i serves response area k and is zero otherwise (a_jk is defined analogously
for a company located at empty house j), and c_ij is the cost function
as described below. A dummy empty house (j = 0) is used if available
company i is to stay in its own quarters. The objective function and the first two
constraints of this formulation are structured like an assignment or
transportation problem; the third set of constraints (4) assures that the minimal
coverage criteria are not violated.
The cost function c_ij = (c_2 - c_1)·{α_i (t + r_ij) + α_j r_ij}, where α_i = λ_i √A_i / v_i,
is used to predict the cost of having company i relocate into empty station j.
The function uses an approximation of expected response times in the regions
that will be affected by the proposed moves. It is based on findings by Kolesar
and Blum (1973) that the expected response distance in a region is proportional
to the square root of the area served per fire company: E(D_1) = c_1 √(A/N), where
D_1 is the distance traveled by the first arriving unit, A is the area of the region
being considered, N is the number of available units in the region and c_1 is a
constant of proportionality for the first arriving unit (estimation of these
constants is discussed on pages 180-183 of Walker, Chaiken and Ignall). The
other variables in the function are as follows: λ_i is the alarm rate in region i
(arrival according to a Poisson process), v_i is the average response speed in
region i, t is the predicted duration of the incident causing the need for a
relocation, and r_ij is the amount of time a unit would spend traveling from a
station in region i to a station in region j.9 The concept behind the equation
involves finding the difference in the total expected travel times for the regions
involved in the moves, allowing for the times when the second closest unit is
available rather than the first.
9 A complete derivation of the cost function can be found in Kolesar and Walker (1974).
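The following small sketch evaluates these quantities using the cost function as reconstructed above; the reconstruction is approximate and all region parameters shown are hypothetical:

from math import sqrt

def expected_response_distance(c1, area, n_units):
    """Kolesar-Blum square-root law: E(D1) = c1 * sqrt(A / N)."""
    return c1 * sqrt(area / n_units)

def alpha(alarm_rate, area, speed):
    """alpha = lambda * sqrt(A) / v for a region."""
    return alarm_rate * sqrt(area) / speed

def relocation_cost(c1, c2, alpha_i, alpha_j, t, r_ij):
    """c_ij = (c2 - c1) * (alpha_i * (t + r_ij) + alpha_j * r_ij)."""
    return (c2 - c1) * (alpha_i * (t + r_ij) + alpha_j * r_ij)

# Hypothetical region parameters (areas in square miles, rates in alarms per hour,
# speeds in miles per hour, times in hours).
print(expected_response_distance(c1=0.6, area=12.0, n_units=3))
print(relocation_cost(c1=0.6, c2=1.1,
                      alpha_i=alpha(alarm_rate=2.0, area=12.0, speed=25.0),
                      alpha_j=alpha(alarm_rate=1.2, area=9.0, speed=30.0),
                      t=0.75, r_ij=0.15))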


The heuristic used to solve stage 3 generates a ranked list of candidates for
relocation into each of the empty houses selected in stage 2 using the cost
function. The lowest cost moves are then chosen and feasibility of the moves is
checked to ensure maintenance of system-wide coverage standards. If a
feasible relocation set is found in this manner, it is used, as it is optimal.
Otherwise, several feasible relocations are generated and the one representing
the lowest total cost is used. Feasible relocations are generated by changing
the order in which the houses are filled and by considering the kth lowest cost
move in place of the lowest cost move.
Stage 4 takes the units picked by stage 3 and solves an assignment problem to
minimize the total travel distance for all relocating units.
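Stage 4 is a standard assignment problem; a minimal sketch using an off-the-shelf solver on a hypothetical company-to-house distance matrix:

import numpy as np
from scipy.optimize import linear_sum_assignment

# Hypothetical travel distances (miles) from each relocating company (rows)
# to each empty house chosen in the earlier stages (columns).
distance = np.array([[4.2, 1.8, 6.0],
                     [2.5, 3.9, 2.1],
                     [5.0, 4.4, 0.9]])
rows, cols = linear_sum_assignment(distance)
for company, house in zip(rows, cols):
    print(f"company {company} -> house {house} ({distance[company, house]} miles)")
print("total distance:", distance[rows, cols].sum())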


4. Solution Formulations: The South Metro Fire Protection
District Computer Aided Dispatch System
The resource allocation responsibilities of an emergency services dispatch
system divide easily into two major areas: the issues surrounding
recommending units to respond to a given incident in a dynamic, constantly
changing system, and the tracking of the state of the overall system to ensure
adequate coverage within all areas of the system.
4.1 Response Recommendations: The Dynamic Allocation of
Resources
Dynamic allocation of resources deals with the decisions surrounding the
dispatch of units to incidents. The system makes response recommendations
for consideration by the dispatcher. For the most part, dispatch policies at
SMFPD are static according to incident type with dispatch personnel ultimately
deciding which units should respond to a given incident. The goal of the CAD
system is to make suggestions that are reliable (i.e. believable by the domain
experts, so that the system can be trusted) and are based on the current state
of available resources. To do this, the system must track all resources in the
system, their functionality, location and actual as well as potential availability.
Static resource allocation analysis needs to take into account the demand
configuration of the system and potential unavailability of resources (so that
secondary response can be optimally planned for), but when an incident arises,
the only criterion is that the closest available units that are capable of satisfying
the request for service are the ones that are dispatched to the incident.
When a call comes in to the dispatch center, the basic scenario involves first
finding where the incident is and then what type of situation is being presented.
Dispatch obtains an address and elicits enough information to categorize the
incident. The system then takes the address and obtains a latitude/longitude
location for the address and calculates the distance from the incident to all other
nodes in the network. The database is then queried to find available units that
are capable of fulfilling the needs of the incident. A list of lists is built. Each list is
a set of units that can meet the needs of the incident. The sets are ordered
with the closest units first. The first set is presented to the dispatcher as
the primary recommendation with the other recommendation sets available on
request.
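A simplified sketch of this recommendation step is shown below. The unit records, capability names and travel times are hypothetical, the real system derives travel times from the transportation network rather than a precomputed table, and the sketch does not collapse a dual-purpose unit into a single assignment as discussed in section 4.1.1:

def recommend(units, required_capabilities, travel_time, max_sets=3):
    """Build ordered response recommendation sets from the closest capable, available units."""
    ranked = {}
    for cap in required_capabilities:
        candidates = [u for u in units if u["available"] and cap in u["capabilities"]]
        ranked[cap] = sorted(candidates, key=lambda u: travel_time[u["id"]])
    n_sets = min(max_sets, *(len(v) for v in ranked.values()))
    return [[ranked[cap][k]["id"] for cap in required_capabilities]
            for k in range(n_sets)]

# Hypothetical unit records and travel times (seconds) to the incident location.
units = [
    {"id": "E15", "capabilities": {"engine", "rescue"}, "available": True},
    {"id": "E11", "capabilities": {"engine"}, "available": True},
    {"id": "R12", "capabilities": {"rescue"}, "available": True},
    {"id": "E17", "capabilities": {"engine"}, "available": False},
]
travel_time = {"E15": 240, "E11": 180, "R12": 300, "E17": 90}
print(recommend(units, ["engine", "rescue"], travel_time))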


4.1.1 The Data Models
A response recommendation arises from an analysis of two questions,
"what" and "where": what resources are needed for a given incident, and where
should they be sent from? The associated data models are supportive of the
issues surrounding these questions. The question of "what" corresponds to the
need for a model of the resources of the system, and the "where" requires a
model of the geographic domain.
The system currently in use by SMFPD employs a model in which units that are
to respond to a given incident type in a given BGU are assigned on runcards.
The runcards enumerate primary response units as well as backup units for
each BGU/incident type combination. The assumption is made that the units are
located in a station. An attempt to generalize this model to a relational database
model that would support dynamic allocation procedures proved to be
unworkable. At first glance, it made sense to assign unit types to incident types. For instance, a car accident would need an engine and a rescue unit in its response. The difficulty came in fitting in units that serve multiple functions. To see this, consider a unit whose primary function is as a fire engine but that is also equipped to serve as a rescue unit. Where most stations would send two units to the accident, the station with the dual purpose engine would send only one, and the assignment of unit types to incident types breaks down. The second idea was to recommend a response according to geographic location (i.e. by station). This approach too was deficient, in that there are times when a response must be split between stations; it is always desirable to get the first arriving unit to the scene as quickly as possible.
Looking at the problem with Jacobson's (1992) use case analysis approach
proved to be quite useful for finding an appropriate data model10. The actors of
this system are dispatchers and fire fighters. When a request for service comes
in, the dispatcher first determines the nature and location of the request. The
dispatch actor then attempts to find and dispatch the closest resources that will
adequately perform the functions needed by the requesting source. In the
course of this analysis, it became clear that what was really needed was a data representation modeled around the capability of the resources to perform needed functions.
10 In the use case approach, design of the system is modeled around the actors and the use cases for the system. The users of the system perform the roles of the actors and the use cases are the functions to be performed by the system.
The system supplies the ability to assign different responses based on the
demographics of the area in question. A structural fire in a densely populated
metropolitan area will require a different type of response than one in a remote
rural area. To allow for this, the data model relates each BGU with an incident
type allowing for separate responses to be defined for each BGU/Incident type
pair. This approach is flexible enough to allow for the tracking of critical locations that would require special responses. For instance, a company that has a large quantity of hazardous materials on hand would require that a hazmat unit be part of the recommended response for most incident types at that location. To do this, the location can be defined as its own BGU, with unique response types assigned to it. Figure 1 shows the physical model of the resources database.
4.1.2 Finding the Closest Units: The Transportation Network
Integral to the system is the transportation network. Data collection can be very
expensive and thus the kind of data and degree of granularity needed to solve
the problem was a major consideration. It was decided that the solution algorithms should be able to reasonably handle differing degrees of granularity, the highest level being the basic network of all major streets and their intersections. Since most of the BGUs are bordered by major streets, this seemed to be a reasonable starting place. All major decisions by the system, such as who should respond to a given incident and fulfillment of basic coverage criteria, are made using this highest level of granularity. The largest of the BGUs are approximately 1/4 mile square, so under most circumstances predicted travel times to the BGU containing the incident will be adequate for making resource allocation decisions. This will keep the amount of data to be collected to a minimum. Later, as time allows, the network can be improved upon to include all streets and even all addresses if desired.
The issue of finding the closest units that are available to respond to a given
incident requires a model of the transportation network involved. The typical
vector model used by most GIS was utilized for this representation. The issue of
collection of the vector data needed to represent the system was a big concern.
It was finally decided that since units typically run on major arteries for as much
of the distance as possible when responding to an incident, it would be
acceptable to gather data for just the major streets and their intersections.
One of the greatest advantages of the geographic information system relational
data model lies in the ability to pick and choose from different data layers
depending on any number of factors. Different conditions such as time of day,
weather, and road conditions will affect actual travel times within the network.
Layering the different constraints will allow for picking and combining those
which would most likely impact the network at any given time. Once automatic
vehicle locators are put on units, the system can obtain feedback about how
long it actually takes to get from point A to point B and can adapt itself in an
ongoing learning process.
Once it is determined which units can fill the request for service, the task is to find the closest available unit to the incident, with closeness measured in terms of actual travel time. The first step is to locate the incident within the network. Approximately 50% of the calls that come into the SMFPD dispatch center come from 911. The 911 system transmits data about the address and location (latitude/longitude) of the originating call. The remaining calls have to be located based on the address of the incident. SMFPD maintains a database of all address ranges within its districts of responsibility; unfortunately, these addresses are tied to the BGU system rather than to a lat/long location. A little
over a year ago, SMFPD was approached by Pierson Graphics to provide
address range data for their Digital Denver GIS product. As a result of an
agreement worked out between them, the Digital Denver database is used to
find latitude/longitude location information on incidents when it is needed.
The master Geofile is kept current with all addresses in the system being tied to
a BGU. For the few times when the address, for some reason, is not found in the
database, a manual system for locating the incident is needed. This is provided
by supplying the dispatcher the ability to click on a point on the map and having
the system locate the point relative to the internal data structures it needs to
make a response recommendation. This involves resolving the point to the
closest node in the transportation network as well as finding which BGU the
incident is in. Finding the closest node was simply a matter of iteratively
increasing the radius from the chosen point and selecting all nodes in the
database within that radius until the result set comes back non-empty and then
calculating the Euclidean distance to each node to find the closest one. Finding
the BGU the point is in is a more complex problem.
BGUs are stored as polygons. If the BGU boundaries were parallel to the lat/long axes, the problem of obtaining a BGU from a given point would be trivial, but BGUs have no such restriction and often follow odd shaped geographic features. So,
a general solution is needed. For this, I used an approach obtained from the
computational geometry chapter of Cormen, et al (1990).11
The approach uses the idea that if a point (P1) is on the interior of a simple polygon, any vector from P1 to a point exterior to the polygon (P2) must intersect the edges of the polygon an odd number of times. To find the BGU, I first located the node in the database that was closest to the chosen point, found all arcs adjacent to that node, and then retrieved all BGUs containing these arcs. These were the candidate BGUs. A vector from the chosen point to a random point outside the Denver metropolitan area was used, and the number of intersections of the edges of the candidate BGUs with this vector was counted.12 Only one of the candidates can have an odd number of intersections.
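A minimal sketch of the odd-crossing test is shown below (the types, names and test values are hypothetical, and the simplified straddle test ignores degenerate collinear and endpoint cases; this is not the production code of Appendix B):

program PointInBGU;
type
  TPoint2D = record
    x, y: double;   {longitude, latitude}
  end;
const
  Square: array[0..3] of TPoint2D =
    ((x: 0; y: 0), (x: 2; y: 0), (x: 2; y: 2), (x: 0; y: 2));

{true if segments p1-p2 and p3-p4 properly intersect (cross-product straddle test)}
function SegmentsIntersect(const p1, p2, p3, p4: TPoint2D): boolean;
  function Cross(const o, a, b: TPoint2D): double;
  begin
    Cross := (a.x - o.x) * (b.y - o.y) - (a.y - o.y) * (b.x - o.x);
  end;
var
  d1, d2, d3, d4: double;
begin
  d1 := Cross(p3, p4, p1);
  d2 := Cross(p3, p4, p2);
  d3 := Cross(p1, p2, p3);
  d4 := Cross(p1, p2, p4);
  SegmentsIntersect := ((d1 * d2) < 0) and ((d3 * d4) < 0);
end;

{count crossings of the segment pt-farPoint against the polygon edges;}
{an odd count means pt lies inside the candidate polygon}
function InsidePolygon(const pt, farPoint: TPoint2D; const poly: array of TPoint2D): boolean;
var
  i, crossings: integer;
begin
  crossings := 0;
  for i := 0 to High(poly) do
    if SegmentsIntersect(pt, farPoint, poly[i], poly[(i + 1) mod Length(poly)]) then
      inc(crossings);
  InsidePolygon := odd(crossings);
end;

var
  p, faraway: TPoint2D;
begin
  p.x := 1; p.y := 1;
  faraway.x := 100; faraway.y := 37.5;   {a point well outside the polygon}
  writeln(InsidePolygon(p, faraway, Square));   {prints TRUE: (1,1) is inside the square}
end.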
Given the lat/long location of the incident, it is resolved to the closest node in
the transportation network (typically this will be the closest major intersection),
and then the shortest distance of each available unit to the intersection is
calculated. The response recommendation is then presented to the dispatcher
with the closest physically available units being listed first.
4.1.3 Path Finding Algorithms: Dijkstra's and Grassfire
As defined by Potts and Oliver (1972), the cost c(n1, nj) of the cheapest route from a node n1 to each other node nj in a network is determined by the unique solution to the functional equations f(n1) = 0; f(nk) = min over j of [ f(nj) + c(nj, nk) ].
There has been a great deal published about shortest paths calculation and
many different approaches have been proposed. The most commonly employed
methods all seem to fall in the category of what Potts and Oliver termed "tree
building algorithms". Dijkstra's Shortest Path algorithm is perhaps the best
known in this class of route calculating algorithms. In general terms, the origin
node is used as the starting place and a cheapest route spanning tree is built by
successively finding the cost to get to all nodes from the current position and
then choosing the node with the minimum weight value as the next current
node. These are efficient polynomial time algorithms; properly implemented, Dijkstra's runs in O(N^2) time, where N is the number of nodes (intersections) in the network (Cormen, Leiserson, Rivest, 1990).
11 Although this feature has been implemented and tested, it will not initially be available to the system. Entry of the BGU polygon definitions will be one of the last data sets to be created.
12 The intersection of two line segments can be quickly determined by first verifying that the bounding boxes of the two segments intersect, and then using a cross-product method to determine if the two end points of one segment straddle the other segment. Details are in Cormen, et al, pp. 889.
As a starting place, I implemented both Dijkstras in the classic sense and a
more general algorithm for which the term 'Grassfire' seems to be appropriate.
Both implementations use travel time as the cost constraint to weight the graph,
although, a number of different cost factors could be employed. The Dijkstra
method calculates a shortest route between an origin/destination pair. The
Grassfire algorithm uses a label correcting paradigm assigning a weight to each
node in the network based on the minimum cost of reaching that node from any
one of a number of origin locations. In this way, districts of responsibility can be
assigned to the various supply points or, the distance from any number of
supply points to a given demand node can quickly be determined.
The Grassfire algorithm was used for both the response recommendation
portion of the system and the move-up portion. In the response
recommendation solution, the incident location is assigned to be the source and
all nodes in the system are assigned a distance from this location. These values
are then used to determine which units are the closest to the incident location.
In determining a need for a move-up, the locations of available resources are designated as the sources. Districts of responsibility can be assigned to individual units, and nodes that are farther than a given distance from the closest resource can be identified.
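As a small usage sketch of this move-up mode (built on the Appendix B classes; Geo, the two station node IDs and the output format are placeholder assumptions):

procedure ShowDistricts(Geo: TGeoSystem; StationNode1, StationNode2: LongInt);
var
  Fire: TGrassFire;
  n: LongInt;
begin
  Fire := TGrassFire.create(Geo);
  Fire.AddSource(StationNode1);   {each available unit's current node is a source}
  Fire.AddSource(StationNode2);
  Fire.Burn;
  for n := 1 to Fire.NumNodesInSys do
    if not Fire.RMEntry_IsNull(n) then
      {RMEntry_SourceNode(n) identifies the unit responsible for node n,}
      {RMEntry_NodeWt(n) its predicted travel time to that node}
      writeln(n, ' -> ', Fire.RMEntry_SourceNode(n), ' (', Fire.RMEntry_NodeWt(n):0:1, ')');
  Fire.free;
end;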
Figures 2 and 3 graphically show the output of the Grassfire algorithm. The map used is of the state of New York since, at this time, the data for the south metro transportation network is still in the works.
Figure 2 plots the districts of responsibility for the New York state highway
transportation network with 5 supply points. Figure 3 shows the results of
running the algorithm with one source (as with an incident) and plots the
shortest paths to 4 supply points.
[Figure 2: map image]
Source    Avg Resp Time    Max Resp Time    Nodes Assigned
3600638   62.29            138.7            207
3600306   66.14            109.4            78
3600445   58.37            112.2            251
3600231   54.54            121.3            201
Figure 2 The results of a run of the Grassfire algorithm on the New York state highway transportation network. The source nodes are marked by the bold black '+' marks. The average response time, maximum response time and number of nodes assigned to each supply point are shown for each district of responsibility.
[Figure 3: map image, with a source data table listing the response time and maximum response time for each supply node]
Figure 3 Plot of the shortest path from 4 supply nodes to a single demand node. The response time value (the length of the shortest path) is shown for each supply node.
The initial implementations of these algorithms used a database to store the
street network. My implementation of Dijkstra's was quite naive13 in the way it
stored and searched for the next node to come out of the candidate list. As a
result, both algorithms ran in approximately the same amount of time (as a label
correcting algorithm, the Grassfire algorithm doesn't do any comparisons, it just
updates nodes with the smallest value so far and stops when a node is found
that already has a label from another source). This approach turned out to be
unacceptably slow (around 45 seconds on a Pentium 120). The speed problem
seemed to be related more to the multiple calls to the database, because internalizing the street network in a sparse matrix data structure14 cut the actual runtime for both algorithms to under 2 seconds for a 2000 node network. Since the Grassfire algorithm finds the routes and distances from a source to all other nodes in the network, and does so in an acceptable amount of time, the Dijkstra implementation, which only finds a route from a single source to a single node, was not used.
13 There has been a great deal of work done regarding the fastest implementations of shortest routes algorithms. Bertsekas (1991) gives an excellent overview of the major approaches and concepts.
4.2 Move-up: Static Resource Allocation
Move-up has to do with the relocation of available resources into areas which
are temporarily deficient in coverage due to high levels of demand. Move-ups
are generally needed as a result of a large scale incident such as a major fire,
airplane crash or multi-car accident. An occurrence of a major incident (as input
by dispatch personnel) will trigger a check for the need for a redistribution of
resources. In addition to the obvious high risk times, the system will routinely monitor its own coverage to see if a move-up might be beneficial.
4.2.1 Criteria
As a result of conversations with Chief Burdick of the Littleton Fire Department
and the dispatchers in the SMFPD dispatch center, two criteria dropped
naturally out of the problem formulation. The first is the (apparently typical)
need for maintaining minimal coverage standards. This clearly is the primary
objective to be met. Secondarily, considering demand history, they tend not to
move units that are located in the busiest parts of the city.
In addition to the obvious major criteria, other factors are sometimes considered
which are more political and practical in nature. For instance, in the Rand study,
the cost function they used to evaluate the quality of a proposed move, implicitly
sought to minimize the distance a moving unit would have to travel. Other
important considerations are the anticipated duration of the deficient condition,
the various auto and mutual aid agreements among the departments in the
district, and minimization of the number of disrupted personnel.
14 This structure consists of a dynamically allocated array of linked lists. The node
identifiers are used as an index into the array and each list contains the nodes that are
adjacent to the indexed node.
4.2.2 The Solution Approach
Initially, it seemed reasonable to take an approach that would attempt to find a
configuration that closely approximated the optimal for the m available
companies and then focus on finding which companies to move to obtain this
configuration. Several things motivated me to stray from even considering this
approach. The first was an analysis of the complexity of the solution.
To find an optimal configuration, all C(n, m) = n! / (m! (n - m)!) possible configurations of the system would need to be considered, where n is the number of stations and m is the number of companies or units available for placement. Given the worst case scenario requirements of the system, n could conceivably be in the range of 50-100, with m taking any number of values in the middle part of that range. Even with very fast analysis routines, this seemed infeasible for the worst case scenario.
The second indication that an optimal solution should not be sought, was the
experience of the Rand Institute project when attempting to solve this same
problem (see section 3.4.2).
The third consideration has to do with the robustness of the system. An optimal solution approach would locate m units among n stations as a single blanket location edict. Breaking the problem into stages offers flexibility for both implementation and use. Combining the problem of which units to move with that of which stations to fill limits the ability of the system to just give feedback on which stations would benefit the most from external units. In the case of a major incident, units can be brought in from reserve or from other departments, and it would be useful for the system to be able to identify the stations that would be the most desirable to have occupied.
Finally, breaking the solution into stages allows for incremental implementation.
At the time of this writing, only the module to determine the need for a move-up
is completed. The modules to generate recommendations about which units to
move where will require more data than the system will initially have available.
This gives the system the ability to alert the dispatcher of a potential problem within the system. This seems to be the most important part of the system; the analysis and decision about which units to move can be adequately accomplished by the dispatcher until the remaining portions of the move-up module can be implemented.
Given these factors, it seemed more reasonable to try to work with the basic
approach used by the Rand team, adapting it to this particular problem. So, as
with the Rand study, my approach will have 4 stages: Determining need, picking
stations to be filled, picking units to be moved and assigning relocating units to
empty stations.
Due to a couple of factors, early in the analysis of the move-up solution design,
it seemed clear that the recommendation modules needed to use data based on
BGUs rather than the individual nodes of the transportation network. First, to
generate good recommendations, several feasible configurations would need to be generated and analyzed; the more configurations the system could analyze in a short period of time, the better the recommendation would be. Working with coarser geographic regions (BGUs rather than individual nodes) lets the move-up module analyze more configurations and thus facilitates a better overall distribution of resources. The
second factor has to do with the secondary criteria that the stations with the
greatest potential workload should not be considered for movement. Due to
current data collection mechanisms, tracking the probability that an incident will
occur within a given BGU is easier (and, for a variety of reasons beyond the
scope of this paper, more desirable) than resolving the incident to its closest
node and tracking the demand at that node. In general, this problem seems to
require a coarser approach; it addresses more static location issues whose
solutions are more computationally intensive.
Another major design issue deals with the granularity of the resources that will
be considered for moving. Typically, units will be moved to existing stations.15
The moved units however, could come from places other than their usual
quarters. If possible, the solution needs to allow for the possibility of analyzing
all available units regardless of location. On the other hand, using the data
model outlined above where actual units are stored in essence as virtual units
based on their capabilities, is really overkill and would add an excessive amount
of computational overhead. The most logical approach is to analyze the system
for a need to relocate each major type of vehicle. There are at present only three of these in the Littleton department: engines, rescue units and hazardous materials units. The system will be flexible enough to handle any number of major type definitions but, time-wise, will only feasibly handle a few.
15 They would consider moving a unit to a place in the city other than a station if it would
provide needed coverage for a short period of time, but it would be a rare event.
A simulation of the system, to be used for testing the proposed solutions, has
been designed and partially implemented. The design is discussed in section
4.2.3.
Stage 1: Detecting the Need for a Relocation
The first part of the problem is determining whether there is a need for a relocation, with a minimum level of coverage being the primary criterion. This is an
assignment problem which involves finding the shortest distance (once again
employing the Grassfire algorithm) from each basic geographic area in the
system to the closest available response vehicles, and tagging those areas that
are predicted to be beyond the maximum acceptable distance. The data for
predicting the duration of each incident type is tracked and readily available.
The system will have a configurable parameter for an acceptable predicted
deficiency duration before the system notifies dispatch personnel.
Since the geographical definition of the BGUs will be among the last data items
to be entered into the system, this portion of the move-up module will be run on
the nodes of the transportation network.
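A minimal sketch of this check (again using the Appendix B classes; the array of available-unit node locations and the MaxAcceptTime threshold are placeholder assumptions):

procedure CheckCoverage(Geo: TGeoSystem; const UnitNodes: array of LongInt;
                        MaxAcceptTime: real);
var
  Fire: TGrassFire;
  i, n: LongInt;
begin
  Fire := TGrassFire.create(Geo);
  for i := 0 to High(UnitNodes) do
    Fire.AddSource(UnitNodes[i]);   {every available unit of the type being checked}
  Fire.Burn;                        {label each node with travel time from its closest unit}
  for n := 1 to Fire.NumNodesInSys do
    if (not Fire.RMEntry_IsNull(n)) and (Fire.RMEntry_NodeWt(n) > MaxAcceptTime) then
      writeln('node ', n, ' is beyond the coverage standard');   {a move-up candidate}
  Fire.free;
end;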
The remaining portions of the move-up module involve finding a heuristic
approach that is appropriate to the exact problem presented by SMFPD. The
intent is to design several solutions and test them using a simulation model of
the actual system. At this point in time, the simulation model has been designed
and roughed out, but the data needed to fully implement and test it is not yet
available. The design is discussed below. Given this, the rest of this section will
center on approaches that will be explored once the data becomes available.
Stage 2: Selecting the Stations to be Filled
The goal of this stage of the solution is to find the smallest number of stations
that need to be filled in order to satisfy the basic coverage criteria of the system
as a whole.
Rand built sets of stations to be filled by acting as if each station was filled and
determining how this would affect the number of uncovered areas. Sets of
potential relocatees were then built. The first set was built by first choosing the
station that would reduce the number of uncovered areas the most, then
reevaluating the remaining stations and choosing the next one that would
reduce the number of uncovered areas the most and so forth until all areas are
covered. The second set would use the same procedure except it selected the
second ranked station for reducing the number of uncovered areas. Only the
smallest sets are retained for consideration during stage 3.
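This first-set construction is essentially a greedy covering pass. A rough sketch is shown below (the matrix sizes and the boolean Covers relation are simplified placeholder assumptions, not the Rand formulation); the second and subsequent Rand sets amount to changing which ranked station is picked at each step.

program StationPick;
const
  NumStations = 20;
  NumBGUs = 200;
type
  TCoverMatrix = array[1..NumStations, 1..NumBGUs] of boolean;
  TStationSet = array[1..NumStations] of boolean;

{repeatedly fill the empty station that covers the most still-uncovered BGUs}
procedure PickStations(const Covers: TCoverMatrix; var Chosen: TStationSet);
var
  Uncovered: array[1..NumBGUs] of boolean;
  s, b, best, bestGain, gain: integer;
begin
  for b := 1 to NumBGUs do Uncovered[b] := true;
  for s := 1 to NumStations do Chosen[s] := false;
  repeat
    best := 0;
    bestGain := 0;
    for s := 1 to NumStations do
      if not Chosen[s] then
      begin
        gain := 0;
        for b := 1 to NumBGUs do
          if Uncovered[b] and Covers[s, b] then inc(gain);
        if gain > bestGain then
        begin
          best := s;
          bestGain := gain;
        end;
      end;
    if best > 0 then
    begin
      Chosen[best] := true;
      for b := 1 to NumBGUs do
        if Covers[best, b] then Uncovered[b] := false;
    end;
  until best = 0;   {stop when no remaining station reduces the uncovered count}
end;

begin
end.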
The Rand approach seems like a straightforward and reasonable approach that
would undoubtedly meet the requirements of the system. I would like to also
investigate the desirability of a few different approaches including the following:
- Basing the choice of stations on a median paradigm. Find the BGU that is farthest from an available unit and fill the station that would normally cover that BGU.
- Do the same as above, but weight the BGUs based on their typical demand and select the farthest weighted value; fill the station that would cover this BGU.16
- Select first the stations that have the highest uncovered demand.
Stage 3: Selecting the Units to Relocate
This stage involves selecting the units to be moved into the stations selected in
stage 2. Actual assignment of units to stations will be left for stage 4.
The first consideration during this stage is that no units are to be moved that
would cause an area within their current district of responsibility to become
uncovered. Secondarily, the units selected should be those whose removal from
their current location, will least impact the demand coverage of the system.
The units will not necessarily be in quarters; it is feasible that the relocating unit could be off training somewhere or doing inspections. All available and potentially available units should be considered and presented to the dispatcher to facilitate the decision process about reassignment of activities. However, inclusion of all geographic areas will increase the calculation times considerably. The solution design is flexible enough to allow for varying degrees of granularity based on the time available to the system to come up with a recommendation.
The algorithm presented below should run sufficiently fast for moving units from
station to station. Once implemented, it can be determined if it is adequate for
evaluation of units not currently in quarters. To do the latter would involve
tracking and analyzing information about the proximity of each BGU to every other BGU in the system, rather than just the proximity of each BGU to each station in the system. For now, let's just consider the problem as though it were static, reducing this stage to picking the stations to empty.
16 This idea parallels the maximum covering location problem proposed by Church and ReVelle (1974).
The Rand study used an efficiency criterion in this stage with the intent of minimizing total expected travel time within the system. The cost function that they used is not directly transferable to this paradigm. However, the same types of data are easily obtained from the labeling generated by the Grassfire algorithm. The secondary criterion of maximizing the demand coverage is also easy to tally as Grassfire runs. Even when tracking the data needed to evaluate the cost of a given move, the Grassfire algorithm takes only a second or two to run on a 2000 node network. The issues center on the amount of time required to analyze the system for multiple configurations.
Move-ups in SMFPD don't happen very often, and the vast majority of the time only one move will be needed. In this case, it will probably be acceptable to simply act as if each possible move were made, run the Grassfire algorithm and use
the results to make a decision based on any combination of the statistical
information that results. A run of Grassfire currently tracks maximum distance to
any assigned node, average travel time to all assigned nodes and the number
of nodes assigned (this can easily be modified to be the demand assigned) for
each source in the network. Clearly, the first criteria must be to pick only
stations whose removal will not adversely impact basic coverage, beyond this,
there will be room to experiment with different criteria. The recommendations
the system makes under different weightings for the various options can be
weighed against the intuitive understanding of the domain experts. The
recommendations will also be run against the simulation to gain insight into
system wide behavior under different approaches. The approach with the best
predicted results, is not necessarily the approach that will be implemented in the
end. For example, given the perception of the domain experts about the
desirability of removing units from stations that are in high demand areas, it may
not be desirable to even consider those stations, even if it would be beneficial to
the high demand station or the system as a whole. It is more important that the
domain experts believe the system and trust the recommendations that it
makes.
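A sketch of that evaluate-every-candidate-move loop follows (UnitNodes, DestNode and the maximum-travel-time score are placeholder assumptions; the real criteria would combine the statistics listed above):

procedure EvaluateMoves(Geo: TGeoSystem; const UnitNodes: array of LongInt;
                        DestNode: LongInt);
var
  Fire: TGrassFire;
  moved, i, n: integer;
  maxT, t: real;
begin
  for moved := 0 to High(UnitNodes) do
  begin
    Fire := TGrassFire.create(Geo);
    Fire.AddSource(DestNode);          {the station chosen in stage 2 gets filled}
    for i := 0 to High(UnitNodes) do
      if i <> moved then
        Fire.AddSource(UnitNodes[i]);  {unit 'moved' is assumed to have left its post}
    Fire.Burn;
    maxT := 0;
    for n := 1 to Fire.NumNodesInSys do
      if not Fire.RMEntry_IsNull(n) then
      begin
        t := Fire.RMEntry_NodeWt(n);
        if t > maxT then maxT := t;
      end;
    writeln('move unit at node ', UnitNodes[moved], ': max travel time ', maxT:0:1);
    Fire.free;
  end;
end;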
On the rare occasion when more than one station needs to be moved, the
possible number of system configurations might exceed the number which can
be analyzed in an adequate amount of time. In this instance another approach
will have to be taken. It might be adequate to act as if one move at a time is to
be made, or, the system can attempt to eliminate some of the stations that are
considered for moving (for instance, not even consider those stations that
currently have a high level of demand coverage). If it turns out these
approaches are not good enough, other algorithms will need to be investigated.
4.2.3 Finding the p-Median of the Ranked Coverage Matrix
The most interesting approach I've come up with is a combination of the p-center and p-median approaches. It can be described as finding the p-median of the ranked coverage matrix.
A ranked coverage matrix can be thought of as a weighted adjacency matrix whose entries rij reflect whether or not station j can cover BGU i. If it can, the weight on the entry indicates how station j ranks relative to the other stations that also cover BGU i. If station j is farther than some maximum acceptable response time t, then the rij entry is infinitely large. This matrix can easily be built from runs of the Grassfire algorithm with each station in turn as the sole source, noting the distance from each BGU to each of the stations.
If we let k = C(n - e, p), then the n x k matrix X would consist of k column vectors Xj whose entries reflect whether or not a unit is to be located in each station, where n is the total number of stations in the system that are being considered, e the number of stations that are empty and are to remain empty, and p the number of stations that are to be filled (e and p are determined by stage 2 and are fixed within X).
Multiplying RX will yield an i x k matrix. Each entry in this matrix will be an
indication of how good configuration k is for BGU i. Any zero valued entry in this
matrix will indicate that the Xj configuration should be rejected as it leaves BGU i
uncovered. The minimum value resulting from summing each of the k columns
(the median of the resulting graph) will reflect the most desirable overall
configuration.
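Written out, the selection rule just described amounts to the following (a restatement in symbols, not notation taken from the original; (RX)ik denotes the entry in row i, column k of the product):

    choose configuration k* = argmin over k of [ sum over i of (RX)ik ],
    subject to (RX)ik > 0 for every BGU i,

where R is the (number of BGUs) x n ranked coverage matrix and X is the n x k matrix whose columns are the candidate configurations; a zero entry in column k means that configuration leaves some BGU uncovered and is rejected.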
A shortcoming of this approach is that it doesn't take into account the distance that units would have to travel to occupy empty stations. There are at least a couple of ways around this. One is to weight the Xj entries according to the desirability of the move; desirability can be derived from a combination of factors including currently predicted demand within station j's district of responsibility and
distance to travel to the closest of the empty stations to be filled. Weights that
accurately reflect the desired outcome might be difficult to find. Another
approach would be to retain the k lowest values from the above summation and
analyze each for total distance to be traveled by the relocating units.
Stage 4: Assigning Units to Empty Stations
This stage simply assigns the units that came out of stage 3 to the stations that
were decided on in stage 2. In the vast majority of the recommendations, it is
anticipated that only one move will be required and this stage will not be
needed. On the rare occasion that it is needed, the data necessary to make an
assignment will have already been generated. Given the object oriented
implementation of the Grassfire algorithm, each station coming from stage 2
can have its own Grassfire evaluation object to be retained for this stage. Unit
assignments will be as simple as finding which units are closest to which
stations.
At this point, a large unknown is whether it will be adequate to select and fill one station at a time, or whether the needs of the system are going to demand that larger sets of stations to be filled, and the units to fill them, be considered together. Obviously, if adequate recommendations can come from moving one station at a time, this stage can be eliminated.
4.3 Testing the Solution: A Simulation Model
A discrete event simulation will be used to test the effectiveness of response
and relocation policies.
The model will allow for a configurable number of servers, geographically
distributed across the region. As in the real system, there is no incident queue.
If the closest server is unavailable, the next closest server will be assigned to
the incident.
Units will be created and maintained as objects that will track their own current
state (location and availability) as well as statistical values relating to work load
and travel time.
The use of a Poisson distribution for the alarm arrival rate seems to be a widely
accepted standard that adequately models reality. Using the chi-square test, the
Rand institute verified that the alarm arrival rate data for the city of New York
were consistent with the Poisson distribution at the 0.05 significance level (pp.
291-294, Walker, Chaiken and Ignall, 1979). Filippi and Romanin-Jacur state
that "First aid requests arise at every nucleus according to a Poisson process of
given rate generally proportional to the amount of residents." (1996).
An exponential distribution is also often used for service time in queuing
analyses related to the fire service (Walker, Chaiken and Ignall, 1979, pp. 214).
Unfortunately, this distribution does not match reality but is used as a simplifying
assumption and the results that are obtained are then shown to be
approximately correct for several examples of real service time distributions.
The data needed to determine the geographic distribution and mean interarrival
rate of incidents and the service time distributions for various types of incidents
is readily available, at least for Littleton, through their incident reporting system.
The distribution for incident type is also tracked and can be extracted from this
data set.
To create an incident, the model will generate an exponential random variable
with a mean interarrival time determined from historical data. Although the
actual distribution for incident locations can be obtained (and, actually is
obtained for use in the relocation algorithms), getting a discrete random variable
from a finely partitioned set (there are roughly 2000 BGUs) would be very time
consuming and tedious, so each incident will be located according to a uniform
distribution17. An incident type will then be assigned according to a given
distribution of standard incident types derived from the data and a service time
for the incident will be generated using the mean service time for the assigned
incident type plus the predicted travel time from the server location to the
incident location.
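A minimal sketch of the arrival generator (the mean interarrival time, the node count and the uniform location draw are placeholder assumptions; the exponential variates use the standard inverse-transform method):

program IncidentGen;
const
  MeanInterarrival = 600.0;   {seconds; would come from historical data}
  NumNodes = 2000;            {incident located uniformly over the network nodes}

{exponential variate by inverse transform: -mean * ln(U), U uniform on (0,1]}
function ExpVariate(mean: double): double;
begin
  ExpVariate := -mean * ln(1.0 - random);
end;

var
  clock: double;
  i, node: integer;
begin
  randomize;
  clock := 0.0;
  for i := 1 to 5 do
  begin
    clock := clock + ExpVariate(MeanInterarrival);   {time of the next arrival}
    node := 1 + random(NumNodes);                    {uniform incident location}
    writeln('incident ', i, ' at t=', clock:0:1, ' s, node ', node);
  end;
end.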
An event queue will be maintained to track all time related events. The event
queue will maintain a time line and all added events will be placed in order
according to the time they are to take place. The simulation will place a couple
of new events into the event queue upon initialization. Generation of new
incidents will include assignment of an incident location, type and a value for the
amount of time the incident will require to be serviced. The main body of the
simulation pulls the head of the event queue and processes according to the
type of the simulation event pulled.
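A minimal sketch of such a time-ordered event queue (the fixed queue size and the event kinds shown are simplifications for illustration only):

program EventQueueSketch;
type
  TEventKind = (ekNewIncident, ekServiceComplete, ekReturnToQuarters);
  TSimEvent = record
    time: double;       {simulated time at which the event fires}
    kind: TEventKind;
  end;
var
  queue: array[1..100] of TSimEvent;   {fixed size for the sketch only}
  count: integer;

{insert keeping the queue ordered by time, earliest first}
procedure AddEvent(t: double; k: TEventKind);
var
  i, j: integer;
begin
  i := 1;
  while (i <= count) and (queue[i].time <= t) do inc(i);
  for j := count downto i do queue[j + 1] := queue[j];
  queue[i].time := t;
  queue[i].kind := k;
  inc(count);
end;

{pop the earliest event from the head of the queue}
procedure PopEvent(var ev: TSimEvent);
var
  j: integer;
begin
  ev := queue[1];
  for j := 1 to count - 1 do queue[j] := queue[j + 1];
  dec(count);
end;

var
  ev: TSimEvent;
begin
  count := 0;
  AddEvent(120.0, ekNewIncident);
  AddEvent(45.0, ekReturnToQuarters);
  PopEvent(ev);
  writeln('first event fires at t=', ev.time:0:1);   {prints 45.0}
end.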
17 Like the Poisson distribution for interarrival times, a uniform distribution for location
seems to be a widely accepted standard in location analysis studies.
Event types are as follows:
New Incident:
A new incident will call the dispatch routine. The dispatch routine will find the
closest available units that are needed for the given incident type and will assign
them to the incident location18. For each unit, the predicted travel time will be
added to the given service time and two new events will be placed in the event
queue. One for completion of service and one for the return to quarters.
The dispatch routine will be responsible for updating system wide statistics for
things like total incidents served, total system wide response time, maximum
response time, the total time there were areas of the system without coverage,
and the number of incidents that were not reached within the maximum
acceptable response time.
Service Completion:
The unit will update its work load statistics and become available for re-dispatch at this time. It will not change its location until it is actually back in quarters. This complicates things a little and will require the dispatch routine to track and update a 're-dispatch' flag for the unit, to be set if the unit is dispatched from a location other than its home.
Return to Quarters:
At this time, if the re-dispatch flag has not been set, the unit will update its location to its assigned home; if the unit has been re-dispatched, this event is ignored.
A final analysis routine will output the results of the simulation run. A number of
metrics will be tracked, including average system wide response time, maximum response time, the total time there were areas of the system without coverage, the number of incidents that were not reached within the maximum acceptable response time, individual unit work loads and time spent traveling, and average work load per unit.
18 In reality, units that are in the process of responding to an incident are available to be reassigned to a more urgent incident up until the time when they actually arrive at the incident. This simulation will make the simplifying assumption that once a unit has been dispatched to an incident, it will be unavailable until the service of the incident is complete.
Appendix A. Glossary
AVL - Automatic Vehicle Locator: a device that uses a GPS receiver to find its own location and then transmits that location to be picked up by some central station (such as the dispatch center responsible for dispatching that vehicle).
Auto/Mutual Aid - Agreements made between fire departments to share needed resources. Automatic aid agreements are usually limited to an area within a certain distance of the shared boundaries of the departments. For incidents within these boundaries, requests for assistance are automatically honored. With mutual aid agreements, the needed resources are requested through the dispatch center of the requested department and are granted or denied based on the current demand within their system.
BGU - Basic Geographical Unit: a small geographic area, typically represented as a polygon defined by its bordering arcs.
CAD - Computer Aided Dispatch system.
Geofile - Database file with address and BGU information.
GPS - Global Positioning System: a system of satellites put into place by the Department of Defense. Each satellite transmits time and position data. Receivers use this information to calculate their own geographical position.
Incident - A request made to the system, initiated by a call to the fire department directly or routed through 911.
Move-up - Movement of units from a geographic area with adequate coverage to another which is lacking in coverage.
Response - The collective resources that are needed to respond to any one type of incident. A single response may have various combinations of vehicles and/or personnel which will serve a given collective function.
Response Recommendation - A set (or sets) of units that the system has determined to be the best suited to respond to a given incident.
SMFPD - South Metro Fire Protection District.
Appendix B. Source Code
unit GeoSys;
{Contains the following classes:
TGeoSystem: Consists of function BuildMatrix that returns a pointer to an
adjacency matrix representing the transportation system to be used in the
analysis done by the Grassfire algorithm. BuildMatrix requires an input of the
tables containing arc and node information it returns a pointer to an internal
data structure built from these database tables. Use of this structure
significantly improves the performance of the Grassfire algorithm. This object
holds a default Adjacency matrix pointer the create method requires a
reference to an arcs table and a nodes table.
TGrassFire: Maintains the list of sources and implementation of the Grassfire
algorithm
TCIosestNode: Contains a method to find the node in the transportation
system that is the closest to a given lat/long location. SearchRadius is initialized
to 150000. Maxlterations is the number of times the search radius will be
increased while trying to find the node; if a node isn't found, the ClosestNodelD
will be set to -1.
}
interface
uses classes, dbtables, dialogs, fifo_Que, SysUtils;
const
MaxNodes = 1000; {the maximum number of nodes in the system}
Num_Sources = 20; {max number of sources to be considered by Grassfire}
type
TLongIntPtr = ^LongInt;
resp_time_rec = record
avg_time: real;
max_time: real;
nodes_assigned: integer;
end;
Tresp_time_arry = array [1..num_sources] of resp_time_rec;
{-------------------------------------------------------}
{type declarations for TGeoSystem and its adjacency matrix}
AM_elem_ptr = ^AM_element;
AM_element = record
adjarc_tt: real;
endnode: longint;
next: AM_elem_ptr;
end;
nodes_arry = array [1..MaxNodes] of AM_elem_ptr;
pnodes_arry = ^nodes_arry;
TGeoSystem = class (TObject)
private
ArcsTable:Ttable;
NodesTable:Ttable;
procedure SetArcsTable(theTable:Ttable);
function GetArcsTable:Ttable;
procedure SetNodesTable(TheTable:Ttable);
function GetNodesTable:Ttable;
function GetNodeTableName:string;
function GetNodeTableDBName:string;
Public
DefaultAM: pnodes_arry;
property ArcTable:Ttable read GetArcsTable write SetArcsTable;
property NodeTable:Ttable read GetNodesTable write SetNodesTable;
property NodeTableName:string read GetNodeTableName;
property NodeTableDBName:string read GetNodeTableDBName;
function BuildMatrix:pnodes_arry;
constructor Create(Arcs, Nodes:Ttable);
destructor Free;
end;
{-------------------------------------------------------}
{type declarations for TCIosestNode}
TCIosestNode = class (TObject)
SearchRadius:longlnt;
ClosestNodelD:Longlnt;
PointLat:Longlnt;
PointLong:Longlnt;
CN_Lat:Longlnt;
CN_Long:Longlnt;
Maxlterations:Longlnt; {the max number of times the radius will be increased while
trying to find the node; ClosestNodelD is set to -1 if not found}
constructor create;
destructor free;
procedure FindClosestNode (DBName,TableName:string); {Nodes Table being
considered}
end;
{-----------------------------------------------------}
{type declarations for Grassfire}
ENoSources = class (exception);
ERM_NullEntry = class (exception);
node_ptr = ^node;
node = record
nodewt: real;
fromnode: longint;
sourcefrom: longint;
next: node_ptr;
end;
TResultArray = array [1..MaxNodes] of node_ptr;
TGrassFire = class (TObject)
private
Sources: TList;
ResultMatrix: TResultArray;
resp_times: Tresp_time_arry;
GeoS: TGeoSystem;
function GetGeoSys: TGeoSystem;
procedure SetGeoSys(GeoSys: TGeoSystem);
public
constructor create(GeoSys: TGeoSystem);
property GeoSystem: TGeoSystem read GetGeoSys write SetGeoSys;
procedure AddSource (Source: LongInt);
procedure Burn;
function NumNodesInSys: LongInt;
function RMEntry_IsNull (NodeIndex: LongInt): boolean;
function RMEntry_SourceNode (NodeIndex: LongInt): LongInt;
function RMEntry_NodeFrom (NodeIndex: LongInt): LongInt;
function RMEntry_NodeWt (NodeIndex: LongInt): Real;
destructor free;
end;
implementation
{------------------------TCIosestNode------------------------}
constructor TCIosestNode.create;
begin
inherited create;
SearchRadius:= 150000;
Maxlterations:= 20;
end;
destructor TCIosestNode.free;
begin
if self <> nil then
inherited destroy;
end;
procedure TCIosestNode.findClosestNode (DBName,TableName:string);
var
uplat,uplong,lowlat,lowlong:real;
dist,mindist:real;
distlat, distlong: real;
numrecords: integer;
LatLongQuery:TQuery;
sqlstring:string;
RadlncCount:Longlnt;
begin
RadlncCount:= 1;
If ((PointLat = 0) or (PointLong = 0)) then
messagedlg('Latitude and Longitude properties not set!', MTInformation,
[mbok],0)
else
begin
LatLongQuery:= TQuery.create(nil);
LatLongQuery.DatabaseName:= DBName;
sqlstring:= 'select node_id, Latitude, Longitude from ';
sqlstring:= sqlstring + tablename;
sqlstring:= sqlstring + ' where ((Latitude >= :lowlat and Latitude <= :uplat)';
sqlstring:= sqlstring + ' and (Longitude >= :lowlong and Longitude <= :uplong))';
latlongquery.close;
latlongquery.sql.clear;
latlongquery.sql.add(sqlstring);
latlongquery.prepare;
closestnodelD:= 0;
uplat:= PointLat + SearchRadius;
lowlat:= PointLat - SearchRadius;
uplong:= PointLong + SearchRadius;
lowlong:= PointLong - SearchRadius;
latlongquery.parambyname('lowlat').Asfloat:= lowlat;
latlongquery.parambyname('lowlong').Asfloat:= lowlong;
latlongquery.parambyname('uplat').Asfloat:= uplat;
latlongquery.parambyname('uplong').Asfloat:= uplong;
latlongquery.open;
numrecords:= latlongquery.recordcount;
while (numrecords = 0) and (RadlncCount <= Maxlterations) do
begin
RadlncCount:= RadlncCount + 1;
uplat:= uplat + SearchRadius;
lowlat:= lowlat - SearchRadius;
uplong:= uplong + SearchRadius;
lowlong:= lowlong - SearchRadius;
latlongquery.close;
latlongquery.parambyname('lowlat').Asfloat:= lowlat;
latlongquery.parambyname('lowlong').Asfloat:= lowlong;
latlongquery.parambyname('uplat').Asfloat:= uplat;
latlongquery.parambyname('uplong').Asfloat:= uplong;
latlongquery.open;
numrecords:= latlongquery.recordcount;
end;
if RadlncCount <= Maxlterations then
begin
latlongquery.first;
mindist:= high(longint);
repeat
distLat:= latlongquery.fieldbyname('Latitude').Asfloat - PointLat;
distLong:= latlongquery.fieldbyname('Longitude').Asfloat - PointLong;
dist:= sqrt(sqr(distLat) + sqr(distLong));
if dist < mindist then
begin
mindist:= dist;
closestnodelD:= latlongquery.fieldbyname('node_id').AsInteger;
CN_Lat:= latlongquery.fieldbyname('latitude').Aslnteger;
CN_Long:= latlongquery.fieldbyname('longitude').Aslnteger;
end;
latlongquery.next;
until latlongquery.eof;
end {if RadlncCount <= Maxlterations}
else
closestnodelD:= -1;
end; {if PointLat = 0...}
end; {TCIosestNode.find}
{-------------------TGrassFire--------------------}
function TGrassFire.GetGeoSys:TGeoSystem;
begin
result:= GeoS;
end;
procedure TGrassFire.SetGeoSys(GeoSys:TGeoSystem);
begin
GeoS:= GeoSys;
end;
constructor TGrassfire.create(GeoSys:TGeoSystem);
begin
inherited create ;
sources:= TList.create;
GeoSystem:= GeoSys;
end;
destructor TGrassfire.free;
var
i:Longlnt;
begin
if self <> nil then
begin
Sources.free;
for i:= 1 to MaxNodes do
begin
{dispose(resultmatrix[i]A);}
if ResultMatrix[i] <> nil then
dispose(ResultMatrix[i]);
end;
end;
if self <> nil then destroy;
end;
function TGrassFire.NumNodesInSys: LongInt;
begin
result:= MaxNodes;
end;
{procedure TGrassfire.burn (Adj_Matrix:Pnodes_arry);}
procedure TGrassfire.burn;
var
node_list: TfifoQue;
c_source: longint;
nodeAslndex:integer;
temp:AM_elem_ptr;
procedure initialize; {TGrassFire.Burn}
{initializes the queue of nodes to be considered and the results list}
var
i,x: integer;
sourcePtr: ^LongInt;
begin
{node_list:= TfifoQue.createlist; }
{load source nodes into queue}
for i:= 0 to Sources.count - 1 do
begin
sourcePtr:= sources.items[i];
if sourcePtr^ > 0 then
begin
x:= sourcePtr^;
new(resultMatrix[x]);
ResultMatrix[x]^.nodewt:= 0;
ResultMatrix[x]^.fromnode:= sourcePtr^;
ResultMatrix[x]^.sourcefrom:= sourcePtr^;
ResultMatrix[x]^.next:= nil;
Node_List.additem(x);
end;
resp_times[i+1].avg_time:= 0.0;
resp_times[i+1].max_time:= 0.0;
resp_times[i+1].nodes_assigned:= 0;
end; {for i}
end; {initialize}
procedure eval_net; {TGrassfire.burn}
var
curr_node: longint;
curr_nodeWeight: real;
enode: longint;
temp_wt: real;
begin {eval_net}
while not Node_List.is_empty do
begin
Curr_node:= Node_List.retrieveHead;
{get the weight of the current node}
curr_nodeWeight:= ResultMatrix[curr_node]^.nodewt;
c_source:= ResultMatrix[curr_node]^.sourcefrom;
{process all arcs in adjacency matrix}
temp:= GeoS.DefaultAM^[curr_node];
if temp <> nil then
repeat
{tt:= temp^.adjarc_tt;}
enode:= temp^.endnode;
temp_wt:= ResultMatrix[curr_node]^.nodewt + temp^.adjarc_tt;
if ResultMatrix[enode] = nil then
begin
new(ResultMatrix[enode]);
ResultMatrix[enode]^.nodewt:= temp_wt;
ResultMatrix[enode]^.fromnode:= curr_node;
ResultMatrix[enode]^.sourcefrom:= c_source;
ResultMatrix[enode]^.next:= nil;
Node_List.additem(enode);
end
else
begin
if temp_wt < ResultMatrix[enode]^.nodewt then
begin
ResultMatrix[enode]^.nodewt:= temp_wt;
ResultMatrix[enode]^.fromnode:= curr_node;
ResultMatrix[enode]^.sourcefrom:= c_source;
Node_List.additem(enode);
end;
end;
temp:= temp^.next;
until temp = nil;
end; {while}
end; {eval_net}
begin {procedure Grassfire.burn}
if sources.count <> 0 then
begin
try
node_list:= TfifoQue.create;
initialize;
eval_net;
finally
node_list.free;
end;
end
else
Raise ENoSources.create('Source list is empty (TGrassFire.Burn)');
end; {TGrassfire.Burn}
procedure TGrassfire.AddSource(source: LongInt);
var
element: TLongIntPtr;
begin
new(element);
element^:= Source;
Sources.add(element);
end;
Function TGrassfire.RMEntry_IsNull (NodeIndex: LongInt): Boolean;
begin
if ResultMatrix[NodeIndex] = nil then
result:= true
else
result:= false;
end;
Function TGrassfire.RMEntry_SourceNode (NodeIndex: LongInt): LongInt;
begin
if ResultMatrix[NodeIndex] = nil then
Raise ERM_NullEntry.create('Result Matrix Entry is Nil')
else
result:= ResultMatrix[NodeIndex]^.SourceFrom;
end;
function TGrassfire.RMEntry_NodeFrom (NodeIndex: LongInt): LongInt;
begin
if ResultMatrix[NodeIndex] = nil then
Raise ERM_NullEntry.create('Result Matrix Entry is Nil')
else
result:= ResultMatrix[NodeIndex]^.FromNode;
end;
function TGrassfire.RMEntry_NodeWt (NodeIndex: LongInt): Real;
begin
if ResultMatrix[NodeIndex] = nil then
Raise ERM_NullEntry.create('Result Matrix Entry is Nil')
else
result:= ResultMatrix[NodeIndex]^.NodeWt;
end;
{---------------------TGeoSystem--------------------------}
function TGeoSystem.BuildMatrix: pnodes_arry;
{builds an adjacency matrix from a table of arcs and one of nodes. Returned is
a pointer to an array of linked lists. The node ID is an index into the array.
The elements of the list contain the weight and opposite end node of each arc
that is adjacent to that node.
The fields used from each table are as follows:
Arcs Table: ENode1, ENode2, Weight
Nodes Table: Node_ID
}
var
adj_matrix: pnodes_arry;
nodeid: longint;
temp: AM_elem_ptr;
i: integer;
new_node: AM_elem_ptr;
enode: longint;
begin
getmem (adj_matrix, sizeof(nodes_arry));
for i:= 1 to MaxNodes do
begin
adj_matrix^[i]:= nil;
end;
NodesTable.active:= true;
ArcsTable.active:= true;
NodesTable.first;
repeat
nodelD:= NodesTable.fieldbyname('Node_ID').Aslnteger;
ArcsTable.indexname:= 'node1';
ArcsTable.setrange([nodeid],[nodeid]);
ArcsTable.first;
while not ArcsTable.eof do
begin
temp:= adj_matrix^[nodeid];
if temp = nil then
begin
new(new_node);
new_node^.next:= nil;
new_node^.adjarc_tt:= ArcsTable.fieldbyname('weight').AsFloat;
new_node^.endnode:= ArcsTable.fieldbyname('enode2').AsInteger;
adj_matrix^[nodeid]:= new_node;
ArcsTable.next;
end
else
begin
while temp^.next <> nil do temp:= temp^.next;
new(new_node);
new_node^.next:= nil;
new_node^.adjarc_tt:= ArcsTable.fieldbyname('weight').AsFloat;
new_node^.endnode:= ArcsTable.fieldbyname('enode2').AsInteger;
temp^.next:= new_node;
ArcsTable.next;
end;
end;
ArcsTable.indexname:= 'node2';
ArcsTable.setrange([nodeid],[nodeid]);
ArcsTable.first;
while not ArcsTable.eof do
begin
temp:= adj_matrix^[nodeid];
if temp = nil then
begin
new(new_node);
new_node^.next:= nil;
new_node^.adjarc_tt:= ArcsTable.fieldbyname('weight').AsFloat;
new_node^.endnode:= ArcsTable.fieldbyname('enode1').AsInteger;
adj_matrix^[nodeid]:= new_node;
ArcsTable.next;
end
else
begin
while temp^.next <> nil do temp:= temp^.next;
new(new_node);
new_node^.next:= nil;
new_node^.adjarc_tt:= ArcsTable.fieldbyname('weight').AsFloat;
new_node^.endnode:= ArcsTable.fieldbyname('enode1').AsInteger;
temp^.next:= new_node;
ArcsTable.next;
end;
end;
NodesTable.next;
until NodesTable.eof;
result:= adj_matrix;
end; {TGeosystem.BuildMatrix}
constructor TGeoSystem.create(Arcs, Nodes:Ttable);
begin
inherited create;
ArcsTable:= Arcs;
NodesTable:= Nodes;
getmem(DefaultAM, sizeof(pnodes_arry));
DefaultAM:= BuildMatrix;
end;
destructor TGeoSystem.free;
var
temp:AM_elem_ptr;
i:Longlnt;
begin
if self <> nil then
begin
for i:= 1 to maxNodes do
begin
temp:= DefaultAM^[i];
while temp <> nil do
begin
DefaultAM^[i]:= temp^.next;
dispose(temp);
temp:= DefaultAM^[i];
end;
end;
freemem (DefaultAM, sizeof(nodes_arry));
if self <> nil then destroy;
end;
end;
procedure TGeoSystem.SetArcsTable(theTable:Ttable);
begin
ArcsTable:= TheTable;
end;
function TGeoSystem.GetArcsTable:Ttable;
begin
result:= ArcsTable;
end;
procedure TGeoSystem.SetNodesTable(TheTable:Ttable);
begin
NodesTable:= TheTable;
end;
function TGeoSystem.GetNodesTable:Ttable;
begin
result:= NodesTable;
end;
function TGeoSystem.GetNodeTableName:string;
begin
result:= NodesTable.name;
end;
function TGeoSystem.GetNodeTableDBName:string;
begin
result:= NodesTable.DataBaseName;
end;
end. {unit GeoSys}
unit Resp_rec;
{A response is based on the type of incident and the BGU where it occurs. A
given incident type in a given BGU requires certain capabilities in the
responding units.
Available units having these capabilities are found and a list of sets of units
built with the units closest to the incident at the top of the list (index 0)
TResponse: Create method requires input of an evaluated network
(TGrassFire), the ID of the BGU (initializes the BGU property) of the incident,
the incident type (to initialize the TypeofEvent property) and the name of the
resources database FindResponse method builds a list of response sets
ResponseSet(index:integer) method returns a TList Object listing a set
of units. Index is the recommendation position i.e., the closest set of
units would be at index = 0 ResponseSetSize(index:integer) returns the number
of units in the set of units at index
Database tables and fields that are used:
Table: RESPONSE
Fields: (all fields)
Resp_type
Capability
Qty
Table:RUNCARD
Fields: (all fields)
Resp_Type
Event_Type
BGU
Table: Units
Fields: (all fields)
Capability
Status
Unit_id
Table: Units_location
Fields: Latitude
Longitude
Unit_ID
The remaining classes in this unit are as follows. All are used by TResponse.
TUnit: Holds information about a single response unit
TUnitList: A list of units
TCapabilitiesList: An intermediate structure used by TResponse.GetResponse
to build a list of needed capabilities and the units that can
fulfill them
TCapability: An individual entry in the CapabilitiesList
TResponseList: A list containing sets of units that can fulfill a given response
request
}
interface
uses classes, dbtables,sysutils, geosys ;
Type
Tunit = class
unitID: string;
dist: real;
status: string;
location: integer;
end;
TUnitList = class (TList)
private
TheList:TList;
protected
function Get(lndex: Integer): TUnit;
procedure Put(lndex: Integer; Item: TUnit);
public
constructor create;
destructor free;
procedure add(Value:TUnit);
procedure lnsertByDist(value:TUnit);
property ltems[lndex: Integer]: TUnit read Get write Put; default;
function count: integer;
procedure delete(lndex:integer);
end;
Tcapability = class
capab: string;
qty: integer;
unitslist: TUnitList; {will hold a list of available units that have this capability}
constructor create;
destructor free;
end;
TCapabilitiesList = class (TList)
private
capabList:TList;
public
constructor create;
destructor free;
procedure clear;
end;
TRespList = class (TList)
private
TheRespList: TList;
protected
function Get(lndex: Integer): TUnitList;
procedure Put(lndex: Integer; Item: TUnitList);
public
constructor create;
destructor free;
procedure add(ltem:TUnitList);
procedure clear;
property ltems[lndex: Integer]: TUnitList read Get write Put; default;
function count:integer;
end;
TResponse = class
private
RespList:TRespList;
CapabilityList:TCapabilitiesList;
BGUid: String;
DB:string;
TypeEvent:String;
EvaledNet:TGrassFire;
RespQ:TQuery;
UnitsQ: TQuery;
UnitsLocQ:TQuery;
function GetBGU:string;
procedure SetBGU (BGU_ID:string);
function GetEvent:string;
procedure SetEvent(Event:string);
public
property BGU:string read BGUid write BGUid;
property TypeOfEvent:String read TypeEvent write TypeEvent;
procedure FindResponse;
Function ResponseSet(index:integer):TUnitList; {returns the response set at
index}
function ResponseSetSize(index:integer):integer; {returns the size of the
response set at index}
function ResponseListSize:integer; {returns the number of response sets in the
list}
constructor create(EvaluatedNetwork:TGrassFire; BGU_ID, EventType:string;
DataBase:string);
destructor free;
end;
implementation
{--------------------------TCapability-----------------------}
constructor TCapability.create;
begin
inherited create;
end;
destructor TCapability.free;
var
temp: TUnit;
begin
While unitslist.count > 0 do
begin
temp:= unitslist.first;
temp.free;
unitslist.delete(0);
end;
unitslist.free;
inherited free;
end;
{-----------------------TCapabilitiesList-----------------------}
constructor TCapabilitiesList.create;
begin
inherited create;
capablist:= TList.create;
end;
destructor TCapabilitiesList.free;
var
temp:TCapability;
begin
while capablist.count > 0 do
begin
temp:= capablist.first;
temp.free;
capablist.delete(0);
end;
Capablist.free;
inherited free;
end;
procedure TCapabilitiesList.clear;
var
temp:TCapability;
begin
while capablist.count > 0 do
begin
temp:= capablist.first;
temp.free;
capablist.delete(0);
end;
inherited clear;
end;
{-----------------------TUnitList-----------------------------}
constructor TUnitList.create;
begin
inherited create;
TheList:= TList.create;
end;
destructor TUnitList.free;
var
TempUnit:TUnit;
begin
while TheList.count > 0 do
begin
tempUnit:= TheList.items[0];
tempUnit.free;
TheList.delete(0);
end;
TheList.free;
if self <> nil then
inherited destroy;
end;
function TUnitList.count:integer;
begin
result:= TheList.count;
end;
procedure TUnitList.Add(value:TUnit);
begin
TheList.add(value);
end;
procedure TUnitList.InsertByDist(value:TUnit);
{inserts into the unitslist with the smallest distance values in front}
var
place:integer;
temp:TUnit;
cont:boolean;
listsize: integer;
begin
listsize:= TheList.count;
if listsize = 0 then
TheList.add(value)
else
begin
place:= 0;
cont:= true;
while cont and (place < listsize) do
begin
temp:= TheList.items[place];
if temp.dist < value.dist then
place:= place + 1
else
cont:= false;
end; {while}
TheList.insert(place,value);
end; {else}
end; {InsertbyDist}
function TUnitList.Get(Index: Integer): TUnit;
begin
result:= TheList.items[index];
end;
procedure TUnitList.Put(Index: Integer; Item: TUnit);
begin
TheList.items[index]:= Item;
end;
procedure TUnitList.Delete(Index: Integer);
begin
TheList.delete(Index);
end;
{-----------------------TRespList-----------------------------}
constructor TRespList.create;
begin
inherited create;
TheRespList:= TList.create;
end;
destructor TRespList.free;
var
templist:TUnitList;
begin
while TheRespList.count > 0 do
begin
templist:= TheRespList.first;
templist.free;
TheRespList.delete(0);
end;
TheRespList.free;
if self <> nil then
inherited destroy;
end;
procedure TRespList.clear;
var
templist:TUnitList;
begin
while TheRespList.count > 0 do
begin
templist:= TheRespList.first;
templist.free;
TheRespList.delete(0);
end;
inherited clear;
end;
procedure TRespList.add(Item:TUnitList);
begin
TheRespList.add(Item);
end;
function TRespList.count:integer;
begin
result:= TheRespList.count;
end;
function TRespList.Get(Index: Integer): TUnitList;
begin
result:= TheRespList.items[index];
end;
procedure TRespList.Put(Index: Integer; Item: TUnitList);
begin
TheRespList.items[index]:= Item;
end;
{-------------------------TResponse-------------------------}
constructor TResponse.create(EvaluatedNetwork:TGrassFire; BGU_ID,
EventType:string; DataBase:string);
begin
inherited create;
RespList:= TRespList.create;
CapabilityList:= TCapabilitiesList.create;
evaledNet:= EvaluatedNetwork;
BGU:= BGU_ID;
TypeEvent:= EventType;
DB:= Database;
RespQ:=TQuery.create(nil);
UnitsQ:=TQuery.create(nil);
UnitsLocQ:= TQuery.create(nil);
RespQ.DataBaseName:= DataBase;
UnitsQ.DataBaseName:= DataBase;
UnitsLocQ.DataBaseName:= DataBase;
end;
destructor TResponse.free;
begin
RespList.free;
CapabilityList.free;
RespQ.free;
UnitsQ.free;
UnitsLocQ.free;
inherited free;
end;
function TResponse.GetBGU:string;
begin
result:=BGU;
end;
procedure TResponse.SetBGU (BGU_ID:string);
begin
BGU:= BGU_ID;
end;
function TResponse.GetEvent:string;
begin
result:= TypeEvent;
end;
procedure TResponse.SetEvent(Event:string);
begin
TypeEvent:= Event;
end;
function TResponse.ResponseSet(index:integer):TUnitList;
var
unitlist:TUnitList;
begin
UnitList:= RespList.items[index];
result:= UnitList;
end;
function TResponse.ResponseSetSize(index:integer):integer;
var
temp:TUnitList;
begin
temp:= RespList.items[index];
result:= temp.count;
end;
function TResponse.ResponseListSize:integer;
begin
result:= RespList.count;
end;
procedure TResponse.FindResponse;
{goes through and builds a list of needed capabilities along with the units (by
order of the closest ones first) that can meet each need. This list is then used
to build a list of response sets.
}
var
CapabilityRec:Tcapability;
NewUnit: TUnit;
UnitList:TUnitList;
status: string;
location:integer;
sqlstring:string;
SQLStr: string;
ClosestNode: TClosestNode;
procedure buildRespList;
var
RespListSet:TUnitList;
Capability:TCapability;
CapabUnitsList:TUnitList;
i, j:integer;
capablistcount:integer;
function inList(UnitItem:TUnit):boolean;
var
UnitList:TUnitList;
LocalUnitItem:TUnit;
found:boolean;
ListNum,UnitNum:integer;
begin
found:= false;
if RespList.count > 0 then
begin
for listnum:= 0 to (RespList.count-1) do
begin
UnitList:= RespList.items[listnum];
for unitnum:= 0 to (UnitList.count-1) do
begin
LocalUnitItem:= UnitList.items[unitnum];
if LocalUnitItem.UnitID = UnitItem.UnitID then
found:= true;
end; {for unitnum}
end; {for listnum}
end; {if resplist.count > 0}
result:= found;
end; {InList}
begin {BuildRespList}
while CapabilityList.count > 0 do
begin
RespListSet:= TUnitList.create;
capabListCount:= CapabilityList.count;
for i:= 0 to (CapabListcount -1) do {go through each item in the capability }
{list and build a set of units to fulfill each}
begin {needed capability}
Capability:= CapabilityList.items[i];
CapabUnitsList:= Capability.UnitsList;
for j:= 1 to Capability.qty do
begin
if CapabUnitsList.count <> 0 then {check for unassigned units}
begin
if not inList(CapabUnitsList.items[0]) then {check to see if the unit is}
begin {already in the recommendation list}
RespListSet.add(CapabUnitsList.items[0]);
capabUnitsList.delete(0);
end {if not inList...}
else {already in recommendations list still need to delete the item}
capabUnitsList.delete(0);
end {if capabunitslist.count...}
else
begin
CapabilityList.delete(i); {when the units list is empty delete the item}
break; {from the capabilities list}
end; {else}
end; {for j}
end; {for i}
if respListSet.count <> 0 then
RespList.add(RespListSet)
else
RespListSet.free;
j:=0;
for i:= 0 to (capablistcount -1) do {look at the unit list for each capability}
begin {delete the capabilities that have empty}
capability:= capabilitylist.items[j]; {unit lists}
CapabUnitsList:= capability.UnitsList;
if CapabUnitsList.count = 0 then
capabilityList.delete(j) {index stays the same when an element is }
else {eliminated}
j:=j + 1;
end; {for i:= (capablistcount -1)) do}
end; {while CapabilityList.count > 0 do}
end; {buildresplist}
begin {findresponse}
RespList.clear;
CapabilityList.clear;
ClosestNode:= TClosestNode.create;
respq.close;
unitsq.close;
respq.close;
respq.sql.clear;
SqlString:= 'Select * from response, runcard where (response.resp_type = runcard.resp_type) and ';
SqlString:= SqlString + '(Runcard.event_type = :event_type) and (Runcard.bgu = :bgu)';
respq.sql.add(SQLString);
respq.params[0].AsString:= TypeEvent;
respq.params[1].AsString:= BGUid;
respq.prepare;
respq.open;
respq.first;
UnitsLocQ.close;
UnitsLocQ.sql.clear;
SQLStr:= 'Select Latitude, Longitude from Units_location where Unit_ID = :UnitID';
UnitsLocQ.sql.add(SQLStr);
UnitsLocQ.prepare;
{numCaps:= 0;}
while not respq.eof do
begin
capabilityRec:=Tcapability.create;
capabilityRec.capab:= respq.fieldbyname('capability').AsString;
capabilityRec.qty:= respq.fieldbyname('qty').AsInteger;
UnitList:= TUnitList.create;
unitsq.close;
unitsq.sql.clear;
sqlstring:= 'Select * from units where ((capability = :capability) and ((status = "A") or (status = "P")))';
unitsq.sql.add(sqlstring);
unitsq.params[0].AsString:= capabilityRec.capab;
unitsq.prepare;
unitsq.open;
unitsq.first;
while not unitsq.eof do
begin
status:= unitsq.fieldbyname('status').AsString;
if (status = 'A') or (status = 'P') then
begin
UnitsLocQ.close;
UnitsLocQ.params[0].AsString:= Unitsq.fieldbyname('Unit_id').AsString;
UnitsLocQ.prepare;
UnitsLocQ.open;
UnitsLocQ.first;
{The following code finds the closest node. There is also a column in
the units_location table for the closest node; using that column runs
faster, but it must be filled in whenever the latitude and longitude
values are updated.}
ClosestNode.PointLong:= unitsLocQ.fieldbyname('Longitude').AsInteger;
ClosestNode.PointLat:= unitsLocQ.fieldbyname('Latitude').AsInteger;
ClosestNode.findClosestNode(DB,'Nodes');
NewUnit:= TUnit.create;
NewUnit.UnitID:= unitsq.fieldbyname('Unit_id').AsString;
location:= ClosestNode.ClosestNodeID;
{alternative, faster lookup via the precomputed column:
location:= unitsLocQ.fieldbyname('CLOSEST_NODE').AsInteger;}
NewUnit.Dist:= EvaledNet.RMEntry_NodeWt(location);
NewUnit.status:= status;
NewUnit.location:=location;
UnitList.InsertByDist(NewUnit);
end; {if status}
unitsq.next;
end; {while not}
capabilityRec.unitslist:= UnitList;
CapabilityList.add(capabilityRec);
respq.next;
end; {while not respq.eof}
buildRespList;
ClosestNode.free;
end; {findResponse}
end. {unit Resp_rec}
unit Move_up;
{
Provides functionality to evaluate the geographic system and make
recommendations regarding the need for a move-up.
The move-up object requires an input of the geo system. Typically this is
the system of BGUs, with nodes representing the centroid of each BGU and
arcs representing an average time for traveling from one BGU to the next. A
GeoSystem object is built in the instantiation of a move-up object; see the
GeoSys unit.
Current unit locations are resolved to nodes and a sources list is built for
the Grassfire object.
Database table information used:
Units_Location Table - this table is passed in; the actual table name and
the name of the database containing it are picked off and used elsewhere in
the class.
fields used:
Unit_ID
Latitude
Longitude
Type (a major type classification typically something like 'E' for engine
or 'R' for rescue)
InQuarters
TimeReturning
AssignedStn
BGUData Table -
BGU_ID
Prob
RespStn
NodesTable
BGU_ID
}
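{Illustrative usage sketch, added for clarity; not part of the original unit.
 BGUArcs, BGUNodes, UnitsLoc and BGUData are assumed to be TTable components
 already attached to the corresponding tables, and NotifyDispatcher is a
 hypothetical routine; the 10 minute interval and 240 second (4 minute)
 response standard are example values only.

   var
     MoveUp: TMoveUp;
   begin
     MoveUp:= TMoveUp.create(BGUArcs, BGUNodes, 10);  // 10 min of non-coverage
     MoveUp.MaxResponseTime:= 240;                    // 4 minutes, in seconds
     if MoveUp.MU_needed(UnitsLoc, BGUData, 'E') then
       // at least one BGU will be without engine coverage for more than
       // 10 minutes, so a move-up should be considered
       NotifyDispatcher;
     MoveUp.free;
   end;
}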
interface
uses classes, SysUtils, dbtables, GeoSys, DateTime;
type
TBGU = class
private
BGU_ID:string;
RespStation:String; {normal responsible station}
DistToClosestStn:real; {current response time}
Probability:real;
EstNonCoverageTime: LongInt;
CurrentTime: TDateTime;
function GetID: string;
procedure SetID (ID: string);
function GetStation:string;
procedure SetStation (Station:String);
function GetDistance:real;
procedure SetDistance(Distance:real);
function GetProb:Real;
procedure SetProb(Prob:real);
function GetNoCoverTime: LongInt;
procedure SetNoCoverTime(Time: LongInt);
function GetCurrentTime:TDateTime;
procedure SetCurrentTime(Time:TDateTime);
public
property ID:string read GetID write SetID;
property ResponsibleStation:string read GetStation write SetStation;
property ClosestStationDist:real read GetDistance write SetDistance;
property IncidentProbability:real read GetProb write SetProb;
property EstMinutesWithoutCoverage:Longint read GetNoCoverTime write
SetNoCoverTime;
property EvalTime:TDateTime read GetCurrentTime write SetCurrentTime;
end;
TmoveUp = class
private
MaxRespTime: LongInt; {maximum acceptable travel time}
IntervalOfNoMove: LongInt; {Time span of non coverage before a move-up}
{is recommended, in minutes}
UncoveredBGUs: TList;
BGUgeoSys: TGeoSystem;
GrassFireObj: TGrassFire;
NodesTableName: string;
NodesTableDB: string;
procedure addBGUtoList (BGU:TBGU);
function UncoveredBGUcount: LongInt;
procedure SetUnitsLoc (UnitsLocTable:Ttable; TypeUnit:char); {establishes unit
locations and builds sources list for Grassfire}
function GetInterval: LongInt;
procedure SetInterval(TimeInMinutes: LongInt);
function GetMaxResponseTime: LongInt;
procedure SetMaxResponseTime(TimeInSeconds: LongInt);
public
constructor create(Arcs, nodes:Ttable; IntervalNoMove: LongInt);
destructor free;
property MaxResponseTime: LongInt read GetMaxResponseTime write
SetMaxResponseTime;
property NoMoveUpInterval: LongInt read GetInterval write SetInterval;
function
MU_needed(UnitsLocTable, BGUDataTable:Ttable; TypeUnit:char):boolean;
{returns true if there are bgus in the system that will be uncovered for more
than IntervalOfNoMove. Uses NodesTableDB value (set in create) and
creates a dynamic query into the units and units_loc tables}
end;
implementation
function TBGU.GetID:string;
begin
result:= BGU_ID;
end;
procedure TBGU.SetID (ID:string);
begin
BGU_ID:= ID;
end;
function TBGU.GetProb:Real;
begin
result:= Probability;
end;
procedure TBGU.SetProb (Prob:real);
begin
Probability:= Prob;
end;
function TBGU.GetDistance:Real;
begin
result:= DistToClosestStn;
end;
procedure TBGU.SetDistance (distance:real);
begin
DistToClosestStn := distance;
end;
function TBGU.GetStation:String;
begin
Result:= RespStation;
end;
procedure TBGU.SetStation (Station:String);
begin
RespStation:= Station;
end;
function TBGU.GetNoCoverTime: LongInt;
begin
result:= EstNonCoverageTime;
end;
procedure TBGU.SetNoCoverTime(Time: LongInt);
begin
EstNonCoverageTime:= Time;
end;
function TBGU.GetCurrentTime:TDateTime;
begin
result:= CurrentTime;
end;
procedure TBGU.SetCurrentTime(Time:TDateTime);
begin
CurrentTime:= Time;
end;
constructor TMoveUP.Create(arcs, nodes:Ttable; IntervalNoMove: LongInt);
{the arcs and nodes tables should be the BGU geo system: nodes representing
the centers of BGUs and arcs between them}
begin
inherited create;
UncoveredBGUs:= TList.create;
BGUgeoSys:= TGeoSystem.create(arcs, nodes);
GrassFireObj:= TGrassFire.create(BGUGeoSys);
NodesTableName:= nodes.TableName;
NodesTableDB:= nodes.DataBaseName; {used to access other tables in
moveup modules}
IntervalOfNoMove:= IntervalNoMove;
end;
destructor TMoveUP.Free;
begin
UncoveredBGUs.free;
BGUGeoSys.free;
GrassFireObj.free;
inherited free;
end;
function TMoveUP.GetInterval: LongInt;
begin
result:= IntervalOfNoMove;
end;
procedure TMoveUP.SetInterval(TimeInMinutes: LongInt);
begin
IntervalOfNoMove:= TimeInMinutes;
end;
function TMoveUP.GetMaxResponseTime: LongInt;
begin
result:= MaxRespTime;
end;
procedure TMoveUP.SetMaxResponseTime(TimeInSeconds: LongInt);
begin
MaxRespTime:= TimeInSeconds;
end;
procedure TMoveUP.AddBGUtoList (BGU:TBGU);
begin
uncoveredBGUs.add(BGU);
end;
function TMoveUP.UncoveredBGUcount: LongInt;
begin
result:= UncoveredBGUs.count;
end;
procedure TMoveUP.setunitsLoc (UnitsLocTable:Ttable;typeunit:char);
var
UnitLat, UnitLong: LongInt;
ClosestNode: TClosestNode;
begin
ClosestNode:= TClosestNode.create;
with UnitsLocTable do
begin
indexfieldnames:= 'type';
setrangestart;
fieldbyname('type').AsString:= typeunit;
setrangeend;
fieldbyname ('type').AsString:= typeunit;
applyrange;
first;
while not eof do
begin
ClosestNode.PointLat:= fieldbyname('Latitude').AsInteger;
ClosestNode.PointLong:= fieldbyname('Longitude').AsInteger;
ClosestNode.FindClosestNode(NodesTableDB, NodesTableName);
GrassFireObj.AddSource(ClosestNode.ClosestNodeID);
next;
end; {while not eof}
end; {with}
end;
function TMoveUP.MU_needed (unitsLoctable,
BGUDataTable:Ttable;TypeUnit:char):boolean;
{Inputs: a table containing units, their locations and major type classification.
This is designed to be run for a specific type of resource (TypeUnit), such
as engines or rescue vehicles. The BGUGeoSys (TGeoSystem) object is used
to find the BGUs which are uncovered. Returns true if there are uncovered
BGUs with an expected time of noncoverage in excess of NoMoveUpInterval}
var
i: longint;
BGUQuery1, BGUQuery2, UnitsQuery: TQuery;
BGU: TBGU;
SQLString1, SQLString2, SQLString3: String;
Prob: real;
Station: string;
ReturnTime: TDateTime;
BGUid: string;
TimeOut: LongInt;
procedure InitQueries (var BGUQuery1, UnitsQuery:TQuery; NodesTableDB,
NodesTableName: String);
var
UnitsLocTableName: string;
begin
UnitsLocTableName:= UnitsLocTable.TableName;
BGUQuery1:= TQuery.create(nil);
UnitsQuery:= TQuery.create(nil);
BGUQuery1.databaseName:= NodesTableDB;
UnitsQuery.databaseName:= NodesTableDB;
SQLString1:= 'Select BGU_ID from ';
SQLString1:= SQLString1 + NodesTableName;
SQLString1:= SQLString1 + ' where Node_ID = :indx';
BGUQuery1.SQL.add(SQLString1);
SQLString2:= 'Select TimeReturning from ';
SQLString2:= SQLString2 + UnitsLocTableName;
SQLString2:= SQLString2 + ' where AssignedStn = :stn and Type = :UT';
UnitsQuery.SQL.add(SQLString2);
BGUQuery1.prepare;
UnitsQuery.prepare;
end; {InitQueries}
function getBGUid (BGUQuery1:TQuery; indx:longint):string;
begin
BGUQuery1.close;
BGUQuery1.params[0].AsInteger:= indx;
BGUQuery1.open;
BGUQuery1.first;
result:= BGUQuery1.fieldbyname('BGU_ID').AsString;
end; {getBGUid}
procedure GetBGUData (BGUDataTable:Ttable; BGUid: string; var
Station:string; var prob:real);
begin
BGUDataTable.SetKey;
BGUDataTable.fieldbyname('BGU_ID').AsString:= BGUid;
BGUDataTable.GotoKey;
Station:= BGUDataTable.fieldbyname('RespStn').AsString;
Prob:= BGUDataTable.fieldbyname('Prob').AsFloat;
end; {GetBGUData}
function GetUnitReturnTime (UnitsQuery:TQuery; Station:string;
TypeUnit:char):TDateTime;
begin
{Select from units_loc table where assignedStn = station and type = unittype}
UnitsQuery.close;
UnitsQuery.params[0].AsString:= station;
UnitsQuery.params[1].AsString:= TypeUnit;
UnitsQuery.open;
UnitsQuery.first;
Result:= UnitsQuery.fieldbyname('TimeReturning').AsDateTime;
end; {GetUnitReturnTime}
function evalTime(ReturnTime: TDateTime): LongInt;
begin
result:= DateTime.DifferencelnMinutes(Now,ReturnTime);
end; {eval_time}
begin {MU_needed}
UnitsLocTable.open;
BGUDataTable.open;
SetUnitsLoc(UnitsLocTable,TypeUnit); {set up sources list for Grassfire}
GrassFireObj.burn;
InitQueries (BGUQuery1, UnitsQuery, NodesTableDB, NodesTableName);
for i:= 1 to maxNodes do
begin
if Not GrassFireObj.RmEntry_IsNull(i) then
begin
if GrassFireObj.RmEntry_NodeWt(i) > MaxResponseTime then
begin {note: the nodeID is the index (i here) into the result matrix; this is a
different value than the BGU number. Find the responsible station and the
anticipated duration of the period out of quarters; if > IntervalNoMove,
then add this bgu to the list}
BGUid:= getBGUid (BGUQuery1,i);
GetBGUData (BGUDataTable, BGUid,Station,prob);
ReturnTime:= GetUnitReturnTime (UnitsQuery, station, TypeUnit);
TimeOut:= EvalTime(ReturnTime);
if TimeOut > IntervalOfNoMove then
begin
BGU:= TBGU.create;
BGU.ID:=BGUid;
BGU.ResponsibleStation:= Station;
BGU.ClosestStationDist:= GrassFireObj.RMEntry_NodeWt(i);
BGU.IncidentProbability:= Prob;
BGU.EstNonCoverageTime:= TimeOut;
BGU.EvalTime:= Now;
UncoveredBGUs.add(BGU);
end; {if TimeOut > intervalnomove then}
end; {if nodeWt > MaxResponseTime}
end; {if not is null}
end; {for i}
if UncoveredBGUs.count > 0 then
result:= true
else
result:= false;
end; {MU_needed}
end. {unit Move_up}
unit fifo_que;
{A first in, first out queue. The add method adds an item to the end of the
queue}
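{Illustrative usage sketch, added for clarity; not part of the original unit.
 Node IDs are added at the tail and retrieved from the head in FIFO order,
 as the grassfire algorithm does when expanding the network; the values
 below are arbitrary.

   var
     Q: TFifoQue;
   begin
     Q:= TFifoQue.create;
     Q.addItem(17);
     Q.addItem(42);
     while not Q.is_empty do
       writeln(Q.RetrieveHead);   // prints 17 then 42
     Q.free;
   end;
}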
interface
uses classes, SysUtils;
type
TLongIntPtr = ^LongInt;
ERetrieveFromEmptyList = class (Exception);
TFifoQue = class (TObject)
private
TheList:TList;
public
constructor Create;
destructor free;
procedure addItem (Value: LongInt);
function RetrieveHead: LongInt;
function is_empty: boolean;
end;
implementation
constructor TFifoQue.create;
begin
inherited create;
TheList:= TList.create;
end; {Create}
destructor TFifoQue.free;
var
temp: TLongIntPtr;
begin
While TheList.count <> 0 do
begin
temp:= TheList.items[0];
freemem(temp, sizeof(temp^)); {release the LongInt allocated in addItem}
TheList.delete(0);
end;
TheList.free;
end;
procedure TFifoQue.addItem(Value: LongInt);
var
temp: TLongIntPtr;
begin
getmem(temp, sizeof(temp^)); {allocate space for the LongInt itself}
temp^:= value;
TheList.add(temp);
end; {add}
function TFifoQue.RetrieveHead: LongInt;
var
temp: TLongIntPtr;
begin
if TheList.count > 0 then
begin
temp:= TheList.items[0];
result:= temp^;
TheList.remove(temp);
freemem(temp, sizeof(temp^)); {release the memory allocated in addItem}
end
else
Raise ERetrieveFromEmptyList.create('Attempt to Retrieve from an Empty
List (TFifo_que.RetrieveHead)');
end; {retrieve}
function TFifoQue.is_empty: boolean ;
begin
if TheList.count = 0 then
is_empty:= true
else
is_empty:= false;
end; {is_empty}
end. {unit fifo_que}
References
S. Aronoff, Geographic Information Systems A Management Perspective,
WDL Publications, Ottawa, Canada, 1989
S.L. Arlinghaus (ed.), Practical Handbook of Digital Mapping Terms and
Concepts, CRC Press Inc., 1994
Ball, M.O. and Feng, L.L., "A Reliability Model Applied to Emergency Service
Vehicle Location", Operations Research, 41:1, Jan-Feb, 1993
Berman, O., Larson, R.C., and Chiu, S.S., "Optimal Server Location on a Network
Operating as an M/G/1 Queue", Operations Research, 33: 746-771, 1985
Bertsekas, D. P., "Linear Network Optimization: Algorithms and Codes", MIT
Press, Cambridge, Massachusetts, 1991
Carter, G. and Ignall, E., "A Simulation Model of Fire Department Operations:
Design and Preliminary Results", R-632-NYC, The New York City Rand
Institute, New York, December, 1970
Church, R.L., ReVelle, C., "The Maximal Covering Location Problem", Papers of
the Regional Science Assoc., 32:101-118, Fall, 1974
Chvatal, V., "Linear Programming", W.H. Freeman and Company, New York,
1983
Cooper, L., "Heuristic Methods for Location-Allocation Problems", SIAM Review,
Vol. 6:1, January, 1964
Cormen, T., Leiserson, C., Rivest, R., "Introduction to Algorithms", The MIT
Press, Cambridge, Mass., 1990
Daskin, M.S., "Location, Dispatching, and Routing Models", in Spatial Analysis
and Location-Allocation Models, A. Ghosh and G. Rushton (ed), Van Nostrand
Reinhold Company, New York, 1987
Daskin, M.S., "Maximum Expected Covering Location Model: Formulation,
Properties and Heuristic Solution", Transportation Science, Vol. 17:1, February,
1983
Daskin, M.S., "Application of an Expected Covering Model to Emergency
Medical Service System Design", Decision Sciences, Vol. 13, 1982
Daskin, M.S., Stern, E.H., "Hierarchical Objective Set Covering Model for
Emergency Medical Service Vehicle Deployment", Transportation Science, 15:
137-152, May, 1981.
Filippi, C. and Romanin-Jacur, G., "Optimal Allocation of Two Fixed Service
Units Acting as M/G/1 Queues", Transportation Science, 30:1, February, 1996
Hakimi, S.L., "Optimum Locations of Switching Center and the Absolute Centers
and Medians of a Graph", Operations Research, 12:450-459, 1964
Halpern, J., Maimon, O., "Accord and Conflict Among Several Objectives in
Locational Decisions on Tree Networks", in Locational Analysis of Public
Facilities, J.F. Thisse and H.G. Zoller (ed), North-Holland Publishing Company,
1983
Hansen, P., Peeters, D. and Thisse, J. F., "Public Facility Location Models: A
Selective Survey", in Locational Analysis of Public Facilities, J.F. Thisse and
H.G. Zoller (ed), North-Holland Publishing Company, 1983
Hendrick, T.E., Plane, D.R., Monarchi, D.E., Tomasides, C.,Heiss, F.W.,
Walker, W.E., "An Analysis of the Deployment of Fire-Fighting Resources in
Denver, Colorado", R-1566/3-HUD, The New York City Rand Institute, New
York, May 1975
Hogan, K. and ReVelle, C., "Concepts and Application of Backup Coverage",
Management Science, 32:1434-1444, Nov, 1986
Jacobson, Ivar, Christerson, M., Jonsson, P., Overgaard, G. "Object-Oriented
Software Engineering A Use Case Driven Approach", ACM Press, 1992
Karimi, H. A., Krakiwsky, E.J., Harris, C., Craig, G., Goss, R. "A Relational
Database Model for an AVL System and an Expert System for Optimal Route
Selection.", AutoCarto 8 Proceedings Eighth International Symposium on
Computer Assisted Cartography, April, 1987 ASPRS/ACSM
Kariv, O. and S.L. Hakimi, "An Algorithmic Approach to Network Location
Problems, I: The p-Centers", SIAM Journal on Applied Mathematics, 37:513-538,
1979a
Kariv, O. and S.L. Hakimi, "An Algorithmic Approach to Network Location
Problems, II: The p-Medians", SIAM Journal on Applied Mathematics, 37:539-560,
1979b
Kolesar, P. and Walker, W.E., "An Algorithm for the Dynamic Relocation of Fire
Companies", R-1023-NYC, The New York City Rand Institute, New York., 1974
Kolesar, P. and Blum, E., "Square Root Laws for Fire Engine Response
Distances", Management Science, 19:1368-1378, 1973
Larson, R.C., "A Hypercube Queuing Model for Facility Location and
Redistricting in Urban Emergency Services", Computers and Operations
Research, 1:67-95, 1974
Lee, Jay, "A Data Model for Dynamic Vehicle Routing with GIS", GIS/LIS 1990
Proceedings Volume 1, ACSM/ASPRS/AAG/URISA/AMFM
Lupien, A.E., Moreland, W.H., Dangermond, J., "Network Analysis in
Geographic Information Systems", Photogrammetric Engineering and Remote
Sensing, 53:10, October 1987, pp. 1417-1421
Mirchandani, P.B. and Reilly, J.M., "Spatial Distribution Design for Fire Fighting
Units", in Spatial Analysis and Location-Allocation Models, A. Ghosh and G.
Rushton (eds.), Van Nostrand Reinhold Company, New York, 1987
Potts, R., Oliver, R., "Flows in Transportation Networks", Academic Press, New
York, 1972
Rider, K.L., "A Parametric Model for the Allocation of Fire Companies", R-1615-
NYC/HUD, The New York City Rand Institute, New York, April, 1975
Scott, Allen J., "An Introduction to Spatial Allocation Analysis", Association of
American Geographers, Resource Paper No. 9, 1971
Teitz, M.B., and Bart, P., "Heuristic Methods for Vertex Median of a Weighted
Graph", Operations Research, 16:955-961, 1968
Toregas, C., R. Swain, C. ReVelle, and L. Bergman, "The Location of
Emergency Service Facilities", Operations Research, 19:1363-1373, 1971
Walker, W.E., Chaiken, J.M., and Ignall, E.J. (eds), "Fire Department
Deployment Analysis: The Rand Fire Project", North Holland, New York, 1979
Full Text

PAGE 1

Resource Allocation Across Geographically Distributed Domains: An Application in the Emergency Services Industry by Tara K. Hunter B.A., University of Colorado at Denver, 1989 A thesis submitted to the University of Colorado at Denver in partial fulfillment of the requirements for the degree of Master of Science Computer Science 1997

PAGE 2

This thesis for the Master of Science degree by Tara K. Hunter has been approved by Gita Alaghband Jay 5'--b-q] Date

PAGE 3

Hunter, Tara K. (M.S., Computer Science) Resource Allocation Across Geographically Distributed Domains: An Application in the Emergency Services Industry Thesis directed by Professor William Wolfe ABSTRACT Emergency service industries, such as fire departments, face significant problems in dealing with the geographically distributed resources. Tactical issues address the immediate response to a request for assistance. Strategic issues address the placement of resources such that the greatest overall benefit can be realized. This includes facility location as well as the assignment of resources to each facility. Typically, a response has been based on a look up table that assumes all available units are in their assigned stations. The reality is that the units often are elsewhere (i.e. doing inspections, training, investigating other incidents, etc.). In a situation where seconds can mean the difference between life and death, it's important to know exactly which units are the closest to a given incident so the response can be most effective. Recent technological advances allow for the automatic location of vehicles through the use of the department of defenses' global positioning system (GPS). This, combined with transportation network analysis tools and geographic information system digital mapping concepts, opens the way for a solution paradigm based on up-to-date location information. This thesis explores resource distribution issues in the context of a Computer Aided Dispatch system. The focus is on response recommendation and resource reallocation during periods of intense demand within one or more areas of the system. iii

PAGE 4

This abstract accurately represents the content of the candidate's thesis. I recommend its publication. iv

PAGE 5

DEDICATION This thesis is dedicated to my son Erik, who, at five, demonstrates wisdom beyond my years.

PAGE 6

ACKNOWLEDGEMENTS I would like to thank the following people, without whom, this endeavor would never have succeeded. Dr. William Wolfe for his wealth of knowledge, and his interest and enthusiasm in this project. Kent Phelps for his wealth of technical expertise and his unwavering support and belief in me. Chief Charles Burdick of the Littleton Fire Department for his undying patience and ever open door. Without his vision and insight, this project would never have even been proposed. The dedicated men and women in the South Metro Fire Protection District dispatch center for sharing their insights into not only the current system and it's shortcomings, but also their understanding of how things should be.

PAGE 7

CONTENTS Chapter 1 Introduction 1 1.1 Motivation 1 1.2 Problem Overview 2 1.2.1 The Current System 2 1.2.2 Requirements and Constraints of the Proposed System 4 2 Current Technology 7 2.1 Geographic Information Systems 7 2.2 Global Positioning System and Automatic Vehicle Locators 8 3. Resource Allocation and Deployment Models Literature Review 10 3.1 Overview 10 3.2 Normative Models 12 3.2.1 Location Criteria in Objective Functions 12 3.2.2 Discrete Location-Allocation Models 13 3.3 Descriptive (Evaluative) Models 18 3.3.1 Simulation Models 18 3.3.2 The Hypercube Queuing Model 19 3.3.3 A Parametric Solution Model 20 3.4 Heuristics 20 3.4.1 Linear Programming Methods 20 3.4.2 The Rand Institute Study on Fire Department Deployment Analysis 20 vii

PAGE 8

4. Solution Formulations: The South Metro Fire Protection District Computer Aided Dispatch System ........................................................... 25 4.1 Response RecommendationsThe Dynamic Allocation of Resources ...... 25 4.1.1 The Data Models ..................................................................................... 26 4.1.2 Finding the Closest Units-The Transportation Network ......................... 27 4.1.3 Path Finding Algorithms Dijkstra's and Grassfire .................................. 29 4.2 Move-up Static Resource Allocation ........................................................ 33 4.2.1 Criteria ....................... : .......................................................................... .. 33 4.2.2 The Solution Approach ............................................................................ 34 4.2.3 Finding the p-Median of the Ranked Coverage Matrix ............................ 39 4.3 Testing the Solution. A Simulation Model. .................................................. 40 Appendix A. Glossary .............................................................................. 44 B. Source Code ........................................................................ 45 References ............................................................................... 83 viii

PAGE 9

1 Introduction 1.1 Motivation Just as with any industry, the Emergency Services Industries strive to optimize their "profits" (in this case, the profit is in terms of the quality of service to the community) within the various constraints to their abilities. The goal of a fire department is to minimize property damage, injuries and loss of life subject to budgetary constraints. The amount of firefighters spend actually servicing calls for assistance is relatively small1 Due to this, the traditional solution to assigning resources to incidents has been based on the assumption that most of the time fire companies are in their stations. Each station is assigned a district of responsibility based on the experience and knowledge of the domain experts. Response recommendations come from a look-up table that was painstakingly derived by senior fire department personnel analyzing the system and deciding which regions were closest to which stations, not only for the first responding units, but also for the case when multiple units are needed or when the first responding unit is unavailable. During periods of high demand, portions of the system may become 'uncovered', i.e., there may be areas that cannot be reached within an acceptable amount of time. In this situation, the fire department will seek to lower the risk of system failure by relocating available units into the deficient areas2 It is typically left up to the employees in the dispatch center to recognize the need for such a relocation and to determine which units should be moved. This approach is deficient in the following ways. First, firefighters may spend a relatively small portion of their time actually responding to incidents, but, this does not imply that they are always at their assigned stations. A significant amount of time is spent off site performing inspections, in training classes and investigating previous incidents. Second, with respect to the need for a redistribution of resources, the times when the need is most likely to arise, are the very times when dispatch personnel are the busiest and are least likely to 1 Reference for instance Hendrick, et al (1972). It was found that actual work time for pumper companies in the Denver fire department was between 2 and 4 percent of total work time. 2 This is referred to as 'move-up'. 1

PAGE 10

take the time to analyze the state of the system and recommend a relocation. Dispatchers end up shouldering a great deal of responsibility for tracking and determining assignment of resources. In the past 25 years much work has been done in the field of spatial analysis and distribution of resources in geographically distributed domains. Combining these concepts with the graphical user interface and relational database design concepts of geographic information systems allow for a more appropriate solution approach to these problems based on the current state of an ever changing system. Recent advances in the areas of Global Positioning Systems (GPS) and Automatic Vehicle Locators (AVL) supply the means to relieve dispatchers of a great deal of unnecessary responsibility. 1.2 Problem Overview This research was conducted in conjunction with the design and implementation of a computer aided dispatch (CAD) system for the South Metro Fire Protection District (SMFPD) dispatch center, located in Littleton, Colorado.3 This paper will address the response recommendation and the resource reallocation portions of the system.4 Discussed will be both the formal models and solutions of the problem domain, as found in the literature, and a model of the specific problem presented by the SMFPD. 1.2.1 The Current System The current CAD system utilized by SMFPD is a static system based on a set of look up tables. The entire 500 square mile district has been divided into approximately 1/4 square mile geographic cells called basic geographic units (BGUs). The Geofile contains all address ranges within each BGU and a pointer to an entry in the run card table. The runcards contain details about which units should be dispatched for a given incident type. 3 A special thanks goes to Fire Chief Charles Burdick who served as the main interface and primary contact for domain knowledge associated with this project. 4 A response recommendation presents a list to the dispatcher that contains units considered to be the most desirable to respond to a particular incident. The response reallocation portion, more commonly referred to as "move up", will scan the system for areas deficient in coverage and make recommendations about redistribution of the units that are available to protect against future incidents. 2

PAGE 11

This system has two major shortcomings. One is the frustration with the overall abstraction and poor design of the data model, the other is that a static model is being used to model a dynamic system. A more dynamic and robust solution is being sought through the use of currently available technology. The current data model is based on units to be dispatched to a given incident type. At first glance, this seems to be a reasonable approach. But in practice, this is too rigid and restrictive of a model. The most glaring example of this is the case where a single unit can serve the functionality of two or more logical types of units. For instance, for a car fire, the typical response is to send an engine and a rescue (paramedic) unit, but Littleton has one engine (currently housed at station 15) that can serve as both. A model of the system that is based on units would send an unnecessary unit to an incident whose response is out of station 15. Initially, management seemed to think response should be determined by but this turns out to be too static also, as firefighters don't always stay home. They are involved in on going activities such as training and inspections which have them out of the station on a regular basis. In addition, once an incident has been handled, the unit will go back in service (is available to respond to another incident) while still performing mop up activities at the site of the current incident. A second frustration with the data model, centers around the fact that the database is a flat file system, changes in resources have a ripple effect that cause a maintenance nightmare. A more appropriate data model will be presented later. A fire protection system is often viewed and modeled as a basically static system (for instance-Toregas, et al (1970) assume ... that each facility has response capability at all times."), but in reality, it is anything but static. When not responding to incidents, firefighters are often out of quarters for training purposes, performing routine inspections of businesses and other facilities, investigating incidents, etc. Every change in the system alters the appropriateness of a given response, and might indicate a need for a system reconfiguration. The current CAD doesn't have the ability to track the location of resources or to make recommendations surrounding the relocation of those resources. Response recommendations are based on the assumption that available resources are always in their home locations when not out responding to a call. Move-up is accomplished based on the gut feel of the dispatchers on duty. This system is inherently problematic in that the times when a need for a move-up are likely to occur are the very times when the dispatchers are the busiest and are least able to consider evaluating system needs. Computer assistance for move-ups includes monitoring the system, making suggestions for when a relocation should occur and suggestions for redistribution of resources to 3

PAGE 12

maintain minimal coverage standards and make the best use of available resources. 1.2.2 Requirements and Constraints of the Proposed System Dynamic allocation of resources has two major areas of consideration. The first is related to the management of various types of incidents, what resources are needed in response to a given incident type. This must take into account not only the current state of the system but also possible future states, there is always a trade off between conservation of resources and having needed resources available as quickly as possible. These types of decisions are made on a general level and are statically fixed for each type of incident. However, since reality never can be fully outlined, the judgment of the dispatcher as to what ultimately is needed for a given call is the bottom line on resource allocation. The dispatchers are the domain experts, and while they are given guidance and direction by management and policies, they ultimately make the decisions about the demands of a particular call. In a realm where seconds can make the difference between life and death, good decisions need to be made quickly. The best use for a computer system is to provide as much information as possible in a quick and comprehensible manner. As a way of supporting the decision making process. The intent is to build a system that tracks all available resources and makes recommendations for the allocation and distribution of those resources based on the current state of the system. The vision is to eventually equip all units with AVL (automatic vehicle locators), thus making available the exact latitude/longitude location of all mobile resources at any given moment. This data is to be used in determining the most appropriate response recommendations. As resources are allocated, the system will be responsible for detecting the need for a redistribution of available resources and making high quality suggestions regarding allocation of those resources. Since this is an approach that is conceptually different from that which the departments have operated under in the past, it is important that the system make recommendations that are believable by the domain experts. Any unreasonable recommendations will lead to a distrust of the system. The ability to trace the logic behind recommendations will help with trust and belief issues, as well as assist in tracking down problems with the system or faulty data. 4

PAGE 13

The effectiveness of a fire protection district is difficult to quantify.5 Ultimately, the objective is to maximize the usage of available resources in such a way as to minimize property damage and the loss of life. These metrics are the consequent of the activities of the department and are in direct relationship to their policies and procedures, but are not accessible to direct observation of variance under differing policy conditions. For this reason, internal measures of performance, primarily response times of each arriving unit, are used as a proxy measure to judge the merit of a given policy for static distribution as well as dynamic allocation of resources. Typically, response time is the primary metric used (Mirchandani and Reilly, 1987). The Littleton fire department has determined the maximum acceptable first unit response time measure to be about 4 minutes from the time the call is first taken to the time that the first unit arrives on scene. This value is based on various factors such as the amount of time the brain can go without oxygen before permanent loss of functioning Minimizing the number of responses that exceed the 4 minute limit is the primary requirement for the system. In addition to basic coverage standards, the issues surrounding the relocation of available vehicles need to take into account the work load of the various stations. Stations with heavier work loads are more likely to need the presence of available resources than those that are more often idle. If possible, reconfiguration suggestions should take into account the distance to be traveled and the inconvenience of the firefighters involved. With 24 hour shifts, firefighters to some extent live in their assigned stations, moves to other quarters are an inconvenience, thus the fewer moves that can be made to accomplish the objectives of the problem, the better. Longer distance moves are less preferred than shorter ones, the farther the move, the longer the unprotected area goes without adequate coverage. SMFPD maintains a number of agreements for automatic and mutual aid both within and outside of the constituent fire departments. 6 In a worst case scenario, a requirement of the system would be to determine system configuration of the entire metro Denver area including even possibly outlying 5 The difficulty of defining objectives that reflect the true metrics of a fire protection system is discussed through out the literature. See for instance, Hendrick, et al (1975) and Walker, Chaiken and lgnall (1979) 6 Automatic aid agreements allow for the automatic sharing of resources among the fire departments and are usually limited to a geographical area within a certain distance of. the shared boundaries of the departments. With mutual aid agreements, the needed resources are requested through the dispatch center of the requested department and are granted or denied based on the current demand within their system. 5

PAGE 14

areas such as Boulder and Castle Rock, as would be needed in the case of a large scale disaster. On a practical level, there is always the cost constraint. In this case the biggest concern seems to be the cost of acquiring high quality data. Many hours have already been invested in the currently used geofile. To make use of this is, if for no other reason, at least politically advisable. Even with the use of this file, a large amount of data is still needed to make this a truly dynamic system. Definition of the transportation network complete with lat/long coordinates is required. As an additional constraint, this is a real time system, Any calculations must either happen very quickly or must be accomplished prior to the time they are needed. Any visual interfaces (this is especially a concern with respect to mapping interfa.ces) should not impede the dispatch process. 6

PAGE 15

2 Current Technology 2.1 Geographic Information Systems Geographic Information Systems are computer systems that encapsulate a number of tools needed for computer analysis of geographic data. In the most generic sense, they are maps represented and stored in a database. The system consists of three major parts. A relational database structure to store the map features and attributes of these features, a set of standard analysis tools to assist with exploration and assimilation of associated data, and a graphical interface to display the map and analysis results. They are useful for analyzing objects and phenomena within spatially distributed domains (Aronoff, 1989). Traditionally, in the fire services industry, the allocation of resources to incidents has had a rather static solution. Not because the problem is static but because the tools and technology needed to address the problem dynamically were not available. As technology advances, the horsepower and memory needed to manage the data necessary for addressing this as a dynamic problem are now available to the vast majority of the population. Gains in abstraction of spatial domains using geographic information systems (GIS), the modeling tools encapsulated by these systems and recent cost effective access to global positioning technology have made the time ripe for the movement into dynamic solutions to dispatch policies based entirely on the current state of the system. Transportation networks have been used as an abstraction to study traffic flow problems for quite some time and analysis algorithms are well established (Lupien, Moreland, and Dangermond, (1987)). Street segments are represented as arcs and intersections and interchanges as nodes in a graph. Smaller networks can be represented with adjacency structures, larger problems as are typically encountered in real life analysis need a database structure to maintain needed information. The concepts and methodologies found in vector based GIS prove to be powerful tools in analysis of transportation networks. GIS data is stored in either a raster format or a vector format. The raster format stores data as cells or pixels in a grid. Each cell is assigned attributes to represent a set area on a map (i.e. each pixel represents one square meter and may be assigned values such as 'lake' or 'road'). Raster formatting is usually used in environmental and agricultural applications where the maps represent more continuous data such as terrain features. In vector format, maps are stored as a collection of objects. The database is comprised of tables of map 7

PAGE 16

objects: lines or arcs, nodes, vertices, and polygons. Attributes such as street names, type of street, or, description of a polygon (i.e. "lake") are attached to the objects themselves as opposed to the cells in a grid that are associated with the object as with raster data. Vector formats are usually used when the mapping application deals with entities that are readily represented as lines, points and areas, such as street systems or communications networks. Raster formatting offers the advantage of faster display capabilities. The core database layout in a vector base GIS includes a table of nodes representing intersections, a table of arcs representing street segments and perhaps a table of polygon topology representing bounded areas such as parks and lakes. The relational nature of the database facilitates the storage of a number of different 'layers'. of data about the geographic area. Attributes about the network such as street names and demographics are stored in different tables which can be combined and manipulated. Analysis results are then written to yet another data layer. This type of data layout is quite useful when analyzing objectives with multiple and often conflicting constraints. Constraints can be represented within different data layers which are then selected and combined in a quick and efficient manner. Both formats have their advantages according to the type of application being considered. Neither format is entirely appropriate for the current application. The vector format is clearly the choice for representing a transportation network. The difficulty with vector based GIS for use in a real time system is the speed of the graphic display. Every point and arc in the network is drawn as an individual object, making the display of maps unacceptably slow. This difficulty can be overcome through the use of the vector abstraction for the underlying calculations, with graphical presentation being accomplished through the use of a database of bitmaps of various resolutions. Graphic display of calculation results are drawn on an overlay on top of the bitmap, using a translation from lat/long coordinates to pixels. 2.2 Global Positioning System and Automatic Vehicle Locators The Global Positioning System (GPS) is a constellation of 24 satellites belonging to the Department of Defense. Automatic Vehicle Locator units (AVL) use signals received from at least three of these satellites to calculate their own latitude/longitude position. This position is then rebroadcast to be picked up by a module tied in to the dispatch system. This technology relieves the dispatcher 8

PAGE 17

from having to manually track the location of vehicles that are not in quarters. It also allows the CAD system to have current up to date information about the location of all mobile resources in the system, ensuring the best possible response recommendations. 9

PAGE 18

3. Resource Allocation and Deployment Models Literature Review 3.1 Overview The issues surrounding the of geographically distributed resources can be divided into those which by nature are strategic and those which are more tactical (Ball and Feng, 1993). Strategic issues involve planning decisions about where to locate emergency services stations. Tactical issues are concerned with the actual allocation of available resources to current static locations, response allocation to incidents and the redistribution of resources when the coverage in a portion of the system is found to be deficient. The problem defined here is most concerned with the later, tactical approaches to the problems of allocating available resources such that work load is distributed and service is maximized throughout the system. Many concepts and solution approaches are common to both types of issues and thus will be discussed, but the major emphasis will be placed on dispatch polices and the reallocation of available resources. The problem addressed here is not that of optimal site location but rather optimal allocation of resources to existing sites. The second problem is, in it's essence, the same as the first but with the additional constraint that only existing sites be considered for inclusion in the problem set. We are assigning units to a subset of all stations as opposed to assigning stations to a subset of all nodes. Allocation of mobile resources to static sites is an ongoing problem. Even assuming that the stations were well located when they were built, the demand configuration of any .system changes over time. As neighborhoods age, their demographics change and the emergency services needs change along with them. While it may not be feasible to relocate stations, the allocation of resources to existing sites can have a significant impact on the service to the system as a whole. First addressed are the various types of criteria typically used in the selection of optimal locations. Then, two major groups of models will be considered, normative models which seek to find an optimal or ideal state to which real systems aspire, and descriptive models, which attempt to model the real system 10

PAGE 19

itself (Scott, 1971 ). Finally, as these models are in general, computationally infeasible7 section 4 will discuss various heuristic solutions to the problem. The following table lists the definitions of the various symbols used: possible location of a facility location of (user) node i y i a location variable, equal to 0, if no facility is established at point j and 1, if a facility is established at point j x i coverage variable, equal to 1 if the user node at site i is covered by a facility within the maximum acceptable distance, zero otherwise x i i equal to 1 if the facility at location j serves site i a i i indicates whether point i can be served by a unit at site j without violating constraints s maximum acceptable response time or distance N i The set of nodes that are located within s of node i d i i The response time or distance from any node j to node i t i' Maximum travel time standard D i Expected demand at node i p The number of facilities to be located 7 The p-center and p-median problems have been proven to be np-complete (Kariv and Hakimi, 1979) and Cooper ( 1964) has shown exponential computation times for the general location-allocation problem. 11

PAGE 20

3.2 Normative Models 3.2.1 Location Criteria in Objective Functions The predominate criteria used in selection of optimal site locations are the median objective and the center objective. Both were first introduced by Hakimi in 1 964. The median of a graph is the point on the network where the total (weighted) distance of vertices from that point is at a minimum. As an objective, it is an efficiency measure reflective of total system wide travel time. The center objective is an effectiveness or equity measure that looks at the travel time of the most distant source-destination pair. It seeks the point that minimizes the maximum (weighted) distance to any other point on the network. The center objective is considered to be a 'coverage' measure and is thus typically used in locating emergency service facilities (Halpern and Maimon, 1 983). A number of other measures have been considered independently and in combination. Halpern and Maimon (1 983) studied the antagonism between four locational measures. The median and center measures as defined above, and the variance and Lorenz measures. The variance measure is the sum of the (weighted) squared differences between the distance of a vertex from the facility and the average distance of all vertices from the facility. It is considered to be a dispersion measure. The Lorenz measure is an equity measure that compares the distance of the p nearest vertices to the total distance of all vertices. The vertices and distances can be weighted. Halpern and .Maimon measured the effect of choosing one measure over another by choosing an optimal location using one objective over all the others and calculating the loss from the optimal value the other functions show. They found that the Lorenz measure located a center point that was significantly separated, in the physical sense and in the value of the objective function, from the results that would be obtained using another measure. The choice to use either the center or variance measures for the objective function yielded the smallest changes in the objective function results of the other measures. These points also seemed to be generally located close to each other in space. Moderate losses in the value of the center and variance measures were found when the median objective was the optimizing factor. It seemed to be almost universal in reading the literature, that an equity criteria is used as the primary criteria for location of emergency services type 12


resources. For instance, Toregas et al. (1971), in their set covering model for facility location, use the standard of having at least one fire fighting company within x minutes of every geographic unit in the coverage district. The Rand study for the New York City Fire Department8 used hierarchical criteria in decisions regarding relocation of fire companies. The primary criterion was to ensure that at least one of the closest three engine companies and at least one of the closest two ladder companies are available to every geographic unit. This criterion implicitly reflects the status quo of the department, including the distribution of work load. A secondary criterion was incorporated in the cost function used to decide which companies should relocate. This function implicitly includes values reflective of both efficiency and work load criteria (expected value of the response time within the affected areas) (Walker, Chaiken and Ignall, 1979).

The Rand study allowed the equity criterion to be implicitly defined by using the requirement that for every demand point in the system, at least one of the closest 3 engine houses and one of the closest 2 ladder houses contain an available company. Considering the computer resources and tools available to them and the demographics of the region they were dealing with (a densely populated urban area as opposed to the more diversely populated, sometimes rural areas included in the SMFPD), this was probably a reasonable solution. However, this policy includes assumptions that maintain a status quo and that are not necessarily appropriate to a different problem domain. In a rapidly growing area such as the northern portions of Douglas County that are in the SMFPD, it may be more critical that certain stations are occupied in order to maintain a minimal level of coverage.

3.2.2 Discrete Location-Allocation Models

3.2.2.1 The P-median and P-center Problems

The classic p-median problem belongs to a class of problems termed 'minisum' problems. These models locate p facilities so as to minimize the sum of travel times or distances within the system. The basic formulation is as follows:

8 This study is an extremely well documented case study of a problem domain very similar to the one presented here. More is presented about it in section 3.4.2.


Minimize:

$$z = \sum_{j=1}^{n}\sum_{i=1}^{m} d_{ij}\,x_{ij} \qquad (1)$$

Subject to:

$$\sum_{j=1}^{n} y_j \le p \qquad (2)$$

$$\sum_{j=1}^{n} x_{ij} = 1 \qquad (i = 1,2,\ldots,m) \qquad (3)$$

$$x_{ij} \le y_j \qquad (i = 1,2,\ldots,m;\; j = 1,2,\ldots,n) \qquad (4)$$

$$y_j,\, x_{ij} \in \{0,1\} \qquad (i = 1,2,\ldots,m;\; j = 1,2,\ldots,n) \qquad (5)$$

Constraint (2) ensures there are no more than p facilities located. Constraint (3) assures that zone i is served, and constraint (4) assures that zone i is assigned to be served by facility j only if there is a facility established there. Constraint (5) makes y_j and x_ij zero-one integer variables.

The p-median problem uses a median criterion for facility location, which may result in some areas having less than the minimum standard of coverage. The p-center problem addresses this issue for the emergency services industry. Dubbed a 'minimax' strategy, the p-center formulation seeks to minimize the maximum cost that any demand point may incur (Mirchandani, Reilly, 1987). The problem can be stated as: minimize z, subject to

$$d_{ij}\,x_{ij} \le z \qquad (i = 1,2,\ldots,m;\; j = 1,2,\ldots,n)$$

and constraints (2) through (5) listed above.

Berman, Larson and Chiu (1985) embed the 1-median problem in a general queuing context. Instead of seeking to minimize just the average travel time of the system, they seek to minimize the average cost of response, which includes costs related to delays in queue and/or the cost of refusing service (referral to another server). The model utilizes stochastic travel and service times, thus incorporating temporal as well as spatial uncertainties in facility location.
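To make the scale of these formulations concrete, the following sketch evaluates the p-median objective by brute-force enumeration of candidate facility sets. It is a minimal illustration on hypothetical data (the distance matrix and the value of p are made up), not part of the thesis system; it simply shows why full enumeration becomes infeasible as n grows and why the heuristics of section 3.4 are needed.

```python
from itertools import combinations

def p_median_brute_force(d, p):
    """Enumerate every set of p candidate sites and return the one that
    minimizes the summed distance from each demand node to its nearest
    open facility (the p-median objective)."""
    m = len(d)          # number of demand nodes (rows)
    n = len(d[0])       # number of candidate sites (columns)
    best_sites, best_cost = None, float("inf")
    for sites in combinations(range(n), p):
        # each demand node i is served by its closest open facility
        cost = sum(min(d[i][j] for j in sites) for i in range(m))
        if cost < best_cost:
            best_sites, best_cost = sites, cost
    return best_sites, best_cost

# Hypothetical 4-node, 4-site travel-time matrix (minutes), with p = 2.
d = [[0, 4, 9, 7],
     [4, 0, 6, 8],
     [9, 6, 0, 3],
     [7, 8, 3, 0]]
print(p_median_brute_force(d, 2))   # -> ((0, 2), 7)
```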


3.2.2.2 The Set Covering Model

Set covering formulations seek to minimize the number of sites or stations required to ensure that no service area is more than a pre-specified distance or travel time from a station. Toregas, Swain, ReVelle and Bergman (1971) were among the first to view emergency service facility siting as a set covering problem. In the basic formulation of the model, the problem may be written as follows:

Minimize:

$$z = \sum_{j=1}^{n} y_j \qquad (1)$$

Subject to:

$$\sum_{j \in N_i} a_{ij}\,y_j \ge 1 \qquad (i = 1,2,\ldots,n) \qquad (2)$$

$$y_j \in \{0,1\} \qquad (j = 1,2,\ldots,n) \qquad (3)$$

Equation (1) is the objective function, which seeks to minimize the total number of facilities used, subject to the constraints of (2) and (3). Constraint (2) requires that at least one service facility (j) be located in each N_i. Constraint (3) makes y_j a zero-one integer variable; fractional values for y_j are not acceptable. Toregas et al. (1971) used a linear program with an added single cut constraint to obtain integer solutions. The problem is first solved without the cut and a number of servers m is obtained, where generally m is non-integer. The problem is then solved again with the addition of the cut:

$$\sum_{j=1}^{n} y_j \ge [m] + 1$$

where [m] is the integer part of m. Addition of this cut does not guarantee convergence to integer solutions for the number of facilities to be located or for their locations (as illustrated by Daskin and Stern (1981)); however, the computational experience of the authors was excellent. Solutions for up to 50 nodes in over 150 runs always yielded integer solutions.
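A minimal sketch of this LP-plus-cut procedure is given below, assuming SciPy is available; the coverage matrix is hypothetical toy data, and the relaxation is simply re-solved once with the added cut when the first solution is fractional, in the spirit of the Toregas et al. approach rather than as a faithful re-implementation of it.

```python
import math
import numpy as np
from scipy.optimize import linprog

def set_cover_lp_with_cut(a):
    """a[i][j] = 1 if a facility at site j covers demand node i.
    Solve the LP relaxation of the set covering problem; if the optimum m
    is fractional, re-solve with the single cut  sum(y_j) >= floor(m) + 1."""
    a = np.asarray(a, dtype=float)
    n_nodes, n_sites = a.shape
    c = np.ones(n_sites)                 # minimize the number of facilities
    A_ub, b_ub = -a, -np.ones(n_nodes)   # coverage: a @ y >= 1
    bounds = [(0, 1)] * n_sites

    relaxed = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=bounds)
    m = relaxed.fun                      # often non-integer
    if abs(m - round(m)) < 1e-9:
        return relaxed                   # already integer-valued: done
    cut_rhs = math.floor(m) + 1
    A_cut = np.vstack([A_ub, -np.ones((1, n_sites))])
    b_cut = np.append(b_ub, -cut_rhs)
    # the re-solved LP usually, though not always, returns an integer solution
    return linprog(c, A_ub=A_cut, b_ub=b_cut, bounds=bounds)

# Toy instance: 3 demand nodes, 3 candidate sites, each site covering two nodes.
a = [[1, 1, 0],
     [0, 1, 1],
     [1, 0, 1]]
res = set_cover_lp_with_cut(a)
print(res.x.round(3), res.fun)
```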


Many variants to this original solution have been proposed and employed. Two of the more important ones are the maximum covering and maximum expected covering location problems.

3.2.2.3 The MCL and MECL Problems

The maximum covering location problem (Church and ReVelle, 1974) seeks to maximize the amount of demand covered in the situation where all points cannot be covered using the maximal travel time constraint due to budgetary constraints. This model takes into account the demand level of each node or zone in the network and maximizes the amount of demand served within the system. The problem is formulated as follows:

Maximize:

$$z = \sum_{i=1}^{m} D_i\,y_i$$

Subject to:

$$\sum_{j \in N_i} x_j \ge y_i \qquad \forall\, i$$

$$\sum_{j=1}^{n} x_j = p$$

$$x_j \in \{0,1\} \qquad (j = 1,2,\ldots,n)$$

$$y_i \in \{0,1\} \qquad (i = 1,2,\ldots,m)$$

where x_j indicates whether a facility is established at site j and y_i indicates whether user node i is covered within the acceptable distance. By solving this problem for various values of p, a cost effectiveness curve can be obtained indicating the trade off between benefit and cost. Church and ReVelle also suggested the addition of mandatory coverage constraints, where a distance of T (T > S) is a hard constraint reflecting the maximum distance any user node could be from a server. The solution then selects the optimal


configuration of p facilities that maximizes the number of nodes covered within S of their closest facility, in effect combining the p-median and set covering problems.

The maximum expected covering model was introduced by Daskin (1982). The model accounts for vehicle busy periods within a covering model framework. It is an attempt to adjust the maximum covering model for some of the shortcomings that analytic models seem to have relative to the more complex queuing and simulation models. The basic set covering model assumes that all servers in the system are available at all times and that the closest server is the one allocated to a point or zone.

Especially given that many alternate optimal solutions often exist for any given problem formulation (Daskin, 1987), it is often desirable to model other objectives of the system, such as the need to consider interdistrict responses in the case when the primary server for a district is unavailable, or giving preference to existing sites in analysis of optimal facility location. Hierarchical models incorporate secondary objectives in the objective function to account for subordinate criteria.

3.2.2.4 Hierarchical Models

Hendrick et al. (1974), in a study conducted for the city of Denver, Colorado in the early seventies, used a hierarchical objective function to evaluate locations for possible new fire stations while giving preference to existing stations. The purpose of the study was to evaluate the current configuration of stations and make recommendations on ways to improve the system. The cost of building a new facility is significant, but the resultant savings might outweigh the costs if the movement of one or more facilities would result in the ability to close others without loss of service to the rest of the system. To accommodate these considerations, they came up with the following formulation for the objective function:

Minimize:

$$z = \sum_{j=1}^{r} y_j + \sum_{j=r+1}^{n} (1 + e)\,y_j$$

subject to the same constraints as the basic set covering model, where e is a positive constant less than 1/n and existing stations are indexed from 1 to r. This objective function favors existing sites while also allowing for the possibility of their replacement.


Kolesar and Walker (1972) used a hierarchic model in their solution formulation for addressing relocation in the New York City fire department. Their primary objective was an equity measure ensuring minimum coverage; secondary was an efficiency criterion aimed at minimizing total expected response time. They used the requirement that at least one of the closest 3 engine companies and one of the closest 2 ladder companies be available to all response areas of the city. This used the current system configuration to implicitly define the equity criterion. With the equity concerns taken care of, they then focused on efficiency for deciding which companies should be moved when a relocation need has been identified. More will be presented about this study in section 3.4.2.

Another relevant secondary objective addresses the issue of backup coverage in the system. These models seek to maximize the number of zones that are covered by more than one facility within the specified constraints. Daskin and Stern (1981) use a hierarchical model that extends the set covering model to address this secondary objective. Hogan and ReVelle (1986) propose two models, one of which extends Daskin and Stern's model to include consideration of the population (assuming population implies demand) of the covered zones. The other model does the same for the maximal covering location problem.

3.3 Descriptive (Evaluative) Models

The models discussed so far have been prescriptive optimization models. Simulation and queuing models are descriptive models which "... allow for a richness of detail that is generally lacking in such deterministic optimization models as the set covering and maximum covering models" (Daskin, 1982). The added detail also means they are significantly more complex and somewhat more difficult to understand. The two kinds of models can be viewed as complementary and, as is discussed below, are often used in conjunction to generate the best possible solutions.

3.3.1 Simulation Models

Simulation models are often used for making final location decisions once a handful of potential sites have been identified, or for testing location configurations arrived at through other means against an existing configuration. The Rand Institute Study (see section 3.4.2) used a simulation model to compare different policies and methods for locating, relocating and dispatching fire-fighting units. The model design and preliminary results are outlined in Carter and Ignall (1970). The model consists of three parts: an incident


generator, a simulation program and an analysis program. The incident generator takes historical data about incidents in New York City into account and generates random incidents based on this distribution. The simulation program tracks response time distribution and system state (unit availability) characteristics over time. The post-simulation analysis program provides statistical analyses of the simulation run and compares a given run with other runs.

The Rand simulation program was modified and used by Hendrick et al. (1975) in their study of the deployment of fire fighting resources in Denver in the 1970s. The simulation was used to evaluate and compare configurations arrived at through the use of a set covering optimization model and the informed judgment of the team of officials of the Denver Fire Department that were involved in the study.

3.3.2 The Hypercube Queuing Model

This model, presented by Larson in 1974, uses a queuing theory approach to analyze vehicle location and response district design. Larson views the state space of the system as an N-dimensional unit hypercube; each vertex of the hypercube corresponds to a particular combination of busy units. He develops an algorithm for obtaining the steady-state probabilities of the system which exploits many of the properties of the hypercube model. The performance measures the model tracks include mean travel time, workload imbalance and the number of interdistrict dispatches; the measures are tracked for each unit as well as system wide. The approach returns data about a particular location/districting scheme and is intended to be used interactively by a system planner with knowledge of the political and social constraints of the system. Due to storage and execution time requirements, the algorithm is stated to be practical for the placement of only about 11 or 12 units.

The hypercube model is an M/M/n queuing model; its tractability is dependent on Markovian assumptions which require that each server have an exponential service time. M/G/n type queuing models seem to be intractable for locating n service facilities in a system where service times and queuing delays depend on the location of the service facility (Berman, et al, 1985). The service time distribution in the emergency services industry is typically not exponential. Walker, Chaiken and Ignall (1979) show that for a small fire the probability of the service time lasting more than ten minutes is nearly 1.


3.3.3 A Parametric Solution Model

The parametric allocation model developed by Rider (1975) was an outcome of the Rand project in the 1970s. The model was designed to allocate resources, not locate them. It takes the total number of fire companies to be deployed and allocates them to various regions of the city. It uses a tradeoff parameter that allows the user to configure the emphasis placed on the three major types of objectives to be met: equalization of workloads in all regions of the city, minimization of citywide travel times, or equalization of average travel times for each region. The model can be used in conjunction with a siting model to optimally locate the companies allocated to each region.

3.4 Heuristics

The p-median, p-center and set covering models have all been proven to be NP-hard (Hansen, Peeters, and Thisse, 1983). An optimal solution would require consideration of all possible permutations of facility locations, an impossible feat for all but the smallest problems. Thus, heuristic methods for solution are justified.

3.4.1 Linear Programming Methods

Perhaps the most commonly employed heuristic methods are linear programming techniques. The exact techniques are beyond the scope of this paper; the interested reader is referred to Chvatal (1983). While linear programming solution techniques are technically not polynomial time algorithms (due to a small number of pathological problem formulations which have been shown to be exponential in their running times), in practice their running times are very good, especially when the number of constraints and variables is small (i.e. in the tens to hundreds as opposed to in the thousands).

3.4.2 The Rand Institute Study on Fire Department Deployment Analysis

In the early 1970s, the city of New York and the Department of Housing and Urban Development funded a significant study of the fire fighting operations in


New York City. Central to the study was analysis of the allocation of fire fighting resources. Studies were conducted that, among other things, addressed static allocation (districting) and location of firehouses as well as dynamic allocation of fire companies in terms of dispatch policies and relocation schemes during periods of intense demand in concentrated areas of the geographic system (Walker, Chaiken and Ignall, 1979).

Relocation analysis involves determining when a relocation is needed (based on some criteria), and then determining which fire fighting company or companies should be moved to fill the areas lacking in coverage. In the relocation analysis, the project team initially proceeded with the assumption that there was an optimal arrangement for m fire fighting companies to be arranged into n locations. This approach turned out to be too computationally expensive for the needed real time system. Their second approach was to develop special purpose fast algorithms that produced near optimal solutions. They then used the results to rank the fire stations in the system according to the desirability of having a company located in them and used this list in the relocation algorithm. This approach turned out to have a number of difficulties, not the least of which was the fact that the same companies repeatedly were the ones recommended to relocate (regardless of the location of the deficiency) due to their low ranking on the desirability list. The final approach involved a heuristic solution that broke the problem into 4 parts.

The solution at many levels leveraged the concept of the "response neighborhood" of a station or set of stations. The response neighborhood (RN) is defined as "A collection of alarm boxes that have the same unordered pair of ladders (or engines) as first-due and second-due." (p. 649, Walker, Chaiken and Ignall, 1979). Using this concept strictly limited the number of geographic areas that needed to be considered by any calculations. These are the stages of the solution design:

Stage 1 involves determining the need for a relocation. In this stage, the equity criteria mentioned above (3.2.1) are used to find areas in the system that are lacking the minimum coverage standard of having at least one of the closest 3 engine companies and at least one of the closest 2 ladder companies available. The amount of time that these companies are anticipated to be unavailable is considered in the analysis and no relocation recommendation is presented unless the expected time for the deficiency is considerable.

Stage 2 involves a set covering problem: picking the stations to be filled to ensure coverage of the system while minimizing the number of moves.


A greedy heuristic was used to generate sets of empty houses to be filled. The first set is found by selecting empty houses in the order that would fill the largest number of uncovered regions. The first house covers the largest number of uncovered areas, the second house would be the one that covers the largest number of the remaining uncovered regions, and so on until all uncovered regions are covered. Successive sets are generated by picking first the house that would cover the jth largest number of uncovered regions, with j = 2, 3, ..., P (P is arbitrary up to the number of empty houses; the larger P is, the more likely the optimal solution will be found) and then proceeding as above to find the rest of the set. The set with the smallest number of moves is passed on to stage 3. In the case of ties, stage 3 is solved for all of the tied sets and the lowest total cost set is passed on to stage 4.
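The first of these greedy passes can be sketched as follows; the coverage sets are hypothetical toy data and the function only illustrates the "cover the most remaining uncovered regions first" rule, not the actual Rand implementation.

```python
def greedy_fill(uncovered, coverage):
    """coverage[house] is the set of region ids an empty house would cover
    if filled.  Repeatedly pick the house covering the most regions that
    are still uncovered, until nothing remains uncovered."""
    uncovered = set(uncovered)
    chosen = []
    while uncovered:
        house = max(coverage, key=lambda h: len(coverage[h] & uncovered))
        if not coverage[house] & uncovered:
            break            # remaining regions cannot be covered at all
        chosen.append(house)
        uncovered -= coverage[house]
    return chosen

# Toy data: regions 1-5 uncovered, three empty houses A, B, C.
coverage = {"A": {1, 2, 3}, "B": {3, 4}, "C": {4, 5}}
print(greedy_fill({1, 2, 3, 4, 5}, coverage))   # -> ['A', 'C']
```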


In Stage 3, all companies are evaluated to find the best candidates for relocation. A cost function is used to generate a ranked list of candidates for relocation into each of the empty houses selected in stage 2. Since the minimum coverage standard is covered by the equity standard used in stage 1, the criterion used for selecting the units to be moved is one of efficiency: the relocation that minimizes the total expected travel time to incidents in the affected regions is sought. The problem has the general formulation of a transportation problem with additional constraints to ensure that none of the moved companies will leave uncovered areas:

Minimize:

$$z = \sum_{j=1}^{M}\sum_{i=M+1}^{M+N} c_{ij}\,x_{ij} \qquad (1)$$

Subject to:

$$\sum_{i=M+1}^{M+N} x_{ij} = 1 \qquad (j = 1,2,\ldots,M) \qquad (2)$$

$$\sum_{j=0}^{M} x_{ij} = 1 \qquad (i = M+1, M+2,\ldots,M+N) \qquad (3)$$

$$\sum_{i=M+1}^{M+N} a_{ik}\,x_{i0} + \sum_{i=M+1}^{M+N}\sum_{j=1}^{M} c_{jk}\,x_{ij} \ge 1 \qquad (k = 1,2,\ldots,L) \qquad (4)$$

$$x_{ij} \in \{0,1\} \qquad \forall\, i, j \qquad (5)$$

where M is the number of empty houses selected by stage 2 that are to be filled, M + N refers to the number of available companies, a_ik = 1 if available company i serves response area k and is zero otherwise, and c_ij is the cost function as described below. A "dummy" empty house (j = 0) is used if available company i is to stay in its own quarters. The objective function and the first two constraints of this formulation are structured like an assignment or transportation problem. The third set of constraints (4) assures that the minimal coverage criteria are not violated.

The cost function

$$c_{ij} = (c_2 - c_1)\,\{a_i\,(t + r_{ij}) + a_j\,r_{ij}\}, \qquad a_i = \lambda_i\,\sqrt{A_i}\,/\,v_i$$

is used to predict the cost of having company i relocate into empty station j. The function uses an approximation of expected response times in the regions that will be affected by the proposed moves. It is based on findings by Kolesar and Blum (1973) that the expected response distance in a region is proportional to the square root of the area served per fire company:

$$E(D_1) = c_1\,\sqrt{A/N}$$

where D_1 is the distance traveled by the first arriving unit, A is the area of the region being considered, N is the number of available units in the region and c_1 is a constant of proportionality for the first arriving unit (estimation of these constants is discussed on pages 180-183 of Walker, Chaiken and Ignall). The other variables in the function are as follows: λ_i is the alarm rate in region i (arrivals according to a Poisson process), v_i is the average response speed in region i, t is the predicted duration of the incident causing the need for a relocation, and r_ij is the amount of time a unit would spend traveling from a station in region i to a station in region j.9 The concept behind the equation involves finding the difference in the total expected travel times for the regions involved in the moves, allowing for the times when the second closest unit is available rather than the first.

9 A complete derivation of the cost function can be found in Kolesar and Walker (1974).


The heuristic used to solve stage 3 generates a ranked list of candidates for relocation into each of the empty houses selected in stage 2 using the cost function. The lowest cost moves are then chosen and the feasibility of the moves is checked to ensure maintenance of system wide coverage standards. If a feasible relocation set is found in this manner, it is used, as it is optimal. Otherwise, several feasible relocations are generated and the one representing the lowest total cost is used. Feasible relocations are generated by changing the order in which the houses are filled and by considering the kth lowest cost move in place of the lowest cost move.

Stage 4 takes the units picked by stage 3 and solves an assignment problem to minimize the total travel distance for all relocating units.
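The stage 4 step corresponds to a standard linear assignment problem; a minimal sketch using SciPy's assignment solver is shown below, with a hypothetical travel-time matrix (rows are relocating units, columns are empty stations). This is only an illustration of the problem structure, not the Rand implementation.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

# travel[u][s]: predicted travel time (minutes) for relocating unit u to
# reach empty station s -- hypothetical values.
travel = np.array([[12.0,  7.0, 15.0],
                   [ 9.0, 11.0,  6.0],
                   [14.0,  8.0, 10.0]])

units, stations = linear_sum_assignment(travel)   # minimizes total travel time
for u, s in zip(units, stations):
    print(f"unit {u} -> station {s} ({travel[u, s]} min)")
print("total travel time:", travel[units, stations].sum())
```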


4. Solution Formulations: The South Metro Fire Protection District Computer Aided Dispatch System

The resource allocation responsibilities of an emergency services dispatch system divide easily into two major areas: the issues surrounding recommending units to respond to a given incident in a dynamic, constantly changing system, and the tracking of the state of the overall system to ensure adequate coverage within all areas of the system.

4.1 Response Recommendations - The Dynamic Allocation of Resources

Dynamic allocation of resources deals with the decisions surrounding the dispatch of units to incidents. The system makes response recommendations for consideration by the dispatcher. For the most part, dispatch policies at SMFPD are static according to incident type, with dispatch personnel ultimately deciding which units should respond to a given incident. The goal of the CAD system is to make suggestions that are reliable (i.e. believable by the domain experts, so that the system can be trusted) and are based on the current state of available resources. To do this, the system must track all resources in the system, their functionality, location and actual as well as potential availability. Static resource allocation analysis needs to take into account the demand configuration of the system and potential unavailability of resources (so that secondary response can be optimally planned for) but, when an incident arises, the only criterion is that the closest available units that are capable of satisfying the request for service are the ones that are dispatched to the incident.

When a call comes in to the dispatch center, the basic scenario involves first finding where the incident is and then what type of situation is being presented. Dispatch obtains an address and elicits enough information to categorize the incident. The system then takes the address, obtains a latitude/longitude location for the address and calculates the distance from the incident to all other nodes in the network. The database is then queried to find available units that are capable of fulfilling the needs of the incident. A list of lists is built; each list is a set of units that can meet the needs of the incident. The sets are in order according to the closest units first. The first set is presented to the dispatcher as the primary recommendation with the other recommendation sets available on request.


4.1.1 The Data Models

A response recommendation arises from an analysis of the two questions, 'what' and 'where'. What resources are needed for a given incident and where should they be sent from? The associated data models are supportive of the issues surrounding these questions. The question of what corresponds to the need for a model for the resources of the system, and the where requires a model of the geographic domain.

The system currently in use by SMFPD employs a model in which units that are to respond to a given incident type in a given BGU are assigned on run cards. The run cards enumerate primary response units as well as backup units for each BGU/incident type combination. The assumption is made that the units are located in a station. An attempt to generalize this model to a relational database model that would support dynamic allocation procedures proved to be unworkable. At first glance, it made sense to assign units to incident types. For instance, a car accident would need an engine and a rescue unit in its response. The difficulty came in fitting in units that served multiple functions. To see this, consider a unit whose primary function is as a fire engine but that is also equipped to serve as a rescue unit. Where most stations would send two units to the accident, the station with the dual purpose engine would send only one, and the assignment of unit types to incident types breaks down. The second idea was to recommend response according to geographic location (i.e. by station). This approach too was deficient in that there were times when a response must be split between stations. It is always desirable to get the first arriving unit to the scene as quickly as possible.

Looking at the problem with Jacobson's (1992) use case analysis approach proved to be quite useful for finding an appropriate data model.10 The actors of this system are dispatchers and fire fighters. When a request for service comes in, the dispatcher first determines the nature and location of the request. The dispatch actor then attempts to find and dispatch the closest resources that will adequately perform the functions needed by the requesting source. In the course of this analysis, it became clear that what was really needed was a data representation modeled around the capability of the resources to perform needed functions.

10 In the use case approach, design of the system is modeled around the actors and the use cases for the system. The users of the system perform the roles of the actors and the use cases are the functions to be performed by the system.

The system supplies the ability to assign different responses based on the demographics of the area in question. A structural fire in a densely populated metropolitan area will require a different type of response than one in a remote rural area. To allow for this, the data model relates each BGU with an incident type, allowing for separate responses to be defined for each BGU/incident type pair. This approach is flexible enough to allow for the tracking of critical locations that would require special responses. For instance, a company that has on hand a lot of hazardous materials would require that a hazmat unit be a part of the recommended response for most incident types at that location. To do this, the location can be defined to be its own BGU and unique response types assigned to it. Figure 1 shows the physical model of the resources database.
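A minimal sketch of this capability-oriented model is shown below using Python dataclasses; the class and field names (Unit, capabilities, ResponseRequirement) are hypothetical illustrations of the idea, not the actual SMFPD database schema shown in Figure 1.

```python
from dataclasses import dataclass, field

@dataclass
class Unit:
    """A physical vehicle, described by the functions it can perform
    rather than by a single fixed unit type."""
    unit_id: str
    station: str
    capabilities: set = field(default_factory=set)   # e.g. {"engine", "rescue"}
    available: bool = True

@dataclass
class ResponseRequirement:
    """Capabilities required for one BGU / incident-type pair."""
    bgu: str
    incident_type: str
    required: list                  # one entry per required capability

def units_satisfying(requirement, units):
    """Greedily pick available units until every required capability is met.
    A dual-purpose engine/rescue unit can satisfy two requirements at once."""
    needed = list(requirement.required)
    chosen = []
    for unit in units:              # assume units are pre-sorted by travel time
        if not unit.available:
            continue
        covered = [c for c in needed if c in unit.capabilities]
        if covered:
            chosen.append(unit.unit_id)
            for c in covered:
                needed.remove(c)
        if not needed:
            return chosen
    return None                     # requirement cannot be met

req = ResponseRequirement("BGU-112", "vehicle accident", ["engine", "rescue"])
units = [Unit("E-17", "St. 17", {"engine", "rescue"}),
         Unit("E-18", "St. 18", {"engine"})]
print(units_satisfying(req, units))   # -> ['E-17']
```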


4.1.2 Finding the Closest Units - The Transportation Network

Integral to the system is the transportation network. Data collection can be very expensive, and thus the kind of data and degree of granularity needed to solve the problem was a major consideration. It was decided that the solution algorithms should be able to reasonably handle differing degrees of granularity, the highest level being the basic network of all major streets and their intersections. Since most of the BGUs are bordered by major streets, this seemed to be a reasonable starting place. All major decisions by the system, such as who should respond to a given incident and fulfillment of basic coverage criteria, are decided using this highest level of granularity. The largest of the BGUs are approximately 1/4 mile square; under most circumstances, predicted travel times to the BGU containing the incident will be adequate for making resource allocation decisions. This will keep the amount of data to be collected to a minimum. Later, as time allows, the network can be improved upon to include all streets and even all addresses if desired.

The issue of finding the closest units that are available to respond to a given incident requires a model of the transportation network involved. The typical vector model used by most GIS was utilized for this representation. The issue of collection of the vector data needed to represent the system was a big concern. It was finally decided that since units typically run on major arteries for as much of the distance as possible when responding to an incident, it would be acceptable to gather data for just the major streets and their intersections.


One of the greatest advantages of the geographic information system relational data model lies in the ability to pick and choose from different data layers depending on any number of factors. Different conditions such as time of day, weather, and road conditions will affect actual travel times within the network. Layering the different constraints will allow for picking and combining those which would most likely impact the network at any given time. Once automatic vehicle locators are put on units, the system can obtain feedback about how long it actually takes to get from point A to point B and can adapt itself in an ongoing learning process.

Once it is determined which units can fill the request for service, the task is to find the closest available unit to the incident, closest being measured in terms of actual travel time. The first step is to locate the incident within the network. Approximately of the calls that come into the SMFPD dispatch center come from 911. The 911 system transmits data about the address and location (latitude/longitude) of the originating call. The remaining calls have to be located based on the address of the incident. SMFPD maintains a database of all address ranges within their districts of responsibility; unfortunately, these addresses are tied to their BGU system rather than a lat/long location. A little over a year ago, SMFPD was approached by Pierson Graphics to provide address range data for their Digital Denver GIS product. As a result of an agreement worked out between them, the Digital Denver database is used to find latitude/longitude location information on incidents when it is needed. The master Geofile is kept current with all addresses in the system being tied to a BGU.

For the few times when the address, for some reason, is not found in the database, a manual system for locating the incident is needed. This is provided by supplying the dispatcher the ability to click on a point on the map and having the system locate the point relative to the internal data structures it needs to make a response recommendation. This involves resolving the point to the closest node in the transportation network as well as finding which BGU the incident is in. Finding the closest node was simply a matter of iteratively increasing the radius from the chosen point and selecting all nodes in the database within that radius until the result set comes back non-empty, and then calculating the Euclidean distance to each node to find the closest one.
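A minimal sketch of this expanding-radius nearest-node query is given below; the node list, starting radius and growth factor are hypothetical, and a production version would issue the bounding-box query against the database rather than scan an in-memory list.

```python
import math

def closest_node(point, nodes, start_radius=0.005, growth=2.0):
    """Find the node nearest to a clicked (lat, lon) point by repeatedly
    widening a search window until at least one candidate is returned,
    then taking the candidate with the smallest Euclidean distance."""
    radius = start_radius
    while True:
        candidates = [n for n in nodes
                      if abs(n["lat"] - point[0]) <= radius
                      and abs(n["lon"] - point[1]) <= radius]
        if candidates:
            return min(candidates,
                       key=lambda n: math.hypot(n["lat"] - point[0],
                                                n["lon"] - point[1]))
        radius *= growth            # nothing found yet: widen the search

# Hypothetical nodes (major intersections) and a clicked point.
nodes = [{"id": 1, "lat": 39.55, "lon": -104.97},
         {"id": 2, "lat": 39.58, "lon": -104.90}]
print(closest_node((39.56, -104.96), nodes)["id"])   # -> 1
```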


Finding the BGU the point is in is a more complex problem. BGUs are stored as polygons; if the BGU boundaries were parallel to the lat/long axes, the problem of obtaining a BGU from a given point would be trivial, but BGUs don't have this restriction and often follow odd shaped geographic features. So, a general solution is needed. For this, I used an approach obtained from the computational geometry chapter of Cormen et al. (1990).11 The approach uses the idea that if a point (P1) is on the interior of a simple polygon, any vector from P1 to a point exterior to the polygon (P2) must intersect the edges of the polygon an odd number of times. To find the BGU, I first located a node in the database that was closest to the chosen point, found all arcs adjacent to that node and then retrieved all BGUs containing these arcs. These were the candidate BGUs. A segment from the chosen point to a random point outside the Denver metropolitan area provided the needed vector. The number of intersections of the edges of the candidate BGUs with this vector was counted.12 Only one of these could have an odd number of intersections.

Given the lat/long location of the incident, it is resolved to the closest node in the transportation network (typically this will be the closest major intersection), and then the shortest distance of each available unit to the intersection is calculated. The response recommendation is then presented to the dispatcher with the closest physically available units being listed first.

11 Although this feature has been implemented and tested, it will not initially be available to the system. Entry of the BGU polygon definitions will be one of the last data sets to be created.

12 The intersection of two line segments can be quickly determined by first verifying that the bounding boxes of the two segments intersect, and then using a cross-product method to determine whether the two end points of one segment straddle the other segment. Details are in Cormen et al., p. 889.
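The crossing-count test described above can be sketched as follows; the polygon and test point are hypothetical, and the helper uses the bounding-box check plus cross-product orientation test mentioned in footnote 12 (collinear and touching special cases are ignored in this sketch).

```python
def cross(o, a, b):
    """Cross product of vectors o->a and o->b (orientation test)."""
    return (a[0] - o[0]) * (b[1] - o[1]) - (a[1] - o[1]) * (b[0] - o[0])

def segments_intersect(p1, p2, p3, p4):
    """True if segment p1-p2 intersects segment p3-p4 (general position)."""
    # quick rejection: bounding boxes must overlap
    if (max(p1[0], p2[0]) < min(p3[0], p4[0]) or
        max(p3[0], p4[0]) < min(p1[0], p2[0]) or
        max(p1[1], p2[1]) < min(p3[1], p4[1]) or
        max(p3[1], p4[1]) < min(p1[1], p2[1])):
        return False
    # each segment's endpoints must straddle the other segment
    return (cross(p1, p2, p3) * cross(p1, p2, p4) < 0 and
            cross(p3, p4, p1) * cross(p3, p4, p2) < 0)

def point_in_polygon(point, polygon, far_point):
    """Count crossings of the segment point->far_point with the polygon
    edges; an odd count means the point lies inside the polygon."""
    crossings = 0
    for k in range(len(polygon)):
        edge_start, edge_end = polygon[k], polygon[(k + 1) % len(polygon)]
        if segments_intersect(point, far_point, edge_start, edge_end):
            crossings += 1
    return crossings % 2 == 1

# Hypothetical BGU polygon (lon, lat pairs) and a point chosen on the map.
bgu = [(-105.00, 39.50), (-104.90, 39.50), (-104.90, 39.60), (-105.00, 39.60)]
print(point_in_polygon((-104.95, 39.55), bgu, (-103.00, 41.00)))   # -> True
```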


4.1.3 Path Finding Algorithms - Dijkstra's and Grassfire

As defined by Potts and Oliver (1972), the cost c(n_1, n_j) of the cheapest route from a node n_1 to each of the other nodes in a network is determined by the unique solutions to the functional equations

$$f(n_1) = 0; \qquad f(n_k) = \min_{j}\,\left[\,f(n_j) + c(n_j, n_k)\,\right].$$

There has been a great deal published about shortest paths calculation and many different approaches have been proposed. The most commonly employed methods all seem to fall in the category of what Potts and Oliver termed "tree building algorithms". Dijkstra's Shortest Path algorithm is perhaps the best known in this class of route calculating algorithms. In general terms, the origin node is used as the starting place and a cheapest route spanning tree is built by successively finding the cost to get to all nodes from the current position and then choosing the node with the minimum weight value as the next current node. These are efficient polynomial time algorithms; properly implemented, Dijkstra's runs in O(V^2) time, where V is the number of nodes (intersections) in the network (Cormen, Leiserson, Rivest, 1990).

As a starting place, I implemented both Dijkstra's in the classic sense and a more general algorithm for which the term 'Grassfire' seems to be appropriate. Both implementations use travel time as the cost constraint to weight the graph, although a number of different cost factors could be employed. The Dijkstra method calculates a shortest route between an origin/destination pair. The Grassfire algorithm uses a label correcting paradigm, assigning a weight to each node in the network based on the minimum cost of reaching that node from any one of a number of origin locations. In this way, districts of responsibility can be assigned to the various supply points, or the distance from any number of supply points to a given demand node can quickly be determined.

The Grassfire algorithm was used for both the response recommendation portion of the system and the move-up portion. In the response recommendation solution, the incident location is assigned to be the source and all nodes in the system are assigned a distance from this location. These values are then used to determine which units are the closest to the incident location. In determining a need for a move-up, the locations of available resources are designated as the sources. Districts of responsibility can be assigned to individual units, and nodes that are farther than a given distance from the closest resource can be identified.

Figures 2 and 3 show graphically the results of the output of the Grassfire algorithm. These use a map of the state of New York since, at this time, the data for the south metro transportation network is still in the works. Figure 2 plots the districts of responsibility for the New York state highway transportation network with 5 supply points. Figure 3 shows the results of running the algorithm with one source (as with an incident) and plots the shortest paths to 4 supply points.
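A minimal sketch of such a multi-source, label-assigning scan is shown below; the street network, travel times and source set are hypothetical, and the sketch uses a heap-ordered frontier over a plain dictionary of adjacency lists, so it is a multi-source shortest-path variant in the spirit of Grassfire rather than a re-implementation of the thesis code.

```python
import heapq

def grassfire(adj, sources):
    """Label every node with the cheapest travel time from any source and
    with the source that provides it (its 'district of responsibility').

    adj: dict node -> list of (neighbor, travel_time) pairs
    sources: iterable of source node ids (stations, units or an incident)
    """
    dist = {s: 0.0 for s in sources}
    owner = {s: s for s in sources}
    heap = [(0.0, s, s) for s in sources]
    heapq.heapify(heap)
    while heap:
        d, node, src = heapq.heappop(heap)
        if d > dist.get(node, float("inf")):
            continue                          # stale label
        for neighbor, cost in adj.get(node, []):
            nd = d + cost
            if nd < dist.get(neighbor, float("inf")):
                dist[neighbor] = nd
                owner[neighbor] = src
                heapq.heappush(heap, (nd, neighbor, src))
    return dist, owner

# Hypothetical street network: travel times in minutes between intersections.
adj = {
    "A": [("B", 2.0), ("C", 5.0)],
    "B": [("A", 2.0), ("C", 1.5), ("D", 4.0)],
    "C": [("A", 5.0), ("B", 1.5), ("D", 2.0)],
    "D": [("B", 4.0), ("C", 2.0)],
}
dist, owner = grassfire(adj, sources=["A", "D"])
print(dist)    # shortest time to each intersection from the nearest source
print(owner)   # which source is responsible for each intersection
```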


Figure 2. The results of a run of the Grassfire algorithm on the New York state highway transportation network. The source nodes are marked by the bold black '+' marks. The average response time, maximum response time and number of nodes assigned to each supply point are shown for each district of responsibility.


Figure 3. Plot of the shortest path from 4 supply nodes to a single demand node. The response time value (the length of the shortest path) is shown for each supply node.

The initial implementations of these algorithms used a database to store the street network. My implementation of Dijkstra's was quite naive13 in the way it stored and searched for the next node to come out of the candidate list. As a result, both algorithms ran in approximately the same amount of time (as a label correcting algorithm, the Grassfire algorithm doesn't do any comparisons; it just updates nodes with the smallest value so far and stops when a node is found that already has a label from another source). This approach turned out to be unacceptably slow (around 45 seconds on a Pentium 120). The speed problem seemed to be related more to the multiple calls to the database, because internalizing the street network in a sparse matrix data structure14 cut the actual runtime for both algorithms to under 2 seconds for a 2000 node network.

13 There has been a great deal of work done regarding the fastest implementations of shortest routes algorithms. Bertsekas (1991) gives an excellent overview of the major approaches and concepts.

14 This structure consists of a dynamically allocated array of linked lists. The node identifiers are used as an index into the array and each list contains the nodes that are adjacent to the indexed node.


Since the Grassfire algorithm finds the routes and distances from a source to all other nodes in the network, and it does so in an acceptable amount of time, the Dijkstra implementation, which only finds a route from a single source to a single node, was not used.

4.2 Move-up - Static Resource Allocation

Move-up has to do with the relocation of available resources into areas which are temporarily deficient in coverage due to high levels of demand. Move-ups are generally needed as a result of a large scale incident such as a major fire, airplane crash or multi-car accident. An occurrence of a major incident (as input by dispatch personnel) will trigger a check for the need for a redistribution of resources. In addition to the obvious high risk times, the system will routinely monitor the system to see if a move-up might be beneficial.

4.2.1 Criteria

As a result of conversations with Chief Burdick of the Littleton Fire Department and the dispatchers in the SMFPD dispatch center, two criteria dropped naturally out of the problem formulation. The first is the (apparently typical) need for maintaining minimal coverage standards. This clearly is the primary objective to be met. Secondarily, considering demand history, they tend not to move units that are located in the busiest parts of the city.

In addition to the obvious major criteria, other factors are sometimes considered which are more political and practical in nature. For instance, in the Rand study, the cost function they used to evaluate the quality of a proposed move implicitly sought to minimize the distance a moving unit would have to travel. Other important considerations are the anticipated duration of the deficient condition, the various auto and mutual aid agreements among the departments in the district, and minimization of the number of disrupted personnel.


4.2.2 The Solution Approach

Initially, it seemed reasonable to take an approach that would attempt to find a configuration that closely approximated the optimal for the m available companies and then focus on finding which companies to move to obtain this configuration. Several things motivated me to stray from even considering this approach.

The first was an analysis of the complexity of the solution. To find an optimal configuration, all $\binom{n}{m}$ possible configurations of the system (where n is the number of stations and m is the number of companies or units available for placement) would need to be considered. Given the worst case scenario requirement of the system, n could conceivably be in the range of 50-100, with m any number of values in the middle part of the range; with n = 50 and m = 25, for example, there are already on the order of 10^14 configurations. Even with very fast analysis routines, this seemed infeasible for the worst case scenario.

The second indication that an optimal solution should not be sought was the experience of the Rand Institute project when attempting to solve this same problem (see section 3.4.2).

The third consideration has to do with robustness of the system. An optimal solution approach would locate m units among n stations, a blanket location edict. Breaking the problem into stages offers flexibility for both implementation and use. Combining the problem of who to move with that of what to move limits the ability of the system to just give feedback on which stations would benefit the most from external units. In the case of a major incident, units can be brought in from reserve or from other departments. It would be desirable to have the system be able to identify the stations that it would be the most desirable to have occupied.

Finally, breaking the solution into stages allows for incremental implementation. At the time of this writing, only the module to determine the need for a move-up is completed. The modules to generate recommendations about which units to move where will require more data than the system will initially have available. This gives the system the ability to alert the dispatcher of a potential problem within the system. This seems to be the most important part of the system; the analysis and decision about who to move can be adequately accomplished by


the dispatcher until the remaining portions of the move-up module can be implemented.

Given these factors, it seemed more reasonable to try to work with the basic approach used by the Rand team, adapting it to this particular problem. So, as with the Rand study, my approach will have 4 stages: determining need, picking stations to be filled, picking units to be moved, and assigning relocating units to empty stations.

Due to a couple of factors, early in the analysis of the move-up solution design it seemed clear that the recommendation modules needed to use data based on BGUs rather than the individual nodes of the transportation network. First, to generate good recommendations, several feasible configurations would need to be generated and analyzed; the more configurations the system could analyze in a short period of time, the better the recommendation would be. It seemed that coarsening the geographic regions considered by the move-up module would facilitate a better overall distribution of resources. The second factor has to do with the secondary criterion that the stations with the greatest potential workload should not be considered for movement. Due to current data collection mechanisms, tracking the probability that an incident will occur within a given BGU is easier (and, for a variety of reasons beyond the scope of this paper, more desirable) than resolving the incident to its closest node and tracking the demand at that node. In general, this problem seems to require a coarser approach; it addresses more static location issues whose solutions are more computationally intensive.

Another major design issue deals with the granularity of the resources that will be considered for moving. Typically, units will be moved to existing stations.15 The moved units, however, could come from places other than their usual quarters. If possible, the solution needs to allow for the possibility of analyzing all available units regardless of location. On the other hand, using the data model outlined above, where actual units are stored in essence as virtual units based on their capabilities, is really overkill and would add an excessive amount of computational overhead. The most logical approach is to analyze the system for a need to relocate each major type of vehicle. There are at present only 3 of these in the Littleton department: engines, rescue units and hazardous materials units. The system will be flexible enough to handle any number of major type definitions but, time wise, will only feasibly handle a few.

15 They would consider moving a unit to a place in the city other than a station if it would provide needed coverage for a short period of time, but it would be a rare event.


A simulation of the system, to be used for testing the proposed solutions, has been designed and partially implemented. The design is discussed in section 4.3.

Stage 1 - Detecting the Need For a Relocation

The first part of the problem is finding whether there is a problem, that is, whether there is a need for a relocation, with a minimum level of coverage being the primary criterion. This is an assignment problem which involves finding the shortest distance (once again employing the Grassfire algorithm) from each basic geographic area in the system to the closest available response vehicles, and tagging those areas that are predicted to be beyond the maximum acceptable distance. The data for predicting the duration of each incident type is tracked and readily available. The system will have a configurable parameter for an acceptable predicted deficiency duration before the system notifies dispatch personnel. Since the geographical definition of the BGUs will be among the last data items to be entered into the system, this portion of the move-up module will be run on the nodes of the transportation network.
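A minimal sketch of this stage 1 check is shown below; it repeats, in compact form, the multi-source labeling described in section 4.1.3 and assumes hypothetical inputs (the network, the list of available units of one major vehicle type, and a maximum acceptable travel time s).

```python
import heapq

def coverage_deficiencies(adj, unit_nodes, s):
    """Label every node with the travel time from the closest available unit
    (a multi-source shortest-path scan) and return the nodes whose label
    exceeds the maximum acceptable travel time s."""
    dist = {u: 0.0 for u in unit_nodes}
    heap = [(0.0, u) for u in unit_nodes]
    while heap:
        d, node = heapq.heappop(heap)
        if d > dist.get(node, float("inf")):
            continue
        for neighbor, cost in adj.get(node, []):
            if d + cost < dist.get(neighbor, float("inf")):
                dist[neighbor] = d + cost
                heapq.heappush(heap, (d + cost, neighbor))
    return {n: dist.get(n, float("inf")) for n in adj
            if dist.get(n, float("inf")) > s}

# Hypothetical network (travel times in minutes) with one available engine
# at intersection "A" and a 4-minute coverage standard.
adj = {"A": [("B", 2.0)], "B": [("A", 2.0), ("C", 1.5)],
       "C": [("B", 1.5), ("D", 2.0)], "D": [("C", 2.0)]}
print(coverage_deficiencies(adj, ["A"], s=4.0))   # -> {'D': 5.5}
```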


The remaining portions of the move-up module involve finding a heuristic approach that is appropriate to the exact problem presented by SMFPD. The intent is to design several solutions and test them using a simulation model of the actual system. At this point in time, the simulation model has been designed and roughed out, but the data needed to fully implement and test it is not yet available; the design is discussed below. Given this, the rest of this section will center on approaches that will be explored once the data becomes available.

Stage 2 - Selecting the Stations to be Filled

The goal of this stage of the solution is to find the smallest number of stations that need to be filled in order to satisfy the basic coverage criteria of the system as a whole. Rand built sets of stations to be filled by acting as if each station was filled and determining how this would affect the number of uncovered areas. Sets of potential relocatees were then built. The first set was built by first choosing the station that would reduce the number of uncovered areas the most, then reevaluating the remaining stations and choosing the next one that would reduce the number of uncovered areas the most, and so forth until all areas are covered. The second set would use the same procedure except it selected the second ranked station for reducing the number of uncovered areas. Only the smallest sets are retained for consideration during stage 3.

The Rand approach seems like a straightforward and reasonable approach that would undoubtedly meet the requirements of the system. I would like to also investigate the desirability of a few different approaches, including the following:

- Basing the choice of stations on a median paradigm: find the BGU that is farthest from an available unit and fill the station that would normally cover that BGU.
- Doing the same as above, but weighting the BGUs based on their typical demand and selecting the farthest weighted value; fill the station that would cover this BGU.16
- Selecting first the stations that have the highest uncovered demand.

16 This idea parallels the maximum covering location problem proposed by Church and ReVelle (1974).

Stage 3 - Selecting the Units to Relocate

This stage involves selecting the units to be moved into the stations selected in stage 2. Actual assignment of units to stations will be left for stage 4. The first consideration during this stage is that no units are to be moved that would cause an area within their current district of responsibility to become uncovered. Secondarily, the units selected should be those whose removal from their current location will least impact the demand coverage of the system. The units will not necessarily be in quarters; it is feasible that the relocating unit could be off training somewhere or doing inspections. All available and potentially available units should be considered and presented to the dispatcher to facilitate the decision process about reassignment of activities. However, inclusion of all geographic areas will increase the calculation times considerably. The solution design is flexible enough to allow for varying degrees of granularity based on the time available to the system to come up with a recommendation. The algorithm presented below should run sufficiently fast for moving units from station to station. Once implemented, it can be determined if it is adequate for evaluation of units not currently in quarters. To do the latter would involve tracking and analyzing information about the proximity of each BGU to every


other BGU in the system, rather than just the proximity of each BGU to each station in the system.

For now, let's just consider the problem as though it were static, reducing this stage to picking the stations to empty. The Rand study used an efficiency criterion in this stage with the intent of minimizing total expected travel time within the system. The cost function that they used is not directly transferable to this paradigm. However, the same types of data are easily obtained from the labeling generated by the Grassfire algorithm. The secondary criterion of maximizing the demand coverage is also easy to tally as Grassfire runs. Even when tracking the data needed to evaluate the cost of a given move, the Grassfire algorithm takes only a second or two to run on a 2000 node network. The issues seem to be around the amount of time required to analyze the system for multiple configurations.

Move-ups in SMFPD don't happen very often; the vast majority of the time, only one move will be needed. In this case, it will probably be acceptable to simply act as if each possible move were made, run the Grassfire algorithm, and use the results to make a decision based on any combination of the statistical information that results. A run of Grassfire currently tracks the maximum distance to any assigned node, the average travel time to all assigned nodes and the number of nodes assigned (this can easily be modified to be the demand assigned) for each source in the network. Clearly, the first criterion must be to pick only stations whose removal will not adversely impact basic coverage; beyond this, there will be room to experiment with different criteria. The recommendations the system makes under different weightings for the various options can be weighed against the intuitive understanding of the domain experts. The recommendations will also be run against the simulation to gain insight into system wide behavior under different approaches.

The approach with the best predicted results is not necessarily the approach that will be implemented in the end. For example, given the perception of the domain experts about the desirability of removing units from stations that are in high demand areas, it may not be desirable to even consider those stations, even if it would be beneficial to the high demand station or the system as a whole. It is more important that the domain experts believe the system and trust the recommendations that it makes.

On the rare occasion when more than one station needs to be moved, the possible number of system configurations might exceed the number which can be analyzed in an adequate amount of time. In this instance another approach will have to be taken. It might be adequate to act as if one move at a time is to be made, or the system can attempt to eliminate some of the stations that are considered for moving (for instance, not even consider those stations that


currently have a high level of demand coverage). If it turns out these approaches are not good enough, other algorithms will need to be investigated.

4.2.3 Finding the p-Median of the Ranked Coverage Matrix

The most interesting approach I've come up with is a combination of the p-center and p-median approaches. It can be described as finding the p-median of the ranked coverage matrix. A ranked coverage matrix can be thought of as a weighted adjacency matrix where the entries r_ij reflect whether or not station j can cover BGU i. If it can, the weight on the entry indicates how it ranks relative to the other stations that also cover BGU i. If station j is farther than some maximum acceptable response time t, then the r_ij entry is infinitely large. This matrix can easily be built from runs of the Grassfire algorithm with each station as the sole source, noting the distance from each BGU to each of the other n - 1 stations.

If we let k = n - e - p, then the n x k matrix X would consist of k vectors x_j whose entries reflect whether or not a unit is to be located in station j, where n is the total number of stations in the system that are being considered, e the number of stations that are empty and are to remain empty, and p the number of stations that are to be filled (e and p are determined by stage 2 and are fixed within X). Multiplying RX will yield a b x k matrix, where b is the number of BGUs. Each entry in this matrix will be an indication of how good a given configuration is for BGU i. Any zero valued entry in this matrix will indicate that the corresponding configuration should be rejected, as it leaves BGU i uncovered. The minimum value resulting from summing each of the k columns (the median of the resulting graph) will reflect the most desirable overall configuration:

Minimize over the candidate configurations c = 1, 2, ..., k:

$$z_c = \sum_{i=1}^{b}\sum_{j=1}^{n} r_{ij}\,x_{jc}$$

A shortcoming of this approach is that it doesn't take into account the distance that units would have to travel to occupy empty stations. There are at least a couple of ways around this. One is to weight the entries of X according to the desirability of the move. Desirability can be derived from a combination of factors including currently predicted demand within station j's district of responsibility and distance to travel to the closest of the empty stations to be filled. Weights that accurately reflect the desired outcome might be difficult to find. Another approach would be to retain the k lowest values from the above summation and analyze each for total distance to be traveled by the relocating units.
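A small numerical sketch of this ranked-coverage evaluation is given below using NumPy; the R and X matrices are hypothetical. To make the zero-detection rule described above work directly, the sketch encodes an unreachable station as a zero rank (rather than an infinitely large one), which is one consistent reading of the description; the column with the smallest feasible sum is the recommended configuration.

```python
import numpy as np

# Hypothetical ranked coverage matrix R (3 BGUs x 4 stations).  Entry r_ij is
# the rank of station j for BGU i (1 = closest usable station, 2 = next, ...);
# a zero means station j cannot reach BGU i within the acceptable time, so a
# zero entry of RX flags an uncovered BGU.
R = np.array([[1.0, 2.0, 0.0, 0.0],
              [0.0, 1.0, 2.0, 0.0],
              [0.0, 0.0, 1.0, 2.0]])

# Hypothetical configuration matrix X (4 stations x 3 candidate configurations):
# column c holds a 1 for every station occupied under configuration c.
X = np.array([[1, 0, 1],
              [0, 1, 0],
              [1, 1, 0],
              [0, 0, 1]])

RX = R @ X
feasible = (RX > 0).all(axis=0)          # every BGU must see some occupied station
scores = np.where(feasible, RX.sum(axis=0), np.inf)
best = int(scores.argmin())
print(RX)
print(scores, "-> choose configuration", best)
```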


Stage 4 - Assigning Units to Empty Stations

This stage simply assigns the units that came out of stage 3 to the stations that were decided on in stage 2. In the vast majority of the recommendations, it is anticipated that only one move will be required and this stage will not be needed. On the rare occasion that it is needed, the data necessary to make an assignment will have already been generated. Given the object oriented implementation of the Grassfire algorithm, each station coming from stage 2 can have its own Grassfire evaluation object retained for this stage. Unit assignments will be as simple as finding which units are closest to which stations.

At this point, a large unknown is whether it will be adequate to select and fill one station at a time, or whether the needs of the system are going to demand that larger sets of stations to be filled, and the units to fill them, need to be considered. Obviously, if adequate recommendations can come from moving one station at a time, this stage can be eliminated.

4.3 Testing the Solution - A Simulation Model

A discrete event simulation will be used to test the effectiveness of response and relocation policies. The model will allow for a configurable number of servers, geographically distributed across the region. As in the real system, there is no incident queue; if the closest server is unavailable, the next closest server will be assigned to the incident. Units will be created and maintained as objects that will track their own current state (location and availability) as well as statistical values relating to work load and travel time.

The use of a Poisson distribution for the alarm arrival rate seems to be a widely accepted standard that adequately models reality. Using the chi-square test, the Rand institute verified that the alarm arrival rate data for the city of New York were consistent with the Poisson distribution at the 0.05 significance level (pp.


Filippi and Romanin-Jacur state that "First aid requests arise at every nucleus according to a Poisson process of given rate generally proportional to the amount of residents." (1996). An exponential distribution is also often used for service time in queuing analyses related to the fire service (Walker, Chaiken, and Ignall, 1979, p. 214). Unfortunately, this distribution does not match reality, but it is used as a simplifying assumption and the results obtained are then shown to be approximately correct for several examples of real service time distributions.

The data needed to determine the geographic distribution and mean interarrival rate of incidents, and the service time distributions for various types of incidents, are readily available, at least for Littleton, through their incident reporting system. The distribution of incident types is also tracked and can be extracted from this data set.

To create an incident, the model will generate an exponential random variable with a mean interarrival time determined from historical data. Although the actual distribution of incident locations can be obtained (and actually is obtained for use in the relocation algorithms), getting a discrete random variable from a finely partitioned set (there are roughly 2000 BGUs) would be very time consuming and tedious, so each incident will be located according to a uniform distribution.17 An incident type will then be assigned according to a given distribution of standard incident types derived from the data, and a service time for the incident will be generated using the mean service time for the assigned incident type plus the predicted travel time from the server location to the incident location.

An event queue will be maintained to track all time related events. The event queue maintains a time line, and all added events are placed in order according to the time they are to take place. The simulation places a couple of new events into the event queue upon initialization. Generation of new incidents includes assignment of an incident location, a type, and a value for the amount of time the incident will require to be serviced. The main body of the simulation pulls the head of the event queue and processes it according to the type of the simulation event pulled.

17 Like the Poisson distribution for interarrival times, a uniform distribution for location seems to be a widely accepted standard in location analysis studies.
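As a concrete illustration of the generation step just described, the following sketch draws an exponential interarrival time by the inverse transform method, a uniformly distributed location, and an incident type from a cumulative frequency table. It is illustrative only; the constants, type names, and parameter names are assumptions rather than code from the prototype, and the travel-time component is added later, at dispatch.

const
  NumBGUs = 2000;            {size of the uniformly sampled location set}

type
  TIncident = record
    ArrivalTime: real;       {simulation clock time of the alarm}
    LocationBGU: integer;    {BGU where the incident occurs}
    IncidentType: integer;
    ServiceTime: real;       {mean on-scene time; travel time added at dispatch}
  end;

{TypeCumFreq holds the cumulative frequencies of the standard incident
 types, and MeanService the mean service time for each type, both derived
 from the historical incident data.  Call randomize once at startup.}
function NextIncident (Clock, MeanInterarrival: real;
                       const TypeCumFreq: array of real;
                       const MeanService: array of real): TIncident;
var
  u: real;
  t: integer;
begin
  {exponential interarrival time by inverse transform}
  result.ArrivalTime:= Clock - MeanInterarrival * ln(1 - random);
  {uniformly distributed location over the BGUs}
  result.LocationBGU:= random(NumBGUs) + 1;
  {incident type drawn from the cumulative distribution of historical types}
  u:= random;
  t:= 0;
  while (t < High(TypeCumFreq)) and (u > TypeCumFreq[t]) do
    t:= t + 1;
  result.IncidentType:= t + 1;
  {service time: mean on-scene time for this type}
  result.ServiceTime:= MeanService[t];
end;

The same inverse-transform idea would apply if the uniform location assumption were later replaced by the empirical BGU distribution.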


Event types are as follows:

New Incident: A new incident will call the dispatch routine. The dispatch routine will find the closest available units that are needed for the given incident type and will assign them to the incident location.18 For each unit, the predicted travel time will be added to the given service time and two new events will be placed in the event queue: one for completion of service and one for the return to quarters. The dispatch routine will be responsible for updating system wide statistics such as total incidents served, total system wide response time, maximum response time, the total time there were areas of the system without coverage, and the number of incidents that were not reached within the maximum acceptable response time.

Service Completion: The unit will update its work load statistics and becomes available for re-dispatch at this time. It will not change its location until it is actually back in quarters. This complicates things a little and will require the dispatch routine to track and update a 're-dispatch' flag, to be set if the unit is dispatched from a location other than its home.

Return to Quarters: At this time, if the re-dispatched flag has not been set, the unit will update its location to its assigned home; if the unit has been re-dispatched, this event is ignored.

18 In reality, units that are in the process of responding to an incident are available to be reassigned to a more urgent incident up until the time when they actually arrive at the incident. This simulation will make the simplifying assumption that once a unit has been dispatched to an incident, it will be unavailable until the service of the incident is complete.
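The processing described above can be pictured with the following deliberately simplified main loop. It is a sketch only: the array-based, time-ordered event list stands in for whatever queue structure the model actually uses, and the case arms are left as comments because the dispatch and bookkeeping details are those described in the text.

const
  MaxEvents = 1000;

type
  TEventKind = (ekNewIncident, ekServiceCompletion, ekReturnToQuarters);

  TSimEvent = record
    EventTime: real;       {simulation clock time at which the event fires}
    Kind: TEventKind;
    UnitIndex: integer;    {unit involved, where applicable}
  end;

var
  Events: array [1..MaxEvents] of TSimEvent;
  NumEvents: integer;

{insert an event so the array stays ordered by EventTime}
procedure AddEvent (const ev: TSimEvent);
var
  i, j: integer;
begin
  i:= 1;
  while (i <= NumEvents) and (Events[i].EventTime <= ev.EventTime) do
    i:= i + 1;
  for j:= NumEvents downto i do
    Events[j + 1]:= Events[j];
  Events[i]:= ev;
  NumEvents:= NumEvents + 1;
end;

{remove and return the earliest pending event}
function NextEvent: TSimEvent;
var
  j: integer;
begin
  result:= Events[1];
  for j:= 2 to NumEvents do
    Events[j - 1]:= Events[j];
  NumEvents:= NumEvents - 1;
end;

procedure RunSimulation (EndTime: real);
var
  ev: TSimEvent;
  Clock: real;
begin
  Clock:= 0.0;
  NumEvents:= 0;
  {initialization: schedule the first New Incident event(s) with AddEvent}
  while (NumEvents > 0) and (Clock <= EndTime) do
  begin
    ev:= NextEvent;
    Clock:= ev.EventTime;
    case ev.Kind of
      ekNewIncident: ;        {dispatch closest available units; schedule
                               Service Completion, Return to Quarters, and
                               the next New Incident}
      ekServiceCompletion: ;  {update the unit's work load statistics and
                               mark it available for re-dispatch}
      ekReturnToQuarters: ;   {if the unit has not been re-dispatched,
                               move it back to its assigned home}
    end;
  end;
end;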


A final analysis routine will output the results of the simulation run. A number of metrics will be tracked, including the average system wide response time, the maximum response time, the total time there were areas of the system without coverage, the number of incidents that were not reached within the maximum acceptable response time, individual unit work loads and time spent traveling, and the average work load per unit.
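One way this bookkeeping might be organized is sketched below; the record layouts and field names are illustrative assumptions, not part of the prototype.

type
  TSystemStats = record
    IncidentsServed: longint;
    TotalResponseTime: real;     {summed over incidents; average = total / served}
    MaxResponseTime: real;
    TimeWithoutCoverage: real;   {total time any area of the system was uncovered}
    MissedMaxResponse: longint;  {incidents not reached within the acceptable time}
  end;

  TUnitStats = record
    IncidentsHandled: longint;   {per-unit work load}
    TimeTraveling: real;
    TimeOnIncidents: real;
  end;

procedure ReportSystemStats (const s: TSystemStats);
begin
  writeln('Incidents served:           ', s.IncidentsServed);
  if s.IncidentsServed > 0 then
    writeln('Average response time:      ', s.TotalResponseTime / s.IncidentsServed:0:2);
  writeln('Maximum response time:      ', s.MaxResponseTime:0:2);
  writeln('Time without full coverage: ', s.TimeWithoutCoverage:0:2);
  writeln('Responses over the maximum: ', s.MissedMaxResponse);
end;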


Appendix A. Glossary

AVL - Automatic Vehicle Locator: a device that uses a GPS receiver to find its own location and then transmits that location to be picked up by some central station (such as the dispatch center responsible for dispatching that vehicle).

Auto/Mutual Aid - Agreements made between fire departments to share needed resources. Automatic aid agreements are usually limited to an area within a certain distance of the shared boundaries of the departments; for incidents within these boundaries, requests for assistance are automatically honored. With mutual aid agreements, the needed resources are requested through the dispatch center of the requested department and are granted or denied based on the current demand within that department's system.

BGU - Basic Geographical Unit: a small geographic area, typically represented as a polygon defined by its bordering arcs.

CAD - Computer Aided Dispatch system.

Geofile - Database file with address and BGU information.

GPS - Global Positioning System: a system of satellites put into place by the Department of Defense. Each satellite transmits time and position data; receivers use this information to calculate their own geographical position.

Incident - A request made to the system, initiated by a call placed directly to the fire department or routed through 911.

Move-up - Movement of units from a geographic area with adequate coverage to another that is lacking coverage.

Response - The collective resources that are needed to respond to any one type of incident. A single response may have various combinations of vehicles and/or personnel that serve a given collective function.

Response Recommendation - A set (or sets) of units that the system has determined to be the best suited to respond to a given incident.

SMFPD - South Metro Fire Protection District.


Appendix B. Source Code

unit GeoSys;

{Contains the following classes:

 TGeoSystem: Consists of function BuildMatrix that returns a pointer to an
 adjacency matrix representing the transportation system to be used in the
 analysis done by the Grassfire algorithm. BuildMatrix requires an input of
 the tables containing arc and node information; it returns a pointer to an
 internal data structure built from these database tables. Use of this
 structure significantly improves the performance of the Grassfire
 algorithm. This object holds a default adjacency matrix pointer; the
 create method requires a reference to an arcs table and a nodes table.

 TGrassFire: Maintains the list of sources and the implementation of the
 Grassfire algorithm.

 TClosestNode: Contains a method to find the node in the transportation
 system that is the closest to a given lat/long location. SearchRadius is
 initialized to 150000. MaxIterations is the number of times the search
 radius will be increased trying to find the node; if a node isn't found,
 ClosestNodeID will be set to -1.}

interface

uses classes, dbtables, dialogs, fifo_Que, SysUtils;

const
  MaxNodes = 1000;    {the maximum number of nodes in the system}
  Num_Sources = 20;   {max number of sources to be considered by Grassfire}

type
  TLongIntPtr = ^LongInt;

  resp_time_rec = record
    avg_time: real;
    max_time: real;
    nodes_assigned: integer;
  end;

  Tresp_time_arry = array [1 .. Num_Sources] of resp_time_rec;


{------------------------------------------------------------------------------} {type declarations for TGeoSystem and its adjacency matrix} AM_elem_ptr = "AM_ element; AM_element = record adjarc_tt: real; endnode: longint; next: AM_elem_ptr; end; nodes_arry = array [1 .. MaxNodes] of AM_elem_ptr; pnodes_arry = "nodes_arry; TGeoSystem = class (TObject) private ArcsTable:Ttaple; NodesTable:Ttable; procedure SetArcsTable(theTable:Ttable); function GetArcsTable:Ttable; procedure SetNodes Table(The Table:Ttable); function GetNodesTable:Ttable; function GetNodeTableName:string; function GetNode TableDBName: string; Public DefaultAM: pnodes_arry; property ArcTable:Ttable read GetArcsTable write SetArcsTable; property NodeTable:Ttable read GetNodesTable write SetNodesTable; property NodeTableName:string read GetNodeTableName; property NodeTableDBName:string read GetNodeTableDBName; function BuildMatrix:pnodes_arry; constructor Create(Arcs, Nodes:Ttable); destructor Free; end; {------------------------------------------------------------------------------} {type declarations for TCiosestNode} TCiosestNode = class (TObject) SearchRadius:longlnt; ClosestNode I D: Long I nt; Pointlat: Long I nt; Pointlong: Longlnt; CN_Lat:Longlnt; 46


CN_Long:Longlnt; Maxlterations:Longlnt; {the max number of times the radius will be increased} constructor create; {trying to find the nodeClosestNodeiD is set to -1 if not found} destructor free; procedure FindCiosestNode (DBName,TableName:string); {Nodes Table being considered} end; {----------------------------------------------------------------------------} {type declarations for Grassfire} ENoSources =class (exception); ERM_NuiiEntry_= class (exception); node_ptr = 1\node; node = record nodewt: real; froninode: longint; sourcefrom: longint; next: node_ptr; end; TResultArray =array [1 .. MaxNodes] of node_ptr; TGrassFire = class (TObject) private Sources:Tlist; ResultMatrix:TResultArray; resp_times:Tresp_time_arry; GeoS:TGeoSystem; function GetGeoSys:TGeoSystem; procedure SetGeoSys(GeoSys:TGeoSystem); public constructor create( GeoSys: TGeoSystem); property GeoSystem: TGeoSystem read GetGeoSys write SetGeoSys; procedure AddSource (Source:Longlnt); procedure Burn; function NumNodeslnSys:Longlnt; function RMEntry_lsNull (Nodelndex:Longlnt):boolean; function RM Entry_ SourceNode (Node Index: Long I nt): Long I nt; 47


function RM Entry_ NodeFrom (Node Index: Long I nt): Long I nt; function RMEntry_NodeWt (Nodelndex:Longlnt):Real; destructor free; end; implementation { ---------------------------------TC I osestN ode--------------------------------} constructor TCiosestNode.create; begin inherited create; SearchRadius:= 150000; Maxlterations:= 20; end; destructor TCiosestNode.free; begin if self <> nil then inherited destroy; end; procedure TCiosestNode. findCiosestNode (DB Name, TableName:string); var uplat, uplong ,lowlat,lowlong :real; dist,mindist:real; distlat,distlong: real; numrecords:integer; LatLongQuery:TQuery; sqlstring:string; Rad I ncCou nt: Long I nt; begin RadlncCount:= 1; If ((PointLat = 0) or (PointLong = 0)) then messagedlg('Latitude and Longitude properties not set!', MTinformation, [mbok],O) else begin LatLongQuery:= TQuery.create(nil); LatLongQuery.DatabaseName:= DBName; 48


sqlstring:= 'select node_id, Latitude, Longitude from '; sqlstring:= sqlstring + tablename; sqlstring:= sqlstring +'where ((Latitude>= :lowlat and Latitude<= :uplat) '; sqlstring:= sqlstring +'and (Longitude>= :lowlong and Longitude<= :uplong))'; latlongquery. close; latlongquery.sql.clear; latlongquery.sql.add(sqlstring); latlongquery. prepare; closestnodel 0:=0; uplat:= PointLat + SearchRadius; lowlat:= PointLat-SearchRadius; uplong:= PointLong + SearchRadius; lowlong:= PointLong SearchRadius; latlongquery.parambyname('lowlat').Asfloat:= lowlat ; latlongquery. parambyname ('lowlong') .Asfloat: = lowlong; latlongquery.parambyname ('uplat').Asfloat:= uplat; latlongquery.parambyname ('uplong').Asfloat:= uplong; latlongquery.open; numrecords:= latlongquery.recordcount; while (numrecords = 0) and (RadlncCount <= Maxlterations) do begin RadlncCount:= RadlncCount + 1; uplat:= uplat + SearchRadius; lowlat:= lowlat SearchRadius; uplong:= uplong + SearchRadius; lowlong:= lowlong SearchRadius; latlongquery. close; latlongquery.parambyname('lowlat').Asfloat:= lowlat ; latlongquery.parambyname ('lowlong').Asfloat:= lowlong; latlongquery.parambyname ('uplat').Asfloat:= uplat; latlongquery.parambyname ('uplong').Asfloat:= uplong; latlongquery. open; numrecords:= latlongquery.recordcount; end; if RadlncCount <= Maxlterations then begin latlongquery. first; mindist:= high(longint); repeat 49


distLat:= latlongquery.fieldbyname ('Latitude').AsfloatPointLat; distLong:= latlongquery.fieldbyname ('Longitude').AsfloatPointLong; dist:= sqrt(sqr(distLat) + sqr(distLong)); if dist < mindist then begin mindist:= dist; closestnodel D: = latlongquery. fieldbyname('node _id') .As Integer; CN_Lat: = latlongquery. fieldbyname('latitude') .As Integer; CN_Long:= latlongquery.fieldbyname('longitude').Aslnteger; end; latlongquery.next; untillatlongquery.eof; end {if RadlncCount <= Maxlterations} else -1; end; {if point_lat = 0 ... } end; {TCiosestNode.find} { _______ ._ ____________________ TG rassFire----------------------------} function TGrassFire.GetGeoSys:TGeoSystem; begin result:= GeoS; end; procedure TG rassFire. SetGeoSys( GeoSys: TGeoSystem); begin GeoS:= GeoSys; end; constructor TGrassfire. create( GeoSys: TGeoSystem); begin inherited create ; sources:= TList.create; GeoSystem:= GeoSys; end; destructor TGrassfire. free; var i:Longlnt; 50


begin if self <> nil then begin Sources. free; for i:= 1 to MaxNodes do begin { dispose(resultmatrix[i]A);} if ResultMatrix[i] <> nil then dispose(ResultMatrix[i]); end; end; if self <> nil then destroy; end; function TGrassFire.NumNodeslnSys:Longlnt; begin result:= MaxNodes; end; {procedure TGrassfire.burn (Adj_Matrix:Pnodes_arry);} procedure TGrassfire.burn; var node_list: TfifoQue; c_source: longint; nodeAsl ndex:integer; temp:AM_elem_ptr; procedure initialize; {TGrassFire.Burn} {initializes the queue of nodes to be considered and the results list } var i,x: integer; sourcePtr:ALonglnt; begin {node_list:= TfifoQue.createlist; } {load source nodes into queue} fori:= 0 to Sources. count-1 do begin sourcePtr:= sources.items[i]; if sourcePtrA > 0 then begin x:= sourcePtrA; new(resultMatrix[x]); 51


ResultMatrix[x]".nodewt:= 0; ResultMatrix[x]" .from node:= sourcePtr"; ResultMatrix[x]".sourcefrom:= sourcePtr"; ResultMatrix[x]".next:= nil; Node_List.additem(x); end; resp_times[i+1].avg_time:= 0.0; resp_times[i+1 ].max_time:= 0.0; resp _times[i+ 1]. nodes_ assigned:= 0; end; {fori} end; {initialize} procedure eval_net; {TGrassfire.burn} var curr_node: longint; curr_nodeWeight: real; enode:longint; temp_wt:real; begin {eval_net} while not Node_List.is_empty do begin Curr _node:= Node_List.retrieveHead; {get the weight of the current node } curr_nodeWeight:= ResultMatrix[curr_node]".nodewt; c _source:= ResultMatrix[ curr _node]" .sourcefrom; {process all arcs in adjacency matrix} temp:= GeoS.DefaultAM"[curr_node]; if temp <> nil then repeat { tt:= temp".adjarc_tt; } enode:= temp".endnode; temp_wt:= ResultMatrix[curr_node]".nodewt + temp".adjarc_tt; if ResultMatrix[enode] =nil then begin new(ResultMatrix[ en ode]); Resu It Matrix[ en ode]". nodewt: = tern p wt; ResultMatrix[enode]".fromnode:= curr_node; ResultMatrix[enode]".sourcefrom:= c_source; ResultMatrix[ en ode]". next:= nil; Node_ List. add item (en ode); end else 52


begin if temp_wt < ResultMatrix[enode]'\nodewt then begin ResultMatrix [enode]A.nodewt:= temp_wt; ResultMatrix[enode]A.fromnode:= curr_node; ResultMatrix[enode]A.sourcefrom:= c_source; Node_List.additem(enode); end; end; temp:= tempA.next; until temp = nil; end; {while } end; {eval_net} begin {procedure Grassfire.burn} if sources. count <> 0 then begin try node_list:= TfifoQue.create; initialize; eval_net; finally node_list.free; end; end else Raise ENoSources.create('Source Jist is empty (TGrassFire.Burn)'); end; {TGrassfire.Burn} procedure TGrassfire.AddSource(source:Longlnt); var element: TLonglntPtr; begin element:= new(TLonglntPtr); element/\:= Source; Sources. add( element); end; Function TGrassfire.RMEntry _Is Null (Node Index: Longlnt): Boolean; begin if ResultMatrix[Nodelndex] =nil then result:= true else 53


result:= false; end; Function TGrassfire. RM Entry_ SourceNode (Node Index: Long I nt): Long I nt; begin if ResultMatrix[Nodelndex] = nil then Raise ERM_NuiiEntry.create('Result Matrix Entry is Nil') else result:= ResultMatrix[Nodel ndex]" .SourceFrom; end; function TGrassfire. RMEntry _NodeFrom (Node Index: Longlnt): Longlnt; begin if ResultMatrix[Nodelndex] = nil then Raise ERM_f\:luiiEntry.create('Result Matrix Entry is Nil') else result:= ResultMatrix[Nodel ndex]". From Node; end; function TGrassfire. RM Entry _NodeWt (Node Index: Long I nt): Real; begin if ResultMatrix[Nodelndex] = nil then Raise ERM_NuiiEntry.create('Result Matrix Entry is Nil') else result:= ResultMatrix[Nodel ndex]". NodeWt; end; { ------------------------------TGeoSyste m-----------------------------------} function TGeoSystem.BuildMatrix: pnodes_arry; {builds an adjacency matrix from a table of arcs and one of nodes. Returned is a pointer to an array of linked lists. The node ID is an index into the array. The elements of the list contain the weight and opposite end node of each arc that is adjacent to that node. The fields used from each table are as follows: } Arcs Table: ENode1 ENode2 Weight Nodes Table: Node_ID 54


var adj_matrix:pnodes_arry; nodeid:longint; temp:AM_elem_ptr; i:integer; new_node: AM_elem_ptr; enode:longint; begin getmem (adj_matrix, sizeof(nodes_arry)); fori:= 1 to MaxNodes do begin adj_matrix"[i]:= nil; end; NodesTable.active:= true; ArcsTable.active:= true; Nodes Table. first; repeat nodeiD:= NodesTable.fieldbyname('Node_ID').Aslnteger; ArcsTable.indexname:= 'node1'; ArcsTable.setrange([nodeid],[nodeid]); Arcs Table. first; while not ArcsTable.eof do begin temp:= adj_matrix"[nodeid]; if temp = nil then begin new(new_node); new_node".next:= nil; new_node".adjarc_tt:= ArcsTable.fieldbyname('weight').AsFioat; new_node".endnode:= ArcsTable.fieldbyname('enode2').Aslnteger; adj_matrix"[nodeid]:= new_node; ArcsTable.next; end else begin while temp".next <>nil do temp:= temp".next; new(new_node); new_node".next:= nil; new _node". adjarc _tt:= Arcs Table. fieldbyname('weight').AsFioat; 55


new_node". end node:= Arcs Table. fieldbyname('enode2') .As Integer; temp".next:= new_node; Arcs Table. next; end; end; ArcsTable.indexname:= 'node2'; ArcsTable.setrange([nodeid],[nodeid]); ArcsTable.first; while not ArcsTable.eof do begin temp:= adj_matrix"[nodeid]; if temp = nil then begin new(new_node); new_node".r:1ext:= nil; new_node".adjarc_tt:= ArcsTable.fieldbyname('weight').AsFioat; new_node". end node:= Arcs Table. fieldbyname('enode 1').As Integer; adj_matrix"[nodeid]:= new_node; ArcsTable.next; end else begin while temp".next <>nil do temp:= temp".next; new(new_node); new_node".next:= nil; new_node".adjarc_tt:= ArcsTable.fieldbyname('weight').AsFioat; new_node".endnode:= ArcsTable.fieldbyname('enode1').Aslnteger; temp".next:= new_node; ArcsTable.next; end; end; NodesTable.next; until NodesTable.eof; result:= adj_matrix; end; {TGeosystem. BuildMatrix} constructor TGeoSystem.create(Arcs, Nodes:Ttable); begin inherited create; Arcs Table:= Arcs; 56


NodesTable:= Nodes; getmem(DefaultAM, sizeof(pnodes_arry)); DefaultAM:= BuildMatrix; end; destructor TGeoSystem.free; var temp:AM_elem_ptr; i:Longlnt; begin if self <> nil then begin fori:= 1 to maxNodes do begin temp:= Defat,JitAM"[i]; while temp <> nil do begin DefaultAM"[i]:= temp".next; dispose(temp); temp:= DefaultAM"[i]; end; end; freemem (DefaultAM, sizeof(nodes_arry)); if self <> nil then destroy;} end; end; procedure TGeoSystem.SetArcsTable(theTable:Ttable); begin ArcsTable:= TheTable; end; function TGeoSystem.GetArcsTable:Ttable; begin result:= Arcs Table; end; procedure TGeoSystem.SetNodesTable(TheTable:Ttable); begin NodeTable:= The Table; end; 57


function TGeoSystem.GetNodesTable:Ttable; begin result:= NodesTable; end; function TGeoSystem.GetNodeTableName:string; begin result:= NodesTable.name; end; function TGeoSystem.GetNodeTableDBName:string; begin result:= NodesTable.DataBaseName; end; end. {unit GeoSys} unit Resp_rec; {A response is based on the type of incident and the BGU where it occurs. A given incident type in a given BGU requires certain capabilities in the responding units. Available units having these capabilities are found and a list of sets of units built with the units closest to the incident at the top of the list (index 0) TResponse: Create method requires input of an evaluated network (TGrassFire), the ID of the BGU (initializes the BGU property) of the incident, the incident type (to initialize the TypeofEvent property) and the name of the resources database Find Response method builds a list of response sets ResponseSet(index:integer) method returns a Tlist Object listing a set of units. Index is the recommendation positioni.e., the closest set of units would be at index = 0 ResponseSetSize(index:integer) returns the number of units in the set of units at index Database tables and fields that are used: Table: RESPONSE Table:RUNCARD Fields: (all fields) Fields: (all fields) Resp_type Resp_ Type Capability Event_ Type Qty BGU 58


Table: Units Fields: (all fields) Capability Status Unit_id Table: Units_location Fields: Latitude Longitude Unit_ID The remaining classes in this unit are as follows. All are used by TResponse. TUn it: Holds information about a single response unit TUnitlist: A list of units TCapabilitieslist: An intermediate structure used by TResponse.GetResponse to build a list of needed capabilities and the units that can fulfill them TCapability: An .individual entry in the Capabilitieslist TResponselist: A list containing sets of units that can fulfill a given response request } interface uses classes, dbtables,sysutils, geosys Type Tunit = class unitiD:string; dist:real; status:string; location: integer; end; TUnitlist = class (Tlist) private Thelist:Tlist; protected function Get(lndex: Integer): TUnit; procedure Put(lndex: Integer; Item: TUnit); public constructor create; destructor free; 59


procedure add(Value:TUnit); procedure lnsertByDist(value:TUnit); property ltems[lndex: Integer]: TUnit read Get write Put; default; function count:integer; procedure delete(lndex:integer); end; T capability = class capab: string; qty: integer; unitslist: TUnitlist; {will hold a list of available units that have this capability} constructor create; destructor free; end; TCapabilitieslist = class (Tlist) private capablist: Tlist; public constructor create; destructor free; procedure clear; end; TResplist = class (Tlist) private TheResplist:Tiist; protected function Get(lndex: Integer): TUnitlist; procedure Put(lndex: Integer; Item: TUnitlist); public constructor create; destructor free; procedure add(ltem:TUnitlist); procedure clear; property ltems[lndex: Integer]: TUnitlist read Get write Put; default; function count:integer; end; 60


TResponse = class private Resplist:TResplist ; Capabilitylist:TCapabilitieslist; BGUid:String; DB:string; TypeEvent:String; EvaledNet:TGrassFire; RespQ:TQuery; UnitsQ:TQuery; UnitslocQ:TQuery; function GetBGU:string; procedure SetBGU (BGU_ID:string); function GetEvent:string; procedure SetEvent(Event:string); public property BGU:string read BGUid write BGUid; property TypeOfEvent:String read TypeEvent write TypeEvent; procedure FindResponse; Function ResponseSet(index:integer):TUnitlist; {returns the response set at index} function ResponseSetSize(index:integer):integer; {returns the size of the response set at index} function ResponselistSize:integer; {returns the number of response sets in the list} constructor create(EvaluatedNetwork:TGrassFire; BGU_ID, EventType:string; DataBase:string); destructor free; end; implementation { --------------------------------TCa pa bi I ity------------------------------} constructor TCapability.create; begin inherited create; end; 61


destructor TCapability. free; var temp: TUnit; begin While unitslist.count > 0 do begin temp:= unitslist.first; temp. free; unitslist. delete(O); end; unitslist. free; inherited free; end; { -----------------------------TCapa bi I ities List-----------------------------} constructor TCapabilitieslist.create ; begin inherfted create; capablist:= Tlist.create; end; destructor TCapabilitieslist.free; var temp:TCapability; begin while capablist.count > 0 do begin temp:= capablist.first; temp .free; capablist.delete(O); end; Capablist. free; inherited free; end; procedure TCapabilitieslist.clear; var temp:TCapability; begin 62


while capablist.count > 0 do begin temp:= capablist.first; temp.free; capablist. delete(O); end; inherited clear; end; {------------------------------lrlJnitlist-------------------------------------} constructor lrlJnitlist.create; begin inherited creat_e; lrhelist:= lrlist.create; end; destructor lrlJnitlist.free; var lrempLJnit:lrlJnit; begin while lrhelist.count > 0 do begin templJnit:= lrhelist.items[O]; templJnit.free; lrhelist.delete(O); end; lrhelist. free; if self<> nil then inherited destroy; end; function lrlJnitlist.count:integer; begin result:= lrhelist.count; end; procedure lrlJnitlist.Add(value:lrLJnit); begin lrhelist. add(value ); end; 63


procedure TUnitlist.lnsertByDist(value:TUnit); {inserts into the unitslist with the smallest distance values in front} var place: integer; temp:TUnit; cent: boolean; listsize: integer; begin listsize:= Thelist.count; if listsize = 0 then The list. add(value) else begin place:= 0; cont:= true; while cent and (place < listsize) do begin temp:= Thelist.items[place]; if temp.dist < value.dist then place:= place+ 1 else cent:= false; end; {while} The list. insert(place, value); end; {else} end; {lnsertbyDist} function TUnitlist.Get(lndex: Integer): TUnit; begin result:= Thelist.items[index]; end; procedure TUnitlist.Put(lndex: Integer; Item: TUnit); begin Thelist.items[index]:= Item; end; procedure TUnitlist. ndex: Integer); begin Thelist.delete(lndex); end; 64


{ ---------------------------------TRes p List---------------------------------} constructor TResplist.create; begin inherited create; TheResplist:= Tlist.create; end; destructor TResplist. free; var tern plist: TU nitlist; begin while TheResplist.count > 0 do begin templist:= TheResplist.first; templist. free; TheResplist.delete(O); end; TheResPList. free; if self <> nil then inherited destroy; end; procedure TResplist.clear; var templist:TUnitlist; begin while TheResplist.count > 0 do begin templist:= TheResplist.first; templist. free; TheResplist.delete(O); end; inherited clear; end; procedure TResplist.add(ltem:TUnitlist); begin TheResplist.add(ltem); end; 65


function TRespList.count:integer; begin result:= TheRespList.count; end; function TRespList.Get(lndex: Integer): TUnitList; begin result:= TheRespList.items[index]; end; procedure TRespList.Put(lndex: Integer; Item: TUnitList); begin TheRespList.items[index]:= Item; end; { -----------------------------------TRes po nse---------------------------------} constructor TResponse.create(EvaluatedNetwork:TGrassFire; BGU_ID, EventType:string;DataBase:string); begin inherited create; RespList:= TRespList.create; CapabilityList:= TCapabilitiesList.create; evaledNet:= EvaluatedNetwork; BGU:= BGU_ID; TypeEvent:= EventType; DB:= Database; RespQ:=TQuery.create(nil); UnitsQ:=TQuery.create(nil); UnitslocQ:= TQuery.create(nil); RespQ. DataBaseName:= DataBase; UnitsQ.DataBaseName:= DataBase; UnitslocQ. DataBaseName:= DataBase; end; destructor TResponse. free; begin RespList. free; CapabilityList. free; RespQ. free; UnitsQ.free; 66


UnitslocQ. free; inherited free; end; function TResponse.GetBGU:string; begin result:=BGU; end; procedure TResponse.SetBGU (BGU_ID:string); begin BGU:= BGU_ID; end; function TResponse. GetEvent:string; begin result:= TypeEvent; end; procedure TResponse.SetEvent(Event:string); begin TypeEvent:= Event; end; function TResponse.ResponseSet(index:integer):TUnitlist; var unitlist:TUnitlist; begin Unitlist:= Resplist.items[index]; result:= Unitlist; end; function TResponse. ResponseSetSize(index: integer): integer; var temp:TUnitlist; begin temp:= Resplist.items[index]; result:= temp.count; end; function TResponse. ResponselistSize:integer; begin result:= Resplist.count; end; 67


procedure TResponse.FindResponse; {goes through and builds a list of needed capabilities along with the units (by order of the closest ones first) that can meet each need. This list is then used to build a list of response sets } var CapabilityRec:Tcapability; NewUnit:Tunit; Unitlist:TUnitlist; status : string; location : integer; sqlstring : string ; SQLStr : string; ClosestNode : TCiosestNode; procedure buildResplist; var ResplistSet:TUnitlist; Capability:TCapability; CapabUnitslist: TUnitlist; i, j:integer; capablistcount: integer; function inlist(Unitltem:TUnit):boolean; var Unitlist:TUnitlist; LocaiUnitltem:TUnit; found:boolean; ListNum,UnitNum:integer; begin found : = false; if Resplist.count > 0 then begin for listnum : = 0 to (Resplist.count-1) do begin Unitlist:= Resplist.items[listnum]; for unitnum:= 0 to (Unitlist.count-1) do begin LocaiUnitltem : = Unitlist. items[ unitnum ]; if LocaiUnitltem Unitld = Unitltem.unitld then found : = true; end; {for unitnum} end; {for listnum} 68


end; {if resplist.count > 0} result:= found; end; {lnList} begin {BuildRespList} while CapabilityList.count > 0 do begin RespListSet:= TUnitList.create; capabListCount: = capabilityList. count; for i:= 0 to (CapabListcount 1) do {go through each item in the capability } {list and build a set of units to fulfill each} begin {needed capability} Capability:= CapabilityList. items[i]; Capability. Units List; for j:= 1 to Capability.qty do begin if CapabUnitsList.count <> 0 then {check for unassigned units} begin if not inList(CapabUnitsList.items[O]) then {check to see if the unit is} begin {already in the recommendation list} RespListSet.add(CapabUnitsList.items[O]); capabUnitsList.delete(O); end {if not inList...} else {already in recommendations list still need to delete the item} capabUnitsList.delete(O); end {if capabunitslist.count...} else begin CapabilityList.delete(i); {when the units list is empty delete the item} break; {from the capabilities list} end; {else} end; {for j} end; {fori} if respListSet.count <> 0 then RespList.add(RespListSet) else RespListSet. free; j:= 0; fori:= 0 to (capablistcount1) do {look at the unit list for each capability} begin {delete the capabilities that have empty} capability:= capabilitylist.itemsO]; {unit lists} 69


capabUnitslist:= capability.Unitslist; if CapabUnitslist.count = 0 then capabilitylist.deleteU) {index stays the same when an element is } else {eliminated} j:=j+1; end; {fori:= (capablistcount1)) do} end; {while Capabilitylist.count > 0 do} end; {buildresplist} begin {findresponse} Resplist. clear; Capabilitylist.clear; ClosestNode:=TCiosestNode.create; respq.close; unitsq.close; respq.close; respq.sql.clear; SqiString:= 'Select* from response, runcard where (response.resp_type = runcard.resp_type) and '; SqiString:= SqiString + '(Runcard.event_type = :event_type) and (Runcard.bgu = :bgu)'; respq.sql.add(SQLString); respq.params[O].AsString:= TypeEvent; respq. params[1 ].AsString:= BGUid; respq.prepare; respq.open; respq.first; UnitsLocQ.close; UnitsLocQ.sql.clear; SQLStr:= 'Select Latitude, Longitude from Units_location where Unit_ID = :UnitiD'; UnitsLocQ.sql.add(SQLStr); UnitsLocQ.prepare; { numCaps:= 0;} while not respq.eof do begin capabilityRec: = T capability. create; capabilityRec. capab: = respq. fieldbyname('capability') .AsStri ng; capabilityRec.qty:= respq.fieldbyname ('qty').Aslnteger; Unitlist:= TUnitlist.create; 70


unitsq.close; unitsq.sql.clear; sqlstring:='Select *from units where((capability = :capability)and((status = "A")or(status = "P")))'; unitsq.sql.add(sqlstring); unitsq.params[O].AsString:= capabilityRec.capab; unitsq.prepare; unitsq.open; unitsq. first; while not unitsq.eof do begin status:= unitsq.fieldbyname('status').AsString; if (status= 'A') or (status= 'P') then begin UnitsLoc0.9Iose; UnitslocQ.params[O].AsString:= Unitsq.fieldbyname('Unit_id').AsString; UnitslocQ.prepare; UnitslocQ.open; UnitslocQ.first; {the following code will find the closest node -there is a column in the units_location table for the closest node this runs faster using that column requires filling in when the latitude and longitude values are updated} ClosestNode.Pointlong:=unitslocQ.fieldbyname('Longitude').Aslnteger; ClosestNode.Pointlat:= unitslocQ.fieldbyname('Latitude').Aslnteger; ClosestNode. findCiosestNode(DB,'Nodes'); NewUnit:= TUnit.create; NewUnit.UnitiD:= unitsq.fieldbyname('Unit_id').AsString; location:= ClosestNode.CiosestNodeiD; location:= unitslocQ.fieldbyname('CLOSEST_NODE').Aslnteger; NewUnit. Dist: = EvaledNet. RM Entry _NodeWt(location); NewUnit.status:= status; NewUnit.location:=location; Unitlist.lnsertByDist(NewUnit); end; {if status} unitsq.next; end; {while not} capabilityRec.unitslist:= Unitlist; Capabilitylist.add(capabilityRec); respq.next; 71


  end; {while not respq.eof}
  buildRespList;
  ClosestNode.free;
end; {findResponse}

end. {unit Resp_rec}

unit Move_up;

{Provides functionality to evaluate the geographic system and make
 recommendations regarding the need for a move-up. The move-up object
 requires an input of the geo system; this typically will be the system of
 BGUs, with nodes representing the centroid of each BGU and the arcs
 representing an average time for traveling from one BGU to the next. A
 GeoSystem object is built in the instantiation of a move-up object (see
 the GeoSys unit). Current unit locations are resolved to nodes and a
 sources list is built for the Grassfire object.

 Database table information used:

 Units_Location Table - this table is passed in; the actual table name and
 the name of the database containing it are picked off and used elsewhere
 in the class. Fields used:
   Unit_ID
   Latitude
   Longitude
   Type (a major type classification, typically something like 'E' for
         engine or 'R' for rescue)
   InQuarters
   TimeReturning
   AssignedStn

 BGUData Table - fields used:
   BGU_ID
   Prob
   RespStn

 Nodes Table - fields used:
   BGU_ID
}
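{Hypothetical usage sketch (not part of the unit): a dispatcher-side check
 for whether a move-up recommendation is needed. The table variables and the
 threshold values shown here are assumptions for illustration only.

   var
     MoveUp: TMoveUp;
   begin
     MoveUp:= TMoveUp.create(BGUArcsTable, BGUNodesTable, 30);  interval of
                                                                no coverage, minutes
     MoveUp.MaxResponseTime:= 300;                              max travel time, seconds
     if MoveUp.MU_needed(UnitsLocTable, BGUDataTable, 'E') then
       ... recommend relocations for the BGUs collected in UncoveredBGUs ...
     MoveUp.free;
   end;
}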


interface uses classes, SysUtils, dbtables, GeoSys, DateTime; type TBGU =class private BGU_ID:string; RespStation:String; {normal responsible station} DistToCiosestStn:real; {current response time} Probability: real; EstNonCoverage Time: Longlnt; CurrentTime:TDateTime; function GetiD:string; procedure SetiD (ID:string); function GetStation:string; procedure SetStation (Station:String); function GetDistance:real; procedure SetDistance(Distance: real); function GetProb:Real; procedure SetProb(Prob:real); function GetNoCoverTime:Longlnt; procedure SetNoCoverTime(Time:Longlnt); function GetCurrentTime:TDateTime; procedure SetCurrentTime(Time:TDateTime); public property ID:string read GetiD write SetiD; property ResponsibleStation:string read GetStation write SetStation; property ClosestStationDist:real read GetDistance write SetDistance; property lncidentProbability:real read GetProb write SetProb; property EstMinutesWithoutCoverage:Longint read GetNoCoverTime write SetNoCoverTime; property EvaiTime:TDateTime read GetCurrentTime write SetCurrentTime; end; TmoveUp = class private MaxRespTime: Longlnt; {maximum acceptable travel time} lntervaiOfNoMove: Longlnt; {Time span of non coverage before a move-up} {is recommended -in minutes} UncoveredBGUs: Tlist; 73


BGUgeoSys:TGeoSystem; GrassFireObj:TGrassFire; NodesTableName:string; NodesTableDB:string; procedure addBGUtolist (BGU:TBGU); function UncoveredBGUcount:Longlnt; procedure SetUnitsLoc (UnitsLocTable:Ttable;TypeUnit:char); {establishes unit locations and builds sources list for Grassfire} function Get Interval: Long I nt; procedure Setlntervai(TimelnMinutes:Longlnt); function GetMaxResponse Time: Long I nt; procedure SetMaxResponse Time(Timel nSeconds: Longlnt); public constructor create(Arcs, nodes: Ttable; I ntervaiNoMove: Long I nt); destructor free; property MaxResponseTime:Longlnt read GetMaxResponseTime write SetMaxResponse Time; property NoMoveUplntervai:Longlnt read Getlnterval write Setlnterval; function MU_n.eeded(UnitsLocTable,BGUDataTable:Ttable;TypeUnit:char):boolean; {returns true if there are bgus in the system that will be uncovered for more than lntervaiOfNoMove. Uses NodesTableDB value (set in create) and creates a dynamic query into the units and units_loc tables} end; implementation function TBGU.GetiD:string; begin result:= BGU_ID; end; procedure TBGU.SetiD (ID:string); begin BGU_ID:= ID; end; function TBGU.GetProb:Real; begin result:= Probability; end; 74


procedure TBGU.SetProb (Prob:real); begin Probability:= Prob; end; function TBGU.GetDistance:Real; begin result:= DistToCiosestStn; end; procedure TBGU.SetDistance (distance:real); begin DistT oCiosestStn: = distance; end; function TBGU.GetStation:String; begin Result:= RespStation; end; procedure TBGU.SetStation (Station:String); begin RespStation:= Station; end; function TBGU. GetNoCoverTime: Long I nt; begin result:= EstNonCoverageTime; end; procedure TBGU.SetNoCoverTime(Time:Longlnt); begin EstNonCoverageTime:= Time; end; function TBGU.GetCurrentTime:TDateTime; begin result:= CurrentTime; end; 75


procedure TBGU.SetCurrentTime(Time:TDateTime); begin CurrentTime:= Time; end; constructor TMoveUP.Create(arcs,nodes:Ttable;lntervaiNoMove:Longlnt); {the arcs and nodes tables should be the BGU geo system nodes representing the center of BGUs and arcs between them} begin inherited create; UncoveredBGUs:= Tlist.create; BGUgeoSys:= TGeoSystem.create(arcs, nodes); GrassFireObj:= TGrassFire.create(BGUGeoSys); NodesTableName:= nodes.TableName; nodes.DataBaseName; {used to access other tables in moveup modules} lntervaiOfNoMove:= lntervaiNoMove; end; destructor TMoveUP.Free; begin UncoveredBGUs.free; BGUGeoSys.free; GrassFireObj.free; inherited free; end; function TMoveU P. Get Interval: Long I nt; begin result:= lntervaiOfNoMove; end; procedure TMoveU P .Set I ntervai(Timel nMinutes: Long I nt); begin lntervaiOfNoMove:= TimelnMinutes; end; function TMoveUP.GetMaxResponseTime:Longlnt; begin result:= MaxRespTime; end; 76


procedure TMoveUP.SetMaxResponseTime(TimelnSeconds:Longlnt); begin MaxRespTime:= TimelnSeconds; end; procedure TMoveUP.AddBGUtolist (BGU:TBGU); begin uncoveredBGUs.add(BGU); end; function TMoveUP.UncoveredBGUcount:longlnt; begin result:= UncoveredBGUs.count; end; procedure TMoveUP .setunitsloc (UnitslocTable:Ttable;typeunit:char); var Unitlat,Unitlong:Longlnt; ClosestNode:TCiosestNode; begin ClosestNode:=TCiosestNode.create; with UnitslocTable do begin indexfieldnames:= 'type'; setrangestart; fieldbyname('type').AsString:= typeunit; setrangeend; fieldbyname ('type').AsString:= typeunit; applyrange; first; while not eof do begin ClosestNode.Pointlat:= fieldbyname('Latitude').Aslnteger; ClosestNode.Pointlong:= fieldbyname('Longitude').Aslnteger; ClosestNode.FindCiosestNode(NodesTableDB, NodesTableName ); GrassFireObj.AddSource(CiosestNode.CiosestNodeiD); next; end; {while not eof} end; {with} end; 77


function TMoveUP.MU_needed (unitsloctable, BGUDataTable:Ttable;TypeUnit:char):boolean; {Inputs: A table containing units -their locations and major type classification. This is designed to be run for a specific type of resource (Type Unit) such as engines or rescue vehicles. The BGUGeoSys (TGeoSystem) object is used to find the BGUs which are uncovered. Returns true if there are uncovered BGU's with expected time of noncoverage in excess of NoMoveUPinterval} var i:longint; BGUquery1, BGUQuery2, UnitsQuery:TQuery; BGU:TBGU; SQLString1,SQLString2,SQLString3:String; Prob:real; Station:string; ReturnTime:TDateTime; BGUid:string; TimeOut:Longlnt; procedure lnitQueries (var BGUQuery1, UnitsQuery:TQuery; NodesTableDB, NodesTableName: String); var UnitslocTableName:string; begin UnitslocTableName:= UnitslocTable. TableName; BGUQuery1 := TQuery.create(nil); UnitsQuery:= TQuery.create(nil); BGUQuery1.databaseName:= NodesTableDB; UnitsQuery .databaseName:=Nodes TableDB; SQLString1 :='Select BGU_ID from'; SQLString1 := SQLString1 + NodesTableName; SQLString1 := SQLString1 +'where Node_ID = :indx'; BGUQuery1.SQL.add(SQLString 1 ); SQLString2:= 'Select TimeReturning from '; SQLString2:= SQLString2 + UnitslocTableName; SQLString2:= SQLString2 +'where AssignedStn = :stn and Type= :UT'; UnitsQuery.SQL.add(SQLString2); BGUQuery1.prepare; UnitsQuery. prepare; end; {lnitQueries} 78


function getBGUid (BGUQuery1 :TQuery; indx:longint):string; begin BGUQuery1.close; BGUQuery1.params[O].Aslnteger:= indx; BGUQuery1.open; BGUQuery1.first; result:= BGUQuery1.fieldbyname('BGU_ID').AsString; end; {getBGUid} procedure GetBGUData (BGUDataTable:Ttable; BGUid:string; var Station:string; var prob:real); begin BGU Data Table. SetKey; BGUDataTable.fieldbyname('BGU_ID').AsString:= BGUid; BGUDataTabl_e. GotoKey; Station:= BGUDataTable.fieldbyname('RespStn').AsString; Prob:= BGUDataTable.fieldbyname('Prob').AsFioat; end; {GetBGUData} function GetUnitReturnTime (UnitsQuery:TQuery; Station:string; TypeUnit:char):TDateTime; begin {Select from units_loc table where assignedStn = station and type = unittype} UnitsQuery.close; UnitsQuery.params[O].AsString:= station; UnitsQuery. params[1].AsString: = TypeUnit; UnitsQuery.open; UnitsQuery. first; Result:= UnitsQuery.fieldbyname('TimeReturning').AsDateTime; end; {GetUnitReturnTime} function evaiTime(ReturnTime:TdateTime ):Longlnt; begin result:= DateTime.DifferencelnMinutes(Now,ReturnTime); end; {eval_time} begin {MU_needed} UnitslocTable.open; BGUDataTable.open; SetUnitsloc(UnitslocTable,TypeUnit); {set up sources list for Grassfire} GrassFireObj.burn; 79


lnitQueries (BGUQuery1,UnitsQuery, NodesTableDB, NodesTableName); fori:= 1 to maxNodes do begin if Not GrassFireObj.RmEntry_lsNull(i) then begin if GrassFireObj.RmEntry_NodeWt(i) > MaxResponseTime then begin {note: the nodeiD is the index (i here) into the result matrix-this is a different value than the BGU number. Find responsible station and anticipated duration of period out of quarters -if> lntervaiNoMove then add this bgu to the list} BGUid:= getBGUid (BGUQuery1,i); GetBGUData (BGUDataTable, BGUid,Station,prob); ReturnTime : = GetUnitReturnTime (UnitsQuery station TypeUnit) ; TimeOut:= EvaiTime(ReturnTime); if TimeOut > lntervaiOfNoMove then begin BGU:= TBGU.create; BGU.ID:=BGUid; BGU.ResponsibleStation:= Station; BGU.CiosestStationDist:= GrassFireObj.RMEntry_NodeWt(i); BGU.IncidentProbability:= Prob; BGU EstNonCoverageTime:= TimeOut; BGU EvaiTime:= Now; UncoveredBGUs.add(BGU); end; {if TimeOut > intervalnomove then} end; {if nodeWt > MaxResponseTime} end; {if not is null} end; {fori} if UncoveredBGUs.count > 0 then result:= true else result:= false; end; {MU_needed} end. {unit Move_up} unit fifo_ que; {A first in, first out queue. The add method adds an item to the end of the queue} 80


interface uses classes, SysUtils; type TLonglntPtr = "Longlnt; ERetrieveFromEmptylist = class (Exception); TFifoQue = class (TObject) private Thelist:Tlist; public constructor Create; destructor free; procedure addltem (Value: longlnt); function RetrieveHead: longlnt; function is_empty: boolean; end; implementation constructor TFifoQue.create; begin inherited create; Thelist:= Tlist.create; end; {Create} destructor TFifoQue.free; begin While Thelist.count <> 0 do Thelist.delete(O); The list. free; end; procedure TFifoQue.addltem(Value: longlnt); var temp:TLonglntPtr; begin getmem(temp,sizeof(temp)); temp":= value; Thelist.add(temp); end; {add} 81


function TFifoQue.RetrieveHead:Longlnt; var temp: TLonglntPtr; begin if The List. count > 0 then begin temp:= TheList.items[O]; result:= temp\ The List. remove(temp); end else Raise ERetrieveFromEmptyList.create('Attempt to Retrieve from an Empty List (TFifo_que.RetrieveHead)'); end; {retrieve} function TFifoQue.is_empty: boolean ; begin if TheList.count = 0 then is_empty:= true else is_ empty:= false; end; {is_empty} end. {unit fifo_ que} 82


References

Aronoff, S., "Geographic Information Systems: A Management Perspective", WDL Publications, Ottawa, Canada, 1989

Arlinghaus, S.L. (ed.), "Practical Handbook of Digital Mapping Terms and Concepts", CRC Press Inc., 1994

Ball, M.O. and Lin, F.L., "A Reliability Model Applied to Emergency Service Vehicle Location", Operations Research, 41:1, Jan-Feb, 1993

Berman, O., Larson, R.C., and Chiu, S., "Optimal Server Location on a Network Operating as an M/G/1 Queue", Operations Research, 33:746-771, 1985

Bertsekas, D.P., "Linear Network Optimization: Algorithms and Codes", MIT Press, Cambridge, Massachusetts, 1991

Carter, G. and Ignall, E., "A Simulation Model of Fire Department Operations: Design and Preliminary Results", R-632-NYC, The New York City Rand Institute, New York, December, 1970

Church, R.L. and ReVelle, C., "The Maximal Covering Location Problem", Papers of the Regional Science Association, 32:101-118, Fall, 1974

Chvatal, V., "Linear Programming", W.H. Freeman and Company, New York, 1983

Cooper, L., "Heuristic Methods for Location-Allocation Problems", SIAM Review, Vol. 6:1, January, 1964

Cormen, T., Leiserson, C., and Rivest, R., "Introduction to Algorithms", The MIT Press, Cambridge, Mass., 1990

Daskin, M.S., "Location, Dispatching, and Routing Models", in Spatial Analysis and Location-Allocation Models, A. Ghosh and G. Rushton (eds.), Van Nostrand Reinhold Company, New York, 1987

Daskin, M.S., "Maximum Expected Covering Location Model: Formulation, Properties and Heuristic Solution", Transportation Science, Vol. 17:1, February, 1983


Daskin, M.S., "Application of an Expected Covering Model to Emergency Medical Service System Design", Decision Sciences, Vol. 13, 1982 Daskin, M.S., Stern, E. H., "Hierarchical Objective Set Covering Model for Emergency Medical Service Vehicle Deployment", Transportation Science, 15: 137-152, May, 1981. Filippi, C. and Romanin-Jacur, G., "Optimal Allocation of Two Fixed Service Units Acting as M/G/1 Queues", Transportation Science, 30:1, February, 1996 Hakimi, S.L., "Optimum Locations of Switching Center and the Absolute Centers and Medians of a Graph", Operations Research, 12:450-459, 1964 Halpern, J., Maimon, 0., "Accord and Conflict Among Several Objectives in Locational Deci!)ions on Tree Networks", in Locational Analysis of Public Facilities, J.F. Thisse and H. G. Zoller (ed), North-Holland Publishing Company, 1983 Hansen, P., Peeters, D. and Thisse, J. F.," Public Facility Location Models: A Selective Survey", in Locational Analysis of Public Facilities, J.F. Thisse and H.G. Zoller (ed), North-Holland Publishing Company, 1983 Hendrick, T.E., Plane, D.R., Monarchi, D.E., Tomasides, C.,Heiss, F.W., Walker, W.E., "An Analysis of the Deployment of Fire-Fighting Resources in Denver, Colorado", R-1566/3-HUD, The New York City Rand Institute, New York, May 1975 Hogan, K. and ReVelle, C., "Concepts and Application of Backup Coverage", Management Science, 32:1434-1444, Nov, 1986 Jacobson, lvar, Christerson, M., Jonsson, P., Overgaard, G. "Object-Oriented Software EngineeringA Use Case Driven Approach", ACM Press, 1992 Karimi, H. A, Krakiwsky, E.J., Harris, C., Craig, G., Goss, R. "A Relational Database Model for an AVL System and an Expert System for Optimal Route Selection.", AutoCarto 8 Proceedings Eighth International Symposium on Computer Assisted Cartography, April, 1987 ASPRS/ACSM Kariv, 0. and S.L. Hakimi, "An Algorithmic Approach to Network Location Problems, 1: The p-centers, SIAM Journal on Applied Mathematics, 37:513-538, 1979a 84


Kariv, O. and Hakimi, S.L., "An Algorithmic Approach to Network Location Problems, II: The p-Medians", SIAM Journal on Applied Mathematics, 37:539-560, 1979b

Kolesar, P. and Walker, W.E., "An Algorithm for the Dynamic Relocation of Fire Companies", R-1023-NYC, The New York City Rand Institute, New York, 1974

Kolesar, P. and Blum, E., "Square Root Laws for Fire Engine Response Distances", Management Science, 19:1368-1378, 1973

Larson, R.C., "A Hypercube Queuing Model for Facility Location and Redistricting in Urban Emergency Services", Computers and Operations Research, 1:67-95, 1974

Lee, J., "A Data Model for Dynamic Vehicle Routing with GIS", GIS/LIS 1990 Proceedings, Volume 1, ACSM/ASPRS/AAG/URISA/AMFM, 1990

Lupien, A.E., Moreland, W.H., and Dangermond, J., "Network Analysis in Geographic Information Systems", Photogrammetric Engineering and Remote Sensing, 33:10, October, 1987, pp. 1417-1421

Mirchandani, P.B. and Reilly, J.M., "Spatial Distribution Design for Fire Fighting Units", in Spatial Analysis and Location-Allocation Models, A. Ghosh and G. Rushton (eds.), Van Nostrand Reinhold Company, New York, 1987

Potts, R. and Oliver, R., "Flows in Transportation Networks", Academic Press, New York, 1972

Rider, K.L., "A Parametric Model for the Allocation of Fire Companies", R-1615-NYC/HUD, The New York City Rand Institute, New York, April, 1975

Scott, A.J., "An Introduction to Spatial Allocation Analysis", Association of American Geographers, Resource Paper No. 9, 1971

Teitz, M.B. and Bart, P., "Heuristic Methods for Estimating the Generalized Vertex Median of a Weighted Graph", Operations Research, 16:955-961, 1968

Toregas, C., Swain, R., ReVelle, C., and Bergman, L., "The Location of Emergency Service Facilities", Operations Research, 19:1363-1373, 1971

Walker, W.E., Chaiken, J.M., and Ignall, E.J. (eds.), "Fire Department Deployment Analysis: The Rand Fire Project", North Holland, New York, 1979