EVALUATION OF MATRIX AND HYBRID TRANSITIVE CLOSURE

ALGORITHMS WITH TRANSIT GRAPHS

by

Thomas Patrick Bailey Jr.

B.S., University of Idaho, 1999

A thesis submitted to the

University of Colorado Denver

in partial fulfillment

of the requirements for the degree of

Master of Science

Computer Science

2010

This thesis for the Master of Science

degree by

Thomas Patrick Bailey Jr.

has been approved

by

Tom Altman


Date

Bailey, Thomas Patrick (M.S., Computer Science)

Transitive Closure in Graph Theory

Thesis directed by Professor Tom Altman

ABSTRACT

Knowing if you can get from point A to point B is almost as important as knowing

the path between them. The Transitive Closure gives us this answer in the form of a

reachability matrix of a graph. This thesis will discuss and test several transitive

closure algorithms, a few of them foundational, for creating a reachability matrix/hybrid

matrix from an adjacency matrix/hybrid matrix. Also, this thesis will evaluate the

Floyd-Warshall, Warren, Agrawal-Jagadish, Yang-Yu-Liu-Dao-Wang-Phan, and

Reduced Hybrid algorithms over graphs of varying sizes and present performance

comparisons. Transitive Closure algorithms are useful in numerous areas and

amongst them are database relationships and transit systems.

This abstract accurately represents the content of the candidate's thesis. I

recommend its publication.

Signed

Tom Altman

DEDICATION

To my wonderful wife, Lisa, a virtuous woman. Thanks for putting up with me as I

struggled with my school work and listening to me as I thought out loud. Thanks

for taking more than your fair share of our daily duties while I completed this thesis.

To my lovely children, Corbin and Kyndall, I hope you reach as far as you can in

life and are blessed by learning the wonders of God's creation.

ACKNOWLEDGEMENT

Thanks to my advisor, Tom Altman, for your time and commitment as I wavered.

Also, I would like to thank Professor Gethner for launching my interest in graph

theory. Thanks to all the hard-working staff at CU Denver who can appreciate what

it means to be a student with both a job and a family.

TABLE OF CONTENTS

Figures..........................................................................vii

Tables..........................................................................viii

Chapter

1. Inspiration...................................................................1

2. Introduction..................................................................1

3. Definitions...................................................................2

3.1. Directed/Acyclic Graph...................................................2

3.2. Adjacency Matrix.........................................................3

3.3. Reachability Matrix......................................................4

3.4. Topological Sort.........................................................5

4. TC Sequential Matrix-Based Algorithms.........................................5

4.1. Floyd-Warshall's Algorithm...............................................5

4.2. Warren's Algorithm.......................................................8

5. TC Hybrid Matrix-Based Algorithm.............................................11

5.1. Agrawal-Jagadish Hybrid Algorithm.......................................11

5.2. Yang-Yu-Dao-Wang-Phan's Hybrid Algorithm................................15

5.3. The Proposed Reduced Hybrid Algorithm...................................19

6. TC Parallel Algorithms.......................................................29

6.1. Yang-Yu-Dao-Wang-Phan's Hybrid Algorithm................................29

6.2. The Proposed Reduced Hybrid Algorithm...................................31

7. Additional TC Algorithms.....................................................34

7.1. Matrix Multiplication...................................................34

7.2. Graph Based.............................................................34

7.3. Maintaining TC..........................................................35

8. Performance Analysis.........................................................35

8.1. Results.................................................................35

8.2. Conclusions.............................................................46

Appendix..........................................................................47

Bibliography.....................................................................134


LIST OF FIGURES

Figure 3.1 Directed graph with cycle..................................................3

Figure 3.2 Acyclic directed graph (DAG)...............................................3

Figure 3.3 Transitive closure of figure 3.2...........................................4

Figure 5.1 Row block.................................................................13

Figure 5.2 P(j) and upper triangular matrix..........................................16

Figure 5.3 Rows block upper right triangle...........................................18

Figure 5.4 Fixed route...............................................................19

Figure 5.5 Fixed route w/ transfer edges.............................................20

Figure 5.6 Start/End nodes of fixed route............................................21

Figure 5.7 Start/End nodes split.....................................................22

Figure 5.8 Connecting start end nodes................................................22

Figure 5.9 Transit graph.............................................................23

Figure 5.10 Required accessible rows.................................................25

Figure 5.11 Required accessible rows (path adjacent edges).........................27

Figure 6.1 K limit, immediate non-connected rows.....................................32

Figure 8.1 Time vs. node count comparison of 5 algorithms (memory)...................36

Figure 8.2 Small-disk test results (milliseconds)....................................37

Figure 8.3 Small disk test results Floyd-Warshall removed (milliseconds).............38

Figure 8.4 Mid-disk/blocked/queue test results.......................................40

Figure 8.5 Mid-disk/blocked/queue test results (Yang and Warren removed)............41

Figure 8.6 Large blocked/queued test.................................................43

Figure 8.7 Best algorithm very large node count test.................................45


LIST OF TABLES

Table 8.1 System details..............................................................35

Table 8.2 Small memory test results (milliseconds)....................................37

Table 8.3 Small-disk I/O test results (milliseconds)..................................39

Table 8.4 Mid-disk/blocked/queue test results (milliseconds)..........................42

Table 8.5 Large-disk/blocked/queue/64bit/threads test results (milliseconds)..........44

Table 8.6 Very large graphs test results (milliseconds)...............................46


1. Inspiration

In 2008, I started to commute to work via the local transportation authority,

RTD. During that time, I utilized RTD's website to plan my route. On their

website, I selected an approximate start time and a destination and it provided

several route options. The first iteration, of using this tool, yielded a route of

roughly 90 minutes, which I utilized for several days. However, the distance from

home to my place of work did not justify a 90 minute commute time. I went back to

their website and began to experiment and adjust start times, which yielded an

alternate trip with a travel time of roughly 60 minutes. I used this faster route for a

few days then tried some additional start times. I eventually found a route of less

than 45 minutes. I then asked myself, "Why don't they use a tool to discover a

fastest route in a transit system over a given time range?"

Over the past few years I have considered this problem and tried to find or create a

viable solution. One tool I think will work, and will hopefully work fast enough for

real world use, would require a reachability matrix as a part of the solution. To that

end, I have been researching and writing this paper on Transitive Closure in relation

to graph theory.

2. Introduction

The first algorithm proposed to solve the transitive closure (TC) of a directed

graph was the Floyd-Warshall, named after Stephen Warshall and Robert Floyd [24,

11]. This algorithm was independently discovered by Bernard Roy in 1959 [5] and

Stephen Warshall [24] in 1962, and later modified by Robert Floyd [11] to find the

length of the shortest path between nodes. Since then, several different matrix-

based TC algorithms have been discovered [23, 17, 4], as well as hybrid algorithms

[1, 25, 7], all with varying degrees of efficiency.

In this paper, I will discuss and evaluate five different algorithms: the Floyd-

Warshall [5, 24, 11], Warren [23], Agrawal-Jagadish [1], Yang-Yu-Liu-Dao-Wang-

Phan [25], and a Reduced Hybrid algorithm.

The Reduced Hybrid algorithm, which is based on Agrawal-Jagadish [1], is

presented in section 5.3. The Reduced Hybrid can only handle a specific type of

acyclic directed graph, called a transit graph. A transit graph models a

transportation system where the goal is routing a passenger, be it a person or

package, through a fixed-route transportation system. More specific characteristics

of the transit graph will be covered with the description of the Reduced Hybrid

algorithm.

3. Definitions

3.1. Directed/Acyclic Graph

A directed graph, also known as a digraph, is a graph where the edges

connecting nodes are one-way (see Figure 3.1). The digraph can be denoted by the

following, G = (N, E), where N is the set of nodes, and E is the set of edges for the

graph. An edge is defined as e = (i, j), where i is the parent node, and j is the child

node (a path exists from i to j). For more details, please read Sedgewick [18].


Figure 3.1 Directed graph with cycle

An acyclic directed graph, also known as a DAG, is a digraph where no cyclic

path exists. A path is a sequence of nodes and for each node in the series there

exists an edge to the next node in that sequence. A cyclic path repeats at least one

node in its sequence. An example of a cyclic path is Path = (1,2,3,1), taken from

Figure 3.1, where node 1 is repeated once. An example of a graph with no cycles,

and therefore a DAG, can be seen in Figure 3.2 where no cyclic path exists.

Figure 3.2 Acyclic directed graph (DAG)

3.2. Adjacency Matrix

The adjacency matrix, Mc, of a graph is an n x n matrix, where n is the number

of nodes in the graph and ...

Mc[i, j] = 1 if (i, j) ∈ E, 0 otherwise


For example, the adjacency matrix of Figure 3.2 is ...

    0 1 0 0 1
    0 0 1 1 0
    0 0 0 1 0
    0 0 0 0 0
    0 0 1 0 0

3.3. Reachability Matrix

The reachability matrix, Rg, of a graph is the adjacency matrix of the transitive

closure of a graph. For example, the transitive closure of the graph in Figure 3.2 is

shown in Figure 3.3.

Figure 3.3 Transitive closure of figure 3.2

The reachability matrix of Figure 3.3 is ...

    0 1 1 1 1
    0 0 1 1 0
    0 0 0 1 0
    0 0 0 0 0
    0 0 1 1 0


3.4. Topological Sort

A topological sort is an ordering of the nodes of a DAG. In this ordering, every

node precedes other nodes with which it shares an outbound edge. A topological

sort of the graph, in Figure 3.3, is {1, 2, 5, 3, 4}. However, this is not the only

topological sort of this graph. Another valid sort is {1, 5, 2, 3, 4}. In a reverse

topological sort, every node succeeds any node with which it shares an outbound

edge. A reverse topological sort of the graph, in Figure 3.3, is {4, 3, 5, 2, 1}. There

are several algorithms for sorting a DAG, such as Kahn [14], Knuth-Szwarcfiter

[15], and also Tarjan [22]. Tarjan's algorithm, which has a run time of O(V + E), sorts a

graph as a byproduct of determining its strongly connected components.
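The steps of a topological sort can be sketched with Kahn's algorithm [14] in Python (the thesis presents only pseudocode; the edge list below is my reconstruction of the graph in Figure 3.2, and reversing the result yields a reverse topological sort):

```python
from collections import deque

def topological_sort(n, edges):
    """Kahn's algorithm: repeatedly output a node whose in-degree is 0."""
    succ = {i: [] for i in range(1, n + 1)}
    indeg = {i: 0 for i in range(1, n + 1)}
    for i, j in edges:
        succ[i].append(j)
        indeg[j] += 1
    queue = deque(i for i in range(1, n + 1) if indeg[i] == 0)
    order = []
    while queue:
        i = queue.popleft()
        order.append(i)
        for j in succ[i]:          # removing i lowers each child's in-degree
            indeg[j] -= 1
            if indeg[j] == 0:
                queue.append(j)
    return order                   # reverse this list for a reverse topological sort

# Edges assumed for the DAG of Figure 3.2
print(topological_sort(5, [(1, 2), (1, 5), (2, 3), (2, 4), (3, 4), (5, 3)]))
# one valid order: [1, 2, 5, 3, 4]
```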

4. TC Sequential Matrix-Based Algorithms

The efficient computation of the TC of a graph was first explored by Bernard

Roy in 1959 [5] and Stephen Warshall in 1962 [24] with sequential matrix based

algorithms. Since then, many sequential matrix based algorithms have been

presented, based in large part on the work of Roy and Warshall [23, 1, 19, 2]. This

section will review the Floyd-Warshall [5, 24, 11] and Warren [23] algorithms, which

will be compared against several hybrid algorithms in Section 8.

4.1. Floyd-Warshall's Algorithm

The Floyd-Warshall algorithm, also referred to as the Roy-Floyd algorithm,

was independently discovered by Bernard Roy [5] and Stephen Warshall [24]. It

computes the TC of a graph by comparing all possible routes between two nodes

sequentially. In his paper, "A Theorem on Boolean Matrices" [24], Stephen


Warshall puts forth and proves Theorem 4.1.1 describing how to create the TC from

an adjacency matrix.

Theorem 4.1.1 [24]

Given a square (d x d) matrix M, each of whose elements is 0 or 1, define M' by

m'[i, j] = 1 if and only if either m[i, j] = 1 or there exist integers k1, ..., kn such

that m[i, k1] = m[k1, k2] = ... = m[kn, j] = 1; m'[i, j] = 0 otherwise. Define M*

by the following construction:

0. Set M* = M.

1. Set i = 1.

2. For all j such that m*[j, i] = 1, and for all k, set m*[j, k] = m*[j, k] V m*[i, k].

3. Increment i by 1.

4. If i <= d, go to step 2; otherwise stop.

We assert M* = M'.

The input for the algorithm is an adjacency matrix. The result is a reachability

matrix.

Algorithm

For i from 1 to n
    For j from 1 to n
        If M[j, i] = 1
            For k from 1 to n
                If M[i, k] = 1
                    M[j, k] = 1


The algorithm starts by considering the first node, setting i to 1. Then it searches

the i-th column of every row. If a column's value is found to be one, then that node,

node j, has a path to node i. Next, if a path can be found from node i to node k,

then we know a path exists from node j to node k, through node i. If this is the

case, then M[j, k] is set to 1, indicating a path exists between j and k. After every

row j is considered for node i, then i is incremented by one and the process is

repeated until i is greater than the number of nodes. The runtime of this algorithm

is O(V³).
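The procedure above can be written as a runnable Python sketch (Python is my choice, not the thesis's; the matrix is my reconstruction of the adjacency matrix of Figure 3.2 from Section 3.2):

```python
def warshall_tc(M):
    """Warshall's transitive closure on an n x n 0/1 matrix, in place."""
    n = len(M)
    for i in range(n):              # pivot node i
        for j in range(n):          # every row j that reaches i ...
            if M[j][i] == 1:
                for k in range(n):  # ... also reaches everything i reaches
                    if M[i][k] == 1:
                        M[j][k] = 1
    return M

# Adjacency matrix assumed for Figure 3.2
A = [[0, 1, 0, 0, 1],
     [0, 0, 1, 1, 0],
     [0, 0, 0, 1, 0],
     [0, 0, 0, 0, 0],
     [0, 0, 1, 0, 0]]
warshall_tc(A)   # A is now the reachability matrix of Section 3.3
```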

Floyd [11] presented a modification of the Warshall algorithm that not only finds

the TC of a graph, but also finds the length of a shortest path between nodes.

Floyds algorithm takes an adjacency matrix that contains the lengths between

nodes. If no path between nodes exists, list the length as infinity. The Floyd

algorithm is as follows.

Algorithm (Floyd)

For i from 1 to n
    For j from 1 to n
        If M[j, i] < infinity
            For k from 1 to n
                If M[i, k] < infinity
                    s = M[j, i] + M[i, k]
                    If s < M[j, k] then M[j, k] = s

This algorithm works like the Warshall algorithm, with the exception of maintaining

the length of the shortest path between nodes. The result of this algorithm is a


reachability matrix that contains the length of the shortest path, such that M[i, j] =

length of the shortest path between i and j.
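A minimal Python sketch of Floyd's variant, with infinity marking absent edges (the three-node example is mine, chosen only to show a shortcut being found):

```python
INF = float('inf')

def floyd(D):
    """Floyd's variant: D[i][j] holds edge lengths, INF where no edge exists.
    On return, D[i][j] is the length of a shortest path from i to j."""
    n = len(D)
    for i in range(n):
        for j in range(n):
            if D[j][i] < INF:
                for k in range(n):
                    if D[i][k] < INF:
                        s = D[j][i] + D[i][k]   # path j -> i -> k
                        if s < D[j][k]:
                            D[j][k] = s
    return D

# Hypothetical example: direct edge 0 -> 2 costs 10, but 0 -> 1 -> 2 costs 5
D = [[INF, 2, 10],
     [INF, INF, 3],
     [INF, INF, INF]]
floyd(D)   # D[0][2] is now 5
```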

4.2. Warren's Algorithm

In 1975, Henry Warren [23] presented a modification to the Floyd-Warshall

algorithm [5, 24, 11] to address the I/O issue of paging. Warren points out that the

Floyd-Warshall algorithm scans by columns while it ORs by rows. This

nature of the Floyd-Warshall algorithm requires a great deal of I/O between disk

and main memory for matrices too large to fit in memory.

When storing an adjacency matrix in memory, size was an issue in 1975, and it is still an

issue in 2010. Warren mentions using an IBM System/360, which was introduced

in 1964 and had a memory capacity between 4KiB to 8MiB. The largest adjacency

matrix that can be stored in 4KiB could represent a graph with 181 nodes. For

8MiB, it jumps up to 8,192 nodes. For the system I will be using in evaluating TC

algorithms, which has 12GiB, it can, in theory, handle a matrix of at most 321,060

nodes. Of course, this does not account for OS, programs, etc., running in the same

memory, which would reduce this number.

Transit graphs that represent real systems contain 1,800,000 to 8,000,000 nodes.

This would require 377 GiB to 7 TiB of memory to contain the entire adjacency

matrix. These numbers are achievable on a modern supercomputer, and if Moore's

law holds out, on a home PC in a decade or two. However, compared to other

problems, these are miniscule. For the foreseeable future, brute force storage may

not be the answer.


Warren's algorithm scans by rows, removing the need to OR by columns. It does,

however, require a second pass at the matrix to add the final arc [23].

Algorithm

For i from 2 to n
    For j from 1 to i - 1
        If M[i, j] = 1
            For k from 1 to n
                M[i, k] = M[i, k] | M[j, k]

For i from 1 to n - 1
    For j from i + 1 to n
        If M[i, j] = 1
            For k from 1 to n
                M[i, k] = M[i, k] | M[j, k]

The algorithm first considers row i, starting from row 2 to row n. Then it checks

every j-th bit of row i, starting from the first bit to the (i-1)-th bit, giving a stepped

calculation. If the j-th bit is found to be 1, then an OR operation is performed on

the k-th bit of row i against the k-th bit of row j, and the result stored in the k-th bit

of row i, where k runs from 1 to n, thus completing the first pass.

The second pass considers row i, starting from row 1 to row n-1. Then, it checks

every j-th bit of row i, starting from the (i+1)-th bit to the n-th bit, giving a reverse

stepped calculation. If the j-th bit is found to be 1, then an OR operation is

performed on the k-th bit of row i against the k-th bit of row j, and the result stored

in the k-th bit of row i, where k runs from 1 to n, thus completing the second pass.
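The two passes can be illustrated in Python, using arbitrary-precision integers as row bitsets so that one OR operates on many bits at once, in the spirit of Warren's w-bit words (a sketch of mine, not the thesis's code; the example rows encode my reading of Figure 3.2 with bit j meaning an edge to node j+1):

```python
def warren_tc(rows):
    """Warren's two-pass transitive closure; each row is a Python int
    used as a bitset (bit j set means an edge to node j, 0-indexed)."""
    n = len(rows)
    # Pass 1: rows 2..n, testing only bits strictly below the diagonal
    for i in range(1, n):
        for j in range(i):
            if rows[i] >> j & 1:
                rows[i] |= rows[j]      # word-parallel OR of whole rows
    # Pass 2: rows 1..n-1, testing only bits strictly above the diagonal
    for i in range(n - 1):
        for j in range(i + 1, n):
            if rows[i] >> j & 1:
                rows[i] |= rows[j]
    return rows

# Rows assumed for Figure 3.2: 18 = {2,5}, 12 = {3,4}, 8 = {4}, 4 = {3}
warren_tc([18, 12, 8, 0, 12])   # -> [30, 12, 8, 0, 12], the reachability rows
```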


The runtime of this algorithm is O(V³), which is the same as the Floyd-Warshall

algorithm, but it has two advantages over Floyd-Warshall. One advantage, for the

case of a very sparse graph, is that Warren's algorithm will execute faster than

Warshall's by a factor of approximately w, and in the worst case it is about equal to

Warshall's [23]. Here w is the number of bits that can be tested at a time. The

second advantage is that, for a sparse adjacency matrix, roughly n/2 times fewer disk read

operations are required.

The reduction in disk I/O can be shown in this manner. Assume a row can be read

into memory from disk in one rowRead function. The matrix can then be read in n

calls to rowRead. The worst case for the Floyd-Warshall algorithm requires n

rowRead calls per row, giving n² rowRead calls. For the Warren algorithm, the

fastest case is if the matrix is all 0s. This best case gives one rowRead per

row for the first part of Warren's algorithm, and the same for the second, giving a

total of 2n rowRead calls for the best case.

    n² rowReads / 2n rowReads = n/2 times fewer rowReads (best case)

In the worst case, which is a matrix of all 1s, Warren's algorithm reads in

additional rows for each row, giving approximately n² rowRead

calls for the first part of the Warren algorithm, and n² rowRead calls for the

second part, giving 2n² rowRead calls. The worst case is approximately the same

as the Floyd-Warshall algorithm in disk I/O.


5. TC Hybrid Matrix-Based Algorithm

Hybrid algorithms try to "incorporate the optimization features of graph-based

algorithms [7, 13] within a matrix framework" [1]. Agrawal and Jagadish presented

this concept in their paper "Hybrid Transitive Closure Algorithms" [1]. This section

will review the Agrawal-Jagadish [1] and Yang-Yu-Dao-Wang-Phan [25]

algorithms. The Reduced Hybrid Algorithm will also be presented, which is based

on the Agrawal-Jagadish algorithm. All of these algorithms will be compared

against the Floyd-Warshall and Warren algorithms in Section 8.

5.1. Agrawal-Jagadish Hybrid Algorithm

In 1990, Agrawal and Jagadish presented a hybrid algorithm for computing

the transitive closure of a graph [1]. It is so named because they felt their solution

was a hybrid of matrix-based solutions [5, 24, 11, 23] while utilizing optimization

features of graph-based algorithms [7, 13]. The method that they propose can only

be used for DAGs sorted in reverse topological order.

First, Agrawal-Jagadish's algorithm can be explained starting with an adjacency

matrix. Take an adjacency matrix for an acyclic directed graph and, if unordered,

sort the matrix in a reverse topological order. This sorting preparation step takes

time, but is negligible, as Kahn presented, in his paper "Topological Sorting of

Large Networks" [14], an algorithm that runs in O(V + E). At this point the graph

no longer needs to be stored in an n x n matrix, but rather it can be stored in an n x i

lower triangular matrix, where n is the number of nodes in the graph and i is the

row number. This reduces the storage size by half.


Algorithm (Basic)

For i from 1 to n
    Copy row i into temporary row i'
    For j from i - 1 to 1
        If i'[j] ≠ 0
            Call add_succ(i, j, i')

Procedure add_succ(i, j, i')
    For k from 1 to j - 1
        If j[k] = 1
            If i'[k] = 1
                i'[k] = 0
            Else
                i[k] = 1

The algorithm starts with the first row being copied to a separate variable, i'.

Then, it iterates from right to left through the copied row, from the (i-1)-th bit to the

first bit of the copied row. If the j-th bit of the copied row is set to 1, iterate through

the j-th row, from 1 to j - 1. If the k-th bit of the j-th row is set to 1 and the k-th

bit of the copied row is also set to 1, then set the k-th bit of the copied row to 0;

otherwise set the k-th bit of the original row to 1.

Setting the k-th bit of the copied row to 0, in the case that it was 1, is where the

optimization takes place. Since a connection between i and j exists, all of the

successor nodes of the j-th row will be added to the i-th row. If one of the successor

nodes is already present in the i-th row, it can be removed from the copied row so it

is not considered, since it and all its successors have already been added. There is

no need to include what has already been added.
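A bitset sketch of the basic Agrawal-Jagadish loop in Python (my representation, not the thesis's): rows are assumed to be in reverse topological order as required, each earlier row is already complete when used, and OR-ing whole rows is equivalent to the per-bit pseudocode since setting an already-set bit changes nothing.

```python
def aj_basic(rows):
    """Agrawal-Jagadish basic hybrid TC.  rows[i] is a bitset int in
    reverse topological order, so row i only has bits at positions < i."""
    n = len(rows)
    for i in range(n):
        temp = rows[i]                     # the copied row i' of the paper
        for j in range(i - 1, -1, -1):     # scan the copy right to left
            if temp >> j & 1:
                # row j is already transitively closed; add its successor
                # set to row i, and drop those bits from the copy so they
                # are not expanded again (the optimization step)
                temp &= ~rows[j]
                rows[i] |= rows[j]
    return rows

# Figure 3.2 relabeled by the reverse topological sort {4, 3, 5, 2, 1}
# (my relabeling): rows as bitsets 0, {1}, {2}, {1,2}, {3,4}
aj_basic([0, 1, 2, 3, 12])   # -> [0, 1, 3, 3, 15]
```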


Agrawal and Jagadish also present a blocked form of their algorithm to help with

memory and disk I/O. The basic Agrawal-Jagadish Algorithm is already suited for

disk I/O performance similar to Warren's algorithm. However, this can be

improved upon by keeping more rows accessible in memory.

Figure 5.1 Row block

The matrix is partitioned into blocks of rows from row is to ie, as shown in Figure

5.1. For each block, the rows is to ie are read into memory and a copy of these

rows is created into i's to i'e for optimization calculations. The off-diagonal block

portion is processed first, from right to left, from is - 1 to 1, and from top to bottom,

per column. If the given j-th column is set to 1, then fetch the j-th row from disk

into memory, if it is not already there, and process it with add_succ. This is

where additional optimization takes place. The j-th row will only be read into

memory once for the whole block, from rows is to ie, as it is used by them all before

the next j-th row is considered. After the off-diagonal block elements have been

processed, the diagonal block is processed for rows is to ie.

Algorithm (Blocked)

Assume the matrix is partitioned into m blocks
Do the following for each block b_t, t = 1, 2, ..., m
    Let the block b_t consist of rows is to ie
    Fetch rows is through ie into memory
    Copy into rows i's through i'e
    // Process elements in off-diagonal block
    For j from is - 1 to 1
        For i from is to ie
            If i'[j] ≠ 0
                // Fetch the row j if not already
                // in memory
                Call add_succ(i, j, i')
    // Process elements in diagonal block
    For i from is to ie
        For j from i to is
            If i'[j] ≠ 0
                Call add_succ(i, j, i')

Procedure add_succ(i, j, i')
    For k from 1 to j - 1
        If j[k] = 1
            If i'[k] = 1
                i'[k] = 0
            Else
                i[k] = 1


Unlike the Floyd-Warshall algorithm, the Agrawal-Jagadish algorithm compares rows to

rows rather than rows to columns. This is similar to the Warren algorithm, but

unlike Warren's, Agrawal-Jagadish's has optimization features and does not require

a second pass over the matrix.

5.2. Yang-Yu-Dao-Wang-Phan's Hybrid Algorithm

Yang, Yu, Liu, Dao, Wang, and Pham present a hybrid algorithm for computing

the transitive closure of a DAG in their paper, "A Hybrid Transitive Closure

Algorithm for Sequential and Parallel Processing" [25]. Their hybrid algorithm

solves the transitive closure problem, in such a way, that it can support sequential

and parallel processing of the solution.

The algorithm requires a topologically sorted adjacency matrix for the DAG, which

results in an "upper triangular matrix" [25], versus a lower one for [1, 7] (see Figure 5.2).

Also, an original parent relationship, P(j), must be created and maintained (see

Figure 5.2). P(j) is the set of all parents of the j-th column [25].



Figure 5.2 P(j) and upper triangular matrix

Algorithm (Basic)

For i from 1 to n - 1
    Initiate t[i, j] = 0 for i < j
    For j from i + 1 to n
        Call Procedure CT(i, j)

Procedure CT(i, j)
    For each x ∈ P(j)
        If x = i or T[i, x] = 1
            T[i, j] = 1 and terminate

Yang-Yu-Liu-Dao-Wang-Phan's Hybrid algorithm works in this way: iterate

through the rows in the reachability matrix, remove all edges in the row, and iterate

through the parent relationships P(j) for j from i + 1 to n; if P(j) contains i, or

T[i, x] = 1 for some x ∈ P(j), set T[i, j] = 1.
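A sketch of the basic algorithm in Python; nodes are assumed relabeled into topological order, and the parents dictionary standing for P(j) is my representation, as is the test example (Figure 3.2 relabeled by the sort {1, 2, 5, 3, 4}):

```python
def yang_tc(n, parents):
    """Basic Yang et al. hybrid TC.  parents[j] is P(j), the set of
    original parents of column j; nodes are topologically ordered 1..n.
    Returns T with T[i][j] = 1 when j is reachable from i."""
    T = [[0] * (n + 1) for _ in range(n + 1)]   # rows start with no edges
    for i in range(1, n):
        for j in range(i + 1, n + 1):
            for x in parents[j]:
                # j is reachable from i if i is an original parent of j,
                # or if some original parent of j is already reachable
                if x == i or T[i][x] == 1:
                    T[i][j] = 1
                    break
    return T
```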


A modified version of this algorithm is presented to help with I/O performance

similar to Agrawal and Jagadishs [1] idea of blocking. The basic algorithm is not

suited well for I/O performance, as the P(j) must be called for each row for which it

might be a parent.

Algorithm (Blocked)

Do the following for each block b_t, t = 1, 2, ..., m
    Let the block b_t consist of rows is to ie
    Fetch rows is through ie into memory
    For each row, initiate t[i, j] = 0 for i < j
    For j from is + 1 to n
        For i from is to min(j - 1, ie)
            Call Procedure CT(i, j)

Procedure CT(i, j)
    For each x ∈ P(j)
        If x = i or T[i, x] = 1
            T[i, j] = 1 and terminate

The matrix is partitioned into blocks of rows from is to ie, as shown in Figure 5.3.

For each block, the rows is to ie are read into memory. Each row is initiated to

remove all edges in the row, t[i, j] = 0 for i < j ≤ n. The block is processed by

columns, from left to right, so that P(j), for is < j ≤ n, is fetched once per

column per block and applied to every valid row within the block. This will greatly

reduce the number of times P(j) will be accessed and will greatly improve I/O.

17

Figure 5.3 Rows block upper right triangle



5.3. The Proposed Reduced Hybrid Algorithm

The hybrid algorithm of Agrawal and Jagadish [1] is the basis of the reduced

hybrid algorithm introduced in this section. The problem that inspired me to

research transitive closure has a very specific type of graph, a transit graph. The

best way to explain a transit graph is to show how it is created.

A transportation system, for example a regional bus/rail system, consists of fixed

routes. Figure 5.4 shows a fixed route that has five nodes where it picks up and/or

drops off passengers. Each of the five nodes has a specific geographical location

and a time when the bus will arrive/depart from that location. As an example, node

C's location is 5th and Elm St., and its arrival/departure time is 1:30 PM.

To transfer within the system, edges are added between fixed routes. These edges

are called transfer edges (see Figure 5.5). A transfer edge is added between all

pairs of nodes from different fixed routes, if the nodes are within a transferable time

and a transferable distance of each other. As an example, given a transferable time

between 2 to 15 minutes, transfer edges can only be added between nodes that are at

least 2 minutes and no more than 15 minutes from each other. Additionally, given

a transferable distance of 500 meters, transfer edges can only be added between

nodes that are at most 500 meters from each other.
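The transfer-edge rule above can be expressed as a small predicate; the node tuple layout, the planar coordinates, and the default thresholds are simplifying assumptions of mine, not part of the thesis:

```python
def can_transfer(a, b, min_wait=2.0, max_wait=15.0, max_dist=500.0):
    """Decide whether a transfer edge goes from node a (arrival) to node b
    (departure).  Each node is a hypothetical tuple
    (route_id, time_minutes, x_meters, y_meters)."""
    route_a, t_a, x_a, y_a = a
    route_b, t_b, x_b, y_b = b
    if route_a == route_b:
        return False                 # transfers only link different fixed routes
    wait = t_b - t_a                 # minutes between arrival and departure
    if not (min_wait <= wait <= max_wait):
        return False                 # outside the transferable time window
    dist = ((x_a - x_b) ** 2 + (y_a - y_b) ** 2) ** 0.5
    return dist <= max_dist          # within the transferable distance
```

For instance, under these defaults a stop on route 2 departing 5 minutes later and 300 meters away qualifies, while one departing 30 minutes later does not.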


Figure 5.5 Fixed route w/ transfer edges

Figure 5.5 shows the transfer edges, indicated in red, from the fixed route (A-E) in

Figure 5.4 and between two other fixed routes (F-I and J-M). This graph is not the

transit graph; the TC of this graph can still be solved with my proposed

algorithm, but the graph itself has a few problems with modeling a transportation

system with fixed routes. The first problem is that it produces invalid paths. A

passenger will only get on a fixed route if they intend to travel between nodes (a

passenger will not get on and off a bus at the same stop). The path {J, K, B, G, H} has

a passenger getting on and off at node B, and this is an example of an invalid path.

Another problem is that paths exist in the graph, from Figure 5.5, that contain

superfluous nodes. These nodes do not require the passenger to take any action.

For example, the path {J, K, L, M, D, E} has two superfluous nodes K and L. The

passenger does not get on, off, or transfer from these nodes, but rather stays on the

fixed route. To fix these problems all of the fixed routes must be adjusted.


Figure 5.6 Start/End nodes of fixed route

To fix these problems, identify nodes which are both start nodes and end nodes.

These nodes must be split. A, B, C, and D are nodes at which a passenger can get

on the fixed route, a start node (Figure 5.6). Nodes B, C, D, and E are nodes at

which a passenger can get off the fixed route, an end node (Figure 5.6). A

passenger cannot get off the fixed route at node A, as it is the initial start of the

route, and likewise, a passenger cannot get on at node E, as it is the end of the fixed

route.

The edges are then removed from the fixed route. Then, nodes which are both start

and end nodes are split into two nodes. One of the two new nodes becomes a start

node and the other becomes an end node (see Figure 5.7).


Figure 5.7 Start/End nodes split

Next, add travel edges. Travel edges represent a passenger's entire trip along a

fixed route. Each start node of a fixed route has an outbound edge to every end

node of the same fixed route whose time is after the start node's own time. Figure

5.8 represents the fixed route updated in this manner.

Figure 5.8 Connecting start end nodes


Transfer edges are added just as they were in Figure 5.5 with one exception. Start

nodes can only connect to end nodes and vice versa. Figure 5.9 shows an update of

Figure 5.5 using the revised fixed routes. This is the transit graph. Connecting the

fixed routes in this manner removes the invalid paths and paths with superfluous

nodes.


These fixes do come at a cost: nearly doubling the number of nodes and

significantly increasing the number of edges, which will increase the computational

cost of computing a reachability matrix. The good news is every path in a transit

graph is a shortest path.

The reachability matrix of a transit graph follows some additional restrictions.

The first restriction is based on the concept that a passenger traveling through a

transit system is limited in the length of time it takes to travel between the start and

end nodes of their trip. A node can only reach other nodes within the node's system-

imposed travel time limit. Any nodes beyond this limit are not considered.

Since each node in a transit graph is associated with a time, the graph can easily be

ordered by reverse time. This ordering is a reverse topological sort. With the sorted

graph and the knowledge of a start node's path time limit, it can be said that a node i

can only connect to the previous x rows in the reverse topologically sorted matrix.

In other words, if node i has a restriction stating that it can only connect to nodes

within m minutes, this is equivalent to x rows.

If x < i, then the left portion of the row (the columns before i − x) cannot contain reachability data. If it cannot contain reachability data, then it cannot contain adjacency data and never needs to be considered or stored (see Figure 5.10).
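A small illustration of turning the m-minute limit into a per-row boundary. The `times` array (each node's time component, non-increasing under the reverse time ordering) and the method name are hypothetical.

```java
class NonReachablePrefix {
    // For each row i of a reverse-topologically sorted transit
    // matrix, return the first column that can hold reachability
    // data; columns before it lie beyond the travel-time limit.
    static int[] boundaries(int[] times, int limitMinutes) {
        int[] firstStored = new int[times.length];
        for (int i = 0; i < times.length; i++) {
            int b = i;
            // Walk left while the earlier row is still within the
            // limit of row i's own time.
            while (b > 0 && times[b - 1] - times[i] <= limitMinutes) {
                b--;
            }
            firstStored[i] = b;
        }
        return firstStored;
    }
}
```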

Each row now has a non-reachable portion defined that is between 0 and i in length (see Figure 5.10). This can greatly reduce the size of the data that needs to be stored and analyzed. The reduced hybrid matrix is, at most, as large as the hybrid matrix, which is the case when the non-reachable portion of every row equals 0. In practice, for simulated transit systems, a 65-70% reduction in graph size has been seen compared to a hybrid matrix.

It should be noted that, because of the nature of the transit graph, no node that shares an adjacency edge with node i has a path to any of the other nodes that also have an adjacency edge with node i. Therefore, the optimization step contained in Agrawal-Jagadish's algorithm can be dropped. However, my current simulation of a transit graph is incomplete and cannot currently account for this characteristic. For the time being the optimization will be included.

Figure 5.10 Required accessible rows


Algorithm (Basic)

For i from 1 to n
    Copy row i into temporary row i'
    For j from i − 1 down to i_n
        If i'[j] ≠ 0
            Call add_succ(i, j, i')

Procedure add_succ(i, j, i')
    For k from i_n to j − 1
        If j[k] = 1
            If i'[k] = 1
                i'[k] = 0
            Else
                i[k] = 1

where i_n denotes the first column of row i's reachable portion.
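A simplified sketch of the row-merge idea behind the basic algorithm, using `java.util.BitSet`. It performs a plain OR-merge rather than the add_succ bookkeeping above, and assumes rows are indexed in reverse topological order so every set bit in row i references a lower-numbered, already-closed row; the class name is my own.

```java
import java.util.BitSet;

class BasicClosure {
    // rows[i] holds the adjacency bits of row i, all at indices
    // below i. After close(), rows[i] holds its full successor set.
    static void close(BitSet[] rows) {
        for (int i = 0; i < rows.length; i++) {
            // Merge the (already complete) successor set of every
            // adjacent row j < i. Bits set by the merge below the
            // current j are already covered by row j's closure.
            for (int j = rows[i].nextSetBit(0); j >= 0 && j < i;
                    j = rows[i].nextSetBit(j + 1)) {
                rows[i].or(rows[j]);
            }
        }
    }
}
```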

Each row only needs the rows contained within its required accessible rows (see Figure 5.11) to create its reachability version. With each row's non-reachable portion length, and the size of the row itself, we can calculate the maximum number of rows in the largest required accessible area. If this area does not exceed the available memory, a rolling queue of rows can be stored in memory to create each row's reachability. Once a row is finalized, the first row in the queue is removed and the current, complete row is added to the queue. The rolling queue size can further be reduced, in some cases, by utilizing a second restriction on the graph.
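The rolling queue itself can be sketched as a fixed-capacity window that evicts its oldest row as each newly finished row enters. This is an illustrative class, not the thesis's implementation.

```java
import java.util.ArrayDeque;

class RollingQueue<R> {
    private final ArrayDeque<R> window = new ArrayDeque<>();
    private final int capacity;

    RollingQueue(int capacity) {
        this.capacity = capacity;
    }

    // Called when row r has been finalized. Returns the evicted
    // oldest row (to be written to disk), or null if the window
    // still had room.
    R push(R r) {
        R evicted = (window.size() == capacity)
                ? window.removeFirst() : null;
        window.addLast(r);
        return evicted;
    }
}
```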

The second restriction that can be applied is based on the idea that the adjacency edges of node i are among nodes within e nodes, as numbered in the reverse-topologically-sorted matrix, where e ≤ i (see Figure 5.11). Therefore, the rows that will be accessed in creating the reachability of row i are between i − e and i. If there is sufficient memory to hold the maximum i − e to i span over all the rows, then it is possible to read each row into memory only once (to be processed, used in other row processes, then removed and written to disk). This reduced rolling queue will greatly reduce the memory footprint required to process the reachability matrix. In practical cases a reduction of 80-85% in memory required has been observed. Of course, this all depends on how small e is compared to i.

Figure 5.11 Required accessible rows (path adjacent edges)

In addition to the same inputs needed for the basic algorithm, the rolling queue version requires an additional n × 1 matrix that contains the i − e length for each row.


Algorithm (reduced-rolling-queue)

maxRows = max(i_e) over all i ∈ V
Store rows 1 to maxRows in rowCache
For i from 1 to n
    If i > largest row in rowCache
        Remove lowest row from rowCache
            and write it to disk
        Add row i to rowCache
    Copy row rowCache[i] into temporary row i'
    For j from i − 1 down to i_n
        If i'[j] ≠ 0
            Call add_succ(rowCache(i), rowCache(j), i')
Write rows remaining in rowCache to disk

Procedure add_succ(i, j, i')
    For k from i_n to j − 1
        If j[k] = 1
            If i'[k] = 1
                i'[k] = 0
            Else
                i[k] = 1


6. TC Parallel Algorithms

The large computational power needed to solve for the TC of a graph has resulted in a great deal of research into parallel algorithms that compute TC faster [25, 3, 6]. This section will review a parallel version of the Yang-Yu-Liu-Dao-Wan-Pham algorithm [25] and a parallel version of the Reduced Hybrid algorithm. The parallel version of the Reduced Hybrid algorithm will be compared with sequential TC algorithms in Section 8.

6.1. Yang-Yu-Liu-Dao-Wan-Pham's Hybrid Algorithm

The blocked version of Yang-Yu-Liu-Dao-Wan-Pham's hybrid algorithm allows the blocks to be processed individually with no need to get information from other blocks [25]. Since the blocks do not interact with each other, they can be processed in parallel over K processors, such that K = bn, where bn is the number of blocks the hybrid adjacency matrix is split into.

The nature of the blocking can be adjusted to suit the situation. If the processors share the same characteristics, such as speed and memory allocation, then the blocks can be made proportional to each other. However, if the available processors' characteristics are mixed, then the block sizes can be changed to accommodate the differences and to allow real-time runs per block to be approximately the same (each takes a load according to its means). Also, the problem does not need to be split into bn blocks. It could be split into even smaller blocks such that 1 < K < bn. If K < bn, then some processors will need to process more than one block to complete. This would allow each processor to require less memory than if K = bn.


Other issues arise from this parallel solution. These include: dividing the blocks appropriately, transmitting the blocks and the P(j) lists to each processor, and reassembling the hybrid-reachability matrix once the blocks have been processed.

Algorithm (Blocked-Parallel)

Split the hybrid-reachability matrix into K blocks, where K is the number of processors
For b_k where 1 ≤ k ≤ K
    Assign block b_k to processor k
    Let block b_k consist of rows i_s to i_e
    Fetch rows i_s through i_e into memory (the entire block)
    For each row i, initialize T_ij = 0
    For j from i_s + 1 to n
        For i from i_s to min(j − 1, i_e)
            Call Procedure CT(i, j)

Procedure CT(i, j)
    For each x ∈ P(j)
        If x = i or T_ix = 1
            T_ij = 1 and terminate
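The CT check can be sketched as follows, assuming `pred[j]` holds the predecessor list P(j) and T is the block's reachability bits; the class and parameter names are my own.

```java
class BlockedCT {
    // Mark T[i][j] reachable if some predecessor x of j is i
    // itself, or is already known reachable from i; stop at the
    // first such predecessor, as in the CT procedure.
    static void ct(boolean[][] T, int[][] pred, int i, int j) {
        for (int x : pred[j]) {
            if (x == i || T[i][x]) {
                T[i][j] = true;
                return;
            }
        }
    }
}
```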


6.2. The Proposed Reduced Hybrid Algorithm

The Reduced-Hybrid algorithm can be run in parallel with a limited number of

processors. Two details about the type of graph can be exploited to allow it to run

in parallel.

First, each node of the transit graph, that the reduced-hybrid algorithm solves for,

has a time component. The units used in my graph are one-minute intervals (other solutions could use 30- or 15-minute intervals). No node can connect to other nodes in its own unit of time. For example, all nodes that have a time component of 78 minutes cannot connect to another node that also has a time component of 78 minutes, because no instantaneous transfers are allowed between nodes. The number of nodes in the same time unit is represented by i_u (see Figure 6.1). Each node in the same time unit can be processed in parallel, since such nodes cannot share edges and therefore will not share successor sets.

Second, each node will also contain a transfer buffer value i_t. No edge from node i can connect to nodes within i_t. This value is adjusted in the system it represents to allow for typical variances. In the case I am modeling, buses must contend with varying traffic patterns and need a buffer of x minutes to reduce the likelihood of missing a valid transfer. There will be no edge between row i and the rows in its transfer buffer i_t.

Together, each row has a time-unit buffer i_u and a transfer buffer i_t, giving a row buffer of i_b = i_t + i_u (see Figure 6.1). The value of i_b represents the number of rows, before row i, with which row i does not share an edge. Thus, all rows within i_b can be processed in parallel, as they do not share any edges and therefore will not share any successor sets. The number of processors K that can be utilized to process rows is 1 ≤ K ≤ i_b, where K ≤ n.
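A minimal sketch of processing consecutive rows concurrently, assuming (as argued above) that each batch fits inside the row buffer i_b, so rows in one batch share no edges and only read rows that are already finished. The class, the `BitSet` row representation, and the batching policy are illustrative, not the thesis's implementation.

```java
import java.util.ArrayList;
import java.util.BitSet;
import java.util.List;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

class ParallelClosure {
    // Close rows (in reverse topological order) in batches of
    // `batch` rows; `batch` is assumed to be at most i_b.
    static void close(BitSet[] rows, int batch) throws Exception {
        ExecutorService pool = Executors.newFixedThreadPool(batch);
        try {
            for (int start = 0; start < rows.length; start += batch) {
                final int lo = start;
                int hi = Math.min(start + batch, rows.length);
                List<Future<?>> pending = new ArrayList<>();
                for (int i = lo; i < hi; i++) {
                    final int row = i;
                    pending.add(pool.submit(() -> {
                        // Rows below lo are already closed; rows in
                        // this batch are not adjacent by assumption.
                        for (int j = rows[row].nextSetBit(0);
                                j >= 0 && j < lo;
                                j = rows[row].nextSetBit(j + 1)) {
                            rows[row].or(rows[j]);
                        }
                    }));
                }
                for (Future<?> f : pending) {
                    f.get();  // wait for the whole batch to finish
                }
            }
        } finally {
            pool.shutdown();
        }
    }
}
```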

Algorithm (Parallel reduced-rolling-queue)

Set K to be min(i_b) over all i ∈ V, where rows with i_b < 0 are ignored
maxRows = max(i_e) over all i ∈ V, plus K
Store rows 1 to maxRows in rowCache
For each processor k, Call Process_k(i), where i is the processor number

Procedure Process_k(i)
    Copy row rowCache(i) into temporary row i'
    For j from i − 1 down to i_n
        If i'[j] ≠ 0
            Call add_succ(rowCache(i), rowCache(j), i')
    next = Call getNextRow()
    If next ≤ n
        Call Process_k(next)
    Else
        Terminate process

Procedure add_succ(i, j, i')
    For l from i_n to j − 1
        If j[l] = 1
            If i'[l] = 1
                i'[l] = 0
            Else
                i[l] = 1

Procedure getNextRow()
    i = i + 1
    If i > largest row in rowCache
        Remove lowest row from rowCache
            and write it to disk
        Add row i to rowCache
    Return i

Once all processes have terminated
    Write remaining rows in rowCache to disk


7. Additional TC Algorithms

7.1. Matrix Multiplication

Matrix multiplication can be used in the following manner to solve the TC of a graph. Let A be the adjacency matrix of a DAG, and set the values along the diagonal to 1 (adding the identity). The TC of A is A* = A^n. This can be obtained by repeated squaring: A^2 = A·A, A^4 = A^2·A^2, A^8 = A^4·A^4, and so on until A^n has been obtained. As a last step, set the diagonal back to 0 [3].
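A minimal sketch of this repeated-squaring closure under boolean arithmetic, using a naive O(n^3) multiply; class and method names are my own.

```java
class SquaringClosure {
    // Set the diagonal, square ceil(log2 n) times so the result
    // equals A^n under boolean arithmetic, then clear the diagonal.
    static boolean[][] close(boolean[][] a) {
        int n = a.length;
        boolean[][] m = new boolean[n][];
        for (int i = 0; i < n; i++) {
            m[i] = a[i].clone();
            m[i][i] = true;            // A := A + I
        }
        for (int s = 1; s < n; s *= 2) {
            m = multiply(m, m);        // A^2, A^4, A^8, ...
        }
        for (int i = 0; i < n; i++) {
            m[i][i] = false;           // drop the diagonal again
        }
        return m;
    }

    // Boolean matrix product: out[i][j] = OR over k of x[i][k] & y[k][j].
    static boolean[][] multiply(boolean[][] x, boolean[][] y) {
        int n = x.length;
        boolean[][] out = new boolean[n][n];
        for (int i = 0; i < n; i++) {
            for (int k = 0; k < n; k++) {
                if (x[i][k]) {
                    for (int j = 0; j < n; j++) {
                        out[i][j] |= y[k][j];
                    }
                }
            }
        }
        return out;
    }
}
```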

With this in mind, many papers have been dedicated to the idea of utilizing matrix

multiplication to solve for TC [10, 3, 16]. Two examples are the Macii [16] and

Altman [3] papers. Macii utilizes the Strassen matrix multiplication algorithm [20]

to compute TC. The Altman paper shows a method to compute TC, in parallel, in

O(logn), where n is the number of nodes in the graph, and achieves it using only

n3 processors.

7.2. Graph Based

Matrix manipulation is not the sole mechanism for explaining and creating TC. There are many graph-based solutions to solving TC [7, 10, 12, 13]. Many graph-based solutions rely on graph traversal, as shown in the aptly titled paper "Transitive Closure Algorithms Based on Graph Traversal" [12].


7.3. Maintaining TC

Another interesting area of TC research is maintaining the TC of a graph in a dynamic system [21, 8, 9]. This area of research explores algorithms that maintain the TC of a graph as nodes or edges are added, removed, or changed.

8. Performance Analysis

The performance of five different algorithms and their I/O-optimized derivatives has been tested for this thesis: the algorithms of Floyd-Warshall [5, 24, 11], Warren [23], Agrawal-Jagadish [1], Yang-Yu-Liu-Dao-Wan-Pham [25], and Reduced Hybrid. The code for the algorithms is available in the appendix. The test

environment is described in Table 8.1.

Table 8.1 System details

Operating System: Ubuntu 9.04 (64 bit)

Motherboard: Gigabyte EP45-DS3L (LGA 775 socket)

CPU: Intel E1400 2.0GHz Dual-Core Processor

Memory: 12GB (2 x 2GB + 2 x 4GB) DDR2 800

Hard Drive: RAID 0 w/ 2 x Western Digital WD10EADS 1TB SATA2

8.1. Results

The first set of results compares the computational time of the five algorithms in their basic form, maintaining the matrix in memory (no disk I/O). Each test was run five times and the average was used for comparison.


The input matrix is generated dynamically and loaded into memory before beginning each algorithm. Two matrices are created. The first is a topologically sorted DAG, required for the Yang-Yu-Liu-Dao-Wan-Pham [25] algorithm. The second is a reverse-topologically sorted DAG with transit-type connectivity, used by the other four algorithms. The results can be seen in Figure 8.1, which was computed from data in Table 8.2.

The graph in Figure 8.1 shows that Floyd-Warshall's algorithm [5, 24, 11] has the slowest performance, followed by Yang-Yu-Liu-Dao-Wan-Pham's algorithm [25]. The remaining algorithms are clustered together and need further testing with a greater number of nodes to better distinguish their differences.

Figure 8.1 Time vs. node count comparison of 5 algorithms (memory)


Table 8.2 Small memory test results (milliseconds)

Nodes Floyd-Warshall Warren Agrawal-Jagadish Reduced Hybrid Yang-Yu-Liu-Dao-Wan-Pham

50 17 11 1 0.4 21

100 56 26 10 6 40

150 57 39 33 17 48

200 90 50 38 27 114

250 145 43 42 22 98

300 245 52 51 35 155

350 418 65 57 37 194

400 605 76 60 47 300

450 1,012 83 63 50 375

500 1,436 123 57 52 485

The second set of results compares the computational time of the five algorithms using disk I/O. Each test was run five times and the average was used for comparison. The results can be seen in Figure 8.2 and Figure 8.3, which were computed from data in Table 8.3.

Figure 8.2 Small-disk test results (milliseconds)


The Floyd-Warshall results diverge upwards so quickly that the other algorithms are indistinguishable from each other by comparison (see Figure 8.3). This is a result of the heavy I/O the algorithm requires, as pointed out in Warren's paper [23].

Figure 8.3 Small-disk test results, Floyd-Warshall removed (milliseconds)

The computational arcs of the other algorithms were better seen once the Floyd-Warshall results were removed (see Figure 8.3). Similar to the memory results, the disk results were still clustered (once Floyd-Warshall was removed). Testing graphs with larger node counts was needed to better reveal the differences between algorithms.


Table 8.3 Small-disk I/O test results (milliseconds)

Nodes Floyd-Warshall Warren Agrawal-Jagadish Reduced Hybrid Yang-Yu-Liu-Dao-Wan-Pham

50 736 69 27 31 41

100 3,856 111 48 70 56

150 11,874 196 119 141 120

200 31,876 219 160 197 175

250 69,292 271 252 244 190

300 133,896 310 307 291 269

350 234,708 358 334 312 449

400 375,789 389 388 341 509

450 594,052 483 433 378 674

500 869,729 573 465 419 868

The third set of results compares the computational time of Warren [23], Agrawal-Jagadish [1], Yang-Yu-Liu-Dao-Wan-Pham [25], Reduced Hybrid, the blocked version of Agrawal-Jagadish, the blocked version of Yang-Yu-Liu-Dao-Wan-Pham, and the rolling queue version of the Reduced Hybrid. The Floyd-Warshall algorithm was dropped from this test, as it was found to have a vast gap in performance compared to the other algorithms. Each test was run five times and the average was used for comparison. The results can be seen in Figure 8.4 and Figure 8.5, which were computed from data in Table 8.4.

The connectivity of the nodes from the simulated graph for the third set differs from that of the first and second sets. The nodes have a fixed average connectivity of 50, ranging per node from 0 to 100 edges. This connectivity is representative of a transit graph created from a regional public transportation system.


Figure 8.4 Mid-disk/blocked/queue test results

Utilizing larger graphs, it is clear that the Yang-Yu-Liu-Dao-Wan-Pham algorithm, and to a lesser extent Warren's, performs worse than the other algorithms (see Figure 8.4). The sluggishness of Yang-Yu-Liu-Dao-Wan-Pham can be attributed to its need to read in P(j) for every column of every row with which it intersects. The blocked version of Yang-Yu-Liu-Dao-Wan-Pham performs much better, as it limits the number of times P(j) is read from disk per row, as seen in Figure 8.4 and, in detail, Figure 8.5.


Warren's algorithm, although outperforming Yang-Yu-Liu-Dao-Wan-Pham's, does not perform as well as the remaining algorithms tested in this section. However, Warren's algorithm is less fragile than the others tested, because it does not need a topologically ordered matrix as input.

Figure 8.5 Mid-disk/blocked/queue test results (Yang and Warren removed)

To better view the differences between the remaining algorithms, the basic Yang-Yu-Liu-Dao-Wan-Pham and Warren algorithms were removed (see Figure 8.5). The blocked version of Yang-Yu-Liu-Dao-Wan-Pham still suffers from poor performance, as reads of P(j) have been reduced but not eliminated. The basic forms of the Agrawal-Jagadish and Reduced Hybrid algorithms outperform the blocked Yang-Yu-Liu-Dao-Wan-Pham version.


It is interesting to note that at this tested node range, the basic forms of Agrawal-Jagadish and Reduced Hybrid closely follow each other's computational time, with the Reduced Hybrid slightly ahead. The same goes for their respective blocked and queued versions. This makes sense, as the Reduced Hybrid is tightly based on the Agrawal-Jagadish algorithm.

Table 8.4 Mid-disk/blocked/queue test results (milliseconds)

Nodes Warren Agrawal-Jagadish Agrawal-Jagadish Blocked (20) Reduced Hybrid Reduced Hybrid Rolling-Queue Yang-Yu-Liu-Dao-Wan-Pham Yang-Yu-Liu-Dao-Wan-Pham Blocked (20)

500 537 483 231 453 214 908 263

1,000 1,553 975 431 872 341 5,235 832

1,500 3,342 1,656 790 1,426 579 16,447 1,841

2,000 6,056 2,650 1,036 2,045 783 37,913 3,293

2,500 15,493 3,434 1,458 2,959 1,095 72,325 5,248

3,000 25,430 4,867 1,946 3,904 1,353 126,497 7,895

3,500 38,876 6,411 2,674 5,403 1,607 194,375 10,837

4,000 54,461 7,923 3,378 6,808 1,888 302,511 14,337

4,500 81,447 10,358 4,165 8,588 2,322 408,784 18,752

5,000 103,833 12,863 5,206 11,252 2,736 558,532 23,864

The fourth set of results compares the computational time of the blocked Agrawal-Jagadish [1], the rolling queue version of the Reduced Hybrid, the 64-bit rolling queue version of the Reduced Hybrid, and a 64-bit rolling queue version of the Reduced Hybrid run in parallel. Each test was run five times and the average was used for comparison. The results can be seen in Figure 8.6, which was created from data in Table 8.5.

The input matrices, aside from the number of nodes, have the same characteristics

as those used in the third set of results.


Figure 8.6 Large blocked/queued test

The Reduced Hybrid algorithms far outperform Agrawal-Jagadish's blocked algorithm, as seen in Figure 8.6, created from data in Table 8.5. The Reduced Hybrid matrix file is approximately 1/3 the size of the Agrawal-Jagadish matrix file, and therefore the I/O and computational time are reduced.


Table 8.5 Large-disk/blocked/queue/64bit/threads test results (milliseconds)

Nodes Agrawal-Jagadish Blocked (20) Reduced Hybrid Rolling-Queue Reduced Hybrid Rolling-Queue 64 bit Reduced Hybrid Rolling-Queue 64 bit with 2 processes

10,000 8,446 898 942 24,398

20,000 30,235 2,068 1,760 122,591

30,000 66,672 3,537 3,035 311,081

40,000 114,119 6,019 4,673 586,980

50,000 179,695 9,207 6,548 998,646

60,000 259,106 12,321 8,753 1,501,882

70,000 333,989 16,105 10,412 2,000,950

80,000 458,831 18,453 14,007 2,919,352

90,000 583,194 25,066 17,194 3,795,719

100,000 723,305 29,376 21,433 4,774,739

The fifth, and final, set of results focuses on very large graphs. These graphs are large enough to approximate real transportation systems. Currently, Denver's local transit authority, RTD, requires a transit graph of 1,839,526 nodes. This graph size is approximately the size of several other transit authorities', such as Dallas and San Francisco (BART). One extreme outlier is the New York bus system, which would require a transit graph of over 8,000,000 nodes.


Figure 8.7 Best algorithm very large node count test

Figure 8.7, generated from data in Table 8.6, shows a cubic growth trend in the time taken to compute the TC of a transit graph. One notable data point is the test with 1,839,526 nodes, which represents the size of the Denver RTD system. I was taken by surprise, and pleased to see, that the TC of the transit adjacency graph was computed as fast as it was on a consumer-level machine. I would have been more than content to see 4- to 14-day runtimes to compute the TC, but instead only a few hours were required.

An interesting trend, shown in Table 8.6, is the percentage of time spent in I/O. Simply retrieving and writing rows increases from 48% of runtime at 250,000 nodes to approximately 74% at 2,000,000 nodes. Increasing disk performance with more modern disks, such as SSDs, would have a large impact on the overall computation time of this algorithm.


Table 8.6 Very large graphs test results (milliseconds)

Nodes Reduced Hybrid Rolling-Queue 64 bit with 2 processes Percentage of time spent in disk I/O

250,000 134,782 48%

500,000 511,309 51%

750,000 1,156,303 57%

1,000,000 2,368,806 64%

1,250,000 4,583,407 69%

1,500,000 7,142,919 69%

1,750,000 11,473,473 74%

1,839,526 13,135,813 72%

2,000,000 16,398,128 74%

8.2. Conclusions

For the specific case of a transit graph, the Reduced Hybrid 64-bit rolling queue run in parallel outperforms all the other considered algorithms. It is fast enough to compute the TC of real transit systems in a reasonable time (3.65 hours for an RTD transit graph, on a consumer-level machine). The other algorithms considered in this paper are inadequate for this task given its parameters, with the exception of the sequential version of the 64-bit Rolling Queue Reduced Hybrid algorithm.

Two of the more promising algorithms not tested were Altman's matrix multiplication-based approach and the parallel version of Yang-Yu-Liu-Dao-Wan-Pham's algorithm. The system requirements to implement these are beyond the scope of this thesis. However, given the resources, it would be worthwhile to explore the speed of these parallel algorithms with very large graphs.


APPENDIX

package com._10xl3.adjacencymatrixwrtool.readwrite;

import com._10xl3.adjacencymatrixwrtool.utilities.BitTool;
import com._10xl3.adjacencymatrixwrtool.utilities.GraphProperties;
import com._10xl3.adjacencymatrixwrtool.utilities.Row;
import java.io.BufferedInputStream;
import java.io.File;
import java.io.FileInputStream;
import java.io.IOException;
import java.io.RandomAccessFile;
import java.util.Arrays;

/**
 * @author T. Patrick Bailey
 * Date 10/2010
 */

public class AdjacencyGraphWR {

    /** Graph Properties file. **/
    private GraphProperties graphProp;
    /** Buffered input stream. **/
    private RandomAccessFile randAccessFile = null;
    /** Row byte lengths, used for stepped and doublestepped. **/
    private int[] rowByteLengths;
    private int maxRowByteLength = 0;
    /** Row zero byte lengths, used for doublestepped. **/
    private int[] zeroByteLengths;
    /** Row range lengths, used for doublestepped. **/
    private int[] rowRangeLengths;
    private int maxRowRange = 0;
    /** GraphFile number. **/
    private int graphFileNum = 1;
    /** Which row is ready to be read. **/
    private int readyToReadRow = 0;
    /** Bytes read in so far. **/
    private long bytesReadInSoFar = 0;
    /** Determines if I can write to the graph file. **/
    private boolean canwrite;
    /** How many times a bit is read. **/
    private long bitReadCount = 0;
    /** How many times a bit was written. **/
    private long bitwriteCount = 0;
    /** How many times a byte is read. **/
    private long byteReadCount = 0;
    /** How many times a byte is written to. **/
    private long byteWriteCount = 0;

    public AdjacencyGraphWR(GraphProperties graphProp,
            boolean canwrite) throws IOException {
        this.graphProp = graphProp;
        this.canwrite = canwrite;
        if (graphProp.getRowLengthStoredFile().exists()) {
            readInRowLengths();
        }
        if (graphProp.getZeroLengthStoredFile().exists()) {
            readInZeroLengths();
        }
        if (graphProp.getRowRangeLengthStoredFile().exists()) {
            readInRowRangeLengths();
        }
    }

    /**
     * Reads in rowLengthStored.graph and stores the data.
     * @throws IOException exception.
     */
    private void readInRowLengths() throws IOException {
        BufferedInputStream bisRowLength;
        int numVertices = graphProp.getNumVertices();
        rowByteLengths = new int[numVertices];
        byte[] intByte = new byte[4];
        bisRowLength = new BufferedInputStream(
                new FileInputStream(graphProp.
                        getRowLengthStoredFile()));
        for (int x = 0; x < numVertices; x++) {
            bisRowLength.read(intByte);
            rowByteLengths[x] =
                    BitTool.byteArrayToInt(intByte);
            if (maxRowByteLength < rowByteLengths[x]) {
                maxRowByteLength = rowByteLengths[x];
            }
        }
        bisRowLength.close();
    }

    /**
     * Reads in rowZeroFillLength.graph.
     * @throws IOException
     */
    private void readInZeroLengths()
            throws IOException {
        BufferedInputStream bisZeroLength;
        int numVertices = graphProp.getNumVertices();
        zeroByteLengths = new int[numVertices];
        byte[] intByte = new byte[4];
        bisZeroLength = new BufferedInputStream(new
                FileInputStream(graphProp.
                        getZeroLengthStoredFile()));
        for (int x = 0; x < numVertices; x++) {
            bisZeroLength.read(intByte);
            zeroByteLengths[x] =
                    BitTool.byteArrayToInt(intByte);
        }
        bisZeroLength.close();
    }

    /**
     * Read in row range lengths.
     * @throws IOException
     */
    private void readInRowRangeLengths()
            throws IOException {
        BufferedInputStream bisRowRangeLength;
        int numVertices = graphProp.getNumVertices();
        rowRangeLengths = new int[numVertices];
        byte[] intByte = new byte[4];
        bisRowRangeLength = new BufferedInputStream(new
                FileInputStream(graphProp.
                        getRowRangeLengthStoredFile()));
        for (int x = 0; x < numVertices; x++) {
            bisRowRangeLength.read(intByte);
            rowRangeLengths[x] =
                    BitTool.byteArrayToInt(intByte);
            if (rowRangeLengths[x] > maxRowRange) {
                maxRowRange = rowRangeLengths[x];
            }
        }
        bisRowRangeLength.close();
    }

    /**
     * Returns the nth row.
     * @param nthRow the nth row.
     * @return nth row.
     * @throws Exception
     */
    public final Row readRow(final int nthRow)
            throws Exception {
        if (nthRow >= graphProp.getNumVertices()) {
            throw new Exception("There is not a row:'"
                    + nthRow + "' in this graph");
        }
        return new Row(graphProp.getNumVertices(),
                getZeroFillLength(nthRow),
                getByteData(nthRow));
    }

    /**
     * Read in rows based on rows.length.
     * @param rows array of Row to fill.
     * @param from which row to begin reading from.
     * @throws Exception
     */
    public final void readRows(Row[] rows, int from)
            throws Exception {
        if (rows.length + from >
                graphProp.getNumVertices()) {
            throw new Exception(
                    "There is not enough rows:'"
                    + (rows.length + from)
                    + "' in this graph");
        }
        for (int x = from; x < rows.length + from; x++) {
            rows[x - from] = readRow(x);
        }
    }

    /**
     * Write the nth row to file with the provided row.
     * The row given must have the same parameters as the
     * row on disk: row data size, zero length size.
     * @param nthRow which row to write to.
     * @param row row to read from.
     * @return true if the row is written to, else false
     * (row parameters do not match).
     * @throws Exception
     */
    public final boolean writeRow(final int nthRow,
            final Row row) throws Exception {
        if (nthRow >= graphProp.getNumVertices()) {
            throw new Exception("There is not a row:'"
                    + nthRow + "' in this graph");
        }
        boolean isSameRow = false;
        int totalLength = (graphProp.getNumVertices()
                % 8 != 0) ? (1 + (graphProp.getNumVertices()
                / 8)) : (graphProp.getNumVertices() / 8);
        if (row.getRowDataLength() ==
                getRowDataLength(nthRow)
                && row.getZeroFillLength() ==
                getZeroFillLength(nthRow)
                && row.getTotalLength() == totalLength) {
            isSameRow = true;
            if (readyToReadRow > nthRow) {
                reset();
            }
            if (readyToReadRow < nthRow) {
                //Move to be ready to write to row.
                skipTo(nthRow);
                readyToReadRow = nthRow;
            }
            writeNext(row.getRowData());
            readyToReadRow++;
        }
        return isSameRow;
    }

    /**
     * Write the rows[] provided, starting at from.
     * @param rows rows to write.
     * @param from nth row to start writing from.
     * @throws Exception
     */
    public final void writeRows(Row[] rows, int from)
            throws Exception {
        if (rows.length + from >
                graphProp.getNumVertices()) {
            throw new Exception(
                    "There is not enough rows:'"
                    + (rows.length + from)
                    + "' in this graph");
        }
        for (int x = from; x < rows.length + from; x++) {
            writeRow(x, rows[x - from]);
        }
    }

    /**
     * Read the xth place of the nth row.
     * @param xthBit xth place.
     * @param nthRow nth row.
     * @return true if the xth place of the nth row is 1.
     * @throws Exception
     */
    public final boolean readBitOfRow(final int xthBit,
            final int nthRow) throws Exception {
        if (xthBit >= graphProp.getNumVertices()) {
            throw new Exception("There is not a Place:'"
                    + xthBit + "' in this graph");
        }
        bitReadCount++;
        return readRow(nthRow).readBit(xthBit);
    }

    /**
     * Read the xth byte of the nth row.
     * @param xthBit xth byte.
     * @param nthRow nth row.
     * @return byte read from the xth byte of the nth row.
     * @throws Exception
     */
    public final byte readByteOfRow(final int xthBit,
            final int nthRow) throws Exception {
        if (xthBit / 8 >= graphProp.getNumVertices()) {
            throw new Exception(
                    "There is not a (byte) Place:'"
                    + xthBit + "' in this graph");
        }
        return readRow(nthRow).readByte(xthBit);
    }

    /**
     * Writes the given bit to the xth place of the
     * nth row. Returns true if successful, else false.
     * @param xthBit xth place.
     * @param nthRow nth row.
     * @param setTo bit to set.
     * @return true if the byte was actually updated, else
     * false (could be an un-updatable byte).
     * @throws Exception
     */
    public final boolean writeBitOfRow(final int xthBit,
            final int nthRow, boolean setTo) throws Exception {
        if (xthBit >= graphProp.getNumVertices()) {
            throw new Exception("There is not a Place:'"
                    + xthBit + "' in this graph");
        }
        byte returnedByte;
        bitwriteCount++;
        returnedByte = readRow(nthRow).writeBit(
                xthBit, setTo);
        return writeByteOfRow(returnedByte, xthBit / 8,
                nthRow);
    }

    /**
     * Overwrite the xth byte of the nth row.
     * @param b byte to write.
     * @param xthByte which byte of the row to write to
     * (counted as matrix).
     * @param nthRow which row to write to.
     * @return true if the byte is written to, else false.
     * @throws IOException
     */
    public final boolean writeByteOfRow(final byte b,
            final int xthByte, final int nthRow)
            throws IOException {
        int zeroFillLength = getZeroFillLength(nthRow);
        int rowDataLength = getRowDataLength(nthRow);
        int rowDataBytesToSkip = xthByte
                - zeroFillLength;
        byte[] bytes = new byte[1];
        boolean wasWritten = false;
        bytes[0] = b;
        if (readyToReadRow > nthRow) {
            //Resetting to read from beginning
            reset();
        }
        //Move to be ready to write to row.
        skipTo(nthRow);
        readyToReadRow = nthRow;
        if (zeroFillLength <= xthByte && rowDataLength
                >= rowDataBytesToSkip
                && (zeroFillLength + rowDataLength) >
                xthByte) {
            if ((xthByte - zeroFillLength) > 0) {
                skipTheNextXBytes(xthByte
                        - zeroFillLength);
            }
            //Now write to the next byte
            writeNext(bytes);
            wasWritten = true;
            //Reset to beginning
            reset();
        }
        return wasWritten;
    }

    /**
     * Returns the zero fill length (used for double
     * stepped).
     * @param nthRow nth row to read.
     * @return zero fill length.
     */
    private int getZeroFillLength(final int nthRow) {
        int zeroFillLength = 0;
        if (zeroByteLengths != null
                && zeroByteLengths.length > 0) {
            zeroFillLength = zeroByteLengths[nthRow];
        }
        return zeroFillLength;
    }

    /**
     * Row range length (used for double stepped).
     * @param nthRow nth row.
     * @return row range length.
     */
    public int getRowRangeLength(final int nthRow) {
        int rowRangeLength = 0;
        if (graphProp.getStorageType().
                equals("doublestepped")) {
            rowRangeLength = rowRangeLengths[nthRow];
        }
        return rowRangeLength;
    }

J

* Maximum row range.

* return max row range length.

*/

public int getMaxRowRangel_ength()

return maxRowRange;

{

    /**
     * Return the rowData from the Nth Row.
     * @param nthRow nthRow.
     * @return RowData.
     */
    private byte[] getByteData(final int nthRow)
            throws IOException {
        byte[] rowData = null;
        if (readyToReadRow == nthRow) {
            rowData =
                readNext(getRowDataLength(nthRow));
            readyToReadRow++;
        } else if (readyToReadRow < nthRow) {
            skipTo(nthRow);
            readyToReadRow = nthRow;
            rowData = getByteData(nthRow);
        } else {
            //Resetting to read from beginning
            reset();
            rowData = getByteData(nthRow);
        }
        return rowData;
    }

    /**
     * RowLength of nth Row.
     * @param nthRow nthRow.
     * @return length of nthRow in bytes.
     */
    private int getRowDataLength(final int nthRow) {
        int rowLength;
        if (graphProp.getStorageType().
                equals("matrix")) {
            rowLength = graphProp.getNumVertices() / 8;
            if (graphProp.getNumVertices() % 8 > 0) {
                rowLength++;
            }
        } else {
            rowLength = rowByteLengths[nthRow];
        }
        return rowLength;
    }

    /**
     * @return max Row byte length.
     */
    public int getMaxRowByteLength() {
        return maxRowByteLength;
    }

    /**
     * This assumes a constant number of rows stored.
     * @param buffer buffer added to maxRowByteLength.
     * @return number of bytes needed to be stored.
     */
    public long getMinBytesNeededWithBuffer(
            int buffer) {
        long memLengthRequired = 0;
        long sum = 0;
        long[] sumOfRowLengths =
            new long[rowByteLengths.length];
        int minRowsNeeded =
            getMinRowsNeededWithBuffer(buffer);
        for (int x = 0; x < rowByteLengths.length; x++) {
            sum += rowByteLengths[x];
            sumOfRowLengths[x] = sum;
        }
        if (minRowsNeeded > graphProp.getNumVertices()) {
            return sumOfRowLengths[
                sumOfRowLengths.length - 1];
        } else {
            memLengthRequired =
                sumOfRowLengths[minRowsNeeded - 1];
            for (int x = minRowsNeeded; x <
                    graphProp.getNumVertices(); x++) {
                if (memLengthRequired <
                        sumOfRowLengths[x] -
                        sumOfRowLengths[x - minRowsNeeded]) {
                    memLengthRequired =
                        sumOfRowLengths[x] -
                        sumOfRowLengths[x - minRowsNeeded];
                }
            }
        }
        return memLengthRequired;
    }

    public int getMinRowsNeededWithBuffer(int buffer) {
        return 8 * (getMaxRowRangeLength()) + buffer;
    }

    /**
     * Read next x bytes.
     * @param byteLength byte length to read in.
     * @return bytes read from file(s).
     * @throws IOException
     */
    private byte[] readNext(final int byteLength)
            throws IOException {

        byte[] rowData = new byte[byteLength];
        long availableSpaceInCurrentFile;
        if (randAccessFile == null) {
            setInputStream();
        }
        //Number of Bytes left in current file
        if (graphProp.getNumberOfGraphFiles() == 1) {
            //if MaxFileSize is 0 that indicates there
            //is only one file
            availableSpaceInCurrentFile =
                Long.MAX_VALUE;
        } else {
            availableSpaceInCurrentFile =
                getCurrentGraphFileName().length()
                - (bytesReadInSoFar
                % graphProp.getMaxFileSize());
        }
        if (availableSpaceInCurrentFile >
                rowData.length) {
            //More than enough space, simply read from
            //current file
            randAccessFile.read(rowData, 0, byteLength);
            byteReadCount += (long) rowData.length;
            bytesReadInSoFar += (long) rowData.length;
        } else if (availableSpaceInCurrentFile == 0) {
            if (graphFileNum <
                    graphProp.getNumberOfGraphFiles()) {
                graphFileNum++;
                setInputStream();
            }
        } else if (availableSpaceInCurrentFile <
                rowData.length) {
            //rowData must be read from more than one
            //file
            //Get the rest of the data from the current
            //file
            rowData = Arrays.copyOf(readNext((int)
                availableSpaceInCurrentFile),
                byteLength);
            //second portion of the data
            byte[] rowData2 = readNext(byteLength
                - (int) availableSpaceInCurrentFile);
            for (int x = (int)
                    availableSpaceInCurrentFile; x <
                    byteLength; x++) {
                rowData[x] = rowData2[
                    x - (int) availableSpaceInCurrentFile];
            }
        } else {
            //They are equal, an exact fit; read then
            //increment to next file.
            randAccessFile.read(rowData, 0, byteLength);
            byteReadCount += (long) rowData.length;
            bytesReadInSoFar += (long) rowData.length;
            if (graphFileNum <
                    graphProp.getNumberOfGraphFiles()) {
                graphFileNum++;
                setInputStream();
            }
        }
        return rowData;
    }


    private final void writeNext(byte[] bytes)
            throws IOException {
        //byte[] rowData = new byte[byteLength];
        long availableSpaceInCurrentFile;
        if (randAccessFile == null) {
            setInputStream();
        }
        //Number of Bytes left in current file
        if (graphProp.getNumberOfGraphFiles() == 1) {
            //if MaxFileSize is 0 that indicates there
            //is only one file
            availableSpaceInCurrentFile =
                Long.MAX_VALUE;
        } else {
            availableSpaceInCurrentFile =
                getCurrentGraphFileName().length()
                - (bytesReadInSoFar
                % graphProp.getMaxFileSize());
        }
        if (availableSpaceInCurrentFile >
                bytes.length) {
            //More than enough space, simply write the
            //bytes to the current file
            randAccessFile.write(bytes);
            byteWriteCount += bytes.length;
            bytesReadInSoFar += (long) bytes.length;
        } else if (availableSpaceInCurrentFile
                < bytes.length) {
            //bytes must be written to more than one
            //file
            //Write what fits in the current
            //file
            writeNext(Arrays.copyOf(bytes, (int)
                availableSpaceInCurrentFile));
            //second portion of the data
            writeNext(Arrays.copyOfRange(bytes, (int)
                availableSpaceInCurrentFile,
                bytes.length));
        } else {
            //They are equal, an exact fit; write then
            //increment to next file.
            randAccessFile.write(bytes);
            byteWriteCount += bytes.length;
            bytesReadInSoFar += (long) bytes.length;
            if (graphFileNum <
                    graphProp.getNumberOfGraphFiles()) {
                graphFileNum++;
                setInputStream();
            }
        }
    }

    /**
     * Creates input stream based on current file name.
     * @throws IOException
     */
    private void setInputStream() throws IOException {
        close();
        if (canWrite) {
            randAccessFile = new
                RandomAccessFile(
                getCurrentGraphFileName(), "rw");
        } else {
            randAccessFile = new
                RandomAccessFile(
                getCurrentGraphFileName(), "r");
        }
    }

    /**
     * Close buffered output stream.
     * @throws IOException
     */
    public final void close() throws IOException {
        if (randAccessFile != null) {
            randAccessFile.close();
            randAccessFile = null;
        }
    }

    /**
     * Returns the current Graph file name to write to.
     * @return current graph file name.
     */
    private File getCurrentGraphFileName() {
        String graphFileName = graphFileNum + "";
        while (graphFileName.length() < 8) {
            graphFileName = "0" + graphFileName;
        }
        return new File(graphProp.getDirectory(),
            graphFileName + ".graph");
    }

    /**
     * Skip to nth row.
     * @param nthRow nth Row.
     * @throws IOException
     */
    private void skipTo(final int nthRow)
            throws IOException {
        long skipLength = 0;
        if (randAccessFile == null) {
            setInputStream();
        }
        for (int x = readyToReadRow; x < nthRow; x++) {
            skipLength += getRowDataLength(x);
        }
        if (graphProp.getNumberOfGraphFiles() > 1) {
            //Handle multiple files here
            skipBytesMultiFile(skipLength);
        } else {
            randAccessFile.seek(randAccessFile.
                getFilePointer() + skipLength);
        }
    }

    /**
     * Skip the next X Bytes.
     * @param byteSkipLength number of bytes to skip.
     * @throws IOException
     */
    private void skipTheNextXBytes(
            final int byteSkipLength) throws IOException {
        long skipLength = byteSkipLength;
        if (randAccessFile == null) {
            setInputStream();
        }
        if (graphProp.getNumberOfGraphFiles() > 1) {
            //Handle multiple files here
            skipBytesMultiFile(skipLength);
        } else {
            randAccessFile.seek(
                randAccessFile.getFilePointer()
                + skipLength);
        }
    }

    /**
     * Used to skip around when reading from a graph
     * that is contained
     * in multiple files.
     *
     * @param bytesToSkip number of bytes to skip.
     * @throws IOException
     */
    private final void skipBytesMultiFile(
            long bytesToSkip) throws IOException {
        long availableSpaceInCurrentFile =
            getCurrentGraphFileName().length()
            - (bytesReadInSoFar
            % graphProp.getMaxFileSize());
        if (bytesToSkip < availableSpaceInCurrentFile) {
            randAccessFile.seek(
                randAccessFile.getFilePointer()
                + bytesToSkip);
            bytesReadInSoFar += bytesToSkip;
        } else if (availableSpaceInCurrentFile == 0) {
            graphFileNum++;
            setInputStream();
        } else if (bytesToSkip >
                availableSpaceInCurrentFile) {
            //Skip current file
            skipBytesMultiFile(
                availableSpaceInCurrentFile);
            //skip the rest
            skipBytesMultiFile(bytesToSkip
                - availableSpaceInCurrentFile);
        } else {
            //They are equal, an exact fit; skip then
            //increment to next file.
            graphFileNum++;
            setInputStream();
            bytesReadInSoFar += bytesToSkip;
        }
    }

    /**
     * How many times a bit is read.
     * @return how many times a bit is read.
     */
    public long getBitReadCount() {
        return bitReadCount;
    }

    /**
     * How many times a bit is written to.
     * @return how many times a bit is written to.
     */
    public long getBitWriteCount() {
        return bitWriteCount;
    }

    /**
     * Bytes read.
     * @return bytes read.
     */
    public long getByteReadCount() {
        return byteReadCount;
    }

    /**
     * Bytes written to.
     * @return number of bytes written to.
     */
    public long getByteWriteCount() {
        return byteWriteCount;
    }

    /**
     * Resets to read from first file first byte.
     * @throws IOException
     */
    private void reset() throws IOException {
        close();
        readyToReadRow = 0;
        graphFileNum = 1;
        bytesReadInSoFar = 0;
    }

    /**
     * Returns true if file is set to be written to.
     * @return true if set to be written to.
     */
    public boolean isWritable() {
        return canWrite;
    }
}

package com._10xl3.adjacencymatrixwrtool.readwrite;

import com._10xl3.adjacencymatrixwrtool.utiltiies.GraphProperties;
import com._10xl3.adjacencymatrixwrtool.utiltiies.Row;
import java.io.IOException;

/**
 *
 * @author T. Patrick Bailey
 * Date 10/2010
 */

public class FullyBufferedAdjacencyGraphWR {
    /** GraphWR tool. **/
    private AdjacencyGraphWR graphWR;
    /** Graph Properties file. **/
    private GraphProperties graphProp;
    /** Rows. **/
    private Row[] rows;
    /** How many times a row is read. **/
    private long rowReadCount = 0;
    /** How many times a row has been written to. **/
    private long rowWriteCount = 0;
    /** Byte read count. **/
    private long byteReadCount = 0;
    /** Byte write count. **/
    private long byteWriteCount = 0;
    /** Bit Read Count. **/
    private long bitReadCount = 0;
    /** Bit write count. **/
    private long bitWriteCount = 0;

    public FullyBufferedAdjacencyGraphWR(
            GraphProperties graphProp, boolean canWrite)
            throws IOException, Exception {
        graphWR = new AdjacencyGraphWR(
            graphProp, canWrite);
        this.graphProp = graphProp;
        rows = new Row[graphProp.getNumVertices()];
        graphWR.readRows(rows, 0);
    }

    /**
     * Read nth Row.
     * @param nthRow row to read.
     * @return nth row.
     * @throws Exception
     */
    public Row readRow(final int nthRow)
            throws Exception {
        checkRow(nthRow);
        rowReadCount++;
        return rows[nthRow];
    }

    /**
     * Write the row into the rows matrix.
     * @param row row to overwrite old row.
     * @param nthRow nth row.
     * @throws Exception
     */
    public boolean writeRow(final Row row,
            final int nthRow) throws Exception {
        checkRow(nthRow);
        boolean isSameRow = false;
        int totalLength = (graphProp.getNumVertices()
            % 8 != 0)
            ? (1 + (graphProp.getNumVertices() / 8))
            : (graphProp.getNumVertices() / 8);
        //New row must fit within old row.
        if (row.getRowDataLength() ==
                rows[nthRow].getRowDataLength()
                && row.getZeroFillLength() ==
                rows[nthRow].getZeroFillLength()
                && row.getTotalLength() ==
                totalLength) {
            isSameRow = true;
            rows[nthRow] = row;
            rowWriteCount++;
        }
        return isSameRow;
    }

    /**
     * Read the xthByte of the nth row.
     * @param xthByte xth byte.
     * @param nthRow nth row.
     * @return xth byte of the nth row.
     * @throws Exception
     */
    public byte readByteOfRow(final int xthByte,
            final int nthRow) throws Exception {
        checkRow(nthRow);
        checkByte(xthByte);
        byteReadCount++;
        return rows[nthRow].readByte(xthByte);
    }

    /**
     * Set the xthByte of the nthRow.
     * @param b byte to set to.
     * @param xthByte xthByte.
     * @param nthRow nthRow.
     * @return true if set else false (out of range).
     * @throws Exception
     */
    public boolean writeByteOfRow(final byte b,
            final int xthByte, final int nthRow)
            throws Exception {
        checkRow(nthRow);
        checkByte(xthByte);
        boolean wasWritten = false;
        if (rows[nthRow].getZeroFillLength() <= xthByte
                && (rows[nthRow].getZeroFillLength()
                + rows[nthRow].getRowDataLength())
                > xthByte) {
            //Now write to the next byte
            byteWriteCount++;
            wasWritten = true;
            rows[nthRow].writeByte(xthByte, b);
        }
        return wasWritten;
    }

    /**
     * Read the xthPlace of the nthRow.
     * @param xthPlace xth bit to read.
     * @param nthRow nthRow to read from.
     * @return true if bit is 1 else 0.
     * @throws Exception
     */
    public boolean readBitOfRow(final int xthPlace,
            final int nthRow) throws Exception {
        checkRow(nthRow);
        checkRow(xthPlace);
        bitReadCount++;
        return rows[nthRow].readBit(xthPlace);
    }

    /**
     * Writes bit to xthPlace of Nth row, returns true
     * if bit was set else false.
     * @param xthPlace xthPlace.
     * @param nthRow nthRow.
     * @param setTo Boolean to set it to.
     * @return true if bit was set else false.
     * @throws Exception
     */
    public boolean writeBitOfRow(final int xthPlace,
            final int nthRow, boolean setTo) throws Exception {
        checkRow(nthRow);
        checkRow(xthPlace);
        boolean wasWritten = rows[nthRow].
            writeBitWasUpdated(xthPlace, setTo);
        if (wasWritten) {
            bitWriteCount++;
        }
        return wasWritten;
    }

    /**
     * Check if there is an nthRow.
     * @param nthRow nth row to check.
     * @throws Exception
     */
    private void checkRow(final int nthRow)
            throws Exception {
        if (nthRow > rows.length) {
            throw new Exception("There is not a row:'"
                + nthRow + "' in this graph");
        }
    }

    /**
     * Check to make sure there are enough bytes in the
     * given row.
     * @param xthByte byte to test validity.
     * @throws Exception
     */
    private void checkByte(final int xthByte)
            throws Exception {
        if (xthByte > rows.length) {
            throw new Exception("There is not a byte:'"
                + xthByte + "' in any row");
        }
    }

    /**
     * Returns number of rows read count.
     * @return number of times all rows have been read.
     */
    public final long getRowReadCount() {
        return rowReadCount;
    }

    /**
     * Returns how many times an entire row has been
     * written to.
     * @return how many times an entire row has been
     * written to.
     */
    public final long getRowWriteCount() {
        return rowWriteCount;
    }

    /**
     * How many bytes read from disk.
     * @return how many bytes read from disk.
     */
    public final long getDiskByteReadCount() {
        return graphWR.getByteReadCount();
    }

    /**
     * How many bytes written to disk.
     * @return how many bytes written to disk.
     */
    public final long getDiskByteWriteCount() {
        return graphWR.getByteWriteCount();
    }
    /**
     * Return number of bytes read from memory.
     * @return number of bytes read from memory.
     */
    public final long getByteReadCount() {
        return byteReadCount;
    }

    /**
     * Return byte write count from memory.
     * @return byte write count from memory.
     */
    public final long getByteWriteCount() {
        return byteWriteCount;
    }

    /**
     * Returns the bit read count from memory.
     * @return the bit read count from memory.
     */
    public final long getBitReadCount() {
        return bitReadCount;
    }

    /**
     * Returns the bit write count from memory.
     * @return the bit write count from memory.
     */
    public final long getBitWriteCount() {
        return bitWriteCount;
    }

    /**
     * Returns true if file is set to be written to.
     * @return true if set to be written to.
     */
    public boolean isWritable() {
        return graphWR.isWritable();
    }

    /**
     * Closes RandomAccessFile.
     * @throws IOException
     */
    public final void close() throws IOException {
        graphWR.close();
    }

    /**
     * Writes all rows to disk.
     * @throws Exception
     */
    public void writeAll() throws Exception {
        graphWR.writeRows(rows, 0);
    }

    /**
     * Returns the rowRangeLength.
     * @param nthRow nthRow.
     * @return row range.
     */
    public int getRowRangeLength(final int nthRow) {
        return this.graphWR.getRowRangeLength(nthRow);
    }
}

package com._10xl3.adjacencymatrixwrtool.utiltiies;

/**
 *
 * @author T. Patrick Bailey
 * Date 10/2010
 */
public class BitTool {
    /** Hidden Constructor. **/
    private BitTool() { }

    /**
     * Sets the leading bits to 0.
     * @param b Byte to adjust.
     * @param bit num bits.
     * @return updated byte.
     */
    public static byte setLeadingBitsToZero(
            final byte b, final int bit) {
        byte newB = b;
        if (bit == 1) {
            newB = (byte) (b & 0x7F);
        } else if (bit == 2) {
            newB = (byte) (b & 0x3F);
        } else if (bit == 3) {
            newB = (byte) (b & 0x1F);
        } else if (bit == 4) {
            newB = (byte) (b & 0x0F);
        } else if (bit == 5) {
            newB = (byte) (b & 0x07);
        } else if (bit == 6) {
            newB = (byte) (b & 0x03);
        } else if (bit == 7) {
            newB = (byte) (b & 0x01);
        }
        return newB;
    }

    /**
     * Sets the trailing bits to 0.
     * @param b Byte to adjust.
     * @param bit num bits.
     * @return updated byte.
     */
    public static byte setTrailingBitsToZero(
            final byte b, final int bit) {
        byte newB = b;
        if (bit == 1) {
            newB = (byte) (b & 0x80);
        } else if (bit == 2) {
            newB = (byte) (b & 0xC0);
        } else if (bit == 3) {
            newB = (byte) (b & 0xE0);
        } else if (bit == 4) {
            newB = (byte) (b & 0xF0);
        } else if (bit == 5) {
            newB = (byte) (b & 0xF8);
        } else if (bit == 6) {
            newB = (byte) (b & 0xFC);
        } else if (bit == 7) {
            newB = (byte) (b & 0xFE);
        }
        return newB;
    }

    /**
     * Converts an int to a byte array.
     * @param value int to convert.
     * @return converted byte array.
     */
    public static byte[] intToByteArray(
            final int value) {
        return new byte[] {
            (byte) (value >> 24),
            (byte) (value >> 16),
            (byte) (value >> 8),
            (byte) value};
    }

    /**
     * Converts a byte array to an int.
     * @param b byte array to convert.
     * @return converted int value.
     */
    public static int byteArrayToInt(final byte[] b) {
        return (b[0] << 24)
            + ((b[1] & 0xFF) << 16)
            + ((b[2] & 0xFF) << 8)
            + (b[3] & 0xFF);
    }

    public static byte[] longToByteArray(
            final long value) {
        return new byte[] {
            (byte) (value >> 56),
            (byte) (value >> 48),
            (byte) (value >> 40),
            (byte) (value >> 32),
            (byte) (value >> 24),
            (byte) (value >> 16),
            (byte) (value >> 8),
            (byte) value};
    }

    /**
     * Converts a byte array to a long.
     * @param b byte array to convert.
     * @return converted long value.
     */
    public static long byteArrayToLong(
            final byte[] b) {
        return ((long) b[0] << 56)
            + (((long) b[1] & 0xFF) << 48)
            + (((long) b[2] & 0xFF) << 40)
            + (((long) b[3] & 0xFF) << 32)
            + (((long) b[4] & 0xFF) << 24)
            + (((long) b[5] & 0xFF) << 16)
            + (((long) b[6] & 0xFF) << 8)
            + ((long) b[7] & 0xFF);
    }

    /**
     * If the bit is one at the xthPlace.
     * @param b byte to check.
     * @return true if the bit is one at the xthPlace.
     */
    public static boolean isBitOne(final byte b,
            final int bit) {
        return (((b << bit) & 0x80) == 0x80);
    }

    /**
     * If the bit is one at the xthPlace.
     * @param l long to check.
     * @return true if the bit is one at the xthPlace.
     */
    public static boolean isBitOne(final long l,
            final int bit) {
        return (((l << bit) & 0x8000000000000000L)
            == 0x8000000000000000L);
    }

    /**
     * Set the bit to 0 if setBitTo is false otherwise
     * set to 1.
     * And returns updated byte.
     * @param b byte to set.
     * @param bit bit number to update.
     * @param setBitTo Byte.
     * @return updated byte.
     */
    public static byte setBit(final byte b,
            final int bit, boolean setBitTo) {
        byte newByte = b;
        byte mask = (byte) (0x01 << (7 - bit));
        byte maskOpposite = (byte) (mask ^ 0xFF);
        if (setBitTo) {
            newByte = (byte) (b | mask);
        } else {
            newByte = (byte) (b & maskOpposite);
        }
        return newByte;
    }

}
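BitTool indexes bits from the most significant end of the byte: bit 0 is the leftmost (0x80) position, bit 7 the rightmost. A standalone sketch (hypothetical helper, not part of the thesis sources) replicating that convention:

```java
// Sketch of BitTool's MSB-first bit indexing: bit 0 is the
// most significant bit of the byte.
public class BitToolSketch {

    // Same masking as BitTool.setBit: build a single-bit mask,
    // counted from the most significant end.
    static byte setBit(byte b, int bit, boolean setBitTo) {
        byte mask = (byte) (0x01 << (7 - bit));
        return setBitTo ? (byte) (b | mask)
                        : (byte) (b & (mask ^ 0xFF));
    }

    // Same test as BitTool.isBitOne: shift the bit of interest
    // into the 0x80 position and compare.
    static boolean isBitOne(byte b, int bit) {
        return ((b << bit) & 0x80) == 0x80;
    }

    public static void main(String[] args) {
        byte b = 0;
        b = setBit(b, 0, true);      // set the leftmost bit
        assert b == (byte) 0x80;
        assert isBitOne(b, 0);
        assert !isBitOne(b, 7);
        b = setBit(b, 0, false);     // clear it again
        assert b == 0;
        System.out.println("ok");    // prints ok
    }
}
```

Because Java promotes bytes to int before shifting, `isBitOne` masks with 0x80 after the shift so sign extension of negative bytes does not matter.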


package com._10xl3.adjacencymatrixwrtool.utiltiies;

import java.util.Arrays;

/**
 * @author T. Patrick Bailey
 * Date 10/2010
 */

public class Row {
    /** Row byte Data.
     * This is the data that may not be zero filled.
     * It may not be the entire length of the row.
     **/
    private byte[] rowData;
    /** Number of edges in this row. **/
    private int numEdges = -1;
    /** The number of bytes to fill with 0s preceding
     the rowData.**/
    private int zeroFillLength;
    /** The total length of the row in bytes.
     * It will at most be the same as
     * rowData.length + zeroFillLength.
     * However there can be trailing 0s after that;
     * it will be exactly roundup(NumNodes/8).
     */
    private int totalLength;
    /** Number of nodes. **/
    private int numNodes;

    /**
     * Row Constructor.
     * @param numNodes Number of Nodes in graph.
     * @param zeroFillLength zero fill length.
     * @param rowData Actual rowData.
     */
    public Row(final int numNodes,
            final int zeroFillLength,
            final byte[] rowData) throws Exception {
        this.totalLength = (numNodes % 8 != 0)
            ? (1 + (numNodes / 8)) : (numNodes / 8);
        this.numNodes = numNodes;
        this.zeroFillLength = zeroFillLength;
        this.rowData = rowData.clone();
        if (this.rowData.length + this.zeroFillLength >
                totalLength) {
            throw new Exception("RowData cannot be"
                + " longer than the totalLength");
        }
    }

    public void resetValues(final int numNodes,
            final int zeroFillLength,
            final byte[] rowData) throws Exception {
        this.totalLength = (numNodes % 8 != 0)
            ? (1 + (numNodes / 8)) : (numNodes / 8);
        this.numNodes = numNodes;
        this.zeroFillLength = zeroFillLength;
        this.rowData = rowData;
        if (this.rowData.length + this.zeroFillLength >
                totalLength) {
            throw new Exception("RowData cannot be"
                + " longer than the totalLength");
        }
    }

    /**
     * Returns true if the xth place is 1.
     *
     * If it is outside the rowData it will return 0.
     *
     * @return true if the xth place is 1.
     */
    public boolean readBit(int xthBit) {
        boolean isOne = false;
        int byteNumber = xthBit / 8;
        int bitOfByte = xthBit % 8;
        if (byteNumber < zeroFillLength + rowData.length
                && byteNumber >= zeroFillLength) {
            //byte is within the rowData
            isOne = BitTool.isBitOne(rowData[byteNumber
                - zeroFillLength], bitOfByte);
        }
        return isOne;
    }


    /**
     * Updates the xthPlace of this row to the setBitTo;
     * will only update it if it's within the rowData
     * area.
     * @param xthBit xth bit in row.
     * @param setBitTo if true set it to 1 else 0.
     * @return byte that was updated; if not updated
     * will return 0x00.
     */
    public final byte writeBit(final int xthBit,
            final boolean setBitTo) {
        int byteNumber = xthBit / 8;
        int bitOfByte = xthBit % 8;
        byte byteData = 0;
        if (byteNumber < zeroFillLength
                + rowData.length
                && byteNumber >= zeroFillLength) {
            //byte is within the rowData
            rowData[byteNumber - zeroFillLength] =
                BitTool.setBit(rowData[byteNumber
                - zeroFillLength], bitOfByte, setBitTo);
            byteData = rowData[byteNumber
                - zeroFillLength];
        }
        return byteData;
    }

    /**
     * Updates the xthPlace of this row to the setBitTo;
     * will only update it if it's within the rowData
     * area.
     * @param xthBit xth bit in row.
     * @param setBitTo if true set it to 1 else 0.
     * @return true if bit was updated else false.
     */
    public final boolean writeBitWasUpdated(
            final int xthBit, final boolean setBitTo) {
        int byteNumber = xthBit / 8;
        int bitOfByte = xthBit % 8;
        boolean wasUpdated = false;
        if (byteNumber < zeroFillLength
                + rowData.length
                && byteNumber >= zeroFillLength) {
            //byte is within the rowData
            rowData[byteNumber - zeroFillLength] =
                BitTool.setBit(rowData[byteNumber
                - zeroFillLength], bitOfByte, setBitTo);
            wasUpdated = true;
        }
        return wasUpdated;
    }

    /**
     * Returns the xth byte of this row.
     * If it is not within row data it will be 0.
     * @param xthByte xth byte to get.
     * @return xth byte returned.
     */
    public byte readByte(int xthByte) {
        if (xthByte < zeroFillLength + rowData.length
                && xthByte >= zeroFillLength) {
            //byte is within the rowData
            return rowData[xthByte - zeroFillLength];
        }
        return 0;
    }

    /**
     * Returns the xth byte of this row.
     * No bounds check is performed.
     * @param xthByte xth byte to get.
     * @return xth byte returned.
     */
    public byte readByteNoCheck(int xthByte) {
        return rowData[xthByte - zeroFillLength];
    }

    public boolean writeByte(int xthByte, byte b) {
        boolean isWithin = false;
        if (xthByte < zeroFillLength + rowData.length
                && xthByte >= zeroFillLength) {
            //byte is within the rowData
            rowData[xthByte - zeroFillLength] = b;
            isWithin = true;
        }
        return isWithin;
    }

    /**
     * Total Length.
     * @return total row length.
     */
    public final int getTotalLength() {
        return this.totalLength;
    }

    /**
     * Byte size of zero fill length.
     * @return zero fill length.
     */
    public final int getZeroFillLength() {
        return this.zeroFillLength;
    }

    /**
     * Row data length.
     * @return row data length.
     */
    public final int getRowDataLength() {
        return this.rowData.length;
    }

    /**
     * Returns the row as a byte array.
     * @return the row as a byte array.
     */
    public final byte[] getRow() {
        byte[] rowDataWithZeroFill =
            new byte[zeroFillLength + rowData.length];
        if (rowData.length == totalLength) {
            rowDataWithZeroFill = Arrays.copyOf(
                rowData, totalLength);
        } else {
            for (int x = zeroFillLength; x <
                    zeroFillLength + rowData.length; x++) {
                rowDataWithZeroFill[x] = rowData[
                    x - zeroFillLength];
            }
            rowDataWithZeroFill = Arrays.copyOf(
                rowDataWithZeroFill, totalLength);
        }
        return rowDataWithZeroFill;
    }

    /**
     * Returns the validRowData.
     * @return the validRowData.
     */
    public final byte[] getRowData() {
        return rowData.clone();
    }

    /**
     * Returns the validRowData with leading zero bytes.
     * @return the validRowData with leading zero bytes.
     */
    public final byte[] getRowDataWithLeadingZeros() {
        byte[] rowDataWithZeroFill = new byte[
            zeroFillLength + rowData.length];
        if (rowData.length == totalLength) {
            rowDataWithZeroFill = Arrays.copyOf(
                rowData, totalLength);
        } else {
            for (int x = zeroFillLength; x <
                    zeroFillLength + rowData.length; x++) {
                rowDataWithZeroFill[x] = rowData[
                    x - zeroFillLength];
            }
        }
        return rowDataWithZeroFill;
    }

    /**
     * Returns the entire row as a string in binary
     * format.
     * @return the entire row as a string in binary
     * format.
     */
    @Override
    public final String toString() {
        final StringBuffer strB = new StringBuffer(100);
        final byte[] allRow = getRow();
        for (int x = 0; x < allRow.length; x++) {
            strB.append(zeroPadLeft(Integer.toString(
                allRow[x] & 0xff, 2)) + ":");
        }
        if (strB.length() > 0) {
            strB.deleteCharAt(strB.length() - 1);
        }
        return strB.toString();
    }

    /**
     * Returns the entire row as a string in binary
     * format.
     * @return the entire row as a string in binary
     * format.
     */
    public final String getRowBinaryAsString() {
        return this.toString();
    }

    /**
     * Returns the entire row as a string in hex format.
     * @return the entire row as a string in hex format.
     */
    public final String getRowHexAsString() {
        final StringBuffer strB = new StringBuffer(500);
        final byte[] allRow = getRow();
        for (int x = 0; x < allRow.length; x++) {
            strB.append(zeroPadHexLeft(Integer.
                toString(allRow[x] & 0xff, 16)) + ":");
        }
        if (strB.length() > 0) {
            strB.deleteCharAt(strB.length() - 1);
        }
        return strB.toString();
    }

    /**
     * Returns the rowData as a string in binary format.
     * @return the rowData as a string in binary format.
     */
    public final String getRowDataBinaryAsString() {
        final StringBuffer strB = new StringBuffer(500);
        for (int x = 0; x < this.rowData.length; x++) {
            strB.append(zeroPadLeft(
                Integer.toString(this.rowData[x] & 0xff,
                2)) + ":");
        }
        if (strB.length() > 0) {
            strB.deleteCharAt(strB.length() - 1);
        }
        return strB.toString();
    }

    /**
     * Returns the rowData as a string in hex format.
     * @return the rowData as a string in hex format.
     */
    public final String getRowDataHexAsString() {
        final StringBuffer strB = new StringBuffer(500);
        for (int x = 0; x < this.rowData.length; x++) {
            strB.append(zeroPadHexLeft(
                Integer.toString(this.rowData[x] & 0xff,
                16)) + ":");
        }
        if (strB.length() > 0) {
            strB.deleteCharAt(strB.length() - 1);
        }
        return strB.toString();
    }

    /**
     * Pad binary data with '0's.
     * @param str padded binary string.
     * @return padded binary string.
     */
    public static String zeroPadLeft(String str) {
        while (str.length() < 8) {
            str = "0" + str;
        }
        return str;
    }

    /**
     * Pad hex data with '0's.
     * @param str padded hex string.
     * @return padded hex string.
     */
    public static String zeroPadHexLeft(String str) {
        while (str.length() < 2) {
            str = "0" + str;
        }
        return str;
    }

    public final Row clone() {
        Row row = null;
        try {
            row = new Row(this.numNodes,
                this.zeroFillLength, this.rowData);
        } catch (Exception e) {
            e.printStackTrace();
        }
        return row;
    }

    public final int getNumNodes() {
        return this.numNodes;
    }
}

package com._10xl3.floydwarshallalgorithm;

import com._10xl3.adjacencymatrixwrtool.readwrite.AdjacencyGraphWR;
import com._10xl3.adjacencymatrixwrtool.readwrite.FullyBufferedAdjacencyGraphWR;
import com._10xl3.adjacencymatrixwrtool.utiltiies.GraphProperties;
import java.io.File;
import java.io.IOException;
import java.io.RandomAccessFile;
import java.text.DecimalFormat;
import java.text.NumberFormat;

/**
 * @author T. Patrick Bailey
 * Date 10/2010
 */

public class FloydWarshallAlgorithm {
    private StringBuffer fullDataStrB =
        new StringBuffer(1000);
    private RandomAccessFile randAccessFile =
        new RandomAccessFile(new File("graphOutput.txt"),
        "rw");
    private GraphProperties gProp;
    private NumberFormat formatter = new
        DecimalFormat("###,###,###,###,###,###,###,###");

    /**
     * Runs time test on the Floyd-Warshall Algorithm.
     * @param graphFile
     * @throws IOException
     * @throws Exception
     */
    public FloydWarshallAlgorithm(File graphFile)
            throws IOException, Exception {
        gProp = new GraphProperties(new
            File(graphFile.getParentFile(),
            "graph.properties"));
        long memoryTime = getTimeToRunInMemory();


        long diskTime = getTimeToRunInDisk();
        String firstLine;
        firstLine = "Floyd-Warshall: "
            + formatter.format(gProp.getNumVertices())
            + ": " + formatter.format(memoryTime)
            + ": " + formatter.format(diskTime);
        System.out.println(firstLine);
        fullDataStrB.insert(0, firstLine);
        fullDataStrB.insert(0,
            "\n--FloydWarshallAlgorithm--");
        randAccessFile.seek(randAccessFile.length());
        randAccessFile.write(
            fullDataStrB.toString().getBytes());
        randAccessFile.close();
    }

    /**
     * Returns the runtime in nanoseconds of the
     * Floyd-Warshall Algorithm in memory.
     * @return the runtime in nanoseconds of the
     * Floyd-Warshall Algorithm in memory.
     * @throws IOException
     * @throws Exception
     */
    public long getTimeToRunInMemory()
            throws IOException, Exception {

        FullyBufferedAdjacencyGraphWR buffGraphWR =
            new FullyBufferedAdjacencyGraphWR(
            gProp, true);
        int numVert = gProp.getNumVertices();
        long totalTime;
        totalTime = System.nanoTime();
        for (int i = 0; i < numVert; i++) {
            for (int j = 0; j < numVert; j++) {
                if (buffGraphWR.readBitOfRow(j, i)) {
                    for (int k = 0; k < numVert; k++) {
                        if (buffGraphWR.readBitOfRow(
                                i, k)) {
                            buffGraphWR.writeBitOfRow(
                                j, k, true);
                        }
                    }
                }
            }
        }
        totalTime = System.nanoTime() - totalTime;
        fullDataStrB.append("\nMemory\nBits Read: "
            + formatter.format(
            buffGraphWR.getBitReadCount())
            + "\nBits Write: "
            + formatter.format(buffGraphWR.
            getBitWriteCount()));
        buffGraphWR.close();
        return totalTime;
    }
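The triple loop above is Warshall's transitive-closure recurrence: whenever vertex j reaches the pivot i and i reaches k, vertex j reaches k. A minimal sketch on a plain boolean matrix (a simplification for illustration, not the thesis's disk-backed bit-row implementation):

```java
// Warshall's transitive closure on an in-memory boolean
// adjacency matrix; a[j][k] == true means j reaches k.
public class WarshallSketch {
    public static void close(boolean[][] a) {
        int n = a.length;
        for (int i = 0; i < n; i++) {          // pivot vertex
            for (int j = 0; j < n; j++) {
                if (a[j][i]) {                 // j reaches i ...
                    for (int k = 0; k < n; k++) {
                        if (a[i][k]) {         // ... and i reaches k
                            a[j][k] = true;    // so j reaches k
                        }
                    }
                }
            }
        }
    }

    public static void main(String[] args) {
        // Path graph 0 -> 1 -> 2
        boolean[][] a = {
            {false, true,  false},
            {false, false, true},
            {false, false, false}};
        close(a);
        System.out.println(a[0][2]); // prints true (0 -> 1 -> 2)
    }
}
```

The thesis code performs exactly this loop, but each `a[j][k]` access goes through readBitOfRow/writeBitOfRow so that reads and writes can be counted and redirected to disk.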

    /**
     * Returns the runtime in nanoseconds of the
     * Floyd-Warshall Algorithm from Disk.
     * @return the runtime in nanoseconds of the
     * Floyd-Warshall Algorithm from Disk.
     * @throws Exception
     */
    public long getTimeToRunInDisk() throws Exception {
        AdjacencyGraphWR graphWR =
            new AdjacencyGraphWR(gProp, true);
        int numVert = gProp.getNumVertices();
        long totalTime;
        totalTime = System.nanoTime();

        for (int i = 0; i < numVert; i++) {
            for (int j = 0; j < numVert; j++) {
                if (graphWR.readBitOfRow(j, i)) {
                    for (int k = 0; k < numVert; k++) {
                        if (graphWR.readBitOfRow(
                                i, k)) {
                            graphWR.writeBitOfRow(j, k,
                                true);
                        }
                    }
                }
            }
        }
        totalTime = System.nanoTime() - totalTime;
        fullDataStrB.append("\nDisk\nBits Read: "
            + formatter.format(graphWR.getBitReadCount())
            + "\nBits Write: "
            + formatter.format(graphWR.
            getBitWriteCount()));
        graphWR.close();
        return totalTime;
    }

    public static void main(final String... args) {
        File graphFile = new File(args[0]);
        try {
            new FloydWarshallAlgorithm(graphFile);
        } catch (Exception e) {
            e.printStackTrace();
        }
    }
}


package com._10xl3.warrenalgorithm;

import com._10xl3.adjacencymatrixwrtool.readwrite.AdjacencyGraphWR;
import com._10xl3.adjacencymatrixwrtool.readwrite.FullyBufferedAdjacencyGraphWR;
import com._10xl3.adjacencymatrixwrtool.utiltiies.GraphProperties;
import com._10xl3.adjacencymatrixwrtool.utiltiies.Row;
import java.io.File;
import java.io.IOException;
import java.io.RandomAccessFile;
import java.text.DecimalFormat;
import java.text.NumberFormat;

/**
 * @author T. Patrick Bailey
 * Date: 10/2010
 */

public class WarrenAlgorithm {
    private StringBuffer fullDataStrB =
            new StringBuffer(1000);
    private RandomAccessFile randAccessFile = new
            RandomAccessFile(
                    new File("graphOutput.txt"), "rw");
    private GraphProperties gProp;
    private NumberFormat formatter = new
            DecimalFormat("###,###,###,###,###,###,###,###");

    /**
     * Runs time test on the Warren Algorithm
     * @param graphFile
     * @throws IOException
     * @throws Exception
     */

    public WarrenAlgorithm(File graphFile)
            throws IOException, Exception {
        gProp = new GraphProperties(new
                File(graphFile.getParentFile(),
                        "graph.properties"));
        long memoryTime = getTimeToRunInMemory();
        long diskTime = getTimeToRunInDisk();
        String firstLine;
        firstLine = "Warren: "
                + formatter.format(gProp.getNumVertices())
                + ": " + formatter.format(memoryTime)
                + " " + formatter.format(diskTime);
        System.out.println(firstLine);
        fullDataStrB.insert(0, firstLine);
        fullDataStrB.insert(0, "\nWarrenAlgorithm");
        randAccessFile.seek(randAccessFile.length());
        randAccessFile.write(fullDataStrB.
                toString().getBytes());
        randAccessFile.close();
    }

    /**
     * Returns the runtime in nanoseconds of the
     * Warren Algorithm in memory.
     * @return the runtime in nanoseconds of the
     * Warren Algorithm in memory.
     * @throws IOException
     * @throws Exception
     */
    public long getTimeToRunInMemory()
            throws IOException, Exception {

        FullyBufferedAdjacencyGraphWR buffGraphWR = new
                FullyBufferedAdjacencyGraphWR(gProp, true);
        int numVert = gProp.getNumVertices();
        long totalTime;
        Row rowI;
        Row rowJ;
        totalTime = System.nanoTime();
        for (int i = 1; i < numVert; i++) {
            rowI = buffGraphWR.readRow(i);
            for (int j = 0; j < i; j++) {
                if (rowI.readBit(j)) {
                    rowJ = buffGraphWR.readRow(j);
                    for (int k = 0; k < numVert; k++) {
                        rowI.writeBit(k,
                                (rowI.readBit(k) |
                                rowJ.readBit(k)));
                    }
                }
            }
        }

        for (int i = 0; i < numVert - 1; i++) {
            rowI = buffGraphWR.readRow(i);
            for (int j = i + 1; j < numVert; j++) {
                if (rowI.readBit(j)) {
                    rowJ = buffGraphWR.readRow(j);
                    for (int k = 0; k < numVert; k++) {
                        rowI.writeBit(k,
                                (rowI.readBit(k) |
                                rowJ.readBit(k)));
                    }
                }
            }
        }
        totalTime = System.nanoTime() - totalTime;
        fullDataStrB.append("\nMemory\nBits Read: "
                + formatter.format(
                        buffGraphWR.getBitReadCount())
                + "\nBits write: "
                + formatter.format(buffGraphWR.
                        getBitWriteCount()));
        buffGraphWR.close();
        return totalTime;
    }
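Warren's two triangular sweeps, first the strictly lower triangle in row order and then the strictly upper triangle, are easier to see on a plain boolean matrix. The sketch below mirrors the row-OR structure of the in-memory method above; the class and helper names are illustrative, not part of the thesis code:

```java
// Illustrative sketch of Warren's two-pass transitive closure on a
// boolean adjacency matrix (names are not from the thesis code).
public class WarrenSketch {

    public static boolean[][] close(boolean[][] a) {
        int n = a.length;
        // Pass 1: strictly lower triangle, rows in increasing order.
        for (int i = 1; i < n; i++) {
            for (int j = 0; j < i; j++) {
                if (a[i][j]) {
                    orRow(a, i, j, n);  // row i |= row j
                }
            }
        }
        // Pass 2: strictly upper triangle.
        for (int i = 0; i < n - 1; i++) {
            for (int j = i + 1; j < n; j++) {
                if (a[i][j]) {
                    orRow(a, i, j, n);
                }
            }
        }
        return a;
    }

    // ORs row j into row i, the whole-row update Warren's method uses
    // in place of Warshall's per-bit test.
    private static void orRow(boolean[][] a, int i, int j, int n) {
        for (int k = 0; k < n; k++) {
            a[i][k] = a[i][k] | a[j][k];
        }
    }

    public static void main(String[] args) {
        // Chain 2 -> 1 -> 0; the closure must add the edge 2 -> 0.
        boolean[][] a = {
            {false, false, false},
            {true,  false, false},
            {false, true,  false}
        };
        close(a);
        System.out.println(a[2][0]); // prints "true"
    }
}
```

Working a full row at a time is what lets the thesis implementation above fetch each `Row` once per outer iteration instead of re-reading individual bits.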

    /**
     * Returns the runtime in nanoseconds of the
     * Warren Algorithm from Disk.
     * @return the runtime in nanoseconds of the
     * Warren Algorithm from Disk
     * @throws Exception
     */
    public long getTimeToRunInDisk() throws Exception {
        AdjacencyGraphWR graphWR =
                new AdjacencyGraphWR(gProp, true);
        int numVert = gProp.getNumVertices();
        long totalTime;
        Row rowI;
        Row rowJ;