
Citation 
 Permanent Link:
 http://digital.auraria.edu/AA00001656/00001
Material Information
 Title:
 Using fractional coloring to solve sparse iterative systems in parallel
 Creator:
 Marchant, Melanie A
 Place of Publication:
 Denver, CO
 Publisher:
 University of Colorado Denver
 Publication Date:
 1991
 Language:
 English
 Physical Description:
 vii, 56 leaves : illustrations ; 29 cm
Subjects
 Subjects / Keywords:
Iterative methods (Mathematics) (lcsh); Parallel computers (lcsh); Graph theory (lcsh); Linear programming (lcsh); Graph theory (fast); Iterative methods (Mathematics) (fast); Linear programming (fast); Parallel computers (fast)
 Genre:
bibliography (marcgt); theses (marcgt); nonfiction (marcgt)
Notes
 Bibliography:
 Includes bibliographical references.
 Thesis:
 Submitted in partial fulfillment of the requirements for the degree, Master of Science, Department of Mathematical and Statistical Sciences
 Statement of Responsibility:
 by Melanie A. Marchant.
Record Information
 Source Institution:
 University of Colorado Denver
 Holding Location:
 Auraria Library
 Rights Management:
 All applicable rights reserved by the source institution and holding location.
 Resource Identifier:
 25334848 ( OCLC )
ocm25334848
 Classification:
 LD1190.L622 1991m .M37 ( lcc )

Full Text
USING FRACTIONAL COLORING
TO SOLVE
SPARSE ITERATIVE SYSTEMS IN PARALLEL
by
Melanie A. Marchant
B.S., Dickinson College, 1983
A thesis submitted to the
Faculty of the Graduate School of the
University of Colorado in partial fulfillment
of the requirements for the degree of
Master of Science
Department of Mathematics
1991
This thesis for the Master of Science
degree by
Melanie A. Marchant
has been approved for the
Department of Mathematics
David C. Fisher
Date
Marchant, Melanie A. (M.S., Applied Mathematics)
Using Fractional Coloring to Solve Sparse Iterative Systems in Parallel
Thesis directed by Assistant Professor David C. Fisher
Coloring algorithms have been widely used to find parallel implementations of iterative methods, such as Successive Over-Relaxation. We study how fractional coloring can be used to solve sparse iterative systems in parallel, as well as speed the convergence of the iterative method. We also study the fractional coloring number of a graph, χ_f(G). Using properties of linear programming, we prove that for any graph G, ω(G) ≤ ω_f(G) = χ_f(G) ≤ χ(G), where ω(G) is the clique number of G, ω_f(G) is the fractional clique number of G, and χ(G) is the chromatic number of G. We also prove that χ_f(G) is on the order of χ(G) for random graphs, and that the fractional coloring can be computed to a desired accuracy. Finally, experimental evidence is given that χ_f(G) can be computed more quickly than χ(G).
The form and content of this abstract are approved. I recommend its publication.
Signed
David C. Fisher
To my husband Eric
and
my son Aaron
for
their unconditional love
CONTENTS
Figures
Acknowledgements
1 USING FRACTIONAL COLORING TO SOLVE SPARSE ITERATIVE SYSTEMS IN PARALLEL
1.1 Parallelism of Symmetric Systems
1.2 Comparing Coloring and Clique Numbers
1.2.1 A Sequence of Inequalities
1.2.2 Examples
1.2.3 Numerical Bounds
1.3 Calculating the Fractional Coloring Number
1.3.1 Column Generation
1.3.2 Fractional Coloring Algorithm
1.3.3 Maximal Independent Set Algorithm
1.3.4 Fractional Coloring Program
1.3.4.1 Cycling
1.3.4.2 Refinement for Rounding Error
1.3.4.3 Adjustment for Large Graphs
1.3.4.4 Runtime Results
1.4 Efficient Parallelism
1.4.1 Updating Schemes
1.4.2 Iterative Methods
1.5 Parallelism of Non-Symmetric Systems
1.6 Summary
Appendix
References
FIGURES
1 A 5-cycle
2 A 6-cycle
3 A 7-cycle
4 The Mycielskian of a triangle
5 Grötzsch's graph
6 Limits for the fractional coloring number of almost all graphs
7 Maximal independent set algorithm graph (i)
8 Maximal independent set algorithm graph (ii)
9 Maximal independent set algorithm graph (iii)
10 Fractional solution vs. running time
11 Dependency graph
ACKNOWLEDGEMENTS
I would like to thank David C. Fisher and Jennifer Ryan for giving me this opportunity and sharing their expertise. Above all, I would like to thank Dr. Fisher for his support, guidance, and immeasurable patience.
1 USING FRACTIONAL COLORING TO SOLVE SPARSE ITERATIVE SYSTEMS IN PARALLEL
Iterative algorithms are those of the form x(t + 1) = f(x(t)), t = 0, 1, ..., where f : R^n -> R^n. They are used to solve numerical problems such as systems of equations and optimization. Each algorithm commences with an initial approximation x(0), and the iterative scheme generates a sequence of vectors, {x(0), x(1), x(2), ...}, which may converge to a fixed point of f. Naturally, it is desirable to seek an iterative scheme attaining the fastest possible convergence of this sequence.
Following the notation used in Bertsekas and Tsitsiklis [1, section 1.2.4], let x_i(t) denote the i-th component of x(t) and let f_i denote the i-th component of the function f. When we write x_i(t + 1) = f_i(x_1(t), ..., x_n(t)), we are indicating that updating the i-th component of x may require the values of x_1(t), ..., x_n(t). An iteration is defined as a sequence of component updates such that each component is updated at least once. If components x_1, ..., x_n are updated exactly once in an iteration, then we say there are n steps per iteration. In an iteration, it may also be that several components will not require the values of all the components. Furthermore, components that do not require one another's values could be updated simultaneously using parallel computers. In this case, it is possible to update components simultaneously in m parallel steps, where m < n.
\
An example 6f parallelism is a parallel implementation of the iterative method
i
Successive OverRelaxation, i.e., SOR, to solve the system Ax = b, where A is
sparse and symmetric. Coloring algorithms have been widely used to find parallel
! j
implementations' of the SOR method (Bertsekas and Tsitsiklis [1]). Initially,

i i
the system is interpreted as a graph, G, where the variables of the system are
represented by the nodes in the graph. Thereafter, a coloring of G is found and
i ;
the nodes of the'same color are updated in parallel using SOR.
I
A coloring of G can be thought of as a collection of independent sets such
that each node is in at least one set. The coloring number of G, x(G), is the
l
fewest number of independent sets that color G. A fractional coloring of G is a
collection of jweighted independent sets such that each node of G is in sets whose
weights sum to at least one. The fractional coloring number of G, %/(G), is the
minimal sum of'the weights in a fractional coloring of G. We will see that the
weighted independent sets suggest an updating scheme for parallel computation
on the nodes of G, and that this scheme could speed the rate of convergence of the
iterative method by a factor of %(G)/x/(G). Furthermore, a minimal fractional
coloring of G can usually be found much more quickly than a minimal coloring
of G. !
I
2
1.1 Parallelism of Symmetric Systems
It is possible to accelerate the convergence of iterative methods by using parallel computers to update different components simultaneously. Graph colorings can be used to specify which components of x may be updated in parallel and to suggest a scheduling of the x_i's for updating. To do so, first construct a graph G from the given system Ax = b, where A is symmetric and positive definite. Let V = {1, ..., n} be the nodes of G, and E = {(i, j) : x_i depends on x_j}. In other words, for every column in A, there is a node in G, and an edge is placed between nodes i and j in G if a_ij ≠ 0 in A. For example, if a given system Ax = b is translated as the iteration
x_1(t + 1) = f_1(x_1(t), x_2(t), x_5(t))
x_2(t + 1) = f_2(x_1(t), x_2(t), x_3(t))
x_3(t + 1) = f_3(x_2(t), x_3(t), x_4(t))
x_4(t + 1) = f_4(x_3(t), x_4(t), x_5(t))
x_5(t + 1) = f_5(x_1(t), x_4(t), x_5(t)),
then the adjacency matrix for G is
         1 1 0 0 1
         1 1 1 0 0
adj(G) = 0 1 1 1 0
         0 0 1 1 1
         1 0 0 1 1
and G is a 5-cycle, or C5 (see figure 1).
Figure 1: A 5-cycle
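The construction above is easy to automate. The sketch below is not from the thesis: the numerical entries of A are hypothetical, since only the positions of its nonzeros determine G (nodes are 0-based here).

```python
# Hypothetical sketch: build the dependency graph G of Ax = b from the
# nonzero pattern of a symmetric matrix A (0-based node labels).

def dependency_graph(A):
    """Edge set {(i, j) : i < j and a_ij != 0} of the graph G."""
    n = len(A)
    return {(i, j) for i in range(n) for j in range(i + 1, n) if A[i][j] != 0}

# A matrix with the 5-cycle nonzero pattern shown above (entries made up).
A = [[2, 1, 0, 0, 1],
     [1, 2, 1, 0, 0],
     [0, 1, 2, 1, 0],
     [0, 0, 1, 2, 1],
     [1, 0, 0, 1, 2]]

print(sorted(dependency_graph(A)))   # the edges of C5: [(0,1), (0,4), (1,2), (2,3), (3,4)]
```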
Next, partition the nodes of G into independent sets, or color classes. An independent set is a subset of nodes such that there are no edges between any two nodes. Thus, if nodes i and j are in an independent set, then x_j is not needed to compute x_i and x_i is not needed to compute x_j, so x_i and x_j may be updated in parallel. Furthermore, the number of parallel computers, or processors, required per iteration is bounded by the cardinality of the largest independent set, or the independence number, α(G). If we partition the nodes of G into as few independent sets as possible, we could maximize the number of nodes per independent set and minimize the number of parallel steps per iteration. We would also optimize the use of multiple processors, leaving fewer processors idle per iteration. Thus to maximize parallelism, we need to find a minimum coloring of G, or its chromatic number, χ(G). For example, the coloring number of C5 is 3. In other words, at least three colors are required to color the nodes of G. We could color the independent sets {1,3} in red, {2,4} in blue, and {5} in white. Two processors are required to perform an iteration as follows: update nodes 1 and 3 in parallel, update nodes 2 and 4 in parallel, update node 5. Thus in χ(G) = 3 parallel steps, all nodes are updated once; in six parallel steps all nodes are updated twice; etc.
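The schedule just described is straightforward to code. The sketch below is a hypothetical illustration, not the thesis program: it assumes a Gauss-Seidel update (SOR with relaxation factor 1) on a made-up symmetric positive definite system whose dependency graph is C5, and sweeps through the color classes {1,3}, {2,4}, {5} (0-based below). Within a class the updates read only values from other classes, so each inner loop could run on separate processors.

```python
# Hypothetical sketch of the color-scheduled update (Gauss-Seidel, i.e. SOR
# with relaxation factor 1). classes = [[0,2],[1,3],[4]] is the coloring
# {1,3}, {2,4}, {5} from the text, in 0-based labels.

def color_sweep(A, b, x, classes):
    """One iteration: each color class is one parallel step."""
    n = len(x)
    for cls in classes:
        # nodes in cls are pairwise non-adjacent, so these updates only read
        # values belonging to other classes and could execute simultaneously
        new = {i: (b[i] - sum(A[i][j] * x[j] for j in range(n) if j != i)) / A[i][i]
               for i in cls}
        for i, v in new.items():
            x[i] = v
    return x

# made-up SPD system with the C5 pattern; the exact solution is (1,1,1,1,1)
A = [[3, 1, 0, 0, 1],
     [1, 3, 1, 0, 0],
     [0, 1, 3, 1, 0],
     [0, 0, 1, 3, 1],
     [1, 0, 0, 1, 3]]
b = [5] * 5
x = [0.0] * 5
for _ in range(100):
    color_sweep(A, b, x, [[0, 2], [1, 3], [4]])
print(x)   # converges to [1, 1, 1, 1, 1]
```

Strict diagonal dominance of this toy matrix guarantees the sweep converges; the coloring only changes the order of the Gauss-Seidel updates, not the fixed point.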
Finding the minimum coloring of G is an NP-hard problem, so there is no known polynomial algorithm to find χ(G). One can reformulate the coloring problem as an integer programming problem, though, to determine χ(G) and specify which nodes may be updated simultaneously. Let K be the node-independent set incidence matrix, where the rows of K represent the nodes of G, and the columns represent all independent sets of G. If node i is in independent set j, then k_ij = 1. Otherwise, k_ij = 0. The following matrix is the node-independent set incidence matrix K of G = C5, with the five singletons {1}, ..., {5} first and then the pairs {1,3}, {2,4}, {3,5}, {1,4}, {2,5}.
    1 0 0 0 0 1 0 0 1 0
    0 1 0 0 0 0 1 0 0 1
K = 0 0 1 0 0 1 0 1 0 0
    0 0 0 1 0 0 1 0 1 0
    0 0 0 0 1 0 0 1 0 1
To find χ(G), we could solve the following integer programming problem (IP),
χ(G) = min 1^T x
subject to Kx ≥ 1
x ≥ 0, x is integer.
It may be difficult to find a minimal coloring number this way for large graphs, since the number of columns in the (IP) could be exponential in the size of the graph. To avoid this problem, we solve a linear programming relaxation of the (IP), and use a linear programming technique called column generation. The solution of the linear programming problem (LP) is called the fractional coloring number, χ_f(G) (Larsen, Propp, and Ullman [6]).
χ_f(G) = min 1^T x
subject to Kx ≥ 1
x ≥ 0.
I
In the (LP), it is beneficial to think of ; as a fractional weight of an indepen
dent set. Thus, Xj(G) = as,, the sum of the weights on the independent sets.
j
The first set! of constraints, ifx > 1, are satisfied if each node of G is in inde
 I
pendent setsj whose weights sum to at least one. Also note that x G TZ+ allows
independent i sets to have fractional weights. Thus, Xf{G) may be noninteger.
In fact, since Xf[G) is the solution to an (LP), we know that it will always be
 j
a rational number. In Larsen, Propp, and Ullman [6], they also show that every
rational number greater than two is Xf(G) for some graph G.
The fractional coloring number of a C5 is 2.5, corresponding to the inde
pendent setS, {1,3}, {2,4}, {3,5}, {1,4}, and {2,5}, each with weight 1/2. Two
processors are required to perform an iteration as follows: update nodes 1 and 3
j i
in parallel; update nodes 2 and 4 in parallel; update nodes 3 and 5 in parallel;
update nodes 1 and 4 in parallel; updates node 2 and 5 in parallel. Note that each
i
node is updated! twice in five parallel steps, as opposed to six parallel steps as
l
determined by x(G). This would suggest a theoretical speedup of 3.0/2.5 = 1.2,
or x{G)/Xf(G)1
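These numbers can be checked mechanically. The following sketch (assuming scipy is available; it is not the thesis's FORTRAN program) enumerates all independent sets of C5 and solves the fractional coloring (LP) directly.

```python
# Sketch: solve the fractional coloring (LP) for C5 over all of its
# independent sets (assumes scipy; 0-based node labels).
from itertools import combinations
from scipy.optimize import linprog

edges = {(0, 1), (1, 2), (2, 3), (3, 4), (0, 4)}   # C5

def independent(S):
    return not any(tuple(sorted(p)) in edges for p in combinations(S, 2))

# alpha(C5) = 2, so the independent sets are the 5 singletons and 5 pairs
sets = [S for r in (1, 2) for S in combinations(range(5), r) if independent(S)]

# min 1^T x  s.t.  Kx >= 1, x >= 0   (linprog wants A_ub x <= b_ub, so negate)
K_neg = [[-1 if v in S else 0 for S in sets] for v in range(5)]
res = linprog([1] * len(sets), A_ub=K_neg, b_ub=[-1] * 5)

print(res.fun)       # chi_f(C5), i.e. 2.5 up to solver tolerance
print(3 / res.fun)   # the theoretical speedup chi(G)/chi_f(G), i.e. 1.2
```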
1.2 Comparing Coloring and Clique Numbers
Before examining the method of column generation and its role in determining which nodes to update in parallel, it is beneficial to understand how a graph's clique number, ω(G), and fractional clique number, ω_f(G), relate to its coloring numbers. The clique number of a graph, ω(G), is the number of nodes in the largest complete subgraph of G. Integer programming can be used to solve the clique problem. To find ω(G), we solve the following (IP),
ω(G) = max 1^T y
subject to y^T K ≤ 1^T
y ≥ 0, y is integer.
We can use linear programming to find ω_f(G). To find ω_f(G) we solve the following (LP),
ω_f(G) = max 1^T y
subject to y^T K ≤ 1^T
y ≥ 0.
It is beneficial to think of y_i as the fractional weight of a node. As with the coloring number problems, the rows of K are indexed by the nodes in the graph, and its columns represent the independent sets of the graph. The first set of constraints, y^T K ≤ 1^T, is satisfied if the sum of the weights on the nodes in any independent set is one or less. Also note that y ∈ R+^n allows nodes of G to have non-integer weights. Since ω_f(G) = Σ y_i, the fractional clique number may also be non-integer. In fact, it is always rational since it is the solution of an (LP).
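As a companion to the primal computation, one can solve this dual (LP) for C5 and see strong duality in action (again a sketch assuming scipy, not part of the thesis): the optimum is ω_f(C5) = 2.5 = χ_f(C5), attained at y = (1/2, ..., 1/2).

```python
# Sketch: the fractional clique (LP) for C5 (assumes scipy). linprog
# minimizes, so maximize 1^T y by minimizing -1^T y.
from itertools import combinations
from scipy.optimize import linprog

edges = {(0, 1), (1, 2), (2, 3), (3, 4), (0, 4)}
sets = [S for r in (1, 2) for S in combinations(range(5), r)
        if not any(tuple(sorted(p)) in edges for p in combinations(S, 2))]

# one constraint row per independent set: the y's in the set sum to at most 1
K_T = [[1 if v in S else 0 for v in range(5)] for S in sets]
res = linprog([-1] * 5, A_ub=K_T, b_ub=[1] * len(sets), bounds=[(0, None)] * 5)

print(-res.fun)   # omega_f(C5) = 2.5 = chi_f(C5), by strong duality
```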
The following inequality first appeared in Hell [5], and subsequently in Larsen, Propp, and Ullman [6]: ω(G) ≤ ω_f(G) = χ_f(G) ≤ χ(G). We will prove the inequality in a new context and establish numerical bounds on the clique and coloring numbers.
1.2.1 A Sequence of Inequalities
First, we will show that ω(G) ≤ ω_f(G) = χ_f(G) ≤ χ(G). It is well known that ω(G) ≤ χ(G) for any graph. We will need to show that ω(G) ≤ ω_f(G), ω_f(G) = χ_f(G), and χ_f(G) ≤ χ(G).
Lemma 1 ω(G) ≤ ω_f(G).
Proof: The optimal solution to the (IP) solving the clique problem is also a feasible solution to the (LP) solving the fractional clique problem. Since the maximal solution to the (LP) is greater than or equal to any of its feasible solutions, ω(G) ≤ ω_f(G). □
Lemma 2 ω_f(G) = χ_f(G).
Proof: To show that ω_f(G) = χ_f(G), we need to observe the (LP)'s which calculate the fractional coloring and clique numbers. The (LP)'s are dual problems, and we will refer to the (LP) which computes χ_f(G) as the primal and the (LP) which computes ω_f(G) as the dual. The Strong Duality Theorem of linear programming guarantees that the primal and the dual will have equal optimal objective function values if the problems have optimal solutions (see, e.g., Winston [10]). Thus, ω_f(G) = χ_f(G). □
Lemma 3 χ_f(G) ≤ χ(G).
Proof: The optimal solution to the (IP) solving the coloring problem is also a feasible solution to the (LP) solving the fractional coloring problem. Since the minimal solution to the (LP) is less than or equal to any of its feasible solutions, χ_f(G) ≤ χ(G). □
Theorem 1 [4] ω(G) ≤ ω_f(G) = χ_f(G) ≤ χ(G).
Proof: By the results in the previous lemmas and the fact that ω(G) ≤ χ(G), we have proved the inequality. □
1.2.2 Examples
As an illustration of Theorem 1, we look at n-cycles. If n is even, such as C6 in figure 2, then
ω(G) = 2 = ω_f(G) = χ_f(G) = χ(G).
Figure 2: A 6-cycle
If n is odd, such as C7 in figure 3, then
ω(G) = 2 < ω_f(G) = 2n/(n − 1) = χ_f(G) < χ(G) = 3;
for C7, ω(G) = 2 < ω_f(G) = 7/3 = χ_f(G) < χ(G) = 3.
Figure 3: A 7-cycle
As another illustration, we look at Mycielskian graphs. Mycielskian graphs are a sequence of graphs, G_n, for which G_{n+1} can be constructed from G_n, and some interesting properties apply. Following the notation in Larsen, Propp, and Ullman [6], the Mycielskian μ(G) of a graph G = (V, E) is defined as follows. Let V = {v_1, ..., v_m}. Define μ(G) by V(μ(G)) = {x_1, ..., x_m, y_1, ..., y_m, z} with
E(μ(G)) = {(x_i, x_j) : (v_i, v_j) ∈ E(G)}
        ∪ {(x_i, y_j) : (v_i, v_j) ∈ E(G)}
        ∪ {(y_i, z) : i = 1, ..., m}.
Larsen, Propp, and Ullman also proved the following theorem.
Theorem 2 Let μ(G) be the Mycielskian of a graph G with at least one edge. Then
1. ω(μ(G)) = ω(G);
2. χ(μ(G)) = χ(G) + 1;
3. χ_f(μ(G)) = χ_f(G) + 1/χ_f(G).
An illustration of Theorems 1 and 2 is the Mycielskian of a K3 (i.e., a triangle) in figure 4, for which
ω(μ(G)) = 3 < χ_f(μ(G)) = 3 + 1/3 = 10/3 < χ(μ(G)) = 3 + 1 = 4.
Figure 4: The Mycielskian of a triangle
As another illustration, let G_2 = K_2, and define G_{n+1} = μ(G_n) for n ≥ 2. Then G_3 is a C5 and G_4 is Grötzsch's graph in figure 5. Grötzsch's graph is the smallest graph where the clique and chromatic numbers differ by more than one. For it,
ω(μ(G)) = 2 < χ_f(μ(G)) = 5/2 + 2/5 = 29/10 < χ(μ(G)) = 3 + 1 = 4.
Figure 5: Grötzsch's graph
For this class of Mycielskian graphs, Larsen, Propp, and Ullman also show that χ(G_n) − χ_f(G_n) → ∞ and χ_f(G_n) − ω(G_n) → ∞. In the next section, we will see that the clique and coloring numbers of random graphs behave quite differently.
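Part 3 of Theorem 2 can be checked numerically. The sketch below (assuming scipy; hypothetical code, not the thesis program) builds μ(G) exactly as defined above and computes χ_f by brute-force LP over all independent sets, confirming χ_f(C5) = 5/2 and χ_f(G_4) = χ_f(μ(C5)) = 5/2 + 2/5 = 29/10 for Grötzsch's graph.

```python
# Sketch (assumes scipy): verify chi_f(mu(C5)) = chi_f(C5) + 1/chi_f(C5).
from itertools import combinations
from scipy.optimize import linprog

def mycielskian(m, edges):
    """mu(G): x-nodes 0..m-1, y-nodes m..2m-1, z = 2m (as defined above)."""
    E = {tuple(sorted(e)) for e in edges}
    for (i, j) in edges:
        E.add(tuple(sorted((i, m + j))))   # x_i ~ y_j whenever v_i ~ v_j
        E.add(tuple(sorted((j, m + i))))
    for i in range(m):
        E.add((m + i, 2 * m))              # z is adjacent to every y_i
    return 2 * m + 1, E

def chi_f(n, E):
    """Brute-force fractional coloring number: LP over all independent sets."""
    indep = [S for r in range(1, n + 1) for S in combinations(range(n), r)
             if not any(tuple(sorted(p)) in E for p in combinations(S, 2))]
    A_ub = [[-1 if v in S else 0 for S in indep] for v in range(n)]
    return linprog([1] * len(indep), A_ub=A_ub, b_ub=[-1] * n).fun

c5 = {tuple(sorted(e)) for e in [(0, 1), (1, 2), (2, 3), (3, 4), (4, 0)]}
n, E = mycielskian(5, c5)                  # Grotzsch's graph on 11 nodes
print(chi_f(5, c5), chi_f(n, E))           # 2.5 and 2.9, up to LP tolerance
```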
1.2.3 Numerical Bounds
We have shown that ω(G) ≤ χ_f(G) ≤ χ(G) for any graph, but how close to either ω(G) or χ(G) is χ_f(G) for a random graph? We know that our iterative scheme will converge faster with use of fractional coloring. It is reasonable to assume that if χ_f(G) is almost always closer in value to ω(G) than χ(G) for random graphs, then the use of fractional coloring is indispensable in solving large, sparse systems in parallel. Unfortunately, we will see that χ_f(G) is on the order of χ(G) rather than ω(G) for random graphs. Keep in mind, though, that χ_f(G) ≤ χ(G) and the use of fractional coloring is an improvement on the use of integer coloring in the parallel implementation of an iterative method.
First, we prove a theorem which will become useful in the proof of Theorem 6.
Theorem 3 [4] Let G be a graph with n nodes, and let α(G) be the maximal cardinality of an independent set of G. Then χ_f(G) ≥ n/α(G).
Proof: From Lemma 2, we know that χ_f(G) = ω_f(G), the optimal dual solution. A feasible solution to the (LP) solving the fractional clique problem is a weight assignment of 1/α(G) on each node, since no independent set then has weights summing to more than one. If there are n nodes, then ω_f(G) ≥ n/α(G). Thus, χ_f(G) ≥ n/α(G). □
Next, we state two theorems on which we base the proof of Theorem 6. Grimmett and McDiarmid (quoted in Palmer [7]) found bounds on the chromatic number as follows in Theorem 4, and Matula (quoted in Palmer [7]) found the clique number of almost every graph as follows in Theorem 5.
Theorem 4 Let 0 < ε < 1 be given, and set b = 1/(1 − p). With p, the probability of an edge, fixed, the chromatic number of almost all graphs G satisfies
(1 − ε)n / (2 log_b(n)) < χ(G) < (1 + ε)n / log_b(n).
Theorem 5 Let G be a graph with n nodes where the probability of an edge is fixed at p = 1/2, so that b = 1/(1 − p) = 2. Let the function d = d(n) be defined by
d(n) = 2 log_b(n) − 2 log_b log_b(n) + 1 + 2 log_b(e/2).
Then for any ε > 0,
Prob[d(n) − ε < ω(G) < d(n) + ε] → 1 as n → ∞.
We are now ready to state and prove a theorem revealing bounds on the fractional coloring number.
Theorem 6 [4] Let n be the number of nodes in a graph G, and b = 1/(1 − p), where p is the fixed probability of an edge. Then, for all ε > 0, we have with probability approaching 1 that
(1 − ε)n / (2 log_b(n)) < χ_f(G) < (1 + ε)n / log_b(n).
Proof: Let 0 < δ < ε. Then we know from Theorem 4,
Prob( χ(G) < (1 + δ)n / log_b(n) ) → 1.
Let b' = 1/p, the base obtained when Theorem 5 is applied to the complement of G, written G'. Then by Theorem 5,
Prob( ω(G') − d(n) < δ ) → 1,
where d(n) = 2 log_b'(n) − 2 log_b' log_b'(n) + 1 + 2 log_b'(e/2). Thus,
Prob( ω(G') < d(n) + δ ) → 1.
Since ω(G') = α(G),
Prob( α(G) < d(n) + δ ) → 1,
and
Prob( n/α(G) > n/(d(n) + δ) ) → 1.
Since χ_f(G) ≥ n/α(G),
Prob( χ_f(G) > n/(d(n) + δ) ) → 1.
Since d(n) < 2 log_b'(n) for large n,
Prob( χ_f(G) > n/(2 log_b'(n) + δ) ) → 1.
Letting δ shrink, after algebraic simplification we achieve
Prob( χ_f(G) > (1 − ε)n/(2 log_b'(n)) ) → 1.
We proceed with the notation used in Theorem 4. Recognizing that the base b' in Theorem 5 plays the same role as b in Theorem 4 (for p = 1/2 they coincide), we may write
Prob( χ_f(G) > (1 − ε)n/(2 log_b(n)) ) → 1.
Since χ_f(G) ≤ χ(G),
Prob( (1 − ε)n/(2 log_b(n)) < χ_f(G) < (1 + ε)n/log_b(n) ) → 1. □
Figure 6 gives an illustration of the numerical bounds as described above.
Figure 6: Limits for the fractional coloring number of almost all graphs
1.3 Calculating the Fractional Coloring Number
To directly solve the (LP) for χ_f(G), we would first have to generate all independent sets of G. Since the number of independent sets in G is exponential in the number of nodes in G, finding the independent sets and the solution of the resulting (LP) would be prohibitively expensive on a graph of even a modest number of nodes. Instead, we examine the method and benefits of column generation.
1.3.1 Column Generation
The method of column generation allows us to start with a reasonable number of independent sets and to add subsequent sets as needed to solve a (LP). Employing this technique to solve the (LP),
min 1^T x
subject to Kx ≥ 1
x ≥ 0,
we start with enough columns in the K matrix to provide an initial basic feasible solution, such that each node in the graph is represented in at least one independent set. We then run the Simplex algorithm. The algorithm returns two sets of weights. The primal weights in x are the weights on the independent sets represented by the columns of K. The dual weights in y are the weights on the nodes in the graph represented by the rows of K. If one can determine an independent set whose dual weights sum to more than one, then the dual is not feasible. Hence, a new column, the incidence vector of the generated independent set, is added to K before running the Simplex algorithm again. This process repeats until one cannot determine any independent set whose dual weights sum to more than one. At that point, the dual is feasible. Furthermore, since Σ x_i = Σ y_i, the dual and primal are optimal by the Strong Duality Theorem.
1.3.2 Fractional Coloring Algorithm
We have an algorithm to calculate the fractional coloring number of a graph using the linear programming method of column generation. The details of the algorithm are described directly below in steps 1-5.
1. Create the columns of the K matrix by adding the incidence vectors of enough independent sets of G such that each node of G is in at least one independent set. Nodes already represented in independent sets may be added to a subsequently generated independent set to fill or maximize the set's size.
2. Run the Simplex algorithm, which will return primal weights in the vector x, and dual weights in the vector y.
3. Check to see if the dual is feasible using the dual weights from Simplex.
a.) If the dual is not feasible, an independent set may be formed whose dual weights sum to greater than one. Proceed to step 4.
b.) If the dual is feasible, no independent set can be formed whose dual weights sum to greater than one. Proceed to step 5.
4. Add the incidence vector of the generated independent set as a column in the K matrix. Go to step 2.
5. The dual is feasible. In fact, since Σ x_i = Σ y_i, the dual and primal are optimal by the Strong Duality Theorem. Thus, χ_f(G) = Σ x_i.
As an illustration of the fractional coloring algorithm, we will follow the details in calculating χ_f(G) for G = C5. Let y and x denote the dual and primal weight vectors, respectively. We begin with enough independent sets to cover the nodes of G: {1,3}, {2,4}, and {2,5}.
    1 0 0
    0 1 1
K = 1 0 0
    0 1 0
    0 0 1
Simplex is run, giving the following primal and dual weights.
x_{1,3} = x_{2,4} = x_{2,5} = 1
y = (1, 0, 0, 1, 1)
Is the dual feasible? No. We can add {1,4} as a column of K.
    1 0 0 1
    0 1 1 0
K = 1 0 0 0
    0 1 0 1
    0 0 1 0
Simplex is run, giving the following primal and dual weights.
x_{1,3} = x_{2,4} = x_{2,5} = 1; x_{1,4} = 0
y = (0, 0, 1, 1, 1)
Is the dual feasible? No. We can add {3,5} as a column of K.
    1 0 0 1 0
    0 1 1 0 0
K = 1 0 0 0 1
    0 1 0 1 0
    0 0 1 0 1
Simplex is run, giving the following primal and dual weights.
x_{1,3} = x_{2,4} = x_{2,5} = x_{1,4} = x_{3,5} = 0.5
y = (0.5, 0.5, 0.5, 0.5, 0.5)
Is the dual feasible? Yes; there are no independent sets whose dual weights sum to more than one. Since the dual is feasible, and Σ x_i = Σ y_i, the dual and primal are optimal by the Strong Duality Theorem. Thus, χ_f(G) = 2.5.
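The loop above can be reproduced in a few lines. The sketch below (assuming scipy; a hypothetical stand-in for the thesis's FORTRAN/Simplex implementation) runs column generation on C5: it obtains the dual weights by solving the restricted dual directly, and prices new columns by brute force over all independent sets rather than with the maximal independent set heuristic of the next section.

```python
# Column-generation sketch for chi_f(C5) (assumes scipy; pricing is brute
# force here, not the thesis's maximal-independent-set heuristic).
from itertools import combinations
from scipy.optimize import linprog

edges = {(0, 1), (1, 2), (2, 3), (3, 4), (0, 4)}
all_indep = [set(S) for r in (1, 2) for S in combinations(range(5), r)
             if not any(tuple(sorted(p)) in edges for p in combinations(S, 2))]

def restricted_duals(cols):
    """Dual weights for the current columns: max 1^T y s.t. sum_{v in S} y_v <= 1."""
    res = linprog([-1] * 5,
                  A_ub=[[1 if v in S else 0 for v in range(5)] for S in cols],
                  b_ub=[1] * len(cols), bounds=[(0, None)] * 5)
    return res.x, -res.fun

cols = [{0, 2}, {1, 3}, {4}]   # initial columns: a cover of the nodes
while True:
    y, objective = restricted_duals(cols)
    best = max(all_indep, key=lambda S: sum(y[v] for v in S))
    if sum(y[v] for v in best) <= 1 + 1e-9:
        break                  # dual feasible: objective = chi_f(G)
    cols.append(best)          # add the violated incidence vector to K

print(objective)               # 2.5, up to solver tolerance
```

Since a violated column can never already be in K, each pass strictly grows the column set, so the loop terminates after finitely many pricing rounds.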
1.3.3 Maximal Independent Set Algorithm
When generating columns in K, one could simply select the first independent set whose dual weights sum to greater than one. To find χ_f(G) most efficiently, though, a maximal independent set should be added to K when the dual is not feasible. Ideally, this independent set has a maximum dual weight sum and contains a maximum number of nodes. Although finding a maximal independent set may take more time than selecting the first eligible independent set, it is probable that less total time will be required to solve the (LP) (Chvátal [3]).
We have an algorithm which exhaustively searches for a maximal independent set whose dual weights sum to one or greater. This algorithm chooses nodes for the set according to their dual weights and degrees, as follows below in steps 1-7. Note that it is appropriate to insert the steps of this algorithm into the fractional coloring algorithm whenever the coloring algorithm generates an independent set.
1. Initially, consider those nodes whose dual weights are of significance. Hence, we are considering a subgraph of G in steps 1-6, where we determine an independent set with nodes of significant weight. We continue by maximizing the size of the independent set with nodes of insignificant weight in step 7.
2. Calculate a weight ratio for the available nodes. Let v represent a node in the graph, and y(v) represent the dual weight of that node. Let N(v) be the set of nodes adjacent to v and available for selection.
ratio(v) = y(v) / ( Σ_{a ∈ N(v)} y(a) + ε )
Note that ε is added in the denominator to avoid possible division by zero if v is an isolated node.
3. Calculate the degree of each available node.
4. Select a node for the maximal independent set with the greatest weight ratio and least degree.
5. Disregard the selected node's adjacent nodes in any subsequent steps.
6. Repeat steps 2-5 until no nodes are available for selection from the subgraph of significant nodes.
7. Fill the independent set with whatever nodes of insignificant weight can be added accordingly.
As an illustration of the maximal independent set algorithm, we are given the graph and node weights in figure 7. Let r and d denote the ratio and degree vectors, respectively. Thus, we begin to generate a maximal independent set, I, with node 1, such that I = {1}.
r(1,2,3,4,5,6) = (0.5, 0.4, 0.5, 0.5, 0.5, 0.5)
d(1,2,3,4,5,6) = (1, 3, 2, 2, 2, 2)
Figure 7: Maximal independent set algorithm graph (i)
Since node 1 was chosen for the independent set, its adjacent node, 2, is disregarded and weight ratios and degrees are recalculated on the following subgraph in figure 8.
r(3,4,5,6) = (1, 0.5, 0.5, 1)
d(3,4,5,6) = (1, 2, 2, 1)
Figure 8: Maximal independent set algorithm graph (ii)
We add node 3 to the independent set, giving I = {1,3}. Node 4 is disregarded and weight ratios and degrees are recalculated on the following subgraph in figure 9.
r(5,6) = (1, 1)
d(5,6) = (1, 1)
Figure 9: Maximal independent set algorithm graph (iii)
We add node 5 to the independent set, giving I = {1,3,5}. Node 6 is disregarded. No subgraph remains and there are no additional nodes with insignificant weights to consider, so the independent set I = {1,3,5} with total weight of 5/3 may be added as a column in K. Note that maximal independent sets were selected in the illustration of the fractional coloring algorithm.
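The worked example can be replayed in code. The graph of figures 7-9 is not listed explicitly in the text, so the sketch below reconstructs one consistent with the printed ratios and degrees (a 5-cycle 2-3-4-5-6 with node 1 pendant on node 2, and dual weights y(1) = 1/3, y(v) = 2/3 otherwise); the edges and weights are therefore assumptions. All six weights are treated as significant, so step 7 is a no-op here.

```python
# Greedy maximal independent set of steps 1-7 (graph and weights are a
# reconstruction consistent with figures 7-9, not given in the text).
EPS = 1e-12   # the epsilon of step 2, guarding against isolated nodes

def greedy_independent_set(nodes, edges, y):
    adj = {v: set() for v in nodes}
    for u, v in edges:
        adj[u].add(v)
        adj[v].add(u)
    avail, chosen = set(nodes), []
    while avail:
        def key(v):
            nbrs = adj[v] & avail
            ratio = y[v] / (sum(y[a] for a in nbrs) + EPS)   # step 2
            return (-ratio, len(nbrs), v)   # step 4: max ratio, then min degree
        pick = min(avail, key=key)
        chosen.append(pick)
        avail -= {pick} | adj[pick]          # step 5: drop pick and its neighbors
    return sorted(chosen)

nodes = [1, 2, 3, 4, 5, 6]
edges = [(1, 2), (2, 3), (2, 6), (3, 4), (4, 5), (5, 6)]   # assumed graph
y = {1: 1/3, 2: 2/3, 3: 2/3, 4: 2/3, 5: 2/3, 6: 2/3}      # assumed weights
I = greedy_independent_set(nodes, edges, y)
print(I, sum(y[v] for v in I))   # [1, 3, 5] with total weight 5/3
```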
1.3.4 Fractional Coloring Program
Jennifer Ryan and I wrote a program to implement the fractional coloring algorithm and maximal independent set algorithm. We also incorporated a version of the Simplex Algorithm written by The Rand Corporation. In doing this we discovered a number of problems, as well as some interesting results.
1.3.4.1 Cycling
It is very rare that the Simplex algorithm cycles. It appears to be common, though, in linear programming when using column generation. Cycling occurred in our program execution due to degenerate pivots. The problem was corrected after coding in Bland's least index pivot rule.
1.3.4.2 Refinement for Rounding Error
Rounding errors also affected the execution of our code. Passing tolerances and making other checks did not completely remedy the situation. Finally, a change was made to the fractional coloring algorithm in the way it determined the feasibility of the dual problem. According to the dual (LP), the dual is feasible if y^T k ≤ 1 for all independent sets' incidence vectors, k. Due to rounding errors, it was possible that the dual weight sum for a particular independent set was slightly greater than one. The independent set was added to the K matrix, and Simplex re-added the dual weights and found that the sum was either one or slightly less than one plus some tolerance. Simplex did not use the independent set and returned the same dual weights from the previous execution. This resulted in looping and repeated objective function values. The problem was remedied after the feasibility of the dual was changed such that y^T k ≤ 1 + ε for all k, where ε is arbitrary. It was also discovered that the value of ε allowed one to determine an accuracy of the fractional coloring number, as suggested by the following theorem.
Theorem 7 [4] Given any graph on n nodes, find its fractional coloring number by solving an (LP) using column generation. Let y and x be its dual and primal weights, respectively, as returned by the Simplex algorithm. Let I be any independent set in the graph and N be the number of independent set incidence vectors (i.e., columns of K). For any ε > 0, if Σ_{i ∈ I} y_i ≤ 1 + ε for all I, then the corresponding feasible fractional coloring has
χ_f(G) ≤ Σ_{j=1}^{N} x_j ≤ (1 + ε) χ_f(G).
Proof: Recall that a graph's dual weights represent the weights on its nodes and a graph's primal weights represent weights on its independent sets. Let y'_i = y_i / (1 + ε). Then Σ_{i ∈ I} y'_i ≤ 1 since Σ_{i ∈ I} y_i ≤ 1 + ε. So y' is a feasible dual solution. Thus, Σ_{i=1}^{n} y'_i ≤ ω_f(G) = χ_f(G). This implies that Σ_{i=1}^{n} y_i / (1 + ε) ≤ χ_f(G), or Σ_{i=1}^{n} y_i ≤ (1 + ε) χ_f(G). Recall that χ_f(G) ≤ Σ_{j=1}^{N} x_j, a feasible solution to the primal. Also note that the Simplex algorithm always returns dual and primal weights such that Σ_{j=1}^{N} x_j = Σ_{i=1}^{n} y_i. Thus,
χ_f(G) ≤ Σ_{j=1}^{N} x_j ≤ (1 + ε) χ_f(G). □
1.3.4.3 Adjustment for Large Graphs

A final situation of interest occurred when the fractional coloring program was executed for a random graph on 100 nodes. After many executions of the Simplex code, the program took an unreasonable amount of time to find an independent set whose dual-weight sum was greater than 1 + ε. It was found that a clique on four nodes contained the only nodes out of the 100 with any significant weight. To avoid further situations like this when generating a maximal independent set, we changed our code to initially consider a subgraph of G with those nodes whose dual weights were greater than ε/(2n). From this subset of nodes with significant weights, an independent set was formed, and nodes with insignificant weights were added subsequently to maximize the size of the set. We continued to check dual feasibility by determining if any independent set could be formed with a dual-weight sum greater than 1 + ε. Thus, our code actually checks for dual feasibility by determining if an independent set can be generated with a dual-weight sum greater than 1 + ε/2. This is done so that if all n nodes of a graph have insignificant weights, then no independent set is generated, since then

Σ_{i∈I} y_i ≤ n · ε/(2n) = ε/2 < 1 + ε/2

for any independent set I. Not only did this strategy speed the convergence of the fractional coloring number for large graphs, but similar results were achieved for graphs of all sizes.
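The adjustment described above can be sketched as a two-phase greedy construction: first build an independent set from the significantly weighted nodes, then pad it with the remaining nodes until the set is maximal. The following Python sketch is an illustrative simplification of the thesis's FORTRAN routine (the adjacency-list format and helper name are assumptions):

```python
def maximal_set_with_significant_nodes(n, adj, weights, eps):
    """Greedily build a maximal independent set, considering nodes with
    dual weight above eps/(2n) first, then padding with the rest."""
    threshold = eps / (2 * n)
    significant = [i for i in range(n) if abs(weights[i]) > threshold]
    others = [i for i in range(n) if abs(weights[i]) <= threshold]
    chosen = set()
    for i in significant + others:
        if all(j not in chosen for j in adj[i]):
            chosen.add(i)
    return chosen

# A 5-cycle in which only node 0 carries significant dual weight.
adj = {0: [1, 4], 1: [0, 2], 2: [1, 3], 3: [2, 4], 4: [3, 0]}
w = [0.9, 0.0, 0.0, 0.0, 0.0]
s = maximal_set_with_significant_nodes(5, adj, w, eps=0.1)
print(sorted(s))  # -> [0, 2]: node 0 is taken first, then padded to be maximal
```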
1.3.4.4 Runtime Results

The fractional coloring algorithm has been compared to an integer coloring algorithm (Syslo, Deo, and Kowalik [8]). It was found for a random graph on 40 nodes, with the probability of an edge equal to 0.5, that it took 3.04 seconds to compute χ_f(G) = 7.64846, as compared to the 993.01 seconds it took to compute χ(G) = 8 (see figure 10). As expected, the fractional coloring number was less than the chromatic number of the graph. More importantly, though, the fractional algorithm converged to χ_f(G) in less time than the integer algorithm converged to χ(G). This is of great importance if an algorithm is given a limited amount of time to find the coloring number of a graph. Note that a copy of the fractional coloring program, in FORTRAN, is provided in the Appendix of this paper.
[Figure: plot of coloring number (axis 8 to 11) against running time in seconds (3 to 128, and 129 to 993).]

Figure 10: Fractional solution vs. running time
1.4 Efficient Parallelism

There are several reasons to use fractional coloring as opposed to integer coloring in solving large, sparse systems. Probably the most important one is the theoretical speedup of the iterative scheme. In fact, we quantified this theoretical speedup of the iterative scheme as χ(G)/χ_f(G). Recall the C5 example with a possible speedup of 3.0/2.5 = 1.2. In that discussion we had neglected to consider how to achieve this speedup given the columns, or independent sets, in the final K matrix. We will consider some possible criteria using this example, and also examine different iterative methods to determine the elements of efficient parallelism.
1.4.1 Updating Schemes

In the 5-cycle example in figure 1, the final run of Simplex gave the independent sets {1,3}, {2,4}, {3,5}, {1,4}, and {2,5}, each with weight 1/2. One criterion in achieving a speedup of the iterative scheme might be to update the nodes in each independent set its weight's share, x_j/χ_f(G), of the time. In the C5 example, each independent set should be updated 0.5/2.5 of the time, or with equivalent consideration. Another criterion might be to find an ordering of the independent sets so that consecutive sets in the iterative scheme do not contain like nodes. An efficient ordering for the C5 example is simply {1,3}, {2,4}, {3,5}, {1,4}, {2,5}. As graphs increase in size and complexity, though, there may be additional criteria to consider. Furthermore, it is uncertain whether optimal updating schemes even exist for all graphs.
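For the C5 example, both criteria can be checked mechanically: consecutive sets in the cyclic schedule share no node, and each node appears in 2 of the 5 sets, i.e., is updated (1/2)/(5/2) of the time. An illustrative Python check (not part of the thesis code):

```python
order = [{1, 3}, {2, 4}, {3, 5}, {1, 4}, {2, 5}]  # final Simplex sets for C5

# Criterion 1: consecutive sets in the cyclic schedule share no node.
ok = all(order[i].isdisjoint(order[(i + 1) % len(order)])
         for i in range(len(order)))
print("no consecutive overlap:", ok)  # prints no consecutive overlap: True

# Criterion 2: every node is updated with frequency weight/chi_f = (1/2)/(5/2).
counts = {v: sum(v in s for s in order) for v in range(1, 6)}
print(counts)  # each node appears in exactly 2 of the 5 sets
```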
Consider the difficulty of an updating scheme with the following example. When calculating χ_f(G) for Grötzsch's graph in figure 5, the final execution of Simplex returns the following independent sets and primal weights:

x_{6,7,8,9,10} = 0.4

x_{2,4,7,9} = x_{2,5,7,10} = x_{1,4,6,9} = x_{3,5,8,10} = x_{1,3,6,8} = 0.3

x_{1,3,11} = x_{2,4,11} = x_{2,5,11} = x_{1,4,11} = x_{3,5,11} = 0.2

To achieve the theoretical speedup of χ(G)/χ_f(G), or 4/2.9, we might need to find an ordering of the independent sets meeting criteria such as ensuring that no consecutive sets contain like nodes, that {6,7,8,9,10} is updated 4/29 of the time, that {2,4,7,9}, {2,5,7,10}, {1,4,6,9}, {3,5,8,10}, {1,3,6,8} are each updated 3/29 of the time, and that {1,3,11}, {2,4,11}, {2,5,11}, {1,4,11}, {3,5,11} are each updated 2/29 of the time. Clearly, this is no simple task.
1.4.2 Iterative Methods

Throughout this discussion, we have referred to an iterative method of the form x_i(t+1) = f_i(x_1(t), ..., x_n(t)). This iterative scheme is known as the Jacobi method. A possible improvement to the Jacobi method is to compute x_i(t+1) using the values of x_1(t+1), ..., x_{i-1}(t+1), x_i(t), ..., x_n(t). In other words, we can use the most recent calculations available in our iterative scheme, so that x_i(t+1) = f_i(x_1(t+1), ..., x_{i-1}(t+1), x_i(t), ..., x_n(t)), i = 1, ..., n. This iterative technique is known as the Gauss-Seidel method. It is possible to accelerate the computation in a Gauss-Seidel iteration using parallel computers. Recall that if the entries a_{ij} = a_{ji} = 0 in the system matrix A, then x_j(t) is not needed to compute x_i(t), and x_i(t) is not needed to compute x_j(t). Thus, x_i(t) and x_j(t) can be computed simultaneously, and component values calculated in any prior parallel step of the iteration may be used for component computation in the current parallel step. It is also possible that a Gauss-Seidel iteration is non-parallelizable. For example, if each component x_i depends on all the other components x_1, x_2, ..., x_{i-1}, x_{i+1}, ..., x_n, then only one component may be updated at a time. In a sparse system, though, where a_{ij} = a_{ji} = 0 for numerous i and j, it is always desirable to study an efficient parallel implementation of the iterative method.
If the system matrix A is strictly diagonally dominant, then both the Jacobi and Gauss-Seidel methods converge to the solution for any arbitrary x(0) (Varga [9]). There is a way to accelerate the rate of convergence for systems which converge by the Gauss-Seidel method; such methods are called Successive Over-Relaxation, i.e., SOR. Furthermore, if A is positive definite and symmetric, then the SOR method is guaranteed to converge for any initial approximation x(0) (Burden and Faires [2]).
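As an illustrative comparison (not an experiment from the thesis), both methods can be run on a small strictly diagonally dominant system; using the most recent values, as Gauss-Seidel does, typically reaches a given tolerance in fewer iterations than Jacobi:

```python
def jacobi(A, b, x, iters):
    """Jacobi sweeps: every component is updated from the previous iterate."""
    n = len(b)
    for _ in range(iters):
        x = [(b[i] - sum(A[i][j] * x[j] for j in range(n) if j != i)) / A[i][i]
             for i in range(n)]
    return x

def gauss_seidel(A, b, x, iters):
    """Gauss-Seidel sweeps: each update uses the most recent values."""
    n = len(b)
    x = list(x)
    for _ in range(iters):
        for i in range(n):
            x[i] = (b[i] - sum(A[i][j] * x[j] for j in range(n) if j != i)) / A[i][i]
    return x

# A strictly diagonally dominant system whose exact solution is x = (1, 1, 1).
A = [[4.0, 1.0, 1.0], [1.0, 5.0, 2.0], [1.0, 2.0, 6.0]]
b = [6.0, 8.0, 9.0]
xj = jacobi(A, b, [0.0] * 3, 50)
xg = gauss_seidel(A, b, [0.0] * 3, 50)
print(max(abs(v - 1.0) for v in xj), max(abs(v - 1.0) for v in xg))
```

After the same number of sweeps, the Gauss-Seidel error is markedly smaller than the Jacobi error.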
It would appear that the fastest rate of convergence could occur with a parallel implementation of the SOR iterative method. It was found, though, that an efficient ordering scheme in the parallel implementation was necessary to achieve this desired speed. Prior to observing the effects of parallelism, systems of equations were solved using a random SOR method, in which components were randomly updated as follows. After every execution of Simplex, the program returned dual weights in y, and each node i was assigned a weight ratio w_i = y_i / Σ_j y_j, so that 0 ≤ w_i ≤ 1. Thereafter, the SOR algorithm generated a random number r with 0 ≤ r < 1. If w_1 + ··· + w_{i-1} ≤ r < w_1 + ··· + w_i and y_i ≠ 0, then node i was updated.
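This selection rule samples node i with probability proportional to its dual weight. A minimal Python sketch (illustrative only; the function name is an assumption, not the thesis code):

```python
import random
from bisect import bisect_right

def pick_node(dual_weights, rng=random.random):
    """Sample a node index with probability y_i / sum(y)."""
    total = sum(dual_weights)
    cumulative, running = [], 0.0
    for y in dual_weights:
        running += y / total
        cumulative.append(running)
    r = rng()  # 0 <= r < 1
    # Guard against float round-off in the last cumulative entry.
    return min(bisect_right(cumulative, r), len(dual_weights) - 1)

y = [0.5, 0.5, 0.5, 0.5, 0.5]  # C5 dual weights: all nodes equally likely
random.seed(1)
picks = [pick_node(y) for _ in range(10000)]
freq = [picks.count(i) / len(picks) for i in range(5)]
print(freq)  # each frequency is close to 0.2
```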
Unfortunately, the convergence rate of the random SOR was on the order of a Jacobi iteration, which is much slower than SOR (Fisher [4]). Thus, desirable parallelism requires an efficient updating scheme as well as an effectual iterative method.

1.5 Parallelism of Non-Symmetric Systems
Thus far, we have only considered solving Ax = b when A is symmetric. Another interesting problem is posed when A is not symmetric. Consider solving a large, sparse system Ax = b where A is non-symmetric and parallel computing may be used to speed the rate of convergence of the iterative method. Following the notation used in Bertsekas and Tsitsiklis [1, section 1.2.4], define a corresponding dependency graph G as G = (V, E), where V = {1, ..., n} and E = {(i,j) | f_j depends on x_i}. For example, if a given system, Ax = b, is translated as the following Jacobi iteration,

x_1(t+1) = f_1(x_1(t), x_5(t))
x_2(t+1) = f_2(x_1(t), x_2(t))
x_3(t+1) = f_3(x_2(t), x_3(t))
x_4(t+1) = f_4(x_3(t), x_4(t))
x_5(t+1) = f_5(x_4(t), x_5(t)),

then the corresponding dependency graph G is that in figure 11.
Figure 11: Dependency graph

The adjacency matrix for G is

adj(G) =
1 1 0 0 0
0 1 1 0 0
0 0 1 1 0
0 0 0 1 1
1 0 0 0 1
If we think of "independent sets" as the cycleless sets in this directed graph, then we can update the nodes in a cycleless set in parallel, using a different processor for each node in the set. Furthermore, the number of processors required per iteration is bounded by the size of the largest cycleless set. A symmetric system can thus be treated in the non-symmetric context by generating a directed graph from the undirected graph, replacing each edge with a double arc between its endpoints; nodes connected by an edge then cannot possibly be placed in the same cycleless set.
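Whether a candidate set is cycleless can be checked directly on the induced subgraph, for instance by repeatedly deleting nodes with no remaining incoming arc. A small Python sketch using the arcs of figure 11 (self-dependencies omitted, as they must be for any set to be cycleless; the function name is illustrative):

```python
def is_cycleless(nodes, arcs):
    """True if the subgraph induced on `nodes` by the directed `arcs` is
    acyclic, checked by repeatedly removing nodes with no incoming arc."""
    nodes = set(nodes)
    sub = [(i, j) for i, j in arcs if i in nodes and j in nodes]
    while nodes:
        has_incoming = {j for i, j in sub}
        removable = nodes - has_incoming
        if not removable:
            return False  # every remaining node has an incoming arc: a cycle
        nodes -= removable
        sub = [(i, j) for i, j in sub if i in nodes and j in nodes]
    return True

# Arcs of figure 11: f_{i+1} depends on x_i around a directed 5-cycle.
arcs = [(1, 2), (2, 3), (3, 4), (4, 5), (5, 1)]
print(is_cycleless({1, 2, 3, 4}, arcs))      # True: the arc 5 -> 1 is cut
print(is_cycleless({1, 2, 3, 4, 5}, arcs))   # False: the whole 5-cycle remains
```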
As an example of non-symmetric parallelism, consider the dependency graph in figure 11, where χ(G) = 2. We could choose the cycleless sets {1,2,3,4} and {5}, since no cycle exists among the nodes of either set. In this context, maximizing parallelism is equivalent to optimally coloring a dependency graph, as suggested by the following proposition in Bertsekas and Tsitsiklis [1, Proposition 2.5].

Proposition 1 The following statements are equivalent:

1. There exists an ordering of the variables such that an iteration of the corresponding Gauss-Seidel algorithm can be performed in K parallel steps.

2. There exists a coloring of the dependency graph that uses K colors and with the property that the induced subgraph on each color class has no cycles.

For the dependency graph in figure 11, four processors are required to perform an iteration as follows: update nodes 4, 3, 2, and 1 in the first parallel step, and update node 5 in the second. Note that each node is updated once in χ(G) = 2 parallel steps, or m times in 2m parallel steps.
If we solve the same problem using fractional coloring, χ_f(G) = 5/4, corresponding to the cycleless sets {1,2,3,4}, {2,3,4,5}, {3,4,5,1}, {4,5,1,2}, and {5,1,2,3}, each with weight 1/4. Four processors are required to perform an iteration as follows: update nodes 4, 3, 2, and 1 in the first parallel step; nodes 5, 4, 3, and 2 in the second; nodes 1, 5, 4, and 3 in the third; nodes 2, 1, 5, and 4 in the fourth; and nodes 3, 2, 1, and 5 in the fifth. Note that each node is updated eight times in ten parallel steps, as opposed to the sixteen parallel steps determined by χ(G). This suggests a theoretical speedup of 2.0/1.25 = 1.6. Note, once again, that regardless of the symmetry of the system or the iterative method, an efficient updating scheme is required to obtain this theoretical speedup.
1.6 Summary

In this paper, we have studied fractional coloring and its related parameter, χ_f(G). We proved that ω(G) ≤ ω_f(G) = χ_f(G) ≤ χ(G), using properties of linear programming, and that χ_f(G) is on the order of χ(G) for random graphs. We also showed that the fractional coloring program can calculate a graph's fractional coloring number to a desired accuracy. We have also studied how fractional coloring can be used to solve sparse, symmetric iterative systems in parallel and speed the convergence of the iterative method. Thereafter, we noted some of the complexities of efficient parallelism regardless of the iterative method. Finally, we showed that fractional coloring can also be used to solve non-symmetric systems. Open areas of study include efficient updating schemes, a modification of the fractional coloring number program to accommodate non-symmetric systems, and a program to execute parallel implementations of SOR in solving sparse iterative systems.
APPENDIX

C THE FRACTIONAL COLORING PROGRAM
C
C THIS PROGRAM CALCULATES THE FRACTIONAL COLORING NUMBER
C OF A GRAPH. THE USER CAN EITHER HAVE A GRAPH READ FROM
C for008.dat OR ALLOW THE GENERATION OF A RANDOM GRAPH.
C TO GENERATE A RANDOM GRAPH, ONE MUST ENTER THE NUMBER
C OF NODES IN THE GRAPH, A SEED FOR THE RANDOM GENERATOR,
C THE PROBABILITY OF AN EDGE, AND AN EPSILON WHICH WILL
C DETERMINE THE ACCURACY OF THE CALCULATED FRACTIONAL
C COLORING NUMBER.
C
C VARIABLES FOR SIMPLEX
      REAL AM(200,600),BM(200),CM(600)
      INTEGER KM(7)
      REAL PS(600),DS(200),NEWCOL(200),NDS(200)
      REAL BINV(40000)
      INTEGER KB1(600),BASIS(200)
      REAL PE1(200),X1(200)
      EQUIVALENCE (FEAS1,KM(6))
      LOGICAL SIMFLAG,VER1,NEG1,FEAS1
      INTEGER NVARS,NROWS,TEMPVARS,NCOLMNS
      REAL LPVAL,ADDRED,EPS
      INTEGER ADDIND,SEED
      INTEGER SURP1,SURP2
C
C VARIABLES FOR COLOURING
      INTEGER EDGES(38000),NODEPOINT(200)
      INTEGER COUNTER
      LOGICAL FLAG,MAXSET
C
C READ IN DATA FROM UNIT 8
C
C     READ (8,*) NODES
C     COUNTER=1
C     DO 10 I=1,NODES
C       NODEPOINT(I)=COUNTER
C       READ(8,*) NADJ,(EDGES(J),J=COUNTER,(COUNTER+NADJ-1))
C       COUNTER=COUNTER+NADJ
C 10  CONTINUE
C     NODEPOINT(NODES+1)=COUNTER
C     READ (8,*) EPS
C
C
C READ IN RANDOM GRAPH
C
      ACCEPT *,NODES,SEED,PROB,EPS
      CALL RANDOMGRAPH(SEED,PROB,NODES,NODEPOINT,EDGES)
C
C
C CALL ROUTINE TO SET UP THINGS FOR THE SIMPLEX ALGORITHM
      CALL COLINIT(AM,BM,CM,NODES,KB1,BASIS,EDGES,NODEPOINT,
     *             NVARS,SURP1,SURP2)
C LOOP TO REPETITIVELY ADD COLUMNS STARTS HERE
C CALL THE SIMPLEX ALGORITHM FOR THE FIRST TIME
      NCOLMNS=200
      NROWS=NODES
      CALL SIMPLE(2,NROWS,NVARS,AM,BM,CM,KM,KB1,DS,BASIS,BINV,
     *            0,ADDRED,VER1,PE1,X1)
      DO 15 J=1,NVARS
        KBJ=KB1(J)
        PS(J)=0
        IF (KBJ.NE.0) PS(J)=X1(KBJ)
 15   CONTINUE
 999  FORMAT (100F5.2)
      CALL PRINTSOL(PS,DS,CM,NVARS,NROWS,SURP1,SURP2,AM)
C CHECK TO SEE IF THERE IS A NEW COLUMN TO ADD
 11   DO 30 I=1,NODES
        NDS(I)=DS(I)
 30   CONTINUE
      FLAG = MAXSET(EDGES,NODEPOINT,NODES,NDS,NEWCOL,EPS)
      IF (FLAG) THEN
C ADDING A COLUMN
        PROD=0
        DO 8 I=1,NROWS
          PROD = PROD + DS(I)*NEWCOL(I)
 8      CONTINUE
        ADDRED=PROD + 1
        IF (NVARS.LT.NCOLMNS) THEN
          NVARS=NVARS+1
          TEMPVARS=NVARS
        ELSE
          I=1
 99       IF (KB1(I).EQ.0) THEN
            TEMPVARS=I
            GO TO 88
          END IF
          I=I+1
          GO TO 99
        END IF
 88     DO 13 I=1,NROWS
          AM(I,TEMPVARS)=NEWCOL(I)
 13     CONTINUE
        CM(TEMPVARS)=1
        PS(TEMPVARS)=0
        CALL SIMPLE(0,NROWS,NVARS,AM,BM,CM,KM,KB1,DS,BASIS,BINV,
     *              TEMPVARS,ADDRED,VER1,PE1,X1)
        DO 17 J=1,NVARS
          KBJ=KB1(J)
          PS(J)=0
          IF (KBJ.NE.0) PS(J)=X1(KBJ)
 17     CONTINUE
        CALL PRINTSOL(PS,DS,CM,NVARS,NROWS,SURP1,SURP2,AM)
        GO TO 11
      ELSE
C WE ARE DONE
        WRITE(9,*) 'THE LAST SOLN WAS OPTIMAL'
        PRINT *, 'THE LAST SOLN WAS OPTIMAL'
      END IF
      END
C
C
      SUBROUTINE RANDOMGRAPH(SEED,PROB,N,POINTERS,EDGES)
      INTEGER SEED,I,J,K,ADJ(200,200),POINTERS(1),EDGES(1)
      REAL PROB,RANDOM
      DO 20 I=2,N
        DO 10 J=1,I-1
          IF(RANDOM(SEED).LT.PROB) THEN
            ADJ(J,I)=1
            ADJ(I,J)=1
          ELSE
            ADJ(J,I)=0
            ADJ(I,J)=0
          END IF
 10     CONTINUE
        ADJ(I,I)=0
 20   CONTINUE
      K=1
      DO 40 I=1,N
        POINTERS(I)=K
        DO 30 J=1,N
          IF(ADJ(I,J).EQ.1) THEN
            EDGES(K)=J
            K=K+1
          END IF
 30     CONTINUE
 40   CONTINUE
      POINTERS(N+1)=K
      END
C
      FUNCTION RANDOM(SEED)
      INTEGER SEED
      SEED=MOD(25173*SEED+13849,65536)
      RANDOM=(1.0*SEED)/65536
      END
C
C
      SUBROUTINE PRINTSOL(PS,DS,CM,NVARS,NOBS,SURP1,SURP2,AM)
      REAL PS(1),DS(1),CM(1),AM(200,1)
      INTEGER SURP1,SURP2
C PRINT OUT THE PRIMAL SOLUTION TO THE LP
      WRITE(9,900)
      PRINT 900
      WRITE(9,910)
      PRINT 910
      DO 12 J=1,NVARS
        WRITE(9,920) J,PS(J)
        PRINT 920,J,PS(J)
        IF ((PS(J).GT.0.0).AND.
     *      ((J.LT.SURP1).OR.(J.GT.SURP2))) THEN
          WRITE(9,990)
          PRINT 990
          DO 13 I=1,NOBS
            IF (AM(I,J).GT.0.0) THEN
              PRINT 991,I
              WRITE(9,991) I
            END IF
 13       CONTINUE
        END IF
 12   CONTINUE
C PRINT OUT THE DUAL SOLUTION TO THE LP
      WRITE(9,930)
      PRINT 930
      WRITE(9,940)
      PRINT 940
      DO 14 I=1,NOBS
        WRITE(9,920) I,-1*DS(I)
        PRINT 920,I,-1*DS(I)
 14   CONTINUE
C CALCULATE AND PRINT OUT THE OBJECTIVE VALUE OF THE LP
      VALUE=0
      DO 16 J=1,NVARS
        VALUE=VALUE+CM(J)*PS(J)
 16   CONTINUE
      WRITE(9,950) VALUE
      PRINT 950,VALUE
 900  FORMAT (2X,'THE PRIMAL SOLUTION IS')
 910  FORMAT (9X,'J',20X,'XJ')
 920  FORMAT (7X,I3,13X,F15.8)
 930  FORMAT (2X,'THE DUAL SOLUTION IS')
 940  FORMAT (9X,'I',20X,'YI')
 950  FORMAT (2X,'THE OBJECTIVE VALUE IS',3X,F15.8)
 990  FORMAT (2X,'THE INDEPENDENT SET IS NODES:')
 991  FORMAT (2X,I3)
      RETURN
      END
C
C
      SUBROUTINE COLINIT(A,B,C,NODES,KB,BASIS,EDGES,POINTER,
     *                   NVARS,SURP1,SURP2)
      REAL A(200,1),B(1),C(1)
      INTEGER EDGES(1),POINTER(1),KB(1),BASIS(1),NEW(200)
      REAL WEIGHTS(200),SET(200)
      LOGICAL FLAG,MAXSET,FIRST,FOUND,COVERED(200)
      INTEGER SURP1,SURP2
C
C
C SET UP A MATRIX AND BASIS VECTORS
      DO 10 I=1,NODES
        WEIGHTS(I)=2
        BASIS(I)=0
 10   CONTINUE
      NVARS=0
      NFEAS=0
      DO 11 I=1,NODES
        NEW(I)=0
 11   CONTINUE
 15   IF (NFEAS.EQ.NODES) GO TO 21
      NVARS=NVARS+1
      FLAG = MAXSET(EDGES,POINTER,NODES,WEIGHTS,SET,EPS)
      DO 20 I=1,NODES
        A(I,NVARS)=SET(I)
        IF ((SET(I).EQ.1.0).AND.(WEIGHTS(I).EQ.2.0)) THEN
          WEIGHTS(I)=0
          NFEAS=NFEAS+1
          NEW(I)=NVARS
        END IF
 20   CONTINUE
      GO TO 15
 21   DO 25 I=1,NODES
        COVERED(I)=.FALSE.
 25   CONTINUE
      DO 29 J=NVARS,1,-1
        FOUND=.FALSE.
        DO 33 I=1,NODES
          IF (FOUND) GO TO 35
          IF ((.NOT.COVERED(I)).AND.(NEW(I).EQ.J)) THEN
            FOUND=.TRUE.
            BASIS(I)=J
            KB(J)=I
          END IF
 33     CONTINUE
 35     IF (FOUND) THEN
          DO 37 I=1,NODES
            IF (A(I,J).NE.0.0) COVERED(I)=.TRUE.
 37       CONTINUE
        END IF
 29   CONTINUE
      DO 22 I=1,NODES
        IF (BASIS(I).EQ.0) THEN
          BASIS(I) = NVARS+I
          KB(NVARS+I) = I
        ELSE
          KB(NVARS+I) = 0
        END IF
 22   CONTINUE
C SET UP COSTS
 100  DO 30 I=1,NVARS
        C(I)=1
 30   CONTINUE
C SET UP RIGHT HAND SIDE
      DO 40 I=1,NODES
        B(I)=1
 40   CONTINUE
C SET UP SURPLUSES
      DO 50 I=1,NODES
        DO 60 J=1,NODES
          A(I,NVARS+J)=0
 60     CONTINUE
        A(I,NVARS+I)=-1
 50   CONTINUE
      SURP1=NVARS+1
      SURP2=NVARS+NODES
C COSTS FOR SURPLUSES
      DO 70 I=1,NODES
        C(NVARS+I)=0
 70   CONTINUE
      NVARS=NVARS+NODES
      RETURN
      END
C
C
C
      SUBROUTINE SIMPLE(INFLAG,MX,NN,A,B,C,KOUT,KB,P,JH,E,
     *                  NEWIND,NEWRED,VER,PE,X)
      DIMENSION A(200,1),B(1),C(1)
      DIMENSION KOUT(7),P(1),JH(1),XOLD(200)
      DIMENSION X(1),Y(200),PE(1),E(1),KO(7),KB(1),OLDKB(600)
      EQUIVALENCE (K,KO(1)),(ITER,KO(2)),(INVC,KO(3)),
     1(NUMVR,KO(4)),(NUMPV,KO(5)),(FEAS,KO(6)),(JT,KO(7))
      EQUIVALENCE (XX,LL)
C THE FOLLOWING DIMENSION SHOULD BE SAME HERE AS IN CALLER.
      INTEGER NEWIND
      REAL NEWRED
      LOGICAL NOPIV,FEAS,VER,NEG,TRIG,KQ,ABSC
      M=MX
      N=NN
      TEXP= .5**14
      MAXDEG = 10
      NDEG = 0
      NCUT=4*M+10
      NVER=M/2+5
      M2=M**2
      NOPIV = .TRUE.
      IF (NEWIND.EQ.0) GO TO 1348
C ELSE NEW COLUMN TO ADD, NO NEED TO SELECT PIVOT
C RESET SOME OUTPUTS
      KO(1)=0
      KO(2)=0
      KO(3)=0
      JT = NEWIND
      BB = NEWRED
      GO TO 600
C MOVE INPUTS ... ZERO OUTPUTS
 1348 TRIG = .FALSE.
      DO 1341 I=1,7
        KO(I)=0
 1341 CONTINUE
      IF(INFLAG.NE.0) GO TO 1400
C* NEW* START PHASE ONE WITH SINGLETON BASIS
      DO 1402 J=1,N
        KB(J)=0
        KQ= .FALSE.
        DO 1403 I=1,M
          IF(A(I,J).EQ.0.0) GO TO 1403
          IF(KQ.OR.A(I,J).LT.0.0) GO TO 1402
          KQ=.TRUE.
 1403   CONTINUE
        KB(J)=1
 1402 CONTINUE
 1400 IF(INFLAG.GT.1) GO TO 1320
      DO 1401 I=1,M
        JH(I)=1
 1401 CONTINUE
C* VER* CREATE INVERSE FROM KB AND JH*
 1320 VER=.TRUE.
 1121 INVC=0
 1122 NUMVR=NUMVR+1
      DO 1101 I=1,M2
        E(I)=0.0
 1101 CONTINUE
      MM=1
      DO 1113 I=1,M
        E(MM)=1.0
        PE(I)=0.0
        X(I)=B(I)
        IF(JH(I).NE.0) JH(I)=1
        MM=MM+M+1
 1113 CONTINUE
C FORM INVERSE
      JT=0
11101 JT=JT+1
      IF(KB(JT).EQ.0) GO TO 1102
      GO TO 600
C 600 CALL JMY
C CHOOSE PIVOT
 1114 TY=0.0
      DO 1104 I=1,M
        IF(JH(I).NE.1) GO TO 1104
        IF(ABS(Y(I)).LE.TY) GO TO 1104
        IR=I
        TY=ABS(Y(I))
 1104 CONTINUE
      KB(JT)=0
C TEST PIVOT
      IF(TY.LE.TPIV) GO TO 1102
C PIVOT
      JH(IR)=JT
      KB(JT)=IR
      GO TO 900
C 900 CALL PIV
 1102 CONTINUE
      IF(JT.LT.N) GO TO 11101
C RESET ARTIFICIALS
      DO 1109 I=1,M
        IF(JH(I).EQ.1) JH(I)=0
 1109 CONTINUE
 1200 VER=.FALSE.
C PERFORM ONE ITERATION
C* XCK* DETERMINE FEASIBILITY
      FEAS=.TRUE.
      NEG=.FALSE.
      DO 1201 I=1,M
        IF(X(I).LT.0.0) GO TO 1250
        IF(JH(I).EQ.0) FEAS=.FALSE.
 1201 CONTINUE
C* GET* GET APPLICABLE PRICES
      IF(.NOT.FEAS) GO TO 501
C PRIMAL PRICES
      DO 503 I=1,M
        P(I)=PE(I)
 503  CONTINUE
      ABSC=.FALSE.
      GO TO 599
C COMPOSITE PRICES
 1250 FEAS=.FALSE.
      NEG=.TRUE.
 501  DO 504 J=1,M
        P(J)=0.
 504  CONTINUE
      ABSC=.TRUE.
      DO 505 I=1,M
        MM=I
        IF(X(I).GE.0.0) GO TO 507
        ABSC=.FALSE.
        DO 508 J=1,M
          P(J)=P(J)+E(MM)
          MM=MM+M
 508    CONTINUE
        GO TO 505
 507    IF(JH(I).NE.0) GO TO 505
        IF(X(I).NE.0) ABSC=.FALSE.
        DO 510 J=1,M
          P(J)=P(J)-E(MM)
          MM=MM+M
 510    CONTINUE
 505  CONTINUE
C* MIN* FIND MINIMUM REDUCED COST
 599  JT=0
      IF (NDEG.GE.MAXDEG) JT = NVARS+1
      BB=0.0
      DO 701 J=1,N
C SKIP COLUMNS IN BASIS
        IF(KB(J).NE.0) GO TO 701
        DT=0.0
        DO 303 I=1,M
          IF(A(I,J).NE.0.0) DT=DT+P(I)*A(I,J)
 303    CONTINUE
        IF(FEAS) DT=DT+C(J)
        IF(ABSC) DT=-ABS(DT)
        IF((NDEG.GE.MAXDEG.AND.DT.LT.0.0.AND.J.LT.JT).OR.
     *     (NDEG.LT.MAXDEG.AND.(DT.LT.BB)))THEN
          IF (NDEG.GE.MAXDEG) WRITE(6,*) 'ANTICYCLING KICKING IN'
          BB=DT
          JT=J
        END IF
 701  CONTINUE
C TEST FOR NO PIVOT COLUMN
      IF((JT.LE.0.AND.NDEG.LT.MAXDEG).OR.
     *   (JT.GE.NVARS+1.AND.NDEG.GE.MAXDEG))
     *  GO TO 203
C TEST FOR ITERATION LIMIT EXCEEDED
 590  IF(ITER.GE.NCUT) GO TO 160
      ITER=ITER+1
C* JMY* MULTIPLY INVERSE TIMES A(.,JT)
 600  DO 610 I=1,M
        Y(I)=0.0
 610  CONTINUE
      LL=0
      COST=C(JT)
      DO 605 I=1,M
        AIJT=A(I,JT)
        IF(AIJT.EQ.0.) GO TO 602
        COST=COST+AIJT*PE(I)
        DO 606 J=1,M
          LL=LL+1
          Y(J)=Y(J)+AIJT*E(LL)
 606    CONTINUE
        GO TO 605
 602    LL=LL+M
 605  CONTINUE
C COMPUTE PIVOT TOLERANCE
      YMAX=0.0
      DO 620 I=1,M
        YMAX=AMAX1(ABS(Y(I)),YMAX)
 620  CONTINUE
      TPIV=YMAX*TEXP
C RETURN TO INVERSION ROUTINE, IF INVERTING
      IF(VER) GO TO 1114
C COST TOLERANCE CONTROL
      IF(TRIG.AND.BB.GE.-TPIV) GO TO 203
      TRIG=.FALSE.
      IF(BB.GE.-TPIV) THEN
        TRIG=.TRUE.
      ELSE
        NOPIV = .FALSE.
      ENDIF
C* ROW* SELECT PIVOT ROW
C AMONG EQS. WITH X=0, FIND MAXIMUM Y
C AMONG ARTIFICIALS, OR, IF NONE,
C GET MAX POSITIVE Y(I) AMONG REALS.
 1000 IF (NDEG.GT.10) THEN
        IR=0
        IND = M+1
        DO 1051 I=1,M
          IF (X(I).NE.0.0.OR.Y(I).LE.TPIV) GO TO 1051
          IF (JH(I).GE.IND) GO TO 1051
          IND=JH(I)
          IR=I
 1051   CONTINUE
      ELSE
        IR=0
        AA=0.0
        KQ=.FALSE.
        DO 1050 I=1,M
          IF(X(I).NE.0.0.OR.Y(I).LE.TPIV) GO TO 1050
          IF(JH(I).EQ.0) GO TO 1044
          IF(KQ) GO TO 1050
 1045     IF(Y(I).LE.AA) GO TO 1050
          GO TO 1047
 1044     IF(KQ) GO TO 1045
          KQ=.TRUE.
 1047     AA=Y(I)
          IR=I
 1050   CONTINUE
      END IF
      IF(IR.NE.0) GO TO 1099
 1001 AA=1.0E+20
C FIND MIN. PIVOT AMONG POSITIVE EQUATIONS
      DO 1010 I=1,M
        IF (Y(I).LE.TPIV.OR.X(I).LE.0.0.OR.Y(I)*AA.LE.X(I))
     *    GO TO 1010
        AA=X(I)/Y(I)
        IR=I
 1010 CONTINUE
      IF(.NOT.NEG) GO TO 1099
C FIND PIVOT AMONG NEGATIVE EQUATIONS, IN WHICH X/Y IS LESS
C THAN THE MINIMUM X/Y IN THE POSITIVE EQUATIONS,
C THAT HAS THE LARGEST ABSF(Y).
 1016 BB=-TPIV
      DO 1030 I=1,M
        IF(X(I).GE.0..OR.Y(I).GE.BB.OR.Y(I)*AA.GT.X(I))
     *    GO TO 1030
        BB=Y(I)
        IR=I
 1030 CONTINUE
C TEST FOR NO PIVOT ROW.
 1099 IF(IR.LE.0) GO TO 207
C* PIV* PIVOT ON (IR,JT)
 900  IF (X(IR).LE.0) THEN
        NDEG=NDEG+1
      ELSE
        NDEG=0
      END IF
C LEAVE TRANSFORMED COLUMN IN Y(I).
      NUMPV=NUMPV+1
      YI=-Y(IR)
      Y(IR)=-1.0
      LL=0
C TRANSFORM INVERSE
      DO 904 J=1,M
        L=LL+IR
        IF(E(L).NE.0.0) GO TO 905
        LL=LL+M
        GO TO 904
 905    XY=E(L)/YI
        PE(J)=PE(J)+COST*XY
        E(L)=0.0
        DO 906 I=1,M
          LL=LL+1
          E(LL)=E(LL)+XY*Y(I)
 906    CONTINUE
 904  CONTINUE
C TRANSFORM X.
      XY=X(IR)/YI
      DO 908 I=1,M
        XNEW=X(I)+XY*Y(I)
        IF(VER.OR.XNEW.GE.0..OR.Y(I).GT.TPIV.OR.X(I).LT.0.)
     *    GO TO 907
        X(I)=0.0
        GO TO 908
 907    X(I)=XNEW
 908  CONTINUE
C RESTORE Y(IR)
      Y(IR)=-YI
      X(IR)=-XY
      IF(VER) GO TO 1102
 221  IA=JH(IR)
      IF(IA.GT.0) KB(IA)=0
 213  KB(JT)=IR
      JH(IR)=JT
      IF(NUMPV.LE.M) GO TO 1200
C TEST FOR INVERSION ON THIS ITERATION.
      INVC=INVC+1
      IF(INVC.EQ.NVER) GO TO 1320
      GO TO 1200
C* END OF ALGORITHM, SET EXIT VALUES.
C INFINITE SOLUTION.
 207  K=2
      GO TO 250
C PROBLEM IS CYCLING.
 160  K=4
      GO TO 250
C FEASIBLE OR INFEASIBLE SOLUTION.
 203  K=0
 250  IF(.NOT.FEAS) K=K+1
C SET KOUT
 1392 DO 1393 I=1,7
        KOUT(I)=KO(I)
 1393 CONTINUE
      RETURN
      END
C
C
C
      LOGICAL FUNCTION MAXSET(EDGES,NODEPOINT,NODENUM,INWT,
     *                        INDSET,EPS)
      INTEGER FILLNUM,I,J,K,NODE,NNUM,LEVEL,NODENUM
      INTEGER EDGES(1),NODEPOINT(1),LIST(200)
      INTEGER EDGE(38000),PT(200),FILLERNODE(200)
      INTEGER SIGNODE(200)
      REAL INWT(1),WT(200),INDSET(1),EPS
      LOGICAL FLAG,SETFOUND
      DO 1 I=1,NODENUM
        SIGNODE(I)=0
        PT(I)=0
        WT(I)=0
        INDSET(I)=0
 1    CONTINUE
      DO 2 I=1,38000
        EDGE(I)=0
 2    CONTINUE
      K=1
      FILLNUM=0
      NNUM=0
      DO 5 I=1,NODENUM
        IF (ABS(INWT(I)).GT.(EPS/(2*NODENUM))) THEN
          NNUM=NNUM+1
          PT(NNUM)=K
          SIGNODE(NNUM)=I
          WT(NNUM)=INWT(I)
C         IF (NODEPOINT(I).NE.(NODEPOINT(I+1)-1)) THEN
            DO 3 J=NODEPOINT(I),NODEPOINT(I+1)-1
              IF (INWT(EDGES(J)).GT.(EPS/(2*NODENUM))) THEN
                EDGE(K)=EDGES(J)
                K=K+1
              END IF
 3          CONTINUE
C         END IF
        ELSE
          FILLNUM=FILLNUM+1
          FILLERNODE(FILLNUM)=I
        END IF
 5    CONTINUE
      PT(NNUM+1)=K
      IF (NNUM.NE.1) THEN
        DO 7 I=1,K-1
          J=1
 6        IF (EDGE(I).EQ.SIGNODE(J)) THEN
            EDGE(I)=J
          ELSE
            J=J+1
            GO TO 6
          END IF
 7      CONTINUE
      END IF
      FLAG=.FALSE.
      FLAG=SETFOUND(INDSET,NNUM,WT,PT,EDGE,SIGNODE,
     *              EPS,LIST,LEVEL)
      IF (FLAG) THEN
        CALL FILLGRAPH(FILLNUM,LEVEL,LIST,FILLERNODE,
     *                 INDSET,NODEPOINT,EDGES,NNUM,SIGNODE)
        MAXSET=.TRUE.
      ELSE
        MAXSET=.FALSE.
      END IF
      RETURN
      END
      LOGICAL FUNCTION SETFOUND(INDSET,NNUM,WT,PT,EDGE,
     *                          SIGNODE,EPS,LIST,LEVEL)
      INTEGER I,J,NODE,NNUM,GETVAR,LEVEL
      INTEGER SUBG(200),LIST(200)
      INTEGER EDGE(38000),PT(200),SIGNODE(200)
      REAL WT(200),INDSET(200),AVAILWEIGHT,EPS
      LEVEL=0
      SETFOUND=.FALSE.
      IF (NNUM.EQ.1) THEN
        SETFOUND=.TRUE.
        LEVEL=1
        LIST(1)=SIGNODE(1)
        INDSET(SIGNODE(1))=1
        RETURN
      END IF
      AVAILWEIGHT=0.0
      DO 10 I=1,NNUM
        SUBG(I)=-1
        AVAILWEIGHT=AVAILWEIGHT + WT(I)
 10   CONTINUE
 16   IF (AVAILWEIGHT.GT.(1.0+(EPS/2))) THEN
        NODE = GETVAR(NNUM,SUBG,PT,EDGE,WT)
        IF (NODE.EQ.-1) THEN
          AVAILWEIGHT=0.0
          DO 22 I=1,LEVEL
            AVAILWEIGHT = AVAILWEIGHT + WT(LIST(I))
 22       CONTINUE
          IF (AVAILWEIGHT.LE.(1.0+(EPS/2))) THEN
            GO TO 25
          ELSE
            SETFOUND=.TRUE.
            RETURN
          ENDIF
        END IF
        LEVEL=LEVEL + 1
        LIST(LEVEL)=NODE
        SUBG(NODE)=LEVEL
C       IF (PT(NODE).NE.(PT(NODE+1)-1)) THEN
          DO 20 I=PT(NODE),PT(NODE+1)-1
            J=EDGE(I)
            IF (SUBG(J).EQ.-1) THEN
              SUBG(J)=LEVEL
              AVAILWEIGHT=AVAILWEIGHT-WT(J)
            END IF
 20       CONTINUE
C       END IF
      ELSE
        IF (LEVEL.LE.0) THEN
          RETURN
        END IF
 25     NODE=LIST(LEVEL)
        AVAILWEIGHT=AVAILWEIGHT-WT(NODE)
        SUBG(NODE)=LEVEL-1
        DO 24 I=1,NNUM
          IF (SUBG(I).EQ.LEVEL) THEN
            SUBG(I)=-1
            AVAILWEIGHT=AVAILWEIGHT+WT(I)
          END IF
 24     CONTINUE
        LEVEL=LEVEL-1
      END IF
      GO TO 16
      END
C
      INTEGER FUNCTION GETVAR(NNUM,SUBG,PT,EDGE,WT)
      INTEGER SUBG(200),PT(200),EDGE(38000)
      INTEGER NNUM,I,J,NODEDEG,LOWDEG
      REAL WT(200),DEN(200),RAT(200),HIGH
      HIGH=0.0
      LOWDEG=1000000
      GETVAR=-1
      DO 40 I=1,NNUM
        IF (SUBG(I).EQ.-1) THEN
          NODEDEG=0
          DEN(I)=0.0
          IF (PT(I).NE.(PT(I+1)-1)) THEN
            DO 30 J=PT(I),(PT(I+1)-1)
              IF (SUBG(EDGE(J)).EQ.-1) THEN
                DEN(I)=DEN(I) + WT(EDGE(J))
                NODEDEG=NODEDEG + 1
              END IF
 30         CONTINUE
          END IF
          RAT(I)=WT(I)/(DEN(I)+0.000001)
          IF ((RAT(I).EQ.HIGH .AND. NODEDEG.LT.LOWDEG)
     *        .OR. RAT(I).GT.HIGH) THEN
            HIGH=RAT(I)
            GETVAR=I
            LOWDEG=NODEDEG
          END IF
        END IF
 40   CONTINUE
      END
      SUBROUTINE FILLGRAPH(FILLNUM,LEVEL,LIST,FILLERNODE,
     *                     INDSET,NODEPOINT,EDGES,NNUM,SIGNODE)
      INTEGER FILLNUM,LEVEL,LIST(1),FILLERNODE(1)
      INTEGER NODEPOINT(1),EDGES(1),NNUM,SIGNODE(1),J
      REAL INDSET(1)
      IF (NNUM.EQ.1) THEN
        GO TO 1988
      END IF
      DO 1987 I=1,LEVEL
        LIST(I)=SIGNODE(LIST(I))
        INDSET(LIST(I))=1
 1987 CONTINUE
 1988 DO 1991 I=1,FILLNUM
        J=0
 1989   J=J+1
        IF (J.GT.LEVEL) THEN
          LEVEL=LEVEL+1
          LIST(LEVEL)=FILLERNODE(I)
          INDSET(LIST(LEVEL))=1
        ELSE
          K=NODEPOINT(LIST(J))
 1990     IF (K.GE.NODEPOINT(LIST(J)+1)) THEN
            GO TO 1989
          END IF
          IF (FILLERNODE(I).EQ.EDGES(K)) THEN
            GO TO 1991
          ELSE
            K=K+1
          END IF
          GO TO 1990
        END IF
 1991 CONTINUE
      END
REFERENCES

[1] Bertsekas, Dimitri P., and Tsitsiklis, John N. Parallel and Distributed Computation: Numerical Methods. Englewood Cliffs, New Jersey: Prentice-Hall, Inc., 1989.

[2] Burden, Richard L., and Faires, J. Douglas. Numerical Analysis. 3rd ed. Boston: Prindle, Weber & Schmidt, 1985.

[3] Chvatal, Vasek. Linear Programming. San Francisco: W.H. Freeman, 1983.

[4] Fisher, David C., private communication.

[5] Hell, P., and Roberts, F. "Analogues of the Shannon Capacity of a Graph," Ann. Disc. Math. 12 (1982), 155-162.

[6] Larsen, Michael, Propp, James, and Ullman, Daniel. "The Asymptotic Chromatic Number of a Graph," preprint.

[7] Palmer, Edgar M. Graphical Evolution: An Introduction to the Theory of Random Graphs. New York: John Wiley & Sons, 1985.

[8] Syslo, Maciej M., Deo, Narsingh, and Kowalik, Janusz S. Discrete Optimization Algorithms with PASCAL Programs. Englewood Cliffs, New Jersey: Prentice-Hall, Inc., 1983.

[9] Varga, Richard S. Matrix Iterative Analysis. Englewood Cliffs, New Jersey: Prentice-Hall, Inc., 1962.

[10] Winston, Wayne L. Operations Research: Applications and Algorithms. Boston: Duxbury Press, 1987.
Full Text 
PAGE 1
USING FRACTIONAL COLORING TO SOLVE SPARSE ITERATIVE SYSTEMS IN PARALLEL by Melanie A. Marchant B.S., Dickinson College, 1983 A thesis submitted to the Faculty of the Graduate School of the University of Colorado in partial fulfillment of the requirements for the degree of Master of Science Department of Mathematics 1991
PAGE 2
This thesis for the Master of Science degree by Melanie A. Marchant has been approved for the Department of Mathematics by David C. Fisher Date
PAGE 3
Marchant, Melanie A. (M.S., Applied Mathematics) Using Fractional Coloring to Solve Sparse Iterative Systems in Parallel Thesis directed by Assistant Professor David C. Fisher Coloring algorithms have been widely used to find parallel implementations of iterative methods, such as Successive OverRelaxation. We study how fractional coloring can be used to solve sparse iterative systems in parallel, as well as speed the convergence of the iterative method. We also study the fractional coloring number of a graph XJ(G). Using properties of linear programming, we prove that for any graph G, w(G) :S w1(G) = x1(G) :S x(G), where w(G) is the clique number of G, wJ(G) is the fractional clique number of G, and x(G) is the chromatic number of G. We also prove that XJ(G) is on the order of x(G) for random graphs, and that the fractional coloring can be computed to a desired accuracy. Finally, experimental evidence is given that XJ( G) can be computed more quickly than x(G). The form and content of this abstract are approved. I recommend its publication. iii Signed David C. Fisher
PAGE 4
To my husband Eric and my son Aaron for their unconditional love IV
PAGE 5
CONTENTS Figures ..... Acknowledgements 1 USING FRACTIONAL COLORING TO SOLVE SPARSE ITERATIVE SYSTEMS IN PARALLEL 1.1 Parallelism of Symmetric Systems 1.2 Comparing Coloring and Clique Numbers 1.2.1 A Sequence of Inequalities 1.2.2 Examples ..... 1.2.3 Numerical Bounds ..... 1.3 Calculating the Fractional Coloring Number 1.3.1 Column Generation . . . . 1.3.2 Fractional Coloring Algorithm . 1.3.3 Maximal Independent Set Algorithm 1.3.4 Fractional Coloring Program . . 1.3.4.1 Cycling . . . . . . 1.3.4.2 Refinement for Rounding Error 1.3.4.3 Adjustment for Large Graphs 1.3.4.4 Runtime Results ...... 1.4 Efficient Parallelism 1.4.1 Updating Schemes 1.4.2 Iterative Methods 1.5 Parallelism of NonSymmetric Systems 1.6 Summary Appendix References v VI vii 1 3 7 8 9 12 16 16 17 20 24 24 24 26 27 28 29 30 32 34 36 56
PAGE 6
FIGURES 1 A 5cycle 4 2 A 6cycle 9 3 A 7cycle 10 4 The Mycielskian of a triangle 11 5 Grotzsch 's graph . . . 12 6 Limits for the fractional coloring number of almost all graphs 16 7 Maximal independent set algorithm graph (i) 22 8 Maximal independent set algorithm graph (ii) 23 9 Maximal independent set algorithm graph (iii) 23 10 Fractional solution vs. running time 28 11 Dependency graph . . . . . . . 32 VI
PAGE 7
ACKNOWLEDGEMENTS I would like to thank David C. Fisher and Jennifer Ryan for giving me this opportunity and sharing their expertise. A hove all, I would like to thank Dr. Fisher for his support, guidance, and immeasurable patience. Vll
PAGE 8
1 USING FRACTIONAL COLORING TO SOLVE SPARSE ITERATIVE SYSTEMS IN PARALLEL Iterative algorithms are those of the form x(t + 1) = f(x(t)), t = 0, 1, ... where f : R" R". They are used to solve numerical problems such as systerns of equations and optimization. Each algorithm commences with an initial approximation x(O), and the iterative scheme generates a sequence of vectors, {x(O),x(1),x(2), ... }, which may converge to a fixed point of f. Naturally, it is desirable to seek an iterative scheme attaining the fastest possible convergence of this sequence. Following the notation used in Bertsekas and Tsitsiklis [1, section 1.2.4], let x;(t) denote the i'h component of x(t) and let f; denote the i'h component of the function f. When we write x;(t + 1) = f;( x1 ( t), ... x,( t)), we are indicating that updating the i'h component of x may require the values of x1(t), ... ,x,(t). An iteration is defined as a sequence of component updates such that each component is updated at least once. If components x1 ... Xn are updated exactly once in an iteration, then we say there are n steps per iteration. In an iteration, it may also be that several components will not require the values of all the components. Furthermore, those components not requiring similar component values could be updated simultaneously using parallel computers. In this case, it is possible to 1
update components simultaneously in m parallel steps, where m < n. An example of parallelism is a parallel implementation of the iterative method Successive Over-Relaxation (SOR) to solve the system Ax = b, where A is sparse and symmetric. Coloring algorithms have been widely used to find parallel implementations of the SOR method (Bertsekas and Tsitsiklis [1]). Initially, the system is interpreted as a graph, G, where the variables of the system are represented by the nodes in the graph. Thereafter, a coloring of G is found, and the nodes of the same color are updated in parallel using SOR. A coloring of G can be thought of as a collection of independent sets such that each node is in at least one set. The coloring number of G, χ(G), is the fewest number of independent sets that color G. A fractional coloring of G is a collection of weighted independent sets such that each node of G is in sets whose weights sum to at least one. The fractional coloring number of G, χ_f(G), is the minimal sum of the weights in a fractional coloring of G. We will see that the weighted independent sets suggest an updating scheme for parallel computation on the nodes of G, and that this scheme could speed the rate of convergence of the iterative method by a factor of χ(G)/χ_f(G). Furthermore, a minimal fractional coloring of G can usually be found much more quickly than a minimal coloring of G.
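As a minimal sketch of such an iterative scheme, the Python fragment below runs a Jacobi-style iteration x(t+1) = f(x(t)) on a small strictly diagonally dominant system. The 3×3 system is invented for illustration and does not appear in the thesis; note that all n component updates within one iteration use only the previous iterate, so they could all run in parallel.

```python
# Jacobi iteration x(t+1) = f(x(t)) for Ax = b, where
# f_i(x) = (b_i - sum_{j != i} a_ij * x_j) / a_ii.
# The 3x3 system below is a made-up, strictly diagonally dominant example.

def jacobi(A, b, x0, iterations):
    n = len(b)
    x = list(x0)
    for _ in range(iterations):
        # Each component is computed from the previous iterate only,
        # so the n updates of one iteration could run in parallel.
        x = [(b[i] - sum(A[i][j] * x[j] for j in range(n) if j != i)) / A[i][i]
             for i in range(n)]
    return x

A = [[4.0, 1.0, 0.0],
     [1.0, 4.0, 1.0],
     [0.0, 1.0, 4.0]]
b = [6.0, 12.0, 14.0]   # exact solution of Ax = b is x = (1, 2, 3)

x = jacobi(A, b, [0.0, 0.0, 0.0], 50)
print([round(v, 6) for v in x])
```

Because A is strictly diagonally dominant, the sequence converges to the fixed point for any starting vector x(0).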
1.1 Parallelism of Symmetric Systems

It is possible to accelerate the convergence of iterative methods by using parallel computers to update different components simultaneously. Graph colorings can be used to specify which components of x may be updated in parallel and to suggest a scheduling of the x_i's for updating. To do so, first construct a graph G from the given system Ax = b, where A is symmetric and positive definite. Let V = {1, ..., n} be the nodes of G, and E = {(i,j) | x_i depends on x_j}. In other words, for every column in A there is a node in G, and an edge is placed between nodes i and j in G if a_ij ≠ 0 in A. For example, if a given system Ax = b is translated as an iteration,

x_1(t+1) = f_1(x_1(t), x_2(t), x_5(t))
x_2(t+1) = f_2(x_1(t), x_2(t), x_3(t))
x_3(t+1) = f_3(x_2(t), x_3(t), x_4(t))
x_4(t+1) = f_4(x_3(t), x_4(t), x_5(t))
x_5(t+1) = f_5(x_1(t), x_4(t), x_5(t)),

then the adjacency matrix for G is

           1 1 0 0 1
           1 1 1 0 0
adj(G) =   0 1 1 1 0
           0 0 1 1 1
           1 0 0 1 1

and G is a 5-cycle, or C_5 (see figure 1).
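The construction of G from the nonzero pattern of A can be sketched as follows. This is illustrative Python rather than the thesis's FORTRAN; it applies the rule "edge (i,j) whenever a_ij ≠ 0, i ≠ j" to the 5-variable system above.

```python
# Build the graph G of a symmetric system Ax = b:
# one node per variable, an edge (i, j) whenever a_ij != 0 for i != j.

def system_graph(A):
    n = len(A)
    nodes = list(range(1, n + 1))              # nodes labeled 1..n, as in the text
    edges = {(i + 1, j + 1)
             for i in range(n) for j in range(i + 1, n)
             if A[i][j] != 0}                  # symmetric, so upper triangle suffices
    return nodes, edges

# Nonzero pattern of the 5-variable example above (1 marks a nonzero entry).
A = [[1, 1, 0, 0, 1],
     [1, 1, 1, 0, 0],
     [0, 1, 1, 1, 0],
     [0, 0, 1, 1, 1],
     [1, 0, 0, 1, 1]]

nodes, edges = system_graph(A)
print(sorted(edges))   # the 5-cycle: (1,2), (1,5), (2,3), (3,4), (4,5)
```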
Figure 1: A 5-cycle

Next, partition the nodes of G into independent sets, or color classes. An independent set is a subset of nodes such that there are no edges between any two nodes. Thus, if nodes i and j are in an independent set, then x_j is not needed to compute x_i and x_i is not needed to compute x_j, so x_i and x_j may be updated in parallel. Furthermore, the number of parallel computers, or processors, required per iteration is bounded by the cardinality of the largest independent set, the independence number α(G). If we partition the nodes of G into as few independent sets as possible, we could maximize the number of nodes per independent set and minimize the number of parallel steps per iteration. We would also optimize the use of multiple processors, leaving fewer processors idle per iteration. Thus, to maximize parallelism, we need to find a minimum coloring of G, or its chromatic number, χ(G). For example, the coloring number of C_5 is 3. In other words, at least three colors are required to color the nodes of G. We could color the independent sets {1,3} in red, {2,4} in blue, and {5} in white. Two processors are required to perform an iteration as follows: update nodes 1 and 3 in parallel, update nodes 2 and 4 in parallel, update node 5. Thus in χ(G) = 3 parallel steps,
all nodes are updated once; in six parallel steps all nodes are updated twice; etc. Finding the minimum coloring of G is an NP-hard problem, so there is no known polynomial algorithm to find χ(G). One can reformulate the coloring problem as an integer programming problem, though, to determine χ(G) and specify which nodes may be updated simultaneously. Let K be the node–independent set incidence matrix, where the rows of K represent the nodes of G and the columns represent all independent sets of G. If node i is in independent set j, then k_ij = 1. Otherwise, k_ij = 0. The following matrix is the node–independent set incidence matrix K of G = C_5:

      1 0 0 0 0 1 0 0 1 0
      0 1 0 0 0 0 1 0 0 1
K =   0 0 1 0 0 1 0 1 0 0
      0 0 0 1 0 0 1 0 1 0
      0 0 0 0 1 0 0 1 0 1

To find χ(G), we could solve the following integer programming problem (IP):

χ(G) = min 1^T x  subject to  Kx ≥ 1,  x ≥ 0,  x integer.

It may be difficult to find a minimal coloring number this way for large graphs, since the number of columns in the (IP) could be exponential in the size of the graph. To avoid this problem, we solve a linear programming relaxation of the (IP), and use a linear programming technique called column generation. The solution of the linear programming problem (LP) is called the fractional coloring
number, χ_f(G) (Larsen, Propp, and Ullman [6]):

χ_f(G) = min 1^T x  subject to  Kx ≥ 1,  x ≥ 0.

In the (LP), it is beneficial to think of x_i as a fractional weight on an independent set. Thus, χ_f(G) = Σ x_i, the sum of the weights on the independent sets. The first set of constraints, Kx ≥ 1, is satisfied if each node of G is in independent sets whose weights sum to at least one. Also note that x ∈ R_+ allows independent sets to have fractional weights. Thus, χ_f(G) may be noninteger. In fact, since χ_f(G) is the solution to an (LP), we know that it will always be a rational number. Larsen, Propp, and Ullman [6] also show that every rational number greater than two is χ_f(G) for some graph G. The fractional coloring number of C_5 is 2.5, corresponding to the independent sets {1,3}, {2,4}, {3,5}, {1,4}, and {2,5}, each with weight 1/2. Two processors are required to perform an iteration as follows: update nodes 1 and 3 in parallel; update nodes 2 and 4 in parallel; update nodes 3 and 5 in parallel; update nodes 1 and 4 in parallel; update nodes 2 and 5 in parallel. Note that each node is updated twice in five parallel steps, as opposed to six parallel steps as determined by χ(G). This would suggest a theoretical speedup of 3.0/2.5 = 1.2, or χ(G)/χ_f(G).
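The value χ_f(C_5) = 2.5 can be checked directly by enumeration: the five 2-element independent sets with weight 1/2 cover every node with total weight exactly one, and the dual weights y_i = 1/2 are feasible because no independent set of C_5 has more than two nodes; by strong duality both solutions are optimal. A small verification sketch (illustrative only; not part of the thesis program):

```python
from itertools import combinations

# C_5: edges of the 5-cycle on nodes 1..5.
edges = {(1, 2), (2, 3), (3, 4), (4, 5), (1, 5)}
nodes = [1, 2, 3, 4, 5]

def independent(S):
    return all(tuple(sorted(p)) not in edges for p in combinations(S, 2))

# All nonempty independent sets of C_5: five singletons and five pairs.
ind_sets = [set(S) for r in (1, 2, 3, 4, 5)
            for S in combinations(nodes, r) if independent(S)]

# Primal: weight 1/2 on each 2-element independent set.
primal = {frozenset(S): 0.5 for S in ind_sets if len(S) == 2}
cover = {v: sum(w for S, w in primal.items() if v in S) for v in nodes}
assert all(c >= 1 for c in cover.values())   # a feasible fractional coloring
print(sum(primal.values()))                  # total weight 2.5

# Dual: weight 1/2 on every node; feasible since the largest independent set has 2 nodes.
assert max(len(S) for S in ind_sets) == 2
print(5 * 0.5)                               # omega_f = 2.5 = chi_f
```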
1.2 Comparing Coloring and Clique Numbers

Before examining the method of column generation and its role in determining which nodes to update in parallel, it is beneficial to understand how a graph's clique number, ω(G), and fractional clique number, ω_f(G), relate to its coloring numbers. The clique number of a graph, ω(G), is the number of nodes in the largest complete subgraph of G. Integer programming can be used to solve the clique problem. To find ω(G), we solve the following (IP):

ω(G) = max 1^T y  subject to  y^T K ≤ 1^T,  y ≥ 0,  y integer.

We can use linear programming to find ω_f(G). To find ω_f(G), we solve the following (LP):

ω_f(G) = max 1^T y  subject to  y^T K ≤ 1^T,  y ≥ 0.

It is beneficial to think of y_i as the fractional weight of a node. As with the coloring number problems, the rows of K are indexed by the nodes in the graph, and its columns represent the independent sets of the graph. The first set of constraints, y^T K ≤ 1^T, is satisfied if the sum of the weights on the nodes in any independent set is one or less. Also note that y ∈ R_+ allows nodes of G to have noninteger weights. Since ω_f(G) = Σ y_i, the fractional clique number may also be noninteger. In fact, it is always rational, since it is the solution of an (LP). The following inequality first appeared in Hell [5], and subsequently in Larsen, Propp, and Ullman [6]: ω(G) ≤ ω_f(G) = χ_f(G) ≤ χ(G). We will prove the
inequality in a new context and establish numerical bounds on the clique and coloring numbers.

1.2.1 A Sequence of Inequalities

First, we will show that ω(G) ≤ ω_f(G) = χ_f(G) ≤ χ(G). It is well known that ω(G) ≤ χ(G) for any graph. We will need to show that ω(G) ≤ ω_f(G), ω_f(G) = χ_f(G), and χ_f(G) ≤ χ(G).

Lemma 1: ω(G) ≤ ω_f(G).

Proof: The optimal solution to the (IP) solving the clique problem is also a feasible solution to the (LP) solving the fractional clique problem. Since the maximal solution to the (LP) is greater than or equal to any of its feasible solutions, ω(G) ≤ ω_f(G). □

Lemma 2: ω_f(G) = χ_f(G).

Proof: To show that ω_f(G) = χ_f(G), we need to observe the (LP)'s which calculate the fractional coloring and clique numbers. The (LP)'s are dual problems, and we will refer to the (LP) which computes χ_f(G) as the primal and the (LP) which computes ω_f(G) as the dual. The Strong Duality Theorem of linear programming guarantees that the primal and the dual will have equal optimal objective function values if the problems have optimal solutions (see, e.g., Winston [10]). Thus, ω_f(G) = χ_f(G). □
Lemma 3: χ_f(G) ≤ χ(G).

Proof: The optimal solution to the (IP) solving the coloring problem is also a feasible solution to the (LP) solving the fractional coloring problem. Since the minimal solution to the (LP) is less than or equal to any of its feasible solutions, χ_f(G) ≤ χ(G). □

Theorem 1 [4]: ω(G) ≤ ω_f(G) = χ_f(G) ≤ χ(G).

Proof: By the results in the previous lemmas and the fact that ω(G) ≤ χ(G), we have proved the inequality. □

1.2.2 Examples

As an illustration of Theorem 1, we look at n-cycles. If n is even, such as C_6 in figure 2, then ω(G) = 2 ≤ ω_f(G) = 2 = χ_f(G) ≤ χ(G) = 2.

Figure 2: A 6-cycle
If n is odd, such as C_7 in figure 3, then ω(G) = 2 ≤ ω_f(G) = 2n/(n−1) = χ_f(G) ≤ χ(G) = 3.

ω(G) = 2 ≤ ω_f(G) = 7/3 = χ_f(G) ≤ χ(G) = 3

Figure 3: A 7-cycle

As another illustration, we look at Mycielskian graphs. Mycielskian graphs are a sequence of graphs, G_n, for which G_{n+1} can be constructed from G_n, and some interesting properties apply. Following the notation in Larsen, Propp, and Ullman [6], the Mycielskian μ(G) of a graph G = (V, E) is defined as follows. Let V = {v_1, ..., v_n}. Define μ(G) by V(μ(G)) = {x_1, ..., x_n, y_1, ..., y_n, z} with E(μ(G)) = {(x_i, x_j) : (v_i, v_j) ∈ E(G)} ∪ {(x_i, y_j) : (v_i, v_j) ∈ E(G)} ∪ {(y_i, z) : i = 1, ..., n}. Larsen, Propp, and Ullman also proved the following theorem.
Theorem 2: Let μ(G) be the Mycielskian of a graph G with at least one edge. Then

1. ω(μ(G)) = ω(G);
2. χ(μ(G)) = χ(G) + 1;
3. χ_f(μ(G)) = χ_f(G) + 1/χ_f(G).

An illustration of Theorems 1 and 2 is the Mycielskian of K_3 (i.e., a triangle) in figure 4.

ω(μ(G)) = 3;  χ_f(μ(G)) = 3 + 1/3 = 10/3;  χ(μ(G)) = 3 + 1 = 4

Figure 4: The Mycielskian of a triangle

As another illustration, let G_2 = K_2 and define G_{n+1} = μ(G_n) for n ≥ 2; then G_3 is a C_5 and G_4 is Grotzsch's graph in figure 5. Grotzsch's graph is the smallest graph where the clique and chromatic numbers differ by more than one.
ω(μ(G)) = 2;  χ_f(μ(G)) = 5/2 + 2/5 = 29/10;  χ(μ(G)) = 3 + 1 = 4

Figure 5: Grotzsch's graph

For this class of Mycielskian graphs, Larsen, Propp, and Ullman also show that χ(G_n) − χ_f(G_n) → ∞ and χ_f(G_n) − ω(G_n) → ∞. In the next section, we will see that the clique and coloring numbers of random graphs behave quite differently.

1.2.3 Numerical Bounds

We have shown that ω(G) ≤ χ_f(G) ≤ χ(G) for any graph, but how close to either ω(G) or χ(G) is χ_f(G) for a random graph? We know that our iterative scheme will converge faster with the use of fractional coloring. It is reasonable to assume that if χ_f(G) is almost always closer in value to ω(G) than to χ(G), then the use of fractional coloring is indispensable in solving large, sparse systems in parallel. Unfortunately, we will see that χ_f(G) is on the order of χ(G) rather
than ω(G) for random graphs. Keep in mind, though, that χ_f(G) ≤ χ(G), so the use of fractional coloring is still an improvement on the use of integer coloring in the parallel implementation of an iterative method. First, we prove a theorem which will become useful in the proof of Theorem 6.

Theorem 3 [4]: Let G be a graph with n nodes, and let α(G) be the maximal cardinality of an independent set of G. Then χ_f(G) ≥ n/α(G).

Proof: From Lemma 2, we know that χ_f(G) = ω_f(G), the optimal dual solution. A feasible solution to the (LP) solving the fractional clique problem is a weight assignment of 1/α(G) on each node. If there are n nodes, then ω_f(G) ≥ n/α(G). Thus, χ_f(G) ≥ n/α(G). □

Next, we state two theorems on which we base the proof of Theorem 6. Grimmett and McDiarmid (quoted in Palmer [7]) found bounds on the chromatic number as follows in Theorem 4, and Matula (quoted in Palmer [7]) found the clique number of almost every graph as follows in Theorem 5.

Theorem 4: Let 0 < ε < 1 be given, and set b = 1/(1−p). With p, the probability of an edge, fixed, the chromatic number of almost all graphs G satisfies

(1−ε) n/(2 log_b n)  <  χ(G)  <  (1+ε) n/log_b n.
Theorem 5: Let G be a graph with n nodes where the probability of an edge is fixed at p = 1/2. Let the function d = d(n) be defined by

d(n) = 2 log_b n − 2 log_b log_b n + 2 log_b (e/2) + 1,  where b = 1/p.

Then for any ε > 0,

Prob( d(n) − ε ≤ ω(G) ≤ d(n) + ε ) → 1.

We are now ready to state and prove a theorem revealing bounds on the fractional coloring number.

Theorem 6 [4]: Let n be the number of nodes in a graph G, and b = 2, where the probability of an edge is fixed at p = 1/2. Then, for all ε > 0, we have with probability approaching 1 that

(1−ε) n/(2 log_b n)  ≤  χ_f(G)  ≤  (1+ε) n/log_b n.

Proof: Let 0 < δ < ε. Then we know from Theorem 4 that

Prob( (1−δ) n/(2 log_b n) < χ(G) < (1+δ) n/log_b n ) → 1.

Let b = 1/p. Then by Theorem 5,

Prob( |ω(G) − d(n)| ≤ δ ) → 1,  so  Prob( ω(G) ≤ d(n) + δ ) → 1.
Since ω(G) = α(Ḡ), and the complement of a random graph with p = 1/2 is again a random graph with p = 1/2,

Prob( α(G) ≤ d(n) + δ ) → 1.

Since χ_f(G) ≥ n/α(G) by Theorem 3,

Prob( χ_f(G) ≥ n/(d(n) + δ) ) → 1.

Since d(n) ≤ 2 log_b(n) for large n,

Prob( χ_f(G) ≥ n/(2 log_b n + δ) ) → 1.

Letting δ = 2ε log_b(n)/(1−ε), we write

Prob( χ_f(G) ≥ n/(2 log_b n + 2ε log_b(n)/(1−ε)) ) → 1.

After algebraic simplification, we achieve

Prob( χ_f(G) ≥ (1−ε) n/(2 log_b n) ) → 1.

We proceed with the notation used in Theorem 4. Recognizing that b in Theorem 5 is the same as b in Theorem 4 (both equal 2 when p = 1/2), we may write

Prob( χ(G) < (1+δ) n/log_b n ) → 1.
Since χ_f(G) ≤ χ(G),

Prob( χ_f(G) < (1+ε) n/log_b n ) → 1.

Combining the two bounds establishes the theorem (see figure 6). □

Figure 6: Limits for the fractional coloring number of almost all graphs

1.3 Calculating the Fractional Coloring Number

1.3.1 Column Generation

Column generation is a linear programming technique for problems whose constraint matrices have too many columns to enumerate explicitly: columns are added to the matrix only as they are needed to improve the solution. To use
this technique to solve the (LP),

min 1^T x  subject to  Kx ≥ 1,  x ≥ 0,

we start with enough columns in the K matrix to provide an initial basic feasible solution, such that each node in the graph is represented in at least one independent set. We then run the Simplex algorithm. The algorithm returns two sets of weights. The primal weights in x are the weights on the independent sets represented by the columns of K. The dual weights in y are the weights on the nodes in the graph represented by the rows of K. If one can determine an independent set whose dual weights sum to greater than one, then the dual is not feasible. Hence, a new column, the incidence vector of the generated independent set, is added to K before running the Simplex algorithm again. This process repeats until one cannot determine any independent set whose dual weights sum to greater than one. At that point, the dual is feasible. Furthermore, since Σ x_i = Σ y_i, the dual and primal are optimal by the Strong Duality Theorem.

1.3.2 Fractional Coloring Algorithm

We have an algorithm to calculate the fractional coloring number of a graph using the linear programming method of column generation. The details of the algorithm are described below in steps 1–5.
1. Create the columns of the K matrix by adding the incidence vectors of enough independent sets of G such that each node of G is in at least one independent set. Nodes already represented in independent sets may be added to a subsequently generated independent set to "fill" or maximize the set's size.

2. Run the Simplex algorithm, which will return primal weights in the vector x and dual weights in the vector y.

3. Check to see if the dual is feasible using the dual weights from Simplex.
   a.) If the dual is not feasible, an independent set may be formed whose dual weights sum to greater than one. Proceed to step 4.
   b.) If the dual is feasible, no independent set can be formed whose dual weights sum to greater than one. Proceed to step 5.

4. Add the incidence vector of the generated independent set as a column in the K matrix. Go to step 2.

5. The dual is feasible. In fact, since Σ x_i = Σ y_i, the dual and primal are optimal by the Strong Duality Theorem. Thus, χ_f(G) = Σ x_i.

As an illustration of the fractional coloring algorithm, we will follow the details in calculating χ_f(G) for G = C_5. Let y and x denote the dual and primal weight
vectors, respectively. We begin with enough independent sets to cover the nodes of G:

      1 0 0
      0 1 1
K =   1 0 0
      0 1 0
      0 0 1

Simplex is run, giving the following independent sets, primal weights, and dual weights:

x_{1,3} = x_{2,4} = x_{2,5} = 1;  y = (1, 0, 0, 1, 1)

Is the dual feasible? No. We can add {1,4} as a column of K:

      1 0 0 1
      0 1 1 0
K =   1 0 0 0
      0 1 0 1
      0 0 1 0

Simplex is run, giving the following primal and dual weights:

x_{1,3} = x_{2,4} = x_{2,5} = 1;  x_{1,4} = 0;  y = (0, 0, 1, 1, 1)

Is the dual feasible? No. We can add {3,5} as a column of K:

      1 0 0 1 0
      0 1 1 0 0
K =   1 0 0 0 1
      0 1 0 1 0
      0 0 1 0 1
Simplex is run, giving the following primal and dual weights:

x_{1,3} = x_{2,4} = x_{2,5} = x_{1,4} = x_{3,5} = 0.5;  y = (0.5, 0.5, 0.5, 0.5, 0.5)

Is the dual feasible? Yes; there are no independent sets whose dual weights sum to greater than one. Since the dual is feasible, and Σ x_i = Σ y_i, the dual and primal are optimal by the Strong Duality Theorem. Thus, χ_f(G) = 2.5.

1.3.3 Maximal Independent Set Algorithm

When generating columns in K, one could simply select the first independent set whose dual weights sum to greater than one. To find χ_f(G) most efficiently, though, a "maximal" independent set should be added to K when the dual is not feasible. Ideally, this independent set has a maximum dual weight sum and contains a maximum number of nodes. Although finding a maximal independent set may take more time than selecting the first eligible independent set, it is probable that less total time will be required to solve the (LP) (Chvatal [3]). We have an algorithm which exhaustively searches for a maximal independent set whose dual weights sum to one or greater. This algorithm chooses nodes for the set according to their dual weights and degrees, as described below in steps 1–7. Note that it is appropriate to insert the steps of this algorithm into the fractional coloring algorithm whenever the coloring algorithm generates an independent set.
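For small graphs, the dual-feasibility check that drives the column generation above can be done exactly by enumerating all independent sets. The brute-force sketch below is illustrative only (the thesis program instead generates a maximal independent set heuristically, as described in the steps that follow); run on C_5, it reproduces the decisions of the illustration.

```python
from itertools import combinations

def dual_feasible(nodes, edges, y, eps=1e-9):
    """Return (True, None) if y^T k <= 1 + eps for every independent set k,
    else (False, S) where S is a maximum-dual-weight independent set."""
    best, best_set = 0.0, None
    for r in range(1, len(nodes) + 1):
        for S in combinations(nodes, r):
            if any(tuple(sorted(p)) in edges for p in combinations(S, 2)):
                continue                      # S is not independent
            w = sum(y[v] for v in S)
            if w > best:
                best, best_set = w, S
    return (True, None) if best <= 1 + eps else (False, best_set)

nodes = [1, 2, 3, 4, 5]
edges = {(1, 2), (2, 3), (3, 4), (4, 5), (1, 5)}   # C_5

# Dual weights from the first Simplex run: {1,4} violates feasibility, as above.
print(dual_feasible(nodes, edges, {1: 1, 2: 0, 3: 0, 4: 1, 5: 1}))
# Dual weights from the final run: feasible, so chi_f has been reached.
print(dual_feasible(nodes, edges, {v: 0.5 for v in nodes}))
```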
1. Initially, consider those nodes whose dual weights are of significance. Hence, we are considering a subgraph of G in steps 1–6, where we determine an independent set with nodes of significant weight. We continue by maximizing the size of the independent set with nodes of insignificant weight in step 7.

2. Calculate a "weight ratio" for the available nodes. Let v represent a node in the graph, and y(v) represent the dual weight of that node. Let N(v) be the set of nodes adjacent to v and available for selection. Then

ratio(v) = y(v) / ( Σ_{a ∈ N(v)} y(a) + ε ).

Note that ε is added in the denominator to avoid possible division by zero if v is an isolated node.

3. Calculate the degree of each available node.

4. Select the node with the greatest weight ratio and least degree for the maximal independent set.

5. Disregard the selected node's adjacent nodes in any subsequent steps.

6. Repeat steps 2–5 until no nodes are available for selection from the subgraph of significant nodes.

7. Fill the independent set with whatever nodes of insignificant weight can be added accordingly.
As an illustration of the maximal independent set algorithm, we are given the graph and node weights in figure 7. Let r and d denote the ratio and degree vectors, respectively. We begin to generate a maximal independent set, I, with node 1, so that I = {1}.

r(1,2,3,4,5,6) = (0.5, 0.4, 0.5, 0.5, 0.5, 0.5)
d(1,2,3,4,5,6) = (1, 3, 2, 2, 2, 2)

Figure 7: Maximal independent set algorithm graph (i)

Since node 1 was chosen for the independent set, its adjacent node, 2, is disregarded, and weight ratios and degrees are recalculated on the subgraph in figure 8.
r(3,4,5,6) = (1, 0.5, 0.5, 1)
d(3,4,5,6) = (1, 2, 2, 1)

Figure 8: Maximal independent set algorithm graph (ii)

We add node 3 to the independent set, giving I = {1, 3}. Node 4 is disregarded, and weight ratios and degrees are recalculated on the subgraph in figure 9.

r(5,6) = (1, 1)
d(5,6) = (1, 1)

Figure 9: Maximal independent set algorithm graph (iii)

We add node 5 to the independent set, giving I = {1, 3, 5}. Node 6 is disregarded. No subgraph remains, and there are no additional nodes with insignificant weights to consider, so the independent set I = {1, 3, 5}, with total weight 5/3, may be added as a column in K. Note that maximal independent sets were selected in the illustration of the fractional coloring algorithm.
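The selection loop (steps 2–6) can be sketched as follows. The exact figure 7 graph is not fully recoverable from the text, so the edge list and dual weights below are an assumed reconstruction, chosen so that they reproduce the printed ratio and degree vectors; with them, the sketch selects nodes 1, 3, and 5, as in the illustration.

```python
# Greedy maximal-independent-set heuristic of steps 2-6: repeatedly pick the
# available node with the greatest weight ratio, breaking ties by least degree
# in the remaining subgraph, then discard the chosen node's neighbors.

def greedy_independent_set(nodes, adj, y):
    available = set(nodes)
    chosen = []
    while available:
        def ratio(v):
            denom = sum(y[a] for a in adj[v] & available)
            # The thesis adds a small eps to the denominator instead; an
            # isolated node simply gets the highest possible ratio here.
            return y[v] / denom if denom else float("inf")
        v = min(available,
                key=lambda v: (-ratio(v), len(adj[v] & available), v))
        chosen.append(v)
        available -= adj[v] | {v}
    return chosen

# Hypothetical reconstruction of the figure 7 graph and its dual weights.
nodes = [1, 2, 3, 4, 5, 6]
edge_list = [(1, 2), (2, 3), (2, 6), (3, 4), (4, 5), (5, 6)]
adj = {v: set() for v in nodes}
for a, b in edge_list:
    adj[a].add(b); adj[b].add(a)
y = {1: 1.0, 2: 2.0, 3: 2.0, 4: 2.0, 5: 2.0, 6: 2.0}

print(greedy_independent_set(nodes, adj, y))   # [1, 3, 5], as in the illustration
```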
1.3.4 Fractional Coloring Program

Jennifer Ryan and I wrote a program to implement the fractional coloring algorithm and the maximal independent set algorithm. We also incorporated a version of the Simplex algorithm written by The Rand Corporation. In doing this, we discovered a number of problems, as well as some interesting results.

1.3.4.1 Cycling

It is very rare that the Simplex algorithm cycles. It appears to be common, though, in linear programming when using column generation. Cycling occurred in our program execution due to degenerate pivots. The problem was corrected after coding in Bland's least-index pivot rule.

1.3.4.2 Refinement for Rounding Error

Rounding errors also affected the execution of our code. Passing tolerances and making other checks did not completely remedy the situation. Finally, a change was made to the fractional coloring algorithm in the way it determined the feasibility of the dual problem. According to the dual (LP), the dual is feasible if y^T k ≤ 1 for all independent sets' incidence vectors k. Due to rounding errors, it was possible that the dual weight sum for a particular independent set was slightly greater than one. The independent set was added to the K matrix, and Simplex re-added the dual weights and found that the sum was one, or slightly less than one plus some tolerance. Simplex did not use the independent set and returned the same dual weights from the previous execution. This resulted in looping and repeated objective function values. The problem was remedied after the feasibility test for the dual was changed to y^T k ≤ 1 + ε for all k, where ε is arbitrary. It was also discovered that the value of ε allowed one to determine the accuracy of the fractional coloring number, as suggested by the following theorem.

Theorem 7 [4]: Given any graph on n nodes, find its fractional coloring number by solving an (LP) using column generation. Let y and x be its dual and primal weights, respectively, as returned by the Simplex algorithm. Let I be any independent set in the graph and N be the number of independent set incidence vectors (i.e., columns of K). For any ε > 0, if Σ_{i∈I} y_i ≤ 1 + ε for all I, then the corresponding feasible fractional coloring has

χ_f(G) ≤ Σ_{j=1}^{N} x_j ≤ (1 + ε) χ_f(G).

Proof: Recall that a graph's dual weights represent the weights on its nodes and a graph's primal weights represent the weights on its independent sets. Let y′ = y/(1 + ε). Then Σ_{i∈I} y′_i ≤ 1, since Σ_{i∈I} y_i ≤ 1 + ε. So y′ is a feasible dual solution. Thus, Σ_i y′_i ≤ ω_f(G) = χ_f(G). This implies that Σ_i y_i/(1 + ε) ≤ χ_f(G), or Σ_i y_i ≤ (1 + ε) χ_f(G). Recall that χ_f(G) ≤ Σ_{j=1}^{N} x_j, a feasible solution to the primal. Also note that the Simplex algorithm always returns dual and primal
weights such that Σ_{j=1}^{N} x_j = Σ_{i=1}^{n} y_i. Thus,

χ_f(G) ≤ Σ_{j=1}^{N} x_j ≤ (1 + ε) χ_f(G). □

1.3.4.3 Adjustment for Large Graphs

A final situation of interest occurred when the fractional coloring program was executed on a random graph with 100 nodes. After many executions of the Simplex code, the program took an unreasonable amount of time to find an independent set whose dual weight sum was greater than 1 + ε. It was found that a clique on four nodes contained the only nodes, out of 100, with any significant weight. To avoid further situations like this when generating a maximal independent set, we changed our code to initially consider a subgraph of G containing those nodes whose dual weights were greater than ε/(2n). From this subset of nodes with significant weights, an independent set was formed, and nodes with insignificant weights were added subsequently to maximize the size of the set. We continued to check dual feasibility by determining whether any independent set could be formed with a dual weight sum greater than 1 + ε. Thus, our code actually checks for dual feasibility by determining whether an independent set can be generated from the significant nodes with a dual weight sum greater than 1 + ε/2. This is done so that if all n nodes of a graph have insignificant weights, then no independent set can be generated. Hence, any independent set that passes the check has total dual weight at most 1 + ε/2 + n(ε/2n) = 1 + ε.
Not only did this strategy speed the convergence of the fractional coloring number for large graphs, but similar results were achieved for graphs of all sizes.

1.3.4.4 Runtime Results

The fractional coloring algorithm has been compared to an integer coloring algorithm (Syslo, Narsingh, and Kowalik [8]). It was found, for a random graph on 40 nodes with probability of an edge equal to 0.5, that it took 3.04 seconds to compute χ_f(G) = 7.64846, as compared to the 993.01 seconds it took to compute χ(G) = 8 (see figure 10). As expected, the fractional coloring number was less than the chromatic number for the graph. More importantly, though, the fractional algorithm converged to χ_f(G) in less time than the integer algorithm converged to χ(G). This is of great importance if an algorithm is given a limited amount of time to find the coloring number of a graph. Note that a copy of the fractional coloring program, in FORTRAN, is provided in the Appendix of this paper.
Figure 10: Fractional solution vs. running time (coloring number plotted against time in seconds)

1.4 Efficient Parallelism

There are several reasons to use fractional coloring as opposed to integer coloring in solving large, sparse systems. Probably the most important one is the theoretical speedup of the iterative scheme. In fact, we quantified this theoretical speedup as χ(G)/χ_f(G). Recall the C_5 example, with a possible speedup of 3.0/2.5 = 1.2. In that discussion we had neglected to consider how to achieve this speedup given the columns, or independent sets, in the final K matrix. We will consider some possible criteria using this example, and also examine different iterative methods to determine the elements of efficient parallelism.
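One mechanical part of realizing the speedup is expanding the final weighted sets into a cyclic schedule in which each set appears in proportion to its weight. A hypothetical sketch (set ordering within the cycle is not optimized here; it uses the C_5 sets above and the Grotzsch-graph sets discussed below):

```python
from fractions import Fraction
from functools import reduce
from math import lcm

def schedule(weighted_sets):
    """Repeat each independent set an integer number of times proportional
    to its (rational) weight, using the smallest such integers."""
    sets = [(S, Fraction(w)) for S, w in weighted_sets]
    m = reduce(lcm, (w.denominator for _, w in sets), 1)
    plan = []
    for S, w in sets:
        plan += [S] * int(w * m)
    return plan

# C_5: five sets of weight 1/2 -> a 5-step cycle updating every node twice.
c5 = [({1, 3}, "1/2"), ({2, 4}, "1/2"), ({3, 5}, "1/2"),
      ({1, 4}, "1/2"), ({2, 5}, "1/2")]
print(len(schedule(c5)))          # 5 parallel steps per cycle

# Grotzsch's graph: weights 2/5, 3/10, 1/5 -> a 29-step cycle.
grotzsch = ([({6, 7, 8, 9, 10}, "2/5")]
            + [(S, "3/10") for S in ({2, 4, 7, 9}, {2, 5, 7, 10}, {1, 4, 6, 9},
                                     {3, 5, 8, 10}, {1, 3, 6, 8})]
            + [(S, "1/5") for S in ({1, 3, 11}, {2, 4, 11}, {2, 5, 11},
                                    {1, 4, 11}, {3, 5, 11})])
print(len(schedule(grotzsch)))    # 29 parallel steps per cycle
```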
1.4.1 Updating Schemes

In the 5-cycle example in figure 1, the final run of Simplex gave the independent sets {1,3}, {2,4}, {3,5}, {1,4}, and {2,5}, each with weight 1/2. One criterion for achieving a speedup of the iterative scheme might be to update the nodes in each independent set x_i/χ_f(G) of the time, where x_i is the set's weight. In the C_5 example, each independent set should be updated 0.5/2.5 of the time, i.e., with equivalent consideration. Another criterion might be to find an ordering of the independent sets so that consecutive sets in the iterative scheme do not contain like nodes. An efficient ordering for the C_5 example is simply {1,3}, {2,4}, {3,5}, {1,4}, {2,5}. As graphs increase in size and complexity, though, there may be additional criteria to consider. Furthermore, it is uncertain whether optimal updating schemes even exist for all graphs. Consider the difficulty of an updating scheme in the following example. When calculating χ_f(G) for Grotzsch's graph in figure 5, the final execution of Simplex returns the following independent sets and primal weights:

x_{6,7,8,9,10} = 0.4
x_{2,4,7,9} = x_{2,5,7,10} = x_{1,4,6,9} = x_{3,5,8,10} = x_{1,3,6,8} = 0.3
x_{1,3,11} = x_{2,4,11} = x_{2,5,11} = x_{1,4,11} = x_{3,5,11} = 0.2

To achieve the theoretical speedup of χ(G)/χ_f(G), or 4/2.9, we might need to find an ordering of the independent sets to meet criteria such as ensuring that no
consecutive sets contain like nodes, and that {6,7,8,9,10} is updated 4/29 of the time, {2,4,7,9}, {2,5,7,10}, {1,4,6,9}, {3,5,8,10}, {1,3,6,8} are each updated 3/29 of the time, and {1,3,11}, {2,4,11}, {2,5,11}, {1,4,11}, {3,5,11} are each updated 2/29 of the time. Clearly, this is no simple task.

1.4.2 Iterative Methods

Throughout this discussion, we have referred to the iterative method x(t+1) = f(x_1(t), ..., x_n(t)). This iterative scheme is known as the Jacobi method. A possible improvement to the Jacobi method is computing x_i(t+1) using the values of x_1(t+1), ..., x_{i−1}(t+1), x_{i+1}(t), ..., x_n(t). In other words, we can use the most recent calculations available in our iterative scheme, so that

x_i(t+1) = f_i(x_1(t+1), ..., x_{i−1}(t+1), x_{i+1}(t), ..., x_n(t)),  i = 1, ..., n.

This iterative technique is known as the Gauss-Seidel method. It is possible to accelerate the computation in a Gauss-Seidel iteration using parallel computers. Recall that if the entries a_ij = a_ji = 0 in the system matrix A, then x_j(t) is not needed to compute x_i(t), and x_i(t) is not needed to compute x_j(t). Thus, x_i(t) and x_j(t) can be computed simultaneously, and component values calculated in any prior parallel step of the iteration may be used for component computation in the current parallel step. It is also possible that a Gauss-Seidel iteration is non-parallelizable. For example, if each component x_i depends on all other components x_1, x_2, ..., x_{i−1}, x_{i+1}, ..., x_n, then only one component may be updated at
a time. In a sparse system, though, where a_ij = a_ji = 0 for numerous i and j, it is always desirable to study an efficient parallel implementation of the iterative method. If the system matrix A is strictly diagonally dominant, then both the Jacobi and Gauss-Seidel methods converge to the solution for any arbitrary x(0) (Varga [9]). There is a way to accelerate the rate of convergence for systems which converge by the Gauss-Seidel method; these methods are called Successive Over-Relaxation, i.e., SOR. Furthermore, if A is positive definite and symmetric, then the SOR method is guaranteed to converge for any initial approximation x(0) (Burden and Faires [2]). It would appear that the fastest rate of convergence could occur with a parallel implementation of the SOR iterative method. It was found, though, that an efficient ordering scheme in the parallel implementation was necessary to achieve this desired speed. Prior to observing the effects of parallelism, systems of equations were solved using a random SOR method. Components were randomly updated as follows. After every execution of Simplex, the program returned dual weights in y, and each node i was assigned a "weight ratio" w_i = y_i/Σ y_i, such that 0 < w_i < 1. Thereafter, the SOR algorithm generated a random number r, such that 0 < r < 1. If w_{i−1} ≤ r ≤ w_i and y_i ≠ 0, then node i was updated. Unfortunately, the convergence rate of the random SOR was on the order of a Jacobi iteration, which is much slower than SOR (Fisher [4]). Thus, a desirable
parallelism requires an efficient updating scheme as well as an effectual iterative method.

1.5 Parallelism of Non-Symmetric Systems

Thus far, we have only considered solving Ax = b when A is symmetric. Another interesting problem is posed when A is not symmetric. Consider solving a large, sparse system Ax = b where A is nonsymmetric and parallel computing may be used to speed the rate of convergence of the iterative method. Following the notation used in Bertsekas and Tsitsiklis [1, section 1.2.4], define a corresponding dependency graph G = (V, E), where V = {1, ..., n} and E = {(i,j) | f_j depends on x_i}; that is, an arc runs from i to j if x_i is needed to update x_j. For example, if a given system Ax = b is translated as the following Jacobi iteration,

x_1(t+1) = f_1(x_1(t), x_5(t))
x_2(t+1) = f_2(x_1(t), x_2(t))
x_3(t+1) = f_3(x_2(t), x_3(t))
x_4(t+1) = f_4(x_3(t), x_4(t))
x_5(t+1) = f_5(x_4(t), x_5(t)),

then the corresponding dependency graph, G, is that in figure 11.

Figure 11: Dependency graph
The adjacency matrix for G is

           1 1 0 0 0
           0 1 1 0 0
adj(G) =   0 0 1 1 0
           0 0 0 1 1
           1 0 0 0 1

If we think of independent sets as cycleless sets in the graph, then we can update the nodes in a cycleless set in parallel, using a different processor for each node in the set. Furthermore, the number of processors required per iteration is bounded by the size of the largest cycleless set. A symmetric system can be treated in this nonsymmetric context by generating a directed graph from the undirected graph, replacing each edge with a double arc between its endpoints; nodes connected by an edge could then not possibly be placed in the same cycleless set. As an example of nonsymmetric parallelism, consider the dependency graph in figure 11, where χ(G) = 2. We could choose the cycleless sets {1,2,3,4} and {5}, since no cycle exists among the nodes of either set. In this context, maximizing parallelism is equivalent to optimally coloring a dependency graph, as suggested by the following proposition in Bertsekas and Tsitsiklis [1, Proposition 2.5].

Proposition 1: The following statements are equivalent:

1. There exists an ordering of the variables such that an iteration of the corresponding Gauss-Seidel algorithm can be performed in K parallel steps.
2. There exists a coloring of the dependency graph that uses K colors, with the property that the induced subgraph on each color class has no cycles.

For the dependency graph in Figure 1.1, four processors are required to perform an iteration as follows: update nodes 4, 3, 2, and 1 in the first parallel step, and update node 5 in the second. Note that each node is updated once in χ(G) = 2 parallel steps, or m times in 2m parallel steps.

If we solve the same problem using fractional coloring, χf(G) = 5/4, corresponding to the cycleless sets {1,2,3,4}, {2,3,4,5}, {3,4,5,1}, {4,5,1,2}, and {5,1,2,3}, each with weight 1/4. Four processors are required to perform an iteration as follows: update nodes 4, 3, 2, and 1 in the first parallel step; nodes 5, 4, 3, and 2 in the second; nodes 1, 5, 4, and 3 in the third; nodes 2, 1, 5, and 4 in the fourth; and nodes 3, 2, 1, and 5 in the fifth. Note that each node is updated eight times in ten parallel steps, as opposed to the sixteen parallel steps determined by χ(G). This suggests a theoretical speedup of 2.0/1.25 = 1.6. Note, once again, that regardless of the symmetry of the system or of the iterative method, an efficient updating scheme is required to obtain this theoretical speedup.

1.6  Summary

In this paper, we have studied fractional coloring and its related parameter, χf(G). We proved that ω(G) ≤ ωf(G) = χf(G) ≤ χ(G), using properties of linear programming, and that χf(G) is on the order of χ(G) for random graphs. We also showed that the fractional coloring program can calculate a graph's fractional coloring number to a desired accuracy. We have also studied how fractional coloring can be used to solve sparse, symmetric iterative systems in parallel and speed the convergence of the iterative method. Thereafter, we noted some of the complexities of efficient parallelism regardless of the iterative method. Finally, we showed that fractional coloring can also be used to solve nonsymmetric systems. Open areas of study include efficient updating schemes, a modification of the fractional coloring number program to accommodate nonsymmetric systems, and a program to execute parallel implementations of SOR for solving sparse iterative systems.
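The step counts claimed in Section 1.5 can be double-checked with a small sketch (Python, illustrative only; the variable names below are ours, not those of the FORTRAN program in Appendix C):

```python
# Illustrative check of the Section 1.5 speedup, not part of the thesis code.
from fractions import Fraction

# chi(G) = 2: color classes {1,2,3,4} and {5}, so one full update of all
# five nodes costs 2 parallel steps.
chi_steps_per_update = Fraction(2)

# chi_f(G) = 5/4: five cycleless sets of size 4, each with weight 1/4; over
# five parallel steps every node is updated four times, i.e. 5/4 steps per
# full update of a node.
frac_steps_per_update = Fraction(5, 4)

updates = 8                                     # updates per node
chi_steps = updates * chi_steps_per_update      # parallel steps under chi
frac_steps = updates * frac_steps_per_update    # parallel steps under chi_f
speedup = chi_steps / frac_steps                # = chi(G) / chi_f(G)

print(chi_steps, frac_steps, float(speedup))
```

With eight updates per node this reproduces the text's counts: sixteen parallel steps under the ordinary coloring, ten under the fractional one, for a speedup of 1.6.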
APPENDIX C

THE FRACTIONAL COLORING PROGRAM

C
C     THIS PROGRAM CALCULATES THE FRACTIONAL COLORING NUMBER
C     OF A GRAPH.  THE USER CAN EITHER HAVE A GRAPH READ FROM
C     FOR008.DAT OR ALLOW THE GENERATION OF A RANDOM GRAPH.
C     TO GENERATE A RANDOM GRAPH, ONE MUST ENTER THE NUMBER
C     OF NODES IN THE GRAPH, A SEED FOR THE RANDOM GENERATOR,
C     THE PROBABILITY OF AN EDGE, AND AN EPSILON WHICH WILL
C     DETERMINE THE ACCURACY OF THE CALCULATED FRACTIONAL
C     COLORING NUMBER.
C
C     VARIABLES FOR SIMPLEX
      REAL AM(200,600),BM(200),CM(600)
      INTEGER KM(7)
      REAL PS(600),DS(200),NEWCOL(200),NDS(200)
      REAL BINV(40000)
      INTEGER KB1(600),BASIS(200)
      REAL PE1(200),X1(200)
      EQUIVALENCE (FEAS1,KM(6))
      LOGICAL SIMFLAG,VER1,NEG1,FEAS1
      INTEGER NVARS,NROWS,TEMPVARS,NCOLMNS
      REAL LPVAL,ADDRED,EPS
      INTEGER ADDIND,SEED
      INTEGER SURP1,SURP2
C
C     VARIABLES FOR COLOURING
      INTEGER EDGES(38000),NODEPOINT(200)
      INTEGER COUNTER
      LOGICAL FLAG,MAXSET
C
C     READ IN DATA FROM UNIT 8
C
C     READ(8,*) NODES
C     COUNTER=1
C     DO 10 I=1,NODES
C       NODEPOINT(I)=COUNTER
C       READ(8,*) NADJ,(EDGES(J),J=COUNTER,(COUNTER+NADJ-1))
C       COUNTER=COUNTER+NADJ
C  10 CONTINUE
C     NODEPOINT(NODES+1)=COUNTER
C     READ(8,*) EPS
C
C
C     READ IN RANDOM GRAPH
C
      ACCEPT *,NODES,SEED,PROB,EPS
      CALL RANDOMGRAPH(SEED,PROB,NODES,NODEPOINT,EDGES)
C     CALL ROUTINE TO SET UP THINGS FOR THE SIMPLEX ALGORITHM
      CALL COLINIT(AM,BM,CM,NODES,KB1,BASIS,EDGES,NODEPOINT,
     &             NVARS,SURP1,SURP2)
C     LOOP TO REPETITIVELY ADD COLUMNS STARTS HERE
C     CALL THE SIMPLEX ALGORITHM FOR THE FIRST TIME
      NCOLMNS=200
      NROWS=NODES
      CALL SIMPLE(2,NROWS,NVARS,AM,BM,CM,KM,KB1,DS,BASIS,BINV,
     &            0,ADDRED,VER1,PE1,X1)
      DO 15 J=1,NVARS
        KBJ=KB1(J)
        PS(J)=0
        IF (KBJ.NE.0) PS(J)=X1(KBJ)
   15 CONTINUE
  999 FORMAT(100F5.2)
      CALL PRINTSOL(PS,DS,CM,NVARS,NROWS,SURP1,SURP2,AM)
C     CHECK TO SEE IF THERE IS A NEW COLUMN TO ADD
   11 DO 30 I=1,NODES
        NDS(I)=DS(I)
   30 CONTINUE
      FLAG = MAXSET(EDGES,NODEPOINT,NODES,NDS,NEWCOL,EPS)
      IF (FLAG) THEN
C       ADDING A COLUMN
        PROD=0
        DO 8 I=1,NROWS
          PROD = PROD + DS(I)*NEWCOL(I)
    8   CONTINUE
        ADDRED=PROD + 1
        IF (NVARS.LT.NCOLMNS) THEN
          NVARS=NVARS+1
          TEMPVARS=NVARS
        ELSE
          I=1
   99     IF (KB1(I).EQ.0) THEN
            TEMPVARS=I
            GO TO 88
          END IF
          I=I+1
          GO TO 99
        END IF
   88   DO 13 I=1,NROWS
          AM(I,TEMPVARS)=NEWCOL(I)
   13   CONTINUE
        CM(TEMPVARS)=1
        PS(TEMPVARS)=0
        CALL SIMPLE(0,NROWS,NVARS,AM,BM,CM,KM,KB1,DS,BASIS,BINV,
     &              TEMPVARS,ADDRED,VER1,PE1,X1)
        DO 17 J=1,NVARS
          KBJ=KB1(J)
          PS(J)=0
          IF (KBJ.NE.0) PS(J)=X1(KBJ)
   17   CONTINUE
        CALL PRINTSOL(PS,DS,CM,NVARS,NROWS,SURP1,SURP2,AM)
        GO TO 11
      ELSE
C       WE ARE DONE
        WRITE(9,*) 'THE LAST SOLN WAS OPTIMAL'
        PRINT *, 'THE LAST SOLN WAS OPTIMAL'
      END IF
      END
C
C
      SUBROUTINE RANDOMGRAPH(SEED,PROB,N,POINTERS,EDGES)
      INTEGER SEED,I,J,K,ADJ(200,200),POINTERS(1),EDGES(1)
      REAL PROB,RANDOM
      DO 20 I=2,N
        DO 10 J=1,I-1
          IF(RANDOM(SEED).LT.PROB) THEN
            ADJ(J,I)=1
            ADJ(I,J)=1
          ELSE
            ADJ(J,I)=0
            ADJ(I,J)=0
          END IF
   10   CONTINUE
        ADJ(I,I)=0
   20 CONTINUE
      K=1
      DO 40 I=1,N
        POINTERS(I)=K
        DO 30 J=1,N
          IF(ADJ(I,J).EQ.1) THEN
            EDGES(K)=J
            K=K+1
          END IF
   30   CONTINUE
   40 CONTINUE
      POINTERS(N+1)=K
      END
C
C
C
      FUNCTION RANDOM(SEED)
      INTEGER SEED
      SEED=MOD(25173*SEED+13849,65536)
      RANDOM=(1.0*SEED)/65536
      END
C
      SUBROUTINE PRINTSOL(PS,DS,CM,NVARS,NOBS,SURP1,SURP2,AM)
      REAL PS(1),DS(1),CM(1),AM(200,1)
      INTEGER SURP1,SURP2
C     PRINT OUT THE PRIMAL SOLUTION TO THE LP
      WRITE(9,900)
      PRINT 900
      WRITE(9,910)
      PRINT 910
      DO 12 J=1,NVARS
        WRITE(9,920) J,PS(J)
        PRINT 920,J,PS(J)
        IF ((PS(J).GT.0.0) .AND.
     C      ((J.LT.SURP1).OR.(J.GT.SURP2))) THEN
          WRITE(9,990)
          PRINT 990
          DO 13 I=1,NOBS
            IF (AM(I,J).GT.0.0) THEN
              PRINT 991,I
              WRITE(9,991) I
            END IF
   13     CONTINUE
        END IF
   12 CONTINUE
C     PRINT OUT THE DUAL SOLUTION TO THE LP
      WRITE(9,930)
      PRINT 930
      WRITE(9,940)
      PRINT 940
      DO 14 I=1,NOBS
        WRITE(9,920) I,-1*DS(I)
        PRINT 920,I,-1*DS(I)
   14 CONTINUE
C     CALCULATE AND PRINT OUT THE OBJECTIVE VALUE OF THE LP
      VALUE=0
      DO 16 J=1,NVARS
        VALUE=VALUE+CM(J)*PS(J)
   16 CONTINUE
      WRITE(9,950) VALUE
      PRINT 950,VALUE
  900 FORMAT (2X,'THE PRIMAL SOLUTION IS')
  910 FORMAT (9X,'J',20X,'XJ')
  920 FORMAT (7X,I3,13X,F15.8)
  930 FORMAT (2X,'THE DUAL SOLUTION IS')
  940 FORMAT (9X,'I',20X,'YI')
  950 FORMAT (2X,'THE OBJECTIVE VALUE IS',3X,F15.8)
  990 FORMAT (2X,'THE INDEPENDENT SET IS NODES:')
  991 FORMAT (2X,I3)
      RETURN
      END
C
C
C
      SUBROUTINE COLINIT(A,B,C,NODES,KB,BASIS,EDGES,POINTER,
     &                   NVARS,SURP1,SURP2)
      REAL A(200,1),B(1),C(1)
C
C
      INTEGER EDGES(1),POINTER(1),KB(1),BASIS(1),NEW(200)
      REAL WEIGHTS(200),SET(200)
      LOGICAL FLAG,MAXSET,FIRST,FOUND,COVERED(200)
      INTEGER SURP1,SURP2
C
C     SET UP A MATRIX AND BASIS VECTORS
      DO 10 I=1,NODES
        WEIGHTS(I)=2
        BASIS(I) = 0
   10 CONTINUE
      NVARS=0
      NFEAS=0
      DO 11 I=1,NODES
        NEW(I) = 0
   11 CONTINUE
   15 IF (NFEAS.EQ.NODES) GO TO 21
      NVARS=NVARS+1
      FLAG = MAXSET(EDGES,POINTER,NODES,WEIGHTS,SET,EPS)
      DO 20 I=1,NODES
        A(I,NVARS)=SET(I)
        IF ((SET(I).EQ.1.0).AND.(WEIGHTS(I).EQ.2.0)) THEN
          WEIGHTS(I)=0
          NFEAS=NFEAS+1
          NEW(I) = NVARS
        END IF
   20 CONTINUE
      GO TO 15
   21 DO 25 I=1,NODES
        COVERED(I)=.FALSE.
   25 CONTINUE
      DO 29 J=NVARS,1,-1
        FOUND=.FALSE.
        DO 33 I=1,NODES
          IF (FOUND) GO TO 35
          IF ((.NOT.COVERED(I)).AND.(NEW(I).EQ.J)) THEN
            FOUND=.TRUE.
            BASIS(I)=J
            KB(J)=I
          END IF
   33   CONTINUE
   35   IF (FOUND) THEN
          DO 37 I=1,NODES
            IF (A(I,J).NE.0.0) COVERED(I)=.TRUE.
   37     CONTINUE
        END IF
   29 CONTINUE
      DO 22 I=1,NODES
        IF (BASIS(I).EQ.0) THEN
          BASIS(I) = NVARS+I
          KB(NVARS+I) = I
        ELSE
          KB(NVARS+I) = 0
        END IF
   22 CONTINUE
C     SET UP COSTS
  100 DO 30 I=1,NVARS
        C(I)=1
   30 CONTINUE
C     SET UP RIGHT HAND SIDE
      DO 40 I=1,NODES
        B(I)=1
   40 CONTINUE
C     SET UP SURPLUSES
      DO 50 I=1,NODES
        DO 60 J=1,NODES
          A(I,NVARS+J)=0
   60   CONTINUE
        A(I,NVARS+I)=-1
   50 CONTINUE
      SURP1=NVARS+1
      SURP2=NVARS+NODES
C     COSTS FOR SURPLUSES
      DO 70 I=1,NODES
        C(NVARS+I)=0
   70 CONTINUE
      NVARS=NVARS+NODES
      RETURN
      END
C
C
      SUBROUTINE SIMPLE(INFLAG,MX,NN,A,B,C,KOUT,KB,P,JH,E,
     &                  NEWIND,NEWRED,VER,PE,X)
      DIMENSION A(200,1),B(1),C(1)
      DIMENSION KOUT(7),P(1),JH(1),XOLD(200)
      DIMENSION X(1),Y(200),PE(1),E(1),KO(7),KB(1),OLDKB(600)
      EQUIVALENCE (K,KO(1)),(ITER,KO(2)),(INVC,KO(3)),
     1(NUMVR,KO(4)),(NUMPV,KO(5)),(FEAS,KO(6)),(JT,KO(7))
      EQUIVALENCE (XX,LL)
C     THE FOLLOWING DIMENSION SHOULD BE SAME HERE AS IN CALLER.
C
      INTEGER NEWIND
      REAL NEWRED
      LOGICAL NOPIV,FEAS,VER,NEG,TRIG,KQ,ABSC
      M=MX
      N=NN
      TEXP=.5**14
      MAXDEG = 10
      NDEG = 0
      NCUT=4*M+10
      NVER=M/2+5
      M2=M**2
      NOPIV = .TRUE.
      IF (NEWIND.EQ.0) GO TO 1348
C     ELSE NEW COLUMN TO ADD, NO NEED TO SELECT PIVOT
C     RESET SOME OUTPUTS
      KO(1)=0
      KO(2)=0
      KO(3)=0
      JT = NEWIND
      BB = NEWRED
      GO TO 600
C     MOVE INPUTS ZERO OUTPUTS
 1348 TRIG = .FALSE.
      DO 1341 I=1,7
        KO(I)=0
 1341 CONTINUE
      IF(INFLAG.NE.0) GO TO 1400
C*    'NEW' START PHASE ONE WITH SINGLETON BASIS
      DO 1402 J=1,N
        KB(J)=0
        KQ=.FALSE.
        DO 1403 I=1,M
          IF(A(I,J).EQ.0.0) GO TO 1403
          IF(KQ.OR.A(I,J).LT.0.0) GO TO 1402
          KQ=.TRUE.
 1403   CONTINUE
        KB(J)=1
 1402 CONTINUE
 1400 IF(INFLAG.GT.1) GO TO 1320
      DO 1401 I=1,M
        JH(I)=-1
 1401 CONTINUE
C*    'VER' CREATE INVERSE FROM 'KB' AND 'JH'
 1320 VER=.TRUE.
 1121 INVC=0
 1122 NUMVR=NUMVR+1
      DO 1101 I=1,M2
        E(I)=0.0
 1101 CONTINUE
      MM=1
      DO 1113 I=1,M
        E(MM)=1.0
        PE(I)=0.0
        X(I)=B(I)
        IF(JH(I).NE.0) JH(I)=-1
        MM=MM+M+1
 1113 CONTINUE
C     FORM INVERSE
      JT=0
11101 JT=JT+1
      IF(KB(JT).EQ.0) GO TO 1102
      GO TO 600
C     600 CALL JMY
C     CHOOSE PIVOT
 1114 TY=0.0
      DO 1104 I=1,M
        IF(JH(I).NE.-1) GO TO 1104
        IF(ABS(Y(I)).LE.TY) GO TO 1104
        IR=I
        TY=ABS(Y(I))
 1104 CONTINUE
      KB(JT)=0
C     TEST PIVOT
      IF(TY.LE.TPIV) GO TO 1102
C     PIVOT
      JH(IR)=JT
      KB(JT)=IR
      GO TO 900
C     900 CALL PIV
 1102 CONTINUE
      IF(JT.LT.N) GO TO 11101
C     RESET ARTIFICIALS
      DO 1109 I=1,M
        IF(JH(I).EQ.-1) JH(I)=0
 1109 CONTINUE
 1200 VER=.FALSE.
C     PERFORM ONE ITERATION
C*    'XCK' DETERMINE FEASIBILITY
      FEAS=.TRUE.
      NEG=.FALSE.
      DO 1201 I=1,M
        IF(X(I).LT.0.0) GO TO 1250
        IF(JH(I).EQ.0) FEAS=.FALSE.
 1201 CONTINUE
C*    'GET' GET APPLICABLE PRICES
      IF(.NOT.FEAS) GO TO 501
C     PRIMAL PRICES
      DO 503 I=1,M
        P(I)=PE(I)
  503 CONTINUE
      ABSC=.FALSE.
      GO TO 599
C     COMPOSITE PRICES
 1250 FEAS=.FALSE.
      NEG=.TRUE.
  501 DO 504 J=1,M
        P(J)=0.
  504 CONTINUE
      ABSC=.TRUE.
      DO 505 I=1,M
        MM=I
        IF(X(I).GE.0.0) GO TO 507
        ABSC=.FALSE.
        DO 508 J=1,M
          P(J)=P(J)+E(MM)
          MM=MM+M
  508   CONTINUE
        GO TO 505
  507   IF(JH(I).NE.0) GO TO 505
        IF(X(I).NE.0) ABSC=.FALSE.
        DO 510 J=1,M
          P(J)=P(J)-E(MM)
          MM=MM+M
  510   CONTINUE
  505 CONTINUE
C*    'MIN' FIND MINIMUM REDUCED COST
  599 JT=0
      IF (NDEG.GE.MAXDEG) JT = NVARS+1
      BB=0.0
      DO 701 J=1,N
C       SKIP COLUMNS IN BASIS
        IF(KB(J).NE.0) GO TO 701
        DT=0.0
        DO 303 I=1,M
          IF(A(I,J).NE.0.0) DT=DT+P(I)*A(I,J)
  303   CONTINUE
        IF(FEAS) DT=DT+C(J)
        IF(ABSC) DT=-ABS(DT)
        IF((NDEG.GE.MAXDEG.AND.DT.LT.0.0.AND.J.LT.JT).OR.
     &     (NDEG.LT.MAXDEG.AND.(DT.LT.BB))) THEN
          IF (NDEG.GE.MAXDEG) WRITE(6,*) 'ANTICYCLING KICKING IN'
          BB=DT
          JT=J
        END IF
  701 CONTINUE
C     TEST FOR NO PIVOT COLUMN
      IF((JT.LE.0.AND.NDEG.LT.MAXDEG).OR.
     &   (JT.GE.NVARS+1.AND.NDEG.GE.MAXDEG)) GO TO 203
C     TEST FOR ITERATION LIMIT EXCEEDED
  590 IF(ITER.GE.NCUT) GO TO 160
      ITER=ITER+1
C*    'JMY' MULTIPLY INVERSE TIMES A(.,JT)
C
  600 DO 610 I=1,M
        Y(I)=0.0
  610 CONTINUE
      LL=0
      COST=C(JT)
      DO 605 I=1,M
        AIJT=A(I,JT)
        IF(AIJT.EQ.0.) GO TO 602
        COST=COST+AIJT*PE(I)
        DO 606 J=1,M
          LL=LL+1
          Y(J)=Y(J)+AIJT*E(LL)
  606   CONTINUE
        GO TO 605
  602   LL=LL+M
  605 CONTINUE
C     COMPUTE PIVOT TOLERANCE
      YMAX=0.0
      DO 620 I=1,M
        YMAX=AMAX1(ABS(Y(I)),YMAX)
  620 CONTINUE
      TPIV=YMAX*TEXP
C     RETURN TO INVERSION ROUTINE, IF INVERTING
      IF(VER) GO TO 1114
C     COST TOLERANCE CONTROL
      IF(TRIG.AND.BB.GE.-TPIV) GO TO 203
      TRIG=.FALSE.
      IF(BB.GE.-TPIV) THEN
        TRIG=.TRUE.
      ELSE
        NOPIV = .FALSE.
      END IF
C*    'ROW' SELECT PIVOT ROW
C     AMONG EQS. WITH X=0, FIND MAXIMUM Y
C     AMONG ARTIFICIALS, OR, IF NONE,
C     GET MAX POSITIVE Y(I) AMONG REALS.
 1000 IF (NDEG.GT.10) THEN
        IR=0
        IND = M+1
        DO 1051 I=1,M
          IF (X(I).NE.0.0.OR.Y(I).LE.TPIV) GO TO 1051
          IF (JH(I).GE.IND) GO TO 1051
          IND=JH(I)
          IR=I
 1051   CONTINUE
      ELSE
        IR=0
        AA=0.0
        KQ=.FALSE.
        DO 1050 I=1,M
          IF(X(I).NE.0.0.OR.Y(I).LE.TPIV) GO TO 1050
          IF(JH(I).EQ.0) GO TO 1044
          IF(KQ) GO TO 1050
 1045     IF(Y(I).LE.AA) GO TO 1050
          GO TO 1047
 1044     IF(KQ) GO TO 1045
          KQ=.TRUE.
 1047     AA=Y(I)
          IR=I
 1050   CONTINUE
      END IF
      IF(IR.NE.0) GO TO 1099
 1001 AA=1.0E+20
C     FIND MIN. PIVOT AMONG POSITIVE EQUATIONS
      DO 1010 I=1,M
        IF(Y(I).LE.TPIV.OR.X(I).LE.0.0.OR.Y(I)*AA.LE.X(I))
     &    GO TO 1010
        AA=X(I)/Y(I)
        IR=I
 1010 CONTINUE
      IF(.NOT.NEG) GO TO 1099
C     FIND PIVOT AMONG NEGATIVE EQUATIONS,
C     IN WHICH X/Y IS LESS THAN THE
C     MINIMUM X/Y IN THE POSITIVE EQUATIONS
C     THAT HAS THE LARGEST ABSF(Y).
 1016 BB=-TPIV
      DO 1030 I=1,M
        IF(X(I).GE.0..OR.Y(I).GE.BB.OR.Y(I)*AA.GT.X(I))
     &    GO TO 1030
        BB=Y(I)
        IR=I
 1030 CONTINUE
C     TEST FOR NO PIVOT ROW.
 1099 IF(IR.LE.0) GO TO 207
C*    'PIV' PIVOT ON (IR,JT)
  900 IF (X(IR).LE.0) THEN
        NDEG=NDEG+1
      ELSE
        NDEG=0
      END IF
C     LEAVE TRANSFORMED COLUMN IN Y(I).
      NUMPV=NUMPV+1
      YI=-Y(IR)
      Y(IR)=-1.0
      LL=0
C     TRANSFORM INVERSE
      DO 904 J=1,M
        L=LL+IR
        IF(E(L).NE.0.0) GO TO 905
        LL=LL+M
        GO TO 904
  905   XY=E(L)/YI
        PE(J)=PE(J)+COST*XY
        E(L)=0.0
        DO 906 I=1,M
          LL=LL+1
          E(LL)=E(LL)+XY*Y(I)
  906   CONTINUE
  904 CONTINUE
C     TRANSFORM X.
      XY=X(IR)/YI
      DO 908 I=1,M
        XNEW=X(I)+XY*Y(I)
        IF(VER.OR.XNEW.GE.0..OR.Y(I).GT.TPIV.OR.X(I).LT.0.)
     &    GO TO 907
        X(I)=0.0
        GO TO 908
  907   X(I)=XNEW
  908 CONTINUE
C     RESTORE Y(IR)
      Y(IR)=YI
      X(IR)=XY
      IF(VER) GO TO 1102
  221 IA=JH(IR)
      IF(IA.GT.0) KB(IA)=0
  213 KB(JT)=IR
      JH(IR)=JT
      IF(NUMPV.LE.M) GO TO 1200
C     TEST FOR INVERSION ON THIS ITERATION.
      INVC=INVC+1
      IF(INVC.EQ.NVER) GO TO 1320
      GO TO 1200
C*    END OF ALGORITHM, SET EXIT VALUES.
C     INFINITE SOLUTION.
  207 K=2
      GO TO 250
C     PROBLEM IS CYCLING.
  160 K=4
      GO TO 250
C     FEASIBLE OR INFEASIBLE SOLUTION.
  203 K=0
  250 IF(.NOT.FEAS) K=K+1
C     SET 'KOUT'.
 1392 DO 1393 I=1,7
        KOUT(I)=KO(I)
 1393 CONTINUE
      RETURN
      END
C
C
C
      LOGICAL FUNCTION MAXSET(EDGES,NODEPOINT,NODENUM,INWT,
     C                        INDSET,EPS)
      INTEGER FILLNUM,I,J,K,NODE,NNUM,LEVEL,NODENUM
      INTEGER EDGES(1),NODEPOINT(1),LIST(200)
      INTEGER EDGE(38000),PT(200),FILLERNODE(200)
      INTEGER SIGNODE(200)
      REAL INWT(1),WT(200),INDSET(1),EPS
      LOGICAL FLAG,SETFOUND
      DO 1 I=1,NODENUM
        SIGNODE(I)=0
        PT(I)=0
        WT(I)=0
        INDSET(I)=0
    1 CONTINUE
      DO 2 I=1,38000
        EDGE(I)=0
    2 CONTINUE
      K=1
      FILLNUM=0
      NNUM=0
      DO 5 I=1,NODENUM
        IF (ABS(INWT(I)).GT.(EPS/(2*NODENUM))) THEN
          NNUM=NNUM+1
          PT(NNUM)=K
          SIGNODE(NNUM)=I
          WT(NNUM)=INWT(I)
C         IF (NODEPOINT(I).NE.(NODEPOINT(I+1)-1)) THEN
            DO 3 J=NODEPOINT(I),NODEPOINT(I+1)-1
              IF (INWT(EDGES(J)).GT.(EPS/(2*NODENUM))) THEN
                EDGE(K)=EDGES(J)
                K=K+1
              END IF
    3       CONTINUE
C         END IF
        ELSE
          FILLNUM=FILLNUM+1
          FILLERNODE(FILLNUM)=I
        END IF
    5 CONTINUE
      PT(NNUM+1)=K
      IF (NNUM.NE.1) THEN
        DO 7 I=1,K-1
          J=1
    6     IF (EDGE(I).EQ.SIGNODE(J)) THEN
            EDGE(I)=J
          ELSE
            J=J+1
            GO TO 6
          END IF
    7   CONTINUE
      END IF
      FLAG=.FALSE.
      FLAG=SETFOUND(INDSET,NNUM,WT,PT,EDGE,SIGNODE,
     C              EPS,LIST,LEVEL)
      IF (FLAG) THEN
        CALL FILLGRAPH(FILLNUM,LEVEL,LIST,FILLERNODE,
     C                 INDSET,NODEPOINT,EDGES,NNUM,SIGNODE)
        MAXSET=.TRUE.
      ELSE
        MAXSET=.FALSE.
      END IF
      RETURN
      END
C
C
      LOGICAL FUNCTION SETFOUND(INDSET,NNUM,WT,PT,EDGE,
     C                          SIGNODE,EPS,LIST,LEVEL)
      INTEGER I,J,NODE,NNUM,GETVAR,LEVEL
      INTEGER SUBG(200),LIST(200)
      INTEGER EDGE(38000),PT(200),SIGNODE(200)
      REAL WT(200),INDSET(200),AVAILWEIGHT,EPS
      LEVEL=0
      SETFOUND=.FALSE.
      IF (NNUM.EQ.1) THEN
        SETFOUND=.TRUE.
        LEVEL=1
        LIST(1)=SIGNODE(1)
        INDSET(SIGNODE(1))=1
        RETURN
      END IF
      AVAILWEIGHT=0.0
      DO 10 I=1,NNUM
        SUBG(I)=-1
        AVAILWEIGHT=AVAILWEIGHT + WT(I)
   10 CONTINUE
   16 IF (AVAILWEIGHT.GT.(1.0+(EPS/2))) THEN
        NODE = GETVAR(NNUM,SUBG,PT,EDGE,WT)
        IF (NODE.EQ.-1) THEN
          AVAILWEIGHT = 0.0
          DO 22 I=1,LEVEL
            AVAILWEIGHT = AVAILWEIGHT + WT(LIST(I))
   22     CONTINUE
          IF (AVAILWEIGHT.LE.(1.0+(EPS/2))) THEN
            GO TO 25
          ELSE
            SETFOUND=.TRUE.
            RETURN
          END IF
        END IF
        LEVEL=LEVEL + 1
        LIST(LEVEL)=NODE
        SUBG(NODE)=LEVEL
C       IF (PT(NODE).NE.(PT(NODE+1)-1)) THEN
          DO 20 I=PT(NODE),PT(NODE+1)-1
            J=EDGE(I)
            IF (SUBG(J).EQ.-1) THEN
              SUBG(J)=LEVEL
              AVAILWEIGHT=AVAILWEIGHT-WT(J)
            END IF
   20     CONTINUE
C       END IF
      ELSE
        IF (LEVEL.LE.0) THEN
          RETURN
        END IF
   25   NODE=LIST(LEVEL)
        AVAILWEIGHT=AVAILWEIGHT-WT(NODE)
        SUBG(NODE)=LEVEL-1
        DO 24 I=1,NNUM
          IF (SUBG(I).EQ.LEVEL) THEN
            SUBG(I)=-1
            AVAILWEIGHT=AVAILWEIGHT+WT(I)
          END IF
   24   CONTINUE
        LEVEL=LEVEL-1
      END IF
      GO TO 16
      END
      INTEGER FUNCTION GETVAR(NNUM,SUBG,PT,EDGE,WT)
      INTEGER SUBG(200),PT(200),EDGE(38000)
      INTEGER NNUM,I,J,NODEDEG,LOWDEG
      REAL WT(200),DEN(200),RAT(200),HIGH
      HIGH=0.0
      LOWDEG=1000000
      GETVAR=-1
      DO 40 I=1,NNUM
        IF (SUBG(I).EQ.-1) THEN
          NODEDEG=0
          DEN(I)=0.0
C         IF (PT(I).NE.(PT(I+1)-1)) THEN
            DO 30 J=PT(I),(PT(I+1)-1)
              IF (SUBG(EDGE(J)).EQ.-1) THEN
                DEN(I)=DEN(I) + WT(EDGE(J))
                NODEDEG=NODEDEG + 1
              END IF
   30       CONTINUE
C         END IF
          RAT(I)=WT(I)/(DEN(I)+0.000001)
          IF ((RAT(I).EQ.HIGH .AND. NODEDEG.LT.LOWDEG) .OR.
     &        RAT(I).GT.HIGH) THEN
            HIGH=RAT(I)
            GETVAR=I
            LOWDEG=NODEDEG
          END IF
        END IF
   40 CONTINUE
      END
C
C
      SUBROUTINE FILLGRAPH(FILLNUM,LEVEL,LIST,FILLERNODE,
     C                     INDSET,NODEPOINT,EDGES,NNUM,SIGNODE)
      INTEGER FILLNUM,LEVEL,LIST(1),FILLERNODE(1)
      INTEGER NODEPOINT(1),EDGES(1),NNUM,SIGNODE(1),J
      REAL INDSET(1)
      IF (NNUM.EQ.1) THEN
        GO TO 1988
      END IF
      DO 1987 I=1,LEVEL
        LIST(I)=SIGNODE(LIST(I))
        INDSET(LIST(I))=1
 1987 CONTINUE
 1988 DO 1991 I=1,FILLNUM
        J=0
 1989   J=J+1
        IF (J.GT.LEVEL) THEN
          LEVEL=LEVEL+1
          LIST(LEVEL)=FILLERNODE(I)
          INDSET(LIST(LEVEL))=1
        ELSE
          K=NODEPOINT(LIST(J))
 1990     IF (K.GE.NODEPOINT(LIST(J)+1)) THEN
            GO TO 1989
          END IF
          IF (FILLERNODE(I).EQ.EDGES(K)) THEN
            GO TO 1991
          ELSE
            K=K+1
          END IF
          GO TO 1990
        END IF
 1991 CONTINUE
      END
REFERENCES

[1] Bertsekas, Dimitri, and Tsitsiklis, John N. Parallel and Distributed Computation: Numerical Methods. Englewood Cliffs, New Jersey: Prentice-Hall, Inc., 1989.

[2] Burden, Richard L., and Faires, J. Douglas. Numerical Analysis. 3rd ed. Boston: Prindle, Weber & Schmidt, 1985.

[3] Chvatal, Vasek. Linear Programming. San Francisco: W.H. Freeman, 1983.

[4] Fisher, David C., private communication.

[5] Hell, P., and Roberts, F. Analogues of the Shannon Capacity of a Graph. Ann. Disc. Math. 12 (1982), 155-162.

[6] Larsen, Michael, Propp, James, and Ullman, Daniel. The Asymptotic Chromatic Number of a Graph, preprint.

[7] Palmer, Edgar M. Graphical Evolution: An Introduction to the Theory of Random Graphs. New York: John Wiley & Sons, 1985.

[8] Syslo, Maciej M., Deo, Narsingh, and Kowalik, Janusz S. Discrete Optimization Algorithms with PASCAL Programs. Englewood Cliffs, New Jersey: Prentice-Hall, Inc., 1983.

[9] Varga, Richard S. Matrix Iterative Analysis. Englewood Cliffs, New Jersey: Prentice-Hall, Inc., 1962.

[10] Winston, Wayne L. Operations Research: Applications and Algorithms. Boston: Duxbury Press, 1987.

