Citation
Fast multigrid methods for transport equations

Material Information

Title:
Fast multigrid methods for transport equations
Creator:
Yang, Gaoming
Place of Publication:
Denver, CO
Publisher:
University of Colorado Denver
Publication Date:
Language:
English
Physical Description:
124 leaves : ; 28 cm

Subjects

Subjects / Keywords:
Multigrid methods (Numerical analysis) ( lcsh )
Neutron transport theory ( lcsh )
Multigrid methods (Numerical analysis) ( fast )
Neutron transport theory ( fast )
Genre:
bibliography ( marcgt )
theses ( marcgt )
non-fiction ( marcgt )

Notes

Bibliography:
Includes bibliographical references (leaves 122-124).
Thesis:
Applied mathematics
General Note:
Department of Mathematical and Statistical Sciences
Statement of Responsibility:
by Gaoming Yang.

Record Information

Source Institution:
University of Colorado Denver
Holding Location:
Auraria Library
Rights Management:
All applicable rights reserved by the source institution and holding location.
Resource Identifier:
40326522 ( OCLC )
ocm40326522
Classification:
LD1190.L622 1993d .Y36 ( lcc )



Full Text

PAGE 1

FAST MULTIGRID METHODS FOR TRANSPORT EQUATIONS by Gaoming Yang B.S., Nanjing Aeronautical Institute, 1983 M.S., Nanjing Aeronautical Institute, 1987 M.S., University of Colorado at Denver, 1992 A thesis submitted to the Faculty of the Graduate School of the University of Colorado in partial fulfillment of the requirements for the degree of Doctor of Philosophy Applied Mathematics 1993

PAGE 2

Gaoming Yang (Ph.D., Applied Mathematics) Fast Multigrid Methods for Neutron Transport Equations Thesis directed by Professor Thomas A. Manteuffel
The purpose of this study is to develop fast multigrid algorithms for the neutron transport equation. We present a multigrid algorithm that is much faster than DSA and avoids some of the problems associated with DSA. In general, the DSA algorithm requires a certain degree of consistency between the SN spatial discretization scheme and the diffusion spatial discretization scheme. DSA works because the transport model becomes diffusive in the thick limit. Such problems are usually amenable to multigrid techniques directly. Moreover, the difficulties associated with deriving and solving consistent DSA diffusion equations increase with higher dimensions. The multigrid techniques described here have a natural generalization to higher dimensions and, as mentioned previously, can be used in conjunction with the multigrid algorithm for anisotropic problems (Morel and Manteuffel, 1991). While any generalization to higher dimensions is fraught with peril, we feel that the multigrid algorithms presented here can be successfully extended. Although similar in spirit, our multigrid method is much more efficient than those of Nowak (1988), Nowak and Larsen (1990) and Nowak, Morel and Harris (1989). These methods require an expensive relaxation and do not achieve a

PAGE 3

convergence factor that approaches zero in the thick diffusion limit. In this study, the derivation of the transport equations and their properties is introduced. The transport equations are discretized by finite element methods using the Modified Linear Discontinuous (MLD) discretization. Two relaxation schemes are presented. It is shown that x-line relaxation is not suitable for multigrid since, in the thick limit, this relaxation will not smooth any spatial high-frequency errors. Two-cell μ-line relaxation, however, reduces the error in the thick limit to a function that is piecewise linear across the two cells and independent of angle. For transport equations without absorption, using linear interpolation and full-weighting restriction operators, the coarse grid operators still preserve the MLD form. Extensive theoretical and numerical analyses are presented in the study. It is proved for the S2 case that two-cell μ-line relaxation reduces the error to the range of interpolation, so the multigrid solver is exact provided that the coarsest grid is solved exactly. For SN, N > 2, both theoretical and numerical analyses show that the multigrid algorithms are almost exact solvers in both the thin and thick limits. For transport equations with absorption, two-cell μ-line relaxation no longer reduces the error to a piecewise linear function across two cells. Instead, the errors after relaxation are kinked and sublinear. Thus, linear interpolation is not suitable. A kinked, relaxation-induced interpolation was devised for this case. The study shows that this interpolation is very accurate and suitable for the multigrid algorithms. The study also shows that although the coarser

PAGE 4

grid operators with the relaxation-induced interpolation are not MLD, they have the same structure as the MLD operator. Therefore, μ-line relaxation on coarser grids can be implemented at the same cost as on the finest grid. The numerical results show that the modified multigrid algorithm for the transport equation with absorption is also very efficient and highly parallelizable. This abstract accurately represents the content of the candidate's thesis. I recommend its publication. Tom Manteuffel

PAGE 5

ACKNOWLEDGEMENTS
I would like to take this opportunity to thank my advisor, Professor Tom Manteuffel, for the guidance and encouragement that he provided during all phases of the preparation of this thesis. Throughout my entire doctoral study, he provided exceptional insight and guidance in both my academic and daily life. I also give my sincerest thanks to Professor Steve McCormick for the countless discussions and helpful guidance. Many thanks also go to the other committee members, Professor Tom Russell, Professor Jan Mandel, and Professor John Trapp, for their contribution and effort in every step of the thesis preparation. Special thanks and appreciation go to Professor Chaoqun Liu for his technical guidance, support and encouragement. His broad knowledge of computational mathematics and engineering has led to many useful discussions. I would also like to thank Debbie Wangerin for the help she offered throughout my study in the department.

PAGE 6

CONTENTS
List of Figures ix
List of Tables xi
1 Introduction 1
1.1 Motivation and Background 1
1.2 Notations and Conventions 4
2 Derivation of Transport Equations and Overview 5
3 Isotropic Transport Equations and Multigrid Algorithm 8
3.1 Introduction 8
3.2 The Discrete Model 10
3.3 Relaxation 18
3.3.1 x-line Relaxation 18
3.3.2 μ-line Relaxation 20
3.4 Multigrid Solution Technique 39
3.5 Analysis of Convergence Factors of the Multigrid Algorithm 43
3.6 Computational Results 66
3.6.1 Multigrid Convergence Factors 66
3.6.2 DSA Convergence Factors 69

PAGE 7

4 Transport Equation with Absorption and Multigrid Scheme 75
4.1 Introduction 75
4.2 Discrete Model 76
4.3 Relaxation 78
4.4 Multigrid Algorithm 97
4.5 Properties of Kinked Interpolation 100
4.6 Computational Results 103
4.6.1 Multigrid Convergence Factors 103
4.6.2 DSA Convergence Factors 106
5 Summary 120
Bibliography 122

PAGE 8

LIST OF FIGURES
3.1 Computational Domain 12
3.2a Bases for V_h^+ 13
3.2b Bases for V_h^- 14
3.3 Fine grid cells 2i-1, 2i and coarse grid cell i 26
4.1 Piecewise Discontinuous Elements 77
4.2 Error Distribution for Positive Angle, p = 1 88
4.3 Error Distribution for Negative Angle, p = 1 88
4.4 Error Distribution for Positive Angle, p = 2 90
4.5 Error Distribution for Negative Angle, p = 2 90
4.6 Error Distribution for Positive Angle, p = 3 92
4.7 Error Distribution for Negative Angle, p = 3 92
4.8 Interpolated Approximation, p = 1 101
4.9 Interpolated Approximation, p = 2 101

PAGE 9

4.10 Interpolated Approximation, p = 3 102

PAGE 10

LIST OF TABLES
3.1 Convergence Factors for Uniform Grid 67
3.2 Convergence Factors for Uniform Grid: Worst Case 68
3.3 Convergence Factors for Nonuniform Grids 69
3.4 Convergence Factors for the Multigrid Algorithm 71
3.5 Convergence Factors for the DSA Algorithm 72
3.6 Adjusted Convergence Factors for the DSA Algorithm 74
4.1 Convergence Factors for Uniform Grid 103
4.2 Convergence Factors for Linear Interpolation 104
4.3 Convergence Factors for Kinked Interpolation 104
4.4 Convergence Factors for Kinked Interpolation, γ = 0.999 105
4.5 Convergence Factors for Nonuniform Grid 106
4.6 Convergence Factors for Multigrid, γ = 0.9999 108
4.7 Convergence Factors for DSA, γ = 0.9999 109
4.8 Convergence Factors for Multigrid, γ = 0.999 110
4.9 Convergence Factors for DSA, γ = 0.999 111
4.10 Convergence Factors for Multigrid, γ = 0.99 112
4.11 Convergence Factors for DSA, γ = 0.99 113

PAGE 11

4.12 Convergence Factors for Multigrid, γ = 0.9 114
4.13 Convergence Factors for DSA, γ = 0.9 115
4.14 Adjusted Convergence Factors for DSA, γ = 0.9999 116
4.15 Adjusted Convergence Factors for DSA, γ = 0.999 117
4.16 Adjusted Convergence Factors for DSA, γ = 0.99 118
4.17 Adjusted Convergence Factors for DSA, γ = 0.9 119

PAGE 12

CHAPTER 1 INTRODUCTION
1.1 Motivation and Background
Neutral and charged-particle transport plays a fundamental role in many areas of science and engineering, such as nuclear reactor design, radiation therapy in medical science and satellite electronics shielding. The roots of transport theory go back more than a century to the Boltzmann equation, first formulated in the study of the kinetic theory of gases. The study of radiation transport in stellar atmospheres led to a number of analytical solutions of transport problems in the 1930s. In the 1940s, with the advent of nuclear chain reactors, interest arose in solving neutral-particle transport problems in a broad range of geometrical configurations for nuclear reactor design and radiation shielding applications, and transport calculations are now required in many application areas. For example, complex neutral and charged particle transport calculations are essential for nuclear weapon design and analysis. With the rapidly increasing computational power of modern digital computers, increasingly sophisticated numerical methods have been developed. Our objective is to develop discrete-ordinates equations and fast solution algorithms that are compatible with the physical phenomena by using multigrid methods.

PAGE 13

Multigrid techniques have proven to be efficient methods for a wide spectrum of scientific and engineering problems. However, to a large extent, the success of classical multigrid algorithms depends on a full understanding of the problem and a good design of the relaxation, interpolation and restriction operators. By applying multigrid techniques and utilizing massively parallel computers, we ensure that the solution algorithms we use are very efficient and highly parallelizable. Extensive numerical experiments and analyses have been conducted on the theoretical aspects of the multigrid method we have developed for transport problems. In Chapter 2, we derive the neutron transport equations and their properties. In Chapter 3, a fast multigrid algorithm for isotropic transport problems is extensively studied both theoretically and numerically. In Section 3.2, the transport equations are discretized by finite element methods using the Modified Linear Discontinuous (MLD) discretization. To solve the discretized equations, two relaxation methods are presented: x-line relaxation and μ-line relaxation. Both relaxations are examined in detail in terms of error reduction. Also in this chapter, a relaxation-induced interpolation operator is derived, and an extensive analysis is given for the multigrid algorithm we have developed. In Section 3.6, numerical results for our multigrid method on both uniform and highly nonuniform grids are presented. Chapter 4 studies transport problems with absorption. In many cases, there are absorptive interactions in the transport problems. When there is absorption, the error reduction properties of the μ-line relaxation introduced in Chapter 3 will not be

PAGE 14

the same as in the isotropic case. This chapter introduces a kinked interpolation, which is also relaxation-induced. Numerical results in Section 4.6 show that the kinked interpolation is very accurate and the multigrid algorithm developed with it is very efficient. Chapter 5 is a summary of the results of this thesis.

PAGE 15

1.2 Notations and Conventions
For the most part, the following notational conventions are used in this dissertation. Particular usage and exceptions should be clear from the corresponding contexts.
ψ : flux of particles entering the slab.
x, y, z : three-dimensional coordinates.
a, b : two ends of a slab geometry.
θ : angle in degrees from the x-axis.
μ : cos(θ).
σt : expected number of absorptive interactions per unit distance.
PAGE 16

CHAPTER 2 DERIVATION OF TRANSPORT EQUATIONS AND OVERVIEW
The transport of neutrally charged particles can be modeled by the linear Boltzmann transport equation. Let N(r, Ω, E, T) represent the density of particles at position r moving in direction Ω with energy E at time T. The balance equation is
(2.1)  ∂N/∂T = −Ω · v∇N − vσt N + ∫∫ v′ σs(Ω′ → Ω, E′ → E) N(r, Ω′, E′, T) dΩ′ dE′ + Q,
PAGE 17

where σs(Ω′ → Ω, E′ → E) is the probability that a particle will be scattered from Ω′ → Ω and E′ → E. If a particle is equally likely to scatter to any other direction, the scattering is said to be isotropic. We will first consider isotropic scattering within a single energy group. The physical domain is a semi-infinite slab of width 2 in the x coordinate and infinite in the y and z coordinates. If we assume that ψ is independent of y and z, we have ∂ψ/∂y = ∂ψ/∂z = 0. This yields
(2.4)  (μ/σt) ∂ψ/∂x + ψ = (σs/(2σt)) ∫_{-1}^{1} ψ(x, μ′) dμ′ + (1/(2σt)) Q,
where x ∈ (−1, 1), μ = cos(θ) and θ is the angle from the positive x-axis. Boundary conditions prescribe particles entering the slab:
(2.5)  ψ(−1, μ) = g0(μ),  ψ(1, −μ) = g1(μ),  μ ∈ (0, 1).
Problem (2.4) with boundary condition (2.5) becomes difficult to solve in the optically dense or thick limit. Physically, this corresponds to a mean free path between collisions that is small compared to the width of the slab and to materials that allow very little absorption. Mathematically, we have
(2.6)  σt → ∞,  γ = σs/σt → 1.
Ignoring the source term and taking the limits in (2.4) yields
(2.7)  ψ(x, μ) = (1/2) ∫_{-1}^{1} ψ(x, μ′) dμ′,
which admits any ψ(x, μ) that is independent of μ. Thus, in this limit (2.4) is singularly perturbed with a large near null space.

PAGE 18

We will discuss the one-dimensional problem with isotropic scattering. The model problem becomes
(2.8)  (μ/σt) ∂ψ/∂x + ψ = (1/2) ∫_{-1}^{1} ψ(x, μ′) dμ′,
with boundary conditions prescribing particles entering the slab:
(2.9)  ψ(−1, μ) = g0(μ),  ψ(1, −μ) = g1(μ),  μ ∈ (0, 1).
For anisotropic problems with forward-peaked scattering, Morel and Manteuffel [14] have developed a technique called multigrid in angle. This technique depends upon solving an isotropic problem on the coarsest angle grid. Successful multigrid algorithms have been developed for the one-dimensional problem with isotropic scattering. These algorithms depend upon the difference schemes used to approximate the Boltzmann equations. In the next chapter, we discretize the transport equations and design two relaxation schemes to solve the discretized transport equations. An efficient multigrid scheme is developed and both theoretical and numerical results are presented.

PAGE 19

CHAPTER 3 ISOTROPIC TRANSPORT EQUATIONS AND MULTIGRID ALGORITHM
3.1 Introduction
Numerical approximations of (2.4) suffer several difficulties in the thick limit. Many difference schemes yield inaccurate solutions in this limit, even for well-behaved solutions. For a discussion of this issue, see Larsen and Morel [7]. One difference scheme that behaves well in the thick limit is the Modified Linear Discontinuous (MLD) scheme [7]. Not only does it give the proper behavior in the thick limit, but it is also very accurate. The algorithm described in this chapter will address the solution of the SN approximation in angle (see Section 3.2) using MLD in space. Even when the difference scheme behaves properly in the thick limit, it may be difficult to solve the resulting discrete equations. The Diffusion Synthetic Acceleration (DSA) scheme was developed to address this difficulty (Alcouffe [2]). DSA is based upon the recognition that in the thick limit the transport of particles becomes diffusive in nature. The flux is nearly independent of direction except near the boundaries and sources. DSA involves solving a diffusion equation as a preconditioning for the transport equations (see Faber and Manteuffel [4]). It achieves a convergence factor of approximately .23 per iteration independent

PAGE 20

of σt. In this chapter we present a multigrid algorithm that is much faster than DSA and avoids some of the problems associated with DSA. The DSA algorithm appears to be very sensitive to the difference scheme used to approximate (2.4). In recent work, Larsen [5], Morel and Larsen [13] and Adams and Martin [1] have derived methods for finding consistent diffusion schemes. However, this is rather belabored. DSA works because the transport model becomes diffusive in the thick limit. Such problems are usually amenable to multigrid techniques directly. Moreover, DSA has difficulties with higher dimensions and is not well suited to anisotropic problems. The multigrid techniques described here have a natural generalization to higher dimensions and, as mentioned previously, can be used in conjunction with the multigrid algorithm for anisotropic problems described in Morel and Manteuffel [14]. We concentrate on the problem with no absorption, that is, σs = σt. The problem with σs ≠ σt can be solved with a slight modification and will be addressed in the next chapter. In Section 3.2 we describe the discrete equations. In Section 3.3 two-cell, red/black, block relaxation is developed. The fact that the scattering operator is rank-1 allows for a very efficient relaxation. Letting h_i be the width of cell i, we show:
a) For max(σt h_{i-1}, σt h_i) ≪ 1 at two neighboring cells i − 1 and i, relaxation leaves the error linear across the two cells up to O(max((σt h_{i-1})², (σt h_i)²)).
b) For min(σt h_{i-1}, σt h_i) ≫ 1, relaxation leaves the error linear across

PAGE 21

the two cells up to O(max(1/(σt h_{i-1}), 1/(σt h_i))).
In Section 3.4 we combine relaxation with a multigrid algorithm and derive two-grid convergence rates ρ:
a) For max(σt h_i) ≪ 1, ρ = O(max(σt h_i)²).
In Section 3.6 we present numerical results for both uniform and highly nonuniform grids. They yield the following convergence factors ρ for a V(1, 1) cycle:
a) For max(σt h_i) ≪ 1, ρ = O(max(σt h_i)³).
b) For min(σt h_i) ≫ 1, ρ = O(1/(σt h_i)²).
The slowest factor of ρ = 0.0077 occurs for σt h ≈ 0.01. These rates make a single V(1, 1) cycle nearly an exact solver, especially in the thick and thin limits. The cost of a V(1, 1) cycle is about the same as 4 sweeps across the computational grid. This represents a significant savings over DSA. Moreover, the algorithm was designed to be highly parallel. We have implemented the algorithm on the Thinking Machines Inc. CM-200 at Los Alamos National Labs. These results appear in Manteuffel et al. [10][19].
3.2 Discrete Model
The angle dependence in (2.4) is discretized by an SN approximation (cf.

PAGE 22

Lewis and Miller [8]), that is, we assume
(3.1)  ψ(x, μ) = Σ_{l=0}^{N-1} (2l + 1) φ_l(x) P_l(μ),
where P_l(μ) is the lth Legendre polynomial and (2l + 1) is a normalization factor. If a standard Galerkin formulation is used, a system of equations for the φ_l(x), called the moment equations, is established. The moments may be found by
(3.2)  φ_l(x) = (1/2) ∫_{-1}^{1} P_l(μ) ψ(x, μ) dμ = (1/2) Σ_{j=1}^{N} w_j P_l(μ_j) ψ(x, μ_j),
where w_1, ..., w_N and μ_1, ..., μ_N are the Gauss quadrature weights and points, respectively. The second equality is exact because of assumption (3.1). Thus, finding ψ_j(x) = ψ(x, μ_j) for j = 1, ..., N is equivalent to finding the moments. A simple transformation of the moment equations yields a system for the ψ_j(x), called the flux equations, as follows:
(3.3)  (μ_j/σt) ∂ψ_j/∂x + ψ_j = (1/2) Σ_{k=1}^{N} w_k ψ_k(x) + (1/(2σt)) Q,
for j = 1, ..., N. For our purposes here we choose to differentiate the positive and negative directions. Assuming N is even and letting n = N/2, the Gauss quadrature points are symmetric about the origin and can be denoted as −μ_n < ... < −μ_1 < 0 < μ_1 < ... < μ_n. Likewise, the weight associated with −μ_j will equal the weight associated with μ_j. We define
(3.4)  ψ_j^+(x) = ψ(x, μ_j),  ψ_j^−(x) = ψ(x, −μ_j),  j = 1, ..., n.
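For readers who want to experiment with the SN machinery described above, the following small Python sketch builds the Gauss quadrature points and weights and converts between discrete-ordinate fluxes ψ(x, μ_j) and Legendre moments φ_l(x) in the spirit of (3.1)-(3.2). It is an illustration only; the NumPy routines and the test function are my own choices, not part of the thesis.

```python
import numpy as np
from numpy.polynomial.legendre import leggauss, legval

N = 8                                   # number of discrete ordinates (S_N)
mu, w = leggauss(N)                     # Gauss-Legendre points/weights on [-1, 1]

def moments_from_ordinates(psi):
    """phi_l = (1/2) sum_j w_j P_l(mu_j) psi_j, the quadrature form of (3.2)."""
    P = np.array([legval(mu, np.eye(N)[l]) for l in range(N)])  # P[l, j] = P_l(mu_j)
    return 0.5 * P @ (w * psi)

def ordinates_from_moments(phi):
    """psi_j = sum_l (2l+1) phi_l P_l(mu_j), the truncated expansion (3.1)."""
    coeff = (2 * np.arange(N) + 1) * phi
    return legval(mu, coeff)

# Round trip: the two maps are inverse to each other on the quadrature nodes.
psi = np.cos(mu) + 0.3 * mu**2          # arbitrary smooth angular shape
phi = moments_from_ordinates(psi)
print(np.max(np.abs(psi - ordinates_from_moments(phi))))   # ~1e-15
```

The exact round trip reflects the statement in the text that finding the N nodal fluxes is equivalent to finding the first N moments.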

PAGE 23

Equation (3.3) becomes the pair of equations
(3.5a)  (μ_j/σt) ∂ψ_j^+/∂x + ψ_j^+ = (1/2) Σ_{k=1}^{n} w_k (ψ_k^+(x) + ψ_k^−(x)) + (1/(2σt)) Q,
(3.5b)  −(μ_j/σt) ∂ψ_j^−/∂x + ψ_j^− = (1/2) Σ_{k=1}^{n} w_k (ψ_k^+(x) + ψ_k^−(x)) + (1/(2σt)) Q,
for j = 1, ..., n. The boundary conditions (2.5) become
(3.6a)  ψ_j^+(−1) = g0(μ_j),
(3.6b)  ψ_j^−(1) = g1(μ_j),
for j = 1, ..., n. Figure 3.1 is the computational domain.
Figure 3.1 Computational Domain
The space dependence is now discretized by the Modified Linear Discontinuous difference scheme (Larsen and Morel [7]), which can be derived by using finite element methods. We first describe the Linear Discontinuous (LD) difference scheme [7]. Consider a set of grid points given by
(3.7)  a = x_{1/2} < x_{3/2} < ... < x_{m+1/2} = b.

PAGE 24

These represent cell edges. The centers are given by (3.8) i=l, ... ,m, the cell width is h; = X;+l X;_l and m is the number of cells. We start by ' forming piecewise linear trial spaces. However, a different space is used for 'if; J ( x) than for ,Pj(x). For ,PJ(x)(for each j), we choose break points at (3.9a) with xt = x,_t + e;, while for ,Pj(x )(for each j) we choose break points at (3.9b) with x; = x+t ;. Here E; is small. Bases elements for the two spaces, which we denote as V/, and are depicted in Figures 3.2a and 3.2b, respectively. ' Figure 3.2a Bases for YJ.;, 13

PAGE 25

Figure 3.2b We assume that ,Pj(x) and ,Pj(x) can be expressed as a linear combination of the bases elements in vh;. and respectively. We denote (3.10a) (3.10b) We also denote (3.10c) It is important to note that -ti and -?;:i can be expressed in terms of values of ,Pj(x) and ,Pj(x) at the cell edges and cell center. Since ,Pj(x) and ,Pj(x) are assumed to be linear, -?tj and -?;:j can be found as an extrapolation from the center and edge. We have (3.1la) (3.1lb) h h -2e .!.2( )!.-( ).-o/i,j-h 'f'i,j h 14

PAGE 26

The Linear Discontinuous (LD) difference scheme is derived by applying a standard Galarkin formulation, then taking the limit as ( E;) --> 0. One way to construct the equations is to plug the trial space representation into (3.5a) and (3.5b ), then integrate over each cell against the constant function and then against the linear function, (x-x;). This will yield a set of equations involving E;. The final equations are found by setting c;=O. The Modified Linear Discontinuous(MLD) scheme is found in a similar manner. As in the LD scheme, (3.5a) and (3.5b) are integrated over each cell against the constant function. Then after setting E; = 0 we get the following equations: These are referred to as the balance equations. Instead of integration against linear functions, as in LD, now the equations (3.5a) are collocated at xi, equations (3.5b) are collocated at xt, and the limit is taken as max E;--> 0. This yields the following equations referred to as edge equations. (3.13a) (3.13b) 15

PAGE 27

This may be formalized as integration of (3.5a) against 5(x :Vi) and (3.5b) against 5(x-:Vt). Notefrom (3.1la, b) that we may replace -J;1;. and -J;;;, with 2,P1;.-,Pi,_1 and 2,PU,-,P;_1 respectively. The boundary conditions are 2 {3.14) .!.+ + .!.-. 1 2 'f'!i = 9aj1 = 9bj ,J = ' ... n. We see from the edge equations and balance equations that it is the product of frt and h; that will determine the property of the discretized equations. We will use as a parameter in our discussion and set 1 = = 1. In block matrix form, these equations become {3.15a) (3.15b) for i = 1, ... ,m, and {3.15c) (3.15d) for i = 1, ... m, with boundary conditions {3.15e) .!.+ + .!.Yt -9..c.' = fb where B; and R are defined as 0 1 (3.16a) B;= R= [ Wt ''' Wn] = 1wT, 0 1 16

PAGE 28

and (3.16b) ,p++!. = (,p++ll, ... ,,p++l f, ,p+ = (,P1;, ... ,,pt;y, ..!...-1 2 t 2 t 2n _, (3.16c) We can write equation (3.15) in matrix form as qrl -D. A2 -c. w. (3.17) -D; A; -C; = [ Q]' qr. -Dm Am \lim where Q is the right-hand side and I+ 2B;R -2R -2B; R 0 I -R -R B; (3.18a) A;= ,i=l, ... ,m, B; -R I-R 0 R -2B; -2R I+ 2B;R 0 0 0 0 0 0 0 0 (3.18b) C;= ,i=l, ... ,m-1, B; 0 0 0 0 0 0 0 17

PAGE 29

0 0 0 0 0 0 0 B, {3.18c) D,= ,i = 2, ... ,m, 0 0 0 0 0 0 0 0 (3.18d) 'li, = (,P-:, ,,p+,,p-:,,p++' )T,i = 1, ... ,m. -'--4-2 """-4 -l 2 In the following section, we will examine x-line and tt-line relaxation method and their properties. 3.3 Relaxation 3.3.1 x-line Relaxation Suppose the right-hand sides of (3.15a-d) were known. The positive angles would then satisfy a system of equations involving the matrix I ,Pt -, -B1 I B1 ,p+ -1 -2B1 I+ 2B1 ,Pt (3.19) -, -B2 I B2 Y!; -2B2 I+ 2B2 ,Pt -, The first block equation represents the boundary conditions. This block lower triangular matrix could be solved in a forward sweep by inverting the block 18

PAGE 30

matrices as follows: (3.20) = (I+ 2B; + 2Bfr1 [ I Bl-1 [ I + 2B; -B, ] -2B; I+ 2B; 2B, I Since B; is diagonal, this is a trivial operation. Likewise, the negative angles could be found in a backward sweep. This process represents :z:-line relaxation (also known as a source iteration or transport sweep). If we reorder the unknowns by cells and examine the effect of a complete :z:-line relaxation, the resulting error equation for a single cell can be written as I+ 2B; 0 -2B; 0 I 0 (3.21) -B; 0 I 0 -2B, 0 R 2R 0 -R 0 R R 0 0 R R 0 -R 0 2R R 0 -B; 0 I+ 2B, e1 e. + e. e + 0 B.+ 1 -;.-2 0 !+1 where lis the relaxation step. When rr1h; 1, :z:-line relaxation is not effective. Equation (3.16a) reveals that, when rr1h; -> oo, B; approaches zero. Then, an :z:line relaxation nearly reproduces the component of the error that is independent of angle because the matrix on the left-hand side approaches identity. Since 19

PAGE 31

this does not affect any spatial frequency, the resulting error is not smooth in space. Any error that is independent of angle will not be suppressed, regardless of its spatial frequency. This makes x-line relaxation inappropriate for use in a multigrid scheme, since even if σt h_i is small on the finest grid, it will become large on coarser grids.
3.3.2 μ-line Relaxation
On the other hand, we can rearrange equation (3.17) to reveal the form of two-cell red-black block μ-line relaxation, by which we mean that at cells i and i + 1 we fix the incoming fluxes at the outer edges as boundary values and solve all other interior values in cells i and i + 1 simultaneously. This relaxation is carried out in a red-black ordering, which means that we can consider each pair of cells individually. For two cells i and i + 1, the errors after relaxation will satisfy
(3.22)  ( e_i, e_{i+1} )^{new} = [ A_i  −C_i ; −D_{i+1}  A_{i+1} ]^{−1} ( D_i e_{i-1}^{old}, C_{i+1} e_{i+2}^{old} ).
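Before moving on, here is a minimal Python sketch of the transport sweep (x-line relaxation, or source iteration) described in Section 3.3.1: the scattering source is lagged, positive angles are swept left to right and negative angles right to left. It uses a simple first-order upwind spatial discretization rather than the MLD scheme of this chapter, and the grid, data and function names are illustrative assumptions, not the thesis code.

```python
import numpy as np
from numpy.polynomial.legendre import leggauss

# Illustrative problem data: slab [0, 1], uniform grid, isotropic source q = 1.
m, N = 64, 8
h = 1.0 / m
sigma_t, sigma_s, q = 100.0, 100.0, 1.0     # pure scattering, optically thick
mu, w = leggauss(N)

def transport_sweep(phi):
    """One x-line relaxation: lag the scattering source, then sweep each angle
    with first-order upwinding (positive angles left->right, negative right->left)."""
    psi = np.zeros((N, m))
    S = 0.5 * (sigma_s * phi + q)           # cell-centered total source
    for j in range(N):
        a = abs(mu[j]) / h
        inflow = 0.0                        # vacuum inflow at both boundaries
        cells = range(m) if mu[j] > 0 else reversed(range(m))
        for i in cells:
            psi[j, i] = (a * inflow + S[i]) / (sigma_t + a)
            inflow = psi[j, i]
    return w @ psi                          # new scalar flux phi = sum_j w_j psi_j

phi = np.zeros(m)
for it in range(20):
    phi_new = transport_sweep(phi)
    print(it, np.max(np.abs(phi_new - phi)))   # decreases only slowly when sigma_s/sigma_t ~ 1
    phi = phi_new
```

The slow decrease of the update in the thick, scattering-dominated regime is exactly the failure mode of x-line relaxation discussed above.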

PAGE 32

[ AThe inversion of -Di+1 -C; ] is inexpensive since R is rank one and I, B;, A;+l A; and B;+l are diagonal matrices. To show this, we write (3.23) I+ 2B; 0 -2B; 0 0 I 0 B; B; 0 I 0 -B; 0 -2B; 0 I +2B; I+ 2B;+l 0 -2Bi+1 0 -B;+l 0 I 0 Bi+1 B;+1 0 I 0 0 -2Bi+1 0 I+ 2Bi+1 21

PAGE 33

21 0 1 1 1 1 lwT 2WT 0 _lwT 20 21 _lwT 2-0 WT lwT 221 0 1 1 1 1 0 21 where 1 = (1, 1, ... 1f and wT = (w1,w2 wn) The first matrix in (3.23) can be easily inverted since it is sparse and its components I, B;, and B;+l are diagonal matrices. The second term is a rank four matrix. If we denote (3.23) as (3.24) [ _;, ::: ] A,-vw', where A0 is the first matrix in (3.23) and V, WT are the two rank four matrices in (3.23), then the Sherman-Morrison formulas yield Since A0 can be easily inverted, the main problem is to invert (I-WT A01V), 22

PAGE 34

which is a 4 X 4 matrix and can be easily inverted directly. au 0 a1a a14 0 a22 a23 a24 (3.26) (I WT Aij1V) = aa1 aa2 aaa 0 a41 a42 0 a44 where (3.27a) ,., n 2w(u + 2-; ) L J r-J tTthi an = a22 = 2 ( ,.. ,.. ) j=l u1h; 1 + + 2,.;/.; (3.27b) (3.27c) (3.27d) (3.27e) (3.27) 23
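The low-rank update formula used here is easy to sanity-check numerically. The sketch below verifies the Sherman-Morrison-Woodbury identity in the form used in (3.24)-(3.25), (A0 − V Wᵀ)⁻¹ = A0⁻¹ + A0⁻¹ V (I − Wᵀ A0⁻¹ V)⁻¹ Wᵀ A0⁻¹, for generic random matrices; it is not the thesis code and the block sizes are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(0)
n, k = 12, 4                       # k = rank of the update (4 in (3.23)-(3.26))
A0 = np.eye(n) + 0.1 * rng.standard_normal((n, n))   # easily invertible part
V = rng.standard_normal((n, k))
W = rng.standard_normal((n, k))

A = A0 - V @ W.T                   # full matrix, as in (3.24)

A0_inv = np.linalg.inv(A0)
small = np.eye(k) - W.T @ A0_inv @ V          # the small k-by-k matrix to invert
A_inv = A0_inv + A0_inv @ V @ np.linalg.solve(small, W.T @ A0_inv)

print(np.max(np.abs(A_inv - np.linalg.inv(A))))   # ~1e-13: the identity holds
```

Only the k-by-k matrix is inverted, which is why the two-cell solves stay cheap when the scattering operator has low rank.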

PAGE 35

one in the negative direction, R = 1/2, B_k = μ_1/(σt h_k), k = 1, ..., m. At cells 2i − 1 and
PAGE 36

l 2 1 The errors fit the following linear formulas: (3.30a) h k 2 3 2 1 2. 1 2. d t 0 h,._, h h !;,;, d were = t-2, z, t-2 z,an = ,-2-, 2i_1 2i-l+ 2 ,correspon-ingly, and (3.30b) h2i-l 2J.L1 ek + ek + -h+ -h--h + -h 2i CTt 2i 2i 2 2i 2 after relaxation the errors fit linear formulas at both the positive and negative angle directions, the errors are linearly distributed across two cells. D h ;v2i Figure 3.3 Fine grid cells 2i-1, 2i and coarse grid cell i 26

PAGE 37

Define a new cell i on a coarser grid h whose length h; is h; = h2;_1 + h2 ; and that covers cells 2i -1 and 2i as depicted in Figure 3.3. We can find values at the cell center x; and the edges X;_l, X;+l of cell i of grid h by letting = 0, = 2 2 h,; +h,; and = h2i-1 + h 2 ; in (3.30). Thus, we have (3.31a) (3.31b) e +(h) = u,h2; - 21-'1 + u,h2i-1 + u,h2; (3.31c) -(h) lTth2i e. -,-----:-----:--! - 21-'1 + u,h2i-1 + u,h2; (3.31d)

PAGE 38

W th 't (3 29) t f -(h) +(h) -(h) +(h) e can en wr1 e In erms o , !q+!. as 2 2 ;,_ I 0 0 0 2 e+ 0 2h2i+h2i-1 I 0 -h2i I -2i-1 h2i+h2i-1 h2i+h:ai-1 f2i-1 h2i I 0 h:ai-1 I 0 -(h) h2i+h:H-1 h:H+h:ai-1 e. _,_2 + 0 2h2j I 0 h:ai-1-h:a; I +(h) .2i-.!. e h:ai+h:ai-1 h:ai+h:ai-1 (3.32) 2 f2i-!. h:ai-h:ai 'I 0 2h2i 1 I 0 -(h) h:a;+h:ai-1 e. h2i+h:ai-1 + 0 h2i I 0 h:ai 1 I +(h) h2i+h:ai-1 h2;+h2i-1 -h:ai-1 I 0 2h:ai-t +h2s: I 0 h:a;+h:ai-1 h:ai+h2i-l + f2i+! 0 0 0 I Let I 0 0 0 0 2hit+h, I 0 -hil I (3.33a) T1; = hitl +h; hi+t +h; h,l I hi+t +hi 0 0 h I hitl +h; 0 0 2h'' I hi+t +hi 0 h;-h,t I hitl +hi hitt-hi I hitl +h; 0 2h I hi+t +h; 0 0 hit I 0 h I (3.33b) T2,i = hitl +hi hitl +hi -h I hitl +hi 0 2h;+hi I hi+t +hi 0 0 0 0 I 28

PAGE 39

Then after the relaxation, the errors can be written as (3.34) e, -, e+ -1 !!.1 e+ _, 2 + T1,1 T2,1 T1,a T2,a We write (3.34) in multigrid notation as (3.35) h Ih 2h = where (3.36) 29 -(h) 2 +(h) !!.1 -(h) !!.1 +(h) e, -, +(h) 2 2

PAGE 40

We use the notation 2h to indicate a coarse grid although our grid is not uniform (the mesh size h_i at cell i on the coarse grid is h_{2i-1} + h_{2i}, not twice that of the fine grid). We will see that (3.34) means that the errors after relaxation are in the range of interpolation. In the multi-angle case, the errors will no longer be exactly linearly distributed across two cells after two-cell red-black block μ-line relaxation, but we can examine the properties of relaxation in the two extreme cases max(σt h_k) ≪ 1 and min(σt h_k) ≫ 1.
PAGE 41

(3.38a) 2 0 2 0 0 0 0 0 Uth':Zi-1 Uth2i-l 0 0 0 _1_ 0 0 0 0 O"th2i-1 1 0 0 0 1 0 0 0 O'th2i-1 O"th2i-1 0 2 0 2 0 0 0 0 Ho=M CTth2i-1 Uth2i-1 0 0 0 0 2 0 2 0 O'th2i CTth'Ji 0 0 0 1 0 0 0 1 Uth2i O'th2i 0 0 0 0 1 0 0 0 Uth2i 0 0 0 0 0 2 0 2
PAGE 42

and J.L1 0 (3.38c) 0 J.Ln Then {3.39) It is straightforward to obtain 0 0 h2i-1 0 0 0 h2i 0 0 h2i-1 0 -h2i-1 0 0 0 0 2 -h2i-l 0 h2i-1 0 0 0 h2i 0 2 0 h2i-1 0 0 0 0 0 0 {3.40) H-1-M-1 0 -O"t 0 0 0 0 0 0 h2i 0 0 h2i-1 0 0 0 h2i 0 _flli 2 0 0 0 0 _flli 0 h2i 0 2 0 h2i-1 0 0 0 h2i 0 0 When
PAGE 43

Substituting (3.41) into (3.39), we have When we carry out the arithmetic in (3.42) by using (3.38b) and (3.40) and substitute the resulting formula into (3.28), we have that the errors after relaxation satisfy I 2 (1 + __fu_;__ )M-1(I h2i-1 R) + 0 -'-M-1 R .2i-1 2 f2i-1 I (! + __fu_;__ )M-1(I-R) 2 h2i-l + 0 -M-1 R .2i-l (3.43) 2 + O'th2i-1 + .2i-;t 2 __fu_;__ M-1(I-R) 2 I 2 h2i-1 + 0 -(1 + _fui_ )M-1 R .2i 2h2i-l .2i I _fui_M-1(I 2h2i-1 R) + 0 -(1 + __fu_;__ )M-1 R .2i+k h21-1 0 I 0 I + + O'th2i 0 I 0 I (1 + )M-1(I-R) 33

PAGE 44

With the same cell i on grid h as in Theorem 1, we have (3.44a) -(h) ( ( h2i )) -1( ) + ( h2i-1) -1 -e._1 = 1-(J"th2i-1 1 + -hM IR + (]",h2i 1 + -h--M R.2.+1 -,. 2 2i-1 t 2 2i 2 (3.44d) +(h) h (1 h2, )M-1R + ( .,;+1 = (]"t 2i-1 + -h + 1 2 2t-1 2 h2i-1 -1 (J",h2,(1 + -h-))M (IR).2.+1 2 2i t 2 ( 43) -(h) +(h) -(h) +(h) We can wnte 3. m terms .i .i+! as 2 2 (3.45) + .2i-1 + [ T1,2i-1 l T2,2i-t +(h) _, -(h) _, 34

PAGE 45

Thus, the errors are piecewise linear across the two cells 2i -1 and 2i up to accuracy We now focus our attention on the optically dense or thick limit. Lemma 1 Suppose (u,h;) 1; then the asmptotic inversion of A; defined by (3.18a) can be ezpanded as R 2R 0 -R A:-1 = u,h; 0 R R 0 (3.46) +I+ 2co 0 R R 0 -R 0 2R R -2(MR+RM) -2(MR+RM) 2(MR+RM) 2MR 1 (MR-RM) 0 -2MR -(MR+RM) 1 2co -(MR+RM) -2MR 0 (MR-RM) +O( u1h; ). 2MR 2(MR+RM) -2(MR+RM) -2(MR+RM) Proof: Write

PAGE 46

I +2B; 0 -2B; 0 21 0 0 I 0 B; 1 1 [ lwT WT 0 -'' l 22-, B; 0 I 0 1 1 _lwT 0 WT lwT 2-20 -2B; 0 I+ 2B; 0 21 where 1 = (1, 1, ... 1)T and WT = (w1, w2, ... Wn) The first term on the right-hand side of (3.47) is easily inverted while the second term is a rank two matrix. If we denote this right-hand side by (3.48) A,= A0 with Ao denoting the first term and VWT the second, then the Sherman-Morrison formulas yield (3.49) A0 1 is readily given by I 0 2B; 0 0 I+ 2B; 0 -B; (3.50) A01 =(I+ 2B; + 2Blf1 -B; 0 I+ 2B; 0 0 2B; 0 I The matrix (I -wT A()1 V) is 2 X 2 and its inverse can be obtained as (3.51) (Iw' A;'vr' e, [ : : ] 36

PAGE 47

where (3.52) In the optically dense media case, we expand (i, ei as (3.53) 2c0 1 u,h; 1 ,,=1--h +O( aha),e.=-2-+o(-h), crt i crt i co crt i and (3.54) 2 -1 2 2 2 1 (I+2B;+2B,) =l--hM+ 2h2M +0( 3h3). Cft i Cft i Cft i Substituting (3.50),(3.51), (3.53) and (3.54) into (3.49), we have (3.46). D Theorem 3 Suppose min ( Uthk) :;}> 1; then the errors after two cell red-black p.-line k=(2i-1,2i) relazation will be piecewise linear up to accuracy O(max(-h1 _l_h )). 1, we need to expand Write k=(2i-1,2i) -D2; A2; (3.55) [ A 2;_1 -02;_1 l [ A2i-1 0 l [ 0 G2i-1 l = A0 VWT, -D2; A2i -D2; A2; 0 0 where (3.56) Ao = [ A2i-1 o I vwT = [ o D2i-1 I -D2i A2i 0 0 37

PAGE 48

with v = (O,O,I,o,o,o,o,of and WT = (O,O,O,O,B2i-llo,o,o). The inversion of (3.55) can be obtained by Sherman-Morrison formula (3.25). We first note that (3.57) A-10 -[ A-1 2i-1 Ail-1 D2iA2? ;.. l We can expand A01 by using Lemma 1. By substituting the expansion of A01 into (J-[ A2i-1 -C2i-1 WT A()1 V), we can then expand (J-WT A()1 v)-1 The explicit expansion of -D2; A2i is omitted here because of its complexity. After the expansion, the error equation (3.22) at cells 2i -1 and 2i after two-cell JL-line relaxation can be expanded as (3.58) + ' + R ( 1 + h,._, )R 2 2(h>i-1 +h,;) ( 1 + h,._, )R 2 2(h:H-t +hai) 0 1 1 +O(max(O'th2i-1' u,h)) As in Theorems 1 and 2 we define the same cell i on grid h and obtain (3.59a) -(i.) R + 1 = 3' t-2 t-2 (3.59b) 1 1 e+(h) = -Re+ 3 + -2R12.2-,+,-'' =4 2 -21-2 38 0 R r

PAGE 49

(3.59c) (3.59d) Then (3.58) can be written as + +(h)R -' (3.60) + = [ T1,2i-1 l T2,2i-1 +(h) e -(h) e 1 1 +O(max( h ,-h-)). Ut 2i-l Ut 2i + + Therefore, the leading terms in the errors after relaxation are piecewise linear across the two cells 2i-1 and 2i up to accuracy O(max(-h1 -h1 )). D O"t 21-1
PAGE 50

the fine to coarse grids and will be defined later. A three-level CS (Correction Scheme) (Brandt [3]) V(ν1, ν2) multigrid method proceeds as follows:
i) relax ν1 times on A^h u^h = f^h;
ii) calculate the residual r^h = f^h − A^h u^h and transfer it to grid 2h: f^{2h} = I_h^{2h} r^h;
iii) relax ν1 times on A^{2h} u^{2h} = f^{2h};
iv) calculate the residual r^{2h} = f^{2h} − A^{2h} u^{2h} and transfer it to grid 4h: f^{4h} = I_{2h}^{4h} r^{2h};
v) relax ν1 + ν2 times on A^{4h} u^{4h} = f^{4h} or solve it directly;
vi) replace u^{2h} <- u^{2h} + I_{4h}^{2h} u^{4h};
vii) relax ν2 times on A^{2h} u^{2h} = f^{2h};
viii) replace u^h <- u^h + I_{2h}^h u^{2h};
ix) relax ν2 times on A^h u^h = f^h.
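The correction-scheme cycle just listed can be written as a short recursive routine. The sketch below is a generic algebraic V(ν1, ν2) cycle with weighted-Jacobi smoothing and caller-supplied transfer operators; it illustrates the control flow only and is not the two-cell μ-line algorithm of this thesis.

```python
import numpy as np

def v_cycle(A, f, u, P, R, nu1=1, nu2=1, omega=0.6):
    """One correction-scheme V(nu1, nu2) cycle for A u = f.
    A, P, R are lists per level: A[l] the operator, P[l] interpolation from
    level l+1 to level l, R[l] restriction from level l to l+1 (coarsest last)."""
    return _cycle(0, A, f, u, P, R, nu1, nu2, omega)

def _smooth(A, f, u, nsweeps, omega):
    D = np.diag(A)
    for _ in range(nsweeps):
        u = u + omega * (f - A @ u) / D          # weighted Jacobi relaxation
    return u

def _cycle(l, A, f, u, P, R, nu1, nu2, omega):
    if l == len(A) - 1:                          # coarsest grid: solve directly
        return np.linalg.solve(A[l], f)
    u = _smooth(A[l], f, u, nu1, omega)          # pre-relaxation
    r = f - A[l] @ u                             # residual
    e_coarse = _cycle(l + 1, A, R[l] @ r,        # coarse-grid correction
                      np.zeros(A[l + 1].shape[0]), P, R, nu1, nu2, omega)
    u = u + P[l] @ e_coarse                      # interpolate and correct
    return _smooth(A[l], f, u, nu2, omega)       # post-relaxation
```

With the coarse operators formed as Galerkin products R[l] @ A[l] @ P[l] (cf. (3.63)), a few such cycles reduce the error of a simple 1-D model problem by orders of magnitude.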
PAGE 51

where I+ 2BfR -2R R 0 I -R -R (3.62a) -R I R 0 R -2Bf -2R I+ 2BfR with the restriction operator defined by (3.64) (3.65) I2hh -I 0 S1,1 = 0 0 0 h I hitl +hi 0 0 Sa,1 Sa,2 0 0 0 0 h I hitl +hi 0 0 0 Sm-1,1 Sm-1,2 0 0 0 0 hil I 0 ,S1,2 = hitl +hi 0 0 h,l I hitl +hi 0 0 0 41 0 0 0 I

PAGE 52

and the interpolation operator defined by (3.36). It is easy to show that {3.66) D2h -m/2 A2h m/2 where I+ 2B2 h R -2R -2B?h R 0 I -R -R B?h A2h = {3.67a) B?h -R I -R 0 R -2B?h -2R I+ 2B?h R In the next section, we analyze the convergence factors of the multigrid algorithm defined above. 42
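The coarse-grid operator is the Galerkin triple product of the fine-grid operator with the transfer operators, as in (3.63). A minimal sketch with a generic 1-D model operator (not the MLD blocks defined above) shows both the construction and the fact that the coarse operator keeps the fine-grid sparsity pattern:

```python
import numpy as np

def coarse_operator(A, P, R):
    """Galerkin coarse-grid operator A_2h = R @ A_h @ P (cf. (3.63))."""
    return R @ A @ P

# Illustration on a 1-D model operator with pairwise cell aggregation.
m = 8
A = 2 * np.eye(m) - np.eye(m, k=1) - np.eye(m, k=-1)    # model fine operator
P = np.kron(np.eye(m // 2), np.array([[1.0], [1.0]]))    # piecewise-constant interpolation
R = 0.5 * P.T                                            # full-weighting restriction
A2h = coarse_operator(A, P, R)
print(A2h)          # again tridiagonal: the coarse operator keeps the fine structure
```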

PAGE 53

3.5 Analysis of Convergence Factors of the Multigrid Algorithm
We start our analysis with the S2 problem.
Theorem 4  For the S2 angular discretization, that is, when there is only one angle in the positive and one in the negative direction, the multigrid algorithm with two-cell red-black block μ-line relaxation will be an exact solver.
Proof: Let G^h be the two-cell red-black block μ-line relaxation matrix. The error after relaxation is
(3.68)  ē^h = G^h e^h,
where e^h is the error before relaxation. From (3.35) in Theorem 1, the error after relaxation can be expressed as
(3.69)  ē^h = G^h e^h = I_{2h}^h ē^{2h},
where ē^{2h} is defined on a coarse grid 2h with mesh size of cell i given by h_i = h_{2i-1} + h_{2i}, and its cell edges are at the same spatial positions as the left edge of cell 2i − 1 and the right edge of cell 2i of fine grid h, as depicted in Figure 3.3. The action of a two-grid multigrid V(1, 1) cycle on the error can be written in matrix form as
(3.70)  e^h_{new} = G^h (I − I_{2h}^h (A^{2h})^{−1} I_h^{2h} A^h) G^h e^h.

PAGE 54

Substituting (3.69) into (3.70), we have
(3.71)  e^h_{new} = G^h (I − I_{2h}^h (A^{2h})^{−1} I_h^{2h} A^h) I_{2h}^h ē^{2h}.
Note that the coarse grid operator A^{2h} is defined by (3.63). So (3.71) becomes
(3.72)  e^h_{new} = G^h ( I_{2h}^h ē^{2h} − I_{2h}^h (A^{2h})^{−1} (I_h^{2h} A^h I_{2h}^h) ē^{2h} ) = G^h ( I_{2h}^h ē^{2h} − I_{2h}^h ē^{2h} ) = 0.
This means that the errors will be completely eliminated by a multigrid V-cycle provided the coarse grid 2h is solved exactly. By induction, the coarse grid can be solved exactly if the coarsest grid solution is exact. The coarsest grid in our algorithm contains only two cells and is thus exactly solved by two-cell block μ-line relaxation. □
For SN with N > 2, the multigrid algorithm will no longer be an exact solver. However, we can establish upper bounds on the convergence factors of the multigrid algorithm when max(σt h_k) ≪ 1 and min(σt h_k) ≫ 1.
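The algebra behind Theorem 4 can also be checked numerically: an error lying in the range of interpolation is annihilated by an exact coarse-grid correction when the coarse operator is the Galerkin product. A small sketch with generic matrices (not the transport operators):

```python
import numpy as np

rng = np.random.default_rng(1)
n, nc = 16, 8
A = np.eye(n) + 0.1 * rng.standard_normal((n, n))    # nonsingular fine operator
P = rng.standard_normal((n, nc))                      # interpolation, full column rank
R = P.T                                               # restriction
A2h = R @ A @ P                                       # Galerkin coarse operator (cf. (3.63))

e = P @ rng.standard_normal(nc)                       # error in the range of interpolation
e_after = e - P @ np.linalg.solve(A2h, R @ (A @ e))   # exact coarse-grid correction
print(np.max(np.abs(e_after)))                        # ~1e-14: the correction removes it
```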
PAGE 55

and c1 is a constant independent of O"t, h; and hi+ 1 1. The matrix in (3.42) can be expanded as (3.75) 0 0 u1h; 0 u1h;+1 0 u,hi+ 1 0 0 u1h; 0 1 0 0 0 0 -u,h'2 1 0 u1h; 0 0 0 u,hi+1 0 -u,h'2 0 u,h; 0 0 0 0 0 0 M-1 0 0 0 0 0 0 u,hi+1 0 0 u1h; 0 0 0 u,hi+ 1 0 1 -u,hi+12 0 0 0 0 1 -u,hi+12 0 u,hi+ 1 0 0 u1h; 0 u1h; 0 u,hi+1 0 0 where M is defined by(3.38c). Since M is diagonal and constant, we can find constants C1 such that (3.76) 0 45

PAGE 56

To simplify our notation, define (3. 77a) H; = [ A!' r Al;1 0 0 [ s,.:,, s2i-1,2 0 s,:,, l -Ch. 0 2 0 s2i+l,l 0 -c;i+l 0 0 (3.77b) U;+l = [ A" -C .. r Al;1 0 0 0 0 [ s,:,, S2i-1,2 0 s,.:,] 0 0 0 0 0 s2i+l,l 0 0 0 0 c2i+2 0 0 0 v.+l = [ All, -Cil. r. (3.77c) D2h A2h i+3 i+3 0 0 0 D2i+2 [ s,:,, s2i+3,2 0 0 0 0 0 0 0 s2i+5,1 s2i+5,2 0 0 0 0 0 0 0 0 46

PAGE 57

where are defined by (3.67), Sk,1, Sk,2 are defined by (3.65) and Ck, are defined by (3.62). (3.78a) (3.78b) (3.78c) where c2 and c3 are constants independent of mesh sizes. 1, 2i, 2i + 1, 2i + 2 in (3.62). By taking the dominant terms, we have -Ct-1 0 0 [ s,,:,' S2i-1,2 0 s:., l 0 (3.79) 0 s2,,1 0 -Ct+l I 0 0 [ : :.] : H12 H1a + 0(1), H22 H23 where _2_M CTth2i-l 0 _2_M 17th2i-1 0 0 0 0 0 (3.80a) Hn = 1 M O't(h2i-1 +h2i) 0 0 0 0 0 0 0 47

PAGE 58

0 0 0 0 0 0 0 +h,i) M (3.80b) H,2 = 0 0 0 0 0 __ 2_M tTth2i 0 _2 __ M O'thli 0 0 0 0 0 0 0 0 (3.80c) 0 0 0 0 0 0 0 0 0 0 0 M (3.80d) H22 = Ut(h2i+l +h2it2) 0 0 0 0 0 0 0 0 0 0 0 0 (3.80e) 0 0 0 0 0 0 48

PAGE 59

0 0 0 0 0 0 (3.80f) 0 0 0 0 Since M is constant, we can find a C4 such that (3.81) -c;,_l 0 0 II [ S2i-1,2 0 s:,.] -Dh. -Ch. 0 2 2 I h.oo 0 S2i,1 0 -Ct+l 0 0 < c4 min( Uth2i-h Cfth2i1 Uth2i+b Uth2i+2) Combined with Lemma 2, we have (3.82) IIH-11 < c max(O't(h2i-l + h2;),0't(h2i+l + h2;+2 )) l,oo 2 ( h h h h ) IDlll Ut 2i-1, Ut 2i, O"t 2i+b Ut 2i+2 with c2 = c1 C4. To prove (3. 78b ), we have (3.83) 0 0 0 0 [ S2i-1,2 0 0 0 0 0 [ :.. 0 0 : l = 0 s2>+1,1 0 0 0 0 0 0 c2i+2 0 0 0 49

PAGE 60

with 0 0 0 0 0 0 0 0 (3.84) U21 = 1 M Ut(h2i+1 +h2i+2) 0 0 0 0 0 0 0 Using Lemma 2, (3.78b) is proved. The final result, (3.78c), can be proved in the same manner. 0 We next assume that1 on the finest grid, the value of G'thi does not vary too quicklyj that is, we assume that for each i (3.85) where C5 is a bound independent of mesh sizes. With (3.85), we have (3.86) We write the coarse grid operator (3.66) as (3.87)

PAGE 61

where A2h 1 c2h -1 -D2h 2 0 0 Z1= (3.88a} D2h i+l A2h i+l 0 0 A2h T-t 2-1 ' 0 0 0 0 0 0 c2h 2 0 0 D2h 3 0 0 0 0 (3.88b) z2 = 0 0 0 0 0 0 c2h T-2 0 0 D2h !j--1 0 0 0 0 0 0 Then (3.89} (A2ht1 = (Iz;1 z2t1 z;t. Lemma 4 Suppose max (uthk) 1 and(3.85) holdsforalli;then II(I-Z11Z2)-1II2 k::::(l, ... .!J!') is bounded by a constant independent of ITt hi. 51

PAGE 62

Proof: By using (3.75), the leading term of IZ:[1 Z2 can be expanded as (3.90) = where 1 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 1 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 (3.91a) Ql= 1 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 1 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 52

PAGE 63

0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 1 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 1 (3.91b) Q2= 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 1 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 1 Notice that Q2Q1 = 0 and (3.90) can be obtained by performing elementary operations on an identity matrix. So (3.92) where P1 and P2 are elementary matrices independent of Uthi. Thus (3.93) where Cg is a constant independent of O"thi. D With all the lemmas established, we present in the next theorem an upper bound on the convergence factor for the multigrid algorithm when max ( Uthk) 1. ... ,m) Theorem 5 When max ( Uthk) 1 and (3.85) holds for all i, a two-grid multi ... ,m) grid V(l, 0) with two-cell red-black block p.-line relaxation will have a convergence factor of 0( max (u:hm. ... ,m) 53

PAGE 64

Proof: From Theorem 21 the errors after two-cell red-black block J.L-line relaxation can be expressed as (3.94) A multigrid V(l, 0) cycle applied to the error takes the matrix form (3.95) h Gh h Ih (A.h)-1I2hAhGheh 2h h Substituting (3.94) into (3.95), we have (3.96) Note that (3.97) I h 2h Ih (A2h)-1J2hAhlh eh 2hf -2h h 2h- It is obvious that we only need to bound the norm of A h. From (3.89), we obtain (3.98) and -V2 Ha (3.99) 54

PAGE 65

where H;, U; and V; are defined by (3.78) and, by assumption of (3.85) and (3.86), we have (3.100) (3.101) The norms of can be bounded by a constant. So (3.102) Combining Lemma 4 with Equations (3.101) and (3.102), we obtain (3.103) 0 When min ( 1, we limit our analysis to four cells on the fine grid with k=(l, ... ,m) uniform mesh size h; and our numerical results imply that the analytical results are valid with nonuniform mesh size and many cells. The next theorem gives an upper bound on convergence factor of the multigrid algorithm when m = 4 and 1. First, we establish some space saving notation. Define the block matrices (3.104) Ei,t = [0, ... 0, I, 0, ... 0], to have l blocks of size n X n with the I appearing as the jth block. Consider another block matrix with n X n blocks, say (3.105) W = [W1, ... Wk] 55

PAGE 66

The product WT Ej,l is a k X l block matrix with n X n blocks. Each block column is zero except the jth block column, which contains WT. For example, we may write C; in (3.18b) as 0 0 0 0 0 0 0 0 (3.106) C;= = [0, 0, B;, O]T E1,4 B; 0 0 0 0 0 0 0 Theorem 6 When there are four uniform cells on the fine grid and O"th ::;)> 1, a multigrid V(1, 1) cycle with two-cell red-black block Jl-line relazation will have a convergence factor p :S:: u;h for some c independent of u,h. Proof: Define A h as the fine grid operator then (3.107) where 56

PAGE 67

(3.108a) I +2B-R -2R -2B R 0 0 0 0 0 I-R -R B 0 0 0 0 B -R I-R 0 -B 0 0 0 R -2B -2R I +2B-R 0 0 0 0 A= 0 0 0 0 I +2B R -2R -2B R 0 0 0 -B 0 I-R -R B 0 0 0 0 B -R I-R 0 0 0 0 0 R -2B -2R I+ 2B-R and (3.108b) C = [ 0 0 0 0 0 0 B 0 ] T E1,s, D = [ 0 B 0 0 0 0 0 0 ] T Es,B The coarse grid operator A 2 h is defined by (3.109) A2h Ih Ah I2h -2h h with and Ilh defined by (3.110a) I 0 0 [ :' s2 0 0 l 's, 0 li 0 I2h-2 h -0 s1 s2 0 0 li 2 0 0 0 57 0 0 0 0 0 0 0 li 0 0 ,s2 = 2 0 0 0 li 2 0 0 0 0 0 I

PAGE 68

(3.110b) T1 0 I 0 0 0 0 0 I 0 T2 0 0 0 _li 0 li 0 li I;h = ,Tl = 2 2 ,T2 = 2 2 0 T1 li 0 li 0 _li 0 0 2 2 2 2 0 T2 0 I 0 0 0 0 0 I The red-black relaxation matrix Gh is (3.111) A V(1, 1) multigrid cycle can be written in matrix form as (3.112) is the initial error is the error after the V(l, 1) cycle. Multiply Gh by Ah on the left to get 0 Note that (3.114) [ : : ] 1 0 0 ,B 0 0 0 0 0 r E, .. 58

PAGE 69

0 RM RM (A") 'If" [ : : I u,h 2RM (3.115) 4c2 2RM RM RM 0 !RM + _!'l._MRM 2 2c2 !(3 + _!'l._MRM-_c_RM2 8 2c2 4c2 !(3 + ac,c')RM + _!'l._MRM-_c_RM2 8 4c2 1(3 + _!'l._MRM-_c_RM2 4 2c2 2c2 !(-1 + ac,c')RM _!'l._MRM-_c_RM2 4 2c2 2c2 !(1 + + _!'l._MRM-_c_RM2 8 2c2 4c2 !(1 + ac, co )RM -_c_ M RM _c_ RM2 8 4c2 4c2 E1,1a 1 E1,1a + 0(-h), O't where M = u,hB and c1 = L;'J=1 Wjflj, c2 = L;'J=1 and Ca = L;'J=1 Note that RM R = c1R, RM2 R = c2R and RM3 R = c3R. 59

PAGE 70

Expand I-A-1 DA-1C as (3.116) 1 4RM 3RM 3RM 2RM E1,s 2RM RM RM 0 -8c2RM-4c1MRM-4c1RM2 -5c2RM + 4c1M RM --4 1 (13 )c1RM2 c2 -5c2RM-4c1MRMHl3)c1RM2 "2 -2c2RM + 4c1MRMH5 RM2 c2 1 2 -2c2RM -4c1MRMH52 c2RM-4c1M RM--41(5)c1RM2 c, c2RM + 4c1M RM--41 (5 -c'i' )c1RM2 c2 4c2RM + 4c1M RM 60 1 E1,s + 0( 2h2) (]"'

PAGE 71

From (3.115) and (3.116), we get (3.117) 61 0 2RM 2RM 4RM 4RM 2RM 2RM 0 1 E9,1s + 0( -h). u,

PAGE 72

Then 0 RM RM 2RM 2RM 3RM 3RM Jh (A2ht' 12h AhGh = _!_ 4RM 1 (3.118) E9,1s + 0( h). 2h h 4 c, 4RM (J't 3RM 3RM 2RM 2RM RM RM 0 62

PAGE 73

The expansion of Gh in (3.111) takes the form 0 lRM + .ELRM2 2 2c2 lRM + .ELRM2 2 2c2 RM + 9..RM2 c, RM + 9..RM2 c, + .ELRM2 2 2c2 + .ELRM2 2 2c2 Gh = _1_ 4RM 1 E9,1s + 0( /) 4c1 4RM !J't (3.119) 3RM 3RM 2RM 2RM RM RM 0 63

PAGE 74

So 0 RM RM 2RM 2RM 3RM 3RM GhGh = __!___ 4RM 1 (3.120) E9,16 + 0( Uth) 4c1 4RM 3RM 3RM 2RM 2RM RM RM 0 64

PAGE 75

and 0 RM RM 2RM 2RM 3RM 3RM (3.121) 4RM 1 E9,1s+ 0( /;) 4RM u, 3RM 3RM 2RM 2RM RM RM 0 Combining (3.120) and (3.121), we can have (3.122) where c is a constant independent of u,h. D 65

PAGE 76

3.6 Computational Results 3.6.1 Multigrid Convergence Factors The following computational results were conducted on a computational domain with m = 1024 and Ss, that is, n = 4. Our analytical results are independent of n. From the computational results conducted on S N with N ranging from 2 to 256, we observe a convergence performance independent of N. 1. Convergence Factor for Uniform Grid The results in Table 3.1 reveal that the observed convergence factor p is 0( u; h3 ) when u,h : 1 and 0( 2 1 h 2 ) when u,h ::}> 1. The computational are even better than the analytical results which only give upper bounds on the convergence factor. In Table 3.2, we present a range within which the maximal convergence factor occurs, which is around Uth = 0.01. 2. Convergence Factor for Nonuniform Grid We choose a highly variable O"thi by letting O"thi = cl02";, i = 1, ... m, where '1/i is a random number between ( -1, 1 ). Thus, neighboring cells can vary in width up to 4 orders of magnitude. Here, c is a constant chosen to control the range of cell widths. If c = 1, then O"thi are random numbers in (0.01,100). If c 2 100, then O"thi 2 1. If c ::; 0.01, then O"thi ::; 1. The results of Table 3.3 indicate that the performance of our multigrid scheme for a nonuniform grid is the same as for a uniform grid. Furthermore, in both the thin and thick limit cases, a nonuniform grid has the same convergence behavior as a uniform grid. 66

PAGE 77

(J't h convergence factor p 1o- 0.23 X 10-10 1o- 0.21 X 10-7 10-4 0.16 X 10-4 1o-a 0.20 X 10-2 10-2 0.97 X 10-2 lQ-1 0.89 X 10-2 1 o.58 x lo-a 10 1 o.21 x 10-4 102 0.20 X 10-6 103 0.20 X IQ-8 104 0.20 X lQ-10 Table 3.1. Convergence Factors for Uniform Grid 67

PAGE 78

O"t h convergence factor p 0.005 o.12 x 10-2 0.006 0.87 X 10-2 0.007 o.95 x 10-2 0.008 o.98 x 10-2 0.009 o.98 x 10-2 0.010 0.97 X 10-2 0.011 o.96 x 10-2 0.012 0.96 X 10-2 0.013 o.95 x 10-2 0.014 o.95 x w-2 0.015 0.95 X 10-2 Table 3.2. Convergence Factors for Uniform Grid: Worst Case 68

PAGE 79

c convergence factor p Lo-6 9.9 x I0-10 Lo- 8.5 X l0-7 Lo-4 2.1 xlo-4 1.0-3 5.4 X 10-3 1.0 9.1 X 10-4 1.02 3.3 X l0-4 1.03 6.1 x 10-6 1.04 2.4 x lo-s 1.05 2.3 X 10-10 Table 3.3. Convergence Factors for Nonuniform Grids 3.6.2 DSA Convergence Rates As mentioned in the introduction, a competing algorithm for the solution of the isotropic S N equations in slab geometry is Diffusion Synthetic Acceleration (DSA) which is motivated by the observation that, in the thick limit, the solution of the linear Boltzman equation (1.1) becomes independent of angle except near boundaries and sources. If the S N problem were recast as an S2 problem, then the zeroth and first Legendre moments of the solution could be found by solving a diffusion equation for the zeroth moment. DSA can be viewed as a precon-ditioning technique (Faber and Manteuffel [4]) or as a two-level multigrid scheme (Larson [6]) in which an 82 problem is used as a coarse level correction to an S N problem. In this context, a transport sweep or x-line relaxation (see(3.3)) corresponds to multigrid relaxation and the 69

PAGE 80

S2 problem represents the coarse level. DSA skips all of the intermediate levels. The multigrid in angle scheme developed in Morel and Manteuffel [14] visits levels N, N /2, N /4, ... 2 and is effective for isotropiC scattering as well as highly anisotropic scattering. The overall DSA algorithm is very sensitive to the difference scheme used to solve the diffusion equation. In our numerical results we solve an 82 equation discretized with a MLD scheme using Marshak boundary conditions as the coarse level correction. This is equivalent to solving a consistently differenced diffusion equation for the zeroth moment. In Table 3.4 and Table 3.5 we present convergence factors for a V(1, 1) multigrid cycle and for a DSA cycle. Here, the slab is assumed to have physical thickness 1. Thus, Ut represents the width of the slab measured in the number of mean-free-paths. The tests were performed using Sa and a wide range of u1 and m (the number of cells). The diagonals of these tables represent constant u,h. For the multigrid algorithm, the convergence factors are roughly equal along diagonals, with the worst factors occurring for u,h in the range 4-3 (.0156) to 4-s (.97E -3). For example, u1 = 4-l, m = 16 and u1 = 64, m = 4096 are on the diagonal with u1h = .0156. The behavior of the DSA algorithm is quite different. The best factor achieved is .124. Further, for any fixed Ut ::0:: 4, the slab is thick but the cells become individually thin as the number of cells increases. In this limit DSA saturates to a convergence factor of .23. In our tests the value of m was not quite large enough to achieve this limit, but a value of .219 was reached for Ut = 16 and m = 4096. 70

PAGE 81

O't m= 16 m= 64 m=256 m = 1024 m = 4096 4-5 .12E-9 .16E-9 .16E-9 .16E-9 .16E-9 4-4 .72E-8 .10E-7 .10E-7 .10E-7 .10E-7 4-3 .37E-6 .55E-6 .56E-6 .56E-6 .56E-6 4-2 .17E-4 .16E-4 .18E-4 .26E-4 .27E-4 4-1 .25E-3 .53E-3 .55E-3 .55E-4 .58E-4 1 .14E-2 .37E-2 .44E-2 .44E-2 .46E-2 4 .82E-3 .35E-2 .82E-2 .99E-2 .99E-2 16 .57E-3 .13E-2 .63E-2 .86E-2 .97E-2 64 .46E-4 .57E-3 .23E-2 .85E-2 .99E-2 256 .20E-5 .46E-4 .58E-3 .25E-2 .89E-2 1024 .12E-6 .18E-5 .47E-4 .58E-3 .26E-2 4096 .75E-8 .12E-6 .16E-5 .46E-4 .58E-3 8192 .45E-9 .75E-8 .12E-6 .14E-5 .47E-4 16384 .28E-10 .46E-9 .75E-8 .12E-6 .llE-5 32768 .18E-11 .30E-10 .47E-9 .75E-8 .12E-6 65536 .llE-12 .20E-11 .30E-10 .46E-9 .75E-8 Table 3.4. Convergence Factors for the M ultigrid Algorithm 71

PAGE 82

u, m= 16 m= 64 m=256 m = 1024 m = 4096 4-5 .190E-2 .190E-2 .190E-2 .190E-2 .190E-2 4-4 .747E-2 .746E-2 .746E-2 .747E-2 .747E-2 4-3 .276E-1 .276E-1 .276E-1 .276E-1 .276E-1 4-2 .836E-1 .834E-1 .834E-1 .835E-1 .835E-1 4-1 .153 .151 .151 .152 .152 1 .188 .203 .204 .204 .205 4 .180 .217 .216 .217 .218 16 .142 .164 .204 .217 .219 64 .129 .143 .149 .165 .185 256 .125 .131 .143 .145 .160 1024 .124 .126 .131 .143 .146 4096 .124 .124 .126 .131 .142 8192 .124 .124 .126 .127 .132 16384 .124 .124 .124 .126 .126 32768 .124 .124 .124 .124 .126 65536 .124 .124 .124 .124 .124 Table 3.5. Convergence Factors for the DSA Algorithm In all cases, the multigrid convergence factors were superior to the DSA convergence factors. However, it is important to adjust for the relative amount of computational work required by each algorithm. Of course, such measures will be machine dependent. On the Cray Y /MP, 72

PAGE 83

where these tests were performed, we compared computation time for N = 64 and m = 1024. The V(1, 1) cycle required 8.9 seconds and the DSA cycle required 3.7 seconds, the ratio of which is 2.4. Table 3.6 is the result of raising each term in Table 3.5 to the power 2.4. This is a more fair comparison with Table 3.4 on a serial machine. Note that the multigrid algorithm is nevertheless faster than DSA in all regimes. On a parallel machine we expect the results to even more heavily favor the multigrid algorithm. Both algorithms can be implemented with parallel complexity O(log(m)). In this context, however, we expect the times for a multigrid V(1, 1) cycle and a DSA cycle to be more nearly equal. A parallel version of the multigrid algorithm is described in [9][10]. A parallel version of the DSA sweep has also been implemented on the Thinking Machines Inc. CM-200 at Los Alamos National Laboratories([19]). In either setting, parallel or serial, a full multigrid algorithm can be implemented ([12]). This version of multigrid starts on a coarse grid and moves toward finer grids. In general, an amount of work equal to two V(1, 1) cycles on the finest grid will yield a solution that is accurate to the level of discretization error. This would provide a savings in some regions of the tables. Moreover 1 full multigrid provides a natural framework for adaptive grid refinement. 73

PAGE 84

u, m= 16 m= 64 m = 256 m = 1024 m 4096 4-5 .29E-6 .29E-6 .29E-6 .29E-6 .29E-6 4-4 .78E-5 .78E-5 .78E-5 .78E-5 .78E-5 4-3 .18E-3 .18E-3 .18E-3 .18E-3 .18E-3 4-2 .25E-2 .25E-2 .25E-2 .25E-2 .25E-2 4-1 .11E-1 .lOE-1 .10E-1 .lOE-1 .10E-1 1 .18E-1 .21E-1 .22E-1 .22E-1 .22E-1 4 .16E-1 .25E-1 .25E-1 .25E-1 .25E-1 16 .92E-2 .13E-1 .22E-1 .25E-1 .26E-1 64 .73E-2 .93E-2 .10E-1 .13E-1 .17E-1 256 .68E-2 .76E-2 .93E-2 .97E-2 .12E-1 1024 .66E-2 .69E-2 .76E-2 .93E-2 .98E-2 4096 .66E-2 .66E-2 .69E-2 .76E-2 .92E-2 8192 .66E-2 .66E-2 .69E-2 .70E-2 .77E-2 16384 .66E-2 .66E-2 .66E-2 .69E-2 .69E-2 32768 .66E-2 .66E-2 .66E-2 .66E-2 .69E-2 65536 .66E-2 .66E-2 .66E-2 .66E-2 .66E-2 Table 3.6. Adjusted Convergence Factors for the DSA Algorithm 74

PAGE 85

CHAPTER4 TRANSPORT PROBLEMS WITH ABSORPTION AND MULTIGRID SCHEMES 4.1 Introduction In Chapter 3, we developed an efficient scheme for solving the transport equations with-out absorption. In this chapter, we describe a fast method for solving the equations used to tnodel the transport of neutrally charged particles with isotropic scattering and various levels of absorption in slab geometry. From Chapter 21 the steady-state transport equation can be written as (4.1) When there is absorption in the transport equations and !J"th 1, the two-cell JL-line relaxation used in Chapter 3 will leave an error that is independent of angle and continuous but no longer linear across two cells. The deviation of the errors from linearity depends on the value off = !?:.!. Because of the nonlinearity of the errors after relaxation, the linear ,., interpolation we have used in our previous multigrid algorithm will not be suitable. In this chapter, we will present a new method that uses kinked elements whose shape will be determined by the severity of deviation from linearity of the errors after relaxation. When 1 = 1, the kinked element will be linear. In Section 4.2, we analyze the properties of relaxation when 1 oJ 1. In

PAGE 86

Section 4.3, we introduce the kinked element, relaxation-induced interpolation and coarse grid operator used to solve the transport problems when 1 # 1. In Section 4.4, we examine the properties of the new interpolation. In Section 4.5, computational results are presented. We show that the multigrid algorithm is faster than the Diffusion Synthetic Acceleration (DSA) algorithm [2][5] on a set of test problems. 4.2 The Discrete Model The same angular and spatial discretizations were used to discretize the transport equation with absorption. In block matrix form, MLD for transport equation with absorption can be written as (4.2) I+ 2B,-1R -21R -2B, 1R ,p+ _, In this chapter, we will use an edge/edge notation for the unknown flux variable 'if;. That is, the flux within cell i is linear and is determined by the value at the right and left sides,'l/Jt and We define the transformation tl 1 0 0 0 ti 0 2 0 -1 ,p+ ( 4.3a) _, 1 0 2 0 'if;-: _, 'l/Jt 0 0 0 1 76

PAGE 87

Then the inverse transformation of (4.3a) will be ..P,_t 1 0 0 0 ..p+ 0 1 0 1 t."i ( 4.3b) 2 2 = ..p-: 1 0 1 0 ..P-;,. 2 2 ..P"Zt-1 0 0 0 1 2 ..Pi-1,1 ..Pi-l,r ..Pu ..Pir Figure 4.1 Piecewise Discontinuous Elements Figure 4.1 shows the piecewise discontinuous elements we use. At cell i, two variables 1/Jil, .,Pi.,. are defined. By substituting ( 4.3b) into ( 4.2) and multiplying both sides of ( 4.2) by the transformation matrix of (4.3a), we then can write (4.2) in block matrix form as A1 -C1 q;l -D2 A2 -C2 q;2 (4.4) = [ Q]' -D; A, -C; q;. 77


where Q is the right-hand side and

(4.5a)   A_i = [ I + B_i - (γ/2)R    -(γ/2)R           -B_i               0 ;
                 -(γ/2)R             I + B_i - (γ/2)R   0                 B_i ;
                 B_i                 0                  I + B_i - (γ/2)R  -(γ/2)R ;
                 0                   -B_i               -(γ/2)R           I + B_i - (γ/2)R ],   i = 1, ..., m.

The coupling blocks

(4.5b)   C_i, i = 1, ..., m-1, and
(4.5c)   D_i, i = 2, ..., m,

are 4 × 4 block matrices whose only nonzero block is 2B_i; C_i carries the upwind coupling of the negative-direction flux to cell i+1, and D_i the coupling of the positive-direction flux to cell i-1. Finally,

(4.5d)   Ψ_i collects the four edge unknowns ψ^±_{il}, ψ^±_{ir} of cell i, i = 1, ..., m.

In the following section, we will examine the two-cell μ-line relaxation and its properties.

4.3 Relaxation

By two-cell red-black μ-line relaxation, we mean that at cells 2i-1 and 2i we fix the incoming values ψ⁺_{(2i-2)r} and ψ⁻_{(2i+1)l} as boundary values and solve for all other values in cells 2i-1 and 2i


simultaneously. This relaxation is carried out in a red-black ordering. Since the relaxation uses a red-black ordering, we can look at each two-cell pair individually. For a two-cell pair, for example cells 2i-1 and 2i, the errors after the relaxation will be

(4.6)   [ ε_{(2i-1)l} ; ε_{(2i-1)r} ; ε_{(2i)l} ; ε_{(2i)r} ]
        = [ A_{2i-1}  -C_{2i-1} ;  -D_{2i}  A_{2i} ]^{-1} b,

where the vector b vanishes except for the entries 2B_{2i-1} ε⁺_{(2i-2)r} and 2B_{2i} ε⁻_{(2i+1)l}; that is, ε on the right-hand side is the error at the outside border of the two-cell pair before relaxation. As we have shown in Chapter 3, the inversion of [ A_{2i-1}  -C_{2i-1} ;  -D_{2i}  A_{2i} ] can be performed in O(n) operations by using the Sherman-Morrison formula. In this chapter, we discuss the MLD equation with absorption (γ ≠ 1). In the thin limit, γ will not affect the behavior of μ-line relaxation. The proof of Theorem 2 that appears in Chapter 3 does not depend on γ. We restate Theorem 2 of Chapter 3 for γ ≠ 1 in the following way:

Theorem 7  Suppose γ ≤ 1 and max(σ_t h_{2i-1}, σ_t h_{2i}) ≪ 1. Then, after two-cell μ-line relaxation, the errors at cells 2i-1 and 2i are in the range of interpolation up to O(max(σ_t h_{2i-1}, σ_t h_{2i})).

Proof: see the proof of Theorem 2 in Chapter 3. In the thick limit, the size of 1 1 relative to -h1 and -h1 will affect the property O"t :h-1 O't 2 of JL-line relaxation. For simplicity, we restrict our analysis to uniform Uthi and let h be the uniform mesh size. We analyze JL-line relaxation when O"th b )1 = p=1,2,3. In the second case, when Uth 1, the larger p is, the closer 1 is to 1 and the less there is absorption. Theorem 8 When 1 1 _Lh and u,h 1, two-cell !.-line relazation alone will reduce O"t errors at any two neigboring cells 2i 1, 2i by a factor of O{_Lh). O"t Proof: When 1 =f 1, define (4.7) and (4.8) u-'=[I-IR ]-' [I 0]+1:1[R Rl -1R I -1R 0 I R R Write 80

PAGE 91

u u (4.9) u u B 0 -B 0 0 0 0 0 0 B 0 B 0 0 0 0 B 0 B 0 -2B 0 0 0 0 -B 0 B 0 0 0 0 = Ho-H1 0 0 0 0 B 0 -B 0 0 0 0 -2B 0 B 0 B 0 0 0 0 B 0 B 0 0 0 0 0 0 -B 0 B 81

PAGE 92

where Ho is the first matrix and H1 the second in (4.9). By (4.8), Ho is not singular, so M 0 -M 0 0 0 0 0 0 M 0 M 0 0 0 0 M 0 M 0 -2M 0 0 0 1 0 -M 0 M 0 0 0 0 (4.10) H 0 1Ht u,h + 0 0 0 0 M 0 -M 0 0 0 0 -2M 0 M 0 M 0 0 0 0 M 0 M 0 0 0 0 0 0 -M 0 M RM RM -RM RM 0 0 0 0 RM RM -RM RM 0 0 0 0 RM -RM RM RM -2RM 0 0 0 I RM -RM RM RM -2RM 0 0 0 (1 1 )u,h 0 0 0 -2RM RM RM -RM RM 0 0 0 -2RM RM RM -RM RM 0 0 0 0 RM -RM RM RM 0 0 0 0 RM -RM RM RM where M is defined by (3.38c). When u,h > 1 and (1 1) > then (1 1 )u,h > 1 and we can find a constant c such that (4.11) IIH[J1 Htll2 ::0: ch. u, 82

PAGE 93

So (4.12) Since H01 is independent of (J'th, we have From (4.8) and (4.9), we have (4.14a) where c1 = max(wh ... ,wn) and thus (4.14b) IIHo1ll2:::: 1 + -1 = 1 + (c -1h. 1-{ 1-{ Therefore, by (4.14), we can find a constant c1 such that (4.15) Substituting (4.15) into (4.7) and noting that Bi = _LhM, we have O't (4.16) D When { = 1 and (J'th 1, p=1,2,3, two-cell fL-line relaxation will produce errors which are continuous and independent of angle but kinked across two cells. To show this 83

PAGE 94

behavior 1 we rewrite ( 4. 7) as l+t e+ -(2i-l)l e+ (4.17) -(2i-1)r = f(2i)l + f(2i)l f(2i)r + f(2i)r 0 0 2B2i-1 0 0 0 [ A,._, -c,,_, r I 0 0 + -D2; A2; 0 0 0 0 0 2B2; 0 0 [ A2i-1 -C2;-1 l-1 1 we can expand matrix into -D2; A2; When cr,h 1 and 1 = Taylor series as O(cr,h) + 0(1) + O(.,.;h). We omit the details of the expansion because of 84

PAGE 95

the complexity. Let fi = l .. and = _Lh. After the expansion, we have Utrv Ut (4.18) u,h where (4.19) 1 2RM 2RM 0 0 2e + 3h3 0 u, 0 0 0 0 0
PAGE 96

and (4.20) Here e is not expanded yet. Its expansion will depend on the power pin I = 1 We discuss the three cases p=1,2,3 separately: 1) p = 1 When cr,h 1, (in (4.19) can be expanded as (4.21) n Jlj Jlj 1 Co 1 ( = 1 I:; 2wj(1 + -h)(1-2-h + 0( 2h2)) = 1(12-h) + 0( 2h2 ) j=l Ut Ut Ut Ut U So, by (4.21), e in (4.19) can be approximated as ( 4.22) When p = 1, P of (4.20) can be expressed as ( 4.23) 2 1 P = (1 + MR)(1-MR) + 0(-) 1 + 4c0 cr1h 86

PAGE 97

By (4.22) and (4.23), the norm of the second term in (4.18) is O(u;h). So 0 2RM 2B2i-1 2RM 0 0 [A,, -C2;-1 r 0 1 0 1 ( 4.24) = +0(1) -D2; A2; 0 1 + 2c0 0 O't 0 0 0 0 0 0 Similarly, we have 0 0 0 0 0 0 [A,, -C2-1 r 0 1 0 1 (4.25) = +O(h). -D2; A2; 0 1 + 2c0 0 O't 0 0 2Bu 2RM 0 2RM By (4.24) and (4.25), the errors after relaxation will be independent of angle up to accuracy O(u!h), but not be piecewise linear. To graphically show this, we plot the errors after JL-line relaxation in Figures 4.2 and 4.3. Figure 4.2a is the first term in (4.17) for positive angles. Figure 4.2b is the second term in ( 4.17) for positive angles. The real errors after relaxation 87

PAGE 98

should be the summation of Figures 4.2a and 4.2b. Figure 4.3 shows the error distribution for negative angles after relaxation. From Figures 4.2 and 4.3, after relaxation, equals and .{2i-l)r equals .(2i)l up to accuracy of O(.,.;h). We can see that the error after relaxation is kinked and linear interpolation will not suitable for this case. However, regardless of the size of and the errors at the boundary between cell 2i-1 and cell 2i are 2-RMe+ 1+2c0 -(2i-2)r + )..(2i)l Figure 4.2a Figure 4.2b Error Distribution for Positive Angle, p = 1 2-RMe+ s Figure 4.3a _(2i-1)1 Figure 4.3b Error Distribution for Negative Angle, p = 1 2) p = 2 When p=2 and u,h > 1, with (in (4.21), e can be expanded as (4.26) 88

PAGE 99

and P in ( 4.20) be approximated as (4.27) P= u,h MR+0(1). 1 + 4c1 Substituting (4.26) and (4.27) into (4.18), we have 0 _!_RM co 2B2i-t _!_RM co 0 2c RM co(l+4ct) [A,., -c,._, r 0 2c RM ( 4.28) co(1+4cl) -D2; A2i 0 RM co(l+4c,) 0 2c, RM co(H4c,) 0 0 0 0 0 0 0 0 0 2c RM co(H4c,) [ A,. -c .. _, r 0 2c, RM co(1+4cl) ( 4.29) -D2; A2; 0 2c RM co(1+4ct) 0 2CJ RM co(1+4cl) 2B2i+t _!_RM "" 0 _!_RM co 89 1 + 0(----,;) u, 1 +0(/;/ u,

PAGE 100

We plot (4.28) and (4.29) in Figures 4.4 and 4.5. We can see once again the errors after relaxation are independent of angle, continuous across cells but kinked linear. In fact, they will be sublinear, that is, the error at the interface between cells 2i -1 and 2i is less than the average of the errors at the edges of the pair. Figure 4.4a Figure 4.4b Error Distribution for Positive Angle, p = 2 Figure 4.5a Figure 4.5b Error Distribution for Negative Angle, p = 2 3) p = 3 When p = 3 and rr,h 1, with (in (4.21), e can be expanded as ( 4.30) rr,h 1 2 + 0( 2h2). co ut and P in ( 4.20) can be expanded as 90

PAGE 101

(4.31) P = u,h MR + 0(1). 4c1 Substituting (4.30) and (4.31) into (4.18), we have 0 lRM co 2B2i-1 lRM co 0 1 RM 2co [A, -c,_, r 0 lRM ( 4.32) 2co -D2; A2; 0 1 RM 2co 0 lRM 2co 0 0 0 0 0 0 0 0 0 1 RM 2co [A,_, -c,_, r 0 1 RM ( 4.33) 2co -D2; A2; 0 1 RM 2co 0 1 RM 2co 2B2i+1 lRM co 0 lRM co 1 + 0( -;;)-u, 1 +0(1/ Ut We plot (4.32) and (4.33) in Figures 4.6 and 4.7. In this case the error after relaxation is 91

PAGE 102

independent of angle, linear and continuous across a pair up to accuracy 0( u!h ). Figure 4.6a Figure 4.6b Error Distribution for Positive Angle, p = 3 Figure 4.7a Figure 4.7b Error Distribution for Negative Angle, p = 3 In any case, after JL-line relaxation the error across a pair will be independent of angle, continuous and sublinear up to order The error at the boundary between cells 2i-1 and 2i can be expressed as ( 4.34) ( 1 ( 1 )( ) ( 1 = f(2i)! + 0 u,h) = 2 d + f(2i)r + 0 rr,h ), where 0 :S: d :S: and denotes variables in either positive or negative directions. We summarize the above discussion in the following theorem. 92

PAGE 103

Theorem 9 Suppose 1 = 1 and O'th 1. After p,-line relazation, the error across a pair will be independent of angle and continuous up to 0( u!h). Further, the error at the boundary between the pair can be written as in (4.34}, where i) d = for p :::; 1, ii} 0 < d < for p = 2, iii) d = 0 for p 3. Proof: The proof follows from the above discussion. D When p = 3, relaxation will produce errors of which the dominant terms are piecewise linear. But, in a multigrid algorithm, the mesh size of the coarse grid is twice as much as the fine grid. Since 1 is the same on all grids, although p = 3 on the finest grid, as the grids go from finest to coarsest level, the size of 1 -{ relative to ....Lh will change. To show "' this, let O'th = 100,p = 3 and 1 = 1lh, = 0.999999. When we go down 4levels, the u, mesh size on that level will be ( O'th )4h = 1600, so 1 = 0.999999 = 1 -( hJ'""'. That O't 4h. means when p = 3, even though the linear interpolation is good enough on finest grid, it is not proper for the coarse grid since the coarse grid mesh size will be doubled at each level and the size of 1 1 relative to ..Lh will change from level to level.
PAGE 104

and d. Then we can obtain and by interpolation: e+ -(2i-1)1 1 0 + "--d "--d [ t: ,, l f(2i-l)r 2 2 (4.35a) + "--d l d f(2i)l 2 2 f(2i)r + f(2i)r 0 1 The same interpolation can be used for negative angles. If f{;i-l)l and dare known, we have J2{2i-1 )1 1 0 J2{2i-1 )r "--d "--d [ ,.,,, l ( 4.35b) 2 2 "--d "--d f(2i)l 2 2 f(2i)r f(2i)r 0 1 When p = 1, by (4.25) and (4.34) we have d = 0.5. Then (4.17) can be approximated by J2{2i-1)1 1 0 0 0 e+ -{2i-1)1 0 1 0 0 J2{2i-l)r 0 0 0 0 f(2i-1)1 e+ 0 0 0 0 e+ 1 ( 4.36) -{2i-l)r -{2i-1)1 +0(1/ 0 0 0 0 u, f(2i)l f(2i)r + f(2i)l 0 0 0 0 + f(2i)r f(2i)r 0 0 1 0 + f{2i)r 0 0 0 1 94

PAGE 105

When p = 2, d = 2 ( 1 ; 4cl)' then (4.17) can be approximately expressed as 1 0 0 0 e+ -(2i-1)1 0 1 0 0 0 0 1+4cl 1+4cl e+ 0 0 + -(2i-1)r 1+4cl 1+4cl 1 ( 4.37) +0(---,;) 0 0 u, 1+4cl 1+4ct + 0 0 _1!:L_ + 1+4ct 1+4c1 0 0 1 0 + 0 0 0 1 For every pair, the problem is reduced to finding d, The parameter dis easy to obtain since it is determined by the relaxation. If we can find a relatively accu-rate interpolation, we can then use a coaser grid to approximate )P )P on the fine grid. In a multigrid algorithm, a coarse grid solution can be approximated by an even coarser grid solution provided a good interpolation can be found. This process can be accomplished recursively down to the coarsest grid, where there will be only a few grid points so the solution usually can be computed directly. For our case, two-cell JL-line relaxation will exactly solve the coarsest grid, which has only two cells. Thus, finding a good interpolation is vital to our efficient multigrid scheme. 95

PAGE 106

We define the restriction operator by s1 s2 (4.38) J2hh -s1 s2 where l 0 l 0 l 0 0 0 2 4 4 0 l 0 l 0 l 0 0 2 4 4 ( 4.39) s1 = s2 = 0 0 l 0 l 0 l 0 4 4 2 0 0 0 l 0 0 l 4 4 2 Note that the restriction operator represents full weighting, and from our discussion above, we also define the interpolation according to error distribution after two-cell p-line relaxation as T1 I;h = T2 ( 4.40) T1 T2 where ( 4.41) 1 0 0 0 d 0 !_d 0 2 2 0 1 0 0 0 !_d 0 !_d Tl = T2 = 2 2 0 !_d 2 0 0 0 1 0 0 l_d 2 0 !_d 2 0 0 0 1 96

PAGE 107

When d = 0, the interpolation operator It'h is the transpose of the restriction operator scaled by the factor In the next section, we discuss the multigrid algorithm using the kinked interpolation operator and restriction operator 4.4 Multigrid Algorithm In this section, we derive the coarse grid operator and show that although the coarse grid operator no longer represent MLD on the coarse grid, it has the same structure as the fine grid operator and two-cell fL-line relaxation can be accomplished with the same formula and cost as for the fine grid. From (4.5), we define the fine grid operator by ( 4.42) where A\ Chand Bh are defined by (4.6) for grid h. With restriction and interpolation 1;h operators defined by (4.38) and ( 4.41), respectively, the coarse grid operator can be obtained by ( 4.43) L 2h Ih Lhl2h -2h h -97

PAGE 108

where ( 4.44a) (4.44b) ( 4.44c) A" [ S, S, l [ -; ][ :J c" [ s, s, ] [ ;, : ][ :J s,J[: :][::] It is easy to verify that 0 0 0 0 0 0 0 0 (4.45a) 02h = 4B 0 0 0 0 0 0 0 0 0 0 0 0 0 0 4B ( 4.45b) D2h = 0 0 0 0 0 0 0 0 To derive A 2 \ we write Fh 0 Qh 0 ahR ahR bhR 0 Kh 0 zh ahR ahR bhR ( 4.46) Ah = zh 0 Kh 0 bhR bhR ahR 0 Qh 0 Fh bhR bhR ahR 98 bhR bhR ahR ahR

PAGE 109

where F\ G\ Kh and zh are diagonal matrices and ah and bh are scalars. We use this notation to explain that the coarse grid operator will follow the same pattern as the fine grid operator; that is, A h and consecutive two-cell coarse grid operators A 2h, A 4 h ... are all composed by a sparse matrix whose components are diagonal matrices plus a rank two matrix. For the finest grid h, ph = I+ B, Gh = -B, Kh = I+ B, zh = B, ah = 1 and bh = 0. After multiplication, ( 4.44a) can be written as p2h 0 Q2h 0 a2hR a2hR b2hR b2hR 0 K2h 0 zh ahR a2hR b2hR b2hR (4.47) A2h = z2h 0 K2h 0 b2hR b2hR a 2hR a 2hR 0 ah 0 p2h bhR b2hR a 2hR a 2hR where (4.48a) F 2 h = d)Fh + (1-2d)Gh + (0.5d)Kh + zh-(1-2d)B], ( 4.48b) G 2 h = d)Fh + (2-2d)Gh + (0.5 d)Kh-(1-2d)B], (4.48c) K 2 h = d)Fh + Gh + (2.5-d)Kh + (1-2d)Zh-(1-2d)B], ( 4.48d) Z 2 h = d)Fh + (0.5 d)Kh + (2 -2d)Zh (1 -2d)B], ( 4.48e) ( 4.481) d -ah + 0 5bh 2 99

PAGE 110

Since ph, Gh, Kh, zh and B are diagonal matrices, F2h, G2h, K2h, Z2h are also diagonal matrices. So, structurally, Ah and A2h are the same. For two-cell J.L-line relaxation on grid [ -Cif'-1 l 2h, we can still easily invert A2h 21. 2i with O(n) operations by taking advantage of the fact that the matrix to be inverted consists of an easily invertible sparse matrix with diagonal components and a rank four matrix. The coarse grid operator on grid 4h will also have the same structure as on grids h and 2h. 4.5 Properties of Kinked Interpolation In this section, we reveal the properties of the kinked interpolation operator by solving ( 4.2) with boundary condition (4.49) .!.+ -1 .J,0 'Lor --' .'t::{m+l)l -on a slab of :u( -1, 1) and (J't :;}> 1 and 1 = 11,,p = 1, 2, 3. For each set of (J't'l' we " can find parameter d defined in the previous section on each level( from finest to coasest grid). With d on every grid obtained, we have kinked interpolation on each grid. Suppose h is the mesh size of the finest grid and H is the mesh size of the coarsest grid, which has only two cells. We define the accumulated interpolation as (4.50) We then solve a homogeneous Sa transport equation with unit isotropic boundary condition on the left and zero on the right on grids with 8 cells and 128 cells, respectively. We choose (J't = 100 and 1 = 1 :r,p 1, 2, 3. We choose an angle line 1'= 0.525 for the 100

PAGE 111

comparison of the interpolated approximation and the exact discretized solution. With the calculated values 1j; and 1j; :,. we use the accumulated kinked interpolation operator from the coarsest grid to the finest grid to interpolate to all interior values on finest grid and compare the interpolated approximation with the calculated solution. Figure 4.8 is the case that there is substantial absorption(p = 1). In this case, relaxation itselfis very efficient. ,_,,------------------, ,_,,-----------------, .. ' "' .. l:ZI! ",. \ ,_, I\ --\\ u\1 ut \ )o \\ \ : '' ----o. ' Figure 4.8a Figure 4.8b Interpolated Approximation, p = 1 ,_,,.--------------------, ,_,,---------------------, .. ;> : ... l:ZI! I> .. : '' ......... ----'' o. . o. c.: c.: ___ Figure 4.9a Figure 4.9b Interpolated Approximation, p = 2 101

PAGE 112

> '' .. = .. .! -----Soouuon '' '' 0. ,. o.z o.z M 1,-----:-:-----.,.,.,.., Figure 4.10a Figure 4.10b Interpolated Approximation, p = 3 Figure 4.9 concerns the case that is of most interest in the transport community(p = 2). Figure 4.10 is the case that is there is not much absorption (p = 3). Notice how well interpolation approximates the exact MLD on the finest grid. From Figure 4.10, we can see that when the power p increases, interpolation is more accurate since the kinked elements are less kinked. We can also conclude from these plots that the relaxation-induced interpolation we have designed is accurate and suitable for multigrid algorithm. In the next section, we present computational results of the multigrid algorithm using kinked interpolation. 102

PAGE 113

r . l 4.6 Computational Results 4.6.1 Multigrid Convergence Factors The following computational results were conducted on a computational domain with m = 128 and 832, that is, n = 16. From the computational results conducted on SN with N ranging from 2 to 256, we observe the same convergence performance. O't h 1 = 0.999999 1 = 0.999 1 = 0.9 1 = 0.7 lo-s 0.37 X 1o-9 o.37 x 10-9 0.35 X 10-9 o.33 x 10-9 10-4 0.31 X 10-6 0.31 X 10-6 o.31 x lo-s o.3o x 1o-s 10-3 o.11 x 1o-3 o.11 x 10-3 o.11 x 10-3 o.9 x 10-4 1o-2 o.5o x 1o-2 o.5o x 10-2 oA4 x 10-2 0.38 X 10-2 10-1 0.68 X 10-2 0.43 xlo-2 o.68 x 10-2 o.68 x 10-2 10-0 0.70 X 10-2 o.12 x 10-2 0.14 X 10-2 o.19 x 10-2 Tahle 4.1 Convergence Factors for Uniform Grid From the above table, we can see that the convergence factor p is 0( u: h3 ) when u,h 1. For the thick limit, we will let 1 = 1 ( u!h )P, since, from our previour section, the power p will influence the error distribution after relaxation. We will present results with p = 1, 2, 3 and 00. Table 4.2 depicts the convergence factors p are for a multigrid V(l,l) cycle with linear interpolation. Table 4.3 contains the convergence factors for a multigrid V(1,1) cycle with kinked interpolation. 103

PAGE 114


PAGE 115

will approximate the error on the next finer grid. From our discussion in the previous section, when p = 3 on the fine grid, there is relatively little absorption and the errors after relaxation will be nearly linear. But, as we go down to coarser grids, we will reach a level on which Uth will satisfy f = 1 -( )2 Thus, using linear interpolation on this particular level will adversely affect the overall convergence factor. From Table 4.3, when kinked interpolation is used, the multigrid V( 1,1) cycle has a convergence factor p of 0( u!h) for all levels of absorption. When there is no absorption, the multigrid V(l,l) with kinked interpolation has a convergence factor on the order of O((u!h)2). O"t 240 245 250 256 260 265 270 275 280 p 0.0082 0.0083 0.0084 0.0085 0.0085 0.0085 0.0084 0.0084 0.0083 Table 4.4 Convergence Factors for Kinked Interpolation,')' = 0.999 In Table 4.4, we show a range within which the worst convergence factor occurs when ')' = 0.999 and m = 256. In this case, the maximal pis 0.0085. In Table 4.5, we present convergence factors of the multigrid V(1,1) cycle with kinked interpolation for a nonuniform grid. We select Uthi = c X 102*'7i, where 'TJi is a random number between (-1,1). When c = 1, u,h ranges from 0.01 to 100. We use parameter c to shift from the thin to the thick limit. 105

PAGE 116

c 1 = 0.999999 1 = 0.999 10-4 0.12 X 10-10 o.12 x 10-10 10-3 0.32 X 10-7 0.32 X 10-7 10-2 0.29 x10-4 o.29 x 10-4 10-1 o.2o x 10-4 0.10 X lQ-2 lo-o 0.22 xlo-3 0.32 X 10-2 101 0.11 xlo-1 o.54 x 10-2 102 0.70 X 10-1 o.4o xlo-3 103 o.1s x 10-1 0.19 X 10-3 104 M5 x 10-1 0.17 X 10-4 105 o.19 x 10-2 0.20 x lo-s Table 4.5 Convergence Factors for Non uniform Grid From Table 4.51 we see that kinked interpolation is also suitable for nonuniform grids, or, equivalently, for nonconstant Ut. 4.6.2 DSA Convergence Factors In this section, we present a comparasion of the DSA ([2][5]) and the multigrid alogrithms with kinked interpolation. From Tables 4.6 to 4.13, we present convergence factors for a V(1,1) multigrid cycle and for a DSA cycle. Here, the slab is assumed to have physical thickness 2. Thus, Ut represents the width of the slab measured in the number of mean-free-paths. The tests were performed using Ss and a wide range of !Y't and m(the number of cells). The diagonals of these tables represent constant !Y'th. 106

PAGE 117

In the following tables, we say that the convergence factor is 0 if p < 10-11. For both algorithms, the convergence factors are roughly constant along diagonals. But for each fixed ; 1 the multigrid convergence factor will approach zero much faster than the DSA convergence factor as Ut goes to 00 in the thick limit and Ut goes to zero in the thin limit. In all cases, the multigrid convergence factors were superior to the DSA convergence factors. However, it is important to adjust for the relative amount of computational work required by each algorithm. Of course, as before, such measures will be machine dependent. On the Cray Y /MP, where these tests were performed, we compared times for N=64 and m=1024. Ten V(1,1) cycles required 93 seconds, which includes 4.2 seconds to obtain the parameter d on every level. The parameter d on the coarse grid will depend on relaxation on the fine grid. Thus, this setup process is sequential. One V(1,1) cycle required 9.3 seconds and one DSA cycle required 3.7 seconds, the ratio of which is 2.5. Tables 4.14 to 4.17 are the results of raising each entry in DSA tables to the power 2.5. This is a more fair comparison with the multigrid tables on a serial machine. When we compare the adjusted convergence factors of the DSA algorithm to the conver-gence factors of multigrid algorithm, we see that for 1 = 0.9999,0.999 and 0.9 the multigrid algorithm is faster than DSA in all regimes. When 1 = 0.99,m = 64 and u1 = 43 the DSA algorithm is slightly better. This is the region in which relaxation will not produce continuous kinked linear errors across two cells and, thus, kinked interpolation is not as accurate. On a parallel machine we expect the results to more heavily favor the multigrid algorithm. Both algorithms can be implemented with parallel complexity 0( log( m) ). In this context, however, we expect the times to be more nearly equal. The multigrid algorithm has been 107

PAGE 118

implemented on an Thinking Machines Inc. CM-200([19]) The fundamental step in the DSA algorithm, the transport sweep, has also been implenented on the CM-200. A form of cyclic reduction was used. A comparison of timings on this machine is available in [19). O"t 4-5 4-4 4-3 4-2 4-1 1 41 42 43 44 45 4s 47 4s 49 410 m = 16 m = 64 m = 256 m = 1024 0.0 0.0 0.0 0.0 o.32xl0-10 0.13 x10-10 o.1o x 10-10 o.1o x 10-10 0.11 x lo-s o.3o x lo-s o.19 x lo-s o.2o x lo-s o.12 x lo-s o.51 x lo-s 0.30 X 10-s o.3o x lo-s o.61 x 1o-4 0.29 X 10-4 o.13 x 10-4 o.11 x 10-4 o.57xlo-3 o.37xlo-a o.32 x 1o-3 o.32 x 1o-a 0.12 X 10-2 0.12 X 10-2 o.11 x 1o-2 o.11 x 10-2 0.64 x lo-a 0.12xl0-2 o.14x 10-2 o.13 x 10-2 o.2o x 10-2 o.Hxl0-2 o.25 x 10-2 0.71 X 10-2 o.32 x 10-2 0.38 X 10-2 o.25 x 10-2 o.39 x 1o-2 o.23 x 10-2 0.39 X 10-2 0.41 X 10-2 o.26 x 10-2 0.46 x lo-a o.23 x 10-2 oAo x 10-2 o.41 x 10-2 0.20 X 10-4 0.48 x lo-a o.23 x 10-2 o.1o x 1o-2 0.14Xl0-6 o.21 x 10-4 0.48 X 10-a o.23 x 10-2 o.28 x 10-9 o.15 x lo-s 0.21 X 10-4 o.48 x 1o-a 0.0 o.21x1o-9 o.15 x lo-s o.21 x 10-4 Table 4.6. Convergence Factors for Multigrid 1 = 0.9999 108

PAGE 119

In either setting, parallel or serial, a full multigrid algorithm can be implemented ([12]). This would provide a savings in some regions of the tables. Moreover, full multigrid provides a natural framework for adaptive grid refinement. u, 4-5 4-4 4-3 4-2 4-t 1 4t 42 43 44 45 46 47 48 4" 4to m = 16 m = 64 m = 256 m = 1024 0.19 X 10-2 0.19 xl0-2 o.19o x 10-2 o.19o x 10-2 o.75 xl0-2 o.75 x 10-2 o.75 xl0-2 0.76 X 10-2 o.28 x 1o-t 0.28 X 10-l o.zs x lo-t o.28 x 1o-t o.84 x 1o-t 0.83 X 10-t o.83 x lo-t o.83 x 1o-t 0.153 0.151 0.151 0.152 0.187 0.203 0.205 0.206 0.180 0.216 0.216 0.216 0.142 0.165 0.205 0.212 0.129 0.143 0.149 0.169 0.124 0.130 0.143 0.147 0.121 0.125 0.131 0.143 0.114 0.122 0.125 0.131 o.92 x 1o-t 0.114 0.122 0.125 o.51 x 1o-t o.92 x 1o-t 0.114 0.123 o.18 x 1o-t 0.51 X 10-t o.92 x lo-t 0.114 o.52 x 10-2 0.19 X 10-t 0.51 X 10-t o.92 x 1o-t Table 4.7 Convergence Factors for DSA r = 0.9999 109

PAGE 120

u, m = 16 m = 64 m = 256 m = 1024 4-5 0.0 0.0 0.0 0.0 4-4 0.32 X 10-10 0.13 X 10-10 0.10 X 10-10 0.10 X 10-10 4-3 o.71 x lo-s o.3o x lo-s 0.19 x lo-s 0.20 x lo-s 4-2 0.12 X 10-5 0.51 X 10-6 0.30 X 10-6 0.30 X 10-6 4-1 0.61 X 10-4 0.29 X 10-4 0.13 X 10-4 0.11 X 10-4 1 o.57xlo-3 0.37 X 103 o.32x10-3 0.31 X 10-3 4 0.12 X 10-2 0.12Xl0-2 0.11 X 10-2 0.13 X 10-1 42 o.31 x 10-2 0.36 X 10-2 o.54xlo-2 0.76 X 10-2 43 0.57 X 10-2 0.79X 10-2 0.73 X 10-2 o.11 x 10-1 44 0.57X 10-2 0.69 X 10-2 o.85x1o-2 o.78 x 10-2 45 o.68 x 10-3 0.58 X 10-3 0.71 X 10-2 0.86 X 10-2 46 0.93 X 10-5 0.71 X 10-3 0.58 X 10-2 0.71 X 10-2 47 0.25X 10-7 0.95 X 10-5 0.72 X 10-3 0.58 X 10-2 4s o.33 x 10-10 0.25 X 10-7 o.94x1o-5 0.72 X 10-3 4" 0.0 0.32Xl0-10 0.25 X 10-7 o.94x1o-5 410 0.0 0.0 0.32 X 10-10 0.25 X 10-7 Table 4.8 Convergence Factors for Multigrid [ = 0.999 110

PAGE 121

O"t m = 16 m = 64 m = 256 m = 1024 4-5 0.19 X 10-2 0.19 x10-2 0.190 X 10-2 o.19o x 10-2 4-4 o.75 x 10-2 0.75 X 10-2 0.75 X 10-2 0.75 x 10-2 4-3 0.28 X 10-1 0.28 X 10-1 o.28 x 10-1 o.28 x w-1 4-2 o.83 x 10-1 0.83 X 10-1 o.83 x 10-1 0.84 x 10-1 4-1 0.153 0.150 0.150 0.151 1 0.186 0.203 0.204 0.204 4 0.179 0.216 0.216 0.216 42 0.142 0.165 0.204 0.210 43 0.127 0.142 0.149 0.163 44 0.118 0.129 0.143 0.156 45 0.102 0.119 0.130 0.144 46 o.66 x 10-1 0.102 0.119 0.130 47 o.21 x 10-1 0.66 X 10-1 0.102 0.119 4" 0.82 X 10-2 0.27 X 10-1 o.66 x 10-1 0.102 49 o.21 x 10-2 0.82 X 10-2 o.21x1o-1 0.66 X 10-1 410 o.54x1o-3 0.21 x10-2 o.82 x 10-2 o.21x 10-1 Table 4.9 Convergence Factors for DSA 1 = 0.999 111

PAGE 122

u, m= 16 m = 64 m = 256 m = 1024 4-5 0.0 0.0 0.0 0.0 4-4 0.32 x 10-10 o.13 x l0-10 0.10 xl0-10 o.1o x 10-10 4-3 o.n x lo-s o.3o x lo-s o.19 x lo-s 0.20 x lo-s 4-2 o.12 x 10-5 0.50 xl0-6 o.3o x 10-6 o.29 x 1o-s 4-1 o.6o x 10-4 0.29 xl0-4 0.12 X 10-4 o.n x 10-4 1 o.59 x 1o-3 0.41 xl0-3 0.35 X 10-3 0.35 X 10-3 4 0.250 X 10-2 o.3o x 10-2 o.31 x 10-2 0.32 X 10-2 42 0.34 X 10-2 0.87 xl0-2 o.9o x 10-2 o.98xlo-2 43 0.80 X 10-2 0.44 X 10-2 o.99xlo-2 o.1o x 10-1 44 0.44 X 10-3 0.84 X 10-2 0.44 X 10-2 0.10 X 10-2 45 0.21 X 10-5 0.45 X 10-3 o.84xlo-2 0.45 X 10-2 46 o.32 x lo-s o.2o x 10-5 0.45xlo-3 0.84X 10-2 47 o.3o x lo-11 0.31 x lo-s o.2ox1o-5 o.45 x 1o-3 4s 0.0 o.3oxl0-11 o.31 x lo-s o.2ox 1o-5 4" 0.0 0.0 o.3o x lo-11 o.31 x lo-s 410 0.0 0.0 0.0 0.30 X 10-11 Table 4.10 Convergence Factors for Multigrid 1 = 0.99 112

PAGE 123

O't m = 16 m = 64 m = 256 m = 1024 4-6 o.19 x 10-2 0.19 X 10-2 o.19 x 10-2 o.19 x 10-2 4-4 o.74 x 10-2 o.74 x 10-2 o.74 x 10-2 o.74 x 10-2 4-3 0.27 X 10-1 0.27 X 10-1 0.27 X 10-1 0.28 x10-1 4-2 o.83 x 10-1 0.82 X 10-1 0.82 X 10-1 0.84 X 10-1 4-1 0.151 0.149 0.149 0.151 1 0.183 0.200 0.202 0.204 4 0.175 0.213 0.213 0.217 42 0.137 0.160 0.201 0.213 43 0.112 0.137 0.146 0.183 44 o.8o x 10-1 0.114 0.137 0.150 45 o.38x1o-1 o.8o x 1o-1 0.114 0.137 4" o.12 x 10-1 0.38 X 10-1 o.sox1o-1 0.114 47 o.34x 10-2 o.12x1o-1 o.38 x 10-1 o.8o x 10-1 4" o.86 x 10-3 o.34x1o-2 0.12 X 10-1 0.38 X 10-1 4" 0.22 X 10-3 0.86 X 10-3 0.34X 10-2 o.12 x 10-1 410 o.54x1o-4 o.22 x 1o-3 o.86 x 10-3 o.34x1o-2 Table 4.11 Convergence Factors for DSA 1 = 0.99 113

PAGE 124

(J"t m = 16 m = 64 m= 256 m = 1024 4-5 0.0 0.0 0.0 0.0 4-4 0.30 X 10-10 0.12 X 10-10 0.10 xl0-10 0.10 X 10-s 4-3 o.67 x lo-s 0.21 x lo-s 0.19 x10-s 0.19 X 10-s 4-2 0.12 x lo-s 0.46 X 10-s o.28 x 1o-s o.28 x lo-s 4-1 o.58 x 1o- o.3o xlo-4 o.12x1o- 0.11 X 10-4 1 0.60 X 10-3 0.45 X 10-3 o.34xlo-3 0.32 X 10-3 4 o.69 x 1o-3 o.18 x 10-2 o.16 x 10-2 0.15 X 10-2 42 0.33 X 10-2 0.72 X 10-3 0.22 X 10-2 0.20 X 10-2 43 o.9o x 1o- 0.35 X 10-2 o.75x10-3 0.23 X lQ-2 44 o.23 x lo-s o.91 x 10-4 o.35 x 10-2 o.76 x 10-3 45 o.28 x 10-9 o.22x1o-s 0.91 xl0-4 0.35X 10-2 4s 0.0 o.21x1o-9 o.22x1o-s 0.90 X 10-4 47 0.0 0.0 0.27X 10-9 0.22 X 10-6 4s 0.0 0.0 0.0 0.27 X 10-9 49 0.0 0.0 0.0 0.0 410 0.0 0.0 0.0 0.0 Table 4.12 Convergence Factors for Multigrid 1 = 0.9 114

PAGE 125

u, m = 16 m = 64 m = 256 m = 1024 4-5 0.11 xlo-2 0.17 X 10-2 0.11 x1o-2 o.19o x 10-2 4-4 0.67 X 10-2 o.67 x 10-2 o.67 xlo-2 0.67 X 10-2 4-3 o.25 x 10-1 o.25 x 10-1 0.25 X 10-1 o.25 x 10-1 4-2 0.73 X 10-1 o.73 x 10-1 0.73 X 10-1 0.73 X 10-1 4-1 0.135 0.133 0.133 0.137 1 0.159 0.176 0.177 0.184 4 0.139 0.186 0.188 0.197 42 0.98 X 10-1 0.129 0.174 0.185 43 0.50 X 10-1 0.96 X 10-1 0.120 0.171 44 o.11x1o-1 0.50X 10-1 0.96 X 10-1 0.121 45 0.48 X 10-2 o.11x1o-1 0.50 X 10-1 0.96 X 10-1 46 o.12x1o-2 0.48 X 10-2 o.11x10-1 o.5ox10-1 47 o.31 x 10-3 0.12 X 10-2 0.48 X 10-2 0.17X 10-1 48 o.79 x 1o-4 o.31 x 1o-3 0.12 X 10-2 0.48 X I0-2 49 0.20 X 10-4 o.79 x 1o-4 o.31 x 10-3 0.12 X 10-2 410 o.49 x 10-5 0.20 X 10-4 0.79 X 10-4 0.31 X 10-3 Table 4.13 Convergence Factors for DSA 'Y = 0.9 115

PAGE 126

O't m = 16 m = 64 m = 256 m = 1024 4-S o.16 x lo-s o.16 x lo-s o.16 x lo-s o.16 x lo-s 4-4 0.49 x lo-s 0.49 X 10-s o.49 x 1o-s 0.49 xlo-s 4-S o.13 x lo-s 0.13 x lo-s o.13 x 10-1 o.13 x lo-s 4-2 o.2o x 10-2 o.2o x 10-2 0.20 x10-2 o.2o x 10-2 4-1 o.92x 10-2 0.92 x10-2 o.92 x 10-2 o.92 x 10-2 1 o.15 x 10-1 o.19 x 10-1 o.19 x 10-1 o.19 x 10-1 41 o.14xlo-1 o.22x 10-1 o.22x1o-1 o.22 x 10-1 42 o.76xlo-2 o.n x 10-1 o.19xlo-1 0.21 X 10-1 4s o.6oxlo-2 o.11x 10-2 o.s6xlo-1 o.12 x 10-1 44 o.54x 10-2 o.6o x 10-2 o.77xlo-2 o.s3xlo-1 4s o.51 x 1o-2 o.55 x 10-2 o.62 x 10-2 o.11x 10-2 4s oA4x 10-2 o.52 x 10-2 0.55 X 10-2 o.62 x 10-2 47 o.26xlo-2 oA4 x 1o- o.52 x 10-2 o.55 x 10-2 48 o.59xlo-s o.26 x 10-2 0.44xlo-2 o.52 x 10-2 49 o.43 x 1o- 0.59 x lo-s o.26 x 10-2 0.44xlo-2 410 o.19 x lo-s o.43 x 1o- o.59 x 10-s o.26 x 10-2 Table 4.14 Adjusted Convergence Factors for DSA 1 = 0.9999 116

PAGE 127

u, m = 16 m= 64 m = 256 m = 1024 4-5 o.16 x 1o-s 0.16 X 10-s 0.16 x lo-s o.16 x lo-s 4-4 o.49 x 1o- 0.49 x lo-s 0.49 xlO-s o.49 x 1o- 4-3 o.13 x 10-3 o.13 x 10-3 0.13 X 10-3 o.13 x 10-3 4-2 0.20 xl0-2 0.20 X 10-2 o.2o x 10-2 o.2o x 10-2 4-1 o.92 x 10-2 o.87xlo-2 o.87xlo-2 o.87xlo-2 1 o.15x 10-1 o.19x 10-1 0.19 X 10-1 o.19 x 10-1 4 o.Hx 10-1 o.22 x 10-1 o.22 x 10-1 o.22 x 10-1 42 o.76 x 10-2 o.11 x 10-1 o.19 x 10-1 o.22 x 10-1 43 o.57xlo-2 o.76 x 1o-2 o.86 x 10-2 o.11 x 10-1 44 o.48 x 10-2 o.59x 10-2 o.77xlo-2 0.96 X 10-2 4s 0.33 X 10-2 o.49 x 10-2 0.61 X 10-2 o.79 x 10-2 4s o.12 x 10-2 0.33 X 10-2 0.49 X 10-2 o.61 x 10-2 47 o.12 x 1o-3 0.11 X 10-2 o.33 x 10-2 o.49 x 1o-2 48 0.61 x lo-s o.12 x 1o-3 o.11 x 10-2 o.33 x 10-2 4" o.2ox1o-s o.61 x lo-s 0.12 X 10-3 o.11 x 10-2 410 o.68xlo-a o.2ox1o-s 0.61 x lo-s o.12 x 10-3 Table 4.15 Adjusted Convergence Factors for DSA 1 = 0.999 117

PAGE 128

σ_t     m = 16     m = 64     m = 256    m = 1024
4^-5    0.16E-6    0.16E-6    0.16E-6    0.16E-6
4^-4    0.49E-5    0.49E-5    0.49E-5    0.49E-5
4^-3    0.13E-3    0.13E-3    0.13E-3    0.13E-3
4^-2    0.20E-2    0.20E-2    0.20E-2    0.20E-2
4^-1    0.92E-2    0.87E-2    0.87E-2    0.87E-2
1       0.15E-1    0.19E-1    0.19E-1    0.19E-1
4       0.14E-1    0.22E-1    0.22E-1    0.22E-1
4^2     0.76E-2    0.11E-1    0.19E-1    0.22E-1
4^3     0.57E-2    0.76E-2    0.86E-2    0.11E-1
4^4     0.48E-2    0.59E-2    0.77E-2    0.96E-2
4^5     0.33E-2    0.49E-2    0.61E-2    0.79E-2
4^6     0.12E-2    0.33E-2    0.49E-2    0.61E-2
4^7     0.12E-3    0.11E-2    0.33E-2    0.49E-2
4^8     0.61E-5    0.12E-3    0.11E-2    0.33E-2
4^9     0.20E-6    0.61E-5    0.12E-3    0.11E-2
4^10    0.68E-8    0.20E-6    0.61E-5    0.12E-3

Table 4.15  Adjusted Convergence Factors for DSA, γ = 0.999

PAGE 129

σ_t     m = 16     m = 64     m = 256    m = 1024
4^-5    0.16E-6    0.16E-6    0.16E-6    0.16E-6
4^-4    0.49E-5    0.49E-5    0.49E-5    0.49E-5
4^-3    0.12E-3    0.12E-3    0.12E-3    0.12E-3
4^-2    0.20E-2    0.19E-2    0.19E-2    0.20E-2
4^-1    0.89E-2    0.86E-2    0.86E-2    0.89E-2
1       0.14E-1    0.18E-1    0.18E-1    0.19E-1
4       0.13E-1    0.21E-1    0.21E-1    0.22E-1
4^2     0.69E-2    0.11E-1    0.18E-1    0.21E-1
4^3     0.42E-2    0.69E-2    0.81E-2    0.14E-1
4^4     0.18E-2    0.44E-2    0.69E-2    0.87E-2
4^5     0.28E-3    0.18E-2    0.44E-2    0.69E-2
4^6     0.16E-4    0.28E-3    0.18E-2    0.44E-2
4^7     0.67E-6    0.16E-4    0.28E-3    0.18E-2
4^8     0.22E-7    0.67E-6    0.16E-4    0.28E-3
4^9     0.72E-9    0.22E-7    0.67E-6    0.16E-4
4^10    0.21E-10   0.72E-9    0.22E-7    0.67E-6

Table 4.16  Adjusted Convergence Factors for DSA, γ = 0.99

PAGE 130

CHAPTER 5 SUMMARY When there is no absorption, we conclude from our our numerical and analytical results that: For S2, two-cell, red-black, block JL-line relaxation is a good smoother that leaves errors in the range of interpolation after one relaxation. Thus, one multigrid V(l, 0) cycle is an exact solver for the Sz problem. For S N with N > 2, the errors at two neighboring cells 2i -1 and 2i after one such relaxation will be in the range of interpolation up to 0( max( -h1 _Lh ) ) when O"t 2,-1 O't 2l 1. The multigrid algorithm is an efficient solver with convergence factor of 0( max (cr2\2 )) k={l, ... ,m) t k for the thick limit and 0( max (u;h%)) for the thin limit with a maximum of k=(l, ... ,m) p = 0.0098 occurring around cr,h = 0.01 for the uniform grid. The multigrid algorithm performs equally well for uniform and nonuniform grids. In the thick limit, a single V(l,l) yields nearly an exact solution. The multigrid algorithm is much more efficient than DSA in all regimes. Relaxation as implemented in a red-black ordering is highly parallelizable.

PAGE 131

The above results assumed no absorption. The problem is easy, however, to solve if the absorption is sufficiently large, that is, if; = !!.L is bounded away from unity. In this case, Ut smoothing alone is sufficient for a fast solution. With no absorption, ; = 1, the algorithm presented above provides nearly an exact solver. However, for the case in which 1 = 1in the thick limit, the above algorithm yields a convergence factor that does not tend to zero in the thick limit. A minor adaptation to the algorithm has been developed that treats this special case and provides a convergence factor that does tend to zero in the thick limit for all levels of absorption. With absorption (I < 1), the error after relaxation will be kinked linear. The severity of deviation from linearity of the error after relaxation is determined by the values J, u,h and p in 1 = 1 -( u!h )P. In the thick limit, the error after relaxation will be independent of angle up to accuracy h). Relaxation induced kinked interpolation is reasonably effective Kinked interpolation can be obtained by the relaxation operator on each level to ensure that each level will have a good coarse grid approximation. The error reduction factor p for a multigrid V(1,1) cycle is p = O((u,h)3 ) for the thin limit and p = 0( u!h) for the thick limit when 1 = 1 ( u!h )P and 1 < p < 3. When p is smaller than 1, there will be more absorption and the multigrid performance will improve. When p is more than 3, there will be very little absorption and the multigrid performance will be close to that of the performance when 1 = 1. 121

PAGE 132

BIBLIOGRAPHY [1] M. L. Adams and W. R. Martin, Diffusion-synthetic acceleration of discontinuous finit element transport iterations, Nucl. Sci. Eng., 111, p. 145-167, 1992. [2] R. E. Alcouffe, Diffusion synthetic acceleration methods for the diamond-differenced discrete-ordinates equations, Nucl. Sci. Eng., 64, p. 344, 1977. [3] A. Brandt, Multi-level adaptive solution to boundary-value problems, Math. Comp., 31, 333-390, 1977. [4] V. Faber and T. A. Manteuffel, Neutron transport from the viewpoint of linear algebra, Transport Theory, Invariant Imbedding and Integral Equations{Nelson, Faber, Manteuf fel, Seth and White, eds.) Lecture Notes in Pure and Applied Mathematics, 115, p. 37-61, Marcel-Dekker, April, 1989. [5] E. W. Larsen, Unconditionally stable diffusion-synthetic acceleration methods for the slab geometry discrete ordinates equations, Nucl. Sci. Eng., 82, p. 47, 1982. [6] E. W. Larsen, Transport acceleration methods as two-level multigrid algorithms, Operator Theory: Advances and Applications, vol. 51, p. 34-47, 1991. [7] E.W.Larsen and J .E. Morel, Asymptotic solutions of numerical transport problems in optically thick diffusive regimes II, J. Comp. Phys., 83, p. 212, 1989.

PAGE 133

[8] E. E. Lewis and W. F. Miller,Jr., Computational Methods of Neutron Transport, 1984. [9] T. Manteuffel, S. McCormick, J. Morel, S. Oliveira and G. Yang, Parallel multigrid methods for transport equations1 Proceeding of Copper Mountain Conference on Iterative Methods, April 9 14, 1992. [10] T. Manteuffel, S. McCormick, J. Morel, S. Oliveira and G. Yang, A parallel version of a multigrid algorithm for isotropic transport equations, submitted to SIAM Journal of Sci. Comp., May, 1992. [11] T. Manteuffel, S. McCormick, J. Morel, S. Oliveira and G. Yang, A fast multigrid algorithm for isotropic transport problems, submitted to SIAM Journal of Sci. Comp., Oct., 1992. [12] S. F. McCormick, Multilevel Adaptive Methods for Partial Differential Equations, SIAM Press, 1989. [13] J. E. Morel and E. W. Larsen, A multiple balance approach for differencing the S N equations, Nucl. Sci. Eng., 105, P. 1, 1990. [14] J. E. Morel and T. A. Manteuffel, An angular multigrid acceleration technique for the SN equations with highly forward-peaked scattering, Nucl. Sci. Eng., 107, p. 330,1991. [15] P. F. Nowak, A Coupled Synthetic and Multigrid Acceleration Method for Two-Dimensional Transport Calculations, PhD thesis, Department of Nuclear Engineering, University of Michigan, 1988. 123

PAGE 134

[16] P. F. Nowak, A multigrid method for SN calculations in z-y geometry, Transactions of the American Nuclear Society, vol. 56, p. 291-292, 1988. [17] P. F. Nowak and E. W. Larsen, Multigrid methods for SN problems, Transactions of the American Nuclear Society, vol. 57, p. 355-356, 1987. [18] P. F. Nowak, J. E. Morel and D. R. Harris, A multigrid acceleration method for one dimensional SN equations with anisotropic scattering, Nucl. Sci. Eng., 102, p. 1, 1989. [19] S. Oliveira, Parallel multigrid methods for transport equations, PhD thesis, Department of Mathematics, University of Colorado at Denver, May, 1993. 124