DOMAIN DECOMPOSITION PRECONDITIONERS
FOR THIN RECTANGULAR
p-VERSION FINITE ELEMENTS
by
Gregory Scott Lett
B.A., University of Colorado at Denver, 1982
A thesis submitted to the
Faculty of the Graduate School of the
University of Colorado in partial fulfillment
of the requirements for the degree of
Doctor of Philosophy
Department of Mathematics
1990
© 1990 by G. Scott Lett
All rights reserved.
This thesis for the Doctor of Philosophy
degree by
Gregory Scott Lett
has been approved for the
Department of
Mathematics
by
William L. Briggs
Charbel Farhat
Thomas F. Russell
Roland A. Sweet
Date
Lett, Gregory Scott (Ph.D., Applied Mathematics)
Thesis directed by Professor Jan Mandel
The use of the p-version finite element method to solve an elliptic partial differential equation results in a very large linear system which must be solved numerically. We consider iterative solution, using the conjugate gradient method, preconditioned by domain decomposition preconditioners, and analyze the condition number of two recently developed such preconditioners for problems in ℜ².
The p-version finite element method achieves accuracy by increasing the degree p of the elements, rather than by decreasing their diameter, as in the more familiar h-version. Thus, there are many degrees of freedom on each element. The preconditioners considered here treat each element as a subdomain, allowing much of the work of generating and solving the preconditioner to be performed on each element independently.
Most published analyses of preconditioners for the finite element method do not consider the case of elements with high aspect ratios, a case which happens often in practice, and consider only elements which are nearly regular, e.g., square. We show that, for the first preconditioner considered here, the condition number grows with the square of the aspect ratio, and thus its efficacy degrades significantly. We also show that, for the second preconditioner, which adapts to elements of different aspect ratios, the condition number is bounded independently of the aspect ratio. Thus the second preconditioner can be more effective than the first for the case of very thin elements. We accomplish this analysis using energy seminorms which are defined especially for thin subdomains, and basis polynomials which are orthogonal with respect to these seminorms. This approach gives a greater insight into the behaviour of these methods than previously published methods.
The form and content of this abstract are approved. I recommend its publication.
Signed
Jan Mandel
For my wife,
Elizabeth
CONTENTS
1 INTRODUCTION 1
2 PRELIMINARIES 6
2.1 Function Spaces on Thin Domains....................... 6
2.1.1 Sobolev Spaces...................................... 6
2.1.2 Invariance and Equivalence of Seminorms............. 8
2.1.3 Short Side Trace Theorem............................ 9
2.1.4 Polynomial Spaces on K_ε............................ 10
2.2 The Discrete Form of the Model Problem................ 12
2.3 The Basis (Shape) Functions for the p-Version Finite Element Method ... 13
2.4 The Preconditioned Conjugate Gradient Algorithm....... 15
2.4.1 Spectral Bounds on the Error for PCG................ 16
3 THE BILINEAR PRECONDITIONER 18
3.1 Preconditioning by Subspace Decomposition.............. 18
3.2 Formulation of the Bilinear Preconditioner............. 20
3.3 The First Condition Number Bound....................... 22
4 THE ADAPTIVE-LINEAR PRECONDITIONER 26
4.1 Formulation..................................... 26
4.2 The First Condition Number Bound................ 27
4.3 An Improved Condition Number Bound.............. 31
5 IMPLEMENTATION, VARIATIONS AND CONCLUSIONS 45
List of Figures
2.1 Thin domain K_ε............................................... 8
3.1 Thin element K................................................... 20
3.2 Condition numbers for linear preconditioning................... 25
4.1 Condition numbers for improved preconditioning................. 30
CHAPTER 1
INTRODUCTION
Elliptic partial differential equations appear frequently in physical models in science and engineering. Frequently, we must make use of digital computers to approximate their solutions. The numerical solution of partial differential equations requires two main choices. First, the method of discretizing the equation must be chosen, such as finite difference, finite element, or spectral methods. Second, the method of solving the resulting discrete system of equations must be selected. This investigation deals primarily with the second aspect of the problem. We assume that the discretization technique is the p-version finite element method, and concentrate on analyzing some recently formulated preconditioners for iteratively solving the resulting linear systems.
Our model problem will be the linear boundary value problem
−Δu = f in Ω,
u = 0 on ∂Ω,
where Ω is a bounded, piecewise smooth domain in ℜ² with boundary ∂Ω. The standard weak form of this problem is
u ∈ V : −∫∫_Ω v Δu dx dy = ∫∫_Ω f v dx dy, ∀v ∈ V,
where V = H¹₀(Ω). The solution to the weak form is known to be the same as that for the original boundary value problem, when u is sufficiently smooth. It is useful to be able to change this problem, using Green's Theorem, to the symmetric form:
u ∈ V : ∫∫_Ω (∇v)ᵀ∇u dx dy = ∫∫_Ω f v dx dy, ∀v ∈ V. (1.1)
In the Galerkin finite element method [49] we choose an n-dimensional subspace, S, of V, and approximate the weak form (1.1) as
u ∈ S : a(u,v) = f(v), ∀v ∈ S, (1.2)
where
a(u,v) = ∫∫_Ω (∇v)ᵀ∇u dx dy,
f(v) = ∫∫_Ω f v dx dy.
Henceforth, the bilinear form a(·,·) will be understood to be defined on S × S, and the functional, f, will be defined on S. Once a basis, {φ₁, φ₂, ..., φₙ}, for S has been selected, we transform problem (1.2) into a linear system of equations, Ax = b, where A = (a_ij), a_ij = a(φ_i, φ_j). We will study the solution of this system by the preconditioned conjugate gradient method (PCG). In particular, we shall analyze some effective preconditioners for this method. A preconditioner can be defined as a second symmetric bilinear form, c(·,·) on S × S, such that a system of the form c(w, φ_j) = f(φ_j), j = 1, ..., n, is solved at each iteration of PCG.
The first domain decomposition method has been credited to Schwarz [45], who used the convergence of his algorithm to prove the existence of solutions to elliptic partial differential equations on domains which were unions of simpler subdomains. The idea of domain decomposition algorithms today is to approximate the solution to a large, ill-conditioned system by solving smaller systems on subdomains of the original problem. The interest in these methods flowered in the 1980s with the development of parallel computers, and many new algorithms have been developed and analyzed. Many of the algorithms borrow ideas from other methods, including substructuring [11, 12, 13, 15, 17, 18, 19] and multigrid methods [55, 54, 37, 38]. The methods appear to be divided into two basic categories. The first category, of the Schwarz type, involves solutions to problems on overlapping subdomains. The second category involves the solution to problems on nonoverlapping subdomains, but requires the solution to a small global system. Both types of methods are used, and hybrid methods have been developed [13, 47]. The use of domain decomposition techniques to solve problems where refinement is used to improve the accuracy of the solution was an important development (see, e.g., [14, 37, 38]), which improved the usability of locally refined finite difference and finite element meshes. The p-version finite element method, considered in our work here, is distinguished from the h-version finite element method and finite difference methods only in the way in which refinement is achieved. The p-version increases the degree p of the finite elements, rather than increasing the number of elements. Thus, the idea of using one (or a few) p-version elements as subdomains of domain decomposition algorithms seems natural.
It is known experimentally that the effectiveness of these preconditioners is adversely affected by the aspect ratio of the subdomains [47, 34]. Few theoretical results have been published, except in the case of the Schwarz-type methods, for which it is well established that the shape of the overlapping regions determines the rate of convergence of the method. This paper gives the results of previous work [36], and improves on the published theory, giving some results which explain the behavior of certain domain decomposition methods for the practical case of high aspect ratio subdomains.
In Chapter 2, we shall define the space S and the basis (or shape) functions used in the p-version finite element method, describe the PCG algorithm, and state several useful properties of the algorithm. We will discuss the desirable properties of a good preconditioner. We shall also quote various results from the literature which will be needed later.
In Chapter 3, we shall describe a recently introduced domain decomposition preconditioner for the PCG algorithm, which treats each p-version element as a subdomain. We shall show that the performance of this preconditioner degrades under certain very realistic assumptions, namely when the subdomains have high aspect ratios. It will be shown, as was done in [36], that the condition number degrades as the square of the aspect ratio.
In Chapter 4, we shall modify the preconditioner from Chapter 3, and show that the new preconditioner has superior spectral properties to the original. This preconditioner was introduced in [36], where we showed that the condition number degrades no faster than the aspect ratio. Here we will show that the condition number is in fact bounded independently of the aspect ratio.
CHAPTER 2
PRELIMINARIES
In this chapter we will introduce some notation, define our problem, define the Preconditioned Conjugate Gradient Algorithm, and state some results of functional analysis on thin domains.
2.1 Function Spaces on Thin Domains
In this section, we define the norms and function spaces used in this paper, and state several propositions concerning function spaces on a thin element.
2.1.1 Sobolev Spaces
We first define some norms and seminorms. We shall work with these explicitly
given seminorms and norms for various domains (cf. [53, 39]).
On a domain O ⊂ ℜ², define the usual L₂-norm of a function u on O, ‖u‖_{L₂(O)}, by
‖u‖²_{L₂(O)} = ∫∫_O |u|² dx dy.
On a segment σ of ∂O, define
‖u‖²_{L₂(σ)} = ∫_σ |u|² ds.
Now, consider the Sobolev seminorm [53]
|u|_{1,O} = ( ∫∫_O |∇u|² dx dy )^{1/2},
and the trace seminorm
|u|_{1/2,∂O} = ( ∫_{∂O} ∫_{∂O} |u(x) − u(y)|² / |x − y|² ds(x) ds(y) )^{1/2}.
The corresponding norms are given by
‖u‖²_{1,O} = ‖u‖²_{L₂(O)} + |u|²_{1,O}, (2.1)
‖u‖²_{1/2,∂O} = ‖u‖²_{L₂(∂O)} + |u|²_{1/2,∂O}. (2.2)
We further denote
‖u‖_{k,∞,O} = Σ_{|α|≤k} ess sup_O |Dᵅu|,
and
|u|_{k,∞,O} = Σ_{|α|=k} ess sup_O |Dᵅu|,
and, finally, for a segment σ of ∂O,
|v|_{1/2,σ} = |ṽ|_{1/2,∂O},
where ṽ is defined to be equal to v on σ and extended by zero to all of ∂O.
We now define several spaces of functions. L₂(O) = H⁰(O) is the space of (equivalence classes of) functions u for which ‖u‖_{L₂(O)} < ∞. The space H^l(O)
Figure 2.1: Thin domain K_ε = (−1,1) × (−ε,ε), with vertices v₁ = (−1, ε), v₂ = (1, ε), v₃ = (1, −ε), v₄ = (−1, −ε).
is the space of (equivalence classes of) functions u for which ‖u‖_{l,O} < ∞, for l = 0, 1/2, 1. The space C₀^∞(O) consists of infinitely differentiable functions with support contained in a compact subdomain of O. We denote by H¹₀(O) the completion of C₀^∞(O) with respect to the H¹(O)-norm.
2.1.2 Invariance and Equivalence of Seminorms
Consider the family of domains K_ε = (−1,1) × (−ε,ε), 0 < ε ≤ 1 (see Fig. 2.1). We shall use the associated family of bijections
F_ε : K₁ → K_ε, F_ε(x,y) = (x, εy).
We will also occasionally use the domains K_{εε} = (−ε,ε) × (−ε,ε) and the corresponding mappings
F_{εε} : K₁ → K_{εε}, F_{εε}(x,y) = (εx, εy).
Let s_{1,ε} = {−1} × (−ε,ε) be one short side of K_ε, as in Fig. 2.1. By a simple computation, we have for any u ∈ H¹(K_ε) the invariance of seminorms,
|u ∘ F_{εε}|_{1,K₁} = |u|_{1,K_{εε}}, (2.3)
|u ∘ F_ε|_{1/2,s_{1,1}} = |u|_{1/2,s_{1,ε}}. (2.4)
Because
∫∫_{K_ε} ( (∂u/∂x)² + (∂u/∂y)² ) dx dy = ∫∫_{K₁} ( ε (∂v/∂x)² + (1/ε) (∂v/∂y)² ) dx dy, v = u ∘ F_ε,
we have the equivalence of seminorms,
ε |u ∘ F_ε|²_{1,K₁} ≤ |u|²_{1,K_ε} ≤ ε^{−1} |u ∘ F_ε|²_{1,K₁}. (2.5)
It is easy to see that the constants in (2.5) are sharp by considering the functions u(x,y) = x and u(x,y) = y. Finally, note that
∫∫_{K_ε} u = ε ∫∫_{K₁} u ∘ F_ε, ∫∫_{K_{εε}} u = ε² ∫∫_{K₁} u ∘ F_{εε}. (2.6)
2.1.3 Short Side Trace Theorem
With the definitions (2.1) and (2.2), we get a version of the trace theorem for the family of domains K_ε.

Theorem 2.1 [36] There is a constant C independent of ε such that for all ε ∈ (0,1],
|u|_{1/2,s_{1,ε}} ≤ C |u|_{1,K_ε}, ∀u ∈ H¹(K_ε).

Proof: Let u ∈ H¹(K_ε); since both sides of the inequality are unchanged by adding a constant to u, we may assume ∫∫_{K_{εε}} u = 0. Put v = u ∘ F_{εε}. Then v ∈ H¹(K₁), and, by (2.6), ∫∫_{K₁} v = 0. From the trace theorem (see, e.g., [53]),
|v|_{1/2,s_{1,1}} ≤ ‖v‖_{1/2,∂K₁} ≤ C‖v‖_{1,K₁} ≤ C|v|_{1,K₁},
and using (2.3) and (2.4) we get
|u|_{1/2,s_{1,ε}} ≤ C|u|_{1,K_{εε}}.
It remains to note that |u|_{1,K_{εε}} ≤ |u|_{1,K_ε}. □
2.1.4 Polynomial Spaces on K_ε
Let P_p(K_ε) be the space of all real tensor product polynomials of degree p on K_ε; that is, u ∈ P_p(K_ε) if and only if
u(x,y) = Σ_{i,j=0}^{p} c_{ij} x^i y^j, c_{ij} ∈ ℜ.
The following proposition, from [36], is concerned with bounding the |·|_{1,K_ε} seminorm of a bilinear function by its vertex values.

Lemma 2.1 Let u₀ ∈ P₁(K_ε), and let v₁, v₂, v₃, v₄ be the vertices of K_ε as in Fig. 2.1. Then
|u₀|_{1,K_ε} ≤ C ε^{−1/2} ( |u₀(v₁) − u₀(v₄)| + |u₀(v₂) − u₀(v₃)| ) + C ε^{1/2} Σ_{i=1}^{4} |u₀(v_i)|,
where C does not depend on u₀ or ε ∈ (0,1].
In [7, Corollary 6.3], it was proved that
‖u‖_{0,∞,∂K₁} ≤ C(1 + log^{1/2} p) ‖u‖_{1,K₁}, ∀u ∈ P_p(K₁). (2.7)
This result was extended to a family of thin domains, again in [36].

Theorem 2.2 There is a constant C independent of ε such that
‖u‖_{0,∞,∂K_ε} ≤ C ε^{−1/2} (1 + log^{1/2} p) |u|_{1,K_ε}
for all u ∈ P_p(K_ε) such that ∫∫_{K_ε} u = 0, and all ε ∈ (0,1].

Proof: Write u = v ∘ F_ε^{−1}, and use (2.5), (2.6), and (2.7). □
On the short sides s_{1,ε} and s_{3,ε} we have the following bound, independent of ε.

Theorem 2.3 [36] There is a constant C independent of ε such that
max_{r,s ∈ s_{1,ε}} |u(r) − u(s)| ≤ C(1 + log^{1/2} p) |u|_{1,K_ε}, ∀u ∈ P_p(K_ε), (2.8)
for all ε ∈ (0,1] and all p.

Proof: Let ε = 1. Because the inequality (2.8) is invariant to adding a constant function to u, suppose without loss of generality that ∫∫_{K₁} u = 0; then (2.8) follows from (2.7). For a general ε ∈ (0,1], (2.3) and (2.4) give
max_{r,s ∈ s_{1,ε}} |u(r) − u(s)| ≤ C(1 + log^{1/2} p) |u|_{1,K_{εε}},
and it remains to note that |u|_{1,K_{εε}} ≤ |u|_{1,K_ε}. □
We will also need the following discrete extension theorem.

Theorem 2.4 [36] Let u ∈ P_p(K_ε) be such that u(v₁) = u(v₄) = 0. Then there exists a function v ∈ P_p(K_ε) such that v = u on s_{1,ε}, v = 0 on ∂K_ε \ s_{1,ε}, and
|v|_{1,K_ε} ≤ C ε^{−1/2} ( |u|_{1/2,s_{1,ε}} + (1 + log^{1/2} p) ‖u‖_{0,∞,s_{1,ε}} ) (2.9)
for all ε ∈ (0,1], and the constant C does not depend on ε or u.

Proof: Let ε = 1. It was proved in [7, Theorem 6.6] that such a function can be extended with
|v|_{1,K₁} ≤ C ( ‖u‖_{1/2,s_{1,1}} + (1 + log^{1/2} p) ‖u‖_{0,∞,s_{1,1}} ).
Noting that
‖u‖²_{1/2,s_{1,1}} = |u|²_{1/2,s_{1,1}} + ‖u‖²_{L₂(s_{1,1})} ≤ |u|²_{1/2,s_{1,1}} + C ‖u‖²_{0,∞,s_{1,1}},
we put v = u on s_{1,1}, v = 0 on ∂K₁ \ s_{1,1}, and extend by [7, Theorem 7.5] to a function v ∈ P_p(K₁) to get (2.9) for ε = 1. The general case then follows using (2.3), (2.4), and (2.5). □
2.2 The Discrete Form of the Model Problem
We consider solution of (1.1) by the usual Galerkin finite element method [27], except that we use a family of reference elements defined by K_ε = (−1,1) × (−ε,ε), ε ∈ (0,1], rather than a single reference element.
Let Δ = {K} be a partition of Ω into nonoverlapping elements K, where each K is the image of one reference element K_ε,
Ω̄ = ∪_{K∈Δ} K̄, K = G_K(K_{ε_K}), ε_K ∈ (0,1],
where the G_K ∈ P₁(K_{ε_K}) are bijections such that
|G_K|_{1,∞,K_{ε_K}} ≤ C h_K, (2.10)
|G_K^{−1}|_{1,∞,K} ≤ C h_K^{−1}, (2.11)
and
|J_{G_K}|_{0,∞,K_{ε_K}} ≤ C h_K², (2.12)
|J_{G_K}^{−1}|_{0,∞,K_{ε_K}} ≤ C h_K^{−2}, (2.13)
where J_{G_K} is the Jacobian of G_K and h_K = diam(K). Roughly, this means that each element K is nearly one of the reference elements K_ε. We then define the finite element space
V_Δ = { u ∈ V : u|_K ∘ G_K ∈ P_p(K_{ε_K}), ∀K ∈ Δ },
and the discretization of (1.1) is
u ∈ V_Δ : a(u,v) = f(v), ∀v ∈ V_Δ. (2.14)
2.3 The Basis (Shape) Functions for the p-Version Finite Element Method
In order to solve (2.14) numerically, we must select a basis for P_p(K_{ε_K}). A hierarchical basis, described in, e.g., [6], has three types of shape functions defined on a square reference element K₁:
- the nodal shape functions;
- the side shape functions;
- the internal shape functions.
The nodal shape functions, N_i, i = 1,2,3,4, are the usual bilinear functions determined by their values at the nodes (vertices) of the element. Now, define p_j, j = 1, ..., p−1, by
p_j(x) = ∫_{−1}^{x} l_j(t) dt,
where l_j is the j-th Legendre polynomial. The side functions, S_{1,j}, j = 1, ..., p−1, associated with the side {(x,y) : x = −1, y ∈ (−1,1)} are defined by
S_{1,j}(x,y) = ((1−x)/2) p_j(y), ∀(x,y) ∈ K₁.
The other side functions are defined similarly. The interior functions, B_{j,k}, j, k = 1, ..., p−1, are defined by
B_{j,k}(x,y) = p_j(x) p_k(y), ∀(x,y) ∈ K₁.
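These definitions are easy to make concrete with standard library routines. The sketch below (an illustration, not code from the thesis; the exact normalization of p_j in the source is not recoverable, so the unnormalized antiderivative is used) builds the p_j from Legendre polynomials and assembles side and interior functions from them.

```python
import numpy as np
from numpy.polynomial import legendre as leg

def p_mode(j):
    """p_j(x): antiderivative of the j-th Legendre polynomial from -1.
    For j >= 1 it vanishes at x = -1 and x = 1, since l_j has zero mean."""
    return leg.Legendre.basis(j).integ(lbnd=-1.0)

def side_fn(j):
    """Side function S_{1,j}(x, y) = (1 - x)/2 * p_j(y), associated with
    the side x = -1 of the reference square K_1."""
    pj = p_mode(j)
    return lambda x, y: 0.5 * (1.0 - x) * pj(y)

def interior_fn(j, k):
    """Interior function B_{j,k}(x, y) = p_j(x) p_k(y); vanishes on the
    whole boundary of K_1."""
    pj, pk = p_mode(j), p_mode(k)
    return lambda x, y: pj(x) * pk(y)

# p_j vanishes at both endpoints, so these shape functions vanish at the
# element vertices, as a hierarchical basis requires.
p3 = p_mode(3)
assert abs(p3(-1.0)) < 1e-12 and abs(p3(1.0)) < 1e-12
```

Because each p_j vanishes at ±1, the side functions are determined entirely by their trace on one side, and the interior functions decouple from the element boundary.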
These basis functions will not be used further here, but give a motivation for the formulation of the preconditioners in later chapters. Note that there are (p+1)² degrees of freedom on each element. The p-version finite element method achieves convergence by increasing the degree p of the elements (see, e.g., [6] and the references cited therein). The more familiar (h-version) finite element method achieves convergence by decreasing the size h of the elements, with p remaining fixed. Published theoretical results (e.g., [5, 25, 6]) indicate that solutions of high accuracy can be obtained with the p-version at lower computational cost than with the h-version. The p-version finite element method is quite similar to the spectral methods [24].
2.4 The Preconditioned Conjugate Gradient Algorithm
Once a basis {φ₁, ..., φₙ} for V_Δ has been chosen, the solution to the discrete problem can be written
u = Σ_{i=1}^{n} c_i φ_i.
The discrete problem (2.14) can be written as the linear system
c₁, c₂, ..., cₙ : Σ_{i=1}^{n} c_i a(φ_i, φ_j) = f(φ_j), j = 1, ..., n. (2.15)
This system can be written in matrix form,
Ax = b,
where a_{ij} = a(φ_i, φ_j), b_j = f(φ_j), and x_i = c_i.
It is this problem which we will solve by the method of Preconditioned Conjugate Gradients (PCG) [22]. We must first define a preconditioning bilinear form, c(·,·), also symmetric and positive-definite, with corresponding matrix,
M = (m_{ij}), m_{ij} = c(φ_i, φ_j). We then define x₀ = 0, r₀ = b, k = 1, and perform
z_{k−1} = M^{−1} r_{k−1}, (2.16)
β_k = z_{k−1}ᵀ r_{k−1} / z_{k−2}ᵀ r_{k−2}, (β₁ = 0) (2.17)
d_k = z_{k−1} + β_k d_{k−1}, (d₁ = z₀) (2.18)
α_k = z_{k−1}ᵀ r_{k−1} / d_kᵀ A d_k, (2.19)
x_k = x_{k−1} + α_k d_k, (2.20)
r_k = r_{k−1} − α_k A d_k, (2.21)
until r_k = 0.
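The iteration (2.16)–(2.21) can be sketched as a short routine. The following is a generic PCG loop (an illustration, not code from the thesis); the preconditioner solve z = M⁻¹r is passed in as a callable, and the small test matrix is hypothetical rather than a finite element matrix.

```python
import numpy as np

def pcg(A, b, solve_M, tol=1e-10, max_iter=None):
    """Preconditioned conjugate gradients for a symmetric positive
    definite matrix A.  solve_M(r) must return z with M z = r, where M
    is the (SPD) matrix of the preconditioning form c(.,.)."""
    n = len(b)
    if max_iter is None:
        max_iter = n
    x = np.zeros(n)
    r = b.copy()                  # r_0 = b, since x_0 = 0
    z = solve_M(r)
    d = z.copy()                  # d_1 = z_0
    rz = r @ z
    for _ in range(max_iter):
        Ad = A @ d
        alpha = rz / (d @ Ad)     # (2.19)
        x += alpha * d            # (2.20)
        r -= alpha * Ad           # (2.21)
        if np.linalg.norm(r) < tol:
            break
        z = solve_M(r)            # (2.16)
        rz_new = r @ z
        beta = rz_new / rz        # (2.17)
        d = z + beta * d          # (2.18)
        rz = rz_new
    return x

# Small SPD example with diagonal (Jacobi) preconditioning.
A = np.array([[4.0, 1.0, 0.0],
              [1.0, 3.0, 1.0],
              [0.0, 1.0, 2.0]])
b = np.array([1.0, 2.0, 3.0])
x = pcg(A, b, solve_M=lambda r: r / np.diag(A))
```

For the preconditioners studied in this thesis, solve_M would apply the element-by-element block diagonal solves of c(·,·) in place of the simple diagonal scaling used above.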
2.4.1 Spectral Bounds on the Error for PCG
Convergence (r_k = 0, using exact arithmetic) occurs in k ≤ n steps. Usually, however, we treat this direct method as an iterative one, and stop with k ≪ n. We define the error e_k by e_k = (x − x_k)ᵀ A (x − x_k). We state a spectral bound on e_k. Suppose there exist positive m₁, m₂ such that
m₁ c(v,v) ≤ a(v,v) ≤ m₂ c(v,v), ∀v ∈ V_Δ. (2.22)
Then (see [22])
e_k ≤ 2 e₀ ( (√m₂ − √m₁) / (√m₂ + √m₁) )^{2k}. (2.23)
When we have m₁, m₂ that satisfy the above and the ratio m₂/m₁ is the smallest possible, we call κ = m₂/m₁ the condition number of the preconditioned system, and write the condition number bound on the error
e_k ≤ 2 e₀ ( (√κ − 1) / (√κ + 1) )^{2k}.
We call c(·,·) the preconditioner.
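As an arithmetic illustration of why a small condition number matters, the following sketch (assumed notation, not from the thesis) evaluates the per-step rate (√κ − 1)/(√κ + 1) appearing in the spectral bound (2.23) for two values of κ.

```python
import math

def pcg_reduction_factor(kappa, k):
    """rho**k with rho = (sqrt(kappa) - 1)/(sqrt(kappa) + 1), the rate
    appearing in the spectral error bound; illustrative arithmetic only."""
    rho = (math.sqrt(kappa) - 1.0) / (math.sqrt(kappa) + 1.0)
    return rho ** k

# A well-preconditioned system (small kappa) reduces the error bound far
# faster than an ill-conditioned one at the same iteration count.
well = pcg_reduction_factor(4.0, 20)      # kappa = 4
ill = pcg_reduction_factor(10 ** 4, 20)   # kappa = 10^4
```

Here `well` is below 10⁻⁹ while `ill` is still above one half, which is the practical content of the condition number bounds proved in Chapters 3 and 4.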
CHAPTER 3
THE BILINEAR PRECONDITIONER
In this chapter, we give a general form for subspace decomposition preconditioners, as in Mandel [34]. We then define the first of two preconditioners, which uses the space of continuous piecewise bilinear functions as one of the subspaces. This preconditioner was first proposed in [7].
3.1 Preconditioning by Subspace Decomposition
We now describe a general principle for constructing the preconditioners
in this paper. For each element K, define the local space
VK = {u\K : u Va}
By integration in (1.2) over the elements K, we can write
a(u,v) = 'Â£/aK(u,v), aK(u,v) = Jj {V u)TVv.
K
K
Denote the energy norm on K by
M k = {aK{u,u))x/2.
Now let Vk be split in a direct upper sum of nx + 1 subspaces
Vk = VuK VK,
so every vr Vr can be written uniquely as the sum of tik + 1 components,
VK = VK,0 + + VK,nK, VK,i Vi. (3.1)
We then define
c(u,v) = "^2 CK(u,V), Ck(u,V) = a,K{UKfl,VKfl) + ... + a/f(wK,nK>vK,nK) (3.2)
K
On each element, c/e(,) is a block diagonal approximation to a/f(,), with un
blocks, each corresponding to one of the subspaces Vx,i of Vk c(,) is the global
assembly of the local contributions from each element, and retains some of the
block diagonal structure from the local parts. Each of the blocks is better condi
tioned than the original global system, and can be solved in parallel.
Obviously, if for all K ∈ Δ,
m_{K,1} c_K(u,u) ≤ a_K(u,u) ≤ m_{K,2} c_K(u,u), ∀u ∈ V_K, (3.3)
then (2.22) holds with
m₁ = min_K m_{K,1}, m₂ = max_K m_{K,2}.
Thus the condition number κ = m₂/m₁ can be bounded using local bounds from each element, independently of the number of elements. Let m_{K,1} and m_{K,2} be the largest and smallest values, respectively, such that (3.3) holds, and denote the local condition number
κ_K = m_{K,2} / m_{K,1}.
Figure 3.1: Thin element K.
The following theorem, given in [33, Theorem 4.1], shows that to bound the condition number, it is necessary and sufficient to bound the decomposition (3.1) in energy. For other versions and related results, see [7, 29, 34, 52, 47].

Theorem 3.1 Let b_K be the least number such that
Σ_{i=0}^{n_K} |u_{K,i}|²_K ≤ b_K |u_K|²_K, ∀u_K ∈ V_K. (3.4)
Then
b_K ≤ κ_K ≤ (n_K + 1) b_K.

In this and the following chapters, we give specific examples of the abstract decomposition (3.2), and use Theorem 3.1 to estimate the condition number κ_K for each preconditioner. The preconditioners differ only in the choice of the subspaces V_{K,i}.
3.2 Formulation of the Bilinear Preconditioner
Let K = G_K(K_{ε_K}) ∈ Δ be an element as in Fig. 3.1. Define the decomposition of the local space V_K by specifying the associated projections of an arbitrary u_K, with u_K ∘ G_K ∈ P_p(K_{ε_K}). Let
u_K = Σ_{i=0}^{5} u_{K,i}, u_{K,i} ∈ V_{K,i}, (3.5)
where
- u_{K,0} is the mapped bilinear interpolant of u; that is, u_{K,0}(v_i) = u_K(v_i), i = 1,2,3,4, and u_{K,0} ∘ G_K ∈ P₁(K_{ε_K});
- for i = 1,2,3,4, u_{K,i} = u − u_{K,0} on the side s_i, u_{K,i} = 0 on ∂K \ s_i, u_{K,i} ∈ V_K, and |u_{K,i}|_K is minimal for all such functions;
- u_{K,5} = u − Σ_{i=0}^{4} u_{K,i}.
By summation over all elements, we can see that the preconditioner c(u,v) can be written as
c(u,v) = c(u₀,v₀) + Σ_e c(u_e,v_e) + Σ_K c(u_{K,5},v_{K,5}),
where u = u₀ + Σ_e u_e + Σ_K u_{K,5}, and
- u₀ is the piecewise mapped bilinear interpolant of u;
- u_e is a function determined by its values on one edge e of the finite element mesh, nonzero only on the two elements adjacent to the edge;
- u_{K,5} is a function nonzero only on element K.
The system (2.16) thus decouples into one subproblem for each side e, one subproblem for the interior of each element K, and a subproblem whose matrix is the stiffness matrix of the discretization of (1.1) by bilinear elements. Thus, one can see the analogy between this preconditioner and the element shape functions described in Chapter 2. For variants of this method, see [7, 33]. The implementation of the method and comparisons with other methods from the literature will be given in Chapter 5.
3.3 The First Condition Number Bound
This preconditioner was proposed and studied in [7], where it was proved that κ_K ≤ C(1 + log² p) in the case ε = 1. Here we show how κ_K deteriorates for ε → 0. This analysis first appeared in [36].

Theorem 3.2 For the preconditioner defined by the decomposition (3.5), it holds that
C₁ ε^{−2} ≤ κ_K ≤ C₂ ε^{−2} (1 + log² p),
where C₁, C₂ > 0 do not depend on ε or p.

Proof: In view of the assumptions on the mapping G_K, we may assume without loss of generality that K = K_ε. In view of Theorem 3.1, we need only show that the minimal b_K in (3.4) satisfies
b_K ≤ C ε^{−2} (1 + log² p), (3.7)
b_K ≥ C₁ ε^{−2}, C₁ > 0. (3.8)
In [7], it was proved that (3.7) holds for ε = 1. We give the proof here for the sake of completeness. Let u ∈ V_K. Because adding a constant to u will change only the component u_{K,0} by the same constant, we may assume without loss of generality that ∫∫_K u = 0. It follows from Theorem 2.2 and Lemma 2.1 that
‖u_{K,0}‖_{0,∞,K} ≤ C(1 + log^{1/2} p)|u|_{1,K},
|u_{K,0}|_{1,K} ≤ C(1 + log^{1/2} p)|u|_{1,K}.
Now u(v_i) = u_{K,0}(v_i), i = 1,2,3,4, and from the Trace Theorem and Theorem 2.4, there exist functions w_i ∈ P_p(K), i = 1,2,3,4, so that w_i = u − u_{K,0} on s_i, w_i = 0 on ∂K \ s_i, and
|w_i|_{1,K} ≤ C(1 + log p)|u|_{1,K}.
By the definition of u_{K,i}, we have |u_{K,i}|_{1,K} ≤ |w_i|_{1,K}. The estimate of u_{K,5} follows from the triangle inequality, and we may conclude that (3.4) holds with b_K ≤ C(1 + log² p) for ε = 1. The general case of (3.7) follows using (2.5).
To prove (3.8), consider a polynomial f(x) such that f(−1) = f(1) = 0, and let u(x,y) = f(x) on K. Then the component u_{K,0} = 0, and
|u|²_{1,K} = 2ε ∫_{−1}^{1} |f′|² dx = 2ε |f|²_{1,(−1,1)}. (3.9)
Let v ∈ H¹(K) be determined by the boundary conditions
v = f on s₁, v = 0 on ∂K \ s₁, (3.10)
and by the requirement that |v|_{1,K} is minimal. Then the component u_{K,1} satisfies
|u_{K,1}|_{1,K} ≥ |v|_{1,K}, (3.11)
and v is the solution of the boundary value problem Δv = 0 in K with the boundary conditions (3.10). Expanding f in a sine series,
f(x) = Σ_{n=1}^{∞} aₙ sin nπx,
we can write v as
v(x,y) = Σ_{n=1}^{∞} aₙ ( sinh(nπ(ε − y)) / sinh(2nπε) ) sin nπx.
Now, using Green's theorem,
|v|²_{1,K} = ∫_{∂K} v (∂v/∂n) ds = −∫_{−1}^{1} f(x) (∂v/∂y)(x,−ε) dx.
By orthogonality of the sine functions, it follows that
|v|²_{1,K} = Σ_{n=1}^{∞} aₙ² nπ ( cosh(2nπε) / sinh(2nπε) ) ≥ (1/(2ε)) Σ_{n=1}^{∞} aₙ² = (1/(2ε)) ‖f‖²_{0,(−1,1)},
using the inequality
cosh t / sinh t ≥ 1/t, t > 0,
and Parseval's equality. Thus, using (3.9), we get
|v|²_{1,K} / |u|²_{1,K} ≥ ‖f‖²_{0,(−1,1)} / (4ε² |f|²_{1,(−1,1)}).
Choosing f(x) = (x + 1)(1 − x) and using (3.11), we get (3.8). □
Figure 3.2: Condition numbers for linear preconditioning.
Figure 3.2 shows the numerically calculated condition numbers for p = 2 to 16 and ε = 1 to ε = 2^{−13}. We can see that the condition numbers indeed grow as ε^{−2}. The influence of p on the condition number is seen only for relatively small aspect ratios; for very small ε, the influence of ε prevails. The numerical results are in complete correspondence with Theorem 3.2.
CHAPTER 4
THE ADAPTIVE-LINEAR PRECONDITIONER
Examination of the proof of the lower bound in Theorem 3.2 shows that the degradation of the condition number is caused by decomposing shape functions from opposite long sides into different subspaces. This motivates a preconditioner which decomposes P_p(K_ε) into fewer subspaces. The following preconditioner, proposed and analyzed in [36], has a much lower condition number than the Bilinear Preconditioner of the previous chapter. In this chapter, we will define the preconditioner, state the results of [36], and give a new analysis which shows that the new preconditioner has a condition number which is bounded independently of ε.
4.1 Formulation
For K = G_K(K_ε), we now define the decomposition of V_K by
u_K = Σ_{i=0}^{3} u_{K,i}, (4.1)
where
- u_{K,0} is the mapped bilinear interpolant of u: u_{K,0}(v_i) = u_K(v_i), i = 1,2,3,4, and u_{K,0} ∘ G_K ∈ P₁(K_ε);
- u_{K,1} and u_{K,3} are defined by the boundary conditions u_{K,i} = u − u_{K,0} on s_i, u_{K,i} = 0 on ∂K \ s_i, i = 1,3, and by the requirement that u_{K,i} ∈ V_K and |u_{K,i}|_K is the minimal possible;
- u_{K,2} = u − u_{K,0} − u_{K,1} − u_{K,3}.
4.2 The First Condition Number Bound
In [36] we showed that the condition number κ_K deteriorates only as ε^{−1}, rather than as ε^{−2}. We give that proof here, primarily to contrast the result and the method with those of the next section.

Theorem 4.1 For the preconditioner defined by the decomposition (4.1), it holds that
C₁ min{p, ε^{−1}} ≤ κ_K ≤ C₂ ε^{−1} (1 + log² p),
where C₁ > 0 and C₂ do not depend on ε or p.

Proof: As in the proof of Theorem 3.2, we may assume without loss of generality that K = K_ε. Because of Theorem 3.1, we only need to show that the minimal b_K in (3.4) satisfies
b_K ≤ C ε^{−1} (1 + log² p), (4.2)
b_K ≥ C₁ min{p, ε^{−1}}, C₁ > 0. (4.3)
Let u ∈ V_K = P_p(K_ε). Because adding a constant to u will change only the component u_{K,0} by the same constant, we may assume without loss of generality that ∫∫_K u = 0. Then we have from Theorem 2.2,
|u(v_i)| ≤ C ε^{−1/2} (1 + log^{1/2} p)|u|_{1,K}, i = 1,2,3,4, (4.4)
and from Theorem 2.3,
|u(v₁) − u(v₄)| ≤ C(1 + log^{1/2} p)|u|_{1,K}, (4.5)
|u(v₂) − u(v₃)| ≤ C(1 + log^{1/2} p)|u|_{1,K}. (4.6)
It follows from (4.4)–(4.6) and Lemma 2.1 that
|u_{K,0}|_{1,K} ≤ C ε^{−1/2} (1 + log^{1/2} p)|u|_{1,K}. (4.7)
Further, (4.5) and (4.6) give
|u_{K,0}|_{1/2,s_i} ≤ C(1 + log^{1/2} p)|u|_{1,K}, i = 1,3, (4.8)
because u_{K,0} is linear on all sides, and because of the scaling invariance property (2.4). From (4.8) and Theorem 2.1, we have
|u − u_{K,0}|_{1/2,s_i} ≤ C(1 + log^{1/2} p)|u|_{1,K}, i = 1,3. (4.9)
Now Theorem 2.3 and the definition of u_{K,0} imply that
‖u − u_{K,0}‖_{0,∞,s_i} ≤ C(1 + log^{1/2} p)|u|_{1,K}, i = 1,3. (4.10)
Theorem 2.4 and (4.9), (4.10) now give the existence of functions w_i ∈ P_p(K) such that w_i = u − u_{K,0} on s_i, w_i = 0 on ∂K \ s_i, i = 1,3, and
|w_i|_{1,K} ≤ C ε^{−1/2} (1 + log p)|u|_{1,K}.
By the definition of u_{K,i}, we have |u_{K,i}|_{1,K} ≤ |w_i|_{1,K}, which together with (4.7) and the triangle inequality concludes the proof of (4.2).
To prove (4.3), consider the function
u(x,y) = x^p y / ε
on K = K_ε, with p odd. Then the bilinear interpolant of u is u_{K,0}(x,y) = xy/ε. By a simple computation,
|u|²_{1,K} = 4 / (ε(2p+1)) + 4p²ε / (3(2p−1)),
|u_{K,0}|²_{1,K} = 4 / (3ε) + 4ε/3.
Thus, for pε ≤ 1,
|u_{K,0}|²_{1,K} / |u|²_{1,K} ≥ C p,
and if p > 1/ε, the same computation for the function x^q y/ε ∈ P_p(K_ε), q = ⌈1/ε⌉, gives
|u_{K,0}|²_{1,K} / |u|²_{1,K} ≥ C ε^{−1},
so that in either case b_K ≥ C min{p, ε^{−1}}, which proves (4.3). □
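The two seminorm values used in this lower bound can be checked numerically. The sketch below (an illustration, not code from the thesis) evaluates |u|²_{1,K_ε} by Gauss-Legendre quadrature for u = x^p y/ε and its bilinear interpolant xy/ε, with the choice p = 1/ε (p odd), and confirms that the interpolant's energy exceeds that of u by a growing factor.

```python
import numpy as np

def seminorm1_sq(ux, uy, eps, n=60):
    """|u|_{1,K_eps}^2 by tensor-product Gauss-Legendre quadrature,
    given the partial derivatives ux, uy of u on (-1,1) x (-eps,eps)."""
    t, w = np.polynomial.legendre.leggauss(n)
    x, wx = t, w                  # nodes/weights on (-1, 1)
    y, wy = eps * t, eps * w      # mapped to (-eps, eps)
    X, Y = np.meshgrid(x, y)      # rows index y, columns index x
    W = np.outer(wy, wx)
    return float(np.sum(W * (ux(X, Y) ** 2 + uy(X, Y) ** 2)))

eps = 1.0 / 9.0
p = 9                             # p = 1/eps, p odd

# u = x^p y / eps and its bilinear interpolant u0 = x y / eps
u_sq = seminorm1_sq(lambda X, Y: p * X ** (p - 1) * Y / eps,
                    lambda X, Y: X ** p / eps, eps)
u0_sq = seminorm1_sq(lambda X, Y: Y / eps,
                     lambda X, Y: X / eps, eps)

# Closed forms: |u|^2  = 4/(eps(2p+1)) + 4 p^2 eps/(3(2p-1)),
#               |u0|^2 = 4/(3 eps) + 4 eps/3.
ratio = u0_sq / u_sq              # bounded below by c * min(p, 1/eps)
```

With ε = 1/9 and p = 9 the ratio is roughly 4.7, already well above 1, in line with the min{p, ε⁻¹} growth of the lower bound.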
Figure 4.1: Condition numbers for improved preconditioning.
Fig. 4.1 summarizes the numerically evaluated condition numbers κ_K for p = 2 to 16 and ε = 1 to 2^{−13}. The computed results agree with the previous theoretical result, but seem to show that the upper bound (4.2) is not sharp for ε → 0. In the following section we will improve this upper bound.
4.3 An Improved Condition Number Bound
It appears in Figure 4.1 that, for a given p, the condition number remains bounded as ε → 0. This is indeed the case, and we now develop an upper bound for the condition number which grows as a function of p, but is independent of ε. We accomplish this via energy bounds as in the previous sections, but this time in terms of the seminorms
‖∂u/∂x‖_{0,K} and ‖∂u/∂y‖_{0,K}.
The idea is motivated by the following operator interpolation theorem, in the spirit of [28].
Theorem 4.2 Let T : V → V be a linear operator on a space V, let |·|_A and |·|_B be seminorms on V, and for ε ∈ (0,1] let
|u|²_ε = ε|u|²_A + (1/ε)|u|²_B
define a new seminorm on V, and define
‖T‖_ε = sup { |Tu|_ε / |u|_ε : u ∈ V, |u|_ε ≠ 0 }.
Suppose that T satisfies, for some C₁, C₂ > 0,
|Tu|_A = 0 if |u|_A = 0,
|Tu|_B = 0 if |u|_B = 0,
|Tu|_A ≤ C₁ |u|_A, |Tu|_B ≤ C₂ |u|_B, ∀u ∈ V.
Then, for any ε ∈ (0,1],
‖T‖_ε ≤ max{C₁, C₂}. (4.11)

Proof: Let u ∈ V, |u|_ε ≠ 0, be given. Suppose first that |u|_A = 0. Then |Tu|_A = 0, and
|Tu|_ε / |u|_ε = |Tu|_B / |u|_B ≤ C₂,
so (4.11) holds in this case; the case |u|_B = 0 is analogous. Now suppose that |u|_A ≠ 0 and |u|_B ≠ 0. Then
|Tu|²_ε / |u|²_ε = ( ε|Tu|²_A + (1/ε)|Tu|²_B ) / ( ε|u|²_A + (1/ε)|u|²_B ) = h(ε),
and h is a monotone continuous function of ε. Thus, since h(1) ≤ max{C₁², C₂²}, and
lim_{ε→0⁺} h(ε) = |Tu|²_B / |u|²_B ≤ C₂²,
(4.11) follows. □
This result can be applied with V = P_p(K₁), and
|u|²_A = ‖∂u/∂x‖²_{0,K₁}, |u|²_B = ‖∂u/∂y‖²_{0,K₁},
and with linear operators defined by, e.g., bilinear interpolation. Theorem 4.2 indicates that we should be able to bound
ε ‖∂(Tu)/∂x‖²_{0,K₁} + (1/ε) ‖∂(Tu)/∂y‖²_{0,K₁}
independently of ε, if we can bound ‖∂(Tu)/∂x‖_{0,K₁} and ‖∂(Tu)/∂y‖_{0,K₁} separately in terms of the corresponding seminorms of u. We will now develop bounds for interpolation, trace, and extension operators in terms of these seminorms.
The following lemma makes use of the properties of Legendre polynomials (see, e.g., [1, 24]) to obtain a bound on the energy of bilinear interpolation in terms of the new seminorms.
Lemma 4.1 Let u ∈ P_p(K₁), and suppose u₀ is the bilinear interpolant of u on K₁. Then
‖∂u₀/∂x‖²_{0,K₁} ≤ C p² ‖∂u/∂x‖²_{0,K₁}, (4.12)
‖∂u₀/∂y‖²_{0,K₁} ≤ C p² ‖∂u/∂y‖²_{0,K₁}, (4.13)
for some C > 0, independent of p. These inequalities are sharp, up to a constant factor.
Proof: To prove (4.13), note that u can be written uniquely in the form
u(x,y) = Σ_{i=0}^{p} Σ_{j=0}^{p} a_{ij} l_i(x) L_j(y),
where l_i is the i-th degree Legendre polynomial and
L_j(y) = 1 if j = 0; L_j(y) = y if j = 1; L_j(y) = ∫_{−1}^{y} l_{j−1}(t) dt if j = 2, ..., p.
(Note: L_j(1) = L_j(−1) = 0, for j = 2, ..., p.)
We use the following properties of Legendre polynomials:
∫_{−1}^{1} l_i(x) l_j(x) dx = 2δ_{ij} / (2i+1), (4.14)
l_i(−1) = (−1)^i, (4.15)
l_i(1) = 1, (4.16)
l_i′ = Σ_{n=0, n+i odd}^{i−1} (2n+1) l_n, (4.17)
where δ_{ij} is the Kronecker delta. From these it follows that
∂u/∂y = Σ_{i=0}^{p} Σ_{j=1}^{p} a_{ij} l_i(x) l_{j−1}(y), (4.18)
‖∂u/∂y‖²_{0,K₁} = Σ_{i=0}^{p} Σ_{j=1}^{p} 4a²_{ij} / ((2i+1)(2j−1)), (4.19)
so that
Σ_{i=0}^{p} 4a²_{i1} / (2i+1) ≤ ‖∂u/∂y‖²_{0,K₁}. (4.20)
It also follows that if u_{ij,0} is the bilinear interpolant of l_i(x)L_j(y), then u₀ = Σ_{i,j=0}^{p} a_{ij} u_{ij,0}. By direct application of the properties of Legendre polynomials and their primitives (see above),
∂u_{ij,0}/∂y = 0 if j ≠ 1; = 1 if j = 1, i even; = x if j = 1, i odd.
Thus,
∂u₀/∂y = A + Bx,
where
A = Σ_{i=0, i even}^{p} a_{i1}, B = Σ_{i=0, i odd}^{p} a_{i1}.
Then, using the Cauchy inequality for sums,
‖∂u₀/∂y‖²_{0,K₁} = 4A² + (4/3)B² ≤ 4(A² + B²)
= 4 ( Σ_{i even} a_{i1} )² + 4 ( Σ_{i odd} a_{i1} )²
≤ 4 ( Σ_{i=0}^{p} (2i+1) ) Σ_{i=0}^{p} a²_{i1} / (2i+1)
= (p+1)² Σ_{i=0}^{p} 4a²_{i1} / (2i+1)
≤ (p+1)² ‖∂u/∂y‖²_{0,K₁},
where the last inequality follows by (4.20). Thus (4.13) holds, and by interchanging the roles of x and y, we get (4.12).
To see that the bound is sharp, let
$$u(x,y) = y \sum_{i=0}^{p} (2i+1)\, l_i(x),$$
and $u_0(x,y) = (A + Bx)y$, where
$$A = \sum_{\substack{i=0 \\ i \text{ even}}}^{p} (2i+1), \qquad B = \sum_{\substack{i=0 \\ i \text{ odd}}}^{p} (2i+1).$$
Observe that $A \ge c_1 p^2$ and $B \ge c_1 p^2$ for some positive constant $c_1$. Then we
have
$$\left\|\frac{\partial u}{\partial y}\right\|_{0,K_1}^2 \le c_0\, p^2$$
for some constant $c_0$, while
$$\left\|\frac{\partial u_0}{\partial y}\right\|_{0,K_1}^2 \ge A^2 + B^2 \ge 2c_1^2\, p^4.$$
Thus,
$$\left\|\frac{\partial u_0}{\partial y}\right\|_{0,K_1}^2 \ge \frac{2c_1^2}{c_0}\, p^2 \left\|\frac{\partial u}{\partial y}\right\|_{0,K_1}^2. \qquad \Box$$
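The sharpness example can be checked by direct arithmetic; the sketch below (our illustration, using the closed-form energies derived above) shows the energy ratio of the bilinear interpolant growing like $p^2$.

```python
# Our numerical illustration that the bound of Lemma 4.1 is sharp:
# for u(x,y) = y * sum_{i=0}^{p} (2i+1) l_i(x), the energy ratio
# ||d(u0)/dy||^2 / ||du/dy||^2 grows like p^2 (the ratio divided by
# p^2 stays bounded away from zero).
def energy_ratio(p):
    A = sum(2 * i + 1 for i in range(0, p + 1, 2))   # even-index a_{i1}
    B = sum(2 * i + 1 for i in range(1, p + 1, 2))   # odd-index a_{i1}
    num = 4.0 * A ** 2 + (4.0 / 3.0) * B ** 2        # ||d(u0)/dy||^2 for u0 = (A+Bx)y
    den = 4.0 * (p + 1) ** 2                         # ||du/dy||^2 via (4.19)
    return num / den

for p in (4, 8, 16, 32, 64):
    print(p, energy_ratio(p) / p ** 2)               # bounded below (tends to 1/3)
```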
The following is a discrete extension inequality, in the spirit of Theorem 2.4, but in
terms of the new seminorms.

Lemma 4.2 (Extension from the side) Let $f$ be a polynomial of degree $p$. Then
there is a $w \in P_p(K_1)$ such that
$$w|_{s_1} = f, \qquad w|_{s_3} = 0,$$
$$\left\|\frac{\partial w}{\partial y}\right\|_{0,K_1}^2 \le \frac{C}{p}\, |f|_{1,s_1}^2, \qquad \left\|\frac{\partial w}{\partial x}\right\|_{0,K_1}^2 \le C p^2\, \|f\|_{0,s_1}^2.$$
If, in addition, $f(-1) = f(1) = 0$, then $w|_{s_2} = w|_{s_4} = 0$.
Proof: Let $g$ be defined on $(-1,1)$ by
$$g(x) = \frac{1}{2}\bigl[l_p(x) + l_{p-1}(x)\bigr],$$
where $l_i$ is the $i$th Legendre polynomial. Then the following properties hold.
(4.15) and (4.16) imply
$$g(1) = \frac{1}{2}[1 + 1] = 1, \qquad g(-1) = \frac{1}{2}\bigl[(-1)^p + (-1)^{p-1}\bigr] = 0.$$
From (4.14), we obtain
$$\|g\|_{0,(-1,1)}^2 = \frac{1}{4}\Bigl[\frac{2}{2p+1} + \frac{2}{2p-1}\Bigr] \le \frac{2}{3p}.$$
Applying (4.17), we get
$$|g|_{1,(-1,1)}^2 = \frac{1}{4}\Bigl[\sum_{\substack{n=0 \\ n+p \text{ odd}}}^{p-1} (4n+2) + \sum_{\substack{n=0 \\ n+p-1 \text{ odd}}}^{p-2} (4n+2)\Bigr] \le 2p^2.$$
Let $w(x,y) = g(x)f(y)$ on $K_1$. It follows immediately from these properties of $g$ that
$$w(1,y) = g(1)f(y) = f(y), \qquad w(-1,y) = g(-1)f(y) = 0,$$
and if $f(-1) = f(1) = 0$ then $w(\cdot,-1) = w(\cdot,1) = 0$. Finally,
$$\left\|\frac{\partial w}{\partial y}\right\|_{0,K_1}^2 = \|g\|_{0,(-1,1)}^2\, |f|_{1,s_1}^2 \le \frac{2}{3p}\, |f|_{1,s_1}^2,$$
$$\left\|\frac{\partial w}{\partial x}\right\|_{0,K_1}^2 = |g|_{1,(-1,1)}^2\, \|f\|_{0,s_1}^2 \le 2p^2\, \|f\|_{0,s_1}^2. \qquad \Box$$
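The properties of $g$ used in this proof can be verified numerically; the following check (ours, with Gauss quadrature) confirms the endpoint values and both norm bounds.

```python
import numpy as np
from numpy.polynomial import legendre as L

# Our numerical check of the extension function g = (l_p + l_{p-1})/2 from
# Lemma 4.2: g(1) = 1, g(-1) = 0, ||g||^2_{0,(-1,1)} <= 2/(3p), and
# |g|^2_{1,(-1,1)} <= 2 p^2.
x, w = L.leggauss(64)                    # Gauss rule, exact to degree 127
for p in (2, 5, 10, 20):
    c = np.zeros(p + 1)
    c[p] = c[p - 1] = 0.5                # Legendre-series coefficients of g
    assert abs(L.legval(1.0, c) - 1.0) < 1e-12
    assert abs(L.legval(-1.0, c)) < 1e-12
    g2 = np.sum(w * L.legval(x, c) ** 2)            # ||g||^2
    dg2 = np.sum(w * L.legval(x, L.legder(c)) ** 2)  # |g|_1^2
    assert g2 <= 2.0 / (3.0 * p) + 1e-12
    assert dg2 <= 2.0 * p ** 2 + 1e-9
print("Lemma 4.2 bounds verified for p = 2, 5, 10, 20")
```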
The following lemma is a trace inequality with respect to the new seminorms.

Lemma 4.3 (Trace Inequality) Let $v \in P_p(K_1)$, and let $f = v|_{s_1}$. Then
$$|f|_{1,s_1}^2 \le c p^2 \left\|\frac{\partial v}{\partial y}\right\|_{0,K_1}^2. \qquad (4.21)$$

Proof: We can expand
$$v(x,y) = \sum_{i=0}^{p}\sum_{j=0}^{p} a_{ij}\, l_i(x)\, L_j(y),$$
where the $L_j$ are as in Lemma 4.1. Then
$$f(y) = \sum_{j=0}^{p}\Bigl[\sum_{i=0}^{p} a_{ij}\, l_i(1)\Bigr] L_j(y) = \sum_{j=0}^{p}\Bigl[\sum_{i=0}^{p} a_{ij}\Bigr] L_j(y).$$
Now,
$$f'(y) = \sum_{j=1}^{p}\Bigl[\sum_{i=0}^{p} a_{ij}\Bigr]\, l_{j-1}(y).$$
Thus, we have
$$|f|_{1,s_1}^2 = \sum_{j=1}^{p}\Bigl[\sum_{i=0}^{p} a_{ij}\Bigr]^2 \frac{2}{2j-1}.$$
Therefore, by Cauchy's inequality for sums,
$$|f|_{1,s_1}^2 \le \sum_{j=1}^{p}(p+1)\sum_{i=0}^{p} a_{ij}^2\,\frac{2}{2j-1} \le (p+1)\,\frac{2p+1}{2}\,\sum_{i=0}^{p}\sum_{j=1}^{p}\frac{4 a_{ij}^2}{(2i+1)(2j-1)}$$
$$= \frac{(p+1)(2p+1)}{2}\left\|\frac{\partial v}{\partial y}\right\|_{0,K_1}^2 \le 3p^2\left\|\frac{\partial v}{\partial y}\right\|_{0,K_1}^2. \qquad \Box$$
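The coefficient formulas above make the trace inequality easy to test; the sketch below (our illustration) checks it for random coefficient arrays.

```python
import numpy as np

# Our numerical check of the trace inequality of Lemma 4.3,
# |f|^2_{1,s_1} <= 3 p^2 ||dv/dy||^2_{0,K_1}, using the coefficient
# formula (4.19) for random coefficient arrays a_{ij}.
rng = np.random.default_rng(1)
for p in (3, 6, 12):
    a = rng.standard_normal((p + 1, p + 1))            # coefficients a_{ij}
    i = np.arange(p + 1)[:, None]
    j = np.arange(1, p + 1)[None, :]
    # ||dv/dy||^2 = sum_{i, j>=1} 4 a_ij^2 / ((2i+1)(2j-1))
    dvdy2 = (4.0 * a[:, 1:] ** 2 / ((2 * i + 1) * (2 * j - 1))).sum()
    # |f|^2_{1,s_1} = sum_{j>=1} (sum_i a_ij)^2 * 2/(2j-1)
    f1 = ((a[:, 1:].sum(axis=0)) ** 2 * 2.0 / (2 * np.arange(1, p + 1) - 1)).sum()
    assert f1 <= 3.0 * p ** 2 * dvdy2
print("trace inequality verified for p = 3, 6, 12")
```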
Lemma 4.4 Let $f$ be a polynomial of degree $p$ on $[-1,1]$, let $f_0$ be the linear
interpolant of $f$, and let $g = f - f_0$. Then
$$|g|_{1,(-1,1)} \le |f|_{1,(-1,1)}.$$

Proof: Let $L_i$, $i = 0,\dots,p$, be defined as in Lemma 4.1. Then $f$ has the
expansion
$$f = \sum_{i=0}^{p} b_i L_i,$$
and
$$|f|_{1,(-1,1)}^2 = \sum_{i=1}^{p} b_i^2\, \frac{2}{2i-1}.$$
Because $L_i(-1) = L_i(1) = 0$ for $i > 1$, $L_0(y) = 1$, and $L_1(y) = y$, we have
$f_0 = b_0 + b_1 y = b_0 L_0(y) + b_1 L_1(y)$. Therefore, $g = f - f_0$ has the expansion
$$g = \sum_{i=2}^{p} b_i L_i.$$
So
$$|g|_{1,(-1,1)}^2 = \sum_{i=2}^{p} b_i^2\, \frac{2}{2i-1} \le |f|_{1,(-1,1)}^2. \qquad \Box$$
The following theorem gives a bound on the projections defined by the Adaptive-Linear decomposition.

Theorem 4.3 Let $u \in P_p(K_1)$, $v = u \circ F_\varepsilon^{-1}$, and let $v_i$, $i = 0,1,2,3$, be defined on
$K_\varepsilon$ by:

$v_0$ is the bilinear interpolant of $v$ on $K_\varepsilon$;

$v_1$ minimizes $a_K(\cdot,\cdot)$, subject to: $v_1|_{s_1} = (v - v_0)|_{s_1}$, $v_1|_{s_i} = 0$, $i = 2,3,4$;

$v_2$ minimizes $a_K(\cdot,\cdot)$, subject to: $v_2|_{s_3} = (v - v_0)|_{s_3}$, $v_2|_{s_i} = 0$, $i = 1,2,4$;

$v_3 = v - \sum_{i=0}^{2} v_i$.

Then we have
$$|v_0|_{1,K_\varepsilon}^2 \le c_0\, p^2\, |v|_{1,K_\varepsilon}^2,$$
$$|v_i|_{1,K_\varepsilon}^2 \le c_1(\varepsilon p^4 + p)\, |v|_{1,K_\varepsilon}^2, \quad i = 1,2,$$
$$|v_3|_{1,K_\varepsilon}^2 \le c_2(\varepsilon p^4 + p^2)\, |v|_{1,K_\varepsilon}^2.$$
Proof: For $v_0$, the bilinear interpolant of $v$ on $K_\varepsilon$, let $u_0$ be the bilinear
interpolant of $u$ on $K_1$. Then $v_0 = u_0 \circ F_\varepsilon^{-1}$. Thus, by (2.6) and Lemma 4.1, we have
$$|v_0|_{1,K_\varepsilon}^2 = \varepsilon \left\|\frac{\partial u_0}{\partial x}\right\|_{0,K_1}^2 + \frac{1}{\varepsilon}\left\|\frac{\partial u_0}{\partial y}\right\|_{0,K_1}^2 \qquad (4.22)$$
$$\le C p^2\Bigl(\varepsilon \left\|\frac{\partial u}{\partial x}\right\|_{0,K_1}^2 + \frac{1}{\varepsilon}\left\|\frac{\partial u}{\partial y}\right\|_{0,K_1}^2\Bigr) \qquad (4.23)$$
$$= C p^2\, |v|_{1,K_\varepsilon}^2. \qquad (4.24)$$
For $v_1$, let $u, u_0$ be as above, $g = u|_{s_1}$, and $f = (u - u_0)|_{s_1}$. By Lemmas
4.2, 4.3 and 4.4, there is a function $w \in P_p(K_1)$ such that $v_1 = w \circ F_\varepsilon^{-1}$ on $\partial K_\varepsilon$,
and
$$\left\|\frac{\partial w}{\partial x}\right\|_{0,K_1}^2 \le c p^2\, \|f\|_{0,s_1}^2, \qquad (4.25)$$
$$\left\|\frac{\partial w}{\partial y}\right\|_{0,K_1}^2 \le \frac{c}{p}\, |f|_{1,s_1}^2. \qquad (4.26)$$
So, by the minimality of $v_1$,
$$|v_1|_{1,K_\varepsilon}^2 \le |w \circ F_\varepsilon^{-1}|_{1,K_\varepsilon}^2.$$
By (2.6),
$$|w \circ F_\varepsilon^{-1}|_{1,K_\varepsilon}^2 = \varepsilon \left\|\frac{\partial w}{\partial x}\right\|_{0,K_1}^2 + \frac{1}{\varepsilon}\left\|\frac{\partial w}{\partial y}\right\|_{0,K_1}^2.$$
Then (4.25) and (4.26) give
$$\varepsilon \left\|\frac{\partial w}{\partial x}\right\|_{0,K_1}^2 + \frac{1}{\varepsilon}\left\|\frac{\partial w}{\partial y}\right\|_{0,K_1}^2 \le c p^2 \varepsilon\, \|f\|_{0,s_1}^2 + \frac{c}{p\varepsilon}\, |f|_{1,s_1}^2.$$
Since $f(-1) = f(1) = 0$,
$$c p^2 \varepsilon\, \|f\|_{0,s_1}^2 + \frac{c}{p\varepsilon}\, |f|_{1,s_1}^2 \le c' p^2 \varepsilon\, |f|_{1,s_1}^2 + \frac{c}{p\varepsilon}\, |f|_{1,s_1}^2 = \Bigl(c' p^2 \varepsilon + \frac{c}{p\varepsilon}\Bigr) |f|_{1,s_1}^2.$$
From Lemma 4.4 (since $u_0|_{s_1}$ is the linear interpolant of $g$), we have
$$\Bigl(c' p^2 \varepsilon + \frac{c}{p\varepsilon}\Bigr) |f|_{1,s_1}^2 \le \Bigl(c' p^2 \varepsilon + \frac{c}{p\varepsilon}\Bigr) |g|_{1,s_1}^2.$$
Now, by the trace lemma (Lemma 4.3),
$$\Bigl(c' p^2 \varepsilon + \frac{c}{p\varepsilon}\Bigr) |g|_{1,s_1}^2 \le \Bigl(c_1 p^4 \varepsilon + \frac{c_1 p}{\varepsilon}\Bigr) \left\|\frac{\partial u}{\partial y}\right\|_{0,K_1}^2 \le (c_1 p^4 \varepsilon + c_1 p)\, \frac{1}{\varepsilon}\left\|\frac{\partial u}{\partial y}\right\|_{0,K_1}^2.$$
Finally, by (2.6),
$$(c_1 p^4 \varepsilon + c_1 p)\, \frac{1}{\varepsilon}\left\|\frac{\partial u}{\partial y}\right\|_{0,K_1}^2 \le (c_1 p^4 \varepsilon + c_1 p)\Bigl(\frac{1}{\varepsilon}\left\|\frac{\partial u}{\partial y}\right\|_{0,K_1}^2 + \varepsilon \left\|\frac{\partial u}{\partial x}\right\|_{0,K_1}^2\Bigr) = c_1(\varepsilon p^4 + p)\, |v|_{1,K_\varepsilon}^2,$$
with $c$, $c'$, and $c_1$ constants, each independent of $\varepsilon$ and $p$, and the bound for $v_1$ is
proved. Similarly, the bound for $v_2$ follows.
For $v_3$, we have, by the triangle inequality and the bounds just proved,
$$|v_3|_{1,K_\varepsilon}^2 = |v - (v_0 + v_1 + v_2)|_{1,K_\varepsilon}^2 \le 4\bigl(|v|_{1,K_\varepsilon}^2 + |v_0|_{1,K_\varepsilon}^2 + |v_1|_{1,K_\varepsilon}^2 + |v_2|_{1,K_\varepsilon}^2\bigr)$$
$$\le 4\bigl[1 + c_0 p^2 + 2c_1(\varepsilon p^4 + p)\bigr]\, |v|_{1,K_\varepsilon}^2 \le c_2(\varepsilon p^4 + p^2)\, |v|_{1,K_\varepsilon}^2. \qquad \Box$$
We cannot apply Theorem 4.2 directly to the results given above, because the
minimum energy extension operator is a different operator for each $\varepsilon$. We can,
however, obtain a bound on the condition number which is independent of $\varepsilon$.

Theorem 4.4 The preconditioner defined by the decomposition (4.1) has a condition number which is bounded by
$$\kappa_K \le C p^2 (1 + \log p), \qquad (4.27)$$
where $C$ is a positive constant, independent of $\varepsilon$ and $p$.
Proof: As stated before, we lose no generality in assuming that $K = K_\varepsilon$. It
follows immediately from Theorem 4.3 and Theorem 3.1 that there is a $C_0 > 0$,
independent of $p$, $\varepsilon$, such that
$$\kappa_K \le C_0(p^2 + \varepsilon p^4),$$
from which we can immediately get the bound $\kappa_K \le C_0 p^4$ (since $\varepsilon \le 1$), but we can do much
better. From Theorem 4.1, we know that there is a $C_1 > 0$ such that
$$\kappa_K \le C_1\, \frac{1 + \log^2 p}{\varepsilon}.$$
Thus, there is a positive constant $C_3$ such that
$$\kappa_K \le C_3 \min\Bigl\{p^2 + \varepsilon p^4,\ \frac{1 + \log^2 p}{\varepsilon}\Bigr\}.$$
Since $p^2 + \varepsilon p^4$ is a monotonically increasing function of $\varepsilon > 0$, and $(1 + \log^2 p)/\varepsilon$ is
monotonically decreasing, their minimum achieves its maximum value at a point
$\varepsilon^* > 0$ where the two functions are equal:
$$p^2 + \varepsilon^* p^4 = \frac{1 + \log^2 p}{\varepsilon^*}.$$
This occurs at
$$\varepsilon^* = \frac{-1 + (5 + 4\log^2 p)^{1/2}}{2p^2}.$$
Then we have
$$p^2 + \varepsilon^* p^4 = p^2\, \frac{1 + (5 + 4\log^2 p)^{1/2}}{2} \le C_4\, p^2 (1 + \log p)$$
for some positive constant $C_4$. Therefore,
$$\kappa_K \le C_3 C_4\, p^2 (1 + \log p). \qquad \Box$$
The results of this chapter combine to give the condition number bound
$$\kappa_K \le C \min\bigl\{(1 + \log^2 p)/\varepsilon,\ p^2(1 + \log p)\bigr\},$$
which more fully predicts the behavior of the condition numbers given in figure 4.1.
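The balancing argument in the proof of Theorem 4.4 can be checked numerically; the following sketch (ours; the choice of natural logarithm is an assumption, and all constants are dropped) verifies that the two bounds coincide at $\varepsilon^*$ and that their common value grows like $p^2(1+\log p)$.

```python
import math

# Illustrative check (ours) of the balancing argument: at
#   eps* = (-1 + sqrt(5 + 4 log^2 p)) / (2 p^2)
# the increasing bound p^2 + eps p^4 equals the decreasing bound
# (1 + log^2 p)/eps, and the common value is O(p^2 (1 + log p)).
for p in (2, 8, 32, 128):
    Lp = math.log(p) ** 2
    eps_star = (-1.0 + math.sqrt(5.0 + 4.0 * Lp)) / (2.0 * p ** 2)
    increasing = p ** 2 + eps_star * p ** 4
    decreasing = (1.0 + Lp) / eps_star
    assert abs(increasing - decreasing) <= 1e-9 * decreasing   # bounds balance
    print(p, increasing / (p ** 2 * (1.0 + math.log(p))))      # stays bounded
```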
CHAPTER 5
IMPLEMENTATION, VARIATIONS AND CONCLUSIONS
We propose a hybrid algorithm which combines the use of the preconditioners
from Chapters 3 and 4. The choice of preconditioner can be made on each
element independently; precondition nearly square elements with the simpler
Bilinear Preconditioner, and precondition the more distorted elements with the
more effective Adaptive-Bilinear Preconditioner.
With a certain ordering of the hierarchical basis functions using the shape
functions from Chapter 2, the matrix $A$ corresponding to this problem has the
block form
$$A = \begin{pmatrix} A_{11} & A_{12} & A_{13} \\ A_{21} & A_{22} & A_{23} \\ A_{31} & A_{32} & A_{33} \end{pmatrix}.$$
The $A_{11}$ block corresponds to the stiffness matrix of the original problem, discretized with bilinear elements. The $A_{22}$ block is a block 7-diagonal system, with
blocks of order $p$, each corresponding to the edge basis functions of a single edge,
shared between two elements. The $A_{33}$ block is a block diagonal system, with
blocks of order $(p-1)^2$, each corresponding to the interior basis functions of a
single element.
Instead of the original shape functions, we change the side shape functions
to satisfy the minimum energy condition of Chapter 3. This change of basis can
be performed on each element independently. The basis functions are no longer
hierarchical, so one of the advantages of p-refinement (e.g. [56]) is lost. The
global matrix is then assembled in the usual way, giving the following matrix in
block form:
$$\begin{pmatrix} A_{11} & B_{12} & A_{13} \\ B_{21} & B_{22} & 0 \\ A_{31} & 0 & A_{33} \end{pmatrix}.$$
The matrix $M$ corresponding to the preconditioner from Chapter 3 has the block
form:
$$M = \begin{pmatrix} M_{11} & & \\ & M_{22} & \\ & & M_{33} \end{pmatrix}.$$
The blocks correspond to the original problem: $M_{11} = A_{11}$; $M_{22}$ decouples into small edge problems; $M_{33} = A_{33}$ is block diagonal, with diagonal blocks corresponding to the interior functions of each element.
This preconditioner is different from that proposed in [16], in that the interior
functions are not eliminated, and the vertex basis functions are not changed at
all. This allows the global (coarse grid) system to be solved a priori, or in parallel. This is simpler than the previous algorithm, but does not generalize well to
three dimensions, because of reasons cited in [7], namely that the energy of the
vertex functions grows very quickly as a function of $p$. The preconditioners in
[34, 7, 33, 36] advocate elimination of the unknowns, rather than orthogonalization. The method in [31] uses a symbolic elimination, with iteration on a reduced
system. This is similar to the methods developed in [46, 47], which use variations
of the Schwarz algorithm on the reduced system. The methods presented here
bear some similarity to the element-by-element techniques of Hughes et al. (see,
e.g., [26]), but the EBE methods appear to require many more iterations.
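To illustrate how such a block-diagonal preconditioner is applied, here is a minimal PCG sketch (ours, on a synthetic symmetric positive definite matrix; the sizes, blocks, and tolerance are assumptions, not the thesis's implementation).

```python
import numpy as np

# Illustrative sketch (not the thesis code): PCG with a block-diagonal
# preconditioner M = diag(M11, M22, M33), each Mii being the corresponding
# diagonal block of A, solved independently (hence in parallel, in principle).
rng = np.random.default_rng(0)
n = 30
Q = rng.standard_normal((n, n))
A = Q @ Q.T + n * np.eye(n)          # synthetic SPD "stiffness" matrix
blocks = [slice(0, 10), slice(10, 20), slice(20, 30)]

def apply_Minv(r):
    z = np.empty_like(r)
    for b in blocks:                  # each block solved independently
        z[b] = np.linalg.solve(A[b, b], r[b])
    return z

def pcg(A, b, tol=1e-10, maxit=200):
    x = np.zeros_like(b)
    r = b - A @ x
    z = apply_Minv(r)
    p = z.copy()
    for _ in range(maxit):
        Ap = A @ p
        alpha = (r @ z) / (p @ Ap)
        x += alpha * p
        r_new = r - alpha * Ap
        if np.linalg.norm(r_new) < tol * np.linalg.norm(b):
            return x
        z_new = apply_Minv(r_new)
        beta = (r_new @ z_new) / (r @ z)
        p = z_new + beta * p
        r, z = r_new, z_new
    return x

b = rng.standard_normal(n)
x = pcg(A, b)
print(np.linalg.norm(A @ x - b))     # small residual
```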
The interior unknowns of the Adaptive-Bilinear Preconditioner are coupled
to the edge functions corresponding to long sides of the corresponding element,
so the procedure is not quite as simple. By eliminating the interior variables
from the edge equations, one can iterate on the reduced system. The
interior functions can be eliminated, by substructuring, on each element independently. One obtains in this case a subproblem corresponding to a chain of
thin elements connected on long sides. Adjacent chains (and nearly square elements) are decoupled, however, and thus can be solved in parallel. Thus, we have
an algorithm which adapts to distorted elements, and which still decouples the
original problem into much smaller subproblems.
As was noted in [7], the method of Chapter 3 does not generalize well to three
dimensions, because the condition number grows too quickly as a function of $p$.
Other methods have been proposed for the three-dimensional case [7, 35, 32] for
which the idea of combining subspaces for distorted elements appears promising;
the analysis techniques of Chapters 3 and 4 should be applicable to those cases
as well. The method of Chapter 3 was based upon a similar one for the h-version
finite element method and finite difference methods [16]. The techniques developed here should also have their analogues for that case, when subdomains are
distorted. The p-version finite element method is closely related to the spectral
methods [24, 41], and the techniques given here may be applicable to analyzing
and improving performance of domain decomposition methods (see, e.g., [42]).
The technique of analyzing the behavior of numerical methods by considering the
energy of subspaces in the one-sided energy seminorm of Chapter 4 appears to
have wide applicability.
BIBLIOGRAPHY
[1] George Arfken. Mathematical Methods for Physicists. Academic Press Inc.,
Orlando, third edition, 1985.
[2] I. Babuska and H. C. Elman. Some aspects of parallel implementation of the
finite element method on message passing architecture. J. Comput. Appl.
Math., 27:157-187, 1989.

[3] I. Babuska, H. C. Elman, and K. Markley. Parallel solutions of linear systems
arising from the p-version of finite elements by conjugate gradient method.
In preparation.

[4] I. Babuska, M. Griebel, and J. Pitkaranta. The problem of selecting the
shape functions for a p-type finite element. Int. J. Numer. Methods Engrg.,
28(8):1891-1908, 1989.

[5] I. Babuska, B. A. Szabo, and I. N. Katz. The p-version of the finite element
method. SIAM J. Numer. Anal., 18:515-545, 1981.
[6] Ivo Babuska. The p and h-p versions of the finite element method: The
state of the art. In D. L. Dwoyer, M. Y. Hussaini, and R. G. Voigt, editors,
Finite Elements, Theory and Applications. Springer Verlag, New York, 1988.

[7] Ivo Babuska, Alan W. Craig, Jan Mandel, and Juhani Pitkaranta. Efficient
preconditioning for the p-version finite element method in two dimensions.
SIAM J. Numer. Anal., to appear.

[8] Ivo Babuska and Manil Suri. The p and h-p versions of the finite element
method. International Conference on Spectral and High Order Methods for
Partial Differential Equations, Como, Italy, June 1989; Comput. Methods
Appl. Mech. Engrg., to appear.

[9] Ivo Babuska and Manil Suri. The h-p version of the finite element method
with quasiuniform meshes. Math. Model. Numer. Anal., 21:199-238, 1987.

[10] Ivo Babuska and Manil Suri. The optimal convergence rate of the p-version
of the finite element method. SIAM J. Numer. Anal., 24:750-776, 1987.
[11] Petter E. Bjørstad and Olof B. Widlund. Solving elliptic problems on regions
partitioned into substructures. In Garrett Birkhoff and Arthur Schoenstadt,
editors, Elliptic Problem Solvers II, pages 245-256, New York, 1984. Academic Press.

[12] Petter E. Bjørstad and Olof B. Widlund. Iterative methods for the solution of
elliptic problems on regions partitioned into substructures. SIAM J. Numer.
Anal., 23(6):1093-1120, 1986.

[13] Petter E. Bjørstad and Olof B. Widlund. To Overlap or Not to Overlap: A
Note on a Domain Decomposition Method for Elliptic Problems. SIAM J.
Sci. Stat. Comput., 10(5):1053-1061, 1989.
[14] James H. Bramble, Richard E. Ewing, Joseph E. Pasciak, and Alfred H.
Schatz. A preconditioning technique for the efficient solution of problems
with local grid refinement. Comput. Meth. Appl. Mech. Engin., 67:149-159,
1988.

[15] James H. Bramble, Joseph E. Pasciak, and Alfred H. Schatz. The construction of preconditioners for elliptic problems by substructuring, I. Math.
Comp., 47(175):103-134, 1986.

[16] James H. Bramble, Joseph E. Pasciak, and Alfred H. Schatz. An iterative
method for elliptic problems on regions partitioned into substructures. Math.
Comp., 46(173):361-369, 1986.

[17] James H. Bramble, Joseph E. Pasciak, and Alfred H. Schatz. The construction of preconditioners for elliptic problems by substructuring, II. Math.
Comp., 49:1-16, 1987.

[18] James H. Bramble, Joseph E. Pasciak, and Alfred H. Schatz. The construction of preconditioners for elliptic problems by substructuring, III. Math.
Comp., 51:415-430, 1988.

[19] James H. Bramble, Joseph E. Pasciak, and Alfred H. Schatz. The construction of preconditioners for elliptic problems by substructuring, IV. Math.
Comp., 53:1-24, 1989.
[20] G. Brussino, R. Herbin, Z. Christidis, and V. Sonnad. Parallel multilevel
finite element method with hierarchical basis functions. Manuscript, IBM
Kingston, NY.
[21] S. Foresti, G. Brussino, and V. Sonnad. Multilevel solution methods for the
p-version of finite elements, parallel implementations and comparison with
other solution methods. Technical Report KGN-137, IBM Kingston, NY,
1988.

[22] Gene H. Golub and Charles F. Van Loan. Matrix Computations. Johns
Hopkins University Press, second edition, 1989.

[23] D. Gottlieb and R. S. Hirsh. Parallel pseudospectral domain decomposition
techniques. ICASE Report 88-15, NASA Langley, February 1988.

[24] D. Gottlieb and S. A. Orszag. Numerical Analysis of Spectral Methods.
SIAM, Philadelphia, 1977.
[25] B. Guo and I. Babuska. The h-p version of the finite element method, I: The
basic approximation results; II: General results and application. Comput.
Mechanics, 1:21-41, 203-220, 1986.

[26] Thomas R. Hughes and Robert M. Ferencz. Fully vectorized EBE preconditioners for nonlinear solid mechanics: Applications to large-scale three-dimensional continuum, shell and contact/impact problems. In Roland
Glowinski, Gene H. Golub, Gerard A. Meurant, and Jacques Periaux, editors, First International Symposium on Domain Decomposition Methods for
Partial Differential Equations, Philadelphia, 1988. SIAM.
[27] Claes Johnson. Numerical Solutions of Partial Differential Equations by the
Finite Element Method. Cambridge University Press, Cambridge, 1987.
[28] S. G. Krein, Ju. I. Petunin, and E. M. Semenov. Interpolation of Linear
Operators. American Mathematical Society, Providence, 1982.
[29] Pierre-Louis Lions. On the Schwarz alternating method. I. In Roland Glowinski, Gene H. Golub, Gerard A. Meurant, and Jacques Periaux, editors, Domain Decomposition Methods for Partial Differential Equations, pages 1-42,
Philadelphia, 1988. SIAM.

[30] Yvon Maday and Rafael Munoz. Spectral element multigrid. II. Theoretical
justification. J. Sci. Comp., 3(4):323-354, Dec. 1988.
[31] Jan Mandel. On block diagonal and Schur complement preconditioning.
Numer. Math., to appear.
[32] Jan Mandel. A domain decomposition method for p-version finite elements
in three dimensions. In Proceedings of the 7th International Conference
on Finite Element Methods in Flow Problems, April 3-7, 1989, Huntsville,
Alabama. University of Alabama at Huntsville, 1989.

[33] Jan Mandel. Hierarchical preconditioning and partial orthogonalization for
the p-version finite element method. In Tony F. Chan, Roland Glowinski,
Jacques Periaux, and Olof B. Widlund, editors, Third International Symposium on Domain Decomposition Methods for Partial Differential Equations,
pages 141-156, Philadelphia, 1990. SIAM.

[34] Jan Mandel. Iterative solvers by substructuring for the p-version finite element method. Comput. Methods Appl. Mech. Engrg., 80:117-128, 1990.
International Conference on Spectral and High Order Methods for Partial
Differential Equations, Como, Italy, June 1989.

[35] Jan Mandel. Two-level domain decomposition preconditioning for the p-version finite element method in three dimensions. Int. J. Numer. Methods
Engrg., 29(5):1095-1108, 1990.

[36] Jan Mandel and G. Scott Lett. Domain decomposition preconditioning for
p-version finite elements with high aspect ratios. Applied Numer. Anal., to
appear.
[37] Jan Mandel and Steve McCormick. Iterative solution of elliptic equations
with refinement: The two-level case. In Tony Chan, Roland Glowinski,
Jacques Periaux, and Olof Widlund, editors, Domain Decomposition Methods, Philadelphia, 1989. SIAM.

[38] Jan Mandel and Steve McCormick. Iterative solution of elliptic equations
with refinement: The model multilevel case. In Tony Chan, Roland Glowinski, Jacques Periaux, and Olof Widlund, editors, Domain Decomposition
Methods, Philadelphia, 1989. SIAM.
[39] J. T. Marti. Introduction to Sobolev Spaces and Finite Element Solution of
Elliptic Boundary Value Problems. Academic Press, London, 1986.
[40] Joseph E. Pasciak. Domain decomposition preconditioners for elliptic problems in two and three dimensions: First approach. In Roland Glowinski,
Gene H. Golub, Gerard A. Meurant, and Jacques Periaux, editors, First International Symposium on Domain Decomposition Methods for Partial Differential Equations, Philadelphia, 1988. SIAM.

[41] A. T. Patera. A spectral element method for fluid dynamics: Laminar flow
in a channel expansion. J. Comput. Phys., 54:468-488, 1984.

[42] A. Quarteroni and G. Sacchi-Landriani. Domain decomposition preconditioners for the spectral collocation method. J. Sci. Comput., 3:45-76, 1988.

[43] E. M. Rønquist and A. T. Patera. Spectral element multigrid. I. Formulation
and numerical results. J. Sci. Comput., 2:389-406, 1987.

[44] E. M. Rønquist and A. T. Patera. Spectral element multigrid. I. Formulation
and numerical results. J. Sci. Comput., 2:389-406, 1987.
[45] H. A. Schwarz. Gesammelte Mathematische Abhandlungen, 2:133-143, 1890.
[46] Barry F. Smith. An optimal domain decomposition preconditioner for the
finite element solution of linear elasticity problems. Technical Report 482,
Department of Computer Science, Courant Institute, 1989.
[47] Barry F. Smith. Domain Decomposition Algorithms for the Partial Differential Equations of Linear Elasticity. PhD thesis, Courant Institute of
Mathematical Sciences, New York, 1990.
[48] G. Strang. Linear Algebra and Its Applications. Academic Press, New York,
1976.
[49] G. Strang and G. J. Fix. An Analysis of the Finite Element Method.
Prentice-Hall, New York, 1973.
[50] B. A. Szabo. PROBE Theoretical Manual. Noetic Technologies, St. Louis,
MO, 1985.
[51] Michael Vogelius. An analysis of the p-version of the finite element method
for nearly incompressible materials. Numer. Math., 41:39-53, 1983.
[52] Olof B. Widlund. Optimal iterative refinement methods. In Tony Chan,
Roland Glowinski, Jacques Periaux, and Olof Widlund, editors, Domain
Decomposition Methods, Philadelphia, 1989. SIAM.
[53] J. Wloka. Partial Differential Equations. Cambridge University Press, Cambridge, 1987.
[54] Harry Yserentant. Hierarchical bases give conjugate gradient type methods
a multigrid speed of convergence. Appl. Math. Comp., 19:347-358, 1986.

[55] Harry Yserentant. On the multilevel splitting of finite element spaces. Numer. Math., 49:379-412, 1986.

[56] O. C. Zienkiewicz and A. W. Craig. A posteriori error estimation and adaptive mesh refinement in the finite element method. In David F. Griffiths,
editor, The Mathematical Basis of Finite Element Methods, Oxford, 1984.
Clarendon Press.
PAGE 7
For my wife, Elizabeth
PAGE 8
CONTENTS 1 INTRODUCTION 2 PRELIMINARIES 2.1 Function Spaces on Thin Domains 2.1.1 Sobolev Spaces . . . 2.1.2 lnvariance and Equivalence of Seminorms 2.1.3 Short Side Trace Theorem 2.1.4 Polynomial Spaces on 2.2 The Discrete Form of the Model Problem 2.3 The Basis (Shape) Functions for the pVersion Finite Element Method ........................ 2.4 The Preconditioned Conjugate Gradient Algorithm 2.4.1 Spectral Bounds on the Error for PCG ... 3 THE BILINEAR PRECONDITIONER 3.1 Preconditioning by Subspace Decomposition 3.2 Formulation of the Bilinear Preconditioner 3.3 The First Condition Number Bound. VI 1 6 6 6 8 9 10 12 13 15 16 18 18 20 22
PAGE 9
4 THE ADAPTIVELINEAR PRECONDITIONER 4.1 Formulation ............. 4.2 The First Condition Number Bound. 26 26 27 4.3 A Different Condition Number Bound. . . . . . . . 31 5 IMPLEMENTATION, VARIATIONS AND CONCLUSIONS 45 vii
PAGE 10
List of Figures 2.1 Thin domain 3.1 Thin element K. .............. 3.2 Condition numbers for linear preconditioning. 8 20 25 4.1 Condition numbers for improved preconditioning. . . . . 30 viii
PAGE 11
CHAPTER 1 INTRODUCTION Elliptic partial differential equations appear frequently in physical models in science and engineering. Frequently, we must make use of digital computers to approximate their solutions. The numerical solution of partial differential equa tions requires two main choices. Firstly, the method of discretizing the equation must be chosen, such as finite difference, finite element, or finite spectrum meth ods. Secondly, the method of solving the resulting discrete system of equations must be selected. This investigation will deal primarily with the second aspect of this problem. We assume that the discretization technique is the pVersion finite element method, and concentrate on analyzing some recently formulated preconditioners for iteratively solving the resulting linear systems. Our model problem will be the linear boundary value problem 6u J, inn 'U 0, on an where n is a bounded, piecewise smooth domain in with boundary an.
PAGE 12
The standard weak form of this problem is u E V: I I = I I Vv E V n n where V = HJ(O). The solution to the weak form is known to be the same as that for the original boundary value problem, when u is sufficiently smooth. It is useful to be able to change this problem, using Green's Theorem, to the symmetric form: u E V: I I VvTVudgjdy =I I Vv E V. (1.1) n n In the Galerkin finite element method [49] we choose anndimensional subspace, S, of V, and approximate the weak form (1.1) as where uES: a(u,v)=f(v) ,VvES, a(u,v) I I(VvfVudzdy, n f(v) = I I fvdzdy. n (1.2) Henceforth, the bilinear form a(,) will be understood to be defined on S x S, and the functional, /,will be defined on S. Once a basis, {t, 2 ,J, for S has been selected, we transform problem (1.2) into a linear system of equations, Az = b, where A = (ai;),aii = a(i,;). We will study the solution of this system by the preconditioned conjugate gradients method (PCG). In particular, 2
PAGE 13
we shall analyze some effective preconditioners for this method. A preconditioner can be defined as a second symmetric bilinear form, c( ) on S x S, such that the system of the form c( w,
PAGE 14
finite difference and finite element meshes. The pversion finite element method, considered in our work here, is distinguished from the hversion finite element method and finite difference methods only in the way in which refinement is achieved. The pversion increases the degree of few finite elements, rather than increasing the number of elements. Thus, the idea of using one (or few) pversion elements as subdomains of domain decomposition algorithms seems natural. It is known experimentally that the effectiveness of these preconditioners is adversely effected by the aspect ratio ofthe subdomains [47, 34]. Few theoretical results have been published, except in the case of the Schwarztype methods, for which it is well established that the shape of the overlapping regions determines the rate of convergence of the method. This paper gives the results of previous work [36], and improves on the published theory, giving some results which explain the behavior of certain domain decomposition methods for the practical case of high aspect ratio subdomains. In Chapter 2, we shall define the space S, the basis (or shape) functions, { t/J1 tP2, ... tPn} used in the pVersion finite element method, describe the PCG algorithm, and state several useful properties of the algorithm. We will discuss the desirable properties of a good preconditioner. We shall also quote various results from the literature which will be needed later. In Chapter 3, we shall describe a recently introduced domain decomposition preconditioner for the PCG algorithm, which treats each pVersion element as a 4
PAGE 15
subdomain. We shall show that the performance of this preconditioner degrades under certain very realistic assumptions, namely when the subdomains have high aspect ratios. It will be shown, as was done in [36] that the condition number degrades as the square of the aspect ratio. In Chapter 4, we shall modify the preconditioner from Chapter 3, and show that the new preconditioner has superior spectral properties to the original. The condition number for this preconditioner was introduced in [36], where we showed that the condition number degrades no faster than the aspect ratio. Here we will show that the condition number is bounded independently of the aspect ratio. 5
PAGE 16
CHAPTER 2 PRELIMINARIES In this chapter we will introduce some notation, define our problem, define the Preconditioned Conjugate Gradients Algorithm, and state some results of functional analysis on thin domains. 2.1 Function Spaces on Thin Domains In this section, we define the norms and function spaces used in this paper, and state several propositions concerning functional spaces on a thin element. 2.1.1 Sobolev Spaces We first define some norms and seminorms. We shall work with these explicitly given seminorms and norms for various domains ( cf. [53, 39]). On a domain CJ C R 2 define the usual L2norm of a function u. on CJ, llu.IIL2(o), by
PAGE 17
On a segment s of 80, define = I u.2ds. II Now, consider the Sobolev seminorm [53] (2.1) and the trace seminorm u.( t) u.( r) ( 2 ) 1/2 lttht>. = ! ( ) drdt (2.2) The corresponding norms are given by llu.lli2(0) + Ju.li,o, JJu.JJi/2,11 JJu.JJL2(s) + Ju.Ji/2,s" We further denote and ilu.Jlk,oo,s = L ess sup IDo:u.J. lo:l:5k 8 and, finally, where vis defined to be equal to u. on 8 and extended by zero to all of an. We now define several spaces of functions. L2 ( 0) = H0 ( 0) is the space of (equivalence classes of) functions u. for which < oo. The space H,( 0) 7
PAGE 18
A s" f' = (1,<) kt:t: 3,t: v1 = ( 1, e) 82 v2 = (1,e) Figure 2.1: Thin domain Kt: is the space of (equivalence classes of) functions u for which llull:.o < oo, for l = 0, 1/2, 1. The space Cgo( 0) consists of infinitely differentiable functions with support contained in a subdomain o of 0. We denote by V the completion of C:f(O) with respect to the H1(0)norm. 2.1.2 lnvariance and Equivalence of Seminorms Consider the family of domains kt: = (1,1) x (e,e), 0 < e 1 (see Fig. 2.1). We shall use the associated family of bijections Fe ( :c, y) = ( :c, ey). We will also occasionally use the domains kt:t: = ( E:, E:) X ( E:, E:) and the corresponding mappings Fee(:c,y) = (e:c,ey). Let 81,e = {1} x (e,e) be one short side of Kt: as in Fig. 2.1. By a simple 8
PAGE 19
computation, we have for any u. E H1(Kt) the invariance of seminorms, (2.3) (2.4) Because we have the equivalence of seminorms, e112lul < lu o F11 < e112lu.l l,K1 E l,K. 1,K1 (2.5) It is easy to see that the constants in (2.5) are sharp by considering the functions u.( :z:, y) = :z: and u.( :z:, y) = y. Finally note that I I 'U = I I 'U FEl = :2 I I 'U (2.6) K1 K. K., 2.1.3 Short Side Trace Theorem With the definitions (2.1) and (2.2), we get a version of the trace theorem for the family of domains KE. Theorem 2.1 [36] There is a constant C independent of e such that for all e E (0, 1], lu.ll/2,s1, $ Clul1,k., \lu. E H1(KE) 9
PAGE 20
Proof: Let u E H1(Ke), ffi<,eu = 0. Put v = u o Fe. Then v E k,, and by (2.6), JJ 1(1 v = 0. From the trace theorem (see, e.g., [53]), and using (2.3) and (2.4) we get It remains to note that lul1,k ::; lul1,k, 0 2.1.4 Polynomial Spaces on Ke Let Pp( Ke) be the space of all real tensor product polynomials of de.gree p on Ke, that is, u E Pp(Ke) if and only if p u( y) = L c;; E i,j=O The following proposition, from [36], is concerned with bounding the I lt,i<, seminorm of a bilinear function from its vertex values. Fig. 2.1. Then 4 luol1,k.::; ce112(luo(v,)+ luo(v2)uo(v3)1) + e112E luo(vi)l, where C does not depend on u0 ore E (0, 1]. 10 i=l
PAGE 21
In [7, Corollary 6.3], it was proved that

$\|u\|_{0,\infty,K_1} \le C (1+\log^{1/2} p)\, |u|_{1,K_1}, \quad \forall u \in P_p(K_1),\ \iint_{K_1} u = 0.$   (2.7)

This result was extended to a family of thin domains, again in [36].

Theorem 2.2 There is a constant $C$ independent of $\varepsilon$ such that

$\|u\|_{0,\infty,K_\varepsilon} \le C \varepsilon^{-1/2} (1+\log^{1/2} p)\, |u|_{1,K_\varepsilon}$

for all $u \in P_p(K_\varepsilon)$ such that $\iint_{K_\varepsilon} u = 0$, and all $\varepsilon \in (0,1]$.

Proof: Write $u = v \circ F_\varepsilon^{-1}$, and use (2.5), (2.6), and (2.7). $\Box$

On the short sides $s_{1,\varepsilon}$ and $s_{3,\varepsilon}$ we have the following bound independent of $\varepsilon$.

Theorem 2.3 [36] There is a constant $C$ independent of $\varepsilon$ such that

$|u(x) - u(x')| \le C (1+\log^{1/2} p)\, |u|_{1,K_\varepsilon}, \quad \forall x, x' \in s_{i,\varepsilon},\ i = 1,3,$   (2.8)

for all $\varepsilon \in (0,1]$ and all $p$.

Proof: Let $\varepsilon = 1$. Because the inequality (2.8) is invariant to adding a constant function to $u$, suppose without loss of generality that $\iint_{K_1} u = 0$; then (2.8) follows from (2.7). For a general $\varepsilon \in (0,1]$, the result follows from the case $\varepsilon = 1$ by the scaling relations (2.3) and (2.4). $\Box$

We will also need the following discrete extension theorem.
Theorem 2.4 [36] Let $u \in P_p(K_\varepsilon)$ be such that $u(v_1) = u(v_4) = 0$. Then there exists a function $v \in P_p(K_\varepsilon)$ such that $v = u$ on $s_{1,\varepsilon}$, $v = 0$ on $\partial K_\varepsilon \setminus s_{1,\varepsilon}$, and

$|v|_{1,K_\varepsilon} \le C\, |u|_{1/2,s_{1,\varepsilon}}$   (2.9)

for all $\varepsilon \in (0,1]$, where the constant does not depend on $\varepsilon$ or $u$.

Proof: Let $\varepsilon = 1$. By [7, Theorem 6.6], we may put $v = u$ on $s_{1,1}$, $v = 0$ on $\partial K_1 \setminus s_{1,1}$, and extend by [7, Theorem 7.5] to a function $v \in P_p(K_1)$, to get (2.9) for $\varepsilon = 1$. The general case then follows using (2.3), (2.4), and (2.5). $\Box$

2.2 The Discrete Form of the Model Problem

We consider solution of (1.1) by the usual Galerkin finite element method [27], except that we use a family of reference elements defined by $K_\varepsilon = (-1,1) \times (-\varepsilon,\varepsilon)$, $\varepsilon \in (0,1]$, rather than a single reference element. Let $\Delta = \{K\}$ be a partition of $\Omega$ into nonoverlapping elements $K$, where each $K$ is the image of one reference element,

$K = G_K(K_{\varepsilon_K}), \quad K \in \Delta,$
where the $G_K$ are bijections such that

$|G_K|_{1,\infty,K_{\varepsilon_K}} \le C h_K,$   (2.10)

$|G_K^{-1}|_{1,\infty,K} \le C h_K^{-1},$   (2.11)

and

$\|J_{G_K}\|_{0,\infty,K_{\varepsilon_K}} \le C h_K^2,$   (2.12)

$\|J_{G_K}^{-1}\|_{0,\infty,K} \le C h_K^{-2},$   (2.13)

where $J_{G_K}$ is the Jacobian of $G_K$ and $h_K = \mathrm{diam}(K)$. Roughly, this means that each element $K$ is nearly a scaled copy of one of the reference elements. We then define the finite element space

$V_\Delta = \{u \in H_0^1(\Omega) : u|_K \circ G_K \in P_p(K_{\varepsilon_K}),\ \forall K \in \Delta\},$

and the discretization of (1.1) is

$u \in V_\Delta : \quad a(u,v) = f(v), \quad \forall v \in V_\Delta.$   (2.14)

2.3 The Basis (Shape) Functions for the p-Version Finite Element Method

In order to solve (2.14) numerically, we must select a basis for $V_\Delta$. A hierarchical basis, described in, e.g., [6], has three types of shape functions defined on a square reference element $K_1$:
- the nodal shape functions;
- the side shape functions;
- the internal shape functions.

The nodal shape functions, $N_i$, $i = 1,2,3,4$, are the usual bilinear functions determined by their values at the nodes (vertices) of the element. Now define $p_j$, $j = 1, \dots, p-1$, by

$p_j(t) = \int_{-1}^{t} l_j(\tau)\, d\tau,$

where $l_j$ is the $j$th Legendre polynomial; note that $p_j(\pm 1) = 0$ for $j \ge 1$. The side functions $S_{1,j}$, $j = 1, \dots, p-1$, associated with the side $\{(x,y) : x = -1,\ y \in (-1,1)\}$, are defined by

$S_{1,j}(x,y) = \frac{1-x}{2}\, p_j(y).$

The other side functions are defined similarly. The interior functions $B_{j,k}$, $j,k = 1, \dots, p-1$, are defined by

$B_{j,k}(x,y) = p_j(x)\, p_k(y).$

These basis functions will not be used further here, but they give a motivation for the formulation of the preconditioners in later chapters. Note that there are $(p+1)^2$ degrees of freedom on each element. The p-version finite element method achieves convergence by increasing the degree $p$ of the elements (see, e.g., [6] and the references cited therein). The more familiar (h-version) finite element method
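Assuming the integrated-Legendre construction sketched above (the exact normalization used in [6] may differ), the defining properties of the side modes, and the degree-of-freedom count, are easy to check numerically:

```python
import numpy as np
from numpy.polynomial import legendre as L

def integrated_legendre(j):
    """Side mode p_j(t) = integral_{-1}^t l_j(tau) dtau."""
    # integ(lbnd=-1) returns the antiderivative vanishing at t = -1.
    return L.Legendre.basis(j).integ(lbnd=-1.0)

p = 6
side_modes = [integrated_legendre(j) for j in range(1, p)]
# Each side mode vanishes at both endpoints, so it attaches to one edge only.
assert all(abs(m(-1.0)) < 1e-12 and abs(m(1.0)) < 1e-12 for m in side_modes)
# Degrees of freedom per element: 4 vertices + 4(p-1) side + (p-1)^2 interior.
assert 4 + 4*(p - 1) + (p - 1)**2 == (p + 1)**2
```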
achieves convergence by decreasing the size $h$ of the elements, with $p$ remaining fixed. Published theoretical results (e.g., [5, 25, 6]) indicate that solutions of high accuracy can be obtained with the p-version at lower computational cost than with the h-version. The p-version finite element method is quite similar to the spectral methods [24].

2.4 The Preconditioned Conjugate Gradient Algorithm

Once a basis $\{\varphi_1, \varphi_2, \dots, \varphi_n\}$ for $V_\Delta$ has been chosen, the solution to the discrete problem can be written

$u = \sum_{i=1}^{n} c_i \varphi_i.$

The discrete problem (2.14) can then be written as the linear system

$\sum_{j=1}^{n} a(\varphi_j, \varphi_i)\, c_j = f(\varphi_i), \quad i = 1, \dots, n.$   (2.15)

This system can be written in matrix form, $Ax = b$, with $A = (a_{ij})$, $a_{ij} = a(\varphi_j, \varphi_i)$, and $b = (f(\varphi_i))$. It is this problem which we will solve by the method of Preconditioned Conjugate Gradients (PCG) [22]. We must first define a preconditioning bilinear form, $c(\cdot,\cdot)$, also symmetric and positive definite, with corresponding matrix
$M = (m_{ij})$, $m_{ij} = c(\varphi_j, \varphi_i)$. We then define $x_0 = 0$, $r_0 = b$, $k = 1$, and perform

$z_{k-1} = M^{-1} r_{k-1}$   (2.16)

$\beta_k = z_{k-1}^T r_{k-1} / z_{k-2}^T r_{k-2}, \quad (\beta_1 = 0)$   (2.17)

$d_k = z_{k-1} + \beta_k d_{k-1}, \quad (d_1 = z_0)$   (2.18)

$\alpha_k = z_{k-1}^T r_{k-1} / d_k^T A d_k$   (2.19)

$x_k = x_{k-1} + \alpha_k d_k$   (2.20)

$r_k = r_{k-1} - \alpha_k A d_k$   (2.21)

until $r_k = 0$.

2.4.1 Spectral Bounds on the Error for PCG

Convergence ($r_k = 0$, using exact arithmetic) occurs in $k \le n$ steps. Usually, however, we treat this direct method as an iterative one, and stop with $k \ll n$.
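The recursion (2.16)–(2.21) can be transcribed directly into code. A minimal sketch, with a diagonal (Jacobi) preconditioner standing in for $M$ on a toy system (the matrices here are illustrative, not the thesis's finite element matrices):

```python
import numpy as np

def pcg(A, b, M_solve, tol=1e-10, maxiter=200):
    """Preconditioned CG following (2.16)-(2.21): x0 = 0, r0 = b."""
    x = np.zeros_like(b)
    r = b.copy()
    rz_old, d = None, None
    for _ in range(maxiter):
        if np.linalg.norm(r) <= tol:
            break
        z = M_solve(r)                  # (2.16)
        rz = r @ z
        if d is None:
            d = z                       # d1 = z0 (beta1 = 0)
        else:
            d = z + (rz / rz_old) * d   # (2.17), (2.18)
        alpha = rz / (d @ (A @ d))      # (2.19)
        x = x + alpha * d               # (2.20)
        r = r - alpha * (A @ d)         # (2.21)
        rz_old = rz
    return x

# Toy SPD system with a diagonal preconditioner.
A = np.array([[4.0, 1.0, 0.0], [1.0, 3.0, 1.0], [0.0, 1.0, 2.0]])
b = np.array([1.0, 2.0, 3.0])
x = pcg(A, b, lambda r: r / np.diag(A))
assert np.allclose(A @ x, b)
```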
When we have $m_1$, $m_2$ that satisfy

$m_1\, c(u,u) \le a(u,u) \le m_2\, c(u,u), \quad \forall u,$   (2.22)

and the ratio $m_2/m_1$ is the smallest possible, we call $\kappa = m_2/m_1$ the condition number of the preconditioned system, and write the condition number bound on the error,

$e_k \le 2 e_0 \left(\frac{\sqrt{\kappa}-1}{\sqrt{\kappa}+1}\right)^k,$

where $e_k$ denotes the energy norm of the error at step $k$. We call $c(\cdot,\cdot)$ the preconditioner.
CHAPTER 3
THE BILINEAR PRECONDITIONER

In this chapter, we give a general form for subspace decomposition preconditioners, as in Mandel [34]. We then define the first of two preconditioners, which uses the space of continuous piecewise bilinear functions as one of the subspaces. This preconditioner was first proposed in [7].

3.1 Preconditioning by Subspace Decomposition

We now describe a general principle for constructing the preconditioners in this paper. For each element $K$, define the local space

$V_K = \{u|_K : u \in V_\Delta\}.$

By integration in (1.2) over the elements $K$, we can write

$a(u,v) = \sum_K a_K(u,v), \qquad a_K(u,v) = \iint_K \nabla u^T \nabla v.$

Denote the energy norm on $K$ by $|u|_K = a_K(u,u)^{1/2}$. Now let $V_K$ be split into a direct sum of $n_K + 1$ subspaces,

$V_K = V_{K,0} \oplus \cdots \oplus V_{K,n_K},$
so every $v_K \in V_K$ can be written uniquely as the sum of $n_K + 1$ components,

$v_K = v_{K,0} + \cdots + v_{K,n_K}, \qquad v_{K,i} \in V_{K,i}.$   (3.1)

We then define

$c(u,v) = \sum_K c_K(u,v), \qquad c_K(u,v) = a_K(u_{K,0}, v_{K,0}) + \cdots + a_K(u_{K,n_K}, v_{K,n_K}).$   (3.2)

On each element, $c_K(\cdot,\cdot)$ is a block diagonal approximation to $a_K(\cdot,\cdot)$, with $n_K + 1$ blocks, each corresponding to one of the subspaces $V_{K,i}$ of $V_K$; $c(\cdot,\cdot)$ is the global assembly of the local contributions from each element, and retains some of the block diagonal structure of the local parts. Each of the blocks is better conditioned than the original global system, and the blocks can be solved in parallel. Obviously, if

$m_{K,1}\, c_K(u,u) \le a_K(u,u) \le m_{K,2}\, c_K(u,u), \quad \forall u \in V_K,$   (3.3)

for all $K \in \Delta$, then (2.22) holds with $m_1 = \min_K m_{K,1}$ and $m_2 = \max_K m_{K,2}$. Thus the condition number $\kappa = m_2/m_1$ can be bounded using local bounds from each element, independently of the number of elements. Let $m_{K,1}$ and $m_{K,2}$ be the largest and smallest constants, respectively, such that (3.3) holds, and denote the local condition number

$\kappa_K = \frac{m_{K,2}}{m_{K,1}}.$
Figure 3.1: Thin element K.

The following theorem, given in [33, Theorem 4.1], shows that to bound the condition number, it is necessary and sufficient to bound the decomposition (3.1) in energy. For other versions and related results, see [7, 29, 34, 52, 47].

Theorem 3.1 Let $b_K$ be the least number such that

$\sum_{i=0}^{n_K} |u_{K,i}|_K^2 \le b_K\, |u|_K^2, \quad \forall u \in V_K.$   (3.4)

Then

$b_K \le \kappa_K \le (n_K + 1)\, b_K.$

In this and the following chapters, we give specific examples of the abstract decomposition (3.2), and use Theorem 3.1 to estimate the condition number $\kappa_K$ for each preconditioner. The preconditioners differ only in the choice of the subspaces $V_{K,i}$.

3.2 Formulation of the Bilinear Preconditioner
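The relation between the decomposition bound $b_K$ and the condition number can be observed in a finite-dimensional analogue. The sketch below uses a coordinate-block splitting of a random SPD matrix (rather than the thesis's finite element subspaces), so that $c(u,u) = u^T M u$ with $M$ the block-diagonal part of $A$; the inequality $b \le \kappa \le n_b\, b$ then holds with $n_b$ the number of blocks:

```python
import numpy as np

rng = np.random.default_rng(0)
n, nb = 12, 3                       # 12 unknowns split into 3 subspaces of 4
X = rng.standard_normal((n, n))
A = X @ X.T + n * np.eye(n)         # SPD "stiffness" matrix
M = np.zeros_like(A)                # block-diagonal approximation, cf. (3.2)
for k in range(nb):
    s = slice(4*k, 4*(k+1))
    M[s, s] = A[s, s]

# b = least number with sum_i |u_i|_A^2 <= b |u|_A^2; here u^T M u <= b u^T A u.
b = np.linalg.eigvals(np.linalg.solve(A, M)).real.max()
# Condition number of the preconditioned system M^{-1} A.
g = np.linalg.eigvals(np.linalg.solve(M, A)).real
kappa = g.max() / g.min()
assert b <= kappa + 1e-8 and kappa <= nb * b + 1e-8
```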
Let $K = G_K(K_\varepsilon) \in \Delta$ be an element as in Fig. 3.1. Define the decomposition of the local space $V_K$ by specifying the associated projections of an arbitrary $u \in V_K$. Let

$u_K = \sum_{i=0}^{5} u_{K,i}, \qquad u_{K,i} \in V_{K,i},$   (3.5)

where

- $u_{K,0}$ is the mapped bilinear interpolant of $u$; that is, $u_{K,0}(v_i) = u_K(v_i)$ for $i = 1,2,3,4$;
- for $i = 1,\dots,4$, $u_{K,i} = u - u_{K,0}$ on the side $s_i$, $u_{K,i} = 0$ on $\partial K \setminus s_i$, $u_{K,i} \in V_K$, and $|u_{K,i}|_K$ is minimal among all such functions;
- $u_{K,5} = u_K - u_{K,0} - \cdots - u_{K,4}$ is the interior component, which vanishes on $\partial K$.

By summation over all elements, we can see that the preconditioner $c(u,v)$ can be written as

$c(u,v) = c(u_0, v_0) + \sum_e c(u_e, v_e) + \sum_K c(u_{K,5}, v_{K,5}),$

where

- $u_0$ is the piecewise mapped bilinear interpolant of $u$;
- $u_e$ is a function determined by its values on one edge $e$ of the finite element mesh, nonzero only on the two elements adjacent to $e$;
- $u_{K,5}$ is a function nonzero only on element $K$.

The system (2.16) thus decouples into one subproblem for each side $e$, one subproblem for the interior of each element $K$, and a subproblem whose matrix is the stiffness matrix of the discretization of (1.1) by bilinear elements. Thus, one can see the analogy between this preconditioner and the element shape functions described in Chapter 2. For variants of this method, see [7, 33]. The implementation of the method and comparisons with other methods from the literature will be given in Chapter 5.

3.3 The First Condition Number Bound

This preconditioner was proposed and studied in [7], where it was proved that $\kappa_K \le C(1+\log^2 p)$ in the case $\varepsilon = 1$. Here we show how $\kappa_K$ deteriorates for $\varepsilon \to 0$. This analysis first appeared in [36].

Theorem 3.2 For the preconditioner defined by the decomposition (3.5), it holds that

$C_1\, \varepsilon^{-2} \le \kappa_K \le C_2\, \varepsilon^{-2} (1+\log^2 p),$   (3.6)

where $C_1, C_2 > 0$ do not depend on $\varepsilon$ or $p$.

Proof: In view of the assumptions on the mapping $G_K$, we may assume without loss of generality that $K = K_\varepsilon$.
In view of Theorem 3.1, we need only to show that for the minimal $b_K$ in (3.4),

$b_K \le C \varepsilon^{-2} (1+\log^2 p),$   (3.7)

$b_K \ge C \varepsilon^{-2}.$   (3.8)

In [7], it was proved that (3.7) holds for $\varepsilon = 1$. We give the proof here for the sake of completeness. Let $u \in V_K$. Because adding a constant to $u$ will change only the component $u_{K,0}$ by the same constant, we may assume without loss of generality that $\iint_K u = 0$. It follows from Theorem 2.2 that $\|u_{K,0}\|_{0,\infty,K} \le C(1+\log^{1/2} p)\, |u|_{1,K}$, and hence from Lemma 2.1 that $|u_{K,0}|_{1,K} \le C(1+\log^{1/2} p)\, |u|_{1,K}$. Now $u(v_i) = u_{K,0}(v_i)$, $i = 1,2,3,4$, and from the Trace Theorem and Theorem 2.4, there exist functions $w_i \in P_p(K)$, $i = 1,2,3,4$, such that $w_i = u - u_{K,0}$ on $s_i$, $w_i = 0$ on $\partial K \setminus s_i$, and

$|w_i|_{1,K} \le C(1+\log^{1/2} p)\, |u|_{1,K}.$

By the definition of $u_{K,i}$, we have $|u_{K,i}|_{1,K} \le |w_i|_{1,K}$. The estimate of $u_{K,5}$ follows from the triangle inequality, and we may conclude that (3.4) holds with $b_K \le C(1+\log^2 p)$ for $\varepsilon = 1$. The general case of (3.7) follows using (2.5).

To prove (3.8), consider a polynomial $f(x)$ such that $f(-1) = f(1) = 0$, and let $u(x,y) = f(x)$ on $K$. Then the component $u_{K,0} = 0$, and

$|u|_{1,K}^2 = 2\varepsilon \int_{-1}^{1} |f'|^2\, dx = 2\varepsilon\, |f|_{1,(-1,1)}^2.$   (3.9)
Let $v \in H^1(K)$ be determined by the boundary conditions

$v = f$ on $s_2$, $\qquad v = 0$ on $\partial K \setminus s_2$,   (3.10)

and by the requirement that $|v|_{1,K}$ is minimal. Then the component $u_{K,2}$ satisfies

$|u_{K,2}|_{1,K} \ge |v|_{1,K},$   (3.11)

and $v$ is the solution of the boundary value problem $\Delta v = 0$ in $K$ with the boundary conditions (3.10). Expanding $f$ in a sine series,

$f(x) = \sum_{n=1}^{\infty} a_n \sin\frac{n\pi(x+1)}{2},$

we can write $v$ as

$v(x,y) = \sum_{n=1}^{\infty} a_n\, \frac{\sinh(n\pi(\varepsilon-y)/2)}{\sinh(n\pi\varepsilon)}\, \sin\frac{n\pi(x+1)}{2}.$

Now, using Green's theorem,

$|v|_{1,K}^2 = \iint_K \nabla v^T \nabla v = \int_{\partial K} v\, \frac{\partial v}{\partial n} = -\int_{s_2} v\, \frac{\partial v}{\partial y}\Big|_{y=-\varepsilon}.$

By orthogonality of the sine functions, it follows that

$|v|_{1,K}^2 = \sum_{n=1}^{\infty} \frac{n\pi}{2}\, a_n^2\, \frac{\cosh(n\pi\varepsilon)}{\sinh(n\pi\varepsilon)} \ge \sum_{n=1}^{\infty} \frac{a_n^2}{2\varepsilon} = \frac{1}{2\varepsilon}\, \|f\|_{0,(-1,1)}^2,$

using the inequality

$\frac{\cosh t}{\sinh t} > \frac{1}{t}, \quad t > 0,$

and Parseval's equality. Thus, using (3.9), we get

$|v|_{1,K}^2 \ge \frac{1}{4\varepsilon^2}\, \frac{\|f\|_{0,(-1,1)}^2}{|f|_{1,(-1,1)}^2}\, |u|_{1,K}^2.$
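The elementary inequality $\coth t > 1/t$ used in the mode-by-mode estimate above is easy to check numerically; a quick sketch (the sample values of $\varepsilon$ and the mode range are illustrative only):

```python
import numpy as np

# coth(t) = cosh(t)/sinh(t) exceeds 1/t for every t > 0; this is what turns
# the per-mode energy (n*pi/2) a_n^2 coth(n*pi*eps) into the 1/(2*eps) bound.
t = np.linspace(1e-3, 20.0, 2000)
assert np.all(np.cosh(t) / np.sinh(t) > 1.0 / t)

# Per-mode comparison for a small sample aspect parameter.
eps, n = 1e-3, np.arange(1, 50)
assert np.all((n * np.pi / 2) / np.tanh(n * np.pi * eps) >= 1.0 / (2 * eps))
```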
Choosing, e.g., $f(x) = (1-x)(1+x)$ and using (3.11), we get (3.8). $\Box$

[Figure 3.2: Condition numbers for linear preconditioning, plotted against the aspect ratio.]

Figure 3.2 shows the numerically calculated condition numbers for $p = 2$ to $16$ and $\varepsilon = 1$ to $\varepsilon = 2^{-13}$. We can see that the condition numbers indeed grow as $\varepsilon^{-2}$. The influence of $p$ on the condition number is seen only for relatively small aspect ratios; for very small $\varepsilon$, the influence of $\varepsilon$ prevails. The numerical results are in complete correspondence with Theorem 3.2.
CHAPTER 4
THE ADAPTIVE-LINEAR PRECONDITIONER

Examination of the proof of the lower bound in Theorem 3.2 shows that the degradation of the condition number is caused by decomposing shape functions from opposite long sides into different subspaces. This motivates a preconditioner which decomposes $P_p(K_\varepsilon)$ into fewer subspaces. The following preconditioner, proposed and analyzed in [36], has a much lower condition number than the Bilinear Preconditioner of the previous chapter. In this chapter, we define the preconditioner, state the results of [36], and give a new analysis which shows that the new preconditioner has a condition number which is bounded independently of $\varepsilon$.

4.1 Formulation

For $K = G_K(K_\varepsilon)$, we now define the decomposition of $V_K$ by

$u_K = \sum_{i=0}^{3} u_{K,i},$   (4.1)

where
- $u_{K,0}$ is the mapped bilinear interpolant of $u$: $u_{K,0}(v_i) = u_K(v_i)$, $i = 1,2,3,4$;
- $u_{K,1}$ and $u_{K,3}$ are defined by the boundary conditions $u_{K,i} = u - u_{K,0}$ on $s_i$, $u_{K,i} = 0$ on $\partial K \setminus s_i$, $i = 1,3$, and by the requirement that $u_{K,i} \in V_K$ and $|u_{K,i}|_K$ is the minimal possible;
- $u_{K,2} = u - u_{K,0} - u_{K,1} - u_{K,3}$.

4.2 The First Condition Number Bound

In [36] we showed that the condition number $\kappa_K$ deteriorates only as $\varepsilon^{-1}$, rather than as $\varepsilon^{-2}$. We give that proof here, primarily to contrast the result and the method with that of the next section.

Theorem 4.1 For the preconditioner defined by the decomposition (4.1), it holds that

$C_1\, \varepsilon^{-1} \le \kappa_K \le C_2\, \varepsilon^{-1} (1+\log^2 p),$

where $C_1 > 0$ and $C_2$ do not depend on $\varepsilon$ or $p$.

Proof: As in the proof of Theorem 3.2, we may assume without loss of generality that $K = K_\varepsilon$.
Because of Theorem 3.1, we only need to show that the minimal $b_K$ in (3.4) satisfies

$b_K \le C \varepsilon^{-1} (1+\log^2 p),$   (4.2)

$b_K \ge C \varepsilon^{-1}.$   (4.3)

Let $u \in V_K = P_p(K) = P_p(K_\varepsilon)$. Because adding a constant to $u$ will change only the component $u_{K,0}$ by the same constant, we may assume without loss of generality that $\iint_{K_\varepsilon} u = 0$. Then we have from Theorem 2.2

$\|u\|_{0,\infty,K} \le C \varepsilon^{-1/2} (1+\log^{1/2} p)\, |u|_{1,K},$   (4.4)

and from Theorem 2.3,

$|u(v_1) - u(v_4)| \le C (1+\log^{1/2} p)\, |u|_{1,K},$   (4.5)

$|u(v_2) - u(v_3)| \le C (1+\log^{1/2} p)\, |u|_{1,K}.$   (4.6)

It follows from (4.4)-(4.6) and Lemma 2.1 that

$|u_{K,0}|_{1,K} \le C \varepsilon^{-1/2} (1+\log^{1/2} p)\, |u|_{1,K}.$   (4.7)

Further, (4.5) and (4.6) give

$|u_{K,0}|_{1/2,s_i} \le C (1+\log^{1/2} p)\, |u|_{1,K}, \quad i = 1,3,$   (4.8)

because $u_{K,0}$ is linear on all sides, and because of the scaling invariance property (2.4). From (4.8) and Theorem 2.1, we have

$|u - u_{K,0}|_{1/2,s_i} \le C (1+\log^{1/2} p)\, |u|_{1,K}, \quad i = 1,3.$   (4.9)
Now Theorem 2.3 and the definition of $u_{K,0}$ imply that

$(u - u_{K,0})(v_i) = 0, \quad i = 1,2,3,4.$   (4.10)

Theorem 2.4 and (4.9), (4.10) now give the existence of functions $w_i \in P_p(K)$ such that $w_i = u - u_{K,0}$ on $s_i$, $w_i = 0$ on $\partial K \setminus s_i$, $i = 1,3$, and

$|w_i|_{1,K} \le C (1+\log^{1/2} p)\, |u|_{1,K}.$

By the definition of $u_{K,i}$, we have $|u_{K,i}|_{1,K} \le |w_i|_{1,K}$, which together with (4.7) and the triangle inequality concludes the proof of (4.2).

To prove (4.3), consider a function $u \in P_p(K_\varepsilon)$ whose vertex values alternate in sign around $K_\varepsilon$. Then the bilinear interpolant of $u$ is $u_{K,0}(x,y) = xy/\varepsilon$. By a simple computation, $|u_{K,0}|_{1,K}^2 \ge \frac{1}{3\varepsilon}$, and, if $p = [1/\varepsilon]$, $u$ can be chosen so that $|u|_{1,K}^2 \le C$, which proves (4.3). $\Box$
[Figure 4.1: Condition numbers for improved preconditioning, plotted against the aspect ratio $1/\varepsilon$.]

Fig. 4.1 summarizes numerically evaluated condition numbers $\kappa$ for $p = 2$ to $16$ and $\varepsilon = 1$ to $2^{-13}$. The computed results agree with the previous theoretical result, but seem to show that the upper bound (4.2) is not sharp for $\varepsilon \to 0$. In the following section we will improve this upper bound.
4.3 An Improved Condition Number Bound

It appears in Figure 4.1 that, for a given $p$, the condition number remains bounded as $\varepsilon \to 0$. This is indeed the case, and we now develop an upper bound for the condition number which grows as a function of $p$, but is independent of $\varepsilon$. We accomplish this via energy bounds as before, but this time in terms of the seminorms $\|\partial u/\partial x\|_{0,K_1}^2$ and $\|\partial u/\partial y\|_{0,K_1}^2$. The idea is motivated by the following operator interpolation theorem, in the spirit of [28].

Theorem 4.2 Let $T : V \to V$ be a linear operator on a space $V$, let $|u|_A$, $|u|_B$ be seminorms on $V$, and for $\varepsilon \in (0,1]$ let

$|u|_\varepsilon^2 = \varepsilon\, |u|_A^2 + \frac{1}{\varepsilon}\, |u|_B^2$

define a new seminorm on $V$, and define

$|T|_\varepsilon = \sup_{u \in V,\ |u|_\varepsilon \ne 0} \frac{|Tu|_\varepsilon}{|u|_\varepsilon}.$

Suppose that $T$ satisfies, for some $C_1, C_2 > 0$,

$|Tu|_A = 0$ if $|u|_A = 0$;
$|Tu|_B = 0$ if $|u|_B = 0$;
$|T|_1 \le C_1$;
$|Tu|_B \le C_2\, |u|_B, \quad \forall u \in V.$
Then, for any $\varepsilon \in (0,1]$,

$|T|_\varepsilon \le \max(C_1, C_2).$   (4.11)

Proof: Let $u \in V$, $|u|_\varepsilon \ne 0$, be given. Suppose first that $|u|_A = 0$. Then

$|Tu|_\varepsilon^2 = \frac{1}{\varepsilon}\, |Tu|_B^2 \le \frac{C_2^2}{\varepsilon}\, |u|_B^2 = C_2^2\, |u|_\varepsilon^2,$

so (4.11) holds in this case. Now suppose that $|u|_A \ne 0$. Then

$h(\varepsilon) = \frac{|Tu|_\varepsilon^2}{|u|_\varepsilon^2} = \frac{\varepsilon |Tu|_A^2 + \varepsilon^{-1} |Tu|_B^2}{\varepsilon |u|_A^2 + \varepsilon^{-1} |u|_B^2},$

and $h$ is a monotone continuous function of $\varepsilon$. Thus, since $h(1) \le C_1^2$ and

$\lim_{\varepsilon \to 0} h(\varepsilon) = \frac{|Tu|_B^2}{|u|_B^2} \le C_2^2,$

(4.11) follows. $\Box$

This result can be applied with $V = P_p(K_1)$,

$|u|_A^2 = \Big\|\frac{\partial u}{\partial x}\Big\|_{0,K_1}^2, \qquad |u|_B^2 = \Big\|\frac{\partial u}{\partial y}\Big\|_{0,K_1}^2,$

and with linear operators defined by, e.g., bilinear interpolation. Note that, by (2.3) and (2.4), if $v = u \circ F_\varepsilon^{-1} \in P_p(K_\varepsilon)$, then

$|v|_{1,K_\varepsilon}^2 = \varepsilon\, \Big\|\frac{\partial u}{\partial x}\Big\|_{0,K_1}^2 + \frac{1}{\varepsilon}\, \Big\|\frac{\partial u}{\partial y}\Big\|_{0,K_1}^2 = |u|_\varepsilon^2.$

Theorem 4.2 indicates that we should be able to bound such quantities
independently of $\varepsilon$, if we can bound interpolation, trace, and extension operators in terms of the seminorms $\|\partial u/\partial x\|_{0,K_1}$ and $\|\partial u/\partial y\|_{0,K_1}$; we will now develop such bounds. The following lemma makes use of the properties of Legendre polynomials (see, e.g., [1, 24]) to obtain a bound on the energy of bilinear interpolation in terms of the new seminorms.

Lemma 4.1 Let $u \in P_p(K_1)$, and suppose $u_0$ is the bilinear interpolant of $u$ on $K_1$. Then

$\Big\|\frac{\partial u_0}{\partial x}\Big\|_{0,K_1}^2 \le C p^2\, \Big\|\frac{\partial u}{\partial x}\Big\|_{0,K_1}^2,$   (4.12)

$\Big\|\frac{\partial u_0}{\partial y}\Big\|_{0,K_1}^2 \le C p^2\, \Big\|\frac{\partial u}{\partial y}\Big\|_{0,K_1}^2,$   (4.13)

for some $C > 0$, independent of $p$. These inequalities are sharp, up to a constant factor.

Proof: To prove (4.13), note that $u$ can be written uniquely in the form

$u(x,y) = \sum_{i=0}^{p} \sum_{j=0}^{p} a_{ij}\, l_i(x)\, L_j(y),$

where $l_i$ is the $i$th degree Legendre polynomial and

$L_j(y) = 1$ if $j = 0$; $\quad L_j(y) = y$ if $j = 1$; $\quad L_j(y) = \int_{-1}^{y} l_{j-1}(\eta)\, d\eta$, $j = 2, \dots, p$.
(Note: $L_j(-1) = L_j(1) = 0$ for $j = 2, \dots, p$.) We use the following properties of Legendre polynomials:

$\int_{-1}^{1} l_i(x)\, l_j(x)\, dx = \frac{2\, \delta_{ij}}{2i+1},$   (4.14)

$l_i(-1) = (-1)^i,$   (4.15)

$l_i(1) = 1,$   (4.16)

$l_i'(x) = \sum_{n=0,\ n+i\ \mathrm{odd}}^{i-1} (2n+1)\, l_n(x),$   (4.17)

where $\delta_{ij}$ is the Kronecker delta. From these it follows that

$\Big\|\frac{\partial u}{\partial y}\Big\|_{0,K_1}^2 = \iint_{K_1} \Big[\sum_{i=0}^{p} \sum_{j=1}^{p} a_{ij}\, l_i(x)\, L_j'(y)\Big]^2 dx\, dy$   (4.18)

$= \sum_{i=0}^{p} \sum_{j=1}^{p} \frac{4\, a_{ij}^2}{(2i+1)(2j-1)}$   (4.19)

$\ge \sum_{i=0}^{p} \frac{4\, a_{i1}^2}{2i+1}.$   (4.20)

It also follows that if $w_{ij}$ is the bilinear interpolant of $l_i(x) L_j(y)$, then $u_0 = \sum_{i,j=0}^{p} a_{ij} w_{ij}$. By direct application of the properties of Legendre polynomials and their primitives (see above),

$\frac{\partial w_{ij}}{\partial y} = 0$ if $j \ne 1$; $\quad = 1$ if $j = 1$, $i$ even; $\quad = x$ if $j = 1$, $i$ odd.

Thus,

$\frac{\partial u_0}{\partial y} = A + Bx,$
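The Legendre identities (4.14)-(4.17) quoted above can be spot-checked numerically, using NumPy's Legendre module and Gauss-Legendre quadrature (exact for the polynomial products involved):

```python
import numpy as np
from numpy.polynomial import legendre as L

x, w = np.polynomial.legendre.leggauss(16)   # exact up to degree 31
for i in range(8):
    li = L.Legendre.basis(i)
    # (4.15), (4.16): endpoint values.
    assert abs(li(-1.0) - (-1.0)**i) < 1e-12 and abs(li(1.0) - 1.0) < 1e-12
    # (4.14): orthogonality with norm 2/(2i+1).
    for j in range(8):
        ip = np.sum(w * li(x) * L.Legendre.basis(j)(x))
        assert abs(ip - (2.0/(2*i + 1) if i == j else 0.0)) < 1e-10
    # (4.17): l_i' = sum of (2n+1) l_n over n < i with n+i odd.
    rhs = sum((2*n + 1) * L.Legendre.basis(n) for n in range(i) if (n + i) % 2 == 1)
    t = np.linspace(-1, 1, 7)
    assert np.allclose(li.deriv()(t), rhs(t) if i > 0 else 0.0 * t)
```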
where

$A = \sum_{i=0,\ i\ \mathrm{even}}^{p} a_{i1}, \qquad B = \sum_{i=0,\ i\ \mathrm{odd}}^{p} a_{i1}.$

Then, using the Cauchy inequality for sums,

$\Big\|\frac{\partial u_0}{\partial y}\Big\|_{0,K_1}^2 = 4A^2 + \frac{4}{3}B^2 \le 4(A^2 + B^2) \le 4\, \frac{p+2}{2} \sum_{i=0}^{p} a_{i1}^2 \le \frac{(p+2)(2p+1)}{2} \sum_{i=0}^{p} \frac{4\, a_{i1}^2}{2i+1} \le \frac{(p+2)(2p+1)}{2}\, \Big\|\frac{\partial u}{\partial y}\Big\|_{0,K_1}^2,$

where the last inequality follows by (4.20). Thus (4.13) holds, and by interchanging the roles of $x$ and $y$, we get (4.12).

To see that the bound is sharp, let

$u(x,y) = \sum_{i=0}^{p} (2i+1)\, l_i(x)\, y,$

so that $u_0(x,y) = (A + Bx)\, y$, where

$A = \sum_{i=0,\ i\ \mathrm{even}}^{p} (2i+1),$
$B = \sum_{i=0,\ i\ \mathrm{odd}}^{p} (2i+1).$

Observe that $A \ge c_1 p^2$ and $B \ge c_1 p^2$ for some positive constant $c_1$. Then we have

$\Big\|\frac{\partial u}{\partial y}\Big\|_{0,K_1}^2 = 2 \sum_{i=0}^{p} (2i+1)^2\, \frac{2}{2i+1} = 4 \sum_{i=0}^{p} (2i+1) \le c_0\, p^2$

for some constant $c_0$, while

$\Big\|\frac{\partial u_0}{\partial y}\Big\|_{0,K_1}^2 = 4A^2 + \frac{4}{3}B^2 \ge 4 c_1^2\, p^4.$

Thus,

$\Big\|\frac{\partial u}{\partial y}\Big\|_{0,K_1}^2 \le \frac{c_0}{4 c_1^2\, p^2}\, \Big\|\frac{\partial u_0}{\partial y}\Big\|_{0,K_1}^2. \quad \Box$

The following is a discrete extension inequality, in the spirit of Theorem 2.4, but in terms of the new seminorms.

Lemma 4.2 (Extension from the side) Let $f$ be a polynomial of degree $p$. Then there is a $w \in P_p(K_1)$ such that

$w|_{s_1} = f, \qquad w|_{s_3} = 0,$

$\Big\|\frac{\partial w}{\partial y}\Big\|_{0,K_1}^2 \le \frac{C}{p}\, |f|_{1,s_1}^2,$

$\Big\|\frac{\partial w}{\partial x}\Big\|_{0,K_1}^2 \le C p^2\, \|f\|_{0,s_1}^2.$

If, in addition, $f(-1) = f(1) = 0$, then $w|_{s_2} = w|_{s_4} = 0$.
Proof: Let $g$ be defined on $(-1,1)$ by

$g(x) = \frac{(-1)^p}{2}\, [\, l_p(x) - l_{p-1}(x)\,],$

where $l_i$ is the $i$th Legendre polynomial. Then the following properties hold. First, (4.15) and (4.16) imply

$g(-1) = \frac{(-1)^p}{2}\, [(-1)^p - (-1)^{p-1}] = 1, \qquad g(1) = \frac{(-1)^p}{2}\, [1 - 1] = 0.$

From (4.14), we obtain

$\|g\|_{0,(-1,1)}^2 = \frac{1}{4}\Big[\frac{2}{2p+1} + \frac{2}{2p-1}\Big] \le \frac{2}{3p}.$

Applying (4.17), we get

$|g|_{1,(-1,1)}^2 \le \frac{1}{4}\Big[\sum_{n=0,\ n+p\ \mathrm{odd}}^{p-1} (4n+2) + \sum_{n=0,\ n+p-1\ \mathrm{odd}}^{p-2} (4n+2)\Big] \le 2p^2.$

Let $w(x,y) = g(x)\, f(y)$ on $K_1$. It follows immediately from these properties of $g$ that

$w(-1,y) = g(-1) f(y) = f(y), \qquad w(1,y) = g(1) f(y) = 0,$

and, if $f(-1) = f(1) = 0$, then $w(x,-1) = w(x,1) = 0$. Moreover,

$\Big\|\frac{\partial w}{\partial y}\Big\|_{0,K_1}^2 = \|g\|_{0,(-1,1)}^2\, |f|_{1,s_1}^2 \le \frac{2}{3p}\, |f|_{1,s_1}^2,$

$\Big\|\frac{\partial w}{\partial x}\Big\|_{0,K_1}^2 = |g|_{1,(-1,1)}^2\, \|f\|_{0,s_1}^2 \le 2p^2\, \|f\|_{0,s_1}^2. \quad \Box$

The following lemma is a trace inequality with respect to the new seminorms.
Lemma 4.3 (Trace Inequality) Let $v \in P_p(K_1)$, and let $f = v|_{s_1}$. Then

$|f|_{1,s_1}^2 \le C p^2\, \Big\|\frac{\partial v}{\partial y}\Big\|_{0,K_1}^2.$   (4.21)

Proof: We can expand

$v(x,y) = \sum_{i=0}^{p} \sum_{j=0}^{p} a_{ij}\, l_i(x)\, L_j(y),$

where $l_i$, $L_j$ are as in Lemma 4.1. Then

$\Big\|\frac{\partial v}{\partial y}\Big\|_{0,K_1}^2 = \sum_{i=0}^{p} \sum_{j=1}^{p} \frac{4\, a_{ij}^2}{(2i+1)(2j-1)}.$

Now,

$f(y) = \sum_{i=0}^{p} \sum_{j=0}^{p} a_{ij}\, (-1)^i\, L_j(y).$

Thus, we have

$|f|_{1,s_1}^2 = \sum_{j=1}^{p} \Big(\sum_{i=0}^{p} (-1)^i a_{ij}\Big)^2\, \frac{2}{2j-1}.$

Therefore, by Cauchy's inequality for sums,

$|f|_{1,s_1}^2 \le (p+1) \sum_{j=1}^{p} \sum_{i=0}^{p} a_{ij}^2\, \frac{2}{2j-1} \le \frac{(p+1)(2p+1)}{2}\, \Big\|\frac{\partial v}{\partial y}\Big\|_{0,K_1}^2. \quad \Box$
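The $p^2$ growth in the interpolation bound of Lemma 4.1 (shown sharp by the example in its proof) can be observed numerically. The sketch below evaluates the ratio of $\|\partial u_0/\partial y\|^2$ to $\|\partial u/\partial y\|^2$ for the sharpness example $u = \sum_i (2i+1)\, l_i(x)\, y$ and checks that it roughly quadruples when $p$ doubles:

```python
import numpy as np
from numpy.polynomial import legendre as L

def ratio(p):
    # du/dy = sum_i (2i+1) l_i(x); du0/dy = A + B x with A, B the
    # even/odd partial sums of (2i+1), as in the proof of Lemma 4.1.
    x, w = np.polynomial.legendre.leggauss(p + 2)
    dudy = sum((2*i + 1) * L.Legendre.basis(i)(x) for i in range(p + 1))
    A = sum(2*i + 1 for i in range(0, p + 1, 2))
    B = sum(2*i + 1 for i in range(1, p + 1, 2))
    du0dy = A + B * x
    return np.sum(w * du0dy**2) / np.sum(w * dudy**2)

r8, r16, r32 = ratio(8), ratio(16), ratio(32)
# Growth like p^2: doubling p roughly quadruples the ratio.
assert r16 > 2.5 * r8 and r32 > 2.5 * r16
```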
Lemma 4.4 Let $f$ be a polynomial of degree $p$ on $[-1,1]$, let $f_0$ be the linear interpolant of $f$, and let $g = f - f_0$. Then

$|g|_{1,(-1,1)} \le |f|_{1,(-1,1)}.$

Proof: Let $L_i$, $i = 0, \dots, p$, be defined as in Lemma 4.1. Then $f$ has the expansion

$f = \sum_{i=0}^{p} b_i\, L_i,$

and

$|f|_{1,(-1,1)}^2 = \sum_{i=1}^{p} b_i^2\, \frac{2}{2i-1}.$

Because $L_i(-1) = L_i(1) = 0$ for $i > 1$, $L_0(y) = 1$, and $L_1(y) = y$, we have

$f_0 = b_0 + b_1 y = b_0 L_0(y) + b_1 L_1(y).$

Therefore, $g = f - f_0$ has the expansion

$g = \sum_{i=2}^{p} b_i\, L_i,$

so

$|g|_{1,(-1,1)}^2 = \sum_{i=2}^{p} b_i^2\, \frac{2}{2i-1} \le |f|_{1,(-1,1)}^2. \quad \Box$

The following theorem gives a bound on the projections defined by the Adaptive-Linear decomposition.

Theorem 4.3 Let $u \in P_p(K_1)$, $v = u \circ F_\varepsilon^{-1}$, and let $v_i$, $i = 0,1,2,3$, be defined on $K_\varepsilon$ as follows:
- $v_0$ is the bilinear interpolant of $v$;
- $v_1$ minimizes $a_{K_\varepsilon}(\cdot,\cdot)$, subject to: $v_1|_{s_1} = (v - v_0)|_{s_1}$, $v_1|_{s_i} = 0$, $i = 2,3,4$;
- $v_2$ minimizes $a_{K_\varepsilon}(\cdot,\cdot)$, subject to: $v_2|_{s_3} = (v - v_0)|_{s_3}$, $v_2|_{s_i} = 0$, $i = 1,2,4$;
- $v_3 = v - v_0 - v_1 - v_2$.

Then we have

$|v_0|_{1,K_\varepsilon}^2 \le C_0\, p^2\, |v|_{1,K_\varepsilon}^2,$

$|v_i|_{1,K_\varepsilon}^2 \le C_1\, (\varepsilon p^4 + p)\, |v|_{1,K_\varepsilon}^2, \quad i = 1,2,$

$|v_3|_{1,K_\varepsilon}^2 \le C_2\, (\varepsilon p^4 + p^2)\, |v|_{1,K_\varepsilon}^2.$

Proof: For $v_0$ the bilinear interpolant of $v$ on $K_\varepsilon$, let $u_0$ be the bilinear interpolant of $u$ on $K_1$. Then $v_0 = u_0 \circ F_\varepsilon^{-1}$. Thus, by Lemma 4.1, we have

$|v_0|_{1,K_\varepsilon}^2 = \varepsilon\, \Big\|\frac{\partial u_0}{\partial x}\Big\|_{0,K_1}^2 + \frac{1}{\varepsilon}\, \Big\|\frac{\partial u_0}{\partial y}\Big\|_{0,K_1}^2$   (4.22)

$\le C p^2 \Big(\varepsilon\, \Big\|\frac{\partial u}{\partial x}\Big\|_{0,K_1}^2 + \frac{1}{\varepsilon}\, \Big\|\frac{\partial u}{\partial y}\Big\|_{0,K_1}^2\Big)$   (4.23)

$= C p^2\, |v|_{1,K_\varepsilon}^2.$   (4.24)

For $v_1$, let $u$, $u_0$ be as above, $g = u|_{s_1}$, and $f = (u - u_0)|_{s_1}$. By Lemma 4.2, there is a function $w \in P_p(K_1)$ such that $w \circ F_\varepsilon^{-1}$ satisfies the same boundary conditions as $v_1$, and

$\Big\|\frac{\partial w}{\partial x}\Big\|_{0,K_1}^2 \le C p^2\, \|f\|_{0,s_1}^2,$   (4.25)
$\Big\|\frac{\partial w}{\partial y}\Big\|_{0,K_1}^2 \le \frac{C}{p}\, |f|_{1,s_1}^2.$   (4.26)

So, by the minimality of $v_1$,

$|v_1|_{1,K_\varepsilon}^2 \le |w \circ F_\varepsilon^{-1}|_{1,K_\varepsilon}^2.$

By (2.3) and (2.4),

$|w \circ F_\varepsilon^{-1}|_{1,K_\varepsilon}^2 = \varepsilon\, \Big\|\frac{\partial w}{\partial x}\Big\|_{0,K_1}^2 + \frac{1}{\varepsilon}\, \Big\|\frac{\partial w}{\partial y}\Big\|_{0,K_1}^2 \le c p^2 \varepsilon\, \|f\|_{0,s_1}^2 + \frac{c}{p\varepsilon}\, |f|_{1,s_1}^2.$

Since $f(-1) = f(1) = 0$, $\|f\|_{0,s_1}^2 \le c' |f|_{1,s_1}^2$, and hence

$|v_1|_{1,K_\varepsilon}^2 \le \Big(c p^2 \varepsilon + \frac{c}{p\varepsilon}\Big)\, |f|_{1,s_1}^2.$

From Lemma 4.4, we have

$\Big(c p^2 \varepsilon + \frac{c}{p\varepsilon}\Big)\, |f|_{1,s_1}^2 \le \Big(c p^2 \varepsilon + \frac{c}{p\varepsilon}\Big)\, |g|_{1,s_1}^2.$

Now, by the trace inequality (4.21),

$\Big(c p^2 \varepsilon + \frac{c}{p\varepsilon}\Big)\, |g|_{1,s_1}^2 \le \Big(c p^4 \varepsilon + \frac{c p}{\varepsilon}\Big)\, \Big\|\frac{\partial u}{\partial y}\Big\|_{0,K_1}^2.$
Finally, by (2.3) and (2.4),

$\Big(c p^4 \varepsilon + \frac{c p}{\varepsilon}\Big)\, \Big\|\frac{\partial u}{\partial y}\Big\|_{0,K_1}^2 = (c p^4 \varepsilon^2 + c p)\, \frac{1}{\varepsilon}\, \Big\|\frac{\partial u}{\partial y}\Big\|_{0,K_1}^2 \le c_1\, (\varepsilon p^4 + p)\, |v|_{1,K_\varepsilon}^2,$

with $c$, $c'$, and $c_1$ constants, each independent of $\varepsilon$ and $p$, and the bound for $v_1$ is proved. Similarly, the bound for $v_2$ follows. For $v_3$ we have, by the triangle inequality, Lemma 4.1 and Lemma 4.2,

$|v_3|_{1,K_\varepsilon}^2 \le C_2\, (\varepsilon p^4 + p^2)\, |v|_{1,K_\varepsilon}^2. \quad \Box$

We cannot apply Theorem 4.2 directly to the results given above, because the minimum energy extension operator is a different operator for each $\varepsilon$. We can, however, obtain a bound on the condition number which is independent of $\varepsilon$.

Theorem 4.4 The preconditioner defined by the decomposition (4.1) has a condition number which is bounded by

$\kappa_K \le C\, p^2\, (1 + \log p),$   (4.27)

where $C$ is a positive constant, independent of $\varepsilon$ and $p$.
Proof: As stated before, we lose no generality in assuming that $K = K_\varepsilon$. It follows immediately from Theorem 4.3 and Theorem 3.1 that there is a $C_0 > 0$, independent of $p$ and $\varepsilon$, such that

$\kappa_K \le C_0\, (p^2 + \varepsilon p^4),$

from which we can immediately get the bound $\kappa_K \le C_0\, p^4$; but we can do much better. From Theorem 4.1, we know that there is a $C_1 > 0$ such that

$\kappa_K \le C_1\, \frac{1 + \log^2 p}{\varepsilon}.$

Thus, there is a positive constant $C_3$ such that

$\kappa_K \le C_3 \min\Big(p^2 + \varepsilon p^4,\ \frac{1 + \log^2 p}{\varepsilon}\Big).$

Since $p^2 + \varepsilon p^4$ is a monotonically increasing function of $\varepsilon > 0$, and $(1 + \log^2 p)/\varepsilon$ is monotonically decreasing, their minimum achieves its maximum value at the point $\varepsilon^* > 0$ where the two functions are equal. This occurs at

$\varepsilon^* = \frac{-1 + (5 + 4\log^2 p)^{1/2}}{2p^2}.$

Then we have

$p^2 + \varepsilon^* p^4 = p^2\, \frac{1 + (5 + 4\log^2 p)^{1/2}}{2} \le C_4\, p^2\, (1 + \log p)$
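The crossover computation in the proof above can be sketched numerically, with the constants suppressed ($C_3 = 1$): the bound $p^2 + \varepsilon p^4$ increases in $\varepsilon$, the bound $(1+\log^2 p)/\varepsilon$ decreases, and setting $t = \varepsilon p^2$ their crossing solves $t^2 + t - (1+\log^2 p) = 0$:

```python
import math

for p in (2, 4, 8, 16, 64):
    Lp = 1.0 + math.log(p)**2
    # Root of t^2 + t - Lp = 0 with t = eps * p^2 gives the crossover eps*.
    eps_star = (-1.0 + math.sqrt(1.0 + 4.0 * Lp)) / (2.0 * p**2)
    f1 = p**2 + eps_star * p**4        # increasing bound at eps*
    f2 = Lp / eps_star                 # decreasing bound at eps*
    assert abs(f1 - f2) < 1e-6 * f1    # the two bounds are equal at eps*
    # The crossover value is O(p^2 (1 + log p)), uniformly in eps.
    assert f1 <= 4.0 * p**2 * (1.0 + math.log(p))
```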
for some positive constant $C_4$. Therefore, $\kappa_K \le C\, p^2 (1 + \log p)$. $\Box$

The results of this chapter combine to give the condition number bound

$\kappa_K \le C \min\Big(\frac{1 + \log^2 p}{\varepsilon},\ p^2 (1 + \log p)\Big),$

which more fully predicts the behavior of the condition numbers given in Figure 4.1.
CHAPTER 5
IMPLEMENTATION, VARIATIONS AND CONCLUSIONS

We propose a hybrid algorithm which combines the use of the preconditioners from Chapters 3 and 4. The choice of preconditioner can be made on each element independently: precondition nearly square elements with the simpler Bilinear Preconditioner, and precondition the more distorted elements with the more effective Adaptive-Linear Preconditioner. With a certain ordering of the hierarchical basis functions using the shape functions from Chapter 2, the matrix $A$ corresponding to this problem has the block form

$A = \begin{pmatrix} A_{11} & A_{12} & A_{13} \\ A_{21} & A_{22} & A_{23} \\ A_{31} & A_{32} & A_{33} \end{pmatrix}.$

The $A_{11}$ block corresponds to the stiffness matrix of the original problem, discretized with bilinear elements. The $A_{22}$ block is a block seven-diagonal system, with blocks of order $p-1$, each corresponding to the edge basis functions of a single edge, shared between two elements. The $A_{33}$ block is a block diagonal system, with blocks of order $(p-1)^2$, each corresponding to the interior basis functions of a
single element. Instead of the original shape functions, we change the side shape functions to satisfy the minimum energy condition of Chapter 3. This change of basis can be performed on each element independently. The basis functions are no longer hierarchical, so one of the advantages of p-refinement (e.g., [56]) is lost. The global matrix is then assembled in the usual way, giving the following matrix in block form:

$\begin{pmatrix} A_{11} & A_{12} & A_{13} \\ A_{21} & A_{22} & 0 \\ A_{31} & 0 & A_{33} \end{pmatrix}.$

The matrix $M$ corresponding to the preconditioner from Chapter 3 has the block form

$M = \begin{pmatrix} M_{11} & & \\ & M_{22} & \\ & & M_{33} \end{pmatrix}.$

The blocks correspond to the original problem: $M_{11} = A_{11}$; $M_{22}$ decouples into small 'edge' problems; $M_{33} = A_{33}$ is block diagonal, with diagonal blocks corresponding to the interior functions of each element. This preconditioner is different from that proposed in [16], in that the interior functions are not eliminated, and the vertex basis functions are not changed at all. This allows the global (coarse grid) system to be solved a priori, or in parallel. This is simpler than the previous algorithm, but does not generalize well to
three dimensions, because of reasons cited in [7], namely that the energy of the vertex functions grows very quickly as a function of $p$. The preconditioners in [34, 7, 33, 36] advocate elimination of the unknowns, rather than orthogonalization. The method in [31] uses a symbolic elimination, with iteration on a reduced system. This is similar to the methods developed in [46, 47], which use variations of the Schwarz algorithm on the reduced system. The methods presented here bear some similarity to the element-by-element techniques of Hughes et al. (see, e.g., [26]), but the EBE methods appear to require many more iterations. The interior unknowns of the Adaptive-Linear Preconditioner are coupled to the edge functions corresponding to the long sides of the corresponding element, so the procedure is not quite as simple. By eliminating the interior variables from the edge functions, one can iterate on the reduced system. The interior functions can be eliminated, by substructuring, on each element independently. One obtains in this case a subproblem corresponding to a chain of thin elements connected on long sides. Adjacent chains (and nearly square elements) are decoupled, however, and thus can be solved in parallel. Thus, we have an algorithm which adapts to distorted elements, and which still decouples the original problem into much smaller subproblems. As was noted in [7], the method of Chapter 3 does not generalize well to three dimensions, because the condition number grows too quickly as a function of $p$. Other methods have been proposed for the 3-dimensional case [7, 35, 32] for
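The element-wise elimination of interior unknowns by substructuring mentioned above amounts to forming, on each element independently, the Schur complement of the interior block. A minimal sketch (the boundary/interior partitioning and the tiny matrix are generic illustrations, not the thesis's actual element matrices):

```python
import numpy as np

def condense_interior(K, n_b):
    """Schur complement of the interior block of an element matrix K,
    partitioned as [[K_bb, K_bi], [K_ib, K_ii]] with n_b boundary
    (vertex/edge) unknowns; interior unknowns are eliminated locally."""
    K_bb, K_bi = K[:n_b, :n_b], K[:n_b, n_b:]
    K_ib, K_ii = K[n_b:, :n_b], K[n_b:, n_b:]
    return K_bb - K_bi @ np.linalg.solve(K_ii, K_ib)

# Tiny SPD example: 2 boundary unknowns, 1 interior unknown.
K = np.array([[4.0, 1.0, 0.5],
              [1.0, 3.0, 0.2],
              [0.5, 0.2, 2.0]])
S = condense_interior(K, 2)
# The Schur complement of an SPD matrix is again SPD.
assert S.shape == (2, 2)
assert np.all(np.linalg.eigvalsh(S) > 0)
```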
which the idea of combining subspaces for distorted elements appears promising; the analysis techniques of Chapters 3 and 4 should be applicable to those cases as well. The method of Chapter 3 was based upon a similar one for the h-version finite element method and finite difference methods [16]. The techniques developed here should also have their analogues for that case, when subdomains are distorted. The p-version finite element method is closely related to the spectral methods [24, 41], and the techniques given here may be applicable to analyzing and improving the performance of domain decomposition methods (see, e.g., [42]). The technique of analyzing the behavior of numerical methods by considering the energy of subspaces in the one-sided energy seminorm of Chapter 4 appears to have wide applicability.
BIBLIOGRAPHY

[1] George Arfken. Mathematical Methods for Physicists. Academic Press, Orlando, third edition, 1985.

[2] I. Babuska and H. C. Elman. Some aspects of parallel implementation of the finite element method on message passing architecture. J. Comput. Appl. Math., 27:157-187, 1989.

[3] I. Babuska, H. C. Elman, and K. Markley. Parallel solutions of linear systems arising from the p-version of finite elements by the conjugate gradient method. In preparation.

[4] I. Babuska, M. Griebel, and J. Pitkaranta. The problem of selecting the shape functions for a p-type finite element. Int. J. Numer. Methods Engrg., 28(8):1891-1908, 1989.

[5] I. Babuska, B. A. Szabo, and I. N. Katz. The p-version of the finite element method. SIAM J. Numer. Anal., 18:515-545, 1981.

[6] Ivo Babuska. The p and h-p versions of the finite element method: The state of the art. In D. L. Dwoyer, M. Y. Hussaini, and R. G. Voigt, editors,
Finite Elements, Theory and Applications. Springer-Verlag, New York, 1988.

[7] Ivo Babuska, Alan W. Craig, Jan Mandel, and Juhani Pitkaranta. Efficient preconditioning for the p-version finite element method in two dimensions. SIAM J. Numer. Anal., to appear.

[8] Ivo Babuska and Manil Suri. The p- and h-p versions of the finite element method. International Conference on Spectral and High Order Methods for Partial Differential Equations, Como, Italy, June 1989; Comput. Methods Appl. Mech. Engrg., to appear.

[9] Ivo Babuska and Manil Suri. The h-p version of the finite element method with quasiuniform meshes. Math. Model. Numer. Anal., 21:199-238, 1987.

[10] Ivo Babuska and Manil Suri. The optimal convergence rate of the p-version of the finite element method. SIAM J. Numer. Anal., 24:750-776, 1987.

[11] Petter E. Bjørstad and Olof B. Widlund. Solving elliptic problems on regions partitioned into substructures. In Garrett Birkhoff and Arthur Schoenstadt, editors, Elliptic Problem Solvers II, pages 245-256, New York, 1984. Academic Press.

[12] Petter E. Bjørstad and Olof B. Widlund. Iterative methods for the solution of elliptic problems on regions partitioned into substructures. SIAM J. Numer. Anal., 23(6):1093-1120, 1986.
[13] Petter E. Bjørstad and Olof B. Widlund. To Overlap or Not to Overlap: A Note on a Domain Decomposition Method for Elliptic Problems. SIAM J. Sci. Stat. Comput., 10(5):1053-1061, 1989.

[14] James H. Bramble, Richard E. Ewing, Joseph E. Pasciak, and Alfred H. Schatz. A preconditioning technique for the efficient solution of problems with local grid refinement. Comput. Meth. Appl. Mech. Engin., 67:149-159, 1988.

[15] James H. Bramble, Joseph E. Pasciak, and Alfred H. Schatz. The construction of preconditioners for elliptic problems by substructuring, I. Math. Comp., 47(175):103-134, 1986.

[16] James H. Bramble, Joseph E. Pasciak, and Alfred H. Schatz. An iterative method for elliptic problems on regions partitioned into substructures. Math. Comp., 46(173):361-369, 1986.

[17] James H. Bramble, Joseph E. Pasciak, and Alfred H. Schatz. The construction of preconditioners for elliptic problems by substructuring, II. Math. Comp., 49:1-16, 1987.

[18] James H. Bramble, Joseph E. Pasciak, and Alfred H. Schatz. The construction of preconditioners for elliptic problems by substructuring, III. Math. Comp., 51:415-430, 1988.
[19] James H. Bramble, Joseph E. Pasciak, and Alfred H. Schatz. The construction of preconditioners for elliptic problems by substructuring, IV. Math. Comp., 53:1-24, 1989.

[20] G. Brussino, R. Herbin, Z. Christidis, and V. Sonnad. Parallel multilevel finite element method with hierarchical basis functions. Manuscript, IBM Kingston, NY.

[21] S. Foresti, G. Brussino, and V. Sonnad. Multilevel solution methods for the p-version of finite elements, parallel implementations and comparison with other solution methods. Technical Report KGN-137, IBM Kingston, NY, 1988.

[22] Gene H. Golub and Charles F. Van Loan. Matrix Computations. Johns Hopkins University Press, second edition, 1989.

[23] D. Gottlieb and R. S. Hirsh. Parallel pseudospectral domain decomposition techniques. ICASE Report 88-15, NASA Langley, February 1988.

[24] D. Gottlieb and S. A. Orszag. Numerical Analysis of Spectral Methods. SIAM, Philadelphia, 1977.

[25] B. Guo and I. Babuska. The h-p version of the finite element method, I: The basic approximation results; II: General results and application. Comput. Mechanics, 1:21-41, 203-220, 1986.
[26] Thomas R. Hughes and Robert M. Ferencz. Fully vectorized ebe precon ditioners for nonlinear solid mechanics: Applications to largescale three dimensional continuum, shell and contact/impact problems. In Roland Glowinski, Gene H. Golub, Gerard A. Meurant, and Jacques Periaux, edi tors, First International Symposium on Domain Decomposition Methods for Partial Differential Equations, Philadelphia, 1988. SIAM. [27) Claes Johnson. Numerical Solutions of Partial Differential Equations by the Finite Element Method. Cambridge University Press, Cambridge, 1987. [28] S. G. Krein, Ju. I. Petunin, and E. M. Semenov. Interpolation of Linear Operators. American Mathematical Society, Providence, 1982. [29) Pierre Louis Lions. On the Schwarz alternating method. I. In Roland Glowin ski, Gene H. Golub, Gerard A. Meurant, and Jacques Periaux, editors, Do main Decomposition Methods for Partial Differential Equations, pages 142, Philadelphia, 1988. SIAM. [30] Yvon Maday and Rafael Muiioz. Spectral element multigrid. II. Theoretical justification. J. Sci. Comp., 3(4):323354, DEC 1988. [31] Jan Mandel. On block diagonal and Schur complement preconditioning. Numer. Math., to appear. 53
[32] Jan Mandel. A domain decomposition method for p-version finite elements in three dimensions. In Proceedings of the 7th International Conference on Finite Element Methods in Flow Problems, April 3-7, 1989, Huntsville, Alabama. University of Alabama at Huntsville, 1989.

[33] Jan Mandel. Hierarchical preconditioning and partial orthogonalization for the p-version finite element method. In Tony Chan, Roland Glowinski, Jacques Periaux, and Olof B. Widlund, editors, Third International Symposium on Domain Decomposition Methods for Partial Differential Equations, pages 141-156, Philadelphia, 1990. SIAM.

[34] Jan Mandel. Iterative solvers by substructuring for the p-version finite element method. Comput. Methods Appl. Mech. Engrg., 80:117-128, 1990. International Conference on Spectral and High Order Methods for Partial Differential Equations, Como, Italy, June 1989.

[35] Jan Mandel. Two-level domain decomposition preconditioning for the p-version finite element method in three dimensions. Int. J. Numer. Methods Engrg., 29(5):1095-1108, 1990.

[36] Jan Mandel and G. Scott Lett. Domain decomposition preconditioning for p-version finite elements with high aspect ratios. Appl. Numer. Math., to appear.
[37] Jan Mandel and Steve McCormick. Iterative solution of elliptic equations with refinement: The two-level case. In Tony Chan, Roland Glowinski, Jacques Periaux, and Olof Widlund, editors, Domain Decomposition Methods, Philadelphia, 1989. SIAM.

[38] Jan Mandel and Steve McCormick. Iterative solution of elliptic equations with refinement: The model multilevel case. In Tony Chan, Roland Glowinski, Jacques Periaux, and Olof Widlund, editors, Domain Decomposition Methods, Philadelphia, 1989. SIAM.

[39] J. T. Marti. Introduction to Sobolev Spaces and Finite Element Solution of Elliptic Boundary Value Problems. Academic Press, London, 1986.

[40] Joseph E. Pasciak. Domain decomposition preconditioners for elliptic problems in two and three dimensions: First approach. In Roland Glowinski, Gene H. Golub, Gerard A. Meurant, and Jacques Periaux, editors, First International Symposium on Domain Decomposition Methods for Partial Differential Equations, Philadelphia, 1988. SIAM.

[41] A. T. Patera. A spectral element method for fluid dynamics: Laminar flow in a channel expansion. J. Comput. Phys., 54:468-488, 1984.

[42] A. Quarteroni and G. Sacchi-Landriani. Domain decomposition preconditioners for the spectral collocation method. J. Sci. Comput., 3:45-76, 1988.
[43] E. M. Rønquist and A. T. Patera. Spectral element multigrid. I. Formulation and numerical results. J. Sci. Comput., 2:389-406, 1987.

[45] H. A. Schwarz. Gesammelte Mathematische Abhandlungen, 2:133-143, 1890.

[46] Barry F. Smith. An optimal domain decomposition preconditioner for the finite element solution of linear elasticity problems. Technical Report 482, Department of Computer Science, Courant Institute, 1989.

[47] Barry F. Smith. Domain Decomposition Algorithms for the Partial Differential Equations of Linear Elasticity. PhD thesis, Courant Institute of Mathematical Sciences, New York, 1990.

[48] G. Strang. Linear Algebra and Its Applications. Academic Press, New York, 1976.

[49] G. Strang and G. J. Fix. An Analysis of the Finite Element Method. Prentice-Hall, New York, 1973.

[50] B. A. Szabo. PROBE Theoretical Manual. Noetic Technologies, St. Louis, MO, 1985.
[51] Michael Vogelius. An analysis of the p-version of the finite element method for nearly incompressible materials. Numer. Math., 41:39-53, 1983.

[52] Olof B. Widlund. Optimal iterative refinement methods. In Tony Chan, Roland Glowinski, Jacques Periaux, and Olof Widlund, editors, Domain Decomposition Methods, Philadelphia, 1989. SIAM.

[53] J. Wloka. Partial Differential Equations. Cambridge University Press, Cambridge, 1987.

[54] Harry Yserentant. Hierarchical bases give conjugate gradient type methods a multigrid speed of convergence. Appl. Math. Comp., 19:347-358, 1986.

[55] Harry Yserentant. On the multilevel splitting of finite element spaces. Numer. Math., 49:379-412, 1986.

[56] O. C. Zienkiewicz and A. W. Craig. A posteriori error estimation and adaptive mesh refinement in the finite element method. In David F. Griffiths, editor, The Mathematical Basis of Finite Element Methods, Oxford, 1984. Clarendon Press.
