Fourier methods of image reconstruction

Material Information

Title: Fourier methods of image reconstruction
Creator: Henson, Van Emden
Publication Date:
Physical Description:
xii, 267 leaves : illustrations, photographs ; 29 cm


Subjects / Keywords:
Fourier transformations (lcsh)
Radon transforms (lcsh)
Fourier transformations (fast)
Radon transforms (fast)
bibliography (marcgt)
theses (marcgt)
non-fiction (marcgt)


Includes bibliographical references (leaves 264-267).
General Note:
Submitted in partial fulfillment of the requirements for the degree, Doctor of Philosophy, Department of Mathematics.
Statement of Responsibility:
by Van Emden Henson.

Record Information

Source Institution:
University of Colorado Denver
Holding Location:
Auraria Library
Rights Management:
All applicable rights reserved by the source institution and holding location.
Resource Identifier:
23451160 ( OCLC )
LD1190.L62 1990d .H46 ( lcc )




FOURIER METHODS OF IMAGE RECONSTRUCTION

by

Van Emden Henson
B.S., University of Utah, 1979
M.S., University of Colorado at Denver, 1988

A thesis submitted to the Faculty of the Graduate School of the University of Colorado in partial fulfillment of the requirements for the degree of Doctor of Philosophy, Department of Mathematics, 1990


This thesis for the Doctor of Philosophy degree by Van Emden Henson has been approved for the Department of Mathematics by

William L. Briggs
William H. Clohessy
Thomas A. Manteuffel
Stephen F. McCormick
Thomas F. Russell

Date


Henson, Van Emden (Ph.D., Applied Mathematics)
Fourier Methods of Image Reconstruction
Thesis directed by Professor William L. Briggs

The set of all line integrals of a function is known as its Radon transform. This thesis is concerned with methods of recovering a function u(x, y) from its Radon transform. Two algorithms are developed to invert the Radon transform by taking advantage of the Central Slice theorem, which describes an important relationship between the Fourier transform of u(x, y) and the Fourier transform of its Radon transform. The first algorithm is a straightforward modification of a standard method, and differs from it in the interpolation operators that are used. Standard methods use expensive interpolators based on sampling theory. In this thesis, the interpolation is performed by a simple Lagrangian polynomial interpolation. A detailed error analysis is performed by examining the algorithm as a cascade of operators. As a result of this analysis, the nature of the errors committed in each stage of the method, and the relationships between the various types of errors, are detailed. This leads to an improved algorithm, which is then examined analytically and through numerical experiments. The second algorithm is based upon a direct inversion of the Fourier transform in polar coordinates. It is developed as an Anterpolated Discrete Fourier


transform (ADFT), a method for computing the discrete Fourier transform of a data set on an irregularly spaced grid. The ADFT is first developed for the general one-dimensional case, and an error analysis is performed. This leads to an optimization problem for selecting parameters of the ADFT, which is then solved. The method is then extended to the two-dimensional image reconstruction problem, implemented, and analyzed. Numerical experiments are presented, and compared to those for the first algorithm. Finally, a formula is developed for the inversion of the Radon transform of a function expanded in a translational basis. This leads to a hierarchy of Central Slice theorems, and opens the possibility that a priori knowledge about the unknown function can be used to tailor a basis for a given problem.

The form and content of this abstract are approved. I recommend its publication.

Signed
William L. Briggs


For Jennifer Christine


CONTENTS

List of Figures    ix
Acknowledgements    xi
1 INTRODUCTION    1
2 THE RADON TRANSFORM AND ITS PROPERTIES    7
2.1 The Radon Transform    9
2.1.1 An Example: The Characteristic Function of a Disk    12
2.2 Properties of the Radon Transform    13
2.2.1 Existence    14
2.2.2 Linearity    16
2.2.3 Shifting Property    17
2.2.4 Evenness and the Scaling Property    18
2.2.5 Linear Transformation of Variables    19
2.3 The Central Slice Theorem    22
2.4 Finding Radon Transforms with the Central Slice Theorem    24
3 A COLLECTION OF IMPORTANT PROPERTIES OF THE DISCRETE FOURIER TRANSFORM    27
3.1 The Discrete Fourier Transform    27
3.1.1 The Reciprocity Relation    30
3.2 The Relationship between the Discrete and Continuous Transforms    31
3.3 The Inverse Discrete Fourier Transform    32
3.4 Refining the Frequency Grid    34
3.5 Getting the FT-IFT Pair from the DFT-IDFT Pair    37
3.6 Error in Using the DFT to Approximate the FT    40
3.6.1 Trapezoidal Rule Error    41
3.6.2 Error by Summation    43
3.6.3 Uniform Bounds for the DFT Error    45
3.6.4 Numerical Examples    53
3.7 Error of the Inverse DFT    56
3.8 The Grids    60
3.9 The Grid Reciprocity Relations    63
3.10 DFTs on Irregular Sets: The Anterpolated DFT    64
3.10.1 Operation Count for the ADFT    69
3.10.2 Error Analysis for the Anterpolated DFT    72
3.10.3 Selection of p and N    79
3.10.4 An ADFT Example    86


3.10.5 Some Open Questions about the ADFT    90
4 FOURIER TRANSFORM INVERSION METHODS    92
4.1 The FT Inversion Method    93
4.2 The Standard DFT Method    94
4.3 The Forward Transform    95
4.4 The Inverse Transform    96
4.5 The Interpolation Problem    99
4.5.1 Computing the Interpolant of f̂(ω, φ)    100
4.5.2 Extents of the Polar and Cartesian Grids    103
4.6 Operation Count for the Standard Method    104
4.7 Sources of Error    106
4.7.1 The Computational Cascade    107
4.7.2 Vector and Operator Norms    110
4.7.3 Operator Norms for F_1 and F_2^{-1}    111
4.7.4 Operator Norm for the Interpolation Operator    111
4.7.5 The Error in Computing the Forward Transform    115
4.7.6 The Error in Computing the Inverse Transform    117
4.7.7 Error in the Interpolation    127
4.7.8 Asymptotic Behavior of ||E_I||_2    134
4.7.9 The Behavior of the Error of Reconstruction    139
4.8 Some Comments and Observations    140
4.8.1 Comments: Bandlimiting    142
4.8.2 Comments: The Discrete L2 Norm and the Infinity Norm    144
4.9 A Phantom Image    146
4.10 Numerical Examples 1: Operators Applied to Exact Data    148
4.10.1 Numerical Examples 2: Image Reconstruction Examples    157
4.10.2 The Image Artifacts    161
4.11 Oscillation in f̂(ω, φ) due to Shifted Images    166
4.12 Improving the Reconstruction by Frequency Grid Refinement    171
4.12.1 The Effect of X_f^N on the Error in the Forward Transform    172
4.12.2 The Effect of X_f^N on the Interpolation Error    174
4.12.3 The Effect of X_f^N on the Inverse DFT Error    174
4.12.4 The Effect of X_f^N on the Total Error    175
4.12.5 The Effect of X_f^N on the Operation Count    177
4.13 Numerical Examples 3: The Effect of X_f^N    179
5 ANTERPOLATED IMAGE RECONSTRUCTION    189
5.1 The Central Slice Theorem in Polar Coordinates    189
5.2 The Two-Dimensional Anterpolated DFT    192
5.3 Operation Count for the AIR    198
5.4 Error Analysis for the Anterpolated Image Reconstruction    201
5.4.1 ||E_A||, The Error of Anterpolation    202


5.4.2 ||E_AF||, The Propagated DFT Error    207
5.4.3 ||E_C||, The Cubature Error    212
5.4.4 The Behavior of the Reconstruction Error    220
5.5 Improving AIR by Frequency Grid Refinement    221
5.5.1 The Effect of X_f^N on the Operation Count of AIR    222
5.5.2 The Effect of X_f^N on the Cubature Error ||E_C||    223
5.5.3 The Effect of X_f^N on the Anterpolation Error ||E_A||    224
5.5.4 The Effect of X_f^N on the Propagated DFT Error ||E_AF||    225
5.5.5 The Net Effect and Cost of X_f^N    227
5.6 Selection of N and p    228
5.7 Numerical Examples 3: The AIR Algorithm    231
6 FOURIER METHODS APPLIED TO FUNCTIONS EXPANDED IN TRANSLATIONAL BASES    236
6.1 Translational Bases    237
6.2 The Translation Theorem    239
6.2.1 A Brief Digression    242
6.3 The Translation Theorem in Two Dimensions    246
6.4 Expanding Both Sides of Ru = f in Translational Bases    248
6.5 A Family of Discrete Central Slice Theorems    251
7 CONCLUSIONS AND OPEN QUESTIONS    253
7.1 Summary    253
7.2 Open Questions    256
A CUBATURE IN POLAR COORDINATES    259
BIBLIOGRAPHY    264


List of Figures

2.1 An x-ray in the plane of a two-dimensional object    9
2.2 The geometry of the Radon transform    11
2.3 The Radon transform of the characteristic function of a disk    14
2.4 The Radon transform of a function with a dense plug    16
2.5 The Radon transform of a hollow cylinder    17
2.6 The Radon transform of a shifted disk    19
2.7 The Radon transform of the characteristic function of an ellipse    21
2.8 The Radon transform of the characteristic function of a square    26
3.1 Refining the frequency grid    38
3.2 The function f(x) and the auxiliary function g(x)    52
3.3 Errors in approximating the FT by the DFT    57
3.4 Reciprocity relationships on the grids    64
3.5 An example of the formal DFT    87
3.6 The error of the ADFT for constant p    88
3.7 The error of the ADFT for constant N    89
4.1 The polar and Cartesian frequency grids    98
4.2 The geometry of the interpolation operator    101
4.3 Extents of the Cartesian and polar grids    105
4.4 The regions of summation    120
4.5 Norms of the error vectors as functions of N    151
4.6 E_F, the error in the forward transform    153
4.7 E_{F^{-1}}, the error of the inverse transform    154
4.8 E_I, the error of interpolation as a function of N    156
4.9 The exact solution for the elliptical rim 'head' phantom    158
4.10 The reconstruction error for the head phantom    159
4.11 The reconstruction of the head phantom, p = 0    160
4.12 The reconstruction of the head phantom, p = 1    160
4.13 The reconstruction of the head phantom, p = 3    161
4.14 The periodic nature of the artifacts, p = 0    162
4.15 The periodic nature of the artifacts, p = 3    163
4.16 Profiles of the reconstructions: Center of the image    165
4.17 Profiles of the reconstructions: Edge of the image    166
4.18 The projection f(p, 0) and transform f̂(ω, 0) of a centered ellipse    169
4.19 Oscillation in f̂(ω, φ) due to shifting f    170
4.20 The oscillation as a function of φ    170


4.21 Photograph of the functions u(x, y) and v(x, y) and their Radon transforms    180
4.22 Top: The reconstruction of u(x, y) with p = 0. Bottom: The reconstruction of v(x, y) with p = 0    182
4.23 Top: The reconstruction of u(x, y) with p = 1. Bottom: The reconstruction of v(x, y) with p = 1    183
4.24 Top: The reconstruction of u(x, y) with p = 3. Bottom: The reconstruction of v(x, y) with p = 3    185
4.25 Error of the reconstruction for several values of S with p = 0    186
4.26 Error of the reconstruction for several values of S with p = 1    186
5.1 Relationship between the Cartesian and polar frequency grids    193
5.2 Geometry of the cubature interpolation    213
5.3 Images reconstructed by using the AIR algorithm    233


ACKNOWLEDGEMENTS

A lucky person will encounter, sometime during his or her life, one great teacher. A teacher of this sort is measured not by the lessons delivered, but by the profound influence the teacher has on the philosophy, and therefore the life, of the student. I have been very lucky; I have encountered three such teachers. Bill Briggs, my adviser, is one of them. Without his guidance and direction this dissertation could not have been written, and I would not have whatever claim to scholarship I have today. Without his friendship I would be less of a person. I shall judge my own career as a professor a complete success should I ever influence a student as he has influenced me. There are several others whose assistance, guidance, and general good nature contributed mightily to this work. I am deeply indebted to Steve McCormick, who provided partial support for this research, as well as many suggestions for improving the thesis. Achi Brandt provided the original ideas underlying this work, and has continued to provide ideas throughout the process. The other members of my Ph.D. committee, Bill Clohessy, Tom Manteuffel, and Tom Russell, have each contributed in their own way, and I thank them. I must especially thank Tom Manteuffel for recruiting me into the Ph.D. program in the first place. Roland Sweet, who should have been on the committee, was a great help on many occasions. Thanks are due Debbie Wangerin, Stuart and Erin Wright, and Michael


and Christa Leman, each of whom provided help, support, and friendship that kept me from losing my mind. Special thanks go to Craig Rasmussen just for being Craig Rasmussen. My parents, Phill and Louise Henson, have always been enthusiastic in their love and their encouragement; this has made it easy to choose to do difficult things. Mary and George Tyner, my wife's parents, have helped immensely. The other two great teachers in my life are Ariel S. Balli, Jr., and Jim Rutherford. Gentlemen, my gratitude is deep and heartfelt, if many years overdue. Finally, and above all, I thank my wonderful wife Teri. Her love and rock-steady support have kept me going for many years. That baby in her arms will keep me going for many more.

This research was supported in part by grants NSF DMS-8704169 and AFOSR 86-0126.


CHAPTER 1
INTRODUCTION

This thesis is concerned with the classical tomography problem, the determination of the value of a function of two variables u(x, y) from the values of line integrals through the domain of u. In 1917 Johann Radon published a detailed analysis [36] of the problem, along with an inversion formula. In recognition of that achievement, the set of all line integrals of a function is known as its Radon transform. Direct implementation of Radon's formula is numerically unstable, so alternative solution methods must be sought. This study is devoted to methods of solution that are based upon the Central Slice theorem, which describes an important relationship between the Fourier transform of a function u(x, y) and the Fourier transform of the Radon transform of u. It leads to a simple inversion formula, whose implementation is unfortunately fraught with difficulties. The most serious difficulty is the necessity of the interpolation of the Fourier transform from a polar grid to a Cartesian grid. Most of this thesis is concerned with examining the nature of the interpolation problem, and a detailed analysis of two approaches to solving it. It must be noted at the outset that in recent years a tremendous amount of work has been done on this problem. The mathematical problem underlies


a host of applications, in fields such as radio astronomy, medical imaging, exploration seismology, electron microscopy, nuclear magnetic resonance, optics, non-destructive material testing and analysis, and stress analysis, among many others. As a result, the number of workers who have conducted research on this problem is very large, as is the number of different approaches that have been proposed. As might be expected under these circumstances, there is no "best" solution method, and all the common algorithms have certain advantages and disadvantages, as well as adherents and detractors. The present work was originally motivated by a simple observation regarding the nature of the relationship between the spatial-domain representation of a function u and the frequency-domain representation of the same function, as they are related through the discrete Fourier transform (DFT). Specifically, the grid spacing of the representation in one domain is related to the grid spacing of the representation in the other domain by the reciprocity relation:

Δx Δω = π/N,

where Δx is the sample rate in the spatial domain, Δω is the sample rate in the frequency domain, and N is the number of points used in the representation. For fixed N, selecting a small value of Δx implies a large value of Δω. Viewed from the perspective of multigrid methods [7], a fine grid in space transforms to a coarse grid in frequency, while a coarse grid in space transforms to a fine grid in frequency. It should be possible to use this relationship to employ multigrid


or multigrid-like methods to advantage. Indeed, the reciprocity relation can be used to improve the reconstruction techniques based on the Central Slice theorem, although to date it has not led to an algorithm that can be termed a multigrid method. The simple operation of extending the spatial domain by padding the data with many zeros has the effect of refining the frequency grid. One result of this is that the interpolation from the polar frequency grid to the Cartesian frequency grid can be performed with greater accuracy, improving the overall reconstruction. In this thesis two algorithms are developed to invert the Radon transform by way of the Central Slice theorem. In order to develop these algorithms, a thorough understanding of the DFT is required, as is a careful analysis of how it is related to the continuous Fourier transform. The thesis is therefore designed as follows. Chapter 2 is expository in nature, and is dedicated to the definition and properties of the Radon transform. Chapter 3 is devoted to the DFT and its properties, along with a discussion of how it is related to the continuous transform. Since the introduction of the family of fast Fourier transforms (FFTs) to compute it, the DFT has become a tremendously popular operation ([12], [20]). Because of its popularity, the DFT has been well-studied, and a large portion of Chapter 3 is devoted to filling some small but important gaps in the understanding of the error committed by approximating the continuous transform by the DFT. The latter half of the chapter is spent in the development of an algorithm termed the Anterpolated Discrete Fourier Transform (ADFT), a method for computing the DFT of a data set given on an irregularly spaced grid. In Chapter 4, the first of the image reconstruction algorithms is presented. This algorithm is a straightforward modification of a standard method, and consists of a set of Fourier transforms followed by an interpolation from the polar frequency grid to the Cartesian frequency grid, followed by a two-dimensional inverse Fourier transform. It differs from previously reported algorithms in the interpolation operators that are used. Standard methods ([35], [39], [40]) use expensive interpolators based on sampling theory. Instead, we use a simple Lagrangian polynomial interpolation. Our treatment also differs from previous work in the analysis of the algorithm. A detailed error analysis is performed by examining the algorithm as a cascade of operators, allowing precise identification of the sources of error and their relative magnitudes. An improved algorithm is introduced by applying the extension technique mentioned above. In addition, we address the question of whether or not taking an increasingly large number of samples of the Radon transform will cause the algorithm to converge to the continuous solution. In fact it will not, but it is shown that taking ever larger numbers of samples in combination with ever greater extension factors will cause the algorithm to converge, formally, to the continuous solution. An important point should be made here. The idea of padding the input data with zeros to improve the reconstruction has been proposed prior to this study. A similar technique is reported in [23], and the technique is again employed in


[33] and [35]. However, the technique used in [23] differs in several important features, and results in a much more expensive algorithm. The extension used in [33] and [35] is more closely related to the technique employed here, but again there are important differences which will be discussed in Chapter 4. Chapter 5 presents the second reconstruction algorithm, which is based upon a direct inversion of the Fourier transform in polar coordinates. It is developed by viewing the polar frequency grid as an irregular grid (relative to the Cartesian grid), and applying a two-dimensional version of the ADFT from Chapter 3. A detailed error analysis, similar to the cascade analysis in Chapter 4, is applied to this algorithm. The reconstruction algorithm works, in that it produces images of similar quality to those produced by the method in Chapter 4. It is, however, somewhat more expensive than the conventional approach. A number of interesting issues arise in the analysis of the algorithm and lead to some open questions for future study. At least one of these lines of inquiry could lead to an alternative algorithm that might be competitive. Finally, Chapter 6 develops a method for the inversion of the Radon transform of a function expanded in a translational basis. This leads to a hierarchy of Central Slice theorems, and opens the possibility that a priori knowledge about the unknown function can be used to tailor a basis for a given problem. To date this approach has not led to a practical algorithm, but it has proven useful for another reason. The Central Slice Theorem applies to the continuous problem. While the question of convergence to the continuous solution is addressed in


Chapters 4 and 5, the literature about the image reconstruction problem is almost entirely devoid of discussion of discretization error. That is, what constitutes the "exact solution" to the numerical problem, and how is it related to the solution to the continuous problem? The analysis in Chapter 6 provides a framework in which many different bases can be chosen for the expansion of the unknown function u, much as is done in finite-element solutions of partial differential equations ([26], [31], [42]). A summary of the results of this study occupies Chapter 7, along with a discussion of the problems that remain open.
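The reciprocity relation and the zero-padding device described in this chapter can be sketched numerically. A minimal NumPy illustration (all parameters are hypothetical; note that NumPy's DFT convention gives Δx·Δω = 2π/N for the angular-frequency grid, differing from the π/N convention above by a constant factor, though the refinement effect is the same):

```python
import numpy as np

N, dx = 64, 0.1        # hypothetical sample count and spatial grid spacing
S = 4                  # zero-padding (domain extension) factor

# NumPy's DFT places frequency samples at spacing dw = 2*pi/(N*dx);
# extending the data to S*N points refines the frequency grid by S.
dw        = 2 * np.pi / (N * dx)
dw_padded = 2 * np.pi / (S * N * dx)
print(dw / dw_padded)  # -> 4.0

# Padding with zeros leaves the original DFT values on the finer grid:
f = np.exp(-np.arange(N) * dx)                 # an arbitrary decaying signal
F_pad = np.fft.fft(np.pad(f, (0, (S - 1) * N)))
print(np.allclose(F_pad[::S], np.fft.fft(f)))  # -> True
```

The second check shows why padding "refines" rather than changes the transform: every S-th sample of the padded transform reproduces the unpadded DFT exactly, with new frequency samples interleaved between them.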


CHAPTER 2
THE RADON TRANSFORM AND ITS PROPERTIES

Suppose an x-ray is passed along a straight line through a homogeneous object. If the length of the travel path is x and the x-ray has an initial intensity I_0, then the intensity, I, of the x-ray that emerges from the object satisfies

I = I_0 e^{-μx},

where μ, the linear attenuation coefficient, is dependent on the material making up the object. If the x-ray passes through one material along a path of length x_1, and then through another material along a path of length x_2, the emerging x-ray will be attenuated according to

I = I_0 e^{-(μ_1 x_1 + μ_2 x_2)},

where μ_1 and μ_2 are the attenuation coefficients of the two materials. An x-ray passing through many materials will be observed to decay as

I = I_0 e^{-Σ_i μ_i x_i}.

Passing an x-ray through a non-homogeneous object may be modeled by letting the number of materials grow to infinity while the length of the travel path through each material becomes infinitesimal. The decay of the x-ray, upon passage


to the limit, behaves according to

I = I_0 e^{-∫_L μ(x) dx},

where μ(x) is the linear attenuation function, and L designates the line along which the x-ray passes through the object. Forming the ratio of emergent to incident intensity and taking the logarithm, the behavior of the x-ray intensity satisfies

−log(I/I_0) = ∫_L μ(x) dx.

Consider passing many x-rays, along many paths, through an object. Suppose that the x-rays are directed so that the travel paths are all coplanar, and that the linear attenuation function in the plane of the x-rays can be described by the function of two variables μ(x, y). The attenuation function of an object may depend on many factors, including the material characteristics and the characteristics of the x-ray beam. In many cases a principal factor is the density of the material [23]. Throughout this work we will refer to μ(x, y) as the density function, or often as the image, rather than the linear attenuation function. For each x-ray the travel path can be parameterized by the distance, s, traveled along the path, and the attenuation in the x-ray intensity is

−log(I/I_0) = ∫_L μ(x, y) ds.

Figure 2.1 indicates the geometry of one such x-ray through μ(x, y). If the x-ray source and detector are moved along parallel straight lines past an object then


Figure 2.1: An x-ray in the plane of a two-dimensional object

a profile of the intensity attenuation can be constructed. The basic problem of Computer Aided Tomography (CAT) is to reconstruct the density function μ(x, y) from the knowledge of the x-ray attenuation along many paths through the object. When this problem can be solved, it may be done through the inversion of the Radon transform.

2.1 The Radon Transform

Let u(x, y) be an arbitrary function of two variables in the xy-plane. We may regard u as the density of a planar object. The Radon transform of u(x, y), which is also a function of two variables, is defined as the set of all line integrals of u(x, y),

R{u(x, y)} = ∫_L u(x, y) ds.

The transform is named for Johann Radon [36], who studied the transform and many of its properties, and discovered inversion formulae by which the function


u(x, y) may be determined from Ru. There are many choices for the two variables in the transform domain. They consist of the two parameters necessary to specify a line through the object. For example, let p be a real number and φ be an angle measured from the positive x-axis. Then the condition

p = x cos φ + y sin φ (2.1)

determines the equation of a line through the xy-plane, normal to the unit vector ξ = (cos φ, sin φ)^T, and a distance p from the origin, measured along ξ. Letting x⃗ be the vector (x, y)^T, the parameterization (2.1) can also be written

p = x⃗ · ξ. (2.2)

If L represents this line, then either (p, φ) or (p, ξ) could be used as the variables of the Radon transform, which is then denoted by

[Ru](p, φ) = ∫_L u(x, y) ds or [Ru](p, ξ) = ∫_L u(x⃗) ds.

Throughout this thesis we will refer to the Radon transform of a function u(x, y) in terms of the parameterization specified by p and φ. Figure 2.2 shows the geometry, in which φ gives the angle of the ray that is normal to the line L and is measured clockwise from the positive x-axis. Then p is the signed distance from the origin to the point where the line L meets the ray at angle φ. These should not be confused with polar coordinates, however. A function of polar coordinates


Figure 2.2: The geometry of the Radon transform.

must be single-valued at the origin, but there is no reason to suppose that the line integrals of u(x, y) passing through the origin have the same value for all φ. We may also use the parameterization of the line in conjunction with the Dirac delta distribution δ. Recalling that

∫_{−∞}^{∞} f(x) δ(x) dx = f(0)

in one dimension and

∫_{−∞}^{∞} ∫_{−∞}^{∞} f(x, y) δ(x, y) dx dy = f(0, 0)

in two dimensions, the Radon transform can be specified by

[Ru](p, φ) = ∫_{R²} u(x, y) δ(p − x cos φ − y sin φ) dx dy,

or

[Ru](p, ξ) = ∫_{R²} u(x⃗) δ(p − x⃗ · ξ) dx⃗.

The Radon transform may be thought of as a projection operator. Specifically, if the transform [Ru] is considered as a one-dimensional function of p dependent


on a parameter φ (or ξ), then the profile given by the set of all line integrals (for fixed φ as p varies) is a projection of u(x, y) onto the one-dimensional subspace in which p varies between −∞ and ∞. Henceforth we shall use the term projection to indicate the values of the Radon transform corresponding to a fixed value of φ (or ξ). Taken over all angles, the transform will frequently be referred to as the set of projections. The Radon transform may be defined for functions in higher-dimensional spaces as well. For this purpose a distinction is sometimes made between two types of transforms. Integrating u(x⃗), for x⃗ ∈ R^n, over the set of all affine subspaces of dimension n − 1 is called the Radon transform. For example, if n = 3, the Radon transform is the set of all integrals of u(x, y, z) over all planes in R³. Letting x⃗ ∈ R³ and ξ be any unit vector in R³, and considering all p ∈ R, the Radon transform is written

[Ru](p, ξ) = ∫_{R³} u(x⃗) δ(p − x⃗ · ξ) dx⃗.

The second transform is called the X-ray transform, and is defined as the set of all line integrals of u(x⃗) for x⃗ ∈ R^n. This study will deal exclusively with the Radon transform of functions of two variables.

2.1.1 An Example: The Characteristic Function of a Disk

Consider the density function consisting of the characteristic function of a


disk of radius R, centered at the origin. That is,

u(x, y) = 1 for x² + y² ≤ R², and u(x, y) = 0 otherwise.

Since this object is symmetric with respect to the projection angle φ, a single projection will suffice to determine the Radon transform. Consider the projection angle φ = 0, so that the p-axis of the projection corresponds to the x-axis of the image space. The line integrals are then taken along lines parallel to the y-axis, at a distance p from the origin. For all values of |p| > R, the lines do not intersect u(x, y), and the projection has zero value. For |p| ≤ R the transform is

[Ru](p, 0) = ∫_{−√(R²−p²)}^{√(R²−p²)} dy = 2√(R² − p²).

Invoking the symmetry of u(x, y), the Radon transform is

[Ru](p, φ) = 2√(R² − p²) for |p| ≤ R, and 0 otherwise. (2.3)

A common method of displaying the Radon transform is shown in Figure 2.3. The transform is shown on a Cartesian grid with p and φ axes, with the value of the transform [Ru](p, φ) displayed as height above the pφ-plane.

2.2 Properties of the Radon Transform

The Radon transform has been well studied, largely because it has proven useful in many diverse areas. Exhaustive bibliographies can be found in the standard works ([16], [23], [35]). Many properties of the Radon transform will be


Figure 2.3: The Radon transform of the characteristic function of a disk. The disk has a radius of 2/3, is centered at the origin, and has unit density.

essential in developing the inversion methods of this study, so they are discussed here. For a detailed treatment, see [16] or [19].

2.2.1 Existence

The line specified by p = x⃗ · ξ is, of course, of infinite extent. For the transform to exist at all, it is necessary that u(x, y) be integrable along L. At the least this implies that u(x, y) must decay at infinity. It is common to see workers in this field require that the function u(x, y) belong to some reasonable space of functions. Typically, such spaces may include: the space of all C^∞ functions with compact support restricted to some specified set; the space of all C^∞ functions with compact support; the space of rapidly decreasing C^∞ functions (the Schwartz space); or the space of all integrable C^∞ functions with arbitrary support. The present study is motivated by the medical imaging problem, so we will


be concerned primarily with functions possessing the following characteristics. First, unless otherwise indicated, we assume compact support, usually specified by u(x, y) = 0 for |x| ≥ A, |y| ≥ A, or in polar coordinates by u(p, φ) = 0 for p ≥ A. Second, we impose no smoothness requirements, although hierarchies of results dependent on smoothness are presented. We do not, for example, insist that the density function u(x, y) be continuous. It is easy to imagine objects imbedded within objects (e.g., bone in flesh) with sharp discontinuities. The density function may or may not be differentiable, although it commonly will be differentiable almost everywhere. We will assume that the density function is bounded, and that in turn the projections are bounded. It is necessary to consider this last assumption carefully. Figure 2.4 illustrates a situation that should be considered. The density function is homogeneous except for a "plug" of some dense material. It is possible that this material will effectively stop an x-ray of some given energy, in that the emergent ray might be sufficiently diminished that it does not register on the receiver. In that case the emergent energy is zero over the projection of the plug, and thus the logarithm of the energy ratio is unbounded. For the energy to be zero, of course, the plug must have infinite density, a physical impossibility. We may therefore assume that there is some maximum density in the object, and replace the unbounded portion of the projection by some prespecified maximum, as the figure shows. This is not technically correct, but then neither is a plug of infinite density. It means that we are replacing u(x, y) by some approximate


Figure 2.4: The Radon transform of a function with a dense plug.

density function w(x, y) that differs from u over the extent of the plug. It also ensures that both the density function and the projections may be assumed to be bounded.

2.2.2 Linearity

Consider the Radon transform of a linear combination of functions,

    R{αu(x, y) + βv(x, y)} = ∫∫ (αu + βv) δ(p − x cos φ − y sin φ) dx dy
                           = α[Ru](p, φ) + β[Rv](p, φ).

As an example, consider the difference of the characteristic function of a disk of radius R₁ and the character-

Figure 2.5: The Radon Transform of a hollow cylinder, centered at the origin. The inside radius is 0.45 and the outside radius is 0.6. The object has unit density.

istic function of a disk of radius R₂, where R₂ < R₁. Then, applying the linearity of the transform, we find that the function

    u(x, y) = 1 for R₂ ≤ √(x² + y²) ≤ R₁,  and 0 otherwise,

has the Radon transform

    [Ru](p, φ) = 2√(R₁² − p²) − 2√(R₂² − p²)  for |p| ≤ R₂,
                 2√(R₁² − p²)                 for R₂ < |p| ≤ R₁,      (2.4)
                 0                            otherwise.

This transform is displayed in Figure 2.5.

2.2.3 Shifting Property

Given a function u(x̄), consider the effect on the Radon transform [Ru](p, φ) of shifting u by a vector ā. Then

    [Ru(x̄ − ā)](p, ξ) = ∫_{Rⁿ} u(x̄ − ā) δ(p − x̄·ξ) dx̄.


Letting ȳ = x̄ − ā yields

    [Ru(x̄ − ā)] = ∫_{Rⁿ} u(ȳ) δ(p − (ȳ + ā)·ξ) dȳ
                = ∫_{Rⁿ} u(ȳ) δ((p − ā·ξ) − ȳ·ξ) dȳ
                = [Ru](p − ā·ξ, ξ).

Thus the effect of shifting the function u(x̄) is to shift each projection a distance ā·ξ along the p axis. For functions of two variables, the shifting property is written as

    [Ru(x − a, y − b)](p, φ) = [Ru(x, y)](p − a cos φ − b sin φ, φ).    (2.5)

As an example, consider the Radon transform of the characteristic function of a disk of radius R, centered at the point (a, b). This is the transform of u(x − a, y − b), where u(x, y) is given by (2.3). The transform is

    [Ru](p, φ) = 2√(R² − (p − a cos φ − b sin φ)²)  for |p − a cos φ − b sin φ| ≤ R,
                 0 otherwise,

and is shown in Figure 2.6.

2.2.4 Evenness and the Scaling Property

The Radon transform is an even function of (p, ξ), in that

    [Ru](−p, −ξ) = ∫_{Rⁿ} u(x̄) δ(−p − x̄·(−ξ)) dx̄
                 = ∫_{Rⁿ} u(x̄) δ(x̄·ξ − p) dx̄
                 = [Ru](p, ξ).


Figure 2.6: The Radon Transform of a shifted disk. The disk has unit density and a radius of 0.3. Its center is located at (0.1, −0.3).

When written as a function of p and φ this becomes

    [Ru](−p, φ ± π) = [Ru](p, φ).

u(Ax̄) is given by

    [Ru(Ax̄)] = ∫_{Rⁿ} u(Ax̄) δ(p − x̄·ξ) dx̄
             = |det(A⁻¹)| ∫_{Rⁿ} u(ȳ) δ(p − (A⁻¹ȳ)·ξ) dȳ
             = |det(A⁻¹)| ∫_{Rⁿ} u(ȳ) δ(p − ȳ·[A⁻¹]ᵀξ) dȳ
             = |det(A⁻¹)| [Ru](p, [A⁻¹]ᵀξ).

As an example, consider the change of variables given by

    A = [ 1/a   0  ]
        [  0   1/b ].

Under this transformation the characteristic function of a circle of radius R becomes the characteristic function of an ellipse:

    u(x, y) = 1 for x²/a² + y²/b² ≤ R², and 0 otherwise.

Invoking the linear transformation property gives [Ru(Ax̄)] = ab[Ru](p, ξ̄), where ξ̄ = [A⁻¹]ᵀξ. Since ξ̄ = (a cos φ, b sin φ)ᵀ is not a unit vector, we write it as a multiple of the unit vector z̄, that is, ξ̄ = |ξ̄| z̄. Then applying the scaling and linear transform properties to the Radon transform of the characteristic function of a disk yields

    [Ru] = (2ab/|ξ̄|²) √(|ξ̄|² R² − p²)  for p² ≤ |ξ̄|² R²,

and zero otherwise. Note that |ξ̄| = √(a² cos² φ + b² sin² φ). The Radon transform of the characteristic function of an ellipse is one that shall be used extensively in the later chapters. Figure 2.7 shows the transform of an ellipse.
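As an informal numerical check (ours, not part of the original development; the helper names and the sample parameters a = 0.7, b = 0.3, matching the ellipse of Figure 2.7, are assumptions), the ellipse formula above, and the disk formula it reduces to when a = b, can be tested against a brute-force line integral of the characteristic function:

```python
import math

def line_integral(u, p, phi, t_max=1.0, n=20000):
    # Midpoint-rule approximation to the integral of u along the line
    # x*cos(phi) + y*sin(phi) = p, parameterized by arclength t.
    dt = 2*t_max/n
    s = 0.0
    for i in range(n):
        t = -t_max + (i + 0.5)*dt
        x = p*math.cos(phi) - t*math.sin(phi)
        y = p*math.sin(phi) + t*math.cos(phi)
        s += u(x, y)*dt
    return s

a, b, R = 0.7, 0.3, 1.0   # illustrative semi-axes, as in Figure 2.7
ellipse = lambda x, y: 1.0 if (x/a)**2 + (y/b)**2 <= R*R else 0.0

def radon_ellipse(p, phi):
    # [Ru](p, phi) = (2ab/|xi|^2) sqrt(|xi|^2 R^2 - p^2),
    # where |xi|^2 = a^2 cos^2(phi) + b^2 sin^2(phi)
    xi2 = (a*math.cos(phi))**2 + (b*math.sin(phi))**2
    if p*p > R*R*xi2:
        return 0.0
    return 2*a*b/xi2*math.sqrt(R*R*xi2 - p*p)
```

At φ = 0 and p = 0 the formula reduces to 2b, the vertical chord through the center, which is the expected geometric value.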


Figure 2.7: The Radon Transform of the characteristic function of an ellipse. The object is centered at the origin with a major axis of length 1.4 coinciding with the x-axis, and a minor axis of length 0.6 coinciding with the y-axis.

Each projection has the same basic shape, that of a scaled upper semicircle. As the projection angle changes, the width and height of the projection change accordingly.

A special case of the linear transformation property is the rotation of the function u(x̄). Let A be a rotation matrix, for example

    A = [ cos θ  −sin θ ]
        [ sin θ   cos θ ].                                          (2.6)

This is an orthogonal matrix, with A⁻¹ = Aᵀ and det(A) = 1, so that the Radon transform of the rotated function is [Ru(Ax̄)](p, ξ) = [Ru(x̄)](p, Aξ). In two dimensions, where ξ = (cos φ, sin φ)ᵀ, rotation by the matrix in (2.6) yields Aξ = (cos(θ + φ), sin(θ + φ))ᵀ, so that the angle θ + φ can be used to specify the


direction vector, and the Radon transform of the rotated function is

    [Ru(Ax̄)](p, φ) = [Ru(x, y)](p, θ + φ).

2.3 The Central Slice Theorem

The remainder of this study is either based upon, or related to, the Central Slice theorem. This fundamental theorem relates the Fourier transform of a function to the Fourier transform of its Radon transform, and in so doing, provides the basis for a number of methods for inverting the Radon transform. Recall that if f(x) is a function, integrable on R, then the Fourier transform of f is defined as

    F{f} = f̂(ω) = (1/√(2π)) ∫_{−∞}^{∞} f(x) e^{−iωx} dx,

while the inverse Fourier transform is

    F⁻¹{f̂} = f(x) = (1/√(2π)) ∫_{−∞}^{∞} f̂(ω) e^{iωx} dω.

For functions of more than one variable, i.e., x̄ ∈ Rⁿ, the forward transform becomes

    F{f} = f̂(ω̄) = (1/(2π)^{n/2}) ∫_{Rⁿ} f(x̄) e^{−i x̄·ω̄} dx̄,

while the inverse transform is defined similarly. Specifically, for functions of two variables, we have x̄ = (x, y)ᵀ and ω̄ = (ω_x, ω_y)ᵀ, giving

    F{f} = f̂(ω_x, ω_y) = (1/2π) ∫_{−∞}^{∞} ∫_{−∞}^{∞} f(x, y) e^{−i(xω_x + yω_y)} dx dy.
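Since several normalizations of the Fourier transform appear in the literature, it is worth pinning down the symmetric 1/√(2π) convention used here with a quick numerical check. The sketch below is our own illustration, not from the thesis: under this convention F{e^{−x²/2}} = e^{−ω²/2}, and a trapezoidal quadrature reproduces it.

```python
import cmath, math

def ft(f, w, L=10.0, n=4000):
    # Trapezoidal-rule approximation to
    #   (1/sqrt(2*pi)) * integral over [-L, L] of f(x) exp(-i w x) dx.
    h = 2*L/n
    s = 0.5*(f(-L)*cmath.exp(1j*w*L) + f(L)*cmath.exp(-1j*w*L))
    for j in range(1, n):
        x = -L + j*h
        s += f(x)*cmath.exp(-1j*w*x)
    return h*s/math.sqrt(2*math.pi)

gauss = lambda x: math.exp(-x*x/2)
# Under the symmetric convention, F{exp(-x^2/2)} = exp(-w^2/2),
# so ft(gauss, w) should match math.exp(-w*w/2) closely.
```

Because the Gaussian and all its derivatives are negligible at the truncation points ±L, the trapezoidal rule is effectively exact here.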


Consider the Fourier transform of a projection. That is, for fixed ξ, the one-dimensional Fourier transform of [Ru](p, ξ) is computed, with respect to p. Then

    R̂u(ω, ξ) = (1/√(2π)) ∫_{−∞}^{∞} [Ru](p, ξ) e^{−iωp} dp
             = (1/√(2π)) ∫_{Rⁿ} u(x̄) e^{−iω x̄·ξ} dx̄
             = (2π)^{(n−1)/2} û(ω̄),

where ω̄ = ωξ ranges over all of Rⁿ. The Central Slice theorem says that the Fourier transform of a projection of u(x̄), projected onto the subspace given by the line pξ, is equal to a constant multiple of the values of the n-dimensional Fourier transform û(ω̄) along a line through the transform in the direction ξ. Because this result is so important to this study, we write out the theorem explicitly for the case x̄ ∈ R².

Theorem 2.1 (Central Slice) Let the image u(x, y) have a two-dimensional Fourier transform, û(ω_x, ω_y), and a Radon transform, [Ru](p, φ). If R̂u(ω, φ) is the one-dimensional Fourier transform, with respect to p, of the projection [Ru](p, φ), then

    R̂u(ω, φ) = √(2π) û(ω_x, ω_y),

where ω² = ω_x² + ω_y² and φ = tan⁻¹(ω_y/ω_x). That is, the Fourier transform of the projection of u onto the line in the direction of the unit vector (cos φ, sin φ)ᵀ


is exactly a slice through the two-dimensional Fourier transform of u(x, y) along that direction.

2.4 Finding Radon Transforms with the Central Slice Theorem

Typically, Radon transforms of functions are found by direct integration, or by geometrical arguments. The Central Slice theorem provides yet another method for finding Radon transforms. Given a function u(x, y), its two-dimensional Fourier transform û(ω_x, ω_y) is determined, and then evaluated in the polar coordinates (ω, φ). The one-dimensional inverse Fourier transform of û(ω, φ), taken with respect to ω for fixed φ, is the projection [Ru](p, φ), multiplied by 1/√(2π). We illustrate this method by finding the Radon transform of the characteristic function of a square. Suppose

    u(x, y) = 1 for |x| ≤ A, |y| ≤ A, and 0 otherwise.

Its Fourier transform is

    û(ω_x, ω_y) = (1/2π) ∫_{−A}^{A} ∫_{−A}^{A} e^{−i(ω_x x + ω_y y)} dx dy
                = (1/2π) ((e^{iω_x A} − e^{−iω_x A})/(iω_x)) ((e^{iω_y A} − e^{−iω_y A})/(iω_y))
                = (2/π) (sin(ω_x A)/ω_x) (sin(ω_y A)/ω_y).                     (2.7)

Converting to polar coordinates in the frequency domain by ω_x = ω cos φ and ω_y = ω sin φ gives

    û(ω, φ) = (2/π) (sin(ωA cos φ)/(ω cos φ)) (sin(ωA sin φ)/(ω sin φ)).


Let X(x) denote the characteristic function of the interval [−1, 1]. A well-known Fourier transform is

    F{X(x/B)} = √(2/π) (sin(ωB)/ω),  B > 0.

Therefore, in light of the Fourier transform convolution property, if f̂ = F{f(x)} and ĝ = F{g(x)}, then

    F⁻¹{f̂(ω) ĝ(ω)} = (1/√(2π)) [f ∗ g](x),

where the convolution is defined by

    [f ∗ g](x) = ∫_{−∞}^{∞} f(τ) g(x − τ) dτ.

Hence, the Radon transform of the function whose transform appears in (2.7) can be found by writing

    û(ω, φ) = (1/(cos φ sin φ)) F{X(x/(A cos φ))} F{X(x/(A sin φ))},

so that, upon taking inverse Fourier transforms and applying the Central Slice theorem,

    [Ru](p, φ) = √(2π) F⁻¹{û(ω, φ)} = (1/(cos φ sin φ)) [X(x/(A cos φ)) ∗ X(x/(A sin φ))](p).

It is not difficult to show that the convolution of the characteristic functions of two intervals is given by

    [X(x/B) ∗ X(x/D)](x) = 0          for |x| ≥ B + D,
                           B + D + x  for −D − B ≤ x ≤ B − D,
                           2B         for B − D ≤ x ≤ D − B,
                           D + B − x  for D − B ≤ x ≤ D + B,


Figure 2.8: The Radon Transform of the characteristic function of a square. The object has unit density, has sides of length 1.2, and is centered at the origin.

where 0 < B ≤ D. For example, a projection of the characteristic function of the square whose transform is given in (2.7), at an angle 0 < φ ≤ π/4, is obtained from this convolution with B = A sin φ and D = A cos φ, scaled by 1/(cos φ sin φ), with similar definitions for the other ranges of φ.
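The convolution route to the square's Radon transform can be checked against a brute-force line integral. The sketch below is our own illustration, not the thesis's: A = 0.6 matches the square of Figure 2.8, and the restriction 0 < φ ≤ π/4 keeps B = A sin φ ≤ D = A cos φ.

```python
import math

A = 0.6   # half-side of the square, as in Figure 2.8

def conv_char(x, B, D):
    # Convolution of the characteristic functions of [-B, B] and [-D, D]
    # for 0 < B <= D: a trapezoid of height 2B.
    x = abs(x)
    if x >= B + D:
        return 0.0
    if x <= D - B:
        return 2*B
    return B + D - x

def radon_square(p, phi):
    # Projection of the square for 0 < phi <= pi/4, via the convolution route.
    B, D = A*math.sin(phi), A*math.cos(phi)
    return conv_char(p, B, D)/(math.cos(phi)*math.sin(phi))

def radon_square_numeric(p, phi, n=40000):
    # Direct midpoint-rule line integral of the square's characteristic function.
    t_max = 2*A
    dt = 2*t_max/n
    s = 0.0
    for i in range(n):
        t = -t_max + (i + 0.5)*dt
        x = p*math.cos(phi) - t*math.sin(phi)
        y = p*math.sin(phi) + t*math.cos(phi)
        if abs(x) <= A and abs(y) <= A:
            s += dt
    return s
```

For |p| in the flat central region the formula gives 2A/cos φ, the length of a chord crossing the full width of the square, as geometry requires.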

CHAPTER 3

A COLLECTION OF IMPORTANT PROPERTIES OF THE DISCRETE FOURIER TRANSFORM

The image reconstruction methods employed in this work rely upon the Fourier transform (FT). Computer implementations of these methods require discretization of the equations involved, and especially the calculation of the Fourier transforms of various data sets. The usual approach in signal processing problems of this type is to use the discrete Fourier transform (DFT) for approximating the Fourier transform of a sampled function. Since it will be important to understand exactly what occurs when this transform is applied, this chapter is dedicated to exploring the relationship between the DFT and the FT. In the latter half of this chapter, an algorithm for evaluating a DFT on irregularly spaced data is developed.

3.1 The Discrete Fourier Transform

The algorithms presented in this work begin by computing the Fourier transforms of sets of projection data. Consider the problem of computing an approximation to the Fourier transform

    f̂(ω) = (1/√(2π)) ∫_{−∞}^{∞} f(x) e^{−iωx} dx,                   (3.1)


when the function f(x) is known only at regularly spaced points. Specifically, our interest is in transforming sampled projections, so the data are zero outside some finite interval, say f(x) = 0 for |x| > A. In this case the Fourier transform is

    f̂(ω) = (1/√(2π)) ∫_{−A}^{A} f(x) e^{−iωx} dx.                   (3.2)

This integral is commonly approximated by a method known as the discrete Fourier transform, that maps a length-N sequence {f_n} to another sequence {f̂_k}, also of length N, by the rule

    f̂_k = (1/N) Σ_{n=−N/2+1}^{N/2} f_n e^{−i2πnk/N}  for k = −N/2 + 1, ..., N/2.    (3.3)

In another equivalent definition the index of summation runs from 0 to N − 1, as does the index k. If the input sequence is considered to be periodic (f_n = f_{n+N}), then the output sequence {f̂_k} is also periodic. Occasionally, for convenience, the discrete transform of the sequence {f_n} will be indicated by the notation DFT{f_n}.

It is crucial to understand exactly what is being approximated, and how accurately, by (3.3). In order to identify this sum with a Riemann sum that converges to an integral like (3.2), it is necessary to identify the DFT kernel e^{−i2πnk/N} with the FT kernel e^{−iωx}. The length of the interval of integration is 2A, so dividing this into N subintervals gives Δx = 2A/N. A grid with N equally spaced points in the half-open interval (−A, A] is defined by the points x_n = nΔx for n = −N/2 + 1, ..., N/2. Thus the DFT kernel becomes e^{−iωnΔx}, which in turn


implies that ω = kπ/A. Letting

    F(x) = f(x) e^{−iωx},

the trapezoidal rule for approximating the integral in (3.2) is

    ∫_{−A}^{A} F(x) dx ≈ Δx { (F(−A) + F(A))/2 + Σ_{n=−N/2+1}^{N/2−1} F(x_n) },

and if F(−A) = F(A) this may be written

    ∫_{−A}^{A} F(x) dx ≈ Δx Σ_{n=−N/2+1}^{N/2} F(x_n).

Designating by {f_n} the sequence of functional values f(x_n), the sum in the trapezoidal rule becomes the DFT sum when ω is chosen to be kπ/A. Multiplying the DFT by √(2/π) A thus produces the trapezoidal rule approximation to the FT.

The assumption that F(−A) = F(A) must be considered carefully, to see what it implies. It is a valid assumption, given the condition f(x) = 0 for |x| ≥ A. This in turn means that f(x), and therefore F(x), can be extended outside [−A, A] as a 2A-periodic function. Continuity or smoothness at A is not implied here, although these properties will frequently be added.
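The identification above can be tested on a concrete function. The following sketch is ours, not the thesis's, and its choices are assumptions: the hat function f(x) = 1 − |x| on [−1, 1] (so A = 1), whose exact transform under the symmetric convention is √(2/π)(1 − cos ω)/ω². The scaled sum (Δx/√(2π)) Σ f(x_n) e^{−i2πnk/N} should reproduce f̂ at ω_k = kπ/A.

```python
import cmath, math

A, N = 1.0, 256
dx = 2*A/N

def f(x):
    return max(1.0 - abs(x), 0.0)   # hat function, zero outside [-1, 1]

def fhat_exact(w):
    # FT of the hat under the symmetric 1/sqrt(2*pi) convention
    if w == 0.0:
        return 1.0/math.sqrt(2*math.pi)
    return math.sqrt(2/math.pi)*(1.0 - math.cos(w))/(w*w)

def dft_approx(k):
    # (dx/sqrt(2*pi)) * sum f(x_n) exp(-i 2 pi n k/N): the trapezoidal
    # approximation to fhat(k*pi/A), valid since f(-A) = f(A) = 0
    s = sum(f(n*dx)*cmath.exp(-2j*math.pi*n*k/N)
            for n in range(-N//2 + 1, N//2 + 1))
    return dx/math.sqrt(2*math.pi)*s
```

With N = 256 the agreement at low frequencies is already far better than the 10⁻³ level.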


3.1.1 The Reciprocity Relation

Since the DFT gives output values for k = −N/2 + 1, ..., N/2, a natural frequency grid is created when it is used to approximate the FT. Letting Δω = π/A, then the frequency values are ω_k = kΔω, which range over the interval (−(N/2 − 1)π/A, (N/2)π/A]. This means that the DFT computes the set of approximations

    f̂(ω_k) ≈ √(2/π) A f̂_k  for k = −N/2 + 1, ..., N/2.

There is no reason that the integral must be approximated only at these values, and in fact the sum could be evaluated for any value of ω (although sampling theory certainly restricts the set of valid results). The uniform sampling has probably evolved from long association of the DFT with the fast Fourier transform (FFT), an extremely efficient method of computing (3.3) that requires a uniform sampling in ω.

There is, however, a very important relationship between N, the number of points on the spatial and frequency grids, and the sample rates in space and frequency, given by Δx and Δω respectively. Since the argument of the exponential in the trapezoidal approximation, −i(nΔx)(kΔω), is to be identified with the argument of the DFT kernel, −i2πnk/N, it follows that

    (Δx)(Δω) = 2π/N.                                               (3.4)

The import of this equation is that the DFT has a sample interval in space that is inversely proportional to the sample interval in frequency, and the relation is


therefore known as the reciprocity relation. It holds regardless of what interval [−A, A] is used for the integration. However, there is a version of the reciprocity relation that relates the total extents of the spatial and frequency grids. Let X = NΔx and F = NΔω. Then

    XF = 2πN.                                                       (3.5)

These two equations will be important in many of the discussions that follow, and we will return to them shortly. It is useful, however, to first examine the relationship between the FT and the DFT more closely.

3.2 The Relationship between the Discrete and Continuous Transforms

We have seen that the DFT yields a set of approximations to values of the FT:

    f̂(ω_k) ≈ √(2/π) A DFT{f_n}(k)  for k = −N/2 + 1, ..., N/2.

The error in this approximation will be discussed shortly. For the moment, we are interested in relating the DFT to the FT, not from the point of view of error analysis, but in order to understand precisely what is being approximated. Specifically, what happens as N → ∞? Note that Δx = 2A/N → 0 as N → ∞, and the summation indeed approaches the integral, in the limit. It is important to realize, however, that the DFT does not converge to the continuous Fourier transform as N → ∞. The DFT is


evaluated only at the values ω_k = kπ/A, and thus

    as N → ∞,  DFT{f_n}(k) → (1/2A) ∫_{−A}^{A} f(x) e^{−ikπx/A} dx = c_k,

where c_k is the k-th coefficient of the Fourier series expansion of f(x) on the interval [−A, A],

    f(x) = Σ_{k=−∞}^{∞} c_k e^{ikπx/A}.

In the case that f(x) = 0 for |x| > A, the Fourier transform is related to the Fourier series coefficients by

    f̂(kπ/A) = √(2/π) A c_k.

It is misleading, then, to think that the DFT becomes the Fourier transform as the number of data samples becomes infinite over [−A, A]. The important point is that, even though letting N → ∞ implies that the sum converges to the Fourier integral, the transform data are sampled, not continuous. In fact, the reciprocity relation (3.4) shows that the sample rate

    Δω = 2π/(NΔx) = 2π/(N · 2A/N) = π/A                             (3.6)

remains constant for all N as N → ∞. The DFT, as normally defined, does not provide any information whatsoever about f̂(ω) if ω ≠ kπ/A! This distinction will prove to be of crucial importance in analysing the image reconstruction problem.

3.3 The Inverse Discrete Fourier Transform

In the previous paragraph it was shown that increasing the number of sample


points to infinity causes the DFT to converge to a subset of the Fourier coefficients for f(x). Now, consider the inverse discrete Fourier transform (IDFT), which is defined as

    f_n = Σ_{k=−N/2+1}^{N/2} f̂_k e^{i2πnk/N}  for n = −N/2 + 1, ..., N/2.    (3.7)

It is not difficult to show that the discrete orthogonality of the kernel implies that the IDFT is indeed the inverse of the DFT: IDFT{DFT{f_n}} = f_n. With the same step sizes as are used in the forward transform, Δω = π/A and Δx = 2A/N, the IDFT appears as

    f_n = Σ_{k=−N/2+1}^{N/2} f̂_k e^{i(kΔω)(nΔx)}.

It is tempting to view this sum as an approximation to the inverse Fourier transform (IFT) integral,

    f(x) = (1/√(2π)) ∫_{−∞}^{∞} f̂(ω) e^{iωx} dω,

but it can be seen that this is not the case. It should be noticed immediately that the limits of the sum cover only a very small range of ω, from −(Nπ)/(2A) to (Nπ)/(2A). Letting N → ∞ does cause this range to span the whole real ω line, but by (3.6), Δω remains constant. Thus, the IDFT does not converge to the inverse Fourier transform as N becomes infinite.

Recalling what happened to the DFT as N → ∞, however, indicates what will happen to the IDFT as N → ∞. Letting the number of samples become


infinite, the DFT produced, as its output, the Fourier series coefficients f̂_k = c_k. Substituting these coefficients into the IDFT, while noting that x_n = nΔx = 2An/N, we see that the IDFT becomes

    f(x_n) = Σ_{k=−N/2+1}^{N/2} c_k e^{ikπx_n/A}  for x_n ∈ [−A, A],

which is exactly the evaluation (at the points x_n = 2An/N) of the Fourier series representation for f(x). We summarize these observations as follows. Let f(x) = 0 for |x| > A. Then, with Δx = 2A/N, we have:

As N → ∞, Δx → 0 and DFT{f(nΔx)}(k) → c_k. The DFT corresponds to Fourier series analysis.

As N → ∞, IDFT{c_k}(n) → f(nΔx), nΔx ∈ [−A, A]. The IDFT corresponds to Fourier series synthesis.

3.4 Refining the Frequency Grid

We have seen that, as the number of samples in the interval [−A, A] becomes infinite, the DFT converges to a sampling of the FT, evaluated at ω_k = kπ/A. An obvious question is what to do if values of the Fourier transform are needed at points other than kπ/A. One answer is to apply the Shannon Sampling theorem [25]:

Theorem 3.1 (Shannon Sampling Theorem) If f(x) = 0 for |x| ≥ A, and


Δω ≤ π/A, then

    f̂(ω) = Σ_{k=−∞}^{∞} f̂(kΔω) sinc((π/Δω)(ω − kΔω)),               (3.8)

where

    sinc(x) = sin(x)/x.

If f̂_k, the output values of the DFT, are considered an acceptable approximation to f̂(ω_k) for k = −N/2 + 1, ..., N/2, then the values of f̂(ω) can be obtained by the approximation

    f̂(ω) ≈ √(2/π) A Σ_{k=−N/2+1}^{N/2} f̂_k sinc((π/Δω)(ω − kΔω)).    (3.9)

Consider the cost of this approximation. Evaluating the DFT to give f̂_k will require O(N log N) operations if done with an FFT. Calculating the approximation in (3.9) requires an additional O(N) operations for each desired ω. If this method were used, for example, to compute the N values at the half-sample points kπ/(2A) from the points given by the DFT, the total cost would be O(N log N) + O(N²).

A second approach to approximating f̂(ω) at values other than kπ/A can be developed from the reciprocity relation (3.4). By way of example, suppose it is desired that f̂ be computed with the spacing Δω = π/(2A) instead of π/A. From (3.4),

    Δω = π/(2A)  implies  Δx = 4A/N.
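As a sketch of the sinc-interpolation route (our illustration; the hat function 1 − |x| on [−1, 1], whose transform √(2/π)(1 − cos ω)/ω² is known in closed form, and the truncation limit K are our assumptions), we feed exact samples f̂(kΔω) into the series so that only the interpolation (3.8) itself is being tested:

```python
import math

A = 1.0
dw = math.pi/A      # the spacing pi/A demanded by the theorem

def fhat(w):
    # FT of the hat function on [-1, 1], symmetric convention
    if abs(w) < 1e-12:
        return 1.0/math.sqrt(2*math.pi)
    return math.sqrt(2/math.pi)*(1.0 - math.cos(w))/(w*w)

def sinc(x):
    return 1.0 if abs(x) < 1e-12 else math.sin(x)/x

def fhat_interp(w, K=2000):
    # Truncated Shannon series (3.8):
    #   fhat(w) ~ sum_k fhat(k dw) sinc((pi/dw)(w - k dw))
    return sum(fhat(k*dw)*sinc((math.pi/dw)*(w - k*dw))
               for k in range(-K, K + 1))
```

Evaluating at a half-sample point such as ω = 1.5π, where the DFT itself offers no value, reproduces f̂(1.5π) accurately; the slow 1/k decay of the sinc kernel is what makes the O(N) cost per point unavoidable.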


Therefore, cutting the frequency spacing in half, while holding the number of points fixed, implies doubling the spatial sample interval. Doubling the spatial sample distance, of course, will have a deleterious effect on the accuracy of the numerical scheme, a point to be considered again later. Also, the spatial grid now has twice the extent, NΔx = 2X, as that of the original grid. Since f(x) is zero outside the interval [−A, A], this means the function must be extended with zeros in [−2A, −A] and [A, 2A]. Another effect of this approach is that, by holding N fixed while sampling twice as often in frequency, the frequencies range over only half of the extent as they do if the original DFT is used, i.e., over [−Nπ/(4A), Nπ/(4A)]. In exchange for refined sampling of the low frequencies, we have given up some accuracy due to increasing Δx, and the high-frequency information of the transform has been lost.

To compute the frequency samples at the points kπ/(2A) without losing accuracy or high-frequency information, we again turn to the reciprocity relation. Holding the spatial sample interval fixed while halving the frequency sample interval implies that the number of points N must be doubled:

    (Δx)(Δω/2) = 2π/(2N).

From (3.5), this in turn gives

    (2X)F = 2π(2N).

The effect, then, of doubling the number of points and holding Δx fixed is that the frequency extent remains the same, but the frequency sample interval is


halved. The spatial extent is doubled, although all values of f(x_n) are zero for x_n ∈ [−2A, −A] or [A, 2A]. The numerical accuracy of the approximation to the integral is unchanged, as it depends on Δx. If an FFT is used to approximate the Fourier transform values on the refined frequency grid, it can be done in O(2N log 2N) operations, which is far more efficient than employing the Shannon Sampling theorem. We must still take up the question of accuracy, however, which we do in Section 3.6 below.

Figure 3.1 sums up the relationships between the spatial and frequency grids that are created by these manipulations. We shall have occasion to invoke this grid refinement technique in a number of image reconstruction algorithms. A more detailed discussion of grid relationships therefore appears in a subsequent section.

3.5 Getting the FT-IFT Pair from the DFT-IDFT Pair

The frequency grid refinement method can be used to complete the analysis of how the discrete transform pair is related to the continuous transform pair. It has already been noted that the DFT converges to the Fourier integral as N → ∞, but that only a finite set of approximate values is produced. It was also noted that letting N → ∞ did not cause the IDFT to converge to an integral. By refining the frequency grid so that Δω → 0, the DFT and the IDFT will converge to the FT and the IFT, respectively. This is done in the following manner:


Figure 3.1: Refining the frequency grid. Top: The original pair of space and frequency grids. Center: Doubling Δx while retaining N points gives a spatial coarse-grid with twice the extent of the original, and transforms to a frequency fine-grid with spacing Δω/2 and half the extent of the original. Bottom: Doubling the number of points while holding Δx fixed produces a spatial fine-grid with twice the extent of the original, and transforms to a frequency fine-grid with spacing Δω/2 and the same extent as the original.
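The bottom panel of Figure 3.1 (double N, hold Δx) amounts to zero-padding, and is easy to verify numerically. In this sketch (our own illustration; the hat function on [−1, 1] again supplies an exact transform, and the parameter choices are ours) the length-2N transform reproduces the original values at even indices and supplies new, equally accurate values at the half-sample frequencies in between:

```python
import cmath, math

A = 1.0

def f(x):
    return max(1.0 - abs(x), 0.0)   # supported on [-1, 1]

def fhat(w):
    if abs(w) < 1e-12:
        return 1.0/math.sqrt(2*math.pi)
    return math.sqrt(2/math.pi)*(1.0 - math.cos(w))/(w*w)

def dft_ft(k, M, dx):
    # (dx/sqrt(2*pi)) * length-M DFT sum; approximates fhat at
    # omega = 2*pi*k/(M*dx)
    s = sum(f(n*dx)*cmath.exp(-2j*math.pi*n*k/M)
            for n in range(-M//2 + 1, M//2 + 1))
    return dx/math.sqrt(2*math.pi)*s

N = 128
dx = 2*A/N                           # held fixed
orig        = dft_ft(3, N, dx)       # omega = 3*pi/A on the original grid
padded_even = dft_ft(6, 2*N, dx)     # same omega on the refined grid
padded_half = dft_ft(7, 2*N, dx)     # new half-sample frequency 3.5*pi/A
```

The extra samples f(nΔx) for A ≤ |nΔx| ≤ 2A are all zero, so the even-index values are unchanged, while the odd indices interleave new frequency samples at spacing Δω/2.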


Pick an integer P > 1, and, holding the number of points N fixed, let Δx = 2PA/N. The reciprocity relation now gives Δω = π/(PA), and the DFT becomes

    f̂_k = (1/N) Σ_{n=−N/2+1}^{N/2} f(nΔx) e^{−i2πnk/N}
        = (1/2PA) Δx Σ_{n=−N/2+1}^{N/2} f(nΔx) e^{−i(nΔx)(kΔω)}.

Now, as N → ∞, the sum converges to

    (1/2PA) ∫_{−PA}^{PA} f(x) e^{−iωx} dx  for ω = kπ/(PA).

Therefore, for fixed P, if N → ∞, then Δx → 0 and the summation converges to the integral giving the k-th Fourier coefficient of a 2PA-periodic function that coincides with f(x) on [−PA, PA]. Furthermore, since f vanishes outside [−A, A], this integral is also a constant multiple of the transform value f̂(kπ/(PA)) for each k ∈ Z. The continuous transform may be obtained, formally, by applying this technique for every P as P → ∞. The continuous inverse transform may also be obtained from this process, which may be summarized as:

1. Let f(x) be given, where f(x) = 0 for |x| ≥ A.

2. For some P > 1, discretize the function on the interval [−PA, PA] by choosing N equally spaced points nΔx, −N/2 + 1 ≤ n ≤ N/2, where


Δx = 2PA/N. The DFT on these points is an approximation to the Fourier coefficients for the 2PA-periodic function coinciding with f(x) on [−PA, PA], and also to the sampled Fourier transform values f̂(ω_k). As N → ∞, Δx → 0 and the DFT converges to the Fourier coefficients of this function.

3. The IDFT for the DFT performed in (2) is a truncated Fourier series synthesis of the 2PA-periodic extension of f. Letting N → ∞ in this context is to utilize all the terms of the series, and the sum converges to f(2PAn/N) for n = −N/2 + 1, ..., N/2.

4. Letting P → ∞ in steps (2) and (3) has the effect of mapping the interval of integration in the forward transform from [−PA, PA] to (−∞, ∞). The Fourier series analysis represented by the DFT then converges to (a constant multiple of) the Fourier transform. For the IDFT, letting P → ∞ causes Δω → 0, so the IDFT converges to an integral. The Fourier series synthesis then becomes the inverse Fourier transform. This may be described symbolically by: as P → ∞ and N → ∞, DFT → FT and IDFT → IFT.

3.6 Error in Using the DFT to Approximate the FT

The previous sections have been devoted to examining the relationship between the DFT and the FT. The purpose has thus far been to answer the


question, what is approximated by the DFT? It is also important to ask, how good is the approximation? Accordingly, we turn our attention to determining expressions that estimate the error that is committed when a DFT is used to approximate values of the FT. Several expressions will prove useful in the analysis of the image reconstruction methods that follow.

3.6.1 Trapezoidal Rule Error

Since the DFT is a numerical quadrature using the trapezoidal rule, we begin by examining the error bound for that quadrature method. This error bound may be found in any standard text on numerical analysis. It is:

Theorem 3.2 ([15]) Let g(x) ∈ C²[a, b] and let h = (b − a)/N. Then if ∫_a^b g(x) dx is approximated by

    T_N{g} = h [ (1/2)g(a) + Σ_{j=1}^{N−1} g(a + jh) + (1/2)g(b) ],

the error in the approximation is

    E_N{g} = ∫_a^b g(x) dx − T_N{g} = −((b − a)h²/12) g″(ξ)  for some ξ ∈ [a, b],

and is therefore bounded according to

    |E_N{g}| ≤ ((b − a)h²/12) max_{ξ∈[a,b]} |g″(ξ)|.

Applied to the trapezoidal summation of the DFT, with x_n = 2An/N and


ω_k = kπ/A, the error term becomes

    E_N = ∫_{−A}^{A} f(x) e^{−iω_k x} dx − Δx Σ_{n=−N/2+1}^{N/2} f(x_n) e^{−i2πnk/N}
        = −((2A)³/(12N²)) (d²/dx²)(f(x) e^{−ikπx/A}) |_{x=ξ},  ξ ∈ [−A, A].

Several comments are in order regarding the use of this error term for the DFT. First, the error statement depends on the integrand belonging to C²[−A, A]. While this is a reasonable condition for many problems, the projections used in the image reconstruction problem are frequently not even continuous. Second, this error description has some limitations even if applied to the FT of functions that are in C²[−A, A]. The FT requires an integration over (−∞, ∞). Unless f(x) = 0 for |x| ≥ A, the DFT does not approximate the entire integral, and this is a source of additional error.

Suppose, however, that f(x) ∈ C²[−A, A] and f(x) = 0 for |x| ≥ A. Then the error term given above is useful, in that for every fixed value ω_k = kπ/A, the DFT converges to f̂(ω_k) as N → ∞. Examining the error more closely, the derivative term is

    (d²/dξ²)(f(ξ) e^{−ikπξ/A}) = [ −(kπ/A)² f(ξ) − 2i(kπ/A) f′(ξ) + f″(ξ) ] e^{−ikπξ/A}
                               = k² [ −(π/A)² f(ξ) − (2iπ/(kA)) f′(ξ) + (1/k²) f″(ξ) ] e^{−ikπξ/A}.

Letting h(ξ) represent everything to the right of the k² in this last expression, the trapezoidal rule error term for the DFT becomes


    E_N = −((2A)³/12) (k/N)² h(ξ).                                  (3.10)

For fixed k, then, the error in computing f̂(ω_k) diminishes as (k/N)² as N → ∞. However, (3.10) is less useful if the goal is to show that, as N → ∞, the vector generated by the DFT approaches the corresponding vector of values of the FT. This is because, for any N, the DFT computes approximate Fourier transform values for frequencies up to k = N/2. The error bound at the highest frequencies is thus

    |E_N| ≤ ((2A)³/12) (1/2)² max_{ξ} |h(ξ)|  for k = N/2,

which is no better than O(1). The trapezoidal rule error bound is therefore not useful if the goal is to show that a sequence of DFT vectors approaches the vector of FT values. Other tools are required for that task.

3.6.2 Error by Summation

A useful error expression can be derived by a process that also leads to the Poisson Summation Formula. The analysis that follows is classical; it is repeated here because the process itself sheds light on the nature of the errors in the DFT approximation to the FT. Let f(x) be a function possessing a Fourier transform f̂(ω). It is not required that f(x) have compact support. Select a spatial sampling interval Δx and consider the 2π/Δx-periodic function of ω

    g(ω) = Σ_{l=−∞}^{∞} f̂(ω − 2πl/Δx),  −π/Δx < ω ≤ π/Δx.           (3.11)

This function may be expanded in a Fourier series on the interval [−π/Δx, π/Δx]:

    g(ω) = Σ_{n=−∞}^{∞} c_n e^{−inΔxω},                              (3.12)

where

    c_n = (Δx/2π) ∫_{−π/Δx}^{π/Δx} g(ω) e^{inΔxω} dω.

Substituting from (3.11), the integral is

    c_n = (Δx/2π) ∫_{−∞}^{∞} f̂(ω) e^{inΔxω} dω = (Δx/√(2π)) f(nΔx).

Substituting this into (3.12) yields the series representation

    g(ω) = (Δx/√(2π)) Σ_{n=−∞}^{∞} f(nΔx) e^{−inΔxω}.                (3.13)

Comparing (3.12) and (3.13), the two series expansions for g(ω), we see that

    (Δx/√(2π)) Σ_{n=−∞}^{∞} f(nΔx) e^{−inΔxω} = Σ_{l=−∞}^{∞} f̂(ω − 2πl/Δx).

In the case ω = 0 this expression is the Poisson Summation Formula. For the purpose of determining the error in using the DFT to approximate the FT, this expression is rearranged to read

    f̂(ω) = (Δx/√(2π)) Σ_{n=−∞}^{∞} f(nΔx) e^{−inΔxω} − Σ_{l≠0} f̂(ω − 2πl/Δx).    (3.14)
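Equation (3.14) can be verified to machine precision on a function whose transform is known exactly. The sketch below is our own illustration, not from the thesis: f is a unit Gaussian, deliberately sampled at the coarse interval Δx = 2 so that the aliasing term is large, and both sides of the identity are computed directly.

```python
import cmath, math

def f(x):
    return math.exp(-x*x/2)

def fhat(w):
    # F{exp(-x^2/2)} = exp(-w^2/2) under the symmetric convention
    return math.exp(-w*w/2)

dx, w = 2.0, 1.0
# left side of (3.14) rearranged:
#   fhat(w) - (dx/sqrt(2*pi)) * sum_n f(n dx) exp(-i n dx w)
sampled = dx/math.sqrt(2*math.pi)*sum(f(n*dx)*cmath.exp(-1j*n*dx*w)
                                      for n in range(-40, 41))
lhs = fhat(w) - sampled.real        # the sum is real, since f is even
# right side: minus the aliased copies of fhat at w - 2*pi*l/dx, l != 0
rhs = -sum(fhat(w - 2*math.pi*l/dx) for l in range(-50, 51) if l != 0)
```

Both sides agree to roundoff, and neither is small: at this coarse Δx the aliasing error is roughly 0.1, an order-one fraction of f̂(1) = e^{−1/2}.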


3.6.3 Uniform Bounds for the DFT Error

Consider the following well-known fact about the Fourier coefficients of a periodic function with p continuous derivatives:

Theorem 3.3 ([48], [49]) Let f(x) be a periodic function. If f(x) has p continuous derivatives, then for some positive constant G the Fourier coefficients of f(x) satisfy

    |c_k| ≤ G/|k|^p                                                 (3.15)

for every k.

We can use this fact to develop a uniform bound for the error in using the DFT to approximate the FT of a spatially-limited function. Suppose f(x) = 0 for |x| ≥ A, and let f be sampled on a regular grid at the points x_n = nΔx = 2An/N. Then with ω_k = kΔω, where Δω = π/A, the DFT may be used to compute an approximation to f̂(ω_k), as noted earlier. Equation (3.14) then becomes

    f̂(ω_k) − √(2/π) (A/N) Σ_{n=−N/2+1}^{N/2} f(x_n) e^{−i2πnk/N} = −Σ_{l≠0} f̂(ω_k − NlΔω).    (3.16)

Following a method similar to that of [1], equation (3.16) and Theorem 3.3 can now be used to prove

Theorem 3.4 Let f(x) = 0 for |x| ≥ A, and let the 2A-periodic extension of f(x) belong to C^p(−∞, ∞), p > 1. If the Fourier transform values f̂(ω_k) are approximated by

    f̃_k = √(2/π) (A/N) Σ_{n=−N/2+1}^{N/2} f(x_n) e^{−i2πnk/N}


for k = −N/2 + 1, −N/2 + 2, ..., N/2, then the error in this approximation satisfies

    |f̂(ω_k) − f̃_k| ≤ C/N^p,

where C is a constant that is independent of k or N.

Proof: Theorem 3.3 holds, since the 2A-periodic extension of f(x) is in C^p(−∞, ∞). Applying the triangle inequality in equation (3.16) yields

    |f̂(ω_k) − f̃_k| ≤ Σ_{l≠0} |f̂(ω_k − NlΔω)|.                       (3.17)

In this case, the Fourier coefficients of f(x) satisfy c_k = (1/A)√(π/2) f̂(ω_k), so (3.17) may be written

    |f̂(ω_k) − f̃_k| ≤ √(2/π) A Σ_{l≠0} |c_{k−Nl}|,

and, by Theorem 3.3,

    |f̂(ω_k) − f̃_k| ≤ √(2/π) A G Σ_{l≠0} |1/(k − Nl)|^p.

Factoring N^{−p} from the series, and letting H = √(2/π) A G, this becomes

    |f̂(ω_k) − f̃_k| ≤ (H/N^p) Σ_{l≠0} |1/(k/N − l)|^p.                (3.18)

The series in this last expression may be rewritten as

    Σ_{l=1}^{∞} |1/(k/N − l)|^p + Σ_{l=1}^{∞} |1/(k/N + l)|^p.        (3.19)

Noting that −1/2 ≤ k/N ≤ 1/2, two cases need to be considered. First, assume that 0 ≤ k/N ≤ 1/2. In this case the second series in (3.19) converges by


comparison with the Riemann zeta function

    ζ(p) = Σ_{n=1}^{∞} 1/n^p,  p > 1,

which converges to a positive constant. Letting m = l − 1 and a = 1 − k/N, the first series in (3.19) may be written

    Σ_{m=0}^{∞} |1/(a + m)|^p = |1/a|^p + Σ_{m=1}^{∞} (1/(a + m))^p.

Since 1/2 ≤ a ≤ 1, the sum in this last expression also converges by comparison to ζ(p). Noting that |1/a|^p ≤ 2^p, then

    Σ_{l=1}^{∞} |1/(k/N − l)|^p + Σ_{l=1}^{∞} |1/(k/N + l)|^p ≤ 2^p + 2ζ(p)  for 0 ≤ k/N ≤ 1/2.

On the other hand, if −1/2 ≤ k/N ≤ 0, then the first series in (3.19) converges by comparison to ζ(p), while the substitutions m = l − 1 and a = k/N + 1 may be used in the second series to show that it, too, is bounded by 2^p + ζ(p), and therefore (3.19) is bounded by 2^p + 2ζ(p) for the entire range |k/N| ≤ 1/2. Letting C = H(2^p + 2ζ(p)), and substituting this expression back into (3.18), establishes

    |f̂(ω_k) − f̃_k| ≤ C/N^p.  ∎                                      (3.20)

Examples will be shown shortly demonstrating that this error bound is loose. The error in using the DFT to approximate the FT frequently diminishes more rapidly than this bound suggests. If the conditions on f(x) are more carefully


specified, however, stronger results may be obtained. A function f(x) is said to satisfy Dirichlet's conditions in the interval (a, b) if it satisfies one of the following:

1. f(x) is bounded in (a, b), and the interval can be broken up into a finite number of open sub-intervals, in each of which f(x) is monotonic; or

2. f(x) has a finite number of infinite discontinuities in the interval (a, b), and when arbitrarily small neighborhoods of these points are excluded, f(x) is bounded in the remainder of the interval, and this can be broken up into a finite number of open sub-intervals, in each of which f(x) is monotonic. Further, the integral ∫_a^b f(x) dx is absolutely convergent.

With these conditions, stronger statements can be made about the behavior of the coefficients, for example:

Theorem 3.5 ([9]) Let f(x) be bounded and satisfy Dirichlet's conditions in (−A, A). Then the Fourier coefficients for the 2A-periodic extension of f(x) satisfy

    |c_k| ≤ M/|k|

for some constant M > 0 independent of k.

For any p ≥ 1, assume that f(x) and its derivatives, up to the (p − 1)-st, are bounded, continuous, and satisfy Dirichlet's conditions in (−A, A). For r = 0, 1, ..., p − 1, assume that

    lim_{x→A⁻} f^{(r)}(x) = lim_{x→−A⁺} f^{(r)}(x).


If the p-th derivative is bounded and satisfies Dirichlet's conditions in the same interval, then the Fourier coefficients for the 2A-periodic extension of f(x) satisfy

    |c_k| ≤ M/|k|^{p+1}

for some constant M > 0 independent of k.

With the aid of Theorem 3.5, we can strengthen the bound given in Theorem 3.4, at the cost that f(x) must satisfy a more stringent set of conditions.

Theorem 3.6 For p ≥ 1, assume that f(x) and its derivatives, up to the (p − 1)-st, are bounded, continuous, and satisfy Dirichlet's conditions in (−A, A). For r = 0, 1, ..., p − 1, assume that

    lim_{x→A⁻} f^{(r)}(x) = lim_{x→−A⁺} f^{(r)}(x),

and that the p-th derivative is bounded and satisfies Dirichlet's conditions in the same interval. If the Fourier transform values f̂(ω_k) are approximated by

    f̃_k = √(2/π) (A/N) Σ_{n=−N/2+1}^{N/2} f(x_n) e^{−i2πnk/N}

for k = −N/2 + 1, −N/2 + 2, ..., N/2, then the error in this approximation satisfies

    |f̂(ω_k) − f̃_k| ≤ C/N^{p+1},

where C is a constant that depends on f, but is independent of k or N.

Proof: Under the conditions of the theorem, the Fourier coefficients satisfy |c_k| ≤ M/|k|^{p+1}, by Theorem 3.5. The proof now follows by substitution of this condition in place of (3.15) in the proof of Theorem 3.4.  ∎
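Theorem 3.6 predicts that doubling N should reduce the worst-case error by a factor near 2^{p+1}. A sketch of a numerical check (ours, not from the thesis; the hat function 1 − |x| is continuous while its first derivative has jumps, so p = 1 and the predicted decay is N⁻²):

```python
import cmath, math

A = 1.0

def f(x):
    return max(1.0 - abs(x), 0.0)   # continuous, f' has jumps: p = 1

def fhat(w):
    if abs(w) < 1e-12:
        return 1.0/math.sqrt(2*math.pi)
    return math.sqrt(2/math.pi)*(1.0 - math.cos(w))/(w*w)

def max_err(N):
    # worst error of the approximation f~_k over all frequencies k*pi/A
    dx = 2*A/N
    worst = 0.0
    for k in range(-N//2 + 1, N//2 + 1):
        s = sum(f(n*dx)*cmath.exp(-2j*math.pi*n*k/N)
                for n in range(-N//2 + 1, N//2 + 1))
        err = abs(dx/math.sqrt(2*math.pi)*s - fhat(k*math.pi/A))
        worst = max(worst, err)
    return worst

e32, e64 = max_err(32), max_err(64)
ratio = e32/e64   # expected to be near 2^(p+1) = 4
```

The observed ratio is close to 4, confirming the uniform N⁻² behavior even at the highest output frequencies, where the trapezoidal-rule bound of Section 3.6.1 gave only O(1).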


We now have an error bound for the approximation of the FT of a space-limited function f(x) by the DFT that is applicable to a fairly wide class of functions. Essentially, a bound is given for any f(x) that is bounded and continuous on (−A, A), and has only finitely many intervals of monotonicity. There is one class of function, however, that occurs frequently in the image reconstruction problem, to which this bound does not apply: the class defined in the first part of Theorem 3.5, which includes functions that are discontinuous, but otherwise well behaved. By Theorem 3.5, the Fourier coefficients of such functions are bounded in absolute value by M/|k| for some positive constant M. However, the technique used in Theorem 3.4 will not work in this case, since the Riemann zeta series diverges for p = 1. Therefore, a somewhat more complicated procedure is required. We do not attempt to address this entire class of functions. Rather, we restrict our attention to functions that have a finite number of jump discontinuities, since this property is characteristic of the projections of many objects.

Theorem 3.7 Assume that f(x) = 0 for |x| ≥ A and that f is bounded in (−A, A) and continuous except for a finite number, q, of jump discontinuities. Suppose also that f(x) and its first derivative satisfy Dirichlet's conditions on every open subinterval of (−A, A) that does not contain a jump discontinuity of f. Let the Fourier transform values f̂(ω_k) be approximated by the DFT of the samples f(x_n)


for k = −N/2 + 1, −N/2 + 2, ..., N/2. Then the error in this approximation satisfies

    |f̂(ω_k) − DFT{f}_k| ≤ C / N,

where C is a constant that is independent of k or N.

Proof: Note that the compact support implies that f(−A⁺) = f(A⁻). Let the points at which the q jump discontinuities occur be designated x̃_1, x̃_2, ..., x̃_q. Let h = 2A/N, and define the points x_n = nh. Then the DFT will be computed using values of f(x_n) for n = −N/2 + 1, −N/2 + 2, ..., N/2. From the set of points x_n, identify, for each discontinuity x̃_j, the pair of adjacent gridpoints y_j and z_j = y_j + h that bracket it. Letting z_0 = −A and y_{q+1} = A, construct the auxiliary function g(x), defined as

    g(x) = f(x)                                                     for x ∈ ⋃_{j=0}^{q} [z_j, y_{j+1}],
    g(x) = f(y_j) + [(f(z_j) − f(y_j)) / (z_j − y_j)] (x − y_j)     for x ∈ [y_j, z_j],

so that on each interval [y_j, z_j], g is the line joining (y_j, f(y_j)) and (z_j, f(z_j)). Figure 3.2 shows f(x) and g(x). The function g(x) is bounded, continuous, and satisfies Dirichlet's conditions in (−A, A), while its first derivative is bounded and satisfies Dirichlet's conditions on the same interval. Therefore, by Theorem 3.6, the error in computing the transform values ĝ(ω_k) by the DFT,


" j(x) g(x) Figure 3.2: The function f(x) and the auxiliary function g(x). fork= -N/2 + 1, -N/2 + 2, ... Nf2 is bounded by (3.21) for some constant G > 0. Now g(x) coincides with f(x) except on the q subin tervals (Yi> z;) for j = 1, 2, ... q. Therefore, Since f is bounded in (-A, A), lf(x)l :<:; M for some M > 0, and, by the definition of the auxiliary function, /g(x)-f(x)l :<:;2M for all x in (-A, A). Thus, and, noting that each of the intervals (y;,z;) has width 2AfN, (3.22) Finally, invoking the triangle inequality for (3.21) and (3.22), while noting that 52


by their definitions g̃_k = f̃_k (the samples of f and g coincide on the grid, since each interval (y_j, z_j) contains no interior gridpoint), we have

    |f̂(ω_k) − DFT{f}_k| ≤ |f̂(ω_k) − ĝ(ω_k)| + |ĝ(ω_k) − DFT{g}_k| ≤ B/N + G/N² ≤ C/N,

where C = B + G, which was to be proved. ∎

3.6.4 Numerical Examples

That the error bounds derived above are relatively sharp can be demonstrated by several numerical examples. Several functions are used, and in each case the DFT of the sampled function f(x_n) has been computed for a number of different values of N. The Fourier transform of each function was also computed analytically. The results are displayed in Figure 3.3. For each function, a variety of choices for N was used, and the natural logarithm of the maximum error over the computed frequencies, max_{|k|≤N/2} |f̂(ω_k) − DFT{f}_k|, was plotted, along with inverse powers of N for comparison.

1. The first example is a square pulse, equal to 1 for |x| < a and 0 otherwise. The function is not continuous, but satisfies the conditions of Theorem 3.7. The Fourier coefficients for f(x) are

    c_0 = a,  and  c_k = a sinc(kπa),  k = ±1, ±2, ...,

so that |c_k| = O(|k|^{−1}). The error in approximating the FT by the DFT should be O(1/N), by Theorem 3.7. The graph of the error appears in the upper left corner of the figure. For certain values of N the maximal error falls right on the predicted value, and for all other values it is somewhat better. The overall trend of the graph follows the predicted curve.

2. The second example is the chapeau function,

    f(x) = Λ(x) = 1 − |x|  for |x| ≤ 1,  and 0 otherwise.

The Fourier coefficients for f(x) are

    c_0 = 1/2,  and  c_k = [1 + (−1)^{k+1}] / (kπ)²,  k = ±1, ±2, ....

The function f(x) satisfies the conditions of Theorem 3.6 with p = 1, so that the error should be O(1/N²). The graph is the uppermost on the right side of the figure. Only values of N corresponding to powers of 2 were used, and thus a curve behaving as O(N^{−p}) appears as a straight line with slope −p. It may be seen that the error is almost exactly O(N^{−2}), as predicted.
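The chapeau-function experiment can be reproduced in a few lines. The following sketch is our own illustration (not the thesis code): it adopts the normalization f̂(ω) = (1/√(2π)) ∫ f(x) e^{−iωx} dx, evaluates the scaled DFT sum directly rather than by FFT, and measures the maximum error over the computed frequencies for several N:

```python
import numpy as np

A = 1.0
f = lambda x: np.maximum(0.0, 1.0 - np.abs(x))        # chapeau function

def fhat(w):
    # analytic transform sqrt(2/pi)*(1 - cos w)/w^2, with the w = 0 limit 1/sqrt(2*pi)
    w = np.asarray(w, dtype=float)
    out = np.full_like(w, 1.0 / np.sqrt(2 * np.pi))
    nz = w != 0
    out[nz] = np.sqrt(2 / np.pi) * (1 - np.cos(w[nz])) / w[nz] ** 2
    return out

def dft_error(N):
    h = 2 * A / N
    n = np.arange(-N // 2 + 1, N // 2 + 1)
    x = n * h                                          # sample points x_n = nh
    w = n * np.pi / A                                  # frequencies w_k = k*pi/A
    kernel = np.exp(-1j * np.outer(w, x))              # direct O(N^2) evaluation
    approx = (h / np.sqrt(2 * np.pi)) * kernel @ f(x)  # Riemann-sum (scaled DFT) values
    return np.max(np.abs(fhat(w) - approx))

errs = {N: dft_error(N) for N in (64, 128, 256)}
print(errs)
print("ratios:", errs[64] / errs[128], errs[128] / errs[256])   # near 4 = 2^2
```

Doubling N should reduce the maximum error by roughly a factor of 4, consistent with the predicted O(1/N²) behavior for p = 1.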


3. Let f(x) be defined by

    f(x) = 1 − x²  for x ∈ [−1, 1],  and 0 otherwise,

with Fourier coefficients given by

    c_0 = 2/3,  and  c_k = 2(−1)^{k+1} / (kπ)²,  k = ±1, ±2, ....

This function satisfies the conditions of Theorem 3.6 with p = 1, so that the error should be O(1/N²). The error appears in the center graph on the left. As in the previous example, only values of N corresponding to powers of 2 were used, so that an error behaving as O(N^{−p}) appears as a straight line with slope −p. Again, the behavior of the error is almost exactly as predicted.

4. Let f(x) be defined by

    f(x) = (1 − x²)²  for x ∈ [−1, 1],  and 0 otherwise.

The Fourier coefficients are

    c_0 = 8/15,  and  c_k = 24(−1)^{k+1} / (kπ)⁴,  k = ±1, ±2, ....

This function satisfies the conditions of Theorem 3.6 with p = 3, so that the error should be O(1/N⁴). The lower graph on the right corresponds to this experiment, and it may be seen that the error behaves nearly as predicted. The size of the error diminished so rapidly with N that it was not possible to measure it accurately for any values of N greater than 90.


5. Let f(x) be given by

    f(x) = (1 − x²)⁴  for x ∈ [−1, 1],  and 0 otherwise.

The Fourier coefficients for f(x) are

    c_0 = 128/315,  and  c_k = (−1)^k [3840/(kπ)⁶ − 40320/(kπ)⁸],  k = ±1, ±2, ....

This function satisfies the conditions of Theorem 3.6 with p = 5, so that the error should be O(1/N⁶). The graph is lowermost on the left, and the results are not as precise as could be hoped, mainly because the magnitude of the error falls off so rapidly that it could not be measured accurately for values of N larger than 32. While the error appears to be closer to O(N^{−5}) than the predicted O(N^{−6}), the latter may be the asymptotic behavior.

3.7 Error of the Inverse DFT

The error that is committed if a function f(x) is approximated by the IDFT may be analysed with the same procedure that led to the DFT error expression (3.14). The roles of the frequency and spatial domains are interchanged, however. As before, assume that f(x) has a Fourier transform given by f̂(ω). Let A > 0 be specified, and suppose the values of f(x) for x ∈ [−A, A] are used to construct the 2A-periodic function

    g(x) = Σ_{l=−∞}^{∞} f(x − 2Al).    (3.23)


" _, ... ... _, _,. _, f(:rJ=x(:r) --W' --' N-J'' -w' "" "' ,., '" w' f(:r}=(l-:r') W' f(:r)=(l-:r2)' _, W' _, _, ....., N-.2 f(:r}=A(:r) N-J' ...... _, f(:r}=(1-:r')2 -------w' ---_, N- -..;; -" _, Computed Errors of t!uJ DFT Natural. Log of Error plotted as a. solid line, Inverse powers of N are plotted as dashed lines for compariSOn. N-' ., Figure 3.3: Errors in approximating the FT by the DFT. 57


Expanding g(x) in a Fourier series, we have

    g(x) = Σ_{k=−∞}^{∞} c_k e^{−ikπx/A},

where

    c_k = (1/2A) ∫_{−A}^{A} g(x) e^{ikπx/A} dx.

Substituting the definition of g(x) into the integral yields

    c_k = (1/2A) ∫_{−A}^{A} Σ_{l=−∞}^{∞} f(x − 2Al) e^{ikπx/A} dx
        = (1/2A) ∫_{−∞}^{∞} f(x) e^{ikπx/A} dx
        = (√(2π)/2A) f̂(kπ/A).    (3.24)

Therefore, the series representation for g(x) is

    g(x) = (√(2π)/2A) Σ_{k=−∞}^{∞} f̂(kπ/A) e^{−ikπx/A},    (3.25)

and equating (3.25) with (3.23) yields

    (√(2π)/2A) Σ_{k=−∞}^{∞} f̂(kπ/A) e^{−ikπx/A} = Σ_{l=−∞}^{∞} f(x − 2Al).

Rearranging this yields the expression

    f(x) = (√(2π)/2A) Σ_{k=−∞}^{∞} f̂(kπ/A) e^{−ikπx/A} − Σ_{l≠0} f(x − 2Al),    (3.26)

which should be compared to (3.14). If f(x) is approximated by the finite sum

    IDFT{f̂} = (√(2π)/2A) Σ_{k=−N/2}^{N/2} f̂(kπ/A) e^{−ikπx/A},

then the error in this approximation is

    f(x) − IDFT{f̂} = (√(2π)/2A) Σ_{|k|>N/2} f̂(kπ/A) e^{−ikπx/A} − Σ_{l≠0} f(x − 2Al).    (3.27)
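The truncation term in the error expression above can be checked numerically. The sketch below (our own illustration) uses the known coefficients of the third numerical example, f(x) = 1 − x², forms the symmetric partial Fourier sum, and watches the maximum error decay:

```python
import numpy as np

x = np.linspace(-1.0, 1.0, 2001)
f = 1.0 - x ** 2

def partial_sum(N):
    """Symmetric partial Fourier sum over k = -N/2..N/2, using c_0 = 2/3 and
    c_k = 2(-1)^(k+1)/(k*pi)^2; the +-k terms pair into cosines."""
    s = np.full_like(x, 2.0 / 3.0)
    for k in range(1, N // 2 + 1):
        ck = 2.0 * (-1) ** (k + 1) / (k * np.pi) ** 2
        s += 2.0 * ck * np.cos(k * np.pi * x)
    return s

for N in (32, 64, 128, 256):
    print(N, np.max(np.abs(f - partial_sum(N))))
```

For this function the truncation error decays like 1/N (quadrupling N cuts the error by about 4), in contrast to the O(1/N²) aliasing error of the forward DFT observed in Section 3.6.4.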


We will be especially interested in functions having compact support, f(x) = 0 for |x| ≥ A. In this case the second term on the right-hand side of (3.26) vanishes and

    f(x) = (√(2π)/2A) Σ_{k=−∞}^{∞} f̂(kπ/A) e^{−ikπx/A} = Σ_{k=−∞}^{∞} c_k e^{−ikπx/A},    (3.28)

which merely says that a compactly supported function possessing a Fourier transform is the sum of its Fourier series. In this case the IDFT is simply the partial sum of the series, and the error in the approximation is

    f(x) − Σ_{k=−N/2}^{N/2} c_k e^{−ikπx/A} = Σ_{|k|>N/2} c_k e^{−ikπx/A}.

For functions satisfying the conditions of Theorem 3.5, it is not difficult to bound the error in approximating f(x) by the partial sum of its Fourier series:

Theorem 3.8 For p ≥ 1, assume that f(x) and its derivatives, up to the (p−1)st, are bounded, continuous, and satisfy Dirichlet's conditions in (−A, A). For r = 0, 1, ..., p−1, assume that

    lim_{x→A⁻} f^(r)(x) = lim_{x→−A⁺} f^(r)(x),

and that the pth derivative is bounded and satisfies Dirichlet's conditions in the same interval. If f(x) is approximated by the partial sum of its Fourier series,

    Σ_{k=−N/2}^{N/2} c_k e^{−ikπx/A},

then the error in this approximation satisfies

    | f(x) − Σ_{k=−N/2}^{N/2} c_k e^{−ikπx/A} | ≤ C / N^p,

where C is a constant independent of N.


Proof: Bounding the error by the triangle inequality yields

    | f(x) − Σ_{k=−N/2}^{N/2} c_k e^{−ikπx/A} | ≤ Σ_{|k|>N/2} |c_k|.

By Theorem 3.5, the Fourier coefficients satisfy |c_k| ≤ M/|k|^{p+1} for some M > 0. Therefore the integral test gives

    Σ_{|k|>N/2} |c_k| ≤ 2M ∫_{N/2}^{∞} t^{−(p+1)} dt = (2M/p)(2/N)^p ≤ 2^{p+1} M / N^p.

Letting C = 2^{p+1} M completes the proof. ∎

3.8 The Grids

In light of the previous discussion relating the FT, the DFT, and the Fourier series representation of a function, we can establish a system of grids that will be useful throughout this work. In order to specify a grid, a notational convention must be adopted. The notation must indicate whether a given grid contains spatial domain or frequency domain data, as well as indicating the extent and sample rate of the grid. It must also indicate whether the grid is in Cartesian or polar coordinates. Accordingly, we define the following conventions.


The standard or natural spatial grid is that grid whose extent and sampling are determined by the original discretization of the problem (for image reconstruction, this means the grid on which the original projection data are recorded). The natural spatial grid in one dimension is designated by the symbol Ω_N^h. The h indicates the grid spacing, that is, h = Δx or, in the case of a projection, h = Δp. The number of samples on the grid is given by the subscript N, and the locations of the gridpoints are given by x_n = nh (or p_n = nh). The spatial extent of the grid, which is designated by X, is determined by X = Nh. The convention will be that, as n runs from −N/2 + 1 to N/2, x_n runs from −X/2 + h to X/2.

This notation can be extended to define multi-dimensional grids. Here we are concerned with two-dimensional grids. For the reconstruction problem, for example, the natural spatial grid underlies the set of all projections. This grid is denoted Ω_{NM}^{hθ}, where N designates the number of data points per projection, with a spacing h = Δp, and M designates the number of projections, with an angular spacing of θ = π/M.

The standard or natural frequency grids are those obtained by using a DFT to approximate the FT of data given on a standard spatial grid. The same type of superscript/subscript notation will be employed, using the symbol Γ to indicate that the grid data are in the frequency domain. Thus Γ_N^λ is the grid containing the transform of data on Ω_N^h. Again, it is assumed that the grid is centered, with N points and a frequency spacing of λ = Δω. The length of the grid is F = Nλ,


and it extends from −F/2 + λ to F/2. Similarly, the two-dimensional grid Γ_{NM}^{λθ} is used for the transforms of the projections, which are defined on Ω_{NM}^{hθ}. Note that Γ_{NM}^{λθ} is a two-dimensional grid containing the M one-dimensional transforms of the M projections. It will be seen in Chapter 4 that it is necessary to interpolate the polar coordinate frequency domain data, which is on Γ_{NM}^{λθ}, to a Cartesian frequency grid, which is designated Γ_{NN}^{λλ}. Finally, a Cartesian coordinate spatial grid is needed for the reconstructed image, so the symbol Ω_{NN}^{hh} is employed.

We will use this notation rather loosely, in that a symbol like Γ_{NN}^{λλ} will be used both for the grid itself and for the space of grid functions that are defined on the grid. This allows operator notation to be employed without ambiguity. Thus the Fourier transform of a one-dimensional data set may be indicated by the mapping

    F: Ω_N^h → Γ_N^λ,

while the inverse two-dimensional Fourier transform would be the mapping

    F₂⁻¹: Γ_{NN}^{λλ} → Ω_{NN}^{hh}.

Often, an operator is to be employed only on the first index of a two-dimensional data set, in which case the subscript "1" will be used on the operator. For example, taking Fourier transforms of each of the views in a set of projections may be designated by the mapping

    F₁: Ω_{NM}^{hθ} → Γ_{NM}^{λθ}.


3.9 The Grid Reciprocity Relations

The DFT reciprocity relation (3.4) will play an important role in the development of reconstruction algorithms. Using the notation for the grids, the reciprocity relation is

    hλ = 2π/N,    (3.29)

which implies that

    XF = 2πN.    (3.30)

These relations will be important in that they govern which grids are employed in moving between spatial and transform domains. This is demonstrated as follows: The Fourier transform of a grid function on Ω_N^h is a grid function on Γ_N^λ. Let this be indicated by the notation

    F: Ω_N^h ↔ Γ_N^λ.

Equation (3.29) implies that if the sample interval in the spatial domain is doubled to 2h while the number of points on the grid is held constant at N, then the grid spacing on the frequency domain transform is halved, to λ/2. This Fourier transform pair can be indicated by

    F: Ω_N^{2h} ↔ Γ_N^{λ/2}.

The extent of these grids is determined by (3.30). Therefore, N(2h) = 2X and N(λ/2) = F/2. So the spatial grid has twice the extent and twice as coarse


Figure 3.4: Reciprocity relationships on the grids. The fine grid in space is mapped by the DFT to a coarse grid in frequency, while the IDFT maps a fine grid in frequency to a coarse grid in space.

a sample rate as the natural grid Ω_N^h, while the corresponding frequency grid has half the extent and twice as fine a sample rate as the natural grid Γ_N^λ. These relationships are illustrated in Figure 3.4. In the language of multigrid, we would say that transforming a fine grid function in space produces a coarse grid function in frequency, while a coarse grid function in space is transformed to a fine grid function in frequency.

3.10 DFTs on Irregular Sets: The Anterpolated DFT

As defined in (3.3), the DFT is performed on data that are given on a regular grid, with constant spacing between the data points. In many applications, the data for a problem are not spaced regularly. It is of interest, then, to determine how a discrete Fourier transform may be computed for such a data set. To perform this computation, we develop an idea originally due to Achi Brandt ([3],


[4], [5]). Later, for the Radon inversion, the method will be extended to two dimensions.

To begin, it is necessary to decide what is meant by a discrete Fourier transform for irregularly spaced data. Therefore, the concept of a formal DFT is introduced, which is defined as follows: Consider any set of N ordered points in the interval [0, X), satisfying 0 ≤ x₀ < x₁ < ... < x_{N−1} < X, together with data values u(x_j). The formal DFT of these data is the set of values

    û(ω_l) = Σ_{j=0}^{N−1} u(x_j) e^{−iω_l x_j},    ω_l = 2πl/X,    (3.31)

for the frequencies of interest, l = −M₁, −M₁ + 1, ..., M₂. Evaluated directly, this sum requires O(MN)

operations. Instead of forming this sum directly, though, an approximation to it will be computed, using an F FT to accelerate the computation. The procedure begins with the definition of an auxiliary grid, n,., covering the range of values [0, X). Let N, be an integer, whose value will be determined shortly, and let the grid spacing h be defined by h == X/ N,. Then the auxiliary grid consists of the points yT = (r1)h, for r = 1, 2, ... N,. Suppose that the value of some function g(yT) on the regular grid n,. is to be interpolated to the gridpoints :Vj by Lagrangian interpolation. We will identify the interpolation by specifying the degree of the polynomial to be used. Thus, p-degree Lagrangian interpolation is computed using a polynomial of degree p or less. For each :c;, the p+ 1 nearest neighbors on the grid fll\.. must be located. Let these points be designated ... Y"(j,p) The interpolation neighbors should be determined modulo X, so that a point near the limits of the interval [0, X) may have neighbors near the other end of the interval. (This is justified because, as will be seen shortly, the function to be interpolated for the formal DFT is X-periodic.) For each :c;, the p+ 1 Lagrangian interpolation weights are computed by (3.32) The interpolation of the function g to the grid points :Vj is given p g(:cj);:::: ,L Wn(:Vj)g(y"(j,n)) n=O Letting g be the vector of function values g(yk) and g the vector of interpolated 66 '"


values g(xj), the interpolation may be written in matrix form g = [7;], where r: is the N x N. interpolation matrix mapping a function on !1}. to the gridpoints {xj}. The entries of this matrix are l Wn(xj) if = m [7;]jm = 0 otherwise. We are now ready to compute an approximation to equation (3.31). The strategy will be to interpolate, for each Wt, values of the kernel e-iw,z; from the auxiliary grid D} That is, equation (3.31) is approximated by N-1 u(wz)"" I: u(xj)e(wz,xj) (3.33) i=O where p e(wz,xj) = L wn(xj)e-'"''"Ci,l. n=O Let the column vector of exponential values e-iwY be designated ei, and the vee-tor of interpolated kernel values e(wt, Xj) be denoted ez. Then this interpolation can be written Notice, however, that e1 is to be used as the l'h row of the matrix giving the kernel of the summation in (3.33). To compute e1 as a row vector, the adjoint of the matrix equation equation is needed, namely (ezl = (I;et)r = (eif[I;f 67 I


Define theM-vector u = u(w1 ) and theN-vector ii = u(x;), and denote the matri-ces giving the kernels of equations (3.31) and (3.33) by Wand W, respectively. Let the M X N. matrix whose [th row consists of e-iw,y. for the N. points on be designated W. (It is useful to observe that this matrix consists of M consecutive rows of the standard DFT kernel for a uniform grid with N. points.) Then the formal DFT is approximated by u Wit Wit = W[I;]Ti This notation can be simplified slightly by denoting the vector created by multiplying it by [I;JT as u. Since the matrix [I;JT is the adjoint of the Lagrangian interpolation matrix, the process of computing ii = [I;JT it has been dubbed anterpolation. Then the approximation to the formal D FT is (3.34) which we call the anterpolated discrete Fourier transform ( ADFT). The ADFT, as a matrix multiplication, requires O(M N.) operations. In general, N. will exceed N, so as a matrix-vector multiplication, the ADFT has no advantage over (3.31). If, however, N. is selected appropriately, the approximation can be computed quite rapidly. To see this, let M. = max{Mt. M2}. Then if N. is selected such that N. :?: 2M., and at the same time N. is a number for which an F FT module exists, then the fast Fourier transform can be applied to 68


compute the DFT summation N. N. N. for l = -2 + 1,-2 + 2, ... 2 Recalling that h = X/N., it may be seen that the DFT summation therefore yields (ljN.)u((2rrl)jX). Multiplying by N. thus yields a set of values that includes, as a subset, all the desired values of u(wt). Computing the ADFT, then, consists of two phases: 1. u is computed from ii by anterpolation: u = [I;JT ii. 2. u is computed from u by a fast Fourier transform. 3.10.1 Operation Count for the ADFT The cost of computing the AD FT consists of the cost of computing the interpolation weights, the cost of computing the vector u = [I:JT ii, and the cost of the F FT on N. points. Computing the (p+1)N interpolation weights, wn( x;), by the formula in (3.32) is essentially just the cost of computing the numerator, since the regular spacing on means that the denominators of w ,( "';) are independent of j. To compute p the numerators, the product II (:v;-"'"(i,m)) is computed for each :v;, requiring m=O 2p + 1 operations. Then the n'h interpolation weight can be obtained by dividing by the product of (:vm-X"(j,m)) with the precomputed denominator, requiring 2 operations for each of the p + 1 weights associated with the point x;. Calculation of the weights thus requires O(N(p + 1)) operations. It is important, however, 69


to note that the calculation of the weights is dependent only on the relationships between the gridpoints {yk} and {x;}, and is independent of the data set, u(x;). This means that if a known set of gridpoints {x;} and a standard auxiliary grid nj.., are to be used repeatedly, the interpolation weights wn(x;) may be precomputed and stored, and need not be included in the cost of the algorithm. This will be assumed to be the case. The matrix [I:JT is N, x Nand the data vector ii is N x 1, so the computation of u = [I:JT ii would be 0( N N,) if performed as a matrix-vector multiplication. There is, however, a much more efficient method. The index table tt(j, n) can be stored along with the interpolation weights. For each x; and for each n, the value of tt(j, n) is the index of the n'h interpolation neighbor that is used to interpolate from nj.., to the grid point Xj. The periodic nature of the kernel being interpolated means that the interpolation is always to a grid point x; in the center of the set of p interpolation points. (As is well known [13], Lagrangian interpolation is better behaved when this is the case.) If pis odd, then x; always lies between YK(i,9) and YK(j,2'J=-'-)' while if pis even, then is the closest gridpoint on nj.., to x;. Computing the vector u is then very easy, and may be done in 2N(p + 1) operations, according to the algorithm: 1. Initialize u(yr) = 0 for all Yr E nj.., (1:::; T:::; N.). 2. For j = 0, 1, ... N-1 For n = 0,1, ... ,p 70


Having computed the values of u, consider now the cost of the F FT portion of the ADFT. This is simply the cost of an N,-point F FT. In the next section criteria for choosing N, will be determined. For now the only requirement is that an F FT can be computed on a vector of length N,. As such, N, must have factors for which F FT modules are available (for example, FFTPACK [45] has modules for powers of 2,3,4,5,7, and 11). For the purpose of an operation count, however, it is easiest to assume that N, is a power of 2. Indeed, we shall see that we have great flexibility in our selection of N., and since powers of 2 or 4 produce the most efficient F FTs, this is a good assumption. In this case, the cost of the F FT portion of the ADFT is O(N, log2 N,). The costs of the ADFT can now be computed. In terms of data storage it requires four arrays. One is the N-point vector containing the input data, ii. In addition, an N,-point complex vector is required for the input and output of the F FT. Assuming that the weights are precomputed and stored, two auxiliary arrays are necessary, an N x (p + 1) real (or double precision) array holding the interpolation weights wn(:t;), and theN X (p+ 1) integer array of indices K(j,n). If the operations of multiplication and addition are counted equally, and if the weights and indices are pre-stored, the operation count is C1N, log2 N, complex operations for the F FT portion of the algorithm, where C1 depends on the choice ofF FT algorithm. The computation of u entails 2N(p + 1) operations that are 71


real or complex according to whether iZ is real or complex. Counting all both phases of the algorithm, the total operation count of the AD FT is therefore C,N. log2 N. + 2N(p + 1) This should be compared with the operation count of the formal DFT, which is O(M N). The computation of the ADFT is more efficient provided M is larger than 2(p+ 1) + (N.Iog2 N.)/N), a condition that will generally occur in practice. 3.10.2 Error Analysis for the Anterpolated DFT One of the attractive features of the AD FT is that the interpolation is performed on the kernel, which has known smoothness properties, rather than the data set, which generally has unknown smoothness properties. Since interpolation error depends on the smoothness of the interpolated function, the error committed by the anterpolated D FT is relatively easy to analyse. Consider the error in p-degree Lagrangian polynomial interpolation, when the interpolation is from a set of p + 1 grid points that are equally spaced. Let these grid points be designated eo, e,' ... 'ep A function !(X), whose values are known at these gridpoints, is to be interpolated to the point X E [eo, epl Let X = eo+ th, where his the gridspacing and t E [O,p). The approximation to !(eo+ th) is the value of the Lagrangian interpolation polynomial P,( eo + th ), p Pp(x) = :Ew;(x)f(e;) where i=O 72


Defining 7rp(t) = t(t-l)(t-2) ... (t-p) and ( E [eo,ep], the error in the interpolation is bounded according to (3.35) where Qp+l = See [24], pages 264-270, for a derivation of this error term. It is useful to bound this error more precisely. To do this, it is necessary to examine the behavior of the factorial polynomial 7rp(t). This polynomial has been well-studied, and many results can be found in various standard numerical analysis texts ([13], [24], [37], [51]). Several facts about 7rp(t) will prove useful, and are listed here: If pis even, 7rp(t) is anti-symmetric about the point p/2, while if pis odd, 7rp(t) is symmetric about p/2. If t + 1 is any non-integral point in the interval [O,p/2], then while if tis any non-integral point in the interval [pj2,p], then The maximum value of l1rp(t)! occurs fortE [0,1], and recurs (symmetrically about the point p/2) in [p-l,p]. 73


These facts are often used to bound ma.x,E[O,p) J7rp(t)J. For example, a bound on J7rp(t)J is found in [51] that leads to the error bound (3.36) These standard results, however, are developed for the case that x can be anywhere in In the present case the interpolation is always to the center subinterval. Thus for p odd, t E while for p even, either t E + 1] or t E 1, With this restriction, the error bound in (3.36) can be greatly improved. To shorten the discussion, assume that p is odd. This is the most common method, where p = 1 gives linear interpolation and p = 3 give cubic interpolation. Similar results can be obtained for p even. The error bound will result from application of the following lemma: Lemma 3.1 Let p > 1 be an odd integer and consider the factorial polynomial 1rp(t). Then Proof: max l1rp(t)l < 1)!]2 max,E['T',zt'l J1rp(t)l = maxtE["f',zt'l Jt(t-1)(t-2) ... (t-p)l < C1 C2 Ca 74


where C, = max,EI-'?."t'-llt(t1) ... (t+ 1)1 C, = max,el-'?."t'-ll(t-Ca = max,EI-'?."t'-ll(t-1)(t2) ... (tp)l Consider the first term, C1 Since t E follows that c, (p; 1) (p; 1 -1) ... (2) = (p; 1)! Similarly, t E implies a bound for C3 namely C3 (2)(3) ... (p; 1 ) = (p; 1 )' Bounding C2 may be accomplished with ordinary calculus: max I (t-_P --1) (t-_P +_1) I= max lept + !__P, --11 tE[-'?,"t'-J 2 2 tE[-'?,"t'-J 4 The function t2 -pt + (p2 -1)/4 is everywhere negative on its maximum absolute value of 1/4 occurring at t = pj2. Therefore, C2 = 1/4, and multiplying C1 C2 and C3 together completes the proof. I This bound on 111'p(t)l can be used to find an error bound for the ADFT. In this case the functions being interpolated are Of these functions, the only ones whose values are of interest are those with l between -M1 and M2 Recalling the definition M, = max{Ml>M2}, the largest 75


absolute value among the frequencies of interest is WM = (211" M.)/ X. Therefore, Inserting this and the bound from Lemma 3.1 into equation (3.35) gives ( hw )P+l[(tl!.)IJ2 le-i"''; e(w x )I < M 2 /, J -4(p + 1 )! (3.37) [(z.! )']' This can be applied to get an error bound on the ADFT. Letting Cp = <(;+1l! then Cp(hwM)P+lNIIull Finally, substituting h = X/ N. and WM = (21r M.)/ X into this expression gives the error relation (3.38) We wish to use an error bound like (3.38) to construct a mechanism for se-lecting the degree of interpolation, p, as well as the value of N In order to do this, it will be convenient to use a more precise bound for than Cp, which is obtained by the following lemma: Lemma 3.2 If p is an odd integer greater than 1, then a) llrp and b) l1r p( t) I 1 max < tE[Zf!,l.:j:l] (p + 1)! 2P+l 76


Proof: Since pis odd the factorial polynomial1rp(t) is symmetric about the point p/2, that is A proof of this fact may be found on page 266 of [24]. Since the points (p-1) /2 and (p + 1)/2 are successive roots of a polynomial with simple roots there is exactly one critical point between them, and the absolute value of the factorial polynomial achieves its maximum in ( 9, at that point. But a symmetric function having an isolated extremum necessarily achieves that extremum at the point of symmetry, hence the factorial polynomial takes on its maximum absolute value in ( 9, at p/2, establishing part a). From part a) it follows that max,E[9,zttll7rp(t)l is given by Upon simplification this becomes so that l1r p( t )I max tE[9,zt'J (p + 1)! = 1 [p2(p-2)2(p-4)2 ... (1)2] 2P+l (p + 1)p(p-1) (2)(1) 1 ( p2 ) ( (p-2)2 ) ( (1)2 ) 21'+1 (p + 1)p (p-1)(p-2) .. (2)(1) 1 < 2P+l since all the factors in the last expression are less than 1 in magnitude. I 77


Armed with this bound for it is possible to devise a mechanism for selecting N, and p. Substituting the bound into equation (3.37), along with h = XjN, and WM = (27rM,)/X, gives Finally, substituting this result into the equation for ju-Wii.j that led to (3.38) establishes Theorem 3.9 Suppose the formal DFT, u = Wu 1 i.e. N-1 u.(w,) = 2: u(xj)e-'"'";, j=O is to be approximated by the anterpolated DFT (3.34} using an FFT of length N,. Then the error in the ADFT approximation to the formal DFT is bounded, for -M1 M2 by I Since the bound holds for all desired values of it then follows that where II lloo is defined as the maximum absolute value of the vector entries. It is also worth noting that an error bound for each frequency can be obtained by replacing WM with w1 in the derivation, leading to (3.39)


This is especially useful information for those occasions when only the low frequency components are of interest, or when the accuracy required of the approx-imation is greater for the low frequencies than for the high frequencies. 3.10.3 Selection of p and N. The error bound just derived is useful in that it provides a way to select the operational parameters N. and p. Recall that the goal is to calculate an approximation to u to some prescribed accuracy, lu-Wul :S E/\u\1. No limits on the acceptable values of are given, but in practice we will want to make the error small. Accordingly, it will be assumed that %:: 1. Comparison with Theorem (3.9) gives the requirement ( Mr)P+l -N

To see how this may be accomplished, recall that the work involved in computing the ADFT with N_s points on the auxiliary grid Ω_{N_s}^h and p-degree Lagrangian interpolation is

    O(N_s log₂ N_s) + O(N(p+1)).

The value of the constant on the O(N(p+1)) term depends on whether the weights and indices are pre-stored, or calculated "on the fly". Assuming that they are pre-stored, the constant is 2. The constant on the first term depends on several factors. For example, if the input data are real, or have certain symmetries, then a specialized FFT can be used, producing faster computation ([47], [6], [22]). The specialized FFTs generally have a complexity of (N/q) log(N/q) for some number q > 1. If such an algorithm can be applied to the ADFT, the operation count can be modified, and most of the effect can be absorbed into the constant on O(N_s log₂ N_s). For the analysis that follows, we assume the weights and indices are pre-stored. The variety of available FFT algorithms, however, persuades us to leave the constant on the first term as an unspecified parameter, C₁.

The total work in computing the ADFT can therefore be written as a function of the two parameters N_s and p. For a fixed problem size (N and M_s) and a prescribed error tolerance ε, the work in computing the ADFT to the required accuracy is

    W(N_s, p) = C₁ N_s log₂ N_s + 2N(p+1),    (3.41)

where N_s and p are selected to conform to the constraint in (3.40). This constraint


may be rewritten as

$$N_s \ge M_s\pi\left(\frac{N}{\epsilon}\right)^{\frac{1}{p+1}}.$$

Note that $p \ge 0$. The case $p = 0$ corresponds to nearest neighbor interpolation, for which the value of $N_s$ required to achieve the desired accuracy satisfies $N_s \ge (M_s\pi N)/\epsilon$. Since equality will suffice to ensure that the required accuracy is attained, this case provides the upper bound over all values of $p \ge 0$. A lower bound for $N_s$ can also be obtained, by examining the effect of choosing extremely high degrees of interpolation. Formally,

$$\lim_{p\to\infty}\left(\frac{N}{\epsilon}\right)^{\frac{1}{p+1}} = 1$$

which provides the lower bound $M_s\pi < N_s$ for all values of $p$. It is now possible to formulate the problem of selecting $N_s$ and $p$ as an optimization problem:

Problem 3.1 To compute the ADFT approximation (3.34) to the formal DFT (3.31) to the prescribed accuracy $\epsilon$, the parameters $N_s$ and $p$ must be selected to minimize the work function

$$W(N_s,p) = C_1 N_s\log_2 N_s + 2N(p+1)$$

subject to the constraints

$$N_s \ge M_s\pi\left(\frac{N}{\epsilon}\right)^{\frac{1}{p+1}} \qquad\text{and}\qquad 0 \le p < \infty.$$
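The accuracy constraint is easy to explore numerically. A minimal sketch (the problem size $N = 128$, $M_s = 32$, and tolerance $\epsilon = 0.1$ are illustrative values, not taken from the text):

```python
import math

def min_Ns(N, Ms, eps, p):
    """Smallest N_s satisfying the accuracy constraint
    N_s >= Ms*pi*(N/eps)**(1/(p+1)) for interpolation degree p."""
    return Ms * math.pi * (N / eps) ** (1.0 / (p + 1))

N, Ms, eps = 128, 32, 0.1          # illustrative problem size and tolerance
bounds = [min_Ns(N, Ms, eps, p) for p in range(0, 20)]

# p = 0 (nearest neighbor) gives the largest requirement, Ms*pi*N/eps ...
assert abs(bounds[0] - Ms * math.pi * N / eps) < 1e-9
# ... and the requirement decreases monotonically toward the limit Ms*pi.
assert all(b1 > b2 for b1, b2 in zip(bounds, bounds[1:]))
assert all(b > Ms * math.pi for b in bounds)
```

The two asserts mirror the upper and lower bounds derived above: nearest-neighbor interpolation demands the most auxiliary points, and no degree of interpolation can push $N_s$ below $M_s\pi$.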


The constraints imply that $M_s\pi < N_s \le (M_s\pi N)/\epsilon$. Questions concerning the existence and uniqueness of optimal solutions are fairly easy to answer. To begin, we need to examine the behavior of $W(N_s,p)$. This function is continuous with respect to each of its variables on its domain. Examination of the partial derivatives

$$\frac{\partial W}{\partial N_s} = C_1\left(\log_2 N_s + \log_2 e\right) \qquad\text{and}\qquad \frac{\partial W}{\partial p} = 2N$$

reveals that both of the partials are everywhere positive. This observation leads to

Lemma 3.3 Let $S = \{(N_s,p): N_s > M_s\pi(N/\epsilon)^{1/(p+1)}\}$, and let $\partial S$ be that portion of the boundary of $S$ given by $\{(N_s,p): N_s = M_s\pi(N/\epsilon)^{1/(p+1)}\}$. Then if $(x_0,y_0)\in S$, there exists a point $(\xi,\eta)\in\partial S$ such that $W(\xi,\eta) < W(x_0,y_0)$.

Proof: Since $(x_0,y_0)\in S$, the point $(\xi,y_0)\in\partial S$, where $\xi = M_s\pi(N/\epsilon)^{1/(y_0+1)}$. Furthermore, $\xi < x_0$. Since the partial derivative of the work function with respect to $N_s$ is everywhere positive, then $W(\xi,y_0) < W(x_0,y_0)$. ∎

The utility of Lemma 3.3 is that the optimization problem can be rewritten as a problem in a single variable. Since for every point in $S$ there is some point along $\partial S$ that requires less work, it is only necessary to seek a minimum from the points of $\partial S$. This can be done by parameterizing $N_s$ and $p$ as functions of a single variable.
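The one-variable reduction can be carried out numerically before any calculus is done: substitute $N_s = M_s\pi b$ and $p = \log_b(N/\epsilon) - 1$ into the work function and scan along the boundary $\partial S$. A sketch (the constant $C_1 = 1$ and the problem size are illustrative assumptions):

```python
import math

def work(b, N, Ms, eps, C1=1.0):
    """Work along the constraint boundary: N_s = Ms*pi*b points and
    degree p = log_b(N/eps) - 1, substituted into W(N_s, p)."""
    Ns = Ms * math.pi * b
    p = math.log(N / eps) / math.log(b) - 1
    return C1 * Ns * math.log2(Ns) + 2 * N * (p + 1)

N, Ms, eps = 32, 8, 0.1                        # illustrative problem size
bs = [1.05 + 0.01 * k for k in range(2000)]    # scan b inside (1, N/eps]
W = [work(b, N, Ms, eps) for b in bs]
k = W.index(min(W))
assert 0 < k < len(W) - 1                      # the minimum is interior ...
assert W[k] <= W[0] and W[k] <= W[-1]          # ... not at either endpoint
b_opt = bs[k]
Ns_opt = Ms * math.pi * b_opt
p_opt = math.log(N / eps) / math.log(b_opt) - 1
```

The scan exhibits exactly the behavior proved below: the work blows up as $b \to 1$ (very high interpolation degree) and grows again toward $b = N/\epsilon$ (nearest neighbor), with a single interior minimum between. The numerical optimum matches Table 3.1 only if $C_1$ matches the value used there, which the text leaves unspecified.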


On $\partial S$, $N_s = M_s\pi(N/\epsilon)^{1/(p+1)}$. Define the new variable

$$b = \left(\frac{N}{\epsilon}\right)^{\frac{1}{p+1}} \qquad (3.42)$$

so that

$$N_s = M_s\pi b \qquad\text{and}\qquad p = \log_b\left(\frac{N}{\epsilon}\right) - 1. \qquad (3.43)$$

Since $0 \le p < \infty$, the value of $b$ is restricted to the interval $(1, N/\epsilon]$. Substituting these expressions into (3.41), the work equation may be rewritten as a function of $b$ alone:

$$W(b) = C_1 M_s\pi b\log_2(M_s\pi b) + 2N\log_b\left(\frac{N}{\epsilon}\right). \qquad (3.44)$$

The problem is to minimize (3.44) subject to the constraint $1 < b \le (N/\epsilon)$. Once $b$ (and therefore $N_s$) is determined, the necessary value of $p$ can be obtained from (3.43). This leads to an answer to the existence and uniqueness question.

Theorem 3.10 There exists a unique value $b$ that minimizes (3.44) subject to $1 < b \le (N/\epsilon)$. Therefore, the work function

$$W(N_s,p) = C_1 N_s\log_2 N_s + 2N(p+1)$$

has a unique minimum, subject to the constraints

$$N_s \ge M_s\pi\left(\frac{N}{\epsilon}\right)^{\frac{1}{p+1}} \qquad\text{and}\qquad 0 \le p < \infty.$$

Proof: $W(b)$ is continuous and differentiable with respect to $b$ on $(1, N/\epsilon]$. Differentiating equation (3.44) yields

$$W'(b) = C_1 M_s\pi\left(\log_2(M_s\pi b) + \log_2 e\right) - \frac{2N\ln(N/\epsilon)}{b(\ln b)^2}. \qquad (3.45)$$


It simplifies matters to introduce the positive constants

$$K_1 = \frac{C_1 M_s\pi}{\ln 2}, \qquad K_2 = eM_s\pi, \qquad\text{and}\qquad K_3 = 2N\ln\left(\frac{N}{\epsilon}\right)$$

in which case (3.45) may be written

$$W'(b) = K_1\ln(K_2 b) - \frac{K_3}{b(\ln b)^2}.$$

Any critical point, where $W'(b) = 0$, must therefore satisfy $b(\ln b)^2\ln(K_2 b) = K_3/K_1$. Now $W'$ is also continuous and differentiable on $(1, N/\epsilon]$, and differentiating yields

$$W''(b) = \frac{K_1}{b} + K_3\,\frac{\ln b + 2}{b^2(\ln b)^3}.$$

Since $b > 1$, we see that $W''(b) > 0$ for all $b\in(1, N/\epsilon]$, so any critical point in the interval must correspond to a local minimum. Examination of the endpoints reveals that

$$W'(N/\epsilon) = K_1\ln\left(\frac{K_2 N}{\epsilon}\right) - \frac{2\epsilon}{\ln(N/\epsilon)}$$

and since $K_1 > \pi$, $K_2 > e$, and $\epsilon \le 1$, we have that $W'(N/\epsilon) > 0$. Further, it is apparent that $W'(b)\to-\infty$ as $b\to 1$. Therefore, the continuous function $W'(b)$ must have an odd number of sign changes on $(1, N/\epsilon]$. But $W''(b) > 0$ implies that $W'(b)$ is increasing everywhere, so it has exactly one sign change in the interval. The point $b_0$ at which this occurs is therefore a global minimum for $W(b)$. Therefore, the value $W(\xi,\psi)$, where

$$\xi = M_s\pi b_0 \qquad\text{and}\qquad \psi = \log_{b_0}\left(\frac{N}{\epsilon}\right) - 1,$$


is a global minimum for $W(N_s,p)$ on $\partial S$, and is thus the unique solution to Problem 3.1. ∎

The values of $N_s$ and $p$ obtained in this manner are real numbers. There is only a limited number of integers for which efficient FFTs exist, and Lagrangian interpolation requires $p$ to be an integer. Further, this entire discussion has been predicated on the assumption that $p$ is an odd integer, although a similar analysis can be made for $p$ even. Once the theoretical values of $N_s$ and $p$ are determined, they must be modified to allow computation. There is some flexibility in this, but it certainly suffices to select $N_s$ to be the first integer larger than $M_s\pi b_0$ for which an FFT exists, and to choose $p$ to be the smallest odd integer greater than $\log_{b_0}(N/\epsilon) - 1$.

In order to find the optimal values of $p$ and $N_s$, it is necessary to find the value of $b$ satisfying

$$b(\ln b)^2\ln(K_2 b) = \frac{K_3}{K_1}. \qquad (3.46)$$

While an analytic solution of this equation cannot be found, Newton's iteration may be used. Table 3.1 displays optimal parameters $N_s$ and $p$ for several typical combinations of $N$, $M_s$, and $\epsilon$, along with the number of Newton iterations required. Note that the number of iterations of the Newton method changes very


little.

 N   M_s   ε     Iter   N_s     p        N   M_s   ε     Iter   N_s     p
 32    8   .1     14    48.7    7.7    128   32   .1     14    193     9.9
 32   32   .1     15   142.6   15.5    128   64   .1     14    325.7  13.8
 32   64   .1     15   257.6   22.2    128  128   .1     14    570.7  19.4
 32    8   .01    14    52.9    9.8    128   32   .01    13    206.7  12.1
 32   32   .01    14   150.1   19.1    128   64   .01    14    344.1  16.6
 32   64   .01    15   267.8   27.1    128  128   .01    14    595.6  23.1

Table 3.1: Optimal parameters $N_s$ and $p$ computed for typical problems.

3.10.4 An ADFT Example. To illustrate the ADFT, consider the problem of computing the formal DFT of the function $u(x) = [(\pi-x)/\pi]^2$ sampled on an irregular grid. The irregular grid consists of $N = 128$ points $x_j$ randomly spaced in the interval $(0, 2\pi)$. Since the extent of the interval is $2\pi$, the frequencies $\omega_l$ are just the integers $l$, and the formal DFT is

$$\hat u(l) = \sum_{j=0}^{N-1} u(x_j)\,e^{-ilx_j}, \qquad -64 \le l \le 64.$$

The sampled data are shown at the top of Figure 3.5. To emphasize the irregular grid spacing, the value of each data point is plotted as a vertical bar at the location of the point. The formal DFT is a complex-valued 128-point vector, and the real part is plotted on the bottom of Figure 3.5. The ADFT was used to approximate the formal DFT, with values of $N_s = 128$, 256, 512,


Figure 3.5: The function $f(x) = [(\pi-x)/\pi]^2$ sampled on an irregular grid, and its formal DFT. Only the real part of the formal DFT is plotted.

and 1024. Figure 3.6 displays, for each choice of $N_s$, the absolute value of the error $|\hat u(l) - [W\hat u](l)|$, plotted as a function of $l$. Linear interpolation ($p = 1$) was used in each case. Several observations can be made. First, note that increasing the value of $N_s$ produces a noticeable decrease in the error, just as the theory predicts. It may also be seen that the error increases with increasing wavenumber, as might be inferred from (3.39). Figure 3.7 displays the effect of using different values of $p$ for fixed $N_s$. It may be seen that the error decreases rapidly as $p$ is increased, as predicted by the theory. The optimal parameter problem of the previous section was designed to find $N_s$ and $p$ such that the prescribed accuracy is attained with minimal work, while the theory predicts that the error should decrease at least as fast as $\left(\frac{M_s\pi}{N_s}\right)^{p+1}$


Figure 3.6: The absolute value of the error of the ADFT, plotted as a function of wavenumber. Each graph corresponds to a different choice of $N_s$, while linear interpolation ($p = 1$) is used for all.

decreases as $p$, $N_s$, or both are increased. To show how well the algorithm actually performed, Table 3.2 gives both the infinity and $L^2$ norms of the error $|\hat u(l) - [W\hat u](l)|$ for several values of $p$ for each of $N_s = 256$ and $N_s = 512$. For $N_s = 256$, the factor $\left(\frac{M_s\pi}{N_s}\right)^{p+1}$ is diminished by 0.6169 each time $p$ is increased by 2. Examination of the table reveals that for $N_s = 256$ the actual error is diminished by a factor of approximately 0.3 as $p$ is increased from 1 to 3, and by a factor of approximately 0.4 with each succeeding increase, slightly better than the theory predicts. Similarly, the theoretical decrease in $\left(\frac{M_s\pi}{N_s}\right)^{p+1}$ as $p$ is increased by 2 is 0.15421 for $N_s = 512$, while the table reveals that the actual decrease is approximately 0.11 for each increase, a slightly better result. Numerical experiments produced similar results for other irregularly sampled functions as well,


Figure 3.7: Absolute error of the ADFT of $f(x) = [(\pi-x)/\pi]^2$ for $N_s = 512$ and several values of $p$, plotted as a function of wavenumber.

such as $f(x) = 1 - |\pi-x|/\pi$ and $f(x) = \cos(55x) + \cos(41x)$. A wide range of functions with various degrees of smoothness was examined. In these experiments the ADFT behaved in a similar manner as it did for the function discussed above. There is dramatic improvement with increasing values of $N_s$ and $p$. As might be expected, the error diminished faster with smooth functions than discontinuous functions, although this effect was not so dramatic as in the case of the DFT.

3.10.5 Some Open Questions about the ADFT. The development of the ADFT is far from complete, and many questions have arisen that have not been addressed. It may be noticed that the ADFT shown in Figure 3.5 has a real part that is an even function of frequency. Although not shown, the imaginary part is an odd function of frequency, so that the ADFT is Hermitian, that is, $\hat u(\omega) = \overline{\hat u(-\omega)}$. Since the data $u(x_j)$ and the entries of the anterpolation matrix are real, the vector produced by the anterpolation step is real-valued. The ADFT is computed by applying the FFT operator to this vector, and since the FFT acting on a real vector always produces a Hermitian (extended periodically, this symmetry is called conjugate symmetric) vector [11], this result is to be expected. Applying the DFT to data vectors with other symmetries (even, odd, quarter-wave, etc.) yields output vectors with other types of symmetries [47]. It is natural


to ask which of these symmetry properties are inherited by the formal DFT or the ADFT. It seems reasonable to postulate that if the irregular gridpoints are symmetrically disposed and the function $u(x_j)$ is symmetric, then the symmetry property of the DFT might be inherited by the formal DFT and the ADFT.

Like the continuous transform, the DFT has several other important properties, such as linearity, the convolution and correlation properties, the shifting property, the modulation property, and Parseval's relation. The linearity property can be established immediately, by noting that both the formal DFT and the ADFT can be written as matrix operations, so they are linear operators. To what extent the other properties hold is an open question.

Much of this chapter has been spent addressing the question of how the DFT is related to the continuous Fourier transform. This same question may be asked of the formal DFT. To what extent does the formal DFT approximate the FT? Answering this question may prove to be a lengthy process. Many related questions will also arise. For example, how does the sampling theorem apply to an irregular grid? What frequencies can be represented accurately, and what constitutes aliasing? Is there some analog to the Poisson Summation Formula? Many problems feature irregularly spaced data, so it may be assumed that these questions are of real practical interest.
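The symmetry postulate can at least be probed numerically. A minimal sketch (the seed, grid size, and test function are illustrative assumptions): gridpoints paired as $(x_j,\ 2\pi - x_j)$, with data even about $\pi$, give a formal DFT that is numerically real and even in $l$, alongside the Hermitian symmetry that real data always guarantee:

```python
import cmath, math, random

random.seed(2)                                  # illustrative seed
half = [random.uniform(0.0, math.pi) for _ in range(32)]
x = half + [2 * math.pi - xj for xj in half]    # gridpoints paired about pi
u = [((math.pi - xj) / math.pi) ** 2 for xj in x]   # data even about pi

def formal_dft(l):
    """Formal DFT on the irregular grid: sum_j u(x_j) exp(-i*l*x_j)."""
    return sum(uj * cmath.exp(-1j * l * xj) for uj, xj in zip(u, x))

vals = {l: formal_dft(l) for l in range(-8, 9)}
# Real data always give a Hermitian transform ...
assert all(abs(vals[-l] - vals[l].conjugate()) < 1e-9 for l in range(9))
# ... and the symmetric grid with even data makes it, in addition,
# numerically real and even in l, supporting the postulate.
assert all(abs(v.imag) < 1e-9 for v in vals.values())
assert all(abs(vals[l] - vals[-l]) < 1e-9 for l in range(9))
```

The reason is visible in the sum itself: each pair contributes $u_j(e^{-ilx_j} + e^{ilx_j}) = 2u_j\cos(lx_j)$, which is real and even in $l$. This is an experiment for one symmetry class, not a proof for the general question posed above.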


CHAPTER 4

FOURIER TRANSFORM INVERSION METHODS

The Central Slice theorem relates the Fourier transform of a function to the Fourier transform of its Radon transform. It leads to a Radon inversion formula that is attractive because it is fast and direct. Many workers have analysed this approach ([23], [28], [35], [39]), but despite this analysis a number of questions remain unresolved. This chapter begins with a review of the Central Slice theorem and an outline of a reconstruction method based on the theorem. This is followed by a discussion of the sources of error in the method and techniques that may be used to reduce the error. The most obvious source of error is an interpolation step that is essential to the algorithm. Certain interpolation schemes have been extensively analysed, particularly schemes based on the Shannon Sampling theorem ([33], [34], [39], [40]). While these schemes have useful theoretical properties, they tend to be expensive to implement. A simpler polynomial-based interpolation scheme is applied here, and the costs and errors of the method are analysed. A method for improving the algorithm, based on frequency grid refinement, is then examined.


4.1 The FT Inversion Method

Recall the Central Slice theorem for an image $u(x,y)$ in two dimensions:

Theorem 4.1 (Central Slice) Let the image $u(x,y)$ have a two-dimensional Fourier transform $\hat u(\omega_x,\omega_y)$ and a Radon transform given by $[\mathcal{R}u(x,y)](p,\phi) = f(p,\phi)$. If $\hat f(\omega,\phi)$ is the one-dimensional Fourier transform of the projection $f(p,\phi)$, then

$$\hat f(\omega,\phi) = \sqrt{2\pi}\,\hat u(\omega_x,\omega_y)$$

where $\omega^2 = \omega_x^2 + \omega_y^2$ and $\phi = \tan^{-1}(\omega_y,\omega_x)$.

That is, the Fourier transform of the projection of $u$ onto the line in the direction of the unit vector $(\cos\phi, \sin\phi)^T$ is exactly a slice through the two-dimensional Fourier transform of $u(x,y)$ along that direction. The assemblage of the Fourier transforms of all projections $\hat f(\omega,\phi)$ defines a two-dimensional function (in polar coordinates) that, by the Central Slice theorem, must equal $\sqrt{2\pi}\,\hat u(\omega_x,\omega_y)$. The image $u(x,y)$ can be recovered from $\hat f(\omega,\phi)$ by a two-dimensional inverse Fourier transform. Performing the IFT on the polar coordinate data $\hat f(\omega,\phi)$, this would be

$$u(x,y) = \frac{1}{\sqrt{2\pi}}\,\frac{1}{2\pi}\int_0^{\pi}\!\int_{-\infty}^{\infty} \hat f(\omega,\phi)\,e^{i\omega(x\cos\phi + y\sin\phi)}\,|\omega|\,d\omega\,d\phi. \qquad (4.1)$$

Letting $\mathcal{F}_n$ and $\mathcal{F}_n^{-1}$ represent, respectively, the $n$-dimensional Fourier transform and $n$-dimensional inverse Fourier transform operators, this can be written


more compactly as

$$u = \frac{1}{\sqrt{2\pi}}\,\mathcal{F}_2^{-1}\mathcal{F}_1 f \qquad (4.2)$$

where $u = u(x,y)$ and $f = f(p,\phi) = [\mathcal{R}u(x,y)](p,\phi)$. This notation will prove to be especially valuable since the same notation can be used for both the continuous and discrete cases.

4.2 The Standard DFT Method

In practice there is only a finite number of projections and only a finite number of samples along each projection. Therefore, in order to apply (4.2), the problem must be discretized. Assume that there are $M$ projections, $f(p,\phi_j) = [\mathcal{R}u](p,\phi_j)$, whose directions are given by $\phi_j$ for $j = 0, 1, \ldots, M-1$, where $\phi_j = j\pi/M$. This means that the direction vector is $[\cos(\phi_j), \sin(\phi_j)]^T$. Then the projections are evenly spaced over $[0,\pi)$. It is assumed that $u(x,y)$ has compact support, and the problem has been scaled so that $u(x,y) = 0$ for $\sqrt{x^2+y^2} > 1$. This implies that $f(p,\phi) = 0$ for $|p| \ge 1$. Further, it will be assumed that the projections are evenly sampled at the locations $p_n = 2n/N$ for $n = -N/2+1, \ldots, N/2$. The sampling interval is therefore $\Delta p = 2/N$, giving a total of $N$ points along each projection. (Note, however, that since $f(1,\phi) = 0$ for all $\phi$, the samples at $n = N/2$ are not actually required.) Since the Radon transform is even, $f(p,\phi) = f(-p,\phi+\pi)$, this gives an evenly sampled set of projection data covering the unit disk. Using


the grid notation of Chapter 3, the discretized projection data set is specified on $\Omega^{h\theta}_{NM}$.

The algorithm proceeds by reading (4.2) right to left:

1. Compute an approximation to the Fourier transform of each projection in the data set. This gives a sampled approximation of the two-dimensional Fourier transform of the unknown function $u(x,y)$. The transform data set is on $\Omega^{\lambda\theta}_{NM}$.

2. From this sampling, compute a sampled approximation of $u(x,y)$ by way of an inverse Fourier transform operator, producing an image on $\Omega^{hh}_{NN}$.

4.3 The Forward Transform

The first step of the algorithm is to calculate $\mathcal{F}_1 f$, where $f(p,\phi_j) = 0$ for $|p| \ge 1$:

$$\hat f(\omega,\phi_j) = \frac{1}{\sqrt{2\pi}}\int_{-1}^{1} f(p,\phi_j)\,e^{-i\omega p}\,dp.$$

The integral must be approximated for $j = 0, 1, \ldots, M-1$. In the standard algorithm, this is discretized using the DFT

$$\hat f(k\Delta\omega,\phi_j) \approx \frac{\Delta p}{\sqrt{2\pi}}\sum_{n=-N/2+1}^{N/2} f(p_n,\phi_j)\,e^{-ik\Delta\omega p_n}$$

for $k = -N/2+1, -N/2+2, \ldots, N/2$. Since $\Delta p = 2/N$, the reciprocity relation (3.4) gives $\Delta\omega = \pi$. This means that the frequency values of the transform are $k\pi$ for $k = -N/2+1, \ldots, N/2$, and each


such value corresponds to one of the Fourier modes on the interval $[-1,1]$. Making this choice for approximating $\mathcal{F}_1$ is equivalent to approximating the Fourier coefficients $c_k$, for $|k| \le N/2$, of the 2-periodic function $f(p,\phi_j)$. Computing the DFT for each of the $M$ projections produces frequency data on $\Omega^{\lambda\theta}_{NM}$.

4.4 The Inverse Transform

The next step in the algorithm is to compute the inverse transform (4.1). This means multiplying the transformed projections $\hat f(\omega,\phi_j)$ by $1/\sqrt{2\pi}$ and applying $\mathcal{F}_2^{-1}$. It would be possible to discretize (4.1) directly, for example by

$$u(x,y) \approx \frac{1}{\sqrt{2\pi}}\,\frac{1}{2\pi}\sum_{j=0}^{M-1}\ \sum_{k=-N/2+1}^{N/2} \hat f(\omega_k,\phi_j)\,e^{i\omega_k(x\cos\phi_j + y\sin\phi_j)}\,|\omega_k|\,\Delta\omega\,\Delta\phi \qquad (4.3)$$

where $\Delta\omega = \pi$ and $\Delta\phi = \pi/M$. We are at liberty to reconstruct at any convenient sampling of $(x,y)$ within the region of support for $u$, $\{(x,y): x^2+y^2 \le 1\}$. It seems, however, to be almost universal to select $\Delta x = \Delta y = 2/N$. This is done so that the final reconstructed image has the same sampling along the $x$ and $y$ axes as $\Delta p$ in the projection set. It also means that image values for $N^2$ points must be computed.

Equation (4.3) is not a particularly good discretization for (4.1). Even if it were, however, it is very expensive to compute. The summation requires $O(NM)$ operations to evaluate, and since the reconstruction must be done for $N^2$ image points, this portion of the inversion would require a prohibitive $O(MN^3)$ operations. The problem stems from the fact that $\hat f(\omega_k,\phi_j)$ is a data set on the


polar frequency grid $\Omega^{\lambda\theta}_{NM}$, for which an FFT algorithm is not available. In the next chapter a reconstruction algorithm is developed that is, in fact, based on discretizing (4.1). For now, though, a different approach is taken.

In order to reconstruct an image of $N^2$ pixels on $[-1,1]\times[-1,1]$, and do so efficiently, it is useful to have the transform data, $\hat f(\omega,\phi)$, on the Cartesian frequency grid $\Omega^{\lambda\lambda}_{NN}$, as shown in Figure 4.1. This grid gives transform data for $-N/2+1 \le m, n \le N/2$. In this case the operator $\mathcal{F}_2^{-1}$ can be approximated by the two-dimensional IDFT

$$u(j\Delta x, k\Delta y) = \sum_{m,n=-N/2+1}^{N/2} \hat u(m\lambda, n\lambda)\,e^{2\pi i(mj+nk)/N} \qquad (4.4)$$

which can be computed using a two-dimensional FFT, in $O(N^2\log_2 N)$ operations. Requiring that the grid spacing on the image conform to the grid spacing of the projection data means that $\Delta x = \Delta y = 2/N$; and, therefore, $\Delta\omega_x = \Delta\omega_y = \pi$ by the reciprocity relation. In order to obtain $\hat f(\omega,\phi)$ on $\Omega^{\lambda\lambda}_{NN}$, it is necessary to interpolate the values from the polar grid $\Omega^{\lambda\theta}_{NM}$. This is the central issue of the Fourier reconstruction method. With this in mind, the symbolic description of the algorithm, (4.2), is modified to read

$$u = \frac{1}{\sqrt{2\pi}}\,\mathcal{F}_2^{-1}\,\mathcal{I}_{p\theta}^{xy}\,\mathcal{F}_1 f \qquad (4.5)$$

where $\mathcal{I}_{p\theta}^{xy}: \Omega^{\lambda\theta}_{NM} \to \Omega^{\lambda\lambda}_{NN}$ is an interpolation operator that maps the polar representation of $\hat f(\omega,\phi)$ to the Cartesian grid.
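The Central Slice relation on which this whole construction rests can be checked numerically for a function whose transforms are known in closed form. A sketch using the Gaussian $u(x,y) = e^{-(x^2+y^2)/2}$ (an example chosen here, not one from the text): by radial symmetry its projection at every angle is $f(p) = \sqrt{2\pi}\,e^{-p^2/2}$, and with the $1/\sqrt{2\pi}$ transform convention $\hat u(\omega_x,\omega_y) = e^{-\omega^2/2}$, so the theorem predicts $\hat f(\omega) = \sqrt{2\pi}\,\hat u$:

```python
import cmath, math

def f(p):
    """Projection of the Gaussian image at any angle (radial symmetry)."""
    return math.sqrt(2 * math.pi) * math.exp(-p * p / 2)

def fhat(w, L=12.0, n=4000):
    """1D Fourier transform of f with the 1/sqrt(2*pi) convention,
    approximated by a Riemann sum on [-L, L]."""
    h = 2 * L / n
    s = sum(f(-L + k * h) * cmath.exp(-1j * w * (-L + k * h)) for k in range(n))
    return s * h / math.sqrt(2 * math.pi)

for w in (0.0, 0.5, 1.0, 2.0):
    slice_of_uhat = math.exp(-w * w / 2)    # the 2D transform along any slice
    assert abs(fhat(w) - math.sqrt(2 * math.pi) * slice_of_uhat) < 1e-6
```

The rapid decay of the Gaussian makes the simple Riemann sum spectrally accurate here, so the agreement is essentially to machine precision.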


Figure 4.1: The polar and Cartesian frequency grids. The polar grid consists of samples of $\hat f(\omega,\phi)$ for eight projections with $\Delta\phi = \pi/8$. The data on the polar grid must be interpolated to the Cartesian frequency grid. The frequency sample rate $\lambda$ on the Cartesian grid is equal to the radial frequency sample spacing, $\Delta\omega$, on the polar grid.
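Before turning to the interpolation itself, note why step 1 of the method reduces to standard FFTs: with $p_n = 2n/N$ and $\omega_k = k\pi$, the quadrature sum over a projection satisfies $e^{-ik\pi p_n} = e^{-2\pi ikn/N}$, which is exactly the DFT kernel. A sketch checking that identity (the sample values are arbitrary, and the $\Delta p/\sqrt{2\pi}$ scaling is an assumption about the transform convention; the negative indices are folded in periodically, as the DFT viewpoint requires):

```python
import cmath, math

N = 16
f = [math.sin(0.3 * n) + 0.5 for n in range(N)]      # arbitrary sample values

def physical_sum(k):
    """dp/sqrt(2*pi) * sum_n f_n exp(-i*k*pi*p_n), with p_n = 2n/N and
    n = -N/2+1, ..., N/2 (samples indexed N-periodically)."""
    dp = 2.0 / N
    idx = range(-N // 2 + 1, N // 2 + 1)
    return dp / math.sqrt(2 * math.pi) * sum(
        f[n % N] * cmath.exp(-1j * k * math.pi * 2 * n / N) for n in idx)

def dft_sum(k):
    """The same quantity written as a plain length-N DFT over n = 0..N-1."""
    dp = 2.0 / N
    return dp / math.sqrt(2 * math.pi) * sum(
        f[n] * cmath.exp(-2j * math.pi * k * n / N) for n in range(N))

assert all(abs(physical_sum(k) - dft_sum(k)) < 1e-9 for k in range(-7, 9))
```

Since the two sums agree term by term (the exponential is $N$-periodic in $n$), each projection's transform can be evaluated by one $N$-point FFT.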


4.5 The Interpolation Problem

We have seen that the standard Fourier inversion method may be written in operator notation as

$$u = \frac{1}{\sqrt{2\pi}}\,\mathcal{F}_2^{-1}\,\mathcal{I}_{p\theta}^{xy}\,\mathcal{F}_1 f.$$

The general consensus among workers in this area is that the most significant source of error in the algorithm is the operator $\mathcal{I}_{p\theta}^{xy}$, which maps the frequency domain data from a polar grid to a Cartesian grid. Many interpolation schemes have been proposed in the literature, but those most commonly discussed are nearest-neighbor interpolation [23], [35], a simple bilinear scheme [23], and schemes based on the Shannon Sampling theorem [33], [34], [35], [39], [40]. The discussions tend toward the qualitative, although careful error analysis is included in [33] and [35]. The basic Fourier method (4.5) is generally thought to produce images that are inferior in quality to those created by backprojection of filtered projections. Typically, the reconstructions are characterized by several types of reconstruction artifacts in the case of the nearest-neighbor and linear schemes. The more sophisticated interpolations based on the sampling theorem produce much better images, but tend to be quite expensive to compute.

In this study the standard reconstruction method employs Lagrangian interpolation, in both the radial ($\omega$) and angular ($\phi$) directions. The degree of the interpolating polynomial can be specified for either of the two variables, and need not be the same. Indeed, it will be shown that mixed degree interpolation


is generally preferable to choosing the same degree in both directions.

4.5.1 Computing $\hat u(\omega_x,\omega_y) = \mathcal{I}_{p\theta}^{xy}\hat f(\omega,\phi)$. The data to be interpolated consists of the set of one-dimensional transforms, $\hat f(\omega_k,\phi_j)$, given on the grid $\Omega^{\lambda\theta}_{NM}$. The target grid is $\Omega^{\lambda\lambda}_{NN}$, the Cartesian frequency grid. The frequency sample intervals on the target grid should be the same in both directions ($\Delta\omega_x = \Delta\omega_y$), so there is no ambiguity in using $\lambda$ to indicate either sample interval. The value for $\lambda$ is determined from the reciprocity relation by the requirement that the final image have the same sampling in $x$ and $y$ as the projection data has in $p$. Thus, $h$ may be used to indicate $\Delta x$, $\Delta y$, or $\Delta p$. Since $h$ has been given (for the projections) as $2/N$, the reciprocity relation implies that $\lambda = \pi$.

The geometry of interpolation is indicated in Figure 4.2. Let $p$ be the desired degree of interpolation in the radial direction, $q$ the desired interpolation degree in the angular direction, and $(m\lambda, n\lambda)$ the target point to which a value of $\hat f$ is to be interpolated. For this discussion, assume that $p$ and $q$ are odd positive integers. Linear interpolation corresponds to $p = 1$, while $p = 3$ is cubic interpolation. We will also designate by $p = 0$ the nearest-neighbor interpolation. Only slight modifications are required for even degrees.

It will be convenient if the indexing scheme is altered somewhat. The Fourier transforms of the projections should be stored in polar coordinates, with $\omega \ge 0$ and $\phi_j = j\pi/M$, where $j = 0, 1, \ldots, 2M-1$. Then the location of the target point


Figure 4.2: Geometry of interpolation.

$(m\lambda, n\lambda)$ in polar coordinates is $(\sqrt{m^2+n^2}\,\lambda,\ \tan^{-1}(n/m))$, provided the branch cut for $\tan^{-1}$ is placed along the positive real axis. Values are first interpolated to the locations $\sqrt{m^2+n^2}\,\lambda$ along each of the $q+1$ radial lines of $\Omega^{\lambda\theta}_{NM}$ whose directions are nearest the target point. The value at $(m\lambda, n\lambda)$ is then interpolated from these $q+1$ intermediate values, along the arc at the radial distance $\sqrt{m^2+n^2}\,\lambda$. Define the quantities

$$a(m,n) = \sqrt{m^2+n^2} - \left\lfloor\sqrt{m^2+n^2}\right\rfloor \qquad (4.6)$$

together with the analogous fractional angular position $\mu(m,n)$, and let $\omega_{mn}$ and $\phi_{mn}$ denote the radial and angular coordinates of the first gridpoint of the interpolation stencil surrounding the target point.
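The coordinate bookkeeping for a target point is simple to realize; a small sketch (wrapping the angle into $[0, 2\pi)$ with `atan2` is one way to implement the branch-cut convention just described):

```python
import math

lam = math.pi                       # lambda = Delta omega = pi

def target_polar(m, n):
    """Polar coordinates of the Cartesian gridpoint (m*lam, n*lam):
    radius sqrt(m^2+n^2)*lam, angle in [0, 2*pi)."""
    return math.hypot(m, n) * lam, math.atan2(n, m) % (2 * math.pi)

def a(m, n):
    """Fractional radial position a(m,n) of equation (4.6)."""
    r = math.hypot(m, n)
    return r - math.floor(r)

r, phi = target_polar(3, 4)
assert abs(r - 5 * lam) < 1e-12            # sqrt(9+16) = 5 radial gridlengths
assert abs(target_polar(1, -1)[1] - 7 * math.pi / 4) < 1e-12
assert all(0.0 <= a(m, n) < 1.0 for m in range(-8, 9) for n in range(-8, 9))
```

The last assert confirms that $a(m,n)$ always lies in $[0,1)$, so it measures the target's position between consecutive radial gridpoints.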

Then the interpolation is given by

$$\sqrt{2\pi}\,\hat u(m\lambda, n\lambda) = \sum_{l=0}^{q}\sum_{i=0}^{p} v_l(m,n)\,w_i(m,n)\,\hat f\!\left(\omega_{mn} + i\lambda,\ \phi_{mn} + \frac{l\pi}{M}\right) \qquad (4.7)$$

where $v_l(m,n)$ and $w_i(m,n)$ are the Lagrangian weights in the angular and radial directions, respectively. These weights are computed by

$$v_l(m,n) = \prod_{\substack{r=0\\ r\neq l}}^{q}\frac{\mu(m,n)-r}{l-r} \qquad\text{and}\qquad w_i(m,n) = \prod_{\substack{j=0\\ j\neq i}}^{p}\frac{a(m,n)-j}{i-j}. \qquad (4.8)$$

It should be noted that we always place the target point in the center interval, between the interpolation neighbors corresponding to $(q-1)/2$ and $(q+1)/2$ in the angular direction, and to $(p-1)/2$ and $(p+1)/2$ in the radial direction. This poses no difficulty in the angular direction, since the grid is $2\pi$-periodic in $\phi$. Further, it poses no difficulty in the radial direction near the origin, since the evenness of the Radon transform means that the necessary values on the opposite side of the origin are contained within the data set. Slight modification to the indexing scheme given by (4.6) is necessary in this case, however. Only for frequencies near $N\Delta\omega/2$ can the target point not be restricted to the center of the interval. For these points, the grid is extended with zero values to the required length so that the center interval may be used. In practice, the decay of the Fourier coefficients ensures that this approach produces errors that are sufficiently small.

After interpolation, the inverse two-dimensional Fourier transform may be approximated. Since the data are now on the Cartesian grid, this can be done efficiently using a two-dimensional FFT with a spatial sample rate $h = 2/N$ and a frequency sample rate $\lambda = \pi$.

The standard algorithm may be summarized in the following steps, where we assume the discretized projection data $f(p_n,\phi_j)$ are given on $\Omega^{h\theta}_{NM}$.

1. For each $j$, approximate $\hat f(\omega,\phi_j) = \mathcal{F}_1 f(p,\phi_j)$ on $\Omega^{\lambda\theta}_{NM}$ by using an $N$-point FFT.

2. For each point $(m\lambda, n\lambda)$ of the Cartesian grid, interpolate:

$$\sqrt{2\pi}\,\hat u(m\lambda, n\lambda) = \sum_{l=0}^{q}\sum_{i=0}^{p} v_l(m,n)\,w_i(m,n)\,\hat f\!\left(\omega_{mn} + i\lambda,\ \phi_{mn} + \frac{l\pi}{M}\right)$$

where $v_l(m,n)$ and $w_i(m,n)$ are defined in (4.8).

3. Compute the image by the inverse transform

$$u(jh, kh) = \sum_{m,n=-N/2+1}^{N/2}\hat u(m\lambda, n\lambda)\,e^{2\pi i(mj+nk)/N}$$

using an $N^2$-point FFT.

4.5.2 Extents of the grids $\Omega^{\lambda\theta}_{NM}$ and $\Omega^{\lambda\lambda}_{NN}$. In the continuous problem $\mathcal{F}_2^{-1}\hat u$ can be evaluated, at least in theory, in either polar or Cartesian coordinates. In either case, the integration is carried out over the entire $(\omega,\phi)$ or $(\omega_x,\omega_y)$ plane. In the discretized problem the summation is necessarily finite, so it is important to examine the extents of the grids involved


efficiently using a two-dimensional F FT with a spatial sample rate h = 2/ N and a frequency sample rate >. = 1r. The standard algorithm may be summarized in the following steps, where we assume the discretized projection data f(Pn, ;) are given on flj/M. 1. For each j, approximate j(w, ) = :Fd(p, )on rJ!M by using an N-point F FT. q p ( 111") v'2,;-u(m>.,n>-)=t;t;v 1(m,n)w;(m,n)f Wmn+i>-,mn+ M where v1(m,n) and w;(m,n) are defined in (4.8). lf: u(jh,kh)= L using an N2-point F FT. 4.5.2 Extents of the grids rJ!M and rRr\ In the continuous problem :F;1u can be evaluated, at least in theory, in either polar or Cartesian coordinates. In either case, the integration is carried out over the entire (w,) or (wz,wy) plane. In the discretized problem the summation is necessarily finite, so it is important to examine the extents of the grids involved 103


in the interpolation. The transform data j are computed first on fJ/MThe values of j are thus known only on the disk lwl N).j2. The two-dimensional IDFT, however, requires values of jon the regular grid covering the square lwzl N).j2, lw.l N).j2. There are portions of the polar grid that do not overlap the Cartesian grid, as shown in Figure 4.3. Since extrapolation of data by Lagrangian weights is generally inaccurate the values of j are set to zero in the non-overlapping portion of the grid: u(m).,n).) = 0 for m2 + n2 :::: ( There are two consequences of this action. First, it is obvious that the re construction will be in error if u has a sizeable contribution from this (high frequency) part of the frequency domain. This is an aliasing problem, implying that the projections are undersampled, and may be addressed by using a larger value of N. Second, since interpolation is performed only for those points (m).,n).) with V'm2 + n2 < N/2, considerably fewer than N2 interpolations are required. It is easy to see that as N -> oo, the number of interpolations approaches N2tr /4, or roughly 75% of the full Cartesian grid. 4.6 Operation Count for the Standard Method The computational cost of the algorithm includes the cost of approximating the forward transforms, the cost of the Lagrangian interpolation, and the cost of the two-dimensional inverse transform. 104


-N>/2 -NV< 0 Figure 4.3: Comparison of the extent of the Cartesian frequency grid with that of the polar grid. The values of ii(wz,wy) are set to zero in the regions where the grids do not overlap. The M forward transforms entail O(Nlog2 N) operations each, where the assumption that N is a power of 2 is made to simplify bookkeeping. As mentioned in the previous chapter, the operation count can be reduced by employing specialized transforms, but the estimate just given will suffice for our purposes. Computing the radial interpolation weights by (4.8) entails p additions and p multiplications for each weight, noting that the regular spacing on implies that the denominators are independent of the target point, and thus can be precomputed. Similarly, the angular interpolation weights require q additions and q multiplications each. For each target point, p + 1 radial and q + 1 angular weights are required. The cost of computing the interpolation weights is thus 2N2[(p + l)p + (q + l)q]. If the grids f7/M and are to be used repeatedly, these interpolations weights can be pre-computed and stored, and the cost of computing them need not be included in the complexity analysis. This is assumed 105 1) ,, ,, 1:. ,,


II to be the case hereafter. The interpolation ( 4. 7) itself requires 2(p + 1 )( q + 1) operations for each of the O(N2 ) target points. The cost of computing the two-dimensional I DFT is the cost of 2N onedimensional I D FTs, each of length N. Since the I D FT and the D FT have the same operation count, this means that the inverse transform step of the algorithm has a cost of O(N2log2 N). Putting the three phases of the algorithm together, the total cost of the standard FT inversion method is where the constants 01 and 03 depend on the specific F FT algorithms and the constant 02 depends on the percentage of points of that lie inside the region covered by 4. 7 Sources of Error The Radon transform literature is confusing when it comes to the topic of error analysis. Much of the error analysis that is done is heuristic in nature, and different workers hold differing views about the types, and relative importance, of the errors in reconstruction algorithms. In the Fourier reconstruction literature, the most commonly cited sources of error are [39], [40] [28] : 1. Undersampling of the projections, f(p,


3. Truncation of the frequency domain to ](w,) for lwl S N1rj2. 4. Interpolation error in r;J. 5. Undersampling of the frequency data, 6. Error in evaluating u by :F2 1 These topics are generally treated individually in the literature. There is some value, however, in attempting to examine the error in the algorithm in terms of the computational cascade, thereby examining each source of error in terms of its "downstream" effect. This approach is taken here, in the hope that it will shed some light on the nature of the problem. Corrective measures for each error can then be examined in terms of their effects on the computation that follows them. In this process, we will address the following question: Given N samples of each of M equally spaced projections of an image u(x, y), can the image vector produced by the Fourier reconstruction method ( 4.5) be expected to converge to the exact image u(x,y) as N, M, or both become large? If the process does not converge to the image, what is the source of the error, and what steps can be taken, and at what cost, to minimize it? 4.7.1 The Computational Cascade Consider the algorithm as a cascade of operations on vectors. Given / = 'Rii on the grid f!'J! M the algorithm may be written as follows. 107

PAGE 120

1. Compute I= Fdon the polar grid rF?M 2. Compute u = r;% I on the Cartesian grid 3. Compute 1i = :F:;1u on the image grid D.WN The identities of the vectors[, }, u, and 1i should be obvious from the description. To begin, it should be noted that the first vector given, [, differs from the actual projection set by discretization error. Also, even if all the computations in the algorithm were perfect, the final vector, 1i, would be in error because of discretization. This error will not be treated here. Instead, we will assume that the input vector f holds the exact values of 'Ru on D.j:!M, and that the exact solution consists of the set of exact values of the image on OW,v When the DFT operator :F1 is applied to the exact solution is I, the set of values obtained by sampling }(w,
PAGE 121

Ill The last step in the cascade is to apply :F;1 This operator, applied to u, would result in the exact image, ii, plus an error term, E:F-' Applied to the errors of the previous stages, it distributes those errors on thus: where This gives us a definition for the error in the reconstruction by the standard method. Defining the error by E = ii u, the reconstruction error may be examined in terms of its components: (4.10) Also, for any choice of norm 11, ( 4.11) The importance of (4.10) is that it provides a framework for categorizing the sources of error in the list given earlier, and that it permits analysis of the interdependence of the errors. It will also serve to identify the effects of corrective measures. 109


4.7.2 Vector and Operator Norms

In order to make use of (4.11), it is necessary to select the norms that will be used. The vector $E$ is in image space, a subspace of $R^{N^2}$, as are $E_{\mathcal{F}^{-1}}$, $\mathcal{F}_2^{-1}E_{\mathcal{I}}$, and $\mathcal{F}_2^{-1}\mathcal{I}_{\mathcal{P}}^{\mathcal{C}}E_{\mathcal{F}}$. Care must be exercised in working with these quantities, however, as they are the result of operators acting on vectors from different spaces. For example, $E_{\mathcal{F}}$ is in Fourier space, which is a subspace of $C^{NM}$, and is made up of the assemblage of $M$ vectors in $C^N$. The vector norms employed here are the discrete infinity norm and the discrete $L_2$ norm. These are defined for vectors in $R^N$ or $C^N$ by

$$\|\bar{f}\|_\infty = \max_j |f_j| \quad\text{and}\quad \|\bar{f}\|_2 = \left(\frac{1}{N}\sum_j |f_j|^2\right)^{1/2}.$$

The limits in the sum for the discrete $L_2$ norm vary, depending on how the vector is indexed. It must be understood that the sum is over all components of the vector, and the fraction in front of the sum is the reciprocal of the number of components. Note that $\|\bar{f}\|_2 \le \|\bar{f}\|_\infty$. The $L_2$ operator norm of $T: V \to W$ is defined as

$$\|T\|_2 = \sup_{\bar{f}\ne 0}\frac{\|T\bar{f}\|_2}{\|\bar{f}\|_2}, \qquad (4.12)$$

where $\|T\bar{f}\|_2$ is the vector norm for the vector space $W$, and $\|\bar{f}\|_2$ is the vector norm for $V$. A similar definition may be used for the operator norm $\|T\|_\infty$. With these definitions, we may begin to examine (4.11) more closely.


4.7.3 Operator Norms for $\mathcal{F}_1$ and $\mathcal{F}_2^{-1}$

Obtaining $L_2$ operator norms for the DFT operators is straightforward, and depends on Parseval's relation (for the one-dimensional DFT): if the vector $\hat{f}$ is the DFT of $\bar{f}$, then

$$\sum_k |\hat{f}_k|^2 = \frac{1}{N}\sum_j |f_j|^2.$$

It follows that

$$\|\hat{f}\|_2^2 = \frac{1}{N}\sum_k |\hat{f}_k|^2 = \frac{1}{N^2}\sum_j |f_j|^2 = \frac{1}{N}\|\bar{f}\|_2^2.$$

Therefore, $\|\mathcal{F}_1\|_2 = 1/\sqrt{N}$. Similarly, Parseval's relation for the two-dimensional DFT can be used to find the operator norm for $\mathcal{F}_2^{-1}$. For two dimensions, if $\hat{f}_{kl}$ is the $N^2$-vector given by the DFT of the $N^2$-vector $f_{mn}$, then

$$\sum_{m=1}^{N}\sum_{n=1}^{N}|f_{mn}|^2 = N^2\sum_{k=1}^{N}\sum_{l=1}^{N}|\hat{f}_{kl}|^2,$$

from which it follows that $\|\mathcal{F}_2^{-1}\|_2 = N$.
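Both operator norms can be confirmed numerically. In the sketch below, the placement of the $1/N$ factors in the transforms is an assumption chosen to match the discrete $L_2$ norms defined above; by Parseval's relation the ratios are exact for any input vector.

```python
import numpy as np

# Numerical check of the two operator norms under the discrete L2 norm
# ||f||_2 = ((1/N) * sum |f_i|^2)^(1/2).
rng = np.random.default_rng(0)
N = 64

def l2(v):
    """Discrete L2 norm: mean of squared moduli, then square root."""
    return np.sqrt(np.mean(np.abs(v) ** 2))

f = rng.standard_normal(N)
f_hat = np.fft.fft(f) / N              # forward DFT carrying the 1/N factor
ratio_1d = l2(f_hat) / l2(f)           # equals ||F1||_2 = 1/sqrt(N)

u_hat = rng.standard_normal((N, N)) + 1j * rng.standard_normal((N, N))
u = np.fft.ifft2(u_hat) * N ** 2       # inverse 2-D DFT as a plain double sum
ratio_2d = l2(u) / l2(u_hat)           # equals ||F2^{-1}||_2 = N
```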

4.7.4 The Operator Norm for $\mathcal{I}_{\mathcal{P}}^{\mathcal{C}}$

The operator $\mathcal{I}_{\mathcal{P}}^{\mathcal{C}}$ is defined by the sum in (4.7) as

$$[\mathcal{I}_{\mathcal{P}}^{\mathcal{C}}\hat{f}](m\lambda,n\lambda) = \sum_{l=0}^{q}\sum_{i=0}^{p} v_l(m,n)\,w_i(m,n)\,\hat{f}\!\left(\omega_{mn}+i\lambda,\ \phi_{mn}+\frac{l\pi}{M}\right).$$

Bounding this sum with the triangle inequality yields

$$\left|[\mathcal{I}_{\mathcal{P}}^{\mathcal{C}}\hat{f}](m\lambda,n\lambda)\right| \le \sum_{l=0}^{q}\sum_{i=0}^{p} |v_l(m,n)|\,|w_i(m,n)|\,\|\hat{f}\|_\infty.$$

Taking the maximum over all elements of $\bar{u}$ gives

$$\|\mathcal{I}_{\mathcal{P}}^{\mathcal{C}}\hat{f}\|_\infty \le \left(\max_{m,n}\sum_{l=0}^{q}\sum_{i=0}^{p} |v_l(m,n)|\,|w_i(m,n)|\right)\|\hat{f}\|_\infty$$

and, therefore, from (4.12),

$$\|\mathcal{I}_{\mathcal{P}}^{\mathcal{C}}\|_\infty \le \max_{m,n}\sum_{l=0}^{q}\sum_{i=0}^{p} |v_l(m,n)|\,|w_i(m,n)|. \qquad (4.13)$$

The problem of bounding the infinity norm of the operator $\mathcal{I}_{\mathcal{P}}^{\mathcal{C}}$ thus reduces to finding bounds for the Lagrangian weights. This is done as follows:

Lemma 4.1 Let the Lagrangian weights $v_l(m,n)$ and $w_i(m,n)$ be given by

$$v_l(m,n) = \prod_{\substack{j=0\\ j\ne l}}^{q}\frac{\mu(m,n)-j}{l-j} \quad\text{and}\quad w_i(m,n) = \prod_{\substack{j=0\\ j\ne i}}^{p}\frac{\sigma(m,n)-j}{i-j},$$

where $q$ and $p$ are odd positive integers and the quantities $\mu(m,n)$ and $\sigma(m,n)$ are given by (4.6). Then the weights are bounded by

$$|v_l(m,n)| \le \frac{q+1}{2} \quad\text{and}\quad |w_i(m,n)| \le \frac{p+1}{2}.$$
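Before turning to the proof, the claimed bounds can be spot-checked numerically. The sketch below (a small experiment with an assumed helper, not part of the algorithm) evaluates the weights for target points in the central subinterval of the nodes $0,1,\ldots,q$ and confirms the bound $(q+1)/2$:

```python
import numpy as np

def lagrange_weights(t, q):
    """Lagrangian weights for the nodes 0, 1, ..., q at the point t."""
    nodes = np.arange(q + 1)
    w = np.ones(q + 1)
    for l in nodes:
        others = nodes[nodes != l]
        w[l] = np.prod((t - others) / (l - others))
    return w

# Sweep the central subinterval [(q-1)/2, (q+1)/2] for several odd degrees
# and record the largest weight magnitude encountered.
for q in (1, 3, 5, 7):
    ts = np.linspace((q - 1) / 2, (q + 1) / 2, 201)
    worst = max(np.abs(lagrange_weights(t, q)).max() for t in ts)
    assert worst <= (q + 1) / 2 + 1e-12    # the bound of Lemma 4.1
```

Here the nodes are the integers, matching the normalized form used in the proof; the weights also sum to one, as any Lagrangian weights must.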


Proof: The quantity $\mu(m,n)$ is a real number between $(q-1)/2$ and $(q+1)/2$, while the quantity $\sigma(m,n)$ is a real number between $(p-1)/2$ and $(p+1)/2$. It suffices to find

$$\max_{t\in[\frac{q-1}{2},\frac{q+1}{2}]}\ \prod_{\substack{j=0\\ j\ne l}}^{q}\frac{|t-j|}{|l-j|} \qquad (4.14)$$

and apply the result to both sets of weights. Considering first the denominator, note that

$$\prod_{\substack{j=0\\ j\ne l}}^{q}|l-j| = (l)!\,(q-l)!$$

It is apparent that, since

$$(0)!(q)! > (1)!(q-1)! > (2)!(q-2)! > \cdots > \left(\frac{q-1}{2}\right)!\left(\frac{q+1}{2}\right)!,$$

the denominator in (4.14) is minimal at $l = (q\pm 1)/2$. To bound the numerator, observe that

$$\prod_{\substack{j=0\\ j\ne l}}^{q}|t-j| = \frac{|\pi_q(t)|}{|t-l|}, \quad\text{where } \pi_q(t) = t(t-1)\cdots(t-q).$$

From Lemma 3.1 we know that $|\pi_q(t)| \le \left[\left(\frac{q+1}{2}\right)!\right]^2$ on this interval. If $l \ne (q\pm 1)/2$, then $|t-l| \ge 1$, and this bound on $|\pi_q(t)|$ can serve as an upper bound on the numerator in (4.14). However, in the case that $l = (q-1)/2$, the quantity $|t-l|$ can be arbitrarily small (we need not be concerned with the case $|t-l| = 0$, for which $\sum_l |v_l(m,n)| = 1$). In the case $l = (q-1)/2$ the numerator

$$\max_{t\in[\frac{q-1}{2},\frac{q+1}{2}]}\left| t(t-1)\cdots\left(t-\frac{q-1}{2}+1\right)\left(t-\frac{q+1}{2}\right)\left(t-\frac{q+1}{2}-1\right)\cdots(t-q)\right|$$


can be bounded by $C_1 C_2 C_3$, where

$$C_1 = \max_{t\in[\frac{q-1}{2},\frac{q+1}{2}]}\left|t(t-1)\cdots\left(t-\frac{q-1}{2}+1\right)\right|,$$

$$C_2 = \max_{t\in[\frac{q-1}{2},\frac{q+1}{2}]}\left|t-\frac{q+1}{2}\right|,$$

$$C_3 = \max_{t\in[\frac{q-1}{2},\frac{q+1}{2}]}\left|\left(t-\frac{q+1}{2}-1\right)\left(t-\frac{q+1}{2}-2\right)\cdots(t-q)\right|.$$

The terms $C_1$ and $C_3$ are bounded by $\left(\frac{q+1}{2}\right)!$, as in the proof of Lemma 3.1. Now

$$\max_{t\in[\frac{q-1}{2},\frac{q+1}{2}]}\left|t-\frac{q+1}{2}\right| = 1,$$

so the numerator in (4.14) is bounded from above by $\left[\left(\frac{q+1}{2}\right)!\right]^2$. A similar argument shows that this estimate holds for the case $l = (q+1)/2$ as well. Combining the minimal denominator with the maximal numerator yields

$$\max_{t\in[\frac{q-1}{2},\frac{q+1}{2}]}\ \prod_{\substack{j=0\\ j\ne l}}^{q}\frac{|t-j|}{|l-j|} \le \frac{\left[\left(\frac{q+1}{2}\right)!\right]^2}{\left(\frac{q-1}{2}\right)!\left(\frac{q+1}{2}\right)!} = \frac{q+1}{2}.$$

Applying this bound to the definitions of $v_l(m,n)$ and $w_i(m,n)$ completes the proof. $\blacksquare$

Using Lemma 4.1 we find that

$$\|\mathcal{I}_{\mathcal{P}}^{\mathcal{C}}\|_2 \le \|\mathcal{I}_{\mathcal{P}}^{\mathcal{C}}\|_\infty \le \frac{(p+1)^2(q+1)^2}{4}.$$

Several comments are in order. First, it should be noted that this is a pessimistic bound for the operator norm. Experimental data show that, for the image reconstruction problem, the discrete $L_2$ norm of the vector that results from Lagrangian interpolation in both the radial and angular directions generally differs very little from the $L_2$ norm of the input vector. The pessimistic


bound would be of some concern if it were our intent to try to eliminate the error in the algorithm by letting $p$ and $q$ become large. Our concern here, however, is to analyse the algorithm as $N$ and/or $M$ get large. In this case, it is sufficient to show that the operator norm does not grow with $N$ or $M$. For a general $p$-degree interpolation problem with Lagrangian weights $L_i(x)$, the quantity $\Lambda_p = \max_x \sum_i |L_i(x)|$ is called the Lebesgue constant, and is often used to bound the error of interpolation. In the proof of Lemma 4.1 it is shown that $\Lambda_p = O((p+1)^2)$ for the geometry of the reconstruction problem. In [37] it is shown that, for any geometry of the interpolation nodes, $\Lambda_p$ grows at least as fast as $\log p$, so that the least upper bound we could hope to achieve for $\|\mathcal{I}_{\mathcal{P}}^{\mathcal{C}}\|$ would be $O(\log p\,\log q)$.

4.7.5 The Error in Computing the Forward Transform

The projections of the image, $f(p,\phi)$,

vector $E_{\mathcal{F}}$ has a norm satisfying

$$\|E_{\mathcal{F}}\| \le \frac{G}{N^\alpha} \qquad (4.15)$$

for some constant $G > 0$. From this it is apparent that $\|E_{\mathcal{F}}\| \to 0$ as $N \to \infty$, and the vector $\hat{f}$ converges to the exact transform samples. Note that no band-limiting assumption is required for this convergence. Since the image has a convergent Fourier series, the coefficients decay, and taking $N$ large enough will eventually produce a spatial sample rate so small that aliasing is essentially eliminated. Recalling the bound on the operator norm $\|\mathcal{I}_{\mathcal{P}}^{\mathcal{C}}\|$, we see that applying the interpolation operator to the DFT error can cause some magnification of this error, but in the worst case,

$$\|\mathcal{I}_{\mathcal{P}}^{\mathcal{C}}E_{\mathcal{F}}\| \le \frac{(p+1)^2(q+1)^2}{4}\,\frac{G}{N^\alpha}.$$

The error in the DFT therefore remains $O(N^{-\alpha})$ after the interpolation operator is applied. Finally, consider the action of the operator $\mathcal{F}_2^{-1}$. The operator norm is $\|\mathcal{F}_2^{-1}\|_2 = N$, so that any residual error from the DFT and the interpolation may be magnified by the inverse transform step as

$$\|\mathcal{F}_2^{-1}\mathcal{I}_{\mathcal{P}}^{\mathcal{C}}E_{\mathcal{F}}\|_2 \le \frac{(p+1)^2(q+1)^2}{4}\,\frac{G}{N^{\alpha-1}}.$$

This result would indicate that, if the projections are not so smooth that $\alpha > 1$, then the error due to the DFT cannot be guaranteed to vanish as $N \to \infty$.
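The $O(N^{-\alpha})$ decay of the forward-transform error can be observed for a concrete projection-like function. The sketch below is an illustrative experiment, assuming the hat function $f(p) = 1 - |p|$ (for which $\alpha = 2$): the aliasing error of the DFT approximation to a Fourier coefficient should drop by roughly $2^2 = 4$ when $N$ doubles.

```python
import numpy as np

def dft_coeff(N, k):
    """DFT approximation to the k-th Fourier coefficient of the hat
    function f(p) = 1 - |p| on [-1, 1], from N equispaced samples."""
    p = -1 + 2 * np.arange(N) / N
    f = 1 - np.abs(p)
    return np.sum(f * np.exp(-1j * np.pi * k * p)) / N

def exact_coeff(k):
    # c_k = (1/2) * integral_{-1}^{1} (1 - |p|) e^{-i pi k p} dp
    return (1 - np.cos(np.pi * k)) / (np.pi * k) ** 2

errs = {N: abs(dft_coeff(N, 1) - exact_coeff(1)) for N in (32, 64)}
ratio = errs[32] / errs[64]            # ~4, i.e. O(N^-2) aliasing error
```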


This possibility holds for very common cases, where the projections are bounded but not necessarily continuous.

4.7.6 The Error in Computing the Inverse Transform

In order to bound the error in the inverse transform, it is necessary to extend Theorem 3.8 to functions of two variables. Since the image $u(x,y)$ has compact support, we begin by noting that the relation between the Fourier transform and the Fourier series coefficients of a compactly supported function on $R^2$ is a straightforward extension of the same relationship for functions on $R$. If $u(x,y) = 0$ for $|x| \ge A$, $|y| \ge A$, then the Fourier expansion is

$$u(x,y) = \sum_{m=-\infty}^{\infty}\sum_{n=-\infty}^{\infty} c_{mn}\,e^{i(m\pi x + n\pi y)/A}$$

with coefficients

$$c_{mn} = \frac{1}{4A^2}\int_{-A}^{A}\int_{-A}^{A} u(x,y)\,e^{-i(m\pi x + n\pi y)/A}\,dx\,dy,$$

while the Fourier transform is

$$\hat{u}(\omega_x,\omega_y) = \frac{1}{2\pi}\int_{-A}^{A}\int_{-A}^{A} u(x,y)\,e^{-i(\omega_x x + \omega_y y)}\,dx\,dy.$$

From this it is easy to see that

$$c_{mn} = \frac{\pi}{2A^2}\,\hat{u}\!\left(\frac{m\pi}{A},\frac{n\pi}{A}\right).$$

For the image $u(x,y)$, which has support on $[-1,1]\times[-1,1]$, the relationship between the Fourier transform and the Fourier coefficients is

$$c_{mn} = \frac{\pi}{2}\,\hat{u}(m\pi,n\pi).$$


The operator $\mathcal{F}_2^{-1}$ is approximated by the two-dimensional IDFT in equation (4.4), where the sample rate $\lambda$ is $\pi$. As we found with the one-dimensional IDFT, this approximation is merely the partial sum of the Fourier series for $u(x,y)$, and the error in the approximation is due to truncating the infinite series. Accordingly, we consider the error due to truncating a Fourier series of a function of two variables. Consider the separable case in which $u(x,y) = g(x)h(y)$, where $g$ and $h$ each vanish outside $[-A,A]$. The Fourier coefficients of this function are

$$c_{mn} = \frac{1}{4A^2}\int_{-A}^{A}\int_{-A}^{A} u(x,y)\,e^{-i(m\pi x+n\pi y)/A}\,dx\,dy = \left(\frac{1}{2A}\int_{-A}^{A} g(x)e^{-im\pi x/A}dx\right)\left(\frac{1}{2A}\int_{-A}^{A} h(y)e^{-in\pi y/A}dy\right) = a_m b_n,$$

where $a_m$ and $b_n$ are the (complex) Fourier coefficients of the functions $g(x)$ and $h(y)$, respectively. If the functions $g$ and $h$ individually satisfy the conditions of Theorem 3.5, then we may bound the error in the IDFT as follows:

Theorem 4.2 Let $u(x,y) = g(x)h(y)$, where $g$ and $h$ each vanish outside $[-A,A]$. For $p \ge 1$, suppose that $g(x)$ and its derivatives, up to the $(p-1)$st, are bounded, continuous, and satisfy Dirichlet's conditions in $(-A,A)$, with

$$\lim_{x\to -A^+} g^{(j)}(x) = \lim_{x\to A^-} g^{(j)}(x) = 0$$

for $j = 0,1,\ldots,p-1$. Suppose that the $p$th derivative is bounded and satisfies Dirichlet's conditions on the same interval, and, for $r \ge 1$, suppose that $h(y)$


satisfies the same conditions with $p$ replaced by $r$. Then the error in approximating $u(x,y)$ by the partial sum of its Fourier series satisfies

$$\left|u(x,y) - \sum_{m=-N/2}^{N/2}\sum_{n=-N/2}^{N/2} c_{mn}\,e^{i(m\pi x+n\pi y)/A}\right| \le \frac{C}{N^\alpha},$$

where $\alpha = \min(p,r)$ and $C$ is a constant independent of $N$.

Proof: The error in the approximation is the sum over all members of the series that are not included in the partial sum. That is,

$$u(x,y) - \sum_{m=-N/2}^{N/2}\sum_{n=-N/2}^{N/2} c_{mn}e^{i(m\pi x+n\pi y)/A} = \sum_J c_{mn}e^{i(m\pi x+n\pi y)/A},$$

where $J$ is the set of all ordered pairs $(m,n)$ of integers in which one or both of $|m|, |n| > N/2$. Therefore,

$$\left|u(x,y) - \sum_{m=-N/2}^{N/2}\sum_{n=-N/2}^{N/2} c_{mn}e^{i(m\pi x+n\pi y)/A}\right| \le \sum_J |c_{mn}|.$$

Letting $N_1 = N/2 + 1$, $J$ consists of all ordered pairs of integers $(m,n)$ in which at least one integer is no smaller in absolute value than $N_1$. The series over $J$ may be broken up into twelve subseries as follows:

$$\sum_J |c_{mn}| = \sum_{m=N_1}^{\infty}|c_{m0}| + \sum_{m=-N_1}^{-\infty}|c_{m0}| + \sum_{n=N_1}^{\infty}|c_{0n}| + \sum_{n=-N_1}^{-\infty}|c_{0n}|$$

$$+ \sum_{m=1}^{\infty}\sum_{n=N_1}^{\infty}|c_{mn}| + \sum_{m=-1}^{-\infty}\sum_{n=N_1}^{\infty}|c_{mn}| + \sum_{m=1}^{\infty}\sum_{n=-N_1}^{-\infty}|c_{mn}| + \sum_{m=-1}^{-\infty}\sum_{n=-N_1}^{-\infty}|c_{mn}|$$

$$+ \sum_{m=N_1}^{\infty}\sum_{n=1}^{N/2}|c_{mn}| + \sum_{m=-N_1}^{-\infty}\sum_{n=1}^{N/2}|c_{mn}| + \sum_{m=N_1}^{\infty}\sum_{n=-1}^{-N/2}|c_{mn}| + \sum_{m=-N_1}^{-\infty}\sum_{n=-1}^{-N/2}|c_{mn}| \qquad (4.16)$$


Figure 4.4: Regions of summation outside the square $-N/2 \le m,n \le N/2$.

Each of the twelve subseries on the right-hand side of equation (4.16), counted in the order that they appear, corresponds to the numbered region of the $(m,n)$ plane. If the coefficients $c_{mn}$ are placed on a Cartesian grid with $m$ and $n$ axes, the error consists of the sum over all points outside the square $[-N/2,N/2]\times[-N/2,N/2]$. Figure 4.4 indicates the regions covered by each of the subseries. The first two summations on the right-hand side of (4.16) are the sums (outside the square) along the $m$-axis, while the next two sums are on the $n$-axis. The next four sums are, in order, over the absolute values of the Fourier coefficients in the regions marked 5, 6, 7, and 8 in Figure 4.4. The last four sums correspond to the regions marked 9, 10, 11, and 12. Because the function $u(x,y)$ is separable, the Fourier coefficients satisfy (by


Theorem 3.5)

$$|c_{mn}| \le \frac{M_1}{|m|^{p+1}|n|^{r+1}}, \qquad |c_{m0}| \le \frac{M_2}{|m|^{p+1}}, \qquad |c_{0n}| \le \frac{M_3}{|n|^{r+1}} \qquad (4.17)$$

for some numbers $M_1, M_2, M_3 > 0$. Because of the symmetrical distribution of the regions of summation in Figure 4.4, substituting the bounds on the coefficients from (4.17) into (4.16) yields

$$\sum_J |c_{mn}| < 2\left[\sum_{m=N_1}^{\infty}\frac{M_2}{m^{p+1}} + \sum_{n=N_1}^{\infty}\frac{M_3}{n^{r+1}} + 2\sum_{m=1}^{\infty}\sum_{n=N_1}^{\infty}\frac{M_1}{m^{p+1}n^{r+1}} + 2\sum_{m=N_1}^{\infty}\sum_{n=1}^{N/2}\frac{M_1}{m^{p+1}n^{r+1}}\right]. \qquad (4.18)$$

Each of the terms in (4.18) can be bounded. The first term satisfies

$$\sum_{m=N_1}^{\infty}\frac{M_2}{m^{p+1}} < \int_{N/2}^{\infty}\frac{M_2}{x^{p+1}}\,dx = \frac{2^p M_2}{p N^p},$$

the second satisfies

$$\sum_{n=N_1}^{\infty}\frac{M_3}{n^{r+1}} < \int_{N/2}^{\infty}\frac{M_3}{y^{r+1}}\,dy = \frac{2^r M_3}{r N^r},$$

and the third satisfies

$$\sum_{m=1}^{\infty}\sum_{n=N_1}^{\infty}\frac{M_1}{m^{p+1}n^{r+1}} < M_1\,\zeta(p+1)\,\frac{2^r}{r N^r},$$

where $\zeta(x)$ is the Riemann zeta function. Finally, the last series may be bounded by

$$\sum_{m=N_1}^{\infty}\sum_{n=1}^{N/2}\frac{M_1}{m^{p+1}n^{r+1}} < M_1\,\zeta(r+1)\,\frac{2^p}{p N^p}.$$
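The integral comparisons used for these tail sums are easy to confirm numerically; the truncation of the infinite sums below only makes them smaller, so the inequality is preserved.

```python
import numpy as np

# For p >= 1, verify  sum_{m > N/2} m^{-(p+1)}  <=  2^p / (p N^p),
# the value of the comparison integral from N/2 to infinity.
for p in (1, 2, 3):
    for N in (8, 32, 128):
        m = np.arange(N // 2 + 1, 200000, dtype=float)
        tail = np.sum(m ** -(p + 1))
        assert tail <= 2.0 ** p / (p * float(N) ** p)
```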


The four bounds may then be added together, giving

$$\sum_J |c_{mn}| < \frac{2^{p+1}}{p}\left[M_2 + 2M_1\zeta(r+1)\right]\frac{1}{N^p} + \frac{2^{r+1}}{r}\left[M_3 + 2M_1\zeta(p+1)\right]\frac{1}{N^r}.$$

Letting

$$D_1 = \frac{2^{r+1}}{r}\left[M_3 + 2M_1\zeta(p+1)\right] \quad\text{and}\quad D_2 = \frac{2^{p+1}}{p}\left[M_2 + 2M_1\zeta(r+1)\right],$$

this simplifies to

$$\sum_J |c_{mn}| < \frac{D_1}{N^r} + \frac{D_2}{N^p} \le \frac{D_1 + D_2}{N^\alpha}, \quad \alpha = \min(p,r).$$

Taking $C = D_1 + D_2$ completes the proof. $\blacksquare$

There is, of course, no reason to suspect that the image $u(x,y)$ is the product of functions $g(x)$ and $h(y)$. It is likely that we may have more knowledge of the smoothness of the projections than of the image. Accordingly, it is worthwhile to determine a relation between the smoothness of the projections and the error in reconstructing the image with a truncated Fourier series. In order to do this, it is necessary to show that bounds like those given by Theorem 3.5 for the Fourier coefficients of the projections $f(p,\phi)$ can be used to bound the sum of the neglected terms in the double Fourier series. The Fourier coefficients of $u(x,y)$ are constant multiples of the Fourier transform values $\hat{u}(m\pi,n\pi)$, which by the Central Slice theorem are constant multiples of $\hat{f}(\sqrt{m^2+n^2}\,\pi,\phi_{mn})$. We therefore need to know how $\hat{f}(\omega,\phi)$

decays as $\omega$ varies continuously, rather than only at the values $\omega = k\pi$. To gain this knowledge, we need to generalize Theorem 3.5 to apply to the Fourier transform. This may be done with only a minor modification to the proof of that theorem [9]:

Theorem 4.3 For fixed $\phi$, let the projection $f(p,\phi)$ be bounded, satisfy Dirichlet's conditions in the interval $[-1,1]$, and vanish for $|p| \ge 1$. Then $\hat{f}(\omega,\phi)$, the Fourier transform of the projection, satisfies

$$|\hat{f}(\omega,\phi)| \le \frac{K}{|\omega|},$$

where $K > 0$ is independent of $\omega$.

Proof: Since $f$ is bounded and satisfies Dirichlet's conditions, the interval $[-1,1]$ can be broken up into a finite number, $s$, of intervals $(a_j,a_{j+1})$ in which $f(p,\phi)$ is monotonic. The Fourier transform is thus

$$\hat{f}(\omega,\phi) = \frac{1}{\sqrt{2\pi}}\sum_{j=0}^{s-1}\int_{a_j}^{a_{j+1}} f(p,\phi)\,e^{-ip\omega}\,dp. \qquad (4.19)$$

Now the Second Mean Value theorem for integrals holds that, if $g(x)$ is bounded and monotonic in $(a,b)$ and $h(x)$ is bounded and integrable, and has at most a finite number of sign changes in $(a,b)$, then

$$\int_a^b g(x)h(x)\,dx = g(a)\int_a^{\xi} h(x)\,dx + g(b)\int_{\xi}^b h(x)\,dx,$$

where $\xi \in [a,b]$. Applied to the Fourier transform (4.19) this gives


$$\hat{f}(\omega,\phi) = \frac{1}{\sqrt{2\pi}}\sum_{j=0}^{s-1}\left[f(a_j,\phi)\int_{a_j}^{\xi_j} e^{-ip\omega}\,dp + f(a_{j+1},\phi)\int_{\xi_j}^{a_{j+1}} e^{-ip\omega}\,dp\right],$$

where $\xi_j \in [a_j,a_{j+1}]$. Then

$$\left|\int_a^b e^{-ip\omega}\,dp\right| = \left|\frac{e^{-ib\omega}-e^{-ia\omega}}{-i\omega}\right| \le \frac{2}{|\omega|},$$

and, bounding the sum term by term, we have

$$|\hat{f}(\omega,\phi)| \le \frac{1}{\sqrt{2\pi}}\,\frac{4sH}{|\omega|},$$

where $H$ is the least upper bound of $|f(p,\phi)|$ for $p\in[-1,1]$. $\blacksquare$

Suppose that, in addition to the conditions of Theorem 4.3, $f(p,\phi)$ is continuous on $[-1,1]$, and that its partial derivative with respect to $p$ is bounded and satisfies Dirichlet's conditions in $[-1,1]$. Then integrating by parts yields

$$\hat{f}(\omega,\phi) = \frac{1}{\sqrt{2\pi}}\int_{-1}^{1} f(p,\phi)e^{-ip\omega}\,dp = \frac{1}{i\omega}\left(\frac{1}{\sqrt{2\pi}}\int_{-1}^{1}\frac{\partial f}{\partial p}\,e^{-ip\omega}\,dp\right),$$

where we have used $f(-1,\phi) = f(1,\phi) = 0$. Since $\partial f/\partial p$ satisfies the conditions of Theorem 4.3, the term inside the parentheses is bounded in absolute value by $H/|\omega|$. Therefore,

$$|\hat{f}(\omega,\phi)| \le \frac{H}{|\omega|^2}.$$

Applying the same argument to the partial derivatives of successively higher orders, we obtain the necessary generalization of Theorem 3.5:

Corollary 4.1 For fixed $\phi$, let the projection $f(p,\phi)$ and its partial derivatives with respect to $p$, up to the $(p-1)$st, be bounded, continuous, and satisfy Dirichlet's

PAGE 137

conditions on [-1, 1]. Suppose that lim 8'1 p--1+ 8p 1 8' f liD-, p-1-8p for s = 0, 1, ... (p-1). If the p'h partial derivative with respect top is bounded and satisfies Dirichlet's conditions in [ -1, 1], then where H is independent of w. Finally, we can apply Corollary 4.1 to obtain an error bound on the inverse transform step of the reconstruction that is based on the smoothness of the projections: Theorem 4.4 Let the projection having the least smoothness satisy the conditions of Corollary 4.1 for some p > 1. Then the error in approximating the image u(x,y) by the partial sum of its Fourier series satisfies 2 N/2 N/2 C u(x,y)-;: L L ::; Np-1 m=-N /2 n=-N /2 where C > 0 is a constant independent of N. Proof: From the proof of Theorem 4.2 we have ( 4.20) N/2 N/2 L L 7r 7r m=-N/2 n=-N/2 J where we have written the Fourier coefficients Cmn as m1r, n1r ), and J designates the set of all pairs ( m, n) with one or both of lml, In I > N (2. By Theorem 125

PAGE 138

4.1, this error is no larger than where Wmn = 7rVm 2 + n 2 Since all members of the set J satisfy )m2 + n2 > N /2, it follows that loh 1"" H 21r H ( 2 )p-l < --wdwd = ---. -o ";wP+1 p-1 N1r Letting C = (2P H)/( 1rP-2(p1)), we arrive at the error bound in ( 4.20). 1 Depending on whether we know the smoothness of the image or the projections, we can use either the bound from Theorem 4.2 or that of Theorem 4.4 to give \\EF-'1\oo At first glance, it would seem that the former bound is superior to the latter. Indeed, given the same value of p for both bounds, only one power of 1/ N is lost in Theorem 4.2, while two are lost in Theorem 4.4. This is deceptive, however, as the statement that ( 4.21) is much weaker than Since the geometric mean is less than or equal to the arithmetic mean and this inequality is sharp, it follows that if j(w, )satisfies (4.21), the most that can be inferred about the Fourier coefficients of u( x, y) is 126

PAGE 139

4.7.7 Error in the Interpolation Consider now the error that occurs in the interpolation of u from j. Error bounds for p-degree Lagrangian interpolation are generally related to the (p + 1 )'' derivative of the interpolated function provided the function is sufficiently differentiable. Failing that condition, the error may be tied to some other measure of the variation in a function, such as the modulus of continuity, the maximum deviation a function may sustain in a given interval. We begin by citing the following property of two-dimensional Fourier transforms: Theorem 4.5 ((49]) Let g(x,y) be absolutely integrable over R2 with Fourier transform F.{g(x,y)} = g(wz,wy) If the function xg(x,y) is absolutely integrable over R 2 then the partial derivative of g with respect to Wz exists and is continuous on R2 Moreover, F.{xg(x,y)} Similarly, if yg(x, y) is absolutely integrable on R2 8gj 8wy exists, is continuous on R2 and -i8gj8wy is the Fourier transform ofyg(x,y). Applying the theorem to the image function u(x,y), we find that Corollary 4.2 Let the image satisfy iu(x,y)i U for some constant U, with u(x,y) = 0 for lxl;::: 1, IYI;::: 1. If F.{u} = u(wz,wy), then for any non-negative 127

PAGE 140

integers m and n, exists and is continuous on R 2 Moreover, Proof: To prove existence and continuity, we use Theorem 4.5 to establish the case m = 1, n = 1. Induction is then used to establish existence and continuity for all m and n. The inequality follows according to I Since the operator :r;% interpolates in polar coordinates, the error terms for the Lagrangian interpolation will include partial derivatives with respect to the polar variables w and ;. Therefore, it is necessary to find bounds for these partial derivatives. This may be done by Lemma 4.2 Let the image satisfy lu(x,y)l ::; U for some constant U, with u(x,y) = 0 for lxl2: 1, IYI2: 1. If'R{u} = j(w,;) and :;::2 {u} = u(wz,wy), then for any non-negative integer n, l[}njl ::; awn and ( 4.22) 128 ,, '


Proof: The Central Slice theorem guarantees that $\hat{u}(\omega_x,\omega_y) = \hat{f}(\omega,\phi)$ under the change of variables $\omega_x = \omega\cos\phi$ and $\omega_y = \omega\sin\phi$. Since all the partial derivatives of $\hat{u}$ exist and are continuous, the same statement holds for the partial derivatives of $\hat{f}$ with respect to $\omega$ and $\phi$. Indeed, it is not difficult to show that

$$\frac{\partial^n\hat{f}}{\partial\omega^n} = \sum_{m=0}^{n}\binom{n}{m}\cos^{n-m}\phi\,\sin^m\phi\,\frac{\partial^n\hat{u}}{\partial\omega_x^{n-m}\,\partial\omega_y^m} \qquad (4.23)$$

and

$$\frac{\partial^n\hat{f}}{\partial\phi^n} = \omega^n\sum_{m=0}^{n}(-1)^{n-m}\binom{n}{m}\sin^m\phi\,\cos^{n-m}\phi\,\frac{\partial^n\hat{u}}{\partial\omega_x^{n-m}\,\partial\omega_y^m}. \qquad (4.24)$$

Bounding the terms in these expressions with the triangle inequality, noting that

$$\sum_{m=0}^{n}\binom{n}{m} = 2^n,$$

and using the bound for the partial derivatives from Corollary 4.2 leads to

$$\left|\frac{\partial^n\hat{f}}{\partial\omega^n}\right| \le \frac{2^{n+1}U}{\pi} \quad\text{and}\quad \left|\frac{\partial^n\hat{f}}{\partial\phi^n}\right| \le \frac{(2|\omega|)^n\,2U}{\pi}. \qquad\blacksquare$$

Applied to $\hat{f}(\omega,\phi)$, the operator $\mathcal{I}_{\mathcal{P}}^{\mathcal{C}}$ produces the Cartesian two-dimensional Fourier transform $\bar{u}$ plus the error vector $E_{\mathcal{I}}$. In terms of the components of the vector, this is

$$\bar{u}(m\lambda,n\lambda) + E_{\mathcal{I}}(m,n) = \sum_{l=0}^{q}\sum_{i=0}^{p} v_l(m,n)\,w_i(m,n)\,\hat{f}\!\left(\omega_{mn}+i\lambda,\ \phi_{mn}+\frac{l\pi}{M}\right).$$

This interpolation can itself be analysed as a computational cascade, consisting of a one-dimensional interpolation in $\omega$ followed by a one-dimensional interpolation


in $\phi$. Specifically, with $s = \omega\phi$, we will let $\hat{f}_\omega(s)$ represent the variation of $\hat{f}(\omega,\phi)$ for fixed $\omega$, parameterized by the arclength $s$. If the value of $\hat{f}_\omega(s)$ is known at the $q+1$ points $s_0, s_1, \ldots, s_q$ corresponding to the $q+1$ nearest neighbors (in the angular direction) of $\hat{f}(\omega,\phi)$, then the Lagrangian $q$-degree interpolation is

$$\hat{f}_\omega(s) = \sum_{l=0}^{q} v_l(s)\,\hat{f}_\omega(s_l) + E_\omega(s), \qquad (4.25)$$

where

$$v_l(s) = \prod_{\substack{j=0\\ j\ne l}}^{q}\frac{(s-s_j)}{(s_l-s_j)} = \prod_{\substack{j=0\\ j\ne l}}^{q}\frac{(\omega\phi-\omega\phi_j)}{(\omega\phi_l-\omega\phi_j)}$$

Similarly, writing $\omega = \omega_0 + r\,\Delta\omega$, the error in the radial interpolation is

$$E_{s_l}(\omega) = \frac{\pi_p(r)(\Delta\omega)^{p+1}}{(p+1)!}\,\frac{\partial^{p+1}\hat{f}}{\partial\omega^{p+1}}(\eta_l).$$

The interpolation error $E_{\mathcal{I}}(\omega_x,\omega_y)$ is therefore given by

$$E_{\mathcal{I}}(\omega_x,\omega_y) = \sum_{l=0}^{q} v_l(s)\,E_{s_l}(\omega) + E_\omega(s) = \sum_{l=0}^{q} v_l(s)\,\frac{\pi_p(r)(\Delta\omega)^{p+1}}{(p+1)!}\,\frac{\partial^{p+1}\hat{f}}{\partial\omega^{p+1}}(\eta_l) + \frac{\pi_q(t)(\Delta s)^{q+1}}{(q+1)!}\,\frac{d^{q+1}\hat{f}_\omega}{ds^{q+1}}(\xi).$$

Finally, noting that $(\Delta s)^{q+1} = (\omega\,\Delta\phi)^{q+1}$ and that

$$\frac{d^{q+1}\hat{f}_\omega}{ds^{q+1}} = \frac{1}{\omega^{q+1}}\,\frac{\partial^{q+1}\hat{f}}{\partial\phi^{q+1}},$$

we find that the absolute value of the error in the interpolation is bounded by

$$|E_{\mathcal{I}}(\omega_x,\omega_y)| \le \sum_{l=0}^{q}|v_l(s)|\,\frac{|\pi_p(r)|(\Delta\omega)^{p+1}}{(p+1)!}\left|\frac{\partial^{p+1}\hat{f}}{\partial\omega^{p+1}}\right| + \frac{|\pi_q(t)|(\Delta\phi)^{q+1}}{(q+1)!}\left|\frac{\partial^{q+1}\hat{f}}{\partial\phi^{q+1}}\right|.$$

By Lemma 3.2, $|\pi_p(r)|/(p+1)! \le 2^{-(p+1)}$, and a similar relation holds for $q$, provided the interpolation is always to the center of the set of interpolation points. Under the same requirement, $\sum_{l=0}^{q}|v_l(s)| \le (q+1)^2/2$, by Lemma 4.1. Using these bounds, as well as the bounds on the partial derivatives from (4.22), the error in interpolating $\hat{u}(\omega_x,\omega_y)$ satisfies

$$|E_{\mathcal{I}}(\omega_x,\omega_y)| \le \frac{U}{\pi}\left[(q+1)^2(\Delta\omega)^{p+1} + 2\big(|\omega|\,\Delta\phi\big)^{q+1}\right]. \qquad (4.28)$$
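The $(\Delta\omega)^{p+1}$ behavior of the one-dimensional pieces of this bound is easy to observe empirically. The sketch below is an assumed setup with a smooth test function, not the reconstruction geometry: it halves the grid spacing and watches cubic ($p = 3$) interpolation error to the central subinterval drop by roughly $2^{p+1} = 16$.

```python
import numpy as np

def lagrange_interp(x_nodes, y_nodes, t):
    """Evaluate the Lagrange interpolant through (x_nodes, y_nodes) at t."""
    total = 0.0
    for l, xl in enumerate(x_nodes):
        w = np.prod([(t - xj) / (xl - xj)
                     for j, xj in enumerate(x_nodes) if j != l])
        total += w * y_nodes[l]
    return total

def max_err(h, p=3):
    """Worst cubic-interpolation error of cos over a few stencils of
    spacing h, always targeting the central subinterval."""
    worst = 0.0
    for start in np.arange(0.0, 2.0, 10 * h):
        nodes = start + h * np.arange(p + 1)
        t = nodes[p // 2] + 0.4 * h        # point inside the central subinterval
        worst = max(worst,
                    abs(lagrange_interp(nodes, np.cos(nodes), t) - np.cos(t)))
    return worst

r = max_err(0.1) / max_err(0.05)          # expect roughly 2^(p+1) = 16
```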


Before proceeding with the error analysis, a few observations should be made regarding (4.28). This error bound applies to an interpolation that is performed first in the radial direction, and then in the angular direction. It comes as no surprise, then, that the error bound should contain two terms that depend on the grid spacings in the radial and angular directions. The presence of $|\omega|^{q+1}$ in the angular error should be carefully considered, since it has some interesting implications. First, it indicates that near the origin, as $\omega \to 0$, the error in interpolation is due almost entirely to error in the radial interpolation. This is as one might expect, since near the origin the radial lines of the polar grid are very close together, so that while the angular spacing does not change, the gridpoints on common arcs are quite close together. Second, it shows that as $\omega$ becomes large, the error due to the angular interpolation might be expected to grow, and perhaps dominate. However, it must be recalled that the values of the Fourier coefficients along any radial arm diminish like $C/N^\alpha$ as $\omega$ gets large. This effectively counteracts the factor of $\omega$ in the error term, which is never larger than $N\Delta\omega/2$. The error expression (4.28) offers an explanation for a phenomenon that is occasionally reported in the image reconstruction literature, namely that the error in the radial interpolation dominates the error due to angular interpolation, so long as a sufficient number of projections is used. Recalling that $\Delta\phi = \pi/M$, we can relate the number of projections to the number of samples per projection by examining the largest value $|\omega| = N\Delta\omega/2$.


Being the largest value of $|\omega|$, this will also bound the infinity norm of the interpolation error. In this case, (4.28) becomes

$$\|E_{\mathcal{I}}\|_\infty \le \frac{U}{\pi}\left[(q+1)^2(\Delta\omega)^{p+1} + 2\left(\frac{N\Delta\omega}{2}\cdot\frac{\pi}{M}\right)^{q+1}\right].$$

Choosing the number of projections, $M$, to satisfy

$$M \ge \frac{N\pi}{2}, \qquad (4.29)$$

the infinity norm of the error of interpolation is

$$\|E_{\mathcal{I}}\|_\infty \le \frac{U}{\pi}\left[(q+1)^2(\Delta\omega)^{p+1} + 2(\Delta\omega)^{q+1}\right]. \qquad (4.30)$$

It is interesting to note that, based on entirely different reasoning, several workers ([35], [39]) also require that the number of projections, $M$, be related to the number of samples per projection, $N$, by (4.29).

4.7.8 Asymptotic Behavior of $\|E_{\mathcal{I}}\|_2$

Consider the effect on the frequency grid $\Omega_{NM}^{\mathcal{F}}$ if the number of sample points per view, $N$, and the number of views, $M$, become large. From the discussion of the reciprocity relations in the previous chapter, recall that doubling the number of samples on the spatial grid, while holding the extent of that grid fixed, has the effect of doubling the extent of the frequency grid, while holding the frequency sample rate fixed. Therefore, $\Delta\omega$ remains constant as $N \to \infty$. We have just seen that $\|E_{\mathcal{I}}\|_\infty = O((\Delta\omega)^{p+1}) + O((\Delta\omega)^{q+1})$. This is not surprising, since error in the interpolation is related to the grid spacing. Because


$\Delta\omega$ remains constant as $N \to \infty$, it follows that the infinity norm of the error of interpolation is independent of $N$. In fact, if the interpolation operator is applied to exact values of $\hat{f}(\omega,\phi)$, then the error committed in interpolating values of $\hat{u}(\omega_x,\omega_y)$ for $\omega_x^2 + \omega_y^2 \le (N\pi/2)^2$ is exactly the same for any number of points larger than $N$, since exactly the same function values and weights are used. The term that interests us, however, is $\|\mathcal{F}_2^{-1}E_{\mathcal{I}}\|_2$. Since $\|E_{\mathcal{I}}\|_\infty$ is constant as $N \to \infty$, while the discrete $L_2$ operator norm of $\mathcal{F}_2^{-1}$ is $N$, we could be led to the disconcerting conclusion that $\|\mathcal{F}_2^{-1}E_{\mathcal{I}}\|_2 \to \infty$ as $N \to \infty$. However, the following argument shows that the discrete $L_2$ norm of the error vector is asymptotically $O(1/N)$.

Theorem 4.7 Suppose that, for any $\phi$, the projection $f(p,\phi)$ satisfies sufficient conditions so that

$$|\hat{f}(\omega,\phi)| \le \frac{H}{|\omega|^\alpha}$$

for some $H > 0$ and $\alpha > 1$. If the values of $\bar{u}(m\lambda,n\lambda)$ are interpolated from $\hat{f}(\omega,\phi)$ according to (4.7), for $-N/2+1 \le m,n \le N/2$, then as $N \to \infty$, the discrete $L_2$ norm of the error in this interpolation behaves like

$$\|E_{\mathcal{I}}\|_2 = O\!\left(\frac{1}{N}\right).$$

Proof: We need to show that $\lim_{N\to\infty} N\,\|E_{\mathcal{I}}\|_2 < \infty$. Letting $\hat{u}(m\lambda,n\lambda)$ be the exact values of the transform, the interpolated values, $\bar{u}(m\lambda,n\lambda)$, are

$$\bar{u}(m\lambda,n\lambda) = \sum_{l=0}^{q}\sum_{i=0}^{p} v_l(m,n)\,w_i(m,n)\,\hat{f}\!\left(\omega_{mn}+i\lambda,\ \phi_{mn}+\frac{l\pi}{M}\right)$$

Therefore, all values of $\hat{f}$ used in computing $\bar{u}(m\lambda,n\lambda)$ satisfy

$$|\hat{f}| \le \frac{H}{|\omega_{mn}|^\alpha},$$

provided $\sqrt{m^2+n^2} > p$. For these points, the magnitude of the interpolated value cannot exceed

$$\sum_{l=0}^{q}\sum_{i=0}^{p}|v_l(m,n)|\,|w_i(m,n)|\,\frac{H}{|\omega_{mn}|^\alpha}.$$

We have already bounded the Lebesgue constant for this interpolation:

$$\sum_{l=0}^{q}\sum_{i=0}^{p}|v_l(m,n)|\,|w_i(m,n)| \le \frac{(p+1)^2(q+1)^2}{4},$$

so that in magnitude, the interpolated value is bounded by

$$|\bar{u}(m\lambda,n\lambda)| \le \frac{(p+1)^2(q+1)^2}{4}\,\frac{H}{|\omega_{mn}|^\alpha}.$$

Now the exact value satisfies

$$|\hat{u}(m\lambda,n\lambda)| \le \frac{H}{|\omega_{mn}|^\alpha},$$

so that the error in the interpolation, $E(m,n) = \bar{u}(m\lambda,n\lambda) - \hat{u}(m\lambda,n\lambda)$, satisfies

$$|E(m,n)| < \left(\frac{(p+1)^2(q+1)^2}{4}+1\right)\frac{H}{|\omega_{mn}|^\alpha}.$$

The arithmetic mean of two numbers is at least as large as the geometric mean. That is, $\frac{a+b}{2} \ge \sqrt{ab}$. It follows that

$$\omega_{mn} = \lambda\sqrt{m^2+n^2} > \lambda\sqrt{mn},$$


and therefore that

$$\frac{1}{\omega_{mn}^\alpha} < \frac{1}{\lambda^\alpha(\sqrt{mn})^\alpha},$$

which in turn implies

$$|E(m,n)| < \left(\frac{(p+1)^2(q+1)^2}{4}+1\right)\frac{H}{\lambda^\alpha}\,\frac{1}{(\sqrt{mn})^\alpha}. \qquad (4.31)$$

We are interested in the portion of the error that depends on $m$ and $n$, giving the location of the target point on the grid. Taking everything else in (4.31) to be the constant $K$, (4.31) may be rewritten as

$$|E(m,n)| < \frac{K}{(\sqrt{mn})^\alpha}.$$

Because this bound depends only on the distance from the origin, four times the sum of the squares of the error of interpolation for all points in the first quadrant contains the sum of the squared error for those points satisfying $\sqrt{m^2+n^2} > p$. The error for the points satisfying $\sqrt{m^2+n^2} \le p$ is bounded separately. By Lemma 4.2, the function $\hat{f}(\omega,\phi)$ (and therefore each of the values used in the interpolation) is bounded in magnitude by

$$|\hat{f}(\omega,\phi)| \le \frac{2U}{\pi},$$

where $U$ is an upper bound for $|u(x,y)|$. The interpolated value cannot exceed this bound multiplied by the Lebesgue constant, and since the correct value $\hat{u}(m\lambda,n\lambda)$


is also bounded by $2U/\pi$, the error in interpolation for components near the origin cannot exceed

$$\left(\frac{(p+1)^2(q+1)^2}{4}+1\right)\frac{2U}{\pi}.$$

The right-hand side of this inequality is a constant independent of $N$, $m$, or $n$. Let it be designated $\beta$. Then the sum of the squares of the interpolation error over the set of $p^2$ points in the square $1 \le m,n \le p$ is no larger than $p^2\beta^2$. This set includes all of the points where $\sqrt{m^2+n^2} \le p$. Combining the sums of the squared error for the two regions, the discrete $L_2$ norm of the interpolation error is bounded by

$$\|E_{\mathcal{I}}\|_2^2 \le \frac{1}{N^2}\left[4K^2\sum_{m=1}^{N/2}\sum_{n=1}^{N/2}\frac{1}{(mn)^\alpha} + p^2\beta^2\right].$$

Each of the sums is a partial sum of the Riemann zeta function, and is bounded by $\zeta(\alpha)$. Therefore

$$\|E_{\mathcal{I}}\|_2 \le \frac{\sqrt{4K^2\zeta^2(\alpha) + p^2\beta^2}}{N}.$$

Multiplying both sides of this inequality by $N$ and then letting $N \to \infty$ yields $\lim_{N\to\infty} N\,\|E_{\mathcal{I}}\|_2 < \infty$, which was to be proved. $\blacksquare$


4.7.9 The Behavior of the Error of Reconstruction

Recall the error expression (4.11):

$$\|E\| \le \|E_{\mathcal{F}^{-1}}\| + \|\mathcal{F}_2^{-1}E_{\mathcal{I}}\| + \|\mathcal{F}_2^{-1}\mathcal{I}_{\mathcal{P}}^{\mathcal{C}}E_{\mathcal{F}}\|.$$

Having analysed each of the error vectors in the expression individually, we turn our attention to the overall behavior of the error in the reconstruction method. Beginning with the assumption that the projection with the least smoothness has Fourier coefficients satisfying $|c_k| \le C|k|^{-\alpha}$ for some $\alpha \ge 1$, it was found that the behavior of each component could be analysed as $N \to \infty$. Recapping, it was noted that the error in the forward transform step satisfied

$$\|E_{\mathcal{F}}\| \le \frac{G}{N^\alpha}.$$

Applying the interpolation operator to this error could result in magnification by a term that is dependent on the degree of the interpolation, but independent of $N$, so that

$$\|\mathcal{I}_{\mathcal{P}}^{\mathcal{C}}E_{\mathcal{F}}\| \le \frac{(p+1)^2(q+1)^2}{4}\,\frac{G}{N^\alpha}.$$

This error vector is then passed through the $\mathcal{F}_2^{-1}$ operator, which has a (sharp) discrete $L_2$ operator norm of $N$, so that

$$\|\mathcal{F}_2^{-1}\mathcal{I}_{\mathcal{P}}^{\mathcal{C}}E_{\mathcal{F}}\|_2 \le \frac{(p+1)^2(q+1)^2}{4}\,\frac{G}{N^{\alpha-1}}.$$

It was found that the infinity norm of the error of interpolation satisfied

$$\|E_{\mathcal{I}}\|_\infty = O((\Delta\omega)^{p+1}) + O((\Delta\omega)^{q+1}),$$


while the discrete $L_2$ norm asymptotically behaved as

$$\|E_{\mathcal{I}}\|_2 = O\!\left(\frac{1}{N}\right).$$

This in turn was passed through $\mathcal{F}_2^{-1}$, with its operator norm $N$, and we can conclude that, at worst, $\|\mathcal{F}_2^{-1}E_{\mathcal{I}}\|_2 = O(1)$ as $N\to\infty$, where the expression for the infinity norm, depending on $\Delta\omega$, is likely to dominate. Finally, the error in the inverse transform was found to be bounded as

$$\|E_{\mathcal{F}^{-1}}\|_\infty \le \frac{C}{N^{\alpha-1}}.$$

It follows that, in the worst case, the error of reconstruction behaves according to

$$\|E\| = O\!\left(\frac{1}{N^{\alpha-1}}\right) + O((\Delta\omega)^{p+1}) + O((\Delta\omega)^{q+1}) \qquad (4.32)$$

as $N \to \infty$.

4.8 Some Comments and Observations

Before presenting numerical examples illustrating the error analysis, there are several observations that should be made. The most obvious is that there is no assurance that the algorithm converges. We cannot assume that the error in the reconstruction will vanish as $N \to \infty$, or even that it will be significantly diminished. Rather, letting $N \to \infty$ will ensure that the error due to the DFT


and IDFT will vanish (assuming $\alpha > 2$), but we expect that the error of the interpolation may be unchanged by increasing the number of samples per view. Importantly, even letting the number of views grow large along with the number of samples does not improve these error bounds. It is the interpolation step that is the cause of this difficulty. It seems natural, then, to consider addressing the problem by using higher degrees of interpolation. However, the consequences of that course of action must be considered. First, recall that the operator norm for $\mathcal{I}_{\mathcal{P}}^{\mathcal{C}}$ is bounded according to

$$\|\mathcal{I}_{\mathcal{P}}^{\mathcal{C}}\|_\infty \le \frac{(p+1)^2(q+1)^2}{4},$$

so that increasing $p$ and $q$ may adversely affect the improvement of $\|\mathcal{I}_{\mathcal{P}}^{\mathcal{C}}E_{\mathcal{F}}\|$. Similarly, the discussion of the asymptotic behavior of the error of interpolation indicates that the growth of $p$ and $q$ could affect how large $N$ must become before $\|E_{\mathcal{I}}\|_2$ begins to behave as $O(1/N)$, which prevents the growth of $\|\mathcal{F}_2^{-1}E_{\mathcal{I}}\|_2$. A more serious consideration about increasing $p$ and $q$ is the effect that it would have on the computational complexity of the algorithm. Recall from (4.9) that the complexity is

$$O(MN\log_2 N) + O(N^2\log_2 N) + O(N^2(p+1)(q+1)).$$

If the number of views satisfies $M \ge N\pi/2$, as discussed above, then the operation count is

$$O(N^2\log_2 N) + O(N^2(p+1)(q+1)).$$
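A toy cost model makes the trade-off concrete; the unit constants below are illustrative assumptions, not measured operation counts.

```python
import math

# Illustrative cost model for the cascade with M = ceil(N*pi/2) views:
# O(N^2 log2 N) for the two transform stages versus
# O(N^2 (p+1)(q+1)) for the interpolation stage.
def stage_costs(N, p, q):
    M = math.ceil(N * math.pi / 2)
    fft_ops = (M + N) * N * math.log2(N)       # stage-1 and stage-3 transforms
    interp_ops = N * N * (p + 1) * (q + 1)     # stage-2 Lagrangian interpolation
    return fft_ops, interp_ops

fft_ops, interp_ops = stage_costs(512, 5, 5)   # modest degrees already dominate
```

Under this model, degree-5 interpolation in both directions already outweighs the FFT stages at $N = 512$, while degree-1 interpolation does not.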


It is easy to see that attempting to improve the error characteristics of the algorithm by using high degrees of interpolation can have a serious effect on the computing time. In fact, since typical values of $N$ are in the range of a few hundred to a thousand, relatively small interpolation degrees can cause the interpolation stage to dominate the cost of the algorithm. It should also be noted that, in the standard algorithm, $\Delta\omega = \pi$. Increasing $p$ and $q$ would have the effect of increasing the error bound on interpolation. Of course, it must be remembered that this is a bound, and that intuitively we expect that increasing $p$ and $q$ would improve, rather than degrade, the performance of the interpolation. This observation leads naturally to the thought that it would seem to be a good idea to improve performance by decreasing $\Delta\omega$. In fact, if $\Delta\omega \to 0$, the error should eventually vanish, and the process converge to the exact solution. This idea will serve as the basis for an improvement to the algorithm.

4.8.1 Comments: Bandlimiting

Most researchers in this area impose a bandlimiting requirement on the image, $u(x,y)$. That is, given some $\omega_0$, it is required that $\hat{f}(\omega,\phi) = 0$ for $|\omega| \ge \omega_0$. The frequency $\omega_0$ is usually assumed to be the Nyquist frequency, which is $\pi/\Delta p$, the highest frequency that can be resolved correctly with a spatial sample rate of $\Delta p$. True bandlimiting is not possible for the image reconstruction problem, as the uncertainty principle holds that no compactly supported function can have a compactly supported Fourier transform. Therefore the usual approach is to


apply a filter to the projections, so that they are essentially bandlimited. This condition is characterized by the requirement that the integral of the absolute value $|\hat{f}(\omega,\phi)|$, over all $|\omega| \ge \omega_0$, be smaller than some $\epsilon$. We have not imposed a bandlimiting requirement in the present discussion. It is important to understand where and how the lack of this requirement affects the previous discussion of the convergence and error characteristics of the image reconstruction process. Consider a function $f(x)$ that satisfies the Nyquist condition, namely, that it is bandlimited with $\omega_0 \le \pi/\Delta x$. One immediate consequence is that the right-hand sides of both (3.14) and (3.26) vanish. This means that (except for roundoff error) the DFT gives $\hat{f}(\omega)$ exactly, and the IDFT gives $f(x)$ exactly. For images, if the Nyquist condition is satisfied by $f(p,\phi)$ for every $\phi$, then, except for numerical noise, $\|E_{\mathcal{F}}\| = 0$ and $\|E_{\mathcal{F}^{-1}}\| = 0$. Only the error of interpolation remains for such images, which is why most workers impose the bandlimiting assumption. As might be expected, the lack of a bandlimiting assumption affects the error terms for the forward and inverse transform steps. Specifically, it is built into every inequality of the form

$$|c_k| \le \frac{M_1}{|k|^{p+1}}$$

and the similar statements regarding $\hat{f}$. To see this, recall that the proof of Theorem 4.3 required that the interval of integration be broken up into $s$ intervals on which $f(p,\phi)$ is monotonic, and

that the transform f̂(ω, φ) was dominated by the ratio of 4sH to |ω|, where H was an upper bound on |f(p, φ)|. The key is that the numerator in all of these conditions has, built into it, the number of oscillations, or the number of sign changes in the derivative. A function that is dominated by high frequency components, or with significant contributions from high frequencies, will tend to oscillate. The oscillation shows up in the bound on the Fourier coefficients, which in turn is passed on to the bound on the error in the DFT and IDFT.

4.8.2 Comments: The Discrete L2 Norm and the Infinity Norm

We have noted several phenomena that are partly or wholly a result of the definitions of the norms used in the error analysis. It was necessary, for example, to show that as N → ∞, ||E_I||₂ behaves as O(N⁻¹), in order to counteract the operator norm ||F₂⁻¹||₂ = N. It should be apparent that the infinity norm is unchanged as N → ∞, for once the largest error in interpolation is made, the same error will be made in the interpolation of f̂ for any larger N. The behavior of the discrete L2 norm of the interpolation error is really due to the fact that as N → ∞, the diminishing Fourier coefficients can produce only small values under interpolation, and the sum of the squared errors remains nearly constant. However, the division by N² causes the discrete L2 norm to decrease like O(N⁻¹). In fact, if f̂(ω, φ) is bandlimited, then the discrete L2 norm behaves exactly like

||E_I||₂ = C/N,


as N → ∞, where C is the discrete L2 norm of the interpolation error for the case that NΔω/2 = ω₀. Another phenomenon that is due to the definitions of the norms should be considered. For the most part, we have bounded the discrete L2 norm by noting that it can be no larger than the infinity norm. Our numerical experiments suggested that, while the infinity norm of the error in a procedure often performed exactly as predicted by a theoretical bound, say as O(N^(−α)), the discrete L2 norm sometimes seemed to obey the same bound, at certain other times was somewhat better than the theoretical bound, and yet quite frequently behaved nearly as O(N^(−(α+1/2))). The relationship between these two norms carries some useful information regarding the nature of the error. Suppose, for example, that every component of an error vector e is exactly equal to the infinity norm ||e||∞. In this case,

||e||₂ = ((1/N) Σⱼ |eⱼ|²)^(1/2) = ||e||∞.

On the other hand, suppose the value of the error is carried in a single component, and all other components have zero value. Then, for this situation,

||e||₂ = ||e||∞ / N^(1/2).

This accounts for the extra half order that is frequently observed. It indicates that the error is concentrated in a very few components, rather than spread out over the whole vector. In this manner the ratio ||e||∞ / ||e||₂


can be used to indicate something about how the error is distributed. We make the observation that this holds for functions in one dimension, while, for two dimensions using an N²-vector, the same reasoning will show that the discrete L2 norm of a vector whose non-zero values are restricted to a very few components will be smaller than the infinity norm by as much as 1/N.

4.9 A Phantom Image

In order to illustrate the discussion of errors, several special data sets have been devised. In particular, the analytic Fourier transforms of certain images, and the analytic Fourier transforms of projections of these images, have been computed. The projections themselves have also been computed analytically. At any step of the algorithm, this analytic data is sampled and inserted in the cascade. In this way, the errors committed by each of the operators can be isolated. The phantom image is the characteristic function of an ellipse, that is, an object of unit density in the shape of an ellipse:

u(x, y) = { 1 for (x/a)² + (y/b)² ≤ 1,
          { 0 otherwise.

Recall from the discussion in Chapter 1 that the Radon transform of this object is

f(p, φ) = { (2ab/ā²) (ā² − p²)^(1/2) for |p| ≤ ā,   (4.33)
          { 0 otherwise,


where ā = (a² cos²φ + b² sin²φ)^(1/2). To evaluate the transform of this projection at ω = 0, we examine the series representation of J₁(āω)/(āω):

lim_{ω→0} (1/(āω)) Σ_{k=0}^∞ (−1)ᵏ (āω)^(2k+1) / (2^(2k+1) k! (k+1)!)
    = lim_{ω→0} ( 1/2 + Σ_{k=1}^∞ (−1)ᵏ (āω)^(2k) / (2^(2k+1) k! (k+1)!) )
    = 1/2.
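The projection formula (4.33) is easy to check numerically against a brute-force line integral of the ellipse indicator. A sketch in Python (the function names, parameter values, and discretization are illustrative, not from the thesis):

```python
import math

def ellipse_projection(p, phi, a=0.7, b=0.3):
    # closed form (4.33): (2ab/abar^2) * sqrt(abar^2 - p^2),
    # with abar^2 = a^2 cos^2(phi) + b^2 sin^2(phi)
    abar2 = (a * math.cos(phi))**2 + (b * math.sin(phi))**2
    return 2 * a * b / abar2 * math.sqrt(abar2 - p * p) if p * p < abar2 else 0.0

def line_integral(p, phi, a=0.7, b=0.3, n=100000):
    # midpoint-rule integral of the ellipse indicator along the line
    # x = p cos(phi) - t sin(phi), y = p sin(phi) + t cos(phi)
    c, s = math.cos(phi), math.sin(phi)
    dt = 2.0 / n          # the object lies inside the unit disk, so t in [-1, 1]
    total = 0.0
    for k in range(n):
        t = -1.0 + (k + 0.5) * dt
        x, y = p * c - t * s, p * s + t * c
        if (x / a)**2 + (y / b)**2 <= 1.0:
            total += dt
    return total

print(ellipse_projection(0.2, 0.5), line_integral(0.2, 0.5))
```

The two values agree to within the resolution of the quadrature, since the only error in the line integral comes from the two boundary crossings of the indicator.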


The Fourier transform of the projection in the direction φ is therefore

f̂(ω, φ) = { (2π)^(1/2) ab J₁(āω)/(āω) for ω ≠ 0,   (4.35)
           { (2π)^(1/2) ab/2 for ω = 0,

where ā = (a² cos²φ + b² sin²φ)^(1/2). It is an easy matter to determine what perfect interpolation would produce. Noting that ω_x² + ω_y² = ω² and φ = arctan(ω_y/ω_x), the two-dimensional Fourier transform of the image is

û(ω_x, ω_y) = ab J₁(σ)/σ,   (4.36)

where σ = (a²ω_x² + b²ω_y²)^(1/2) = āω.

4.10 Numerical Examples 1: Operators Applied to Exact Data

We begin with an example designed to ascertain the extent to which (4.32) describes the actual performance of the algorithm. A sequence of projection data sets was created for the same image, using the values N = 8, 16, 32, 64, 128, and 256. For each case the exact values of the projections, the Fourier transforms of the projections, the two-dimensional Fourier transform of the image, and the image itself were determined analytically. With all of these intermediate results known, the algorithm can be examined stage by stage, and the errors committed in each stage can be isolated and analysed. The image consisted of


the characteristic function of an ellipse centered at the origin, and for each N the projection data was created according to (4.33) with parameters a = 0.7 and b = 0.3. In each case the number of projections, M, was equal to N. The error of the forward transform E_F was determined by subtracting the values of f̂(ω, φ) given by the DFT from the exact values computed according to (4.35). To examine the error of interpolation, these exact values were then used as the input vector for the interpolation routine. E_I was determined by subtracting the output of the interpolation from the exact values of û(ω_x, ω_y), which were computed according to (4.36). Finally, the error vector E_F⁻¹ was determined by applying the two-dimensional IDFT operator to the exact values of û(ω_x, ω_y) and subtracting the result from the 'exact' image. The question of what actually constitutes the exact solution to the discretized image reconstruction problem is difficult to answer, and very little agreement exists in the literature. For this study, we define the exact solution as a set of N² square pixels of width 2/N, where the value of a pixel centered at a point (x, y) is the value, u(x, y), of the density function at that point. (The pixels are arranged on a grid such that the center of the (m, n) pixel is at the point (2m/N, 2n/N) for m, n = −N/2 + 1, ..., N/2.) The projection f(p, φ) has radial derivative ∂f/∂p = −(2ab/ā²) p/(ā² − p²)^(1/2),

which is unbounded as p → ā, the edge of the ellipse. By Theorem 3.5 it might be assumed that the Fourier coefficients decay like |k|⁻¹, but in fact they decay somewhat faster. This may be seen by applying a corollary:

Corollary 4.3 ([9]) Suppose a function f(x) is bounded and satisfies the Dirichlet conditions everywhere except at a finite number of points x_j where it becomes unbounded. If in the neighborhood of each x_j the function can be written in the form

f(x) = ψ(x)/(x − x_j)^ν,  0 < ν < 1,

where ψ(x) is monotonic to the left and right of x_j and at least one of its left- and right-hand limits does not vanish, then the Fourier coefficients of the function satisfy

|c_k| ≤ H/|k|^(1−ν),

where H is a constant independent of k.

Since the corollary applies to the derivative ∂f/∂p, integration by parts yields the rate of decay for the Fourier coefficients of f(p, φ), namely,

|c_k| ≤ H/|k|^(3/2).

This result can be extended to bound the Fourier transform as |f̂(ω, φ)| < C/|ω|^(3/2), in much the same way that Theorem 3.5 was extended to yield Theorem 4.3. Plots of the norms of the error vectors as functions of N are presented in Figure 4.5. For each of the three error vectors E_F, E_I, and E_F⁻¹, two plots


[Six plots: the infinity norm (left column) and discrete L2 norm (right column) of the error of the forward transform (top row), the interpolation error (middle row), and the IDFT error (bottom row), with reference slopes such as N^(−3/2) and N^(−1/2).]

Figure 4.5: Norms of the error vectors as functions of N.


are shown, displaying values of the infinity norm (left) and the discrete L2 norm (right). The plots consist of the natural logarithm of the ratio of the norm of the error for N points to the norm of the error for N = 8, plotted against the ratio N/8 on the abscissa. Since only powers of 2 were used for N, this means that, if the error norm behaves like O(N^(−a)), the graph would be a straight line with a slope of −a. Consider the plot for ||E_F||∞, in the upper left corner. Since the Fourier transform of each projection decays like O(|ω|^(−3/2)), we know from (4.15) that the infinity norm of the error should behave like O(N^(−3/2)). In fact, it may be seen that the error does behave nearly like O(N^(−3/2)), although some fluctuation is present in the curve. It was noted in the Comments that the discrete L2 norm will behave like the infinity norm if the error is spread out over the vector, while if the error is concentrated in a few pixels the discrete L2 norm typically diminishes with increasing N faster than the infinity norm. For these two-dimensional vectors, the L2 norm may be smaller by as much as a factor of 1/N. Examining the plot of the ratios of the discrete L2 norm of the DFT error ||E_F||₂ (upper right in the figure), it is apparent that this norm decays almost exactly as O(N^(−3/2)). Since the infinity and the discrete L2 norms behave similarly, we hypothesize that the error in the forward transform is spread out over much of the grid. That this is indeed the case is demonstrated in Figure 4.6. This figure displays the absolute value of the error E_F, plotted as a surface over the (ω, φ) plane. The error does appear to be spread over much of the plane. Consider the error of the inverse transform E_F⁻¹. The plot of the infinity


Figure 4.6: E_F, the error in the forward transform.

norm appears on the bottom left of Figure 4.5, while the plot of the discrete L2 norm is at the bottom right of the figure. The theory developed regarding the error of the inverse transform step applies only to functions whose Fourier transforms decay faster than N⁻², and so does not actually apply to this example. However, even if that theory did apply, it would predict that the infinity norm of the error need not decay, and in fact could grow as fast as N^(1/2). Examination of the figure reveals that the infinity norm of the inverse transform error does grow with increasing N, although at a rate closer to N^(1/16) than to N^(1/2). On the other hand, the discrete L2 norm decays like O(N^(−1/2)). From the comments above, this should be an indication that the error of this step is concentrated in a small number of pixels. This is the case, as may be seen by examining


Figure 4.7: E_F⁻¹, the error of the inverse transform.

Figure 4.7. This figure displays the absolute value of the error E_F⁻¹, plotted as a surface over the (x, y) plane. The error is concentrated around the rim of the ellipse, at the discontinuity. Noting that any cross section of the actual image is a one-dimensional characteristic function on an interval, with a jump discontinuity at the edge of the ellipse, the error E_F⁻¹ is probably the Gibbs effect for this function. Next we examine the interpolation error. The theory we have developed predicts that the infinity norm of E_I does not depend on N, but rather on Δω. The actual behavior of the norms of this error is displayed in Figure 4.8. Three different interpolation degrees are considered: p = 0, corresponding to nearest-neighbor interpolation; p = 1, corresponding to linear interpolation; and p = 3,


corresponding to cubic interpolation. Many experiments were conducted to examine the effect of higher degree interpolation in the angular direction, with the result that the net effect was minuscule. Therefore the various degrees of interpolation operators have been applied only in the radial direction, while nearest-neighbor interpolation is used in the angular direction. For nearest-neighbor interpolation (top), the infinity norm of the error is constant as N increases, indicating that the same maximum error is made for all values of N. Since the values of N are all powers of two, we note that all of the grid points used for any given value of N are used again for all larger values of N, so this result is not unexpected. Examining the behavior of the discrete L2 norm of this error (top right), it can be seen that this norm decays like N⁻¹, indicating that the error is concentrated in a few components. On the middle row of the figure are plotted the curves for linear interpolation (p = 1), while the bottom row gives the curves for cubic interpolation (p = 3). The curve for the infinity norm is on the left, and the curve for the discrete L2 norm is on the right. In both cases, the infinity norm decays the first few times that N is doubled, but then becomes constant. The L2 norm decays at a rate somewhere between N⁻¹ and N^(−3/2), again indicating a concentration of the error in a few components. Considering the behavior of all of the error components, it may be concluded that when the operators F₁, I_p^q, and F₂⁻¹ are applied to 'exact' data, the error analysis of the preceding sections does indeed provide a useful model of the behavior of the algorithm. Therefore, we proceed to examine the effectiveness of the entire


[Six plots: the infinity norm (left column) and discrete L2 norm (right column) of the interpolation error for p = 0 (top row), p = 1 (middle row), and p = 3 (bottom row).]

Figure 4.8: E_I, the error of interpolation as a function of N.


reconstruction algorithm.

4.10.1 Numerical Examples 2: Image Reconstruction Examples

A simple image may be used to demonstrate the performance of the reconstruction algorithm. The image consists of a thin elliptical rim of high density, filled with homogeneous material of moderate density. This can be thought of as a very simple model of the human head, where the high density material represents the skull and the moderate density material represents the grey matter. This problem was discretized using the values N = 8, 16, 32, 64, 128, and 256. Reconstruction was performed on each data set using nearest-neighbor, linear, and cubic interpolation (p = 0, 1, 3, respectively). The elliptical rim has unit density (this should be taken as a relative value, and not the standard density of water), and the parameters a, b are 0.8 and 0.6, respectively, for the outside boundary of the rim, and 0.65 and 0.5 for the inside boundary. The material inside the rim, representing the grey matter, has a density of 0.45. Figure 4.9 displays the sequence of 'exact' solutions, as defined in the previous section. It is easy to see that for very small N the exact solution bears very little resemblance to an ellipse. From the error analysis and the discussion of the numerical examples with exact data, it may be expected that although some improvement might be seen for the first few doublings of N (as the DFT and IDFT error is diminished), the overall error of the algorithm may be relatively constant for various values of N (as the interpolation error becomes dominant). This should be the case


Figure 4.9: The exact solution for the elliptical rim 'head' phantom. for all values of p, although we expect the magnitude of the error to be smaller for higher values of p. The discrete L2 norm of the error as a function of N is plotted in Figure 4.10. Curves are plotted for p = 0, p = 1, and p = 3. It may be seen that the algorithm does behave as predicted, in that, after an initial improvement, the difference between the reconstrucion and the exact solution is nearly constant with increasing N. The sequence of images produced for the various values of N are displayed in Figure 4.11, for p = 0, Figure 4.12, for p = 1, and Figure 4.13, for p = 3. It can be seen that the image looks better for larger values of N in that the individual pixels do not stand out and the image appears more sharply defined. Comparing the corresponding images in Figures 4.9, 4.11, 4.12, and 4.13, we can easily make some purely aesthetic judgements. 158


[Plot: the discrete L2 norm of the reconstruction error versus N, one curve per interpolation degree.]

Figure 4.10: The L2 norm of the reconstruction error for the head phantom, plotted as a function of N. Curves are plotted for p = 0, p = 1, and p = 3.

The reconstructions seem to improve both with increasing values of N and with increasing values of p, up to a point. The eye does not, however, notice a great deal of difference between the images with N = 128 and N = 256, or, for fixed N, between p = 1 and p = 3. The norms of the errors indicate that there is a difference, however. The trends of the error norms, as shown in Figure 4.10, indicate that the improvement with N is diminishing rapidly, and that the improvement with p seems to become almost constant. Since increasing the degree of interpolation by one increases the computational cost by O(N²), it is evident that some 'point of diminishing returns' will be reached quickly in attempting to improve the reconstruction by higher degree interpolation.
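The trade-off between interpolation degree and cost can be seen in a one-dimensional sketch: degree-p Lagrange interpolation from a grid of spacing h commits an O(h^(p+1)) error, so halving h reduces the error by about 2^(p+1), while raising p increases the work per target point. The test function and grid below are illustrative only:

```python
import math

def lagrange_interp(xs, ys, x):
    # classic Lagrange form over the given nodes
    total = 0.0
    for i, (xi, yi) in enumerate(zip(xs, ys)):
        w = 1.0
        for j, xj in enumerate(xs):
            if j != i:
                w *= (x - xj) / (xi - xj)
        total += yi * w
    return total

def max_interp_error(n, degree, f=math.sin):
    # sample f on a uniform grid over [0, pi]; interpolate at panel midpoints
    # from the degree+1 nearest nodes and record the worst error
    h = math.pi / n
    nodes = [k * h for k in range(n + 1)]
    vals = [f(x) for x in nodes]
    err = 0.0
    for k in range(n):
        x = (k + 0.5) * h
        i0 = min(max(k - degree // 2, 0), n - degree)
        err = max(err, abs(lagrange_interp(nodes[i0:i0 + degree + 1],
                                           vals[i0:i0 + degree + 1], x) - f(x)))
    return err

for p in (0, 1, 3):
    # halving h should shrink the error by about 2**(p+1)
    print(p, max_interp_error(64, p) / max_interp_error(32, p))
```

The observed ratios sit near 1/2, 1/4, and 1/16 for p = 0, 1, 3, matching the O(h^(p+1)) behavior, while the inner loop cost grows with p; this is the 'point of diminishing returns' in miniature.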


Figure 4.11: Photograph of the reconstructed image, using nearest-neighbor interpolation (p = 0) for various values of N. Top: N=8, N=16, N=32 (left to right). Bottom: N=64, N=128, N=256 (left to right).

Figure 4.12: Photograph of the reconstructed image, using linear interpolation (p = 1) for various values of N. Top: N=8, N=16, N=32 (left to right). Bottom: N=64, N=128, N=256 (left to right).


Figure 4.13: Photograph of the reconstructed image, using cubic interpolation (p = 3) for various values of N. Top: N=8, N=16, N=32 (left to right). Bottom: N=64, N=128, N=256 (left to right).

4.10.2 The Image Artifacts

Examination of Figures 4.11, 4.12, and 4.13 reveals that there appear to be systematic errors in the image reconstruction, taking the form of circular or elliptical arcs that are tangent to the elliptical rim of the image. These have been reported by many authors ([23], [35], [39], [40]), although no complete explanation has been given for their cause. That they are indeed systematic is obvious in Figure 4.14, in which the reconstruction for N = 256 and p = 0 has been displayed nine times in a 3 × 3 matrix. The artifacts appear as larger ellipses centered at the same location as the original, tangent to the original, proportional in size to the original, but with the roles of the major and minor axes reversed. Upon closer examination, it may be seen that there are other ellipses at still


Figure 4.14: The periodic nature of the artifacts, p = 0.

larger scales present as well, and in fact there appears to be a succession of artifacts of ever larger scales. Similar types of phenomena are quite common in other processes involving multi-dimensional Fourier transforms, such as seismic migration ([10], [38]). We hypothesize that they are caused in large part by 'leakage', or frequency domain aliasing. If this is so, the problem of eliminating them might be addressed by frequency grid refinement. We will consider this in a later section. It is evident from the Figures that increasing the degree of interpolation improves the images by smoothing them and eliminating some of the artifacts.

Figure 4.15: The periodic nature of the artifacts, p = 3.

However, as may be seen in Figure 4.15, which displays the replication of the image for N = 256 and p = 3, while some of the artifacts are eliminated, the largest amplitude artifact is not. Details of the effect of increasing the degree of interpolation may best be seen by examining cross-sections through the reconstruction. Figure 4.16 shows cross-sections taken horizontally through the center of the reconstruction, that is, along the x-axis of image space. The cross-sections are from the reconstructions with N = 256, for the values p = 0, p = 1, p = 3, and the exact solution. From the cross-section for p = 0, the failings of the nearest-neighbor interpolation are readily apparent. Inside the elliptical rim, the profile is characterized by a 'doming' effect over what should be a broad, featureless expanse. The rim itself should be squared off, but instead slopes steeply to the sides. The outside edge of the rim features an undershoot of nearly 50% of the amplitude of the rim. Finally, there is a good deal of high frequency oscillation throughout the cross section. We can describe the nature of some of these features in terms of the frequency components. The doming effect is a low frequency artifact, indicating that the first or second Fourier coefficients in both the ω_x and ω_y directions are erroneous. The overall oscillatory nature of the profile indicates that some high frequency components are erroneous as well. By examining the profiles for linear and cubic interpolation it may be seen that both alleviate the problems in the lowest frequencies, while the cubic interpolation corrects some mid-range frequencies that the linear interpolation does not,


[Four profile panels: p = 0, p = 1, p = 3, and the exact u(x, y).]

Figure 4.16: Profiles of the reconstructions: Center of the image.

as evidenced by the low-amplitude gentle undulation in the center of the linear reconstruction. One good test of a reconstruction algorithm is examining how well it computes the reconstruction outside the object, where the image function has zero value. Another set of profiles, from the same reconstructions as in the previous figure, is shown in Figure 4.17. For each interpolation degree, the profile shown runs parallel to the x-axis, but lies outside the ellipse entirely, as may be seen from the exact solution (bottom right). For the nearest-neighbor interpolation there is very high-amplitude error in the regions near the edges of the profile, as large as one-half the maximum amplitude of the image. Recalling the images in Figure 4.14, it may be noted that these regions correspond to the most noticeable artifacts, that


[Four profile panels: p = 0, p = 1, p = 3, and the exact u(x, y).]

Figure 4.17: Profiles of the reconstructions: Edge of the image.

seem to form bridges connecting the copies of the images.

4.11 Oscillation in f̂(ω, φ) due to Shifted Images

In a previous section, it was shown that the error in the reconstruction consists of three terms: the error due to the forward transform, that due to the interpolation, and the error of the inverse transform. Bounds on the errors due to both the forward and inverse transforms were shown to depend on some power of (1/N), so that in principle they can be reduced by taking N sufficiently large. Furthermore,


the bound on the error due to interpolation depends on some power of Δω, where the power is determined by the choice of the degree of interpolation. After the inverse transform is applied, however, the error of interpolation is independent of N. The numerical examples that followed demonstrate that these properties of the bounds reflect the behavior of the error. Apparently, some technique for reducing the error of interpolation is necessary. Two methods that might be applied are increasing the degree of interpolation and refining the frequency grid. Some arguments were presented earlier to indicate that increasing the degree of interpolation, while it might help reduce the interpolation error, has several drawbacks. There is also an argument to be made that, for certain images, refining the frequency grid is not only preferable to increasing the degree of interpolation, it is essential to the reconstruction. Consider an image that is the characteristic function of a small ellipse centered at the origin,

u(x, y) = { 1 for (x/0.1)² + (y/0.2)² ≤ 1,
          { 0 otherwise.

Figure 4.18 shows the projection f(p, φ) for φ = 0, and the Fourier transform of that projection, f̂(ω, 0), which is computed according to (4.35). Now consider μ(x, y) = u(x − a, y − b), the characteristic function of an identical ellipse centered at (a, b). By the shift property of the Radon transform (2.5), we have

[Ru(x − a, y − b)](p, φ) = f(p − a cos φ − b sin φ, φ).

Applying the shift property of the Fourier transform, namely,

F{h(x − α)} = e^(−iωα) F{h(x)},   (4.37)

the Fourier transforms of the projections of the shifted ellipse are

μ̂(ω, φ) = e^(−iω(a cos φ + b sin φ)) f̂(ω, φ).

The effect of shifting the image is to multiply the Fourier transform of each projection by a complex exponential, or to modulate the Fourier transform. As a result, projections of an object that is shifted away from the origin may have Fourier transforms that are much more oscillatory than those of an identical object located near the origin (assuming that the unshifted image does not already have an oscillatory transform). Figure 4.19 demonstrates this effect, showing the projection for φ = 0, as well as the Fourier transform of that projection, for the ellipse of Figure 4.18 shifted to the point (0.4, 0.4). Comparing Figures 4.18 and 4.19, the oscillation caused by the shift is readily apparent. It should be noted that the shift of an object does not necessarily make all of the projections oscillatory. If the ellipse from Figure 4.18 had been shifted only in the x-direction, the projection for φ = 0 would be highly oscillatory, while the projection for φ = π/2 would be identical to that of the unshifted ellipse. Oscillation is also introduced in the φ direction. Figure 4.20 displays, for fixed ω, the variation of f̂(ω, φ) for both the unshifted and the shifted ellipses. Two values of ω are shown, ω = π and ω = 30π. Some oscillation has been introduced as φ varies for the low frequency, and a great deal of oscillation for the high


[Two panels: the projection f(p, 0) and its transform f̂(ω, 0).]

Figure 4.18: The projection f(p, 0) and transform f̂(ω, 0) of a centered ellipse.

frequency. By comparison, Figure 4.19 demonstrates that shifting the image can introduce oscillations across the entire spectrum in the radial direction. However, the amplitudes of the high frequencies are much smaller than those of the low frequencies, as is typical for Fourier transforms of projections of images. We conclude from these experiments that if an image has features that are displaced far from the center of image space, computing the Fourier transforms of the projections results in an oscillatory function f̂(ω, φ). Much of the error in the reconstructed image is due to this oscillation in the function that must be interpolated. Further, the fact that rapid oscillation in the φ direction is introduced only for high frequencies, where the amplitude is small, provides another indication of why the choice of the degree of interpolation is more critical in the radial direction than in the angular direction. Therefore, refining the frequency grid appears to be the natural approach to counteracting the oscillation in f̂(ω, φ).
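The modulation described by (4.37) can be observed directly. The sketch below forms the analytic projection of the small ellipse, shifts it by s = a cos φ + b sin φ, and checks that the transform of the shifted projection equals e^(−iωs) times the unshifted transform; the 1/√(2π) transform convention and all numerical parameters are assumptions of this sketch:

```python
import cmath
import math

def proj(p, phi, a=0.1, b=0.2):
    # analytic projection (4.33) of the small centered ellipse
    abar2 = (a * math.cos(phi))**2 + (b * math.sin(phi))**2
    return 2 * a * b / abar2 * math.sqrt(abar2 - p * p) if p * p < abar2 else 0.0

def transform(g, w, n=8000):
    # midpoint-rule Fourier integral over [-1, 1]
    h = 2.0 / n
    acc = 0.0 + 0.0j
    for k in range(n):
        p = -1.0 + (k + 0.5) * h
        acc += g(p) * cmath.exp(-1j * w * p)
    return acc * h / math.sqrt(2.0 * math.pi)

phi, w, sx, sy = 0.0, 20.0, 0.4, 0.4          # shift the ellipse to (0.4, 0.4)
s = sx * math.cos(phi) + sy * math.sin(phi)   # projected shift for this angle
unshifted = transform(lambda p: proj(p, phi), w)
shifted = transform(lambda p: proj(p - s, phi), w)
print(abs(shifted - cmath.exp(-1j * w * s) * unshifted))
```

The residual is at the level of the quadrature error; the shifted spectrum differs from the unshifted one only by the modulating phase e^(−iωs), which is what makes it oscillate so rapidly in ω.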


[Figure 4.19: the projection f(p, 0) and its transform for the shifted ellipse. Figure 4.20: the variation of f̂(ω, φ) with φ for the unshifted and shifted ellipses.]

4.12 Improving the Reconstruction by Frequency Grid Refinement

Let f ∈ R^N × R^M. Define the extension operator X_SN^N : (R^N × R^M) → (R^SN × R^M) by g = X_SN^N f, where, for j = 0, 1, ..., M − 1, the j-th column of f is centered in the j-th column of g and the remaining entries of g are set to zero. Applied to the projection set f(p, φ), this operator has the effect of extending each projection by padding zeros on both ends of the projection, until the total number of points on the projection is SN. Note that the sample rate Δp is unchanged, and that the extent of each projection becomes [−S, S]. Performing a DFT along each projection is therefore an approximation of an integral over [−S, S], instead of over [−1, 1]. The reciprocity relation (3.4) gives the frequency sample rate for the DFT on the natural grid as

Δω = 2π/(N Δp).

Computing DFTs along each of the extended projections g yields a new approximation to f̂, with the new sample rate

Δω' = 2π/(SN Δp) = Δω/S.

Since SN Δω' = N Δω, replacing N by SN and Δω by Δω/S produces no change in the extent of the frequency grid, which is still [−N Δω/2, N Δω/2]. The standard reconstruction algorithm (4.5) can be modified to be

u = (1/(2π)^(1/2)) F₂⁻¹ I_p^q F₁ X_SN^N f.   (4.38)
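The mechanism behind (4.38) is that zero padding refines the frequency grid without disturbing the samples already present: the DFT of the padded sequence agrees with the DFT of the original at every S-th sample, with S − 1 new samples in between. A minimal sketch (for simplicity the zeros are appended at the tail; the operator X_SN^N pads both ends, which alters only the intermediate samples, and only by a linear phase):

```python
import cmath
import math

def dft(seq):
    # plain O(n^2) DFT, a small stand-in for the FFT used in the text
    n = len(seq)
    return [sum(seq[j] * cmath.exp(-2j * math.pi * j * k / n) for j in range(n))
            for k in range(n)]

N, S = 16, 4
f = [math.exp(-((j - N / 2) / 3.0)**2) for j in range(N)]   # smooth test 'projection'
coarse = dft(f)
fine = dft(f + [0.0] * (S * N - N))      # zero-pad to SN points, then transform
# the refined spectrum contains the coarse one: fine[S*k] == coarse[k]
print(max(abs(fine[S * k] - coarse[k]) for k in range(N)))
```

No information is created by the padding; the extra samples are simply a denser reading of the same underlying transform, which is exactly what the interpolation stage needs.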


The associated decrease in the frequency sample rate may be expected to reduce the error of interpolation. If the other phases of the algorithm are also helped (or unchanged), (4.38) should produce a better image. It is necessary to examine the error and operation count of the modified method, to see how the new algorithm works.

4.12.1 The Effect of X_SN^N on the Error in the Forward Transform

We begin by showing that the error committed in approximating the forward transform of the projection f(p, φ) by the DFT is unaffected by the application of the extension operator X_SN^N.

Theorem 4.8 Assume that, for all φ, the projections f(p, φ) have Fourier transforms that are bounded according to

|f̂(ω, φ)| ≤ C/|ω|^α   (4.39)

for some α > 1. Then if the transform f̂(ω, φ) is approximated by f̂ ≈ F₁ X_SN^N f, the error of this operation satisfies ||E_F||∞ = O(1/N^α), which is of the same order as the approximation without the extension operator.

Proof: Let ĝ be defined by ĝ = F₁ X_SN^N f, and consider the comparison with f̂ = F₁ f. The DFT of the vector g is

ĝ_kj = (1/(SN)) Σ_n f(p_n, φ_j) e^(−i2πnk/(SN)),


where only the values of f(p_n, φ_j) for which p_n lies in the original interval [−1, 1] contribute to the sum, since the extension is zero elsewhere.

4.12.2 The Effect of X_SN^N on the Interpolation Error

The error of the interpolation is given by the bound (4.30). It is important to note that nothing in the derivation of this error bound is dependent on the number of points on the polar grid. With the extension operator applied, in fact, the error expression can be modified to read

||E_I||∞ ≤ U [ (Δω/S)^(p+1)/(q+1)² + 2 (Δω/S)^(q+1) ],   (4.40)

which is the justification for employing the extension operator in the first place. If we assume that S ≥ π, then we are assured that Δω/S ≤ 1. This in turn means that if β is defined as the lesser of p and q, the error of interpolation in the modified algorithm (4.38) satisfies

||E_I||∞ = O((Δω/S)^(β+1)).

4.12.3 The Effect of X_SN^N on the Inverse DFT Error

In the derivation of the standard method, it was noted that the usual practice is to interpolate to a Cartesian frequency grid having the same spacing as the polar frequency grid in the radial direction, that is, λ = Δω. If this is done in an algorithm using the extension operator, then the interpolation must be computed for C₂S²N² points, where C₂ is the fraction of the Cartesian grid that is covered by the polar grid. The error of interpolation, which depends on Δω/S,


is still given by (4.40). This is one way the extension operator has been used prior to this study [23]. There is no reason, however, that the extension operator must affect the size of the Cartesian grid. For the algorithm we propose, the Cartesian grid retains the original spacing of λ in both the ω_x and ω_y directions, and the interpolation is still computed only for the C₂N² points (m, n) covered by the polar grid, all others being assigned a value of zero. The inverse transform in the modified algorithm is the same operation as the inverse transform in the standard method. Therefore, we conclude that the extension operator X_SN^N, as applied here, does not affect the error of the inverse transform ||E_F⁻¹||, which remains O(1/N^(α−2)).

4.12.4 The Effect of X_SN^N on the Total Error

The error of the modified algorithm

u ≈ (1/(2π)^(1/2)) F₂⁻¹ I_p^q F₁ X_SN^N f

is still bounded according to the expression (4.11), where the extension operator affects E_I and E_F in the manner given above. The interpolation operator I_p^q is unchanged from the standard method, and Theorem 4.8 shows that the order of the error E_F has not been altered. Since the IDFT is still computed using N² points on the Cartesian grid, it follows that ||F₂⁻¹ I_p^q E_F|| remains O(1/N^(α−1)).
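The factor-of-S^(β+1) reduction in the interpolation bound can be observed in isolation. For linear interpolation (β = 1), refining the sample spacing by S should reduce the maximum interpolation error by roughly S²; a sketch with an illustrative smooth test function:

```python
import math

def max_linear_interp_error(h, f=math.cos, lo=0.0, hi=math.pi):
    # worst-case error of piecewise-linear interpolation from samples spaced h,
    # probed at panel midpoints (near where the error peaks)
    n = int((hi - lo) / h)
    err = 0.0
    for k in range(n):
        x0 = lo + k * h
        approx = 0.5 * (f(x0) + f(x0 + h))
        err = max(err, abs(approx - f(x0 + 0.5 * h)))
    return err

S = 4
ratio = max_linear_interp_error(0.2) / max_linear_interp_error(0.2 / S)
print(ratio)   # close to S**2 = 16 for a degree-1 (beta = 1) scheme
```

This is the term of the total error that the extension operator attacks; the transform errors, as shown above, are left untouched.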

PAGE 188

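The effect of the extension operator on the frequency sampling can be illustrated numerically. The sketch below (the function name `extend` and the padding convention are illustrative assumptions, not the thesis's implementation) zero-pads an $N$-point projection to $SN$ points and confirms that the extended DFT samples the same underlying spectrum $S$ times as densely:

```python
import numpy as np

def extend(f, S):
    """Zero-pad an N-point projection to S*N points -- a sketch of the
    extension operator X_S^N.  Padding location is a convention choice;
    it changes phases, not the refinement of the frequency spacing."""
    N = len(f)
    g = np.zeros(S * N, dtype=float)
    g[:N] = f
    return g

N, S = 64, 4
f = np.exp(-np.linspace(-4, 4, N) ** 2)   # a smooth test projection
F = np.fft.fft(f)
G = np.fft.fft(extend(f, S))

# The extended transform samples the spectrum S times as densely:
# every S-th sample of G reproduces F exactly.
assert np.allclose(G[::S], F)
```

No computation is performed by the padding itself; only the subsequent FFT becomes longer, which is exactly the cost trade-off analyzed in Section 4.12.5.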
The error of interpolation is passed through the inverse DFT operator $\mathcal F_2^{-1}$, which has an operator norm of $N$. Combining this with (4.40), it appears that the error $\|\mathcal F_2^{-1}E_I\|$ depends on $N(\Delta\omega/S)^{\beta+1}$. Assuming that the number of projections $M$ is related to the number of points per projection $N$ by (4.29), the improved algorithm has a total error whose behavior depends on the number of samples $N$, on the lesser of the interpolation degrees $\beta$, and on the extension factor $S$:

$$\|u - \tilde u\| = O\!\left(\frac{1}{N^{\alpha-1}}\right) + O\!\left(N\left(\frac{\Delta\omega}{S}\right)^{\beta+1}\right) + O\!\left(\frac{1}{N^{\alpha-2}}\right). \qquad (4.41)$$

Two observations should be made here. First, if this error is compared with the error of the standard method for the same value of $N$, the error bounds due to the DFT and the IDFT are unchanged, while the bound on interpolation is diminished by a factor of $1/S^{\beta+1}$. The extension method can be expected to produce better images, as we had hoped. Second, (4.41) provides a mechanism by which the algorithm converges formally to the exact solution. Letting $N \to \infty$ will eliminate the first and last terms of (4.41). The second term becomes unbounded as $N \to \infty$ unless the extension factor satisfies

$$S > N^{\frac{1}{\beta+1}},$$

and, to guarantee convergence, $S$ must become infinite faster than $N^{1/(\beta+1)}$. It may be seen that this result parallels the discussion in Chapter 3 about how the DFT is related to the continuous transform. In both cases, passing from the discrete to the continuous requires a double limiting process in which the number
of samples of a function on an interval must become dense and the interval itself must become infinite.

4.12.5 The Effect of $X_S^N$ on the Operation Count

The extension operator $X_S^N$ maps the $N$-point projection to an $SN$-point projection. No computation is involved in this operation. The effect on the computational complexity is felt in the DFT computed on each of the $M$ projections. This is done using an FFT, so that each transform requires $O(N\log_2 N)$ operations in the standard method. After extension, each forward transform requires $O(SN\log_2 SN)$ operations. We note in passing that recent work defining an operator called the fast fractional Fourier transform [2] may lead to a significant reduction in the cost of computing the FFT of a sequence that has been extended with zero padding, but that method has yet to be fully developed and implemented. The interpolation step requires $O((p+1)(q+1))$ operations for each target point. Use of the extension operator has the effect of refining the grid from which the interpolation is computed, but has no effect on the number of points on the target grid. The extension therefore has no effect on the complexity of interpolation. Similarly, the inverse transforms are computed using the data on the target grid, so that the extension has no effect on the complexity of this operation. Using the new operation count on the forward transforms, the cost of the new
algorithm (4.38) is

$$C_1\,SNM\log_2(SN) + C_2\,N^2(p+1)(q+1) + C_3\,N^2\log_2 N, \qquad (4.42)$$

where $C_1$ and $C_3$ depend on the FFT algorithms selected, and $C_2$ is the percentage of the $N^2$ points on the Cartesian grid lying inside a circle of radius $N\Delta\omega/2$. If the number of projections, $M$, is constrained to satisfy the condition $M = N\pi/2$, then the operation count is

$$\frac{\pi}{2}C_1\,SN^2\log_2(SN) + C_2\,N^2(p+1)(q+1) + C_3\,N^2\log_2 N. \qquad (4.43)$$

A few simple examples can serve to indicate the relative cost and potential benefits of the extension method. Table 4.1 gives the ratio of the cost of the method using extension to the cost of the standard method, for several typical values of $N$, $p$, and $q$. Also included is the ratio of the interpolation error bounds of the two methods. It may be seen from the table that the ratio of the total cost of the algorithm with extension factor $S$ to the cost of the standard algorithm grows more slowly than $S$ in all cases. Furthermore, the growth rate of the complexity ratio as $S$ increases is slower for large values of $N$ than for small values, and much slower for higher degrees of interpolation. The growth rate of the complexity ratio for bi-cubic interpolation is just over half that for nearest-neighbor interpolation. In all cases the error of interpolation could be diminished by as much as $S^{-2}$. If, as is often the case, the error of interpolation is much larger than the error of the forward or inverse transforms, then the potential benefit of
the extension method seems to easily justify the cost.

N    p  q  S   cost ratio  error ratio  |  N    p  q  S   cost ratio  error ratio
32   0  0  2   1.82        1/4          |  128  0  0  2   1.76        1/4
32   0  0  4   3.71        1/16         |  128  0  0  4   3.46        1/16
32   0  0  8   7.94        1/64         |  128  0  0  8   7.20        1/64
32   1  1  2   1.74        1/4          |  128  1  1  2   1.71        1/4
32   1  1  4   3.43        1/16         |  128  1  1  4   3.28        1/16
32   1  1  8   7.24        1/64         |  128  1  1  8   6.73        1/64
32   3  3  2   1.52        1/4          |  128  3  3  2   1.54        1/4
32   3  3  4   2.73        1/16         |  128  3  3  4   2.75        1/16
32   3  3  8   5.44        1/64         |  128  3  3  8   5.41        1/64

Table 4.1: The ratio of the cost with extension factor $S$ to the cost of the unextended method, together with the ratio of the interpolation error bounds of the two methods, for various combinations of $N$, $p$, and $q$.

4.13 Numerical Examples 3: The Effect of $X_S^N$

We demonstrate the effects of the extension operator with two images. The first image we denote $u(x,y)$. It consists of a set of concentric hollow ellipses arranged in a stairstep fashion, with 'density' values ranging from $-2$ to $+3$. Since $u(x,y)$ has regions of negative density, it is not considered a model of a physical object. The second image, which we will call $v(x,y)$, consists of a thin hollow cylinder of high-density material containing a small square of high-density matter. This could be thought of as modeling the human head with a foreign object, although this identification is of no import here. Figure 4.21 is a photograph showing three-dimensional plots of the Radon transforms $[\mathcal R u](p,\phi)$ and $[\mathcal R v](p,\phi)$ in the upper left and upper right, respectively, as well as the exact
Figure 4.21: Photograph of the functions u(x,y) and v(x,y) and their Radon transforms.

solutions $u(x,y)$ and $v(x,y)$, in the lower left and lower right, respectively. The images were reconstructed using $N = M = 128$, and reconstructions were performed using the extension parameters $S = 1, 2, 4, 8$, and $16$. Three interpolation degrees were used, $p = 0$, $p = 1$, and $p = 3$, applied in the radial direction. The results are displayed in the photographs in Figures 4.22, 4.23, and 4.24, showing the cases $p = 0$, $p = 1$, and $p = 3$, respectively. Each figure consists of two photographs, the top photograph showing the reconstructions of $u(x,y)$ and the bottom photograph showing the reconstructions of $v(x,y)$. Within each photograph, the displays are (clockwise from the top left) the reconstructions using extension factors $S = 1, 2, 4, 8, 16$, and the exact image. Note that the artifacts
are readily apparent for $S = 1$, where no extension has been used. For $u(x,y)$, using nearest-neighbor interpolation, the improvement achieved by a single extension ($S = 2$) is quite apparent to the eye, and the improvement brought about by extending to $S = 4$ can also be seen, although it is not so obvious. Improvement due to larger extension factors can be detected by examining the error norms, but only becomes noticeable to the eye when the images are displayed in color (in which case the improvement is dramatic). For $v(x,y)$, the improvement is more noticeable to the eye for all of the extensions. For the linear interpolation, the improvement in $u(x,y)$ is again noticeable for the first extension. To the eye it is barely visible for the second extension, and cannot be detected in a black-and-white display for larger extension factors. For $v(x,y)$ the improvement is once again most noticeable for the first extension, but can still be seen for the next higher factor of $S$ as well. With cubic interpolation, for both functions, the improvement is again visible upon extending the grid once, but is not seen for further extensions. The effect of the extensions can be observed in a somewhat more quantitative manner by examining Figures 4.25 and 4.26. The former shows the absolute value of the error in the reconstruction of $u(x,y)$ along the line $y = 0$ (for the nearest-neighbor interpolation). Clockwise from the top left, the profiles represent the error in reconstructing with $S = 1$, $S = 2$, $S = 4$, and $S = 8$. The profiles show that the error falls off significantly as the extension factor increases to $S = 8$. The plot for $S = 16$, not shown, does not differ significantly from that for $S = 8$. These plots may be compared with those in Figure 4.26,
Figure 4.22: Top: The reconstruction of u(x,y) with p = 0. Bottom: The reconstruction of v(x,y) with p = 0.
Figure 4.23: Top: The reconstruction of u(x,y) with p = 1. Bottom: The reconstruction of v(x,y) with p = 1.
Figure 4.24: Top: The reconstruction of u(x,y) with p = 3. Bottom: The reconstruction of v(x,y) with p = 3.
Figure 4.25: Error of the reconstruction for several values of S with p = 0. (Panels, clockwise from the top left: S = 1, 2, 4, 8; each panel plots absolute error against image space, x.)

which carry the same information for the linear interpolation experiments. It may be seen in that figure that the reduction of error is much less noticeable, mainly because the reconstruction without extension ($S = 1$) is much better with linear interpolation than with nearest-neighbor interpolation. Plots are not shown for the cubic interpolation, where the first extension ($S = 2$) produces an error as small as is seen in any of the experiments. Similar results were obtained for the reconstruction of $v(x,y)$. The discrete $L^2$ norms of the reconstruction error are given in Table 4.2 for all the combinations of $p$ and $S$ used in the reconstruction of $u(x,y)$ and $v(x,y)$. Examination of the table confirms the qualitative results we have discussed. For example, reducing the error to the value 0.1052 in the reconstruction of $u(x,y)$ required only $S = 2$ for the cubic interpolation
Figure 4.26: Error of the reconstruction for several values of S with p = 1. (Panels, clockwise from the top left: S = 1, 2, 4, 8; each panel plots absolute error against image space, x.)

u(x,y)   S=1        S=2        S=4        S=8        S=16
p=0      0.669438   0.243080   0.163689   0.122005   0.110893
p=1      0.309587   0.126438   0.106160   0.105172   0.105232
p=3      0.196876   0.105231   0.105105   0.105247   0.105283

v(x,y)   S=1        S=2        S=4        S=8        S=16
p=0      0.352853   0.240861   0.186228   0.168548   0.165984
p=1      0.291328   0.174553   0.164073   0.163905   0.164087
p=3      0.264976   0.164203   0.163998   0.164132   0.164156

Table 4.2: Error norms for the extension factors.
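The qualitative reading of Table 4.2 can be checked directly from its entries. The small script below copies the $u(x,y)$ rows for $p = 0$ and $p = 3$ from the table and verifies the plateau behavior discussed in the text:

```python
# Error norms for u(x,y), copied from Table 4.2, keyed by S.
u_p3 = {1: 0.196876, 2: 0.105231, 4: 0.105105, 8: 0.105247, 16: 0.105283}
u_p0 = {1: 0.669438, 2: 0.243080, 4: 0.163689, 8: 0.122005, 16: 0.110893}

# Cubic interpolation gains essentially nothing beyond S = 2 ...
assert abs(u_p3[16] - u_p3[2]) / u_p3[2] < 0.01

# ... while nearest-neighbor interpolation is still improving at
# every doubling of S, even out to S = 16.
assert all(u_p0[2 * s] < u_p0[s] for s in (1, 2, 4, 8))
```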
while requiring $S = 8$ for the linear interpolation (although $S = 4$ gives a result nearly as good). The nearest-neighbor interpolation had not achieved this much reduction by $S = 16$. Similar results were obtained for $v(x,y)$, as may be seen in the table. Consider the cost of the standard method, which is

$$C_1\,N^2\log_2 N + C_2\,N^2(p+1)(q+1) + C_3\,N^2\log_2 N, \qquad (4.44)$$

where $p = q = 0$. With these parameters, this is the least expensive reconstruction method for the problem with $M = N$. Suppose that the problem behaves as in the example. Then to reduce the error to a value near the minimum, the parameters must be altered to one of the combinations $p = 0$, $S = 16$; $p = 1$, $S = 4$; or $p = 3$, $S = 2$. Changing the parameters to $p = 0$, $S = 16$ induces an increase in the operation count of $N^2(15C_1\log_2 N + 64C_1)$ over the cost given in (4.44). Changing to the parameters $p = 1$, $S = 4$ causes a cost increase of $N^2(3C_1\log_2 N + 8C_1 + C_2)$, while changing to $p = 3$, $S = 2$ entails $N^2(C_1\log_2 N + 2C_1 + 2C_2)$ extra operations. Thus the most efficient means of minimizing the error of the reconstruction (in this example) is to use cubic interpolation and $S = 2$. However, there is no guarantee that this behavior is universal, although a similar result was obtained in every experiment in this study. Moreover, there are instances where operation count is not the best measure of the expense of the method. For example, the FFT can be vectorized extremely well [46], but the interpolation step is much more difficult to vectorize. If the computation is to be mounted on a vector
computer, it may be more efficient to use more extensions and lower interpolation degrees. An open question of some interest is whether there exist images for which an extension factor of $S$ actually produces an improvement of $1/S^2$ in the error. In any case, it is apparent that choosing an optimal combination of reconstruction parameters for a given image is not a trivial task, and can depend upon the computational equipment and implementational details as well as upon the nature of the data set.
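The cost comparisons of this section can be reproduced with a small model of the operation count. The sketch below assumes the form $C_1 S N^2\log_2(SN) + C_2 N^2(p+1)(q+1) + C_3 N^2\log_2 N$ for (4.43), with unit constants and the factor $\pi/2$ absorbed into $C_1$ — both illustrative assumptions:

```python
import math

def cost(N, p, q, S, C1=1.0, C2=1.0, C3=1.0):
    """Operation-count model of (4.43): forward FFTs on the S-times
    extended projections, bi-(p,q) interpolation, and the inverse 2-D
    FFT.  C1..C3 are implementation dependent; unit values are an
    illustrative assumption."""
    return (C1 * S * N**2 * math.log2(S * N)
            + C2 * N**2 * (p + 1) * (q + 1)
            + C3 * N**2 * math.log2(N))

# The cost ratio grows more slowly than S, as Table 4.1 indicates.
for S in (2, 4, 8):
    assert cost(32, 0, 0, S) / cost(32, 0, 0, 1) < S

# The stated increment for p = 0, S = 16 over the standard method:
# N^2 (15 C1 log2 N + 64 C1).
N = 128
increase = cost(N, 0, 0, 16) - cost(N, 0, 0, 1)
assert math.isclose(increase, N**2 * (15 * math.log2(N) + 64))
```

The second assertion checks the arithmetic behind the parameter-change comparison: with the forward-transform term $C_1SN^2\log_2(SN)$, going from $S=1$ to $S=16$ at $p=q=0$ adds exactly $N^2(15C_1\log_2 N + 64C_1)$ operations.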
CHAPTER 5
ANTERPOLATED IMAGE RECONSTRUCTION

In this chapter we develop a two-dimensional reconstruction algorithm based on the anterpolated DFT. Since anterpolation is derived from Lagrangian interpolation, it is to be expected that this algorithm will have many features in common with the algorithms of the previous chapter. Indeed, this is the case. However, there are also some important distinctions to be made. One of the most important distinctions is that the inverse Fourier transform integral is approximated on the polar grid.

5.1 The Central Slice Theorem in Polar Coordinates

We begin by examining the Central Slice theorem in polar coordinates. Let the image density function be $u(r,\theta)$. Then the Radon transform may be found by making the substitutions $x = r\cos\theta$, $y = r\sin\theta$, and $dx\,dy = |r|\,dr\,d\theta$, and using the cosine addition rule in $[\mathcal R u(x,y)](p,\phi)$, so that the argument of the delta function in the defining integral becomes $p - r\cos(\theta-\phi)$.

The one-dimensional Fourier transform of the Radon transform with respect to $p$ (taken in the direction specified by $\phi$) collapses the delta function, leaving a double integral of $u(r,\theta)\,e^{-i\omega r\cos(\theta-\phi)}$ against $|r|\,dr\,d\theta/\sqrt{2\pi}$. This last double integral is $\sqrt{2\pi}$ times the two-dimensional Fourier transform of $u(r,\theta)$, expressed in the polar variables $\omega$ (the radial frequency) and $\phi$. Inverting the two-dimensional transform in polar coordinates then gives

$$u(r,\theta) = \frac{1}{(2\pi)^{3/2}}\int_0^{\pi}\!\!\int_{-\infty}^{\infty}\hat f(\omega,\phi)\,e^{i\omega r\cos(\theta-\phi)}\,|\omega|\,d\omega\,d\phi. \qquad (5.2)$$
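A discrete analogue of the Central Slice theorem is easy to verify numerically: summing an image along one axis (a zero-angle "projection") and taking the 1-D DFT of the result reproduces the corresponding central slice of the image's 2-D DFT exactly. The sketch below assumes NumPy's DFT conventions:

```python
import numpy as np

rng = np.random.default_rng(0)
u = rng.standard_normal((32, 32))          # an arbitrary test image

proj = u.sum(axis=0)                       # discrete line "integrals"
slice_hat = np.fft.fft2(u)[0, :]           # central slice of the 2-D DFT

# Exchanging the order of summation shows the two agree exactly.
assert np.allclose(np.fft.fft(proj), slice_hat)
```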
As in Chapter 4, it is assumed that $f = \mathcal R u$, the Radon transform of $u$, and that a sampling of the projections is available, say $f(k\Delta p, j\Delta\phi)$. Approximating the integral (5.2) by an interpolatory cubature rule on the polar frequency grid gives an approximation of the form

$$u(x,y) \approx \sum_{j=0}^{M-1}\,\sum_{k=-N/2+1}^{N/2} \Psi(k)\,\hat f(k\Delta\omega, j\Delta\phi)\, e^{ik\Delta\omega\,(x\cos j\Delta\phi\, +\, y\sin j\Delta\phi)}, \qquad (5.3)$$

where the $\Psi(k)$ are the cubature weights.
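A direct (and expensive) evaluation of a cubature sum of the form (5.3) can be sketched as follows; the array layout and the function name are illustrative assumptions, and the $O(NM)$ work per image point is precisely what the anterpolated DFT is designed to avoid:

```python
import numpy as np

def polar_sum(Fhat, Psi, dw, dphi, x, y):
    """Direct evaluation of a sum like (5.3) at one image point (x, y).
    Fhat[ki, j] holds f-hat(k*dw, j*dphi) for k = -N/2+1 .. N/2, and
    Psi[ki] are the radial cubature weights."""
    Nk, M = Fhat.shape
    total = 0j
    for ki, k in enumerate(range(-Nk // 2 + 1, Nk // 2 + 1)):
        for j in range(M):
            phi = j * dphi
            arg = k * dw * (x * np.cos(phi) + y * np.sin(phi))
            total += Psi[ki] * Fhat[ki, j] * np.exp(1j * arg)
    return total

rng = np.random.default_rng(3)
N, M = 8, 6
Fhat = rng.standard_normal((N, M)) + 1j * rng.standard_normal((N, M))
Psi = rng.uniform(0.1, 1.0, N)
dw, dphi = np.pi, np.pi / M
x, y = 0.3, -0.5

# Cross-check against a vectorized evaluation of the same sum.
ks = np.arange(-N // 2 + 1, N // 2 + 1)
phis = dphi * np.arange(M)
kernel = np.exp(1j * np.outer(ks * dw, x * np.cos(phis) + y * np.sin(phis)))
vec = np.sum(Psi[:, None] * Fhat * kernel)
assert np.isclose(polar_sum(Fhat, Psi, dw, dphi, x, y), vec)
```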
Note that the cubature weights $\Psi(k)$ are independent of $j$; that is, the cubature weights do not depend on the azimuth $\phi$. The reason for this is that the cubature formula (5.3) is interpolatory, derived by approximating the integrand by a product of linear polynomials in $\omega$ and $\phi$ and integrating the approximation exactly. This formula reduces to the trapezoidal rule in $\phi$ which, when applied to a periodic function, has equal integration weights at every sample point. The frequency data on the polar grid appear on an irregular grid relative to the Cartesian frequency variables $\omega_x$ and $\omega_y$. With this in mind, we will convert the sum in (5.3) into a formal DFT in two dimensions, which will then be evaluated by means of a two-dimensional anterpolated DFT.

5.2 The Two-Dimensional Anterpolated DFT

The task is complicated somewhat by the fact that the frequency grid is really two grids indexed by $k$ and $j$, one for the $\omega_x$ component and one for the $\omega_y$ component. That is, the indices $k$ and $j$ are both required to specify the frequency in both the $\omega_x$ and $\omega_y$ directions. Therefore, the irregular frequency grid can be thought of as the combination of separate grids for $\omega_x$ and $\omega_y$, each indexed by $k$ and $j$. These grids can be indicated with the notation $\omega^x_{kj} = k\Delta\omega\cos(j\Delta\phi)$ and $\omega^y_{kj} = k\Delta\omega\sin(j\Delta\phi)$. Figure 5.1 shows the relationship between these grids.
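The two irregular grids of Figure 5.1 can be generated directly from their definitions; the choice $\Delta\omega = \pi$, corresponding to an image supported on $[-1,1]$, is an illustrative assumption:

```python
import numpy as np

# w_x[k, j] = k*dw*cos(j*dphi),  w_y[k, j] = k*dw*sin(j*dphi)
N, M = 8, 12
dw, dphi = np.pi, np.pi / M
k = np.arange(-N // 2 + 1, N // 2 + 1)
phi = dphi * np.arange(M)

wx = dw * np.outer(k, np.cos(phi))   # shape (N, M): the omega_x grid
wy = dw * np.outer(k, np.sin(phi))   # shape (N, M): the omega_y grid

# Every polar gridpoint lies at radius |k|*dw from the origin, so the
# two irregular grids together recover the polar geometry.
radii = np.hypot(wx, wy)
assert np.allclose(radii, dw * np.abs(k)[:, None])
```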
Figure 5.1: The polar grid may be viewed as a combination of two irregular frequency grids. Frequency values for $\omega^x_{kj}$ are shown on the one-dimensional grid at the bottom, while values of $\omega^y_{kj}$ are taken from the one-dimensional grid at the right.

Letting $F_{kj} = \Psi(k)\hat f(k\Delta\omega, j\Delta\phi)$, the cubature rule (5.3) can be written as

$$u(x,y) \approx \sum_{j=0}^{M-1}\,\sum_{k=-N/2+1}^{N/2} F_{kj}\, e^{i(x\omega^x_{kj} + y\omega^y_{kj})}, \qquad (5.4)$$

which is just a formal DFT in two dimensions. Any values of $x$ and $y$ could be used in (5.3). However, assume that $N^2$ values of $u(x,y)$ are desired, and that they are located at $x_r = r\Delta x$ and $y_t = t\Delta y$, where $\Delta x = \Delta y = \Delta p = 2/N$. The integer indices $r$ and $t$ vary between $-N/2+1$ and $N/2$; therefore, $|x_r| \le 1$ and $|y_t| \le 1$ for all $r$, $t$. With these conventions, (5.4) can be written as $\vec u = W\vec F$,
where $\vec u$ is the $N^2\times 1$ column vector containing the image $u(x_r,y_t)$, $\vec F$ is the $NM\times 1$ column vector containing $\Psi(k)\hat f(k\Delta\omega, j\Delta\phi)$, and $W$ is the $N^2\times NM$ matrix containing $e^{i(x_r\omega^x_{kj}+y_t\omega^y_{kj})}$, the kernel of the transform. A convention regarding storage order for these vectors and matrices will be given shortly. We now proceed in a fashion entirely analogous to the one-dimensional derivation. Note that, while the integral in (5.2) runs from $-\infty$ to $\infty$ in the radial variable $\omega$, the cubature is performed over a finite range, $|k| \le N/2$, in the radial direction. Letting $\omega_M$ represent $N\Delta\omega/2$, the largest frequency (in absolute value) used in the cubature, the approximation covers the interval $[-\omega_M,\omega_M]$. Divide the interval $I = [-\omega_M,\omega_M]$ into $N_s$ subintervals of equal width, with spacing $h = 2\omega_M/N_s$. Then the region $I\times I$ is covered by an auxiliary grid whose points are regularly spaced and denoted $\{X_l, Y_m\}$, where $X_l = lh$ and $Y_m = mh$ for $l,m = -N_s/2,\dots,N_s/2$. As in the one-dimensional case, $p$-degree Lagrangian interpolation is applied to the kernel in two dimensions. Some indexing arrays will be required. Let $\chi_x(k,j,n)$ hold the $X$-index of the $n$th interpolation neighbor of $\omega^x_{kj}$, and let $\chi_y(k,j,n)$ hold the $Y$-index of the $n$th neighbor of $\omega^y_{kj}$. Then, for each $k$ and $j$, the $p+1$ nearest neighbors of the irregular gridpoint $\omega^x_{kj}$ on the regular grid, in the $\omega_x$-direction, are the gridpoints $X_{\chi_x(k,j,0)}, X_{\chi_x(k,j,1)},\dots,X_{\chi_x(k,j,p)}$. Similarly, the regular gridpoints $Y_{\chi_y(k,j,0)}, Y_{\chi_y(k,j,1)},\dots,Y_{\chi_y(k,j,p)}$ denote the $p+1$ nearest neighbors, in the $\omega_y$-direction, of the irregular gridpoint $\omega^y_{kj}$. The Lagrangian
interpolation weights for interpolating a function $\hat g(\omega^x_{kj},\omega^y_{kj})$ from the values $\hat g(X_l,Y_m)$ are given by

$$w^x_{kjn} = \prod_{\substack{i=0\\ i\ne n}}^{p}\frac{\omega^x_{kj} - X_{\chi_x(k,j,i)}}{X_{\chi_x(k,j,n)} - X_{\chi_x(k,j,i)}} \quad\text{and}\quad w^y_{kjm} = \prod_{\substack{i=0\\ i\ne m}}^{p}\frac{\omega^y_{kj} - Y_{\chi_y(k,j,i)}}{Y_{\chi_y(k,j,m)} - Y_{\chi_y(k,j,i)}}, \qquad (5.5)$$

and the bi-$p$-degree interpolation itself is

$$\hat g(\omega^x_{kj},\omega^y_{kj}) \approx \sum_{m=0}^{p}\sum_{n=0}^{p} w^x_{kjn}\,w^y_{kjm}\,\hat g\!\left(X_{\chi_x(k,j,n)}, Y_{\chi_y(k,j,m)}\right). \qquad (5.6)$$

Let $\vec g$ designate an $N_s^2$-point column vector consisting of the values $\hat g(X_l,Y_m)$ on the regular grid. We order this vector so that the second index varies the fastest, although the algorithm could be designed with either convention. Similarly, let $\vec g^{\,P}$ be the column vector of length $NM$ containing the interpolated values $\hat g(\omega^x_{kj},\omega^y_{kj})$. Then the bi-$p$-degree interpolation can be written componentwise as

$$\vec g^{\,P}(Mk+j) = \sum_{m=0}^{p}\sum_{n=0}^{p} w^x_{kjn}\,w^y_{kjm}\,\hat g\!\left(X_{\chi_x(k,j,n)}, Y_{\chi_y(k,j,m)}\right),$$

or more conveniently as $\vec g^{\,P} = [\mathcal I]\,\vec g$, where $[\mathcal I]$ is the $NM\times N_s^2$ interpolation matrix mapping a function on the auxiliary grid to the polar grid. A note should be made here regarding the notation. The symbols $x$ and $y$ are being used as subscripts or superscripts on several symbols. In each case, the symbol identifies a direction on a grid in Fourier space. That is, $\omega^x_{kj}$ is a frequency component in the $\omega_x$-direction on the irregular frequency
grid, while $w^x_{kjn}$ is used to identify a Lagrangian weight for interpolating from the regular to the irregular frequency grid in the $\omega_x$-direction, and $\chi_x(k,j,n)$ is an index array giving the indices of interpolation neighbors in the $\omega_x$-direction. However, $x$ and $y$ are also used as the independent variables of the image function, $u(x,y)$. To avoid confusion, when the independent variables of the image are the intended usage, the subscripts $r$ and $t$ will be used on $x$ and $y$, respectively, to serve as reminders that $x_r$ and $y_t$ are points on the image grid. Lagrangian interpolation will be used on the kernel of (5.4), interpolating approximations to $e^{i(x\omega^x_{kj}+y\omega^y_{kj})}$ from values of $e^{i(xX_l+yY_m)}$. This means that (5.4) is approximated according to

$$u(x,y) \approx \sum_{j=0}^{M-1}\,\sum_{k=-N/2+1}^{N/2} F_{kj}\, e(\omega^x_{kj},\omega^y_{kj},x,y), \qquad (5.7)$$

where

$$e(\omega^x_{kj},\omega^y_{kj},x,y) = \sum_{m=0}^{p}\sum_{n=0}^{p} w^x_{kjn}\,w^y_{kjm}\, e^{i\left(xX_{\chi_x(k,j,n)} + yY_{\chi_y(k,j,m)}\right)}. \qquad (5.8)$$

Thus, letting $\vec e_{x_ry_t}$ designate the $N_s^2$-point column vector containing the values $e^{i(x_rX_l + y_tY_m)}$ on the regular grid, for $-N_s/2+1 \le l,m \le N_s/2$, and letting $\vec e^{\,P}_{x_ry_t}$ designate the $NM$-length column vector containing the interpolated values $e(\omega^x_{kj},\omega^y_{kj},x_r,y_t)$ on the irregular grid, the interpolation is $\vec e^{\,P}_{x_ry_t} = [\mathcal I]\,\vec e_{x_ry_t}$. As in the one-dimensional ADFT, it is the row vector $(\vec e^{\,P}_{x_ry_t})^T$ that is the row
corresponding to $(x_r,y_t)$ in the kernel of (5.7), which is given by

$$(\vec e^{\,P}_{x_ry_t})^T = (\vec e_{x_ry_t})^T[\mathcal I]^T. \qquad (5.9)$$

Using $\widehat W$ to designate the $N^2\times NM$ matrix of approximate kernel values with rows $(\vec e^{\,P}_{x_ry_t})^T$, and $\widetilde W$ to denote the $N^2\times N_s^2$ matrix of kernel values on the auxiliary grid with rows $(\vec e_{x_ry_t})^T$, we find that, by (5.9), $\widehat W = \widetilde W[\mathcal I]^T$. Designating the $N^2\times 1$ approximate reconstructed image as $\tilde u$, we can write the approximate reconstruction (5.7) as

$$\tilde u = \widehat W\vec F = \widetilde W\widetilde F, \qquad (5.10)$$

where $\widetilde F = [\mathcal I]^T\vec F$ is the anterpolation of $\vec F$. For an arbitrary two-dimensional vector $F_{kj}$, equation (5.4) is a formal DFT, and $\tilde u = \widetilde W\widetilde F$ is the corresponding two-dimensional ADFT. Note that the matrix $\widetilde W$ is a DFT matrix, and the matrix multiplication $\widetilde W\widetilde F$ can be computed efficiently by means of an FFT. Applied to the image reconstruction problem, we call this the Anterpolated Image Reconstruction (AIR) method. Let $\mathcal F_P^{-1}$ designate the integral operator in (5.2), that is, the inverse two-dimensional Fourier transform of a function in polar coordinates. Then the process leading to AIR may be summarized as follows. We wish to approximate the continuous image reconstruction $u = \mathcal F_P^{-1}\mathcal F_1 f$,
given $f$, the vector of length $MN$ containing samples of the projections. The AIR algorithm consists of the following steps:

- The integral operator $\mathcal F_1$ is approximated by $M$ applications of the DFT to the projections. This gives an approximation to $\hat f$, and is done using an FFT.

- The integral operator $\mathcal F_P^{-1}$ is approximated by the cubature formula $\bar u = W\Psi\hat f$, where $W$ is the matrix of the kernel on the irregular grid and $\Psi$ is the diagonal matrix with the cubature weights on the diagonal.

- The cubature formula is itself approximated by $\tilde u = \widetilde W[\mathcal I]^T\Psi\hat f$, where $\widetilde W$ is the matrix of kernel values on the regular auxiliary grid and $[\mathcal I]^T$ is the adjoint of interpolation. The application of $\widetilde W$ is done with an FFT.

5.3 Operation Count for the AIR

The cost of computing the AIR consists of the cost of computing the forward transforms, the cost of computing the interpolation weights, the cost of applying these weights in the anterpolation, and the cost of the matrix multiplication of the kernel with the anterpolated data. Of course, as in the one-dimensional case, the motivation behind the entire method is to perform this last task with an FFT. The forward transform step of this algorithm is identical to that of the interpolation methods of the previous chapter, and consists of $M$ applications of a one-dimensional FFT of length $N$, one for each projection. This step therefore has a cost of $O(MN\log_2 N)$ operations. The approximate kernel values are interpolated from the auxiliary grid at each of the $MN$ points on the original grid. Using bi-$p$-degree Lagrangian interpolation, $2(p+1)$ interpolation weights are required at each of the $MN$ points. The regular spacing of the auxiliary grid in both the $\omega_x$ and $\omega_y$ directions means that the denominators in (5.5) are independent of the location of the target point. Computing each weight therefore requires $p$ multiplications and $p$ additions, or $2p$ operations. The cost of computing the interpolation weights therefore totals $4(p+1)pMN$ operations. As noted for the one-dimensional ADFT, and again for the interpolation methods, if the same geometry is to be used repeatedly for many reconstructions, it is best to pre-compute and pre-store the interpolation weights, so that this cost need not be included in the complexity analysis. There are $MN$ cubature weights $\Psi(k)$, one for each of the gridpoints on the polar grid. The cost of computing $\widehat F = \Psi\hat f$ is therefore the cost of $MN$ multiplications. The anterpolation operator $[\mathcal I]^T$ is a matrix with dimensions $N_s^2\times MN$, and is applied to the vector $\Psi\hat f$. Performed as a matrix-vector multiplication, this would entail $O(N_s^2MN)$ operations. There is a much more efficient method of performing this computation, by making use of the index tables $\chi_x(k,j,n)$ and $\chi_y(k,j,n)$, which hold the $X$- and $Y$-indices, respectively, of the $n$th interpolation neighbor of the polar gridpoint indexed $(k,j)$. The method is given by the following algorithm:
1. Initialize $\widetilde F(X_l,Y_m) = 0$ for all points on the auxiliary grid.

2. For $k = -N/2+1,\dots,N/2$
     For $j = 0,1,\dots,M-1$
       For $i = 0,1,\dots,p$
         For $n = 0,1,\dots,p$
           Set $l \leftarrow \chi_x(k,j,n)$
           Set $m \leftarrow \chi_y(k,j,i)$
           Update $\widetilde F(X_l,Y_m) \leftarrow \widetilde F(X_l,Y_m) + w^x_{kjn}\,w^y_{kji}\,\widehat F(k,j)$

There are two multiplications and one addition for each update to $\widetilde F$, and there are $MN(p+1)^2$ such updates. Thus the cost of computing the $N_s^2$-element vector $[\mathcal I]^T\Psi\hat f$ is $3MN(p+1)^2$ operations. The last portion of AIR is the computation of the FFT. This is applied to the $N_s^2\times 1$ data set $\widetilde F$, which is considered to be an $N_s\times N_s$ matrix for this step. The cost is $O(N_s^2\log_2 N_s)$. Summarizing the computational costs of the AIR:

- $\Psi\hat f$ is computed by $MN$ multiplications.
- $\widetilde F = [\mathcal I]^T\Psi\hat f$ can be computed in $MN(3(p+1)^2+1)$ operations.
- $\tilde u = \mathrm{IDFT}_2\{\widetilde F\}$ requires $C\,N_s^2\log_2 N_s$ operations.
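The structure of the interpolation matrix $[\mathcal I]$, its transpose, and the scatter loop above can be illustrated in one dimension with $p = 1$; the function names, the edge-clamping convention, and the use of a dense matrix for checking are illustrative assumptions:

```python
import numpy as np

def interp_matrix(w, h, Ns):
    """Linear (p = 1) interpolation matrix from the regular grid
    X_l = l*h, l = 0..Ns-1, to the irregular points w -- a 1-D sketch
    of [I]; the anterpolation is its transpose."""
    A = np.zeros((len(w), Ns))
    for row, wk in enumerate(w):
        l = min(int(wk // h), Ns - 2)   # left neighbor (clamped at edge)
        s = wk / h - l                  # local coordinate in [0, 1]
        A[row, l] = 1.0 - s
        A[row, l + 1] = s
    return A

def anterpolate(F_irr, w, h, Ns):
    """Scatter loop: add each irregular sample times its interpolation
    weight into the regular grid, i.e. apply [I]^T without ever
    forming the matrix."""
    g = np.zeros(Ns)
    for wk, Fk in zip(w, F_irr):
        l = min(int(wk // h), Ns - 2)
        s = wk / h - l
        g[l] += (1.0 - s) * Fk
        g[l + 1] += s * Fk
    return g

rng = np.random.default_rng(1)
h, Ns = 0.25, 20
w = rng.uniform(0.0, h * (Ns - 1), size=15)
A = interp_matrix(w, h, Ns)
g_reg = rng.standard_normal(Ns)        # data on the regular grid
F_irr = rng.standard_normal(len(w))    # weighted data on the polar grid

# The scatter loop applies the transpose of the interpolation matrix,
assert np.allclose(anterpolate(F_irr, w, h, Ns), A.T @ F_irr)
# which is the defining adjoint identity <A g, F> = <g, A^T F>.
assert np.isclose((A @ g_reg) @ F_irr, g_reg @ (A.T @ F_irr))
```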
The total cost of the computation is therefore

$$C_1\,MN\log_2 N + MN\left(3(p+1)^2+1\right) + C_2\,N_s^2\log_2 N_s \qquad (5.11)$$

operations. If the number of views satisfies the requirement $M \ge N\pi/2$, as in Chapter 4, then this cost may be written

$$C_1\,N^2\log_2 N + C_2\,N^2(p+1)^2 + C_3\,N_s^2\log_2 N_s, \qquad (5.12)$$

where the factors of $\pi/2$ have been absorbed into the constants, as have the extra factors multiplying $N^2(p+1)^2$.

5.4 Error Analysis for the Anterpolated Image Reconstruction

The exact solution, the set of samples of $u(x_r,y_t)$, is given by $\vec u = \mathcal F_P^{-1}\mathcal F_1 f$. This is approximated by the cubature formula, $\bar u = W\Psi\hat f$. The error in this approximation is $E_C = \vec u - \bar u$, which we call the cubature error. Because of the computational expense, however, we do not actually calculate $\bar u$, but instead use an ADFT to approximate it by $\tilde u = \widetilde W[\mathcal I]^T\Psi\hat f$. The error in this approximation, which is called the error of anterpolation, is defined as $E_A = \bar u - \tilde u$. However, the vector $\tilde u$ is not computed exactly, since anterpolation is applied, not to $\hat f$, but rather to an approximation to $\hat f$ computed with an FFT. This approximation results in $\hat f + E_{\mathcal F}$, which is identically the same error as was analyzed in the previous chapter. Designating the vector that is actually computed by $\bar{\bar u} = \widetilde W[\mathcal I]^T\Psi(\hat f + E_{\mathcal F})$, the error in this last approximation is $E_{A\mathcal F} = \tilde u - \bar{\bar u}$, and
we call this the propagated DFT error. Summarizing the vectors and operators:

- $\vec u = \mathcal F_P^{-1}\mathcal F_1 f$ is the exact solution
- $\bar u = W\Psi\hat f$ is the cubature approximation to $\vec u$
- $\tilde u = \widetilde W[\mathcal I]^T\Psi\hat f$ is the ADFT approximation to $\bar u$
- $\bar{\bar u} = \widetilde W[\mathcal I]^T\Psi(\hat f + E_{\mathcal F})$ is the computed solution
- $E_C = \vec u - \bar u$ is the cubature error
- $E_A = \bar u - \tilde u$ is the anterpolation error
- $E_{A\mathcal F} = \tilde u - \bar{\bar u}$ is the propagated DFT error

Since $\bar{\bar u}$ is the vector that is actually computed, we may compare it to the exact solution $\vec u$ to give the error expression

$$\vec u - \bar{\bar u} = (\vec u - \bar u) + (\bar u - \tilde u) + (\tilde u - \bar{\bar u}), \qquad (5.13)$$

and the error norm may be bounded by

$$\|\vec u - \bar{\bar u}\| \le \|E_C\| + \|E_A\| + \|E_{A\mathcal F}\|. \qquad (5.14)$$

Each of the vectors on the right-hand side of this expression will be examined individually.

5.4.1 $\|E_A\|$, The Error of Anterpolation

We consider first the error due to approximating the cubature by the ADFT. Writing out $\bar u = W\Psi\hat f$ explicitly, the cubature is

$$\bar u(x_r,y_t) = \sum_{j=0}^{M-1}\,\sum_{k=-N/2+1}^{N/2}\Psi(k)\,\hat f(\omega_k,\phi_j)\,e^{i(x_r\omega^x_{kj}+y_t\omega^y_{kj})}, \qquad (5.15)$$
and the anterpolation approximating the cubature, $\tilde u = \widetilde W[\mathcal I]^T\Psi\hat f$, is

$$\tilde u(x_r,y_t) = \sum_{j=0}^{M-1}\,\sum_{k=-N/2+1}^{N/2}\Psi(k)\,\hat f(\omega_k,\phi_j)\,e(\omega^x_{kj},\omega^y_{kj},x_r,y_t), \qquad (5.16)$$

where $e(\omega^x_{kj},\omega^y_{kj},x_r,y_t)$ is the interpolated kernel given by (5.8). Let $R_p(\omega^x_{kj},\omega^y_{kj})$ designate the error committed in approximating the kernel in (5.15) by the kernel in (5.16):

$$R_p(\omega^x_{kj},\omega^y_{kj}) = e^{i(x_r\omega^x_{kj}+y_t\omega^y_{kj})} - e(\omega^x_{kj},\omega^y_{kj},x_r,y_t).$$

Then the error in approximating the cubature by anterpolation is

$$E_A(x_r,y_t) = \sum_{j=0}^{M-1}\,\sum_{k=-N/2+1}^{N/2}\Psi(k)\,\hat f(\omega_k,\phi_j)\,R_p(\omega^x_{kj},\omega^y_{kj}). \qquad (5.17)$$

The grid function $e(\omega^x_{kj},\omega^y_{kj},x_r,y_t)$ is the approximation of the irregular-grid function $e^{i(x_r\omega^x_{kj}+y_t\omega^y_{kj})}$ by interpolation of values of the regular-grid function $e^{i(x_rX+y_tY)}$. Bi-$p$-degree Lagrangian interpolation in Cartesian coordinates is used for this calculation. From [24], if $\omega^x_{kj} = X_{\chi_x(k,j,0)} + sh$ and $\omega^y_{kj} = Y_{\chi_y(k,j,0)} + rh$, then the error in this interpolation is given by

$$R_p = \frac{\pi_p(s)\,h^{p+1}}{(p+1)!}\,\frac{\partial^{p+1}g}{\partial X^{p+1}} + \frac{\pi_p(r)\,h^{p+1}}{(p+1)!}\,\frac{\partial^{p+1}g}{\partial Y^{p+1}} + \frac{\pi_p(s)\,\pi_p(r)\,h^{2p+2}}{[(p+1)!]^2}\,\frac{\partial^{2p+2}g}{\partial X^{p+1}\,\partial Y^{p+1}}, \qquad (5.18)$$

where each of the partial derivatives is evaluated at some point in the interval spanned by the interpolation neighbors. Noting that, for any non-negative integers $m$ and $n$,

$$\left|\frac{\partial^{m+n}}{\partial X^m\,\partial Y^n}\,e^{i(x_rX+y_tY)}\right| = |x_r|^m\,|y_t|^n,$$
and that, for the image reconstruction problem, $|x_r| \le 1$ for all $r$ while $|y_t| \le 1$ for all $t$, we have

$$|R_p| \le \frac{|\pi_p(s)|\,h^{p+1}}{(p+1)!} + \frac{|\pi_p(r)|\,h^{p+1}}{(p+1)!} + \frac{|\pi_p(s)|\,|\pi_p(r)|\,h^{2p+2}}{[(p+1)!]^2}. \qquad (5.19)$$

By Lemma 3.2 we know that, since the target point of the interpolation, represented by $(s,r)$, always satisfies $s,r \in [0,p]$,

$$\frac{|\pi_p(s)|}{(p+1)!} \le \frac{1}{2^{p+1}},$$

so that (5.19) may be written as

$$|R_p| \le 3\left(\frac h2\right)^{p+1}, \qquad (5.20)$$

assuming that $h \le 2$. Recalling that $h = 2\omega_M/N_s = N\Delta\omega/N_s$, we see that $N_s$ may always be chosen to satisfy this condition. Using this expression in (5.17), and recalling that $\Psi(k) > 0$ for all $k$, the error in approximating the cubature with anterpolation is bounded according to

$$|E_A(x_r,y_t)| \le 3\left(\frac h2\right)^{p+1}\sum_{j=0}^{M-1}\,\sum_{k=-N/2+1}^{N/2}\Psi(k)\,|\hat f(\omega_k,\phi_j)|. \qquad (5.21)$$

We can use this expression to find a uniform bound for the anterpolation error, $E_A$, as follows:

Theorem 5.1 Let the set of projections $\hat f(\omega,\phi_j)$ satisfy sufficient conditions so that, for all $\phi_j$,

$$|\hat f(\omega,\phi_j)| \le \frac{H}{|\omega|^{\alpha}} \qquad (5.22)$$
for some $\alpha > 2$. Then the infinity norm of the anterpolation error satisfies

$$\|E_A\|_\infty \le K\left(\frac{N\Delta\omega}{2N_s}\right)^{p+1},$$

where $K$ is a constant independent of $N$, $N_s$, and $p$.

Proof: Recall that the cubature weights are defined as

$$\Psi(k) = \begin{cases}\dfrac{(\Delta\omega)^2\,\Delta\phi}{3\,(2\pi)^{3/2}}, & k = 0,\\[6pt] |k|\,\dfrac{(\Delta\omega)^2\,\Delta\phi}{(2\pi)^{3/2}}, & k \ne 0.\end{cases}$$

Substituting these weights into (5.21) gives

$$|E_A(x_r,y_t)| \le \frac32\left(\frac h2\right)^{p+1}\frac{(\Delta\omega)^2\,\Delta\phi}{(2\pi)^{3/2}}\sum_{j=0}^{M-1}\left(\frac{2\,|\hat f(0,\phi_j)|}{3} + 4\sum_{k=1}^{N/2-1}k\,|\hat f(\omega_k,\phi_j)| + N\,|\hat f(\omega_{N/2},\phi_j)|\right), \qquad (5.23)$$

which holds for all $x_r$ and $y_t$ with $-N/2 \le r,t \le N/2$. The bounds on the magnitudes of $\hat f(\omega_k,\phi_j)$, as given by (5.22), become infinite as $|\omega| \to 0$. By Corollary 4.2, however, we also have the bound $|\hat f(\omega,\phi_j)| \le 2U/\pi$, where $U$ is an upper bound on $|f(p,\phi)|$. While the actual values of $U$ and $H$ are not readily available, there must exist a frequency $\eta$ where the two bounds are equal, that is,

$$\frac{H}{\eta^{\alpha}} = \frac{2U}{\pi}.$$

It is possible that, for a given value of $N$, this special frequency lies outside the disk of radius $N\Delta\omega/2$ in the frequency domain. However, since the goal of this
analysis is to examine the behavior of the error as $N \to \infty$, we may assume that $\eta$ is within the limits of the integration. Define the integer $k_*$ by $k_* = \lceil \eta/\Delta\omega\rceil$, so that the decay bound (5.22) applies for all $k > k_*$. Then the expression bounding the error becomes

$$|E_A(x_r,y_t)| \le \frac32\,\frac{(\Delta\omega)^2\,\Delta\phi}{(2\pi)^{3/2}}\left(\frac h2\right)^{p+1}\sum_{j=0}^{M-1}\left(\frac{4U}{3\pi} + \frac{8U}{\pi}\sum_{k=1}^{k_*}k + 4\sum_{k=k_*+1}^{N/2}\frac{k\,H}{(k\Delta\omega)^{\alpha}}\right),$$

where we have used $\omega_k = k\Delta\omega$. It may also be noted that the value $k = N/2$ is included in the last sum. The quantity inside the summation over $j$ is

$$\frac{4U}{\pi}\left(\frac13 + k_*(k_*+1)\right) + 4\sum_{k=k_*+1}^{N/2}\frac{k\,H}{(k\Delta\omega)^{\alpha}}. \qquad (5.24)$$

It is useful to 'borrow' one power of $\Delta\omega$ from the prefactor: if the sum over $k$ in (5.24) is multiplied by $\Delta\omega$, then it may be bounded by comparison with an integral:

$$\Delta\omega\sum_{k=k_*+1}^{N/2}\frac{k\,H}{(k\Delta\omega)^{\alpha}} \le \int_{k_*\Delta\omega}^{N\Delta\omega/2}\frac{H}{\omega^{\alpha-1}}\,d\omega \le \frac{H}{(\alpha-2)\,(k_*\Delta\omega)^{\alpha-2}}.$$

After multiplying by $\Delta\omega$, the expression in (5.24) therefore approaches a constant value as $N \to \infty$. Since this expression is independent of $j$, the summation over $j$ can be replaced by a multiplication by $M$. These facts, combined with the definition $\Delta\phi = \pi/M$, give

$$|E_A(x_r,y_t)| \le \frac32\left(\frac h2\right)^{p+1}\frac{1}{(2\pi)^{3/2}}\left[4U(\Delta\omega)^2\left(\frac13 + k_*(k_*+1)\right) + \frac{4\pi H\,\Delta\omega}{(\alpha-2)\,(k_*\Delta\omega)^{\alpha-2}}\right]. \qquad (5.25)$$
Excepting $(h/2)^{p+1}$, everything in this last expression is independent of $N$, $p$, and $N_s$, and may be taken as a constant $K$. Then the error expression becomes

$$|E_A(x_r,y_t)| \le K\left(\frac h2\right)^{p+1},$$

and, since $h = N\Delta\omega/N_s$, we arrive at

$$|E_A(x_r,y_t)| \le K\left(\frac{N\Delta\omega}{2N_s}\right)^{p+1}.$$

Since this expression holds for all $x_r$ and $y_t$, taking

$$\|E_A\|_\infty = \max_{r,t}\,|E_A(x_r,y_t)|$$

completes the proof. ∎

In this analysis, we consider $\Delta\omega$ to be fixed, so that it can be absorbed into the constants. The analysis is useful in determining the behavior of the anterpolation error as $N \to \infty$. As $N$ becomes large, it is necessary to take large values of $N_s$ to hold the error to some acceptable level. For fixed $N$, the value of $N_s$ can be selected to ensure that the error bound is diminished to some prescribed tolerance, as we will see shortly.

5.4.2 $\|E_{A\mathcal F}\|$, The Propagated DFT Error

Consider the fact that the forward transforms, $\hat f(\omega,\phi)$, are not computed exactly, but are instead approximated by $M$ applications of a DFT on vectors of length $N$. This is the same process that was used to compute the forward transforms in the standard method of Chapter 4, and it produces exactly the same result, namely $\hat f + E_{\mathcal F}$. As before, the error norm $\|E_{\mathcal F}\|$ can be expected to
behave like O(N^(−α)) for some α ≥ 1, provided the projections f(p, φ) are sufficiently smooth. The propagated DFT error is then

    E_AF = ũ − û = W[I]ᵀΨ E_F.

This explains the name 'propagated DFT error,' as the vector E_AF is simply the result of applying the anterpolation operator to the error of the DFT. It is possible to bound ||W[I]ᵀΨ E_F|| directly by writing each component of the vector as

    E_AF(x_τ, y_t) = Σ_{j=0}^{M−1} Σ_{k=−N/2+1}^{N/2} (anterpolation weights applied to E_F(w_k, φ_j)),

where E_F(w_k, φ_j) denotes the DFT error at the polar grid point (w_k, φ_j).

With a somewhat more circuitous approach, we can clarify this point and arrive at a tighter bound. To do this, it will be convenient to add and subtract the cubature of the DFT error, yielding

    E_AF = W[I]ᵀΨ E_F = WΨ E_F − WΨ E_F + W[I]ᵀΨ E_F = WΨ E_F − (WΨ − W[I]ᵀΨ) E_F.

Thus, the norm of the propagated DFT error can be bounded according to

    ||E_AF|| ≤ ||(WΨ − W[I]ᵀΨ) E_F|| + ||WΨ E_F||.

The first term, (WΨ − W[I]ᵀΨ)E_F, is simply the difference between cubature and anterpolation, applied to the DFT error. We have already examined the difference between these two operators applied to f̂. For the present problem, letting E_CA(x_τ, y_t) be the component of (WΨ − W[I]ᵀΨ)E_F corresponding to (x_τ, y_t), then from (5.17) we have

    E_CA(x_τ, y_t) = Σ_{j=0}^{M−1} Σ_{k=−N/2+1}^{N/2} (difference of the cubature and anterpolation weights applied to E_F(w_k, φ_j)).

Using the bound from (5.20), this becomes the estimate labelled (5.26). Since the Fourier coefficients of a compactly supported function are samples of the Fourier transform (up to a constant multiple), Theorems 3.5, 3.6, and 4.1 give the result that if, for any φ, f(p, φ) satisfies sufficient conditions that

    |f̂(w, φ)| ≤ H/|w|^α


for some α > 1, then the N-point vector of DFT error, E_F, satisfies

    ||E_F||_∞ ≤ C/N^α    (5.27)

for some C > 0. Inserting this into (5.26) and writing the cubature weights explicitly, as in (5.23), the terms in parentheses are independent of j, so the summation over j introduces a factor of M, which is cancelled by the denominator of Δφ = π/M. The sum over k equals N²/4 − N/2, so the resulting expression can be bounded with a constant K. Recalling that h = NΔw/N_s and taking the maximum over all x_τ and y_t, this becomes

    ||(WΨ − W[I]ᵀΨ) E_F||_∞ ≤ (NΔw/(2N_s))^(p+1) · K/N^(α−2).    (5.28)

Next, consider the expression WΨ E_F, which is cubature applied to the DFT error E_F. Let E_CF(x_τ, y_t) represent the (x_τ, y_t) component of WΨ E_F. It follows that


|E_CF(x_τ, y_t)| can be bounded by

    Σ_{j=0}^{M−1} Σ_k w(k) |E_F(w_k, φ_j)| ≤ ||E_F||_∞ Σ_{j=0}^{M−1} Σ_k w(k).

Summing over all the cubature weights alone is equivalent to performing cubature on a constant function. The full cubature is interpolatory, using linear polynomials, and therefore performs this integration exactly, yielding the area of the disk (divided by the constant (2π)^(3/2), which is built into the weights); therefore,

    Σ_{j=0}^{M−1} Σ_k w(k) = (π/(2π)^(3/2)) (NΔw/2)²,

and hence ||WΨ E_F||_∞ is bounded by this quantity times ||E_F||_∞. Using (5.27) and taking the maximum over x_τ and y_t, this gives

    ||WΨ E_F||_∞ ≤ (π/(2π)^(3/2)) (NΔw/2)² C/N^α.    (5.29)

Finally, putting together (5.28) and (5.29) gives the desired bound for the propagated DFT error:

    ||E_AF||_∞ ≤ (NΔw/(2N_s))^(p+1) K/N^(α−2) + K′/N^(α−2),    (5.30)

where K′ absorbs the constants of (5.29) for fixed Δw. We have shown

Theorem 5.2 Let the AIR algorithm be used to reconstruct u(x, y), where the projections [Ru] = f(p, φ) satisfy sufficient conditions such that for all φ

    |f̂(w, φ)| ≤ C/|w|^α


where α > 2. Then, for any interpolation degree p and size of auxiliary grid N_s, the propagated DFT error satisfies

    ||E_AF||_∞ = O( (NΔw/(2N_s))^(p+1) · 1/N^(α−2) ) + O( 1/N^(α−2) )

as N → ∞. If, in addition, N_s is always selected to satisfy 2N_s ≥ NΔw, then

    ||E_AF||_∞ = O( 1/N^(α−2) ). ∎

5.4.3 ||E_C||, The Cubature Error

The cubature formula (5.3) approximates the integral (5.2) first by approximating the integrand as a tensor product of piecewise linear polynomials in the radial and angular directions, then by integrating the polynomial approximation exactly. In order to determine the error that is committed in this method, it is necessary first to examine the error in the polynomial approximation. We will approach this the same way we analysed the error of interpolation in the previous chapter, since that, too, was a tensor product of polynomials in the radial and angular directions.

Figure 5.2 shows the geometry of interpolating the value of the integrand. Let g(w, φ; x, y) denote the integrand f̂(w, φ)e^(iw(x cos φ + y sin φ)). For convenience we will drop the x and y from this notation, and write g(w, φ). Within the area element determined by the points (w_k, φ_j), (w_{k+1}, φ_j), (w_k, φ_{j+1}), and (w_{k+1}, φ_{j+1}),


Figure 5.2: Interpolation for the cubature. The values of the integrand are interpolated first for (w, φ_0) and (w, φ_2), using the values at w_0 and w_1 along each ray. These points are then used to interpolate the value at (w, φ), using linear interpolation along the dashed arc.

we consider first the interpolation of the value of f̂ at the radial distance w along the two radii at angles φ_j and φ_{j+1}. Along the ray at angle φ_j, this is linear interpolation in w plus an error:

    g(w, φ_j) = Σ_{n=0}^{1} w_n(w) g(w_{k+n}, φ_j) − ((τ² − τ)(Δw)²/2!) ∂²g/∂w²(ξ_j, φ_j),    (5.31)

where w = w_k + τΔw and ξ_j lies somewhere in [w_k, w_{k+1}] along the ray associated with φ_j. Similarly, along the ray specified by φ_{j+1},

    g(w, φ_{j+1}) = Σ_{n=0}^{1} w_n(w) g(w_{k+n}, φ_{j+1}) − ((τ² − τ)(Δw)²/2!) ∂²g/∂w²(ξ_{j+1}, φ_{j+1}).    (5.32)

The value g(w, φ) can then be approximated by parameterizing the variation in g (for fixed w) as a function of arclength,


s = wφ. The parameterized function is then interpolated by

    Σ_{m=0}^{1} w_m(s) g(w, s_j + mΔs) = g(w, s) − ((t² − t)(Δs)²/2!) ∂²g/∂s²(w, η),

where s_j = wφ_j, s = wφ_j + tΔs, and η ∈ [s_j, s_{j+1}]. Noting that Δs = wΔφ, this may be written

    Σ_{m=0}^{1} w_m(φ) g(w, φ_{j+m}) = g(w, φ) − ((t² − t)(wΔφ)²/2!) ∂²g/∂s²(w, η).    (5.33)

The interpolation weights are

    w_0(φ) = (φ_{j+1} − φ)/(φ_{j+1} − φ_j)  and  w_1(φ) = (φ − φ_j)/(φ_{j+1} − φ_j).

However, the values actually used in (5.33) are not g(w, φ_j) and g(w, φ_{j+1}), but the right-hand sides of (5.31) and (5.32), respectively. Inserting these terms into (5.33) gives an expression for the error of the interpolation, denoted by R(w, φ):

    R(w, φ) = ((t² − t)(Δφ)²/2!) ∂²g/∂φ²(w, η) + Σ_{m=0}^{1} w_m(φ) ((τ² − τ)(Δw)²/2!) ∂²g/∂w²(ξ_{j+m}, φ_{j+m}).    (5.34)

With this expression for the interpolation error, the cubature error is given by integration. For the elementary area specified by k and j, the error of the cubature is given by

    E_C(k, j) = ∫_{w_k}^{w_{k+1}} ∫_{φ_j}^{φ_{j+1}} R(w, φ) w dφ dw,

so the total cubature error is the sum

    E_C(x, y) = Σ_{j=0}^{2M−1} Σ_{k=0}^{N/2−1} ( ∫_{w_k}^{w_{k+1}} ∫_{φ_j}^{φ_{j+1}} R(w, φ) w dφ dw ).
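The remainder terms in (5.31) and (5.34) are ordinary linear-interpolation errors, bounded by (Δw)²/8 times the maximal second derivative, since |t² − t|/2 ≤ 1/8 on [0, 1]. A quick numerical check (a sketch, using cos as a stand-in for the behavior of the integrand along one ray; not one of the thesis's functions) confirms the bound:

```python
import numpy as np

# Linear interpolation of g on [w0, w1]; by (5.31) the remainder is
# |(tau^2 - tau)/2| * (dw)^2 * |g''|, which is at most (dw^2 / 8) * max|g''|.
g = np.cos                                  # stand-in integrand; max|g''| <= 1
w0, w1 = 0.3, 0.5
dw = w1 - w0
w = np.linspace(w0, w1, 101)
tau = (w - w0) / dw
interp = (1 - tau) * g(w0) + tau * g(w1)    # the linear weights of (5.33)
err = np.abs(interp - g(w))
assert err.max() <= dw**2 / 8               # remainder bound holds
```

The interpolant matches g exactly at both cell endpoints, and the worst error occurs near the midpoint, as the (t² − t) factor predicts.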


Let G1 and G2 be the maximal absolute values of the second partial derivatives of g(w, φ) in the φ and w directions, respectively. It is easy to show that Σ_{m=0}^{1} |w_m(w)| = 1, since the interpolation is linear. Using the changes of variables w = w_k + τΔw and φ = φ_j + tΔφ, the cubature error over an elementary area becomes

    E_C(k, j) = ΔwΔφ ∫_0^1 ∫_0^1 R(w_k + τΔw, φ_j + tΔφ)(w_k + τΔw) dτ dt.

Using (5.34) in this expression and computing the value of the integral leads to

    |E_C(k, j)| ≤ ( (Δw)²(Δφ)³G1 + (Δw)⁴(Δφ)G2 )/12 · (k + 1/2).

An expression bounding the total cubature error at (x, y) is obtained by summing over all the elementary areas, yielding

    |E_C(x, y)| ≤ Σ_{j=0}^{2M−1} Σ_{k=0}^{N/2−1} ( (Δw)²(Δφ)³G1 + (Δw)⁴(Δφ)G2 )/12 · (k + 1/2)
                = 2M ( (Δw)²(Δφ)³G1 + (Δw)⁴(Δφ)G2 )/12 · Σ_{k=0}^{N/2−1} (k + 1/2)
                = N²(Δw)²π³G1/(48M²) + N²(Δw)⁴πG2/48,

where the definition Δφ = π/M has been used. It may be noticed that the quantity (k + 1/2) in the sum represents the midpoint (in the radial direction) of each elementary area. The error may be viewed as the sum over all the areas of the average error in each area.
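This bound behaves like a two-dimensional trapezoidal rule applied to the integrand times the polar weight r. A small numerical sketch (with an illustrative smooth integrand, not one of the thesis phantoms) shows the expected second-order decay when both mesh widths are halved:

```python
import numpy as np

def disk_integral_trap(g, R, n_r, n_theta):
    """Tensor-product trapezoidal cubature for  I = ∫∫ g(r,t) r dr dt  over the
    disk r in [0, R], t in [0, 2*pi] -- linear-polynomial cubature, analogous
    to the polar rule analysed above."""
    r = np.linspace(0.0, R, n_r + 1)
    t = np.linspace(0.0, 2 * np.pi, n_theta + 1)
    wr = np.full(n_r + 1, R / n_r); wr[[0, -1]] /= 2         # trapezoid weights
    wt = np.full(n_theta + 1, 2 * np.pi / n_theta); wt[[0, -1]] /= 2
    rr, tt = np.meshgrid(r, t, indexing="ij")
    return np.einsum("i,j,ij->", wr, wt, g(rr, tt) * rr)

g = lambda r, t: np.exp(-r**2) * (1 + 0.5 * np.cos(t))      # illustrative integrand
exact = np.pi * (1 - np.exp(-4.0))                          # closed form for R = 2
e1 = abs(disk_integral_trap(g, 2.0, 16, 16) - exact)
e2 = abs(disk_integral_trap(g, 2.0, 32, 32) - exact)
# Halving both mesh widths reduces the error by roughly one-fourth (O(h^2)).
assert e2 < e1 / 3.0
```

The observed error ratio near 4 mirrors the one-dimensional trapezoidal rule, consistent with the O(N^(−2)) behavior noted below for Δr = H/N.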


Suppose that the number of projections is related to the number of samples per projection by M = Nπ/2, the same condition that was imposed in the previous chapter. Using the definitions of G1 and G2, the cubature error bound becomes

    |E_C(x, y)| ≤ (Δw)²πG1/12 + N²(Δw)⁴πG2/48.    (5.35)

Unless the extension operator X_S^N is used prior to performing the forward DFTs, the value of Δw remains constant as the number of points is increased. It therefore appears that the cubature error could become unbounded as the number of points becomes infinite, which would seem to indicate that using more samples of the projections at a finer sampling may be disastrous for the AIR method. Before considering this question, however, we make the following observation. Suppose that cubature is used, not to compute a Fourier transform, but to approximate the integral of an arbitrary function of polar coordinates, h(r, φ). Replacing Δw by Δr = H/N, the error bound becomes

    |E_C| ≤ H²πG1/(12N²) + H⁴πG2/(48N²).

Consisting as it does of terms of order N^(−2) with constants depending on the second derivative of the integrand, this is precisely the kind of error expression that would be expected for a cubature based on approximation by linear polynomials, such as the trapezoidal rule.

Returning to (5.35), recall that g(w, φ) = f̂(w, φ)e^(iw(x cos φ + y sin φ)), and note that since |x| ≤ 1 and |y| ≤ 1, the partial derivatives may be bounded according


to bounds involving |w|, the derivatives of f̂(w, φ), and the bound on f̂ itself. From Lemma 4.2 we have a bound on the n-th derivative of f̂ with respect to w for any non-negative integer n. Therefore, bounds on the partial derivatives of the integrand follow. Inserting these expressions into (5.35), and noting that the maximal values of |w| occur when |w| = NΔw/2, we find the bound labelled (5.36). Once again we notice that if Δw = H/N, the cubature error is O(N^(−2)), but that if Δw is fixed (as it is in the AIR), this error bound approaches infinity as N → ∞. In fact, the error itself does not become unbounded by using large numbers of samples of the projections. Rather, if Δw is fixed and the projections satisfy certain smoothness conditions, the error remains finite as N → ∞.


Theorem 5.3 For all φ, let the projections f(p, φ) satisfy sufficient conditions such that |f̂(w, φ)| ≤ H/|w|^α for some α > 2, and suppose the AIR algorithm is to be used for the image reconstruction. If the frequency sample rate Δw is held constant, then as N → ∞ the cubature error satisfies ||E_C||_∞ < Q, where Q is a constant that depends on Δw, but is independent of N.

Proof: Recall that the total error of cubature at the point (x, y) is

    E_C(x, y) = Σ_{j=0}^{2M−1} Σ_{k=0}^{N/2−1} ( ∫_{w_k}^{w_{k+1}} ∫_{φ_j}^{φ_{j+1}} R(w, φ) w dφ dw ),

where R(w, φ) is the difference between the integrand and the polynomial approximation in the elementary area from w_k to w_{k+1} in the radial direction, and between the radii at angles φ_j and φ_{j+1}:

    R(w, φ) = g(w, φ) − Σ_{m=0}^{1} Σ_{n=0}^{1} w_m(φ) w_n(w) g(w_{k+n}, φ_{j+m}).

The remainder can be bounded according to

    |R(w, φ)| ≤ |g(w, φ)| + Σ_{m=0}^{1} Σ_{n=0}^{1} |w_m(φ)| |w_n(w)| |g(w_{k+n}, φ_{j+m})|.

Now, since w_k ≤ w ≤ w_{k+1}, then

    |g(w, φ)| = |f̂(w, φ) e^(i(xw cos φ + yw sin φ))| = |f̂(w, φ)| ≤ H/|w|^α ≤ H/|w_k|^α,


and the same reasoning gives |g(w_{k+n}, φ_{j+m})| ≤ H/|w_k|^α. Hence,

    |R(w, φ)| ≤ H/|w_k|^α + (H/|w_k|^α) Σ_{m=0}^{1} Σ_{n=0}^{1} |w_m(φ)| |w_n(w)|.

It is not difficult to show that, because the interpolation is linear,

    Σ_{m=0}^{1} Σ_{n=0}^{1} |w_m(φ)| |w_n(w)| = 1.

Therefore,

    |R(w, φ)| ≤ 2H/|w_k|^α = 2H/(|k|^α (Δw)^α),

which is valid for all k ≠ 0. Integration over the area element yields

    ∫_{w_k}^{w_{k+1}} ∫_{φ_j}^{φ_{j+1}} |R(w, φ)| w dφ dw ≤ (2H Δφ/(k^α(Δw)^α)) · (Δw)²((k+1)² − k²)/2 = (HΔφ/(Δw)^(α−2)) (2/k^(α−1) + 1/k^α).

For k = 0, recall that Corollary 4.2 gives |f̂(w, φ)| ≤ 2U/π, where U is an upper bound on |f(p, φ)|. Integrating this bound over the innermost area element yields a contribution of at most (Δw)²ΔφU/π. Using Δφ = π/M and performing the summation of all contributions to the error yields

    |E_C(x, y)| < Σ_{j=0}^{2M−1} [ (Δw)²ΔφU/π + (HΔφ/(Δw)^(α−2)) Σ_{k=1}^{N/2−1} (2/k^(α−1) + 1/k^α) ]
               < 2M [ (Δw)²U/M + (Hπ/(M(Δw)^(α−2))) Σ_{k=1}^{N/2−1} (2/k^(α−1) + 1/k^α) ]
               < 2(Δw)²U + (2Hπ/(Δw)^(α−2)) [2ζ(α−1) + ζ(α)],

where ζ(x) is the Riemann zeta function. Letting

    Q = 2(Δw)²U + (2Hπ/(Δw)^(α−2)) [2ζ(α−1) + ζ(α)]


and taking the maximum over all (x, y) completes the proof. ∎

5.4.4 The Behavior of the Reconstruction Error

Having examined each of the terms in the expression for the error bound individually, we consider the overall error. Combining the results of Theorems 5.1, 5.2, and 5.3, then provided we select N_s to satisfy 2N_s ≥ NΔw, we have

    ||u − ũ|| ≤ Q + O( (NΔw/(2N_s))^(p+1) ) + O( 1/N^(α−2) )

as N → ∞. This result is very similar to that obtained for the standard Fourier method, in that there is no assurance that the algorithm will converge to the continuous solution as the number of samples per projection (and the number of projections) becomes large. In both the standard method and the AIR algorithm, there are three error vectors that must be considered, and in both cases two of the error vectors can be made small by appropriate selection of the problem parameters. Just as taking N very large in the standard method will eliminate the errors due to the forward and inverse transforms but leave the interpolation error unaffected, so in AIR the error due to the forward transform can be eliminated by using large values of N. The anterpolation error can be eliminated by taking a sufficiently large interpolation degree, p, combined with a sufficiently small spacing on the auxiliary grid, NΔw/N_s. The bound on the error of cubature,


like that of the interpolation error in the standard method, is unaffected by the parameter selection.

The cause of these problems is the reciprocity relation: simply taking more points per projection does not alter the frequency spacing, Δw. It was found in the previous chapter that refining the frequency grid by applying X_S^N will diminish the error of interpolation, while simultaneous grid refinement in both space and frequency, by taking larger values of both N and S, would ensure that the discrete algorithm converges to the exact solution. It seems likely that the same approach will be fruitful for the AIR algorithm, and we examine this question next.

5.5 Improving AIR by Frequency Grid Refinement

We have seen that the AIR method, like the standard method of Chapter 4, is not guaranteed to converge as the number of samples along each projection goes to infinity. Having noted the similarity between the error expressions of the two methods, it is natural to ask whether the extension operator X_S^N may be expected to improve the reconstruction of the AIR method, as it does the standard method. As in the previous chapter, the operator would be applied to the projection data prior to the computation of the forward DFTs. In operator notation, the new algorithm is written as

    ũ = F⁻¹ [I]ᵀ Ψ F₁ X_S^N f.    (5.38)
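The action of X_S^N on a single sampled projection can be sketched with the FFT: extending a length-N vector with zeros to length SN produces transform values on a frequency grid refined from Δw to Δw/S, and every S-th sample of the refined transform reproduces the original one. (A minimal sketch; the variable names are illustrative.)

```python
import numpy as np

N, S = 16, 4
rng = np.random.default_rng(0)
f = rng.standard_normal(N)           # one sampled projection
F_coarse = np.fft.fft(f)             # frequency samples spaced Delta_w apart
F_fine = np.fft.fft(f, n=S * N)      # zero-extended: spacing Delta_w / S
# Every S-th sample of the refined transform reproduces the coarse one.
assert np.allclose(F_fine[::S], F_coarse)
```

This is why the extension refines the radial-frequency grid without changing the information content of the projection: the new transform values interlace the old ones.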


To determine the viability of this algorithm, we examine the effect of the operator X_S^N on each of the error components, as well as the effect on the operation count.

5.5.1 The Effect of X_S^N on the Operation Count of AIR

Assuming that M = Nπ/2, the operation count of the AIR method is

    W = K₁(π/2)N² log₂ N + (3π/2)(p+1)²N² + K₃ N_s² log₂ N_s.    (5.39)

The operator X_S^N extends the projection vectors to S times their original length, but does not affect the number of projections. Thus, the operation count of the forward DFTs becomes

    K₁ M (SN) log₂(SN) = K₁ (π/2) S N² (log₂ N + log₂ S).

This operation count entails S(1 + log S/log N) times the number of operations required without the extension. Computing Ψf̂, that is, applying the cubature weights, requires one multiplication for each polar grid point, and thus requires MSN operations in the modified algorithm. This becomes SN²π/2 when the condition relating M to N is assumed. The cost of creating the interpolation weights is not included, since it


is assumed that these are pre-computed and stored. The algorithm for the computation of [I]ᵀΨ f̂ consists of four nested loops, over the M = Nπ/2 projections of SN points each and the p + 1 weights in each direction. Three operations are required at each step, resulting in a total of 3(p+1)²SMN operations. Combined with the multiplication F = Ψf̂ and using the relationship between M and N, this yields a count of

    3(p+1)² S (π/2) N² + S (π/2) N²

operations, so that the cost of the cubature is S times the cost without extensions. Finally, we note that the operation count of the DFT portion of the anterpolated reconstruction is entirely unaffected by the extension operator, and remains K₃ N_s² log₂ N_s. The total cost of the modified algorithm ũ = F⁻¹[I]ᵀΨ F₁ X_S^N f is

    W = K₁(π/2) S N² log₂(SN) + (3π/2)(p+1)² S N² + K₃ N_s² log₂ N_s.    (5.40)
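The factor S(1 + log S/log N) for the forward transforms follows directly from replacing N log₂ N by SN log₂(SN) in the DFT cost term; a one-line check (with an arbitrary illustrative constant K₁):

```python
import math

def fwd_dft_ops(N, S=1, K1=5.0):
    """Forward-DFT cost term: K1 * M * (S*N) * log2(S*N), with M = N*pi/2."""
    return K1 * (math.pi / 2) * N * (S * N) * math.log2(S * N)

N, S = 256, 4
ratio = fwd_dft_ops(N, S) / fwd_dft_ops(N)
# The cost grows by exactly S * (1 + log S / log N), independent of K1.
assert abs(ratio - S * (1 + math.log2(S) / math.log2(N))) < 1e-12
```

Since log S/log N is base-independent, the same factor holds whether the logarithms are natural or base 2.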


5.5.2 The Effect of X_S^N on the Cubature Error ||E_C||

The reason for applying the operator X_S^N is to refine the radial-frequency grid by a factor of S, reducing the frequency grid sample from Δw to Δw/S. Careful examination of the arguments leading to (5.36) reveals that the entire discussion remains intact upon application of X_S^N, provided N is replaced with SN and Δw is replaced with Δw/S. For fixed N, the expression for the upper bound on the cubature error is then diminished by a factor of S^(−2) by the application of X_S^N. This is consistent with the behavior of the one-dimensional trapezoidal rule, in which doubling the number of quadrature points reduces the error bound by one-fourth.

5.5.3 The Effect of X_S^N on the Anterpolation Error ||E_A||

The error committed in the interpolation of the kernel depends on h, the grid spacing on the auxiliary grid, and is bounded by a multiple of (h/2)^(p+1). For standard AIR, this fact was used to bound the error of anterpolation, utilizing the knowledge that |f̂(w, φ)| ≤ H/|w|^α for |w| ≥ η,

where k_η = ⌈η/Δw⌉. For our purposes, it is useful to re-write this expression in a form in which everything other than the factors involving S, Δw, k_η, N, and N_s has been absorbed into constants H₁ and H₂. Examining the arguments leading to this expression, it can be seen that the only modifications necessitated by the inclusion of X_S^N in the algorithm are that we replace N by SN, Δw by Δw/S, and k_η by Sk_η. Making these substitutions leads to an error bound in which, as S → ∞, the first term in brackets approaches a constant while the second term vanishes. We conclude that the application of X_S^N can be of some use in eliminating the anterpolation error, but that at some point there ceases to be any effect of further extensions. The expression for the anterpolation error thus remains

    ||E_A||_∞ ≤ K̃ (NΔw/(2N_s))^(p+1)

with a new constant K̃.

5.5.4 The Effect of X_S^N on the Propagated DFT Error ||E_AF||

Recall that the norm of the propagated DFT error could be bounded according to

    ||E_AF|| ≤ ||(WΨ − W[I]ᵀΨ) E_F|| + ||WΨ E_F||.    (5.41)


It was shown in Chapter 4 that the operator X_S^N has no effect on the order of ||E_F||_∞, which we take to be given by

    ||E_F||_∞ = O(N^(−α)).    (5.42)

In the earlier analysis of this expression it was noted that the second term on the right-hand side of (5.41) was bounded by ||E_F||_∞ multiplied by the sum of the cubature weights. Upon applying the extension operator X_S^N, the cubature weights are modified to reflect the fact that there are now SN samples for each φ_j, and that they are spaced Δw/S apart. The cubature weights still add up to the area of the disk divided by (2π)^(3/2), however, and the area of the disk is not affected by the extension. The expression bounding ||WΨ E_F|| is therefore unaffected by the extension process. Examination of the process leading to a bound on ||(WΨ − W[I]ᵀΨ) E_F|| reveals that the argument remains valid upon substituting Δw/S for Δw and replacing the upper limit of the sum over k with ⌈SN/2⌉. Using X_S^N and applying (5.42), this leads to an analogous expression.


It may be observed that the expression in parentheses is dominated by the middle term, where the factor of S² exactly cancels the S² in the denominator of the term multiplying the expression in parentheses. It may be concluded that the bound on the propagated DFT error is unaffected by the extension operation.

5.5.5 The Net Effect and Cost of X_S^N

We summarize the results of the analysis of the extension operator. For a given value of N, applying the extension operator X_S^N entails an increase in the computational cost of AIR, with anterpolation costing S times the cost of the same operation in the unextended case, and with the forward transforms costing S(1 + log S/log N) times the cost of the forward transforms of the original version. There is no effect on the expense of the FFT on the auxiliary grid. The bound on the cubature error is diminished by a factor of S^(−2) by the extension operation. The anterpolation error may be expected to decrease for small extension factors, but this improvement will stall as S becomes large, so the ratio of NΔw to N_s remains the critical factor for controlling this error. The propagated DFT error bound is not affected by the extension.

Cubature can be expected to be the largest source of error in most cases. For a cost of roughly S times the work of the original AIR, a decrease in this error by a factor as great as S^(−2) may be anticipated. The cost of the extension


appears to be justified by the potential gain. At some point, however, the error of anterpolation and the propagated DFT error may dominate. Since these are unaffected by the extension method, further application of the extension operator will not help.

5.6 Selection of N_s and p

A method similar to that used for the selection of values for N_s and p in the one-dimensional problem can be applied here. In light of the dependence of the anterpolation error on the quantity

    (NΔw/(2N_s))^(p+1),    (5.43)

it seems natural to require that this quantity be made smaller than some prescribed value ε. The values should also be selected to minimize the amount of work needed for AIR while satisfying the first requirement. From (5.12), the work of the AIR algorithm may be written as a function of N_s and p by

    W(N_s, p) = K₁(π/2)N² log₂ N + K₂(p+1)²N² + K₃ N_s² log₂ N_s,    (5.44)

where the constants K₁ and K₃ depend on the FFT algorithms and K₂ is approximately 3π/2. The requirement that the quantity in (5.43) be no greater than ε < 1 can be written as

    N_s ≥ (NΔw/2)(1/ε)^(1/(p+1)).    (5.45)
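The constrained selection of N_s and p can be sketched numerically: for each p, take the smallest N_s allowed by (5.45) and evaluate the work model (5.44), then keep the cheapest pair. (K₁ and K₃ below are arbitrary illustrative constants, not values from the thesis.)

```python
import math

def min_Ns(N, dw, eps, p):
    # Smallest auxiliary grid satisfying (N*dw/(2*Ns))**(p+1) <= eps, i.e. (5.45).
    return 0.5 * N * dw * (1.0 / eps) ** (1.0 / (p + 1))

def work(N, Ns, p, K1=5.0, K3=5.0):
    # Operation-count model of (5.44); K2 is approximately 3*pi/2.
    K2 = 3 * math.pi / 2
    return (K1 * (math.pi / 2) * N**2 * math.log2(N)
            + K2 * (p + 1) ** 2 * N**2
            + K3 * Ns**2 * math.log2(Ns))

N, dw, eps = 256, 1.0, 1e-3
best_W, best_p = min((work(N, min_Ns(N, dw, eps, p), p), p) for p in range(1, 16))
# The optimum lies strictly between the extreme interpolation degrees:
# small p forces a very large Ns, while large p makes the weight loops costly.
assert 1 < best_p < 15
```

This is the discrete analogue of minimizing W along the curve ∂S discussed below: the auxiliary-grid requirement decreases monotonically in p, while the (p+1)² anterpolation cost grows, so the minimizing degree is interior.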


Let S be the set of points in the (N_s, p) plane satisfying this condition, and let ∂S be the set of points where equality holds in (5.45). By differentiating this last expression it may be seen that the values of N_s that lie on ∂S form a monotonically decreasing function of p. In fact, the region of the (N_s, p) plane satisfying (5.45) is that portion of the first quadrant above the curve ∂S (inclusive), and to the right of the N_s axis (inclusive). Just as was found in the one-dimensional ADFT analysis, the optimal parameters N_s and p, if they exist, will lie on ∂S.

Lemma 5.1 If η > (NΔw/2)(1/ε)^(1/(p+1)) and θ = (NΔw/2)(1/ε)^(1/(p+1)), then W(θ, p) < W(η, p).

Proof: Compute the partial derivative ∂W/∂N_s from (5.44) to find that ∂W/∂N_s > 0. ∎

We must therefore examine ∂S. Consider the extreme cases, p = 0 and p → ∞. If p = 0, which corresponds to nearest-neighbor interpolation, the condition in (5.45) becomes N_s ≥ NΔw/(2ε). On the other hand, using extremely large interpolation degrees leads to the condition N_s ≥ NΔw/2 as p → ∞. This latter case gives a lower bound on N_s, and since equality in (5.45) suffices, the former case can be taken to be an upper bound on N_s. Proceeding in a fashion entirely analogous to the one-dimensional case, the work function W(N_s, p) can


be parameterized on ∂S by letting N_s = NΔw·b/2, where

    b = (1/ε)^(1/(p+1)),  so that  p + 1 = ln(1/ε)/ln b.

The work function is then

    W(b) = K₁(π/2)N² log₂ N + K₂N² (ln(1/ε)/ln b)² + K₃ (NΔw·b/2)² log₂(NΔw·b/2)    (5.46)

for 1 < b ≤ 1/ε. It can then be shown that

Theorem 5.4 There exists a unique value b that minimizes (5.46) subject to 1 < b ≤ 1/ε, and therefore a unique point on ∂S that minimizes W.

Proof: Equation (5.46) can be written as

    W(b) = H₁ + H₂/(ln b)² + H₃ b²(ln b + H₄),

where the four non-negative constants are

    H₁ = K₁(π/2)N² log₂ N,  H₂ = K₂N²(ln(1/ε))²,  H₃ = K₃(NΔw)²/(4 ln 2),  H₄ = ln(NΔw/2).

The work function W(b) is a continuous, twice differentiable function on (1, 1/ε] with

    W′(b) = 2H₃ b ln b + b(2H₃H₄ + H₃) − 2H₂/(b(ln b)³)


and

    W″(b) = 2H₃ ln b + 3H₃ + 2H₃H₄ + (2H₂/(b²(ln b)³))(1 + 3/ln b).

Evidently, W″(b) > 0 for all b ∈ (1, 1/ε], so if a critical point W′(b) = 0 exists, it is necessarily a local minimum. As b → 1⁺, W′(b) → −∞, and, at the other end of the interval, W′(1/ε) may be examined term by term. Noting that (1/ε)ln(1/ε) > 1 and that K₂ and K₃ are of similar magnitude, it is evident that the first term on the right-hand side of this equation is of greater magnitude than the last, so that W′(1/ε) > 0. Since W′(b) is continuous and of opposite signs at the endpoints of (1, 1/ε], and since W″(b) > 0 throughout the interval, there is exactly one point at which W′(b) = 0, corresponding to a global minimum. ∎

5.7 Numerical Examples 3: The AIR Algorithm

The performance of the AIR method will be illustrated using one of the phantoms created in Chapter 4, namely, the one consisting of the hollow cylinder containing a small square. The AIR method was used to perform the reconstruction using three different values for N_s and three different interpolation degrees: N_s = 128, 256, and 512, with interpolation degrees p = 1, 2, and 3, corresponding, respectively, to bi-linear, bi-quadratic, and bi-cubic


interpolation. No analysis has been presented for the case p = 2, as it has been assumed throughout that p is an odd integer. It will be seen, however, that the behavior of the algorithm for p = 2 follows much the same pattern as it does when p is an odd integer.

The reconstructions were photographed, and are displayed in Figure 5.3. The top row of images was generated using the constant value N_s = 128, for p = 1, 2, and 3, from left to right. This value is not expected to give especially good results, since the condition N_s ≥ NΔw/2 is not satisfied. Indeed, the reconstruction corresponding to p = 1 is especially bad. However, it can be seen that the effect of increasing the interpolation degree is significant, as quadratic and cubic interpolations both produce recognizable images, even with N_s too small. The middle row of images corresponds to the constant value N_s = 256, while the bottom row has the reconstructions for N_s = 512. The images are of very high quality, and in fact can only be distinguished visually if they are displayed in color.

The algorithm behaves very similarly to the standard method, in that there is a large improvement upon first increasing p or N_s above their minimal values, but the improvement stalls after a certain point is reached. The discrete L₂ norms of the error of the reconstructions are displayed in Table 5.1, and may be compared with the values in the bottom half of Table 4.2. It can be seen that the AIR has produced images that are as good as the best that were achieved in the standard method. In fact, this occurred with every


            N_s = 128    N_s = 256    N_s = 512
    p = 1   0.503377     0.254740     0.180393
    p = 2   0.263810     0.176440     0.169604
    p = 3   0.230194     0.169669     0.170115

Table 5.1: Error norms for the AIR reconstruction.

Figure 5.3: Images reconstructed by using the AIR algorithm.


image that was reconstructed using both methods. The AIR method, if sufficiently high p and N_s were employed, always produced images at least as good as the best produced by the standard method. On a few occasions it produced images of better quality than the best achieved by the standard method, but it has not yet been determined what features of an image lend themselves to this treatment.

It must be noted, however, that the AIR reconstruction is more expensive to compute than the standard method, because the inverse two-dimensional FFT is performed on a grid of N_s² points, and N_s is typically at least twice N, which is the grid size used in the standard method. Moreover, the particular implementations used in this study were done using a highly vectorizable, high-level, special-purpose image processing language. In this language the FFTs are vectorized very well, while the anterpolation, that is, performing the multiplication [I]ᵀΨ f̂, is very much a scalar operation. One area for future work is to find an efficient method of computing the anterpolated vector.

Finally, we consider the effect of applying the extension operator X_S^N to the AIR method. Table 5.2 is a comparison of the error norms using the indicated extension factor S with the same reconstruction without the extension. It may be seen that there is a small improvement in every case. It was predicted by the theory that small extension factors may help the algorithm, but that the improvement would stall as S grew. We observe the improvement for the small factors of S. However, while the improvement is present, it is much smaller


than the theoretical 'best possible' improvement of a factor of 1/S². Since the computational cost increases linearly with S, the application of the extension to the method does not seem to be worth the cost. It remains an open question whether there are images whose reconstruction will be aided significantly by the extension method.

    (N_s, p):   (128, 1)    (256, 1)    (256, 2)    (256, 3)    (512, 1)
    S = 1       0.628624    0.403613    0.176728    0.172361    0.256288
    S = 2       0.568394    0.384226    0.169613    0.165425    0.242528
    S = 3       0.559755    0.360416    0.170727    0.164955    0.234373

Table 5.2: Error norms for the AIR reconstruction using extension factor S.


CHAPTER 6 FOURIER METHODS APPLIED TO FUNCTIONS EXPANDED IN TRANSLATIONAL BASES In this chapter a reconstruction method is introduced that is based on the Fourier transform, as in the previous chapters, but is here specifically developed for use on functions expanded in a translational basis. Translational bases have long been applied in approximation theory, as well as being fundamental to finite element methods for solving differential equations [43]. In addition, translational bases play a fundamental role in the recent development of wavelet bases ([14], [29], [41]). Such bases have been used for image processing, edge detection, and signal compression, ([17], [27], [30], [52]), but have not previously been applied to the problem of reconstruction from projections. It will be shown that expanding the image in a translational basis leads, formally, to the development of a family of algorithms, with each choice of basis functions providing a different member of the family. The translational expansion will lead to a semi-discrete version of the Central Slice theorem. Making the appropriate choice of bases will lead in either of two directions. Certain choices lead to the fully continuous Central Slice theorem, while others lead to members of a family of Discrete Central Slice theorems.


The material in this chapter is not developed to the same degree as the material of the previous chapters. No algorithms are analysed for error bounds or computational complexity, and there are no numerical experiments. As much as anything else, it should be considered to be an advertisement for future research, as there are questions raised whose answers may lead to entirely new image reconstruction methods. The central feature of the chapter, the Translation theorem, has analogs in differential and integral equations, and may have applications in those fields.

6.1 Translational Bases

A translational spanning set consists of a single function,

6.2 The Translation Theorem

We begin by establishing a specialized relative of the Central Slice theorem for functions that can be expanded in a translational basis. Recall that the Central Slice theorem relates the n-dimensional Fourier transform of the function u(x⃗) to the set of one-dimensional Fourier transforms of its projections by the relation

    û(wξ⃗) = f̂(w, ξ⃗),    (6.1)

where w⃗ = wξ⃗ ranges over Rⁿ. To develop a similar relation when the density function u is expanded in a translational basis, let x⃗ ∈ Rⁿ and let k⃗ be a multi-index, that is, k⃗ ∈ Zⁿ. Let h⃗ be an n-dimensional vector of translation step-sizes. We establish the following result:

Theorem 6.1 (Translation) Suppose that u(x⃗) = 0 for |x⃗| ≥ A, that u(x⃗) has a Radon transform given by Ru = f(p, ξ⃗), and that u(x⃗) can be expanded in a translational basis as a finite sum:

    u(x⃗) = Σ_k⃗ c_k⃗ Φ(x⃗ − k⃗h⃗).

Then the ratio of the Fourier transform of the Radon transform of u

to the Fourier transform of the Radon transform of the basis function defines a periodic function by

    Σ_k⃗ c_k⃗ e^(−iwξ⃗·(k⃗h⃗)) = f̂(wξ⃗) / ℱ[RΦ](wξ⃗),    (6.3)

provided the ratio exists.

Proof: By the linearity of the Radon transform, Ru = f may be written

    f = Σ_k⃗ c_k⃗ R{Φ(x⃗ − k⃗h⃗)}.

Both sides of this equation share the property (with projections and their Fourier transforms) that they may be treated either as functions in n-dimensional space or as a set of one-dimensional functions. They can be treated as Fourier transforms of some undetermined function, which can be found by inverse transforms. Alternatively, the left-hand side of the equation may be viewed as a Fourier series on some appropriate n-dimensional interval. Note that, while we have restricted the summation to be finite, to ensure that u(x⃗) = 0 for |x⃗| ≥ A, this is not necessary for density functions that are not compactly supported. In such cases the density function must satisfy some other condition regarding decay at infinity, and the question about interchanging the order of summation and integration must be considered.

The Translation theorem immediately gives rise to some interesting questions. It is intimately related to the Central Slice theorem, although that theorem has not been invoked in the development. An interesting observation is that the left side is a periodic function for fixed ξ⃗, although no such restriction is obvious on the right-hand side. This is a feature that is conspicuous in the application of Fourier series methods to solving non-homogeneous partial differential equations, where compatibility conditions must be imposed on the forcing functions. We are led to wonder whether compatibility conditions of some sort will be required in the reconstruction methods. It is possible to envision a family of algorithms based on this theorem, where different basis functions Φ(x⃗) are chosen. It is necessary to consider what types of spaces are spanned by different choices of Φ(x⃗), and

PAGE 254

what types of properties of
A similar structure arises in one-dimensional convolution equations. Consider the problem of finding u(x) satisfying

\int_{-\infty}^{\infty} K(x - y) u(y) \, dy = f(x).     (6.4)

Taking Fourier transforms and applying the convolution property gives \hat{K}(\omega)\hat{u}(\omega) = \hat{f}(\omega), and therefore, if \hat{K}(\omega) \ne 0, the solution may be obtained by dividing through by \hat{K} and taking an inverse Fourier transform:

u(x) = F^{-1}\{ \hat{f}(\omega) / \hat{K}(\omega) \}.

Suppose that the unknown function u(x) may be expanded in a translational basis (with translation step-size h)

u(x) = \sum_k c_k \phi(x - kh).     (6.5)

It is not required that this sum be finite, although, since a Fourier transform must be computed, we must have \lim_{k \to \infty} c_k = 0. Then equation (6.4) may be written

\int_{-\infty}^{\infty} K(x - y) \left\{ \sum_k c_k \phi(y - kh) \right\} dy = f(x),

which, upon interchanging the integration with the summation, becomes

\sum_k c_k \int_{-\infty}^{\infty} K(x - y) \phi(y - kh) \, dy = f(x).

The interchange is certainly possible if the sum is finite. For the purposes of illustration we assume that the necessary conditions are satisfied to allow the interchange of operations even if the sum is infinite. Making the change of variable z = y - kh, this equation becomes

\sum_k c_k \int_{-\infty}^{\infty} K(x - kh - z) \phi(z) \, dz = f(x),

which is a summation of convolutions:

\sum_k c_k [K * \phi](x - kh) = f(x).
Taking Fourier transforms of both sides and invoking the linearity and shift properties of the Fourier transform yields

\sum_k c_k e^{-i\omega k h} F\{K * \phi\} = \hat{f}(\omega),

which becomes, under the convolution property,

\sum_k c_k e^{-i\omega k h} \hat{K}(\omega)\hat{\phi}(\omega) = \hat{f}(\omega).

If \hat{K}(\omega) and \hat{\phi}(\omega) are bounded away from zero, this is the analog to the Translation theorem:

\sum_k c_k e^{-i\omega k h} = \hat{f}(\omega) / [\hat{K}(\omega)\hat{\phi}(\omega)].     (6.6)

Equation (6.6) is the basis for a solution method. The left-hand side is the Fourier series expansion of a periodic function, with period 2\pi/h. Finding the coefficients c_k may be done by the usual method for Fourier series. The solution can then be constructed from equation (6.5).

A very simple example can be constructed to illustrate this solution method. Suppose we wish to solve the convolution equation

\int_{-\infty}^{\infty} \delta(x - y) u(y) \, dy = \Lambda(x/2).

The solution to this integral equation is the function \Lambda(x/2), which may be obtained immediately upon invoking the integration property of the delta distribution. Assuming that the unknown function can be expanded in a translational basis using \phi(x) = \Lambda(x) and a translation step-size h = 1, equation (6.6) becomes

\sum_k c_k e^{-i\omega k} = F[\Lambda(x/2)] / \{ F[\delta(x)] \, F[\Lambda(x)] \}.
Since F[\Lambda(x)] = \mathrm{sinc}^2(\omega/2) and F[\delta(x)] = 1, and the Fourier scaling property gives F[\Lambda(x/2)] = 2\,\mathrm{sinc}^2(\omega), a bit of trigonometry reduces (6.6) to

\sum_k c_k e^{-i\omega k} = 2\cos^2(\omega/2).

This is simply the Fourier series expansion of 2\cos^2(\omega/2), expanded as a 2\pi-periodic function. The right-hand side is even, so that the Fourier coefficients c_k may be found by evaluating

c_k = \frac{1}{\pi} \int_0^{\pi} 2\cos^2(\omega/2) \cos(\omega k) \, d\omega,

yielding the coefficients c_{-1} = c_1 = 1/2 and c_0 = 1, with all other coefficients being zero. The solution u(x) can be recovered by

u(x) = \frac{1}{2}\Lambda(x+1) + \Lambda(x) + \frac{1}{2}\Lambda(x-1),

which indeed gives the solution \Lambda(x/2).

Another example of a problem that has a property like that of the Translation theorem is the differential equation u''(x) + \alpha^2 u(x) = f(x). Once again it is assumed that u(x) = \sum_k c_k \phi(x - kh), with h a fixed translation step-size. Invoking linearity and taking Fourier transforms yields

\sum_k c_k e^{-i\omega k h} F[\phi''] + \alpha^2 \sum_k c_k e^{-i\omega k h} F[\phi] = \hat{f}(\omega).
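The triangle-function example above lends itself to a direct numerical check. The following sketch (not part of the thesis; it assumes NumPy and the non-normalized transform convention used above) computes the Fourier coefficients of 2cos^2(w/2) with a simple trapezoidal rule and verifies that the resulting combination of translated triangle functions reproduces Lambda(x/2):

```python
import numpy as np

def tri(x):
    """Triangle function Lambda(x) = max(0, 1 - |x|)."""
    return np.maximum(0.0, 1.0 - np.abs(x))

# Fourier coefficients c_k = (1/pi) * int_0^pi 2 cos^2(w/2) cos(k w) dw,
# approximated with a trapezoidal rule.
w = np.linspace(0.0, np.pi, 20001)
h = w[1] - w[0]

def coeff(k):
    g = 2.0 * np.cos(w / 2.0) ** 2 * np.cos(k * w)
    return h * (0.5 * g[0] + g[1:-1].sum() + 0.5 * g[-1]) / np.pi

c = {k: coeff(k) for k in range(-2, 3)}   # c_0 = 1, c_{+-1} = 1/2, others 0

# Synthesize u(x) = sum_k c_k Lambda(x - k) and compare with Lambda(x/2).
x = np.linspace(-3.0, 3.0, 1201)
u = sum(ck * tri(x - k) for k, ck in c.items())
err = np.max(np.abs(u - tri(x / 2.0)))
print(err)   # essentially zero
```

With coefficients c_{-1} = c_1 = 1/2 and c_0 = 1, the synthesized u matches Lambda(x/2) to roundoff, in agreement with the closed-form solution above.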
Applying the Fourier transform property F[\phi''] = -\omega^2 F[\phi], collecting terms, and dividing through gives the translation property

\sum_k c_k e^{-i\omega k h} = \hat{f}(\omega) / [(\alpha^2 - \omega^2)\hat{\phi}(\omega)].

Again, this reduces to finding a set of Fourier coefficients, which may then be used to synthesize the solution u(x). Both of the examples given, like the Radon reconstruction problem, have nice theoretical solution methods based on the Fourier transform, and, also like the Radon problem, there are difficulties in the practical implementation of the methods.

6.3 The Translation Theorem in Two Dimensions

As in the previous chapters, we are interested in the case commonly encountered in x-ray tomography, in which \vec{x} \in R^2. The unknown function is then u(x,y), and the goal is to solve Ru = f. Assume that u may be expanded in a translational basis:

u(x,y) = \sum_{m=-N/2}^{N/2} \sum_{n=-N/2}^{N/2} c_{mn} \Phi(x - m\Delta x, y - n\Delta y).     (6.7)

Taking the Radon transform of u, and using the Radon shift property, leads to

\sum_{m=-N/2}^{N/2} \sum_{n=-N/2}^{N/2} c_{mn} [R\Phi](p - m\Delta x \cos\phi - n\Delta y \sin\phi, \phi) = f(p, \phi).

Computing the Fourier transforms of each side and applying the Fourier transform
shift property leads to the expression

\sum_m \sum_n c_{mn} e^{-i\omega(m\Delta x \cos\phi + n\Delta y \sin\phi)} \, \widehat{R\Phi}(\omega, \phi) = \hat{f}(\omega, \phi).

If \hat{f}(\omega, \phi) vanishes faster than \widehat{R\Phi}(\omega, \phi), wherever the latter vanishes, then it is permissible to divide both sides by \widehat{R\Phi}(\omega, \phi), yielding the two-dimensional statement of the Translation theorem,

\sum_m \sum_n c_{mn} e^{-i\omega(m\Delta x \cos\phi + n\Delta y \sin\phi)} = \hat{f}(\omega, \phi) / \widehat{R\Phi}(\omega, \phi).     (6.8)

Making the substitutions \omega_x = \omega\cos\phi, \omega_y = \omega\sin\phi, L_x = 1/\Delta x, and L_y = 1/\Delta y in the left-hand side, equation (6.8) may be written

\sum_m \sum_n c_{mn} e^{-i[(m\omega_x/L_x) + (n\omega_y/L_y)]} = \frac{\hat{f}(\omega,\phi)}{\widehat{R\Phi}(\omega,\phi)}.

This is the Fourier series of the function [\hat{f}(\omega,\phi)]/[\widehat{R\Phi}(\omega,\phi)], expanded as a periodic function of \omega_x and \omega_y, with periods 2\pi L_x and 2\pi L_y, respectively. Furthermore, since the sums are finite, the series converges. The coefficients can therefore be found from

c_{mn} = \frac{1}{4\pi^2 L_x L_y} \int_{-\pi L_y}^{\pi L_y} \int_{-\pi L_x}^{\pi L_x} \frac{\hat{f}(\omega,\phi)}{\widehat{R\Phi}(\omega,\phi)} \, e^{i[(m\omega_x/L_x) + (n\omega_y/L_y)]} \, d\omega_x \, d\omega_y.     (6.9)

The integral on the right side of this equation has the form of a function of polar coordinates integrated against a kernel in Cartesian coordinates, so that we are immediately led to the same problem, for translational bases, that we faced in the previous chapters. The obvious question to ask is: How may the integral in (6.9) be evaluated? When the function [\hat{f}(\omega,\phi)]/[\widehat{R\Phi}(\omega,\phi)] is discretized it will
be specified on a polar coordinate grid, while the kernel e^{i[(m\omega_x/L_x) + (n\omega_y/L_y)]} is naturally specified on a Cartesian grid. Certainly the methods of Chapters 4 and 5 are relevant: the function [\hat{f}(\omega,\phi)]/[\widehat{R\Phi}(\omega,\phi)] could be interpolated to the Cartesian grid, as in Chapter 4, or conversely, the kernel could be interpolated to the polar grid, whereupon the approximate integration becomes a formal DFT, leading to a method like the AIR algorithm. With a bit more analysis, however, something of a unification between the continuous and the discrete methods can be achieved.

6.4 Expanding Both Sides of Ru = f in Translational Bases

We consider the case in which both the image u(x,y) and its Radon transform f(p,\phi) are expanded in translational bases. Let the expansions be

u(x,y) = \sum_m \sum_n c_{mn} \Phi(x - m\Delta x, y - n\Delta y),
f(p,\phi) = \sum_k f_k(\phi) \Psi(p - k\Delta p).     (6.10)

Computing the one-dimensional Fourier transform with respect to p and applying the Fourier shift property leads to

\sum_m \sum_n c_{mn} e^{-i\omega(m\Delta x \cos\phi + n\Delta y \sin\phi)} \, \widehat{R\Phi}(\omega,\phi) = \sum_k f_k(\phi) e^{-i\omega k \Delta p} \hat{\Psi}(\omega).     (6.11)

Making the assumption that the division by \widehat{R\Phi}(\omega,\phi) may be justified, we can write this as

\sum_m \sum_n c_{mn} e^{-i\omega(m\Delta x \cos\phi + n\Delta y \sin\phi)} = \frac{\sum_k f_k(\phi) e^{-i\omega k \Delta p} \hat{\Psi}(\omega)}{\widehat{R\Phi}(\omega,\phi)}.
Once again making the substitutions \omega_x = \omega\cos\phi and \omega_y = \omega\sin\phi, this becomes

\sum_m \sum_n c_{mn} e^{-i(m\Delta x\,\omega_x + n\Delta y\,\omega_y)} = \frac{\sum_k f_k(\phi) e^{-i\omega k \Delta p} \hat{\Psi}(\omega)}{\widehat{R\Phi}(\omega,\phi)},     (6.12)

and once again we have arrived at the place where the interpolation question arises.

At this juncture we make the following observation: Suppose that we assume the image consists of a grid of discrete values translated by multiples of \Delta x = \Delta y = 2/N. That is, we assume \Phi(x,y) = \delta(x,y) and that

u(x,y) = \sum_m \sum_n c_{mn} \, \delta(x - 2m/N, \, y - 2n/N).

It is not difficult to show [16] that if the Radon transform is computed of all members of a sequence that converges to a delta distribution, the sequence of Radon transforms itself converges to a delta distribution. That is, [R\delta(x,y)](p,\phi) = \delta(p). From Fourier transform theory we know that F_2\{\delta(x,y)\} = 1. Therefore (6.12) becomes

\sum_{m=-N/2}^{N/2} \sum_{n=-N/2}^{N/2} c_{mn} e^{-i2(m\omega_x + n\omega_y)/N} = \sum_k f_k(\phi) e^{-i\omega k \Delta p} \hat{\Psi}(\omega).

If, in addition, it is assumed that the Radon transform f(p,\phi) is also a purely discrete function, by using \Delta p = 2/N and \Psi(p) = \delta(p), so that

f(p,\phi) = \sum_k f_k(\phi) \, \delta(p - k\Delta p),
then \hat{\Psi}(\omega) = 1 as well. Hence (6.12) is now

\sum_{m=-N/2}^{N/2} \sum_{n=-N/2}^{N/2} c_{mn} e^{-i2(m\omega_x + n\omega_y)/N} = \sum_k f_k(\phi) e^{-i2\omega k/N}.     (6.13)

The left-hand side becomes a two-dimensional DFT if \omega_x = l\pi and \omega_y = j\pi for l, j = -N/2+1, \ldots, N/2. Similarly, if \omega = s\pi for s = -N/2, \ldots, N/2, then the right side becomes a DFT. In some sense, then, (6.13) is a discrete version of the Central Slice theorem.

It should be noted that the conditions just specified for \omega_x, \omega_y, and \omega are incompatible. If both sides of (6.13) are given on regular grids then it is impossible to avoid the interpolation problem that plagues the standard method and the AIR. If, however, the grid on at least one side of equation (6.13) is not required to be uniform, then the equation becomes an equality between two formal DFTs (one of which can be a DFT). In that sense, this equation is a discrete Central Slice theorem, relating image space to projection space through the formal DFT.

In fact, it is this formulation that leads to the continuous Central Slice theorem. Let the image be defined as values on the N^2-point regular grid, given by using \delta(x,y) as the translational basis. Let the projections be given on an irregular grid by

f(p,\phi) = \sum_k f_k(\phi) \, \delta(p - p_k(\phi)),

where k is a multi-index (m,n) and p_k = 2(m\cos\phi + n\sin\phi)/N. In this case the summations in (6.13) are Riemann sums. Letting N \to \infty and noting that c_{mn} = u(2m/N, 2n/N), the sum on the left side of (6.13) formally converges to a
multiple of \hat{u}(\omega_x, \omega_y), while the right side converges to a multiple of \hat{f}(\omega,\phi), and the continuous Central Slice theorem is recovered.

It may be noted that making precisely the incompatible selections \omega_x = l\pi, \omega_y = j\pi, and \omega = s\pi leads exactly to the standard method of Chapter 4. In that case the left side of (6.13) is

\sum_{m=-N/2}^{N/2} \sum_{n=-N/2}^{N/2} c_{mn} e^{-i2\pi(ml + nj)/N},

or N^2 times the two-dimensional DFT of the image space data u(x,y), while the right side is

\sum_k f_k(\phi) e^{-i2\pi ks/N},

a one-dimensional DFT of the projection data. To actually compute a solution from this equation an interpolation is required, and this becomes the problem of Chapter 4.

6.5 A Family of Discrete Central Slice Theorems

Perhaps the most intriguing of the relationships that have been developed here is that of equation (6.11), which is

\sum_m \sum_n c_{mn} e^{-i\omega(m\Delta x \cos\phi + n\Delta y \sin\phi)} \, \widehat{R\Phi}(\omega,\phi) = \sum_k f_k(\phi) e^{-i\omega k \Delta p} \hat{\Psi}(\omega).

This equation, like the Central Slice theorem, gives the relationship between the two-dimensional Fourier representation of the image and the set of one-dimensional Fourier representations of its projections. This equation may be thought of as a family of Central Slice theorems, with a specific version for every
combination of choices for \Phi(x,y) and \Psi(p). With this in mind, we can recast the problem as a problem in approximation. A translational basis (or spanning set) function \Phi(x,y) is selected for approximating image space functions, and a function \Psi(p) is selected for approximating projection space functions. Two questions must be answered. Can (6.11) be solved for the coefficients c_{mn}? For some choices of bases this may not be a problem, while for other choices it may be impossible. If it can be solved, then the next question that must be addressed is: What functions can be approximated by the bases that have been selected, and how well can they be approximated? We close this chapter by observing that years of (hopefully productive) research could be spent in seeking answers to these two simple questions.
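The discrete Central Slice relationship of Section 6.4 can be observed exactly in the simplest setting. In the sketch below (not from the thesis; it assumes NumPy), the projection for the angle phi = 0 is taken to be the column sums of a discrete image; the one-dimensional DFT of that projection coincides, to roundoff, with the omega_y = 0 slice of the two-dimensional DFT of the image:

```python
import numpy as np

rng = np.random.default_rng(0)
u = rng.standard_normal((64, 64))      # u[y, x]: an arbitrary discrete "image"

projection = u.sum(axis=0)             # discrete line integrals along y (phi = 0)
central_slice = np.fft.fft2(u)[0, :]   # the omega_y = 0 row of the 2-D DFT

err = np.max(np.abs(np.fft.fft(projection) - central_slice))
print(err)   # roundoff level
```

For other projection angles the sample points of the one-dimensional transform fall off the Cartesian frequency grid, which is precisely the interpolation problem discussed above.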
CHAPTER 7

CONCLUSIONS AND OPEN QUESTIONS

This work was originally begun with the goal of creating a multigrid image reconstruction algorithm from the idea that

{coarse grid} <-> {fine grid},

where the notation {.} <-> {.} indicates a discrete Fourier transform pair. Preliminary work [8] and related work by others ([21], [50]) provided encouragement that the path would be productive. Unfortunately, the long-sought multigrid reconstruction method remains elusive. Multi-level algorithms were constructed; however, they have not proved advantageous. Local-grid refinement algorithms were also constructed, but they, too, have not proved advantageous. Yet the path has been anything but unproductive; it simply led in unexpected directions. It turned out to be a tortuous path, leading through many diverse fields: approximation theory, numerical quadrature and cubature, multigrid, integral and differential equations, Sobolev spaces and functional analysis, and most of all, returning again and again to the beautiful realm of Fourier analysis.

7.1 Summary

The main contributions of this work we see as being the following:
- The development of the ADFT in one and two dimensions and the AIR algorithm, along with the associated analysis of the optimal parameter question and the errors in the approximation.

- The detailed analysis of the behavior of the standard method, especially the error analysis of the algorithm viewed as a cascade of operators.

- The systematic analysis of the extension operator for its real and potential effect and its computational cost.

- The introduction of the Translation property and its relation to other problems.

- The elaboration of the error analysis and the uniform bounds for the error in using the DFT to approximate the FT.

A few comments are in order about each of these points.

It is thought that the ADFT probably has applications, in fact more important and effective applications, in other areas. For the image reconstruction question it is not likely, without a major breakthrough, ever to become preferred over other methods. The images are fine, but the cost is high. In that sense the image reconstruction problem is a 'carrier', providing the opportunity to develop, implement, and analyse a method that will find better use elsewhere. It is interesting that the starting point, the retention of the Fourier integral in polar coordinates instead of interpolating to Cartesian coordinates, has also led to other
algorithms. The most widely used algorithm for Computer Aided Tomography, filtered backprojection, is developed in [16] from the equation we give in (4.3), which is the same equation that led us to the AIR algorithm.

The analysis of the standard method was originally intended to provide material for comparison with entirely new algorithms. It turns out that error analysis for image reconstruction is normally done in the setting of signal processing rather than numerical analysis, or at the other extreme in the setting of Sobolev spaces [32], and that a rigorous analysis had not been done for a reconstruction method using Lagrangian interpolation. In fact, we have not seen a discussion of the general Lagrangian interpolation operator for the image reconstruction problem at all, although it is extremely unlikely that it has not been used.

During the implementation and analysis we were led to consider the extension operator. Again, while the idea is not original, the specific implementation and all the analysis is, and led to a significant improvement in the way the operator is used.

In the long run it is the Translation property that may eventually be the most significant contribution of this thesis, although it is scarcely developed here. It may well lead to entirely new methods, and to customized families of methods, as the choices of basis functions are explored.

By far the most surprising turn of the path came in the area of the error analysis of the DFT when it is used as an approximation to the FT of a compactly supported function. It was anticipated, once the need for the uniform bounds became apparent, that half an hour in the library would suffice to look them up.
Many visits to the library later, they still had not been found, and thus began our exploration into Fourier analysis. In many ways this proved the most rewarding part of the journey, as much time and effort were spent in classical mathematics. The contributions we have made in this area are in the nature of filling some very specific gaps left by previous workers. The literature of Fourier transforms is so vast that we have, and can have, no assurance that they are indeed original.

7.2 Open Questions

A number of open questions remain. Some have already been pointed out, such as the questions about the existence of images for which the theoretical improvement of the extension operator is achieved. The development of the ADFT in one and more dimensions is a process that will likely continue for some time, as will the identification and exploitation of the properties of the ADFT.

Several other questions were opened in the course of the research, some of which have not even been mentioned in the dissertation. For example, as part of the examination of the error of the DFT we 'discovered' that the use of Simpson's rule to create a higher order DFT led directly to a formula remarkably like the FFT. We also discovered that implementing the formula led to improved approximation for the low frequencies and O(1) errors on the high frequencies, a phenomenon for which a full explanation is lacking.

The open questions about the translational basis methods have already been noted, but there is one aspect that we haven't mentioned, and which holds some
promise. This is the relationship to wavelets, families of basis functions that are both translational and dilational. Essentially, a basis for functions in L^2 can be constructed using the translations and dilations of a single function. A tremendous interest in wavelets has been generated recently. The connection with the image reconstruction problem is this: there are functions of two dimensions that are the products of identical one-dimensional functions of x and y, that is, \Phi(x,y) = \psi(x)\psi(y), with the property that the Radon transform R\Phi(p,\phi), for a fixed projection angle, is a scaled copy of \psi. That is,

[R\psi(x)\psi(y)](p,\phi) = \beta\,\psi(\alpha p).

Used as a translational basis for the image space, the shift theorem implies that we obtain a basis (or at least a spanning set) for projection space consisting of scaled and translated copies of the single function \psi(p): a wavelet basis.

That such functions exist is easy to see. Let the image space basis be the product of sinc functions in x and y. The two-dimensional Fourier transform is the characteristic function of a square, and the Central Slice theorem says that a slice through this function passing through the origin is the Fourier transform of the projection of the original function. But any slice through the characteristic function of a square is the characteristic function of an interval, and its inverse Fourier transform is a sinc function. Thus the product of sinc functions in x and y has a Radon transform that is a sinc function in p. There are others.

Finally, of course, there is the open question: Can the reciprocity relation be
used to develop a multigrid or multigrid-like image reconstruction algorithm? We have not given up on this, and in fact there are some ideas that have developed recently about combining a frequency domain local grid refinement (which can be produced with variations of the extension operator) with the approximate cubature of the ADFT through the AIR algorithm, that may lead to a competitive algorithm.
Appendix A

CUBATURE IN POLAR COORDINATES

The literature of numerical quadrature and cubature is extensive. Among the most comprehensive references are [15], [18], and [44]. Each region over which an integration is required, and each set of data points in such a region, seems to present a new set of problems, leading to a proliferation of integration formulae. Accordingly, such a formula is developed here, for the problem of approximating the integration of a function of polar coordinates over a region:

\int_A^B \int_{\theta_1}^{\theta_2} g(r,\phi) \, r \, dr \, d\phi.

The approach will be identical to the method that leads to the Newton-Cotes formulae in Cartesian coordinates. That is, the region of integration will be divided into regular elementary areas. The integrand will then be approximated on the elementary area by a Lagrangian polynomial of some predetermined order. This polynomial is then integrated exactly, giving a cubature formula for the elementary area. The formulae for all such areas are then pieced together into a composite, or compound, integration formula.

Typically, two-dimensional formulae such as are needed here are developed for polynomial approximations in x and y. However, we require that the formula be specifically for data on a polar grid. Therefore, the polynomial will be a tensor
product of a polynomial in the radial variable, P(r), and a polynomial in the arclength, P(r\phi), along an arc a distance r from the origin. The selection of polynomial approximation in arclength does not seem particularly natural, especially as the integration will be around the entire arc for each r. There are, however, certain advantages to this selection for the image reconstruction problem. Since the integration in (5.2) is over the entire disk, the function \hat{f}(\omega,\phi) is necessarily 2\pi-periodic in \phi. Thus we need to develop a formula to integrate over a disk in the (r,\phi) plane. The polynomial in question will be linear in r. In fact, it will turn out that because of this choice the factor r will drop out of the integration with respect to \phi, and in that direction the quadrature will reduce to the trapezoidal rule. As is well known ([15], [18]), the trapezoidal rule has superconvergence properties when applied to periodic functions.

Suppose the disk of radius R is divided into M pie-shaped sections by radii spaced at a regular angular interval \Delta\phi = 2\pi/M. Let each radius be divided into N equal intervals of length \Delta r = R/N. Let r_k = k\Delta r and \phi_j = j\Delta\phi for k = 0, 1, \ldots, N and j = 0, 1, \ldots, M-1. By connecting the points (r_k, \phi_j) and (r_k, \phi_{j+1}), for all j, with arcs of radius r_k for all k, a regular grid is superimposed on the disk.

Consider the approximation of a polar function g(r,\phi) in an elementary area defined by r_k \le r \le r_{k+1} and \phi_j \le \phi \le \phi_{j+1}. In this elementary area, g(r,\phi) may be approximated using bilinear Lagrangian polynomial interpolation:

g(r,\phi) \approx \left(\frac{r - r_{k+1}}{r_k - r_{k+1}}\right)\left(\frac{\phi - \phi_{j+1}}{\phi_j - \phi_{j+1}}\right) g_{k,j}
 + \left(\frac{r - r_{k+1}}{r_k - r_{k+1}}\right)\left(\frac{\phi - \phi_j}{\phi_{j+1} - \phi_j}\right) g_{k,j+1}
 + \left(\frac{r - r_k}{r_{k+1} - r_k}\right)\left(\frac{\phi - \phi_{j+1}}{\phi_j - \phi_{j+1}}\right) g_{k+1,j}
 + \left(\frac{r - r_k}{r_{k+1} - r_k}\right)\left(\frac{\phi - \phi_j}{\phi_{j+1} - \phi_j}\right) g_{k+1,j+1},

where g_{k,j} = g(r_k, \phi_j).
Integrating this expression over the elementary area and performing algebraic simplification leads to the cubature formula

\int_{\phi_j}^{\phi_{j+1}} \int_{r_k}^{r_{k+1}} g(r,\phi) \, r \, dr \, d\phi \approx \frac{(\Delta r)^2 \Delta\phi}{12} \left[ (3k+1)(g_{k,j} + g_{k,j+1}) + (3k+2)(g_{k+1,j} + g_{k+1,j+1}) \right].     (A.1)

Having determined a cubature formula for the elementary area, it is a simple task to piece together a composite rule. If the integral to be approximated is

\int_A^B \int_{\theta_1}^{\theta_2} g(r,\phi) \, r \, dr \, d\phi,

divide the interval [A,B] into N sub-intervals by r_k = A + k\Delta r, with \Delta r = (B-A)/N, and [\theta_1, \theta_2] into M sub-intervals by \phi_j = \theta_1 + j\Delta\phi, with \Delta\phi = (\theta_2 - \theta_1)/M. Letting g(r_k, \phi_j) be designated g_{k,j}, summing the cubature over all the area elements yields the composite rule

\int_A^B \int_{\theta_1}^{\theta_2} g(r,\phi) \, r \, dr \, d\phi \approx \frac{(\Delta r)^2 \Delta\phi}{12} \sum_{k=0}^{N-1} \sum_{j=0}^{M-1} \left[ (3k+1)(g_{k,j} + g_{k,j+1}) + (3k+2)(g_{k+1,j} + g_{k+1,j+1}) \right].     (A.2)

One question that arises is whether the formula must be modified in the event that A = 0. In this case (r_0, \phi_j) represents the origin, and is the same point for all j. Integrating such an elementary area yields

\int_{\phi_j}^{\phi_{j+1}} \int_0^{\Delta r} g(r,\phi) \, r \, dr \, d\phi \approx \frac{(\Delta r)^2 \Delta\phi}{12} \left[ (g_{0,j} + g_{0,j+1}) + 2(g_{1,j} + g_{1,j+1}) \right].
This is the same, for k = 0, as (A.1), provided that it is recognized that g_{0,j} is the same value for all j.

If the integration is to be performed over the entire disk, then (A.2) can be simplified somewhat. In that case \theta_2 = \theta_1 + 2\pi, so that g_{k,0} = g_{k,M} for all k. Then the right-hand side of (A.2) can be written

\frac{(\Delta r)^2 \Delta\phi}{12} \sum_{k=0}^{N-1} 2 \sum_{j=0}^{M-1} \left[ (3k+1) g_{k,j} + (3k+2) g_{k+1,j} \right],

and the sum over k can also be simplified by combining terms to yield the composite cubature formula for the disk of radius A:

\int_0^{2\pi} \int_0^A g(r,\phi) \, r \, dr \, d\phi \approx \frac{(\Delta r)^2 \Delta\phi}{6} \sum_{j=0}^{M-1} \left[ g_{0,j} + 6 \sum_{k=1}^{N-1} k\, g_{k,j} + (3N-1) g_{N,j} \right].     (A.3)

It is often more convenient to integrate over the disk by

\int_0^{\pi} \int_{-A}^{A} g(r,\phi) \, r \, dr \, d\phi.

In this case the data are indexed by k and j as g(k\Delta r, j\Delta\phi) for k = -N/2, -N/2+1, \ldots, N/2 and j = 0, 1, \ldots, M-1, where now \Delta\phi = \pi/M and \Delta r = 2A/N. Then the cubature formula in (A.3) becomes

\frac{(\Delta r)^2 \Delta\phi}{6} \sum_{j=0}^{M-1} \left[ 2 g_{0,j} + \left(\frac{3N}{2} - 1\right)(g_{-N/2,j} + g_{N/2,j}) + 6 \sum_{k=1}^{N/2-1} k\,(g_{-k,j} + g_{k,j}) \right],     (A.4)

and this is the form that will be useful in the image reconstruction problem. One final simplification will be helpful, and that is to write the cubature of g on the
disk as

\int_0^{\pi} \int_{-A}^{A} g(r,\phi) \, r \, dr \, d\phi \approx \sum_{j=0}^{M-1} \sum_{k=-N/2}^{N/2} \tilde{W}(k) \, g_{k,j},

where

\tilde{W}(k) = \frac{(\Delta r)^2 \Delta\phi}{6} \times \begin{cases} 2 & \text{for } k = 0, \\ 6|k| & \text{for } 0 < |k| < N/2, \\ (3N/2) - 1 & \text{for } |k| = N/2. \end{cases}

This formula is the one used in Chapter 5. Among its properties are those familiar from the trapezoidal rule. If the integrand is, in fact, a polynomial of degree less than or equal to 1 in either or both of the radial and angular directions, the approximation is exact. It is shown in Chapter 5 that the error of this cubature formula looks very much like that of the trapezoidal rule, which is as we expect.
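A minimal implementation of this composite rule, using the weights above, can be sanity-checked on an integrand for which the rule is exact. The sketch below (not from the thesis; it assumes NumPy, and the function and parameter names are our own) integrates g = 1 over the unit disk, for which the rule reproduces pi up to roundoff:

```python
import numpy as np

def polar_disk_cubature(g, A, N, M):
    """Approximate int_0^pi int_{-A}^{A} g(r, phi) r dr dphi over the disk of
    radius A, with data g(k*dr, j*dphi) for k = -N/2..N/2, j = 0..M-1,
    dr = 2A/N, dphi = pi/M, and the weights W(k) of (A.4)."""
    dr, dphi = 2.0 * A / N, np.pi / M
    k = np.arange(-N // 2, N // 2 + 1)
    w = np.where(k == 0, 2.0, 6.0 * np.abs(k)).astype(float)
    w[0] = w[-1] = 1.5 * N - 1.0                  # the |k| = N/2 endpoints
    w *= dr ** 2 * dphi / 6.0
    r = k * dr
    phi = np.arange(M) * dphi
    R, PHI = np.meshgrid(r, phi, indexing="ij")   # (N+1) x M grid of samples
    return float(np.sum(w[:, None] * g(R, PHI)))

# g = 1 is linear in both r and phi, so the rule is exact: the integral is pi*A^2.
val = polar_disk_cubature(lambda r, phi: np.ones_like(r), A=1.0, N=16, M=16)
err = abs(val - np.pi)
print(val, err)   # pi, roundoff-level error
```

For integrands that are not linear in r or phi the rule is no longer exact, and, as noted above, its error behaves like that of the trapezoidal rule.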
BIBLIOGRAPHY

[1] J. ARSAC, Fourier Transforms and the Theory of Distributions, Prentice-Hall, Inc., Englewood Cliffs, NJ, 1966.

[2] D. H. BAILEY AND P. N. SWARZTRAUBER, The fractional Fourier transform and applications. Submitted for publication.

[3] A. BRANDT. Private communication, 1990.

[4] --, Multilevel computations of integral transforms and particle interactions with oscillatory kernels, in Proceedings of the IMACS 1st International Conference on Computational Physics, 1990.

[5] A. BRANDT AND A. A. LUBRECHT, Multilevel matrix multiplication and fast solution of integral equations, Journal of Computational Physics, 90 (1990).

[6] W. L. BRIGGS, Further symmetries of in-place FFTs, SIAM Journal on Scientific and Statistical Computing, 8 (1987), pp. 644-655.

[7] --, A Multigrid Tutorial, Society for Industrial and Applied Mathematics, Philadelphia, 1987.

[8] W. L. BRIGGS AND V. E. HENSON, The FFT as a multigrid algorithm, SIAM Review, 32 (1990), pp. 252-261.

[9] H. S. CARSLAW, Introduction to the Theory of Fourier's Series and Integrals, Dover Publications, third ed., 1930.

[10] J. F. CLAERBOUT, Fundamentals of Geophysical Data Processing, McGraw-Hill, New York, NY, 1976.

[11] J. W. COOLEY, P. A. W. LEWIS, AND P. D. WELCH, The fast Fourier transform algorithm: considerations in the calculation of sine, cosine, and Laplace transforms, Journal of Sound and Vibration, 12 (1970), pp. 27-34.

[12] J. W. COOLEY AND J. W. TUKEY, An algorithm for the machine calculation of complex Fourier series, Math. Comp., 19 (1965), pp. 297-301.

[13] G. DAHLQUIST AND A. BJÖRCK, Numerical Methods, Prentice-Hall, Englewood Cliffs, NJ, 1974.
[14] I. DAUBECHIES, Orthonormal bases of compactly supported wavelets, Comm. Pure Appl. Math., 41 (1988), pp. 909-996.

[15] P. J. DAVIS AND P. RABINOWITZ, Methods of Numerical Integration, Academic Press, New York, NY, second ed., 1984.

[16] S. R. DEANS, The Radon Transform and Some of its Applications, John Wiley and Sons, New York, NY, 1983.

[17] R. A. DEVORE, B. JAWERTH, AND V. POPOV, Compression of wavelet decompositions. In press.

[18] H. ENGELS, Numerical Quadrature and Cubature, Academic Press, New York, NY, 1980.

[19] I. M. GEL'FAND, G. E. SHILOV, AND N. Y. VILENKIN, Generalized Functions, vol. 5, Academic Press, New York, NY, 1966.

[20] M. GENTLEMAN AND G. SANDE, Fast Fourier transforms - for fun and profit, in AFIPS Proceedings of the 1966 Fall Joint Computer Conference, Washington, D.C., 1966, Spartan, pp. 563-578.

[21] M. A. HARE, Multilevel discrete Fourier transforms, Master's thesis, University of Colorado at Denver, 1989.

[22] V. E. HENSON, Parallel compact symmetric fast Fourier transforms, Master's thesis, University of Colorado at Denver, 1988.

[23] G. T. HERMAN, Image Reconstruction from Projections, Academic Press, Orlando, FL, 1980.

[24] E. ISAACSON AND H. B. KELLER, Analysis of Numerical Methods, John Wiley and Sons, New York, NY, 1966.

[25] A. J. JERRI, The Shannon sampling theorem - its various extensions and applications, a tutorial review, Proceedings of the IEEE, 65 (1977), pp. 1565-1596.

[26] C. JOHNSON, Numerical Solution of Partial Differential Equations by the Finite Element Method, Cambridge University Press, Cambridge, 1987.

[27] W. M. LAWTON AND W. ZETTLER, Wavelets and image coding, in Sixth IEEE ASSP Multidimensional Signal Processing Workshop, Asilomar, CA, 1989. Poster session.

[28] R. M. LEWITT, Reconstruction algorithms: transform methods, Proceedings of the IEEE, 71 (1983), pp. 390-408.
[29] S. MALLAT, Multiresolution approximations and wavelet orthonormal bases of L^2(R), Trans. Amer. Math. Soc., 315 (1989), pp. 69-87.

[30] --, A theory for multiresolution signal decomposition: the wavelet representation, IEEE Trans. Pattern Anal. Machine Intell., 11 (1989), pp. 674-693.

[31] S. F. MCCORMICK, Multilevel Adaptive Methods for Partial Differential Equations, Society for Industrial and Applied Mathematics, Philadelphia, PA, 1989.

[32] F. NATTERER, A Sobolev space analysis of picture reconstruction, SIAM Journal on Applied Mathematics, 39 (1980), pp. 402-410.

[33] --, Fourier reconstruction in tomography, Numerische Mathematik, 47 (1985), pp. 343-353.

[34] --, Efficient evaluation of oversampled functions, Journal of Computational and Applied Mathematics, 14 (1986), pp. 303-309.

[35] --, The Mathematics of Computerized Tomography, John Wiley and Sons, New York, NY, 1986.

[36] J. RADON, Über die Bestimmung von Funktionen durch ihre Integralwerte längs gewisser Mannigfaltigkeiten, Berichte Sächsische Akademie der Wissenschaften, Leipzig, Math.-Phys. Kl., 69 (1917), pp. 262-267.

[37] T. J. RIVLIN, An Introduction to the Approximation of Functions, Dover Publications, Inc., New York, NY, 1969.

[38] E. A. ROBINSON AND S. TREITEL, Geophysical Signal Analysis, Prentice-Hall, Englewood Cliffs, NJ, 1980.

[39] H. STARK, J. W. WOODS, I. PAUL, AND R. HINGORANI, Direct Fourier reconstruction in computer tomography, IEEE Transactions on Acoustics, Speech, and Signal Processing, ASSP-29 (1981), pp. 237-244.

[40] --, An investigation of computerized tomography by direct Fourier reconstruction and optimum interpolation, IEEE Transactions on Biomedical Engineering, BME-28 (1981), pp. 496-505.

[41] G. STRANG, Wavelets and dilation equations: a brief introduction, SIAM Review, 31 (1989), pp. 614-627.

[42] G. STRANG AND G. J. FIX, A Fourier analysis of the finite element variational method, in Centro Internazionale Matematico Estivo, 1971.
[43] --, An Analysis of the Finite Element Method, Prentice-Hall, Englewood Cliffs, NJ, 1973.

[44] A. H. STROUD, Approximate Calculation of Multiple Integrals, Prentice-Hall, Englewood Cliffs, NJ, 1971.

[45] P. N. SWARZTRAUBER, FFTPACK, a fast Fourier transform software package. Available without charge from NANET.

[46] --, FFT algorithms for vector computers, Parallel Computing, 1 (1984), pp. 45-63.

[47] --, Symmetric FFTs, Mathematics of Computation, 47 (1986), pp. 323-346.

[48] G. P. TOLSTOV, Fourier Series, Dover Publications, Inc., New York, NY, 1962. Translated from the Russian by Richard A. Silverman.

[49] J. S. WALKER, Fourier Analysis, Oxford University Press, New York, NY, 1988.

[50] T. YOSSEF, Bi-level properties of Fourier transforms, Master's thesis, Weizmann Institute of Science, 1988.

[51] D. M. YOUNG AND R. T. GREGORY, A Survey of Numerical Mathematics, vol. 1, Dover Publications, New York, NY, 1972.

[52] S. ZHONG AND S. MALLAT, Compact image representation from multiscale edges, in Proceedings of the 3rd International Conference on Computer Vision, 1990. Submitted for publication.