Citation
Wavelet-based medical image processing

Material Information

Title:
Wavelet-based medical image processing
Creator:
Redmond, Lisa Stroup
Publication Date:
2003
Language:
English
Physical Description:
78 leaves : ; 28 cm

Thesis/Dissertation Information

Degree:
Master's ( Master of Science)
Degree Grantor:
University of Colorado Denver
Degree Divisions:
Department of Electrical Engineering, CU Denver
Degree Disciplines:
Electrical Engineering
Committee Chair:
Bialasiewicz, Jan T.
Committee Members:
Radenkovic, Miloje S.
Choi, Min-Hyung

Subjects

Subjects / Keywords:
Wavelets (Mathematics) ( lcsh )
Diagnostic imaging -- Computer programs ( lcsh )
Genre:
bibliography ( marcgt )
theses ( marcgt )
non-fiction ( marcgt )

Notes

Bibliography:
Includes bibliographical references (leaf 78).
General Note:
Department of Electrical Engineering
Statement of Responsibility:
by Lisa Stroup Redmond.

Record Information

Source Institution:
University of Colorado Denver
Holding Location:
Auraria Library
Rights Management:
All applicable rights reserved by the source institution and holding location.
Resource Identifier:
53886936 ( OCLC )
ocm53886936
Classification:
LD1190.E54 2003m R42 ( lcc )

Full Text
WAVELET-BASED MEDICAL IMAGE PROCESSING
by
Lisa Stroup Redmond
B.S.E.E. Memphis State University, 1986
A thesis submitted to the
University of Colorado at Denver
in partial fulfillment
of the requirements for the degree of
Master of Science
Electrical Engineering
2003


This thesis for the Master of Science
degree by
Lisa Stroup Redmond
has been approved
by
Jan T. Bialasiewicz
Miloje S. Radenkovic
Min-Hyung Choi
Date


Redmond, Lisa Stroup (M.S., Electrical Engineering)
Wavelet-Based Medical Image Processing
Thesis directed by Professor Jan Bialasiewicz
ABSTRACT
Wavelet transforms have become useful tools for various image processing
applications. This paper will describe three wavelet algorithms that
manipulate a medical image using multiresolution approximations and filter
bank techniques. All three algorithms break down the image into various
frequency bands and then reconstruct the image using the sub-band images.
The first filter bank algorithm studied is the conjugate mirror filter (CMF)
algorithm. This algorithm decomposes an image into vertical,
horizontal, and diagonal details as well as low frequency approximations
within different frequency bands using a filtering/downsampling procedure.
Then it is shown that the original image can be perfectly reconstructed by a
reversal of this procedure. Image compression and denoising are
demonstrated within the CMF algorithm framework. Next, the filter bank
algorithm called the algorithme a trous is explained; however, instead of down-
sampling the filtered image during decomposition, the filter is dilated,


creating holes (trous is French). Only horizontal and vertical details along
with corresponding low frequency approximations are generated from this
algorithm. Perfect reconstruction by reversal of this procedure is theoretically
possible but I was not able to reproduce perfect results. Finally, the
algorithme a trous filter bank algorithm is used within an iterative conjugate
gradient technique that uses multiscale edge detection to reconstruct an
image.
This abstract accurately represents the content of the candidate's thesis. I
recommend its publication.
Signed
Jan T. Bialasiewicz
IV


DEDICATION
I dedicate this thesis to my husband, Kevin. Without his support and cooking
abilities, the kids would have eaten a lot of fast food for dinner, as I was
buried in my books and connected to the computer for more evenings than I
care to remember. My deepest gratitude goes to him for believing that I could
actually get this degree while raising our two small blessings, Hillary and Joe.
I also dedicate this thesis to my mother, Patty. By example, she showed me
that one is never too old to go back to college. Her love of learning is one of
her many wonderful traits and I am thankful that she passed that trait to me.


ACKNOWLEDGEMENT
My thanks to my advisor, Jan Bialasiewicz, for introducing me to the world of
wavelets and opening up new opportunities in my professional career. I also
wish to thank Mike (Miloje) Radenkovic for advising me on which classes to
take during these past five years. Through his guidance, I have taken many
interesting signal processing classes which have greatly helped in
understanding wavelet applications within image processing.


CONTENTS
Figures ............................................................ix
Tables..............................................................xi
Chapter
1. The Continuous Wavelet Transform..................................1
2. The Discrete Dyadic Wavelet Transform and the Two-Dimensional
Conjugate Mirror Filter Algorithm ................................6
2.1 Uses for the Conjugate Mirror Filter Algorithm.................15
3. Algorithme A Trous...............................................20
4. Image Reconstruction from Modulus Maxima.........................27
5. Conclusion.......................................................38
Appendix
A. Conjugate Mirror Filter MATLAB Computer Program.................40
B. Image Denoising MATLAB Computer Program..........................47
C. Algorithme A Trous MATLAB Computer Program.......................55
D. Image Reconstruction from Modulus Maxima Using the Conjugate
Gradient Algorithm...............................................61
E. MATLAB Computer Program to Decompose Image to be
Reconstructed by the Conjugate Gradient Algorithm................65
F. MATLAB Computer Program for Generation of the Initial Search
Direction Used in the Conjugate Gradient Algorithm...............70
vii


G. Conjugate Gradient MATLAB Computer Program......................72
H. MATLAB Computer Program for Decomposing the Search
Direction Used in the Conjugate Gradient Algorithm.................74
I. Utilities...........................................................77
References..............................................................78
VIII


FIGURES
Figure
1.1 Daubechies, db4, and Symmlet, sym8, Mother Wavelets..............3
1.2 Scaling Functions of the Daubechies, db4, and Symmlet, sym8,
Mother Wavelets.......................................................4
2.1 Decomposition Filter Bank for the Conjugate Mirror Filter Algorithm..7
2.2 Reconstruction Filter Bank for the Conjugate Mirror Filter Algorithm.10
2.3 Original and Reconstructed Image of a Cross-Section of a
Healthy Neck.........................................................12
2.4 Approximation and Details of the Level 1 Decomposition of an
Image of a Cross-Section of a Healthy Neck.......................12
2.5 Approximation and Details of the Level 2 Decomposition of an
Image of a Cross-Section of a Healthy Neck...........................13
2.6 Approximation and Details of the Level 3 Decomposition of an
Image of a Cross-Section of a Healthy Neck........................14
2.1.1 Denoising of the Lena Image Using Hard and Soft Thresholding...17
2.1.2 Denoising of the Healthy Neck Image Using Hard and Soft
Thresholding......................................................18
3.1 Decomposition Filter Bank for the Algorithme A Trous.................21
3.2 Reconstruction Filter Bank for the Algorithme A Trous................21
3.3 The Spline Wavelet (Left) and its Scaling Function (Right).......22
3.4 Original and Reconstructed Healthy Neck Image Produced by
the Algorithme A Trous...............................................24
IX


3.5 Decomposition of the Healthy Neck Image Produced by
the Algorithme A Trous.........................................25
4.1 Use of Phase to Determine Modulus Maxima.......................29
4.2 Modulus Maxima Matrix for the Healthy Neck Image...............33
4.3 Modulus Maxima Matrix for the Neck Tumor Image.................33
4.4 Original and Reconstructed Healthy Neck Image Using the
Modulus Maxima Technique.......................................37
4.5 Original and Reconstructed Neck Tumor Using the Modulus
Maxima Technique..............................37
x


TABLES
Table
2.1 Daubechies db2 Discrete Filter Coefficients...........................11
3.1 Spline Filter Coefficients Used in the Algorithme A Trous.............23
4.1 Spline Filter Coefficients Used in the Modulus Maxima Image
Reconstruction ........................................................34
XI


1. The Continuous Wavelet Transform
Mathematical transforms are important in medical image processing because
they give additional or hidden information about the original image which may
not be obvious otherwise. Also, these transforms can be manipulated in such
a way as to reduce the amount of data needed to reconstruct an image. The
latter would be important for image storage and transmission. For image
analysis, the Fourier transform identifies the spectral components present in
an image but it does not provide information as to where certain components
occur within the image. If we are only interested in stationary signals, the
Fourier transform provides complete spectral analysis because all spectral
components exist at all times. However, if we are interested in non-stationary
signals, signals with transient phenomena, the Fourier transform would only
give us the spectral components within the image but not where they are
located (Polikar, 1999). So we need some other way to determine the time or
space localization of the spectral components. The wavelet transform solves
this problem by using a variable length window, representing a variable
frequency range, that is shifted along the image. This window is called the
wavelet. The following equation is that of the continuous wavelet transform
(CWT) of the signal f(t) which is a function of time:
1



Wf(u,s) = \int_{-\infty}^{+\infty} f(t)\,\frac{1}{\sqrt{s}}\,\psi^{*}\!\left(\frac{t-u}{s}\right)dt    (1.1)
where u is the translation in time or space of the mother wavelet, ψ, and s
is the scale. Larger scales correspond to dilated wavelets and therefore
lower frequencies. Smaller scales correspond to compressed wavelets and
therefore higher frequencies. Equation (1.1) actually represents the scalar
product of f(t) and ψ_{u,s}(t), which can be interpreted as a projection of f(t) onto
the direction in L²(ℝ) defined by ψ_{u,s}(t). The mother wavelet, scaled and
translated as it appears in Eq. (1.1), is as follows:

\psi_{u,s}(t) = \frac{1}{\sqrt{s}}\,\psi\!\left(\frac{t-u}{s}\right)    (1.2)
The factor 1/√s is a normalization factor that guarantees all wavelets have
the same energy, and this energy is unity if ‖ψ(t)‖ = 1 (Mallat, 2001). Figure
1.1 shows an example of two different mother wavelets: Daubechies (db4)
and Symmlet (sym8).
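As a minimal numerical illustration of Eq. (1.1) (a sketch only, not one of the programs in the appendices; the signal, time axis, and the Mexican hat wavelet used here are assumed for the example), the transform can be approximated by a Riemann sum over a sampled grid:

%Minimal sketch: approximate the CWT of Eq. (1.1) by a Riemann sum.
t  = linspace(0,1,512);                              %time axis of the sampled signal
f  = sin(2*pi*20*t) + (t>0.5).*sin(2*pi*60*t);       %example non-stationary signal
psi_fun = @(x) (1-x.^2).*exp(-x.^2/2);               %Mexican hat as an example wavelet
dt = t(2)-t(1);
scales = 2.^(0:0.5:4);                               %coarse-to-fine set of scales
W = zeros(length(scales),length(t));
for k = 1:length(scales)
    s = scales(k);
    for m = 1:length(t)
        u = t(m);                                    %translation
        W(k,m) = sum(f.*(1/sqrt(s)).*psi_fun((t-u)/s))*dt;   %discretized Eq. (1.1)
    end
end

Each row of W then holds the wavelet coefficients (details) of the signal at one scale.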
2


Figure 1.1 Daubechies, db4, and Symmlet, sym8, Mother Wavelets
The continuous wavelet transform is computed by changing the scale of the
mother wavelet, shifting the wavelet in time, multiplying by the signal, and
integrating over all times. The resultant wavelet coefficients that are
generated from the CWT are known as details of the signal because the
wavelet is considered a high pass filter. In order to perfectly reconstruct the
signal from the wavelet coefficients, the low-frequency approximation must
also be added to the high frequency details. This is where the scaling
function is introduced. The scaling function, φ(t), is considered the low pass
filter and it is an aggregation of wavelets at the larger scales not included in
the original continuous wavelet transform (Mallat & Zhong, 1992). The
wavelet function is the derivative of the scaling function. The scaling function
at scale 5 takes on the form
3


\phi_s(t) = \frac{1}{\sqrt{s}}\,\phi\!\left(\frac{t}{s}\right)    (1.3)
See Figure 1.2 for the scaling functions that correspond to the wavelets of
Figure 1.1.
Figure 1.2: Scaling Functions of the Daubechies, db4, and Symmlet,
sym8, Mother Wavelets
The following equation is used to recover the original signal:

f(t) = \frac{1}{C_\psi}\int_0^{s_0}\int_{-\infty}^{+\infty} Wf(u,s)\,\frac{1}{\sqrt{s}}\,\psi\!\left(\frac{t-u}{s}\right)du\,\frac{ds}{s^2} + \frac{1}{C_\psi s_0}\int_{-\infty}^{+\infty} Lf(u,s_0)\,\frac{1}{\sqrt{s_0}}\,\phi\!\left(\frac{t-u}{s_0}\right)du    (1.4)
where
Lf(u,s) = \left\langle f(t),\ \frac{1}{\sqrt{s}}\,\phi\!\left(\frac{t-u}{s}\right)\right\rangle = f \star \phi_s(u)    (1.5)
and
4


C_\psi = \int_0^{+\infty} \frac{|\hat{\psi}(\omega)|^2}{\omega}\,d\omega < +\infty    (1.6)
Lf(u, s_0) is the low frequency approximation of f at the scale s_0. Eq. (1.6) is
called the wavelet admissibility condition for a real function ψ(t) ∈ L²(ℝ) to be
a wavelet (Mallat, 2001).
5


2. The Discrete Dyadic Wavelet Transform and the
Two-Dimensional Conjugate Mirror Filter
Algorithm
For computer programs, an input image is a discrete signal made up of pixels
defined within a certain resolution. Therefore the wavelet transform cannot
be computed at an arbitrary fine scale. For fast numerical calculations, the
scale is discretized along a dyadic sequence, {2^j} for j > 0, therefore
normalizing the finest scale to j = 1 (Mallat & Zhong, 1992). The continuous
wavelet and scaling functions are replaced with discrete high and low pass
filters, respectively. The following equations show the relationship between
the discrete filters and their continuous counterparts (Mallat, 2001):

\frac{1}{\sqrt{2}}\,\phi\!\left(\frac{t}{2}\right) = \sum_{n=-\infty}^{+\infty} h[n]\,\phi(t-n)    (2.1)

\frac{1}{\sqrt{2}}\,\psi\!\left(\frac{t}{2}\right) = \sum_{n=-\infty}^{+\infty} g[n]\,\phi(t-n)    (2.2)

The high and low pass filters, g[n] and h[n], respectively, are related in the
following way:

g[n] = (-1)^{1-n}\,h[1-n]    (2.3)
In order to analyze an image, each row and column of the image can be
regarded as a signal. As these rows and columns pass through the low and
6


high pass filters, the image decomposes into low and high pass components
which are then downsampled by two. The filters which are used for
decomposition and perfect reconstruction are known as orthogonal filters and
called conjugate mirror filters (CMF) (Mallat, 2001). Figure 2.1 illustrates how
the filters are set up to create these banks of filters for decomposition.
Figure 2.1 Decomposition Filter Bank for the Conjugate Mirror Filter
Algorithm. Initialization: a0 = original image.
In the analysis step shown above, the original image is split into four half-size
images by filtering the original image with the conjugate low and high pass
filters, h[n] and g[n]. First the rows of the original image are passed through

7


a high and low pass filter and then the columns of these two output images
are passed through a high and low pass filter that result in the four same-
sized images. These images are then downsampled by two which equates to
discarding every other row and column. The process of downsampling
doubles the scale. At each step, we obtain a resultant half-sized image that is
a smooth low frequency image called an approximation image, aJ+l, and three
detailed, high frequency images, dhj+l, drj+i, and ddj+l. These detail images are
referred to as horizontal, vertical, and diagonal detail images, respectively.
The subscript j refers to the scale (2J) or level of the image that is being
analyzed. The subscript j +1 refers to the next level in which the results
reside. For another level of decomposition, this analysis step is further
iterated on the low pass component, aj+l. Including the length L of the filters
in Eq. (2.3), we obtain the following relation between the low and high pass
decomposition filters:
g[L+1-n] = (-1)^n\,h[n]    (2.4)
where n is the filter index.
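As a quick numerical check (a sketch only, not one of the thesis programs), Eq. (2.4) applied to the db2 low pass filter of Table 2.1 reproduces the high pass filter listed there:

%Minimal sketch: generate the high pass CMF from the low pass filter via Eq. (2.4).
h = [0.4829629 0.8365163 0.2241438 -0.1294095];   %db2 low pass filter (Table 2.1)
L = length(h);
g = zeros(1,L);
for n = 1:L
    g(L+1-n) = (-1)^n * h(n);                     %Eq. (2.4)
end
%g = [-0.1294095 -0.2241438 0.8365163 -0.4829629], the g[n] column of Table 2.1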
The following are the formulas for the CMF implementation of the discrete
dyadic wavelet transform, DWT, that were shown graphically in Figure 2.1
8


a_{j+1}[n] = a_j \star hh[2n]    (2.5)

d^h_{j+1}[n] = a_j \star hg[2n]    (2.6)

d^v_{j+1}[n] = a_j \star gh[2n]    (2.7)

d^d_{j+1}[n] = a_j \star gg[2n]    (2.8)

Note that [n] = [n_1, n_2] denotes the two-dimensional pixel index and yx[n] = y[n_1]\,x[n_2] denotes the separable two-dimensional filter.
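As an illustration only (this sketch is not one of the programs in the appendices, and it uses conv2 with 'same' borders instead of the explicit zero padding of Appendix A), one level of Eqs. (2.5)-(2.8) can be written as:

%Minimal sketch: one level of the CMF decomposition by separable filtering
%of rows and columns followed by downsampling by two.
a = rand(256);                                    %stand-in for the normalized image
h = [0.4829629 0.8365163 0.2241438 -0.1294095];   %db2 low pass (Table 2.1)
g = [-0.1294095 -0.2241438 0.8365163 -0.4829629]; %db2 high pass (Table 2.1)
lo_rows = conv2(a, h, 'same');                    %rows through the low pass filter
hi_rows = conv2(a, g, 'same');                    %rows through the high pass filter
a1 = conv2(lo_rows, h.', 'same');                 %columns low pass: approximation
dh = conv2(lo_rows, g.', 'same');                 %columns high pass: horizontal details
dv = conv2(hi_rows, h.', 'same');                 %vertical details
dd = conv2(hi_rows, g.', 'same');                 %diagonal details
a1 = a1(1:2:end,1:2:end);  dh = dh(1:2:end,1:2:end);   %downsample by two, Eqs. (2.5)-(2.8)
dv = dv(1:2:end,1:2:end);  dd = dd(1:2:end,1:2:end);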
Reconstruction of the image is followed in reverse order from the
decomposition procedure. First, the approximation and details of the last
decomposed level are upsampled by two and passed through the synthesis
filters, h[n] and g[n], and then added to calculate the approximation at the
next scale level. The synthesis filters are the time reversal of the analysis
filters. If the filters used are not ideal halfband, then perfect reconstruction
cannot be achieved. Upsampling is achieved by adding a row and column of
zeros between pairs of consecutive rows and columns of the coarse scale
images, a_{j+1}, d^h_{j+1}, d^v_{j+1}, d^d_{j+1}. This process is iterated until the final image is
reconstructed having used all levels of details generated during the
decomposition process. See Figure 2.2 for a block diagram of the
reconstruction procedure. The following formula is the CMF implementation
9


of image reconstruction known as the inverse discrete dyadic wavelet
transform, IDWT, that corresponds to the graphical realization in Figure 2.2:
a_j[n] = \tilde{a}_{j+1} \star hh[n] + \tilde{d}^h_{j+1} \star hg[n] + \tilde{d}^v_{j+1} \star gh[n] + \tilde{d}^d_{j+1} \star gg[n]    (2.9)

Note that \tilde{y}[n] is the image y[n] upsampled by two.
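Continuing the preceding sketch (with the subband images a1, dh, dv, dd and the filters h, g assumed from above), the structure of Eq. (2.9) can be sketched as follows; border handling and filter alignment are simplified here relative to Appendix A, so this version is only approximate at the edges:

%Minimal sketch of Eq. (2.9): upsample by two, filter with the synthesis
%filters (time reversals of the analysis filters), and sum the four branches.
hs = fliplr(h);  gs = fliplr(g);                  %synthesis filters
up = @(x) kron(x, [1 0; 0 0]);                    %insert zero rows and columns
rec = conv2(conv2(up(a1), hs, 'same'), hs.', 'same') ...
    + conv2(conv2(up(dh), hs, 'same'), gs.', 'same') ...
    + conv2(conv2(up(dv), gs, 'same'), hs.', 'same') ...
    + conv2(conv2(up(dd), gs, 'same'), gs.', 'same');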
Figure 2.2 Reconstruction Filter Bank for the Conjugate Mirror Filter
Algorithm.
The results of the MATLAB program, conj_mirror_filt.m, that performs image
decomposition and reconstruction follow. The program is located in Appendix
A. This program as well as subsequent programs were run using three levels
10


of decomposition. The conjugate mirror filters used are the Daubechies filters
for wavelets with 2 vanishing moments, db2. Table 2.1 shows the filter
coefficients. The image used is that of a cross-section of a healthy neck. It
can be observed in Figure 2.3 that the reconstructed image is virtually the
same as the original image as shown by the signal-to-noise ratio of over
220dB for the recovered image. One can also observe the approximations
and details at each of the three levels of decomposition in Figures 2.4, 2.5
and 2.6.
n    h[n] (decomp. LP)   g[n] (decomp. HP)   h̃[n] (recon. LP)   g̃[n] (recon. HP)
1    0.4829629           -0.1294095          -0.1294095          -0.4829629
2    0.8365163           -0.2241438           0.2241438           0.8365163
3    0.2241438            0.8365163           0.8365163          -0.2241438
4   -0.1294095           -0.4829629           0.4829629          -0.1294095
Table 2.1 Daubechies db2 Discrete Filter Coefficients.
11


Figure 2.3 Original and the Reconstructed Image of a Cross-Section of
a Healthy Neck. These images were generated from the MATLAB
program, conj_mirror_filt.m.
Figure 2.4 Approximation and Details of the Level 1 Decomposition of
an Image of a Cross-Section of a Healthy Neck.
12


Figure 2.5 Approximation and Details of the Level 2 Decomposition of
an Image of a Cross-Section of a Healthy Neck.
13


Figure 2.6 Approximation and Details of the Level 3 Decomposition of
an Image of a Cross-Section of a Healthy Neck.
Note that the successive down-sampling by two requires the image to be
square and that its length and width be a power of 2 in order for this algorithm
to be efficient. The length and width of the image determine the number of
possible decompositions. For example, the neck image used is 256x256.
Therefore the number of possible decomposition levels is eight.
14


2.1 Uses for the Conjugate Mirror Filter Algorithm
Image compression benefits quite nicely from the conjugate mirror filter
algorithm. Details at every level can be thresholded so that only wavelet
coefficients above a certain level are kept while the others are zeroed out.
For a threshold value of 0.02, the conjugate mirror filter algorithm is able to
reconstruct the healthy neck image with 91% of the wavelet coefficients
reduced to zero. The signal-to-noise ratio of the reconstructed image is
approximately 30dB. This shows that image storage or transmission can be
reduced. When the image is retrieved, one only has to insert the zeroes
where needed and perform the inverse discrete dyadic wavelet transform, IDWT,
to recover the original image.
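As a minimal sketch of this compression step (the variable d standing in for the detail coefficients is an assumption for illustration; the actual thresholding is done inside conj_mirror_filt.m and denoise.m), the coefficients can be zeroed and counted as follows:

%Minimal sketch: compression by zeroing small wavelet coefficients.
d = randn(128)*0.05;                  %stand-in detail coefficients
T = 0.02;                             %threshold used in the text
nz_before = nnz(d);
d(abs(d) < T) = 0;                    %zero out coefficients below the threshold
fraction_zeroed = 1 - nnz(d)/nz_before   %the thesis reports 91% zeroed for the neck image at T = 0.02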
An image contaminated with white noise can also benefit from thresholding
and using the IDWT to recover a cleaner image. The following example is
that of image denoising. First an image is contaminated with Gaussian white
noise of variance σ², then the DWT is performed using the CMF algorithm.
Thresholding is done on the horizontal, vertical, and diagonal details since
this is where the high frequency components of the image are stored. There
are two types of thresholding, hard and soft, that are incorporated into the
15


MATLAB program, denoise.m. See Appendix B for the program. Hard
thresholding takes any wavelet coefficient below a certain threshold value, T,
and sets it to zero. Hard thresholding is implemented in the following way:
\hat{d}^n_j(x,y) = \begin{cases} d^n_j(x,y) & \text{if } |d^n_j(x,y)| > T \\ 0 & \text{if } |d^n_j(x,y)| \le T \end{cases}    (2.1.1)
Soft thresholding is the attenuation of wavelet coefficients whose absolute value is
above a certain threshold, T. For those coefficients whose absolute value is
below the threshold, these coefficients are set to zero. The implementation of
the soft thresholding procedure is as follows:

\hat{d}^n_j(x,y) = \begin{cases} d^n_j(x,y) - T & \text{if } d^n_j(x,y) \ge T \\ d^n_j(x,y) + T & \text{if } d^n_j(x,y) \le -T \\ 0 & \text{if } |d^n_j(x,y)| < T \end{cases}    (2.1.2)

For hard thresholding, the threshold level is set to three times the standard
deviation of the noise, 3σ. For soft thresholding, the threshold level is set to
half as large as that of hard thresholding, 3σ/2 (Mallat, 2001).
The following examples were computed using the orthogonal Symmlet 4
filters within the MATLAB program, denoise.m. Both the Lena image and the
16


neck image were corrupted with 15dB of white Gaussian noise. The results
of the denoising show up much better by using the Lena image shown in
Figure 2.1.1. This is because the edges are better defined. The neck image
in Figure 2.1.2 has singularities that are hard to discern but the wavelet
transform picks them up readily and makes them easier to see in the
reconstruction because the noise accentuates the edges.
Figure 2.1.1 Denoising of the Lena Image Using Hard Thresholding
(SNR = 17.1516dB) and Soft Thresholding (SNR = 17.6515dB). These
results were generated from the MATLAB program, denoise.m.
17


Figure 2.1.2 Denoising of the Healthy Neck Image Using Hard and Soft
Thresholding. These results were generated from the MATLAB
program, denoise.m.
As you can see the thresholding removes the noise in the smooth regions but
traces of the noise remain near edges within the image. This is because
noise and edges both exhibit high frequency behavior. The visual quality of
the image is affected only slightly more by the soft thresholding than by the
hard thresholding. This is because the hard thresholding estimator leaves
18


unchanged the wavelet coefficients above T whereas the soft thresholding
estimator attenuates all noisy coefficients.
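The two estimators of Eqs. (2.1.1) and (2.1.2) can be written in two lines of MATLAB; this is only a sketch equivalent to what denoise.m does with the h{k}, v{k}, and d{k} detail images, and the stand-in variables d and T are assumptions:

%Minimal sketch of the hard and soft thresholding estimators.
d = randn(128)*0.1;                        %stand-in detail image
T = 0.15;                                  %example threshold
d_hard = d.*(abs(d) > T);                  %hard thresholding, Eq. (2.1.1)
d_soft = sign(d).*max(abs(d) - T, 0);      %soft thresholding, Eq. (2.1.2)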
19


3. Algorithme A Trous
The algorithme a trous is another implementation of the discrete wavelet
transform that uses a filter bank structure. It was originally developed for
music synthesis using non-orthogonal wavelets but it is now of particular
interest in image processing applications (Shensa, 1992). The main
difference between the conjugate mirror filter algorithm (CMF) and the
algorithme a trous, besides the type of filter, is in the implementation. During
decomposition, at every scale, the filters are dilated by putting 2^j - 1 zeros
between each of the coefficients of the analysis filters used in the algorithme
a trous. For the CMF, the filtered images were subsampled instead.
Likewise, during reconstruction, the CMF upsamples the detail and
approximation images before filtering, whereas the synthesis filters used by
the algorithme a trous at each level of reconstruction again contain 2^j - 1 zeros
between each of the coefficients (in the reconstruction loop they are obtained
by decimating the filters of the coarser level). Only horizontal and
vertical details, d^h_j and d^v_j, respectively, as well as an approximation, a_j, of
the image at each level are created. Separable circular convolutions are
used instead of separable straight convolutions to minimize border problems.
This has the effect of periodizing the image both in the horizontal and vertical
20


directions (Mallat, 2001). See Figures 3.1 and 3.2 for the block diagram of
the algorithme a trous decomposition and reconstruction, respectively.
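The filter dilation itself is a simple zero-insertion step; the following is a minimal sketch (the level j chosen here is an example, and Appendix C achieves the same thing with dyadup):

%Minimal sketch of the "a trous" filter dilation: insert 2^j - 1 zeros between
%the coefficients of a filter before using it at the next level.
h  = [0.125 0.375 0.375 0.125]*sqrt(2);       %spline low pass filter (cf. Table 3.1)
j  = 2;                                        %example level
hj = zeros(1, (length(h)-1)*2^j + 1);          %room for the holes ("trous")
hj(1:2^j:end) = h;                             %original taps spaced 2^j apart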
Figure 3.1 Decomposition Filter Bank for the Algorithme A Trous.
Figure 3.2 Reconstruction Filter Bank for the Algorithme A Trous.
21


The non-orthogonal filters used for the a trous algorithm are called quadratic
spline filters. They have been shown to be optimum filters for image
processing. The mother wavelet and the scaling function are shown in Figure
3.3.
Figure 3.3 The Spline Wavelet (Left) and its Scaling Function (Right).
The MATLAB program, atrous_recon.m, in Appendix C is used to generate
the following results. This program also uses the two programs in Appendix I
for the shifting and filtering of the image in order to carry out circular
convolutions. The program in Appendix C was run using three levels of
decomposition. The spline filter coefficients are shown in Table 3.1. It can be
22


observed that the reconstructed neck image is similar to the original image as
shown by the signal-to-noise ratio of approximately 15dB.
n     h[n]      h̃[n]      g[n]      g̃[n]
-2                                   -0.0442
-1    0.1768    0.1768              -0.3094
 0    0.5303    0.5303   -0.7071    -0.9723
 1    0.5303    0.5303    0.7071     0.9723
 2    0.1768    0.1768               0.3094
 3                                    0.0442
Table 3.1 Spline Filter Coefficients Used in the Algorithme A Trous.
The following equations used in the MATLAB program give the convolution
formulas that are cascaded to calculate the discrete dyadic wavelet transform
and its inverse.
DWT:

a_{j+1}[n] = a_j \star h_j h_j[n]    (3.1)

d^h_{j+1}[n] = a_j \star g_j\delta[n]    (3.2)

d^v_{j+1}[n] = a_j \star \delta g_j[n]    (3.3)

IDWT:

a_j[n] = \tfrac{1}{4}\left(a_{j+1} \star \tilde{h}_j \tilde{h}_j[n] + d^h_{j+1} \star \tilde{g}_j\delta[n] + d^v_{j+1} \star \delta\tilde{g}_j[n]\right)    (3.4)
23


Note that h_j and g_j denote the dilated analysis filters and \tilde{h}_j and \tilde{g}_j denote the decimated synthesis filters.
The equation for the two-dimensional IDWT was deduced from the one-
dimensional IDWT shown in Mallat's book (2001) on page 155. For the one-
dimensional case, the multiplication factor is 1/2. Therefore, I chose 1/4 for the
multiplication in the two-dimensional case. The reconstruction results are not
perfect but visually, the results look pretty good. See Figure 3.4 below. The
results are lighter in color and textures are seen more easily. Instead of using
the constant 1/4, perhaps a variable that changes depending on scale would
give better results.
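The separable circular convolution mentioned above can also be sketched directly with the FFT; this is an illustration only (the stand-in image and once-dilated filter are assumptions), whereas Appendix C uses the custom cconv and leftshift utilities of Appendix I to handle the filter delay:

%Minimal sketch: circularly convolve each row of an image with a dilated filter.
x  = rand(256);                                     %stand-in image
hj = [0.125 0 0.375 0 0.375 0 0.125]*sqrt(2);       %spline low pass dilated once
[nr,nc] = size(x);
Hf = fft(hj, nc);                                   %filter padded to the row length
y  = real(ifft(fft(x, nc, 2).*repmat(Hf, nr, 1), nc, 2));   %row-wise circular convolution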
Figure 3.4 Original and Reconstructed Healthy Neck Image Produced by
the Algorithme A Trous (reconstructed image SNR = 15.5619dB).
24


Figure 3.5 Decomposition of the Healthy Neck Image Produced by the
Algorithme A Trous. (a): Approximations of the image from levels 1,2,
and 3. (b): Horizontal details from levels 1, 2, and 3. (c): Vertical
details from levels 1,2, and 3.
Figure 3.5 above shows from top to bottom approximations and details from
levels 1,2, and 3. From left to right, the figure shows the approximations, the
horizontal details, and lastly, the vertical details at each level. The algorithme
a trous uses much more data to reconstruct the neck image than the CMF
algorithm, by roughly an order of magnitude for three levels of detail. Likewise, the a trous
25


algorithm for three levels of detail took a little over 3 seconds longer to
complete.
26


4. Image Reconstruction from Modulus Maxima
Edge detection within an image is very important in image compression and
reconstruction. By detecting the modulus maxima in a two-dimensional
dyadic wavelet transform across scales, edges are detected. Locations of the
modulus maxima, often referred to as multiscale edges, determine regions in
the wavelet transform domain with a high concentration of energy. For this
reason they are regarded as natural candidates for the reconstruction of the
original image.
To reconstruct an image, /, from its multiscale edges, the modulus maxima
matrices must first be generated and then an iterative algorithm called the
conjugate gradient algorithm is used for image reconstruction. First the
algorithme a trous is used to generate the wavelet coefficients at each level.
Next, the wavelet transform modulus and angle of the wavelet transform
modulus for each corresponding pixel location is generated. The wavelet
transform modulus is proportional to the modulus of the gradient vector
∇(f ⋆ θ)(x, y), of the smoothed image, and the angle of the wavelet transform is the
angle between the gradient vector ∇(f ⋆ θ)(x, y) and the horizontal; it is the
direction in which f(x, y) has the sharpest variation (Hwang & Mallat, 1992). This is done for
27


each level of decomposition. The modulus maxima matrices are generated
from the wavelet transform modulus and the angle matrices. Finally, the
conjugate gradient algorithm reconstructs the image. WAVELAB (Donoho,
D., Duncan, M. R., Huo, X., & Levi, O. ,1999) provides a one-dimensional
version of the conjugate gradient algorithm. I used the framework of the one-
dimensional conjugate gradient algorithm to construct the two-dimensional
conjugate gradient algorithm used in this thesis.
The wavelet transform modulus, M_jf(x,y), and angle, A_jf(x,y), are
calculated by the following equations using the horizontal and vertical details, d^h_j
and d^v_j, respectively:

M_jf(x,y) = \sqrt{\left(d^h_j(x,y)\right)^2 + \left(d^v_j(x,y)\right)^2}    (4.1)

A_jf(x,y) = \arctan\!\left(d^v_j(x,y)\,/\,d^h_j(x,y)\right)    (4.2)
For each pixel location within each scale, a modulus maximum is detected at
location (x, y) if M_jf(x,y) > M_jf(x_1,y_1) and M_jf(x,y) > M_jf(x_2,y_2).
Locations (x_1, y_1) and (x_2, y_2) are the locations adjacent to (x, y) indicated by the
angle A_jf(x,y). The angle can be one of eight quantized directions: 0, π/4,
π/2, 3π/4, π, -π/4, -π/2, or -3π/4 (Hsung, Lun, & Siu, 1998).
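A minimal sketch of this detection step follows; it is not the thesis program (mm_atrous.m in Appendix E performs the actual computation, including the threshold stipulation described later), and the stand-in detail images dh and dv are assumptions:

%Minimal sketch of modulus maxima detection at one level.
dh = randn(64); dv = randn(64);        %stand-in horizontal and vertical details
M  = sqrt(dh.^2 + dv.^2);              %Eq. (4.1): wavelet transform modulus
A  = atan2(dv, dh);                    %Eq. (4.2): angle (atan2 keeps the quadrant)
Aq = round(A/(pi/4))*(pi/4);           %quantize to the eight directions
[nr,nc] = size(M);
mask = zeros(nr,nc);                   %modulus maxima mask
for x = 2:nr-1
    for y = 2:nc-1
        dx = round(cos(Aq(x,y)));      %step toward the two neighbors
        dy = round(sin(Aq(x,y)));      %indicated by the quantized angle
        if M(x,y) > M(x+dx,y+dy) && M(x,y) > M(x-dx,y-dy)
            mask(x,y) = 1;             %local maximum along the gradient direction
        end
    end
end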
28


Figure 4.1 Use of Phase to Determine the Modulus Maxima.
Figure 4.1 above is a graphical description of how the angle is used to locate
the two points with which to compare a particular wavelet modulus. For
example, if the angle that corresponds to point (x, y) lies between π/8 and
3π/8, indicated by dashed lines, the quantized angle becomes π/4. The line
tangent to this angle, shown by a short red line, corresponds to the direction in
which the two adjacent positions to be compared are located. In this
example, the wavelet transform modulus located at (x, y), indicated by the red
square, is compared to the wavelet transform modulus located at (x_1, y_1) and
(x_2, y_2), indicated by the two blue squares. Positive angle values that
29


correspond to the negative angle values are included in the figure because
the MATLAB program in Appendix E works with positive angle values only.
Wherever the wavelet transform modulus is deemed to be maximum, its
location is noted by creating a modulus maxima mask matrix for each level.
All wavelet coefficients not located at the modulus maxima locations for their
corresponding level are set to zero.
The conjugate gradient algorithm can be thought of as an acceleration of the
frame algorithm (Grochenig, 1993). Frame theory states that under certain
conditions, one can recover a vector f in a Hilbert space H from its inner
products with a family of vectors \{\phi_n\}_{n\in\Gamma}. The index set \Gamma may or may not be
finite. L, defined by Lf[n] = \langle f, \phi_n\rangle, is a frame operator if the following frame
condition is true:
A\,\|f\|^2 \le \sum_{n\in\Gamma} \left|\langle f, \phi_n\rangle\right|^2 \le B\,\|f\|^2    (4.3)
where A and B are positive constants known as frame bounds. This equation
tells us that the representation of / with respect to a frame is complete but
redundant (Mallat, 2001). Completely eliminating the redundancy generates
a basis of the signal space. This basis is the smallest frame and is said to be
tight when A = B. The frame is an orthonormal basis if A = B = 1. However,
30


the frame algorithm is only as good as the estimates of the frame bounds and
calculating the frame bounds can be an arduous task. The conjugate
gradient algorithm solves this problem because the frame bounds do not
need to be known. The conjugate gradient algorithm as stated by Grochenig
(1993) is as follows:
Initialize: f_0 = 0,\quad r_0 = p_0 = g,\quad p_{-1} = 0    (4.4)

\lambda_n = \frac{\langle r_n, p_n\rangle}{\langle p_n, Lp_n\rangle}    (4.5)

f_{n+1} = f_n + \lambda_n p_n    (4.6)

r_{n+1} = r_n - \lambda_n Lp_n    (4.7)

p_{n+1} = Lp_n - \frac{\langle Lp_n, Lp_n\rangle}{\langle p_n, Lp_n\rangle}\,p_n - \frac{\langle Lp_n, Lp_{n-1}\rangle}{\langle p_{n-1}, Lp_{n-1}\rangle}\,p_{n-1}    (4.8)
The vector p_n is the search direction and it is initialized, along with the
residual vector r_n, with the initial gradient, g. The initial gradient is the first
approximation of the image using a formula similar to the algorithme a trous
IDWT shown in Eq. (3.4) and the non-zero horizontal and vertical details
corresponding to the modulus maxima locations at every level. The semi-
IDWT equation of Eq. (4.9) follows. Notice that the factor 1/4 from Eq. (3.4) is
removed from this equation.

a_j[n] = a_{j+1} \star \tilde{h}_j \tilde{h}_j[n] + d^h_{j+1} \star \tilde{g}_j\delta[n] + d^v_{j+1} \star \delta\tilde{g}_j[n]    (4.9)
31


Note that \tilde{h}_j and \tilde{g}_j are the decimated synthesis filters corresponding to h_j and
g_j. Also note that a_0[n] = Lp_n. The variable \lambda_n, which changes dynamically during
each iteration, is the step size in the p_n direction. Lp_n is the orthogonal
projection of p_n onto the image space defined by the inner products of the
image f with a family of functions \{\phi_n\}_{n\in\Gamma}. To make this more
understandable, the MATLAB program recon_mm.m in Appendix D is
explained thoroughly below.
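Before walking through the programs, the bare iteration of Eqs. (4.4)-(4.8) can be sketched as follows. This is an illustration only, not ConjGrad_2d.m: the function handle Lop is a placeholder for the decompose-and-rebuild frame operator described above, and g, Lop, and maxit here are stand-in assumptions.

%Minimal sketch of the conjugate gradient iteration of Eqs. (4.4)-(4.8).
g     = rand(64);                                 %stand-in initial gradient
Lop   = @(p) conv2(p, ones(3)/9, 'same');         %placeholder symmetric operator
maxit = 20;
f = zeros(size(g)); r = g; p = g; p_old = zeros(size(g)); Lp_old = zeros(size(g));
for it = 1:maxit
    Lp     = Lop(p);                              %projection Lp_n
    lambda = (r(:)'*p(:)) / (p(:)'*Lp(:));        %Eq. (4.5): step size
    f      = f + lambda*p;                        %Eq. (4.6): update the estimate
    r      = r - lambda*Lp;                       %Eq. (4.7): update the residual
    pnew   = Lp - ((Lp(:)'*Lp(:))/(p(:)'*Lp(:)))*p;          %Eq. (4.8), first two terms
    if any(Lp_old(:))                             %third term is zero when p_{n-1} = 0
        pnew = pnew - ((Lp(:)'*Lp_old(:))/(p_old(:)'*Lp_old(:)))*p_old;
    end
    p_old = p; Lp_old = Lp; p = pnew;
end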
In the MATLAB program recon_mm.m, the image to be analyzed and
synthesized using modulus maxima is initially decomposed into the number of
levels requested using the algorithme a trous technique of the dyadic wavelet
transform. Two images are analyzed by this method, the healthy neck image
and that of a neck with a tumor on the right side of the image. For these two
images, a threshold of 0.02 is used to further compress the images.
Therefore, the modulus maxima matrix masks are generated as stated before
with the stipulation that the wavelet transform modulus must also be greater
than or equal to the threshold value to become a modulus maxima point. The
modulus maxima matrix masks are generated for every level and only those
details that correspond with the masks are saved along with the final
approximation. This is done within the subroutine, mm_atrous.m, which is
32


located in Appendix E. The modulus matrix masks for the healthy and
tumorous neck can be seen in Figures 4.2 and 4.3, respectively.
Figure 4.2 Modulus Maxima Matrix for the Healthy Neck Image.
Figure 4.3 Modulus Maxima Matrix for the Neck Tumor Image.
33


Figures 4.2 and 4.3 show how the tumor affects the modulus maxima
footprint of the healthy neck. The symmetry of the healthy neck modulus
maxima masks shown in Figure 4.2 no longer exists in the modulus matrix
masks of the tumorous neck shown in Figure 4.3. Also notice the lack of
modulus maxima in the area where the tumor exists (upper right corner of
image). In fact, the tumor seems to be delineated by a circle of modulus
maxima. How this information can be of use in tumor detection is beyond the
scope of this thesis.
Next, the subroutine atrous_up.m in Appendix F executes an algorithm similar
to the algorithme a trous version of the inverse dyadic wavelet transform
using the details and approximation from above using Eq. (4.9); however, the
synthesis filters, \tilde{g}_j[n], are the time reversal of g_j[n]. See Table 4.1 for the
filters used.
n     h[n]      h̃[n]      g[n]      g̃[n]
-1    0.1768    0.1768
 0    0.5303    0.5303   -0.7071    0.7071
 1    0.5303    0.5303    0.7071   -0.7071
 2    0.1768    0.1768
Table 4.1 Spline Filter Coefficients Used in the Modulus Maxima Image
Reconstruction.
34


The subroutine atrous_up.m generates the first direction image which is
transferred to the subroutine, ConjGrad_2d.m, which is located in Appendix
G. The ConjGrad_2d.m subroutine orchestrates the conjugate gradient
algorithm by first calling atrous_down.m. Using the subroutine
atrous_down.m in Appendix H, the direction image is decomposed into the
previously stated number of levels, saving only those details that correspond
to the modulus maxima matrix masks initially generated in mm_atrous.m.
Again atrous_up.m is called to generate from these details and approximation
an approximation image but this one is used to generate the projection vector
Lpn. Since Lpn is a vector, the approximation image is transformed into a
vector by appending each row onto the previous row. The ConjGrad_2d.m
subroutine also transforms the direction image into the first search vector by
appending each row onto the previous row, thus making the vector, pn
(Kawata & Nalcioglu, 1985). Now the actual conjugate gradient algorithm is
executed by calculating \lambda_n, f_{n+1}, r_{n+1}, and p_{n+1} using the equations listed
previously. The process of generating Lp_n, \lambda_n, f_{n+1}, r_{n+1}, and p_{n+1} is iterated
until the relative mean-square reconstruction error, \|f - f_n\| / \|f\|, is less than
10e-3 for the neck tumor and less than 3*10e-3 for the healthy neck.
The reconstructed image of the healthy neck in Figure 4.4 was obtained after
20 iterations of the conjugate gradient algorithm with the algorithme a trous
35


filter bank implementation. The reconstructed image of the tumorous neck in
Figure 4.5 was obtained after 25 iterations of the conjugate gradient algorithm.
The sharp variations in the image are preserved but visually, the image looks
lighter in shade. The algorithm reduces the number of wavelet coefficients by
92% for both the normal and the tumorous neck image. Thus image
compression using the modulus maxima technique is quite successful as
shown by the following figures.
36


Figure 4.4 Original and Reconstructed Healthy Neck Image Using the
Modulus Maxima Technique (reconstructed image SNR = 23.9892dB).
Figure 4.5 Original and Reconstructed Neck Tumor Using the Modulus
Maxima Technique (reconstructed image SNR = 32.6934dB).
37


5. Conclusion
Image restoration based on several wavelet transform algorithms has been
demonstrated. The results were shown to be favorable for both denoising
and image compression. Using the conjugate mirror filter algorithm, image
compression and denoising were accomplished by thresholding the wavelet
coefficients. Image restoration was also provided by the algorithme a trous
but with weaker results as compared to the conjugate mirror filter algorithm
and the iterative conjugate gradient algorithm. However in this thesis, the
algorithme a trous was mainly introduced as a gradual introduction to the
conjugate gradient algorithm, which uses the modulus maxima technique as
well as a form of the algorithme a trous to reconstruct an image with minimal
data.
Since x-rays and other medical imagery can quickly fill up a computer's
memory, the modulus maxima technique as well as thresholding can help
alleviate storage problems. Image transmission of medical images can be
done quickly and efficiently with the reduction of data that the modulus
maxima and thresholding provide. Noisy images can also benefit from the
wavelet algorithms because images can be denoised without fear of losing
38


edge details. With the need for quick access to medical images for accurate
analysis and diagnosis especially when the doctor is across town or across
the country, these wavelet algorithms definitely have a future in the medical
field.
39


Appendix A
Conjugate Mirror Filter MATLAB Computer Program
%conj_mirror_filt(decomp_times)
%The following program decomposes a medical image into separate vertical,
%horizontal, and diagonal detail images as well as an approximation image
%using Daubechies db2 filter banks. The user of this program inputs the
%variable, decomp_times, so that the program knows how many levels of
%decomposition are desired. Each level of decomposition generates new
%detailed and approximation images. Then the program reconstructs the
%image from the the details and last approximation. The output of this
%program generates the detailed images and the approximation image for
%each level. The original image and the reconstructed image are also
%displayed as well as the signal-to-noise ratio of the reconstructed image.
function conj_mirror_filt(decomp_times)
%Coefficients for the db2 wavelet
filter = [ .482962913145 .836516303738 ...
.224143868042 -.129409522551 ];
%Read in the image of the cross-section of the healthy neck
A=imread('normal85_f256.bmp');
A=double(A);
Xrgb=0.2990*A(:,:,1 )+0.5870*A(:,:,2)+0.1140*A(:,:,3);
NbColors=256;
X=wcodemat(Xrgb,NbColors);
map=gray(NbColors);
%Low-pass filter for decomposition
h_lp = filter;
%Low-pass filter for reconstruction
g_lp = wrev(h_lp);
%High-pass filter for decomposition
h_hp = -((-1).^(1:length(filter))).*wrev(h_lp);
40


%High-pass filter for reconstruction
g_hp = wrev(h_hp);
%Normalize image
X = X./256;
[M,N] = size(X);
y = X;
lf = length(filter);
%Initialize index to help display the correct sized images at every level
ix = zeros(1,decomp_times);
n = N;
for k = 1 :decomp_times
n = n/2;
ix(k) = n;
end
%Loop for decomposing image
tic
for k = 1 :decomp_times
%Zero-pad the image
%Extend left and right of image with zeros
[M1,N1] = size(y);
right_ext = zeros(M1 ,lf-1);
left_ext = zeros(M1,lf-1);
y = [left_ext y right_ext];
%Extend upper and lower image with zeros
[M2,N2] = size(y);
upper_ext = zeros(lf-1 ,N2);
lower_ext = zeros(lf-1 ,N2);
y = [upper_ext; y; lower_ext];
%Filter image
%Convolve rows of original image or last approximation with...
%...low-pass filter
[M3,N3] = size(y);
for j = 1:M3
approx{k}(j,:) = conv(y(j,:),h_lp);
end
41


%Convolve columns of resultant image with low-pass filter
[A,B] = size(approx{k});
for j = 1:B
b1{k}(:,j) = conv(approx{k}(:,j)',h_lp)';
end
%Convolve columns of the same resultant image with high-pass filter
for j = 1:B
horiz_details{k}(:,j) = conv(approx{k}(:,j)',h_hp)';
end
%Determine the part of the convolved image to keep
s = (B-(M1+lf-1))/2;
first = floor(s)+ 1;
last = B-ceil(s);
%Down-sample...
%Save every other column and row for approximation:
a{k} = b1{k}(first:2:last,first:2:last);
%Save every other column and row for horizontal details:
h{k} = horiz_details{k}(first:2:last,first:2:last);
%Convolve rows of original image or last approximation with...
%...high-pass filter
for j = 1:M3
details{k}(j,:) = conv(y(j,:),h_hp);
end
%Convolve columns of resultant image with high-pass filter
for j = 1:B
c1{k}(:,j) = conv(details{k}(:,j)',h_hp)';
end
%Convolve columns of the same resultant image with low-pass filter
for j = 1:B
vert_details{k}(:,j) = conv(details{k}(:,j)',h_lp)';
end
%Down-sample...
%Save every other column and row for diagonal details:
d{k} = c1{k}(first:2:last,first:2:last);
42


%Save every other column and row for vertical details:
v{k} = vert_details{k}(first:2:last,first:2:last);
y = a{k};
[Q, R] = size(y);
q{k} = [Q R];
end
%Loop for reconstructing image
for k = decomp_times:-1:1
%Upsample and filter approximations...
M = q{k}(1);
N = q{k}(2);
upsamp_a{k} = zeros(2*M,2*N);
for m = 1:M
for n = 1:N
upsamp_a{k}(2*m,2*n) = y(m,n);
end
end
%Convolve columns of upsampled approximation with low-pass filter
for j = 1:2*M
col_a{k}(:,j) = conv(upsamp_a{k}(:,j)',g_lp)';
end
%Convolve rows of resultant image with low-pass filter
[A,B] = size(col_a{k});
for j = 1:A
rec_a{k}(j,:) = conv(col_a{k}(j,:),g_lp);
end
%Keep only that part of the image which is needed
rec_a{k} = rec_a{k}(lf+1:end,lf+1:end);
%Upsample and filter horizontal details...
upsamp_h{k} = zeros(2*M,2*N);
for m = 1:M
for n = 1:N
upsamp_h{k}(2*m,2*n) = h{k}(m,n);
end
end
43


%Convolve columns of upsampled horizontal details with high-pass filter
for j = 1:2*M
col_h{k}(:,j) = conv(upsamp_h{k}(:,j)',g_hp)';
end
%Convolve rows of resultant image with low-pass filter
for j = 1:A
rec_h{k}(j,:) = conv(col_h{k}(j,:),g_lp);
end
%Keep only that part of the image which is needed
rec_h{k} = rec_h{k}(lf+1:end,lf+1:end);
%Upsample and filter diagonal details...
upsamp_d{k} = zeros(2*M,2*N);
for m = 1:M
for n = 1:N
upsamp_d{k}(2*m,2*n) = d{k}(m,n);
end
end
%Convolve columns of upsampled diagonal details with high-pass filter
for j = 1:2*M
col_d{k}(:,j) = conv(upsamp_d{k}(:,j)',g_hp)';
end
%Convolve rows of resultant image with high-pass filter
for j = 1:A
rec_d{k}(j,:) = conv(col_d{k}(j,:),g_hp);
end
%Keep only that part of the image which is needed
rec_d{k} = rec_d{k}(lf+1 :end,lf+1 :end);
%Upsample and filter vertical details...
upsamp_v{k} = zeros(2*M,2*N);
for m = 1 :M
for n = 1:N
upsamp_v{k}(2*m,2*n) = v{k}(m,n);
end
end
44


%Convolve columns of upsampled vertical details with low-pass filter
for j = 1:2*M
col_v{k}(:,j) = conv(upsamp_v{k}(:,j)',g_lp)';
end
%Convolve rows of resultant image with high-pass filter
for j = 1:A
rec_v{k}(j,:) = conv(col_v{k}(j,:),g_hp);
end
%Keep only that part of the image which is needed
rec_v{k} = rec_v{k}(lf+1 :end,lf+1 :end);
%Add the approximation, horizontal, diagonal, and vertical detail images
y = rec_a{k} + rec_h{k} + rec_d{k} + rec_v{k};
end
toc
%Calculate signal-to-noise ratio of reconstructed image
var_s = (std2(X))^2;
var_n = (std2(X - y(1:end-lf+1,1:end-lf+1)))^2;
SNR = 10*log10(var_s/var_n);
%Display results
for k = 1 :decomp_times
figure;
subplot(2,2,1)
imshow(a{k}(2:ix(k)+1,2:ix(k)+1),[]);
title(['Approximation for level ',num2str(k)]);
subplot(2,2,2)
b = abs(1 -(h{k}(2:ix(k)+1,2:ix(k)+1 )./(max(max(h{k}(2:ix(k)+1,2:ix(k)+1))))));
J = imadjust(b, [],[], 10);
subimage(J);
title(['Horizontal details for level ',num2str(k)]);
subplot(2,2,3)
b = abs(1 -(v{k}(2:ix(k)+1,2:ix(k)+1 )./(max(max(v{k}(2:ix(k)+1,2:ix(k)+1))))));
J = imadjust(b, [],[], 10);
subimage(J);
title(['Vertical details for level ',num2str(k)]);
subplot(2,2,4)
b = abs(1 -(d{k}(2:ix(k)+1,2:ix(k)+1 )./(max(max(d{k}(2:ix(k)+1,2:ix(k)+1))))));
J = imadjust(b,[],[],10);
subimage(J);
45


title(['Diagonal details for level ',num2str(k)]);
end
figure;
imshow(y(1 :end-lf+1,1 :end-lf+1),[]);
title(['Reconstructed Image, SNR = ',num2str(SNR),'dB']);
figure
imshow(X,[]);
title('Original Image');
46


Appendix B
Image Denoising MATLAB Computer Program
%denoise(decomp_times, SNR)
%This program is used to demonstrate how wavelets can denoise an image.
%The wavelet used is a Symmlet sym4 discrete filter. The medical image of
%a cross section of a healthy neck is read into the program and, depending
%on the signal-to-noise ratio desired, Gaussian white noise is added to the
%image. The first time through the program, hard thresholding is
%implemented. The second time through, soft thresholding is implemented.
%The user inputs the level of decomposition as well as the signal-to-noise
%ratio desired. The program decomposes the image and then thresholds the
%details. Finally the program reconstructs the image with the thresholded
%detail coefficients. The program displays the clean image, the noisy image,
%and the results of hard and soft thresholding along with the resultant signal-
%to-noise ratio.
function denoise(decomp_times,SNR)
%Coefficients for the sym4 wavelet filter
filter = (sqrt(2))*[ 0.02278517294800 -0.00891235072085 ...
-0.07015881208950 0.21061726710200 ...
0.56832912170500 0.35186953432800 ...
-0.02095548256255 -0.05357445070900 ];
%Read in cross section of healthy neck
A=imread('normal85_f256.bmp');
A=double(A);
Xrgb=0.2990*A(:,:,1)+0.5870*A(:,:,2)+0.1140*A(:,:,3);
NbColors=256;
X=wcodemat(Xrgb,NbColors);
map=gray(NbColors);
[M,N] = size(X);
%Nomalize image
X = X./256;
47


%Low-pass filter for decomposition
h_lp = filter;
%Low-pass filter for reconstruction
g_lp = wrev(h_lp);
%High-pass filter for decomposition
h_hp = -((-1).^(1:length(filter))).*wrev(h_lp);
%High-pass filter for reconstruction
g_hp = wrev(h_hp);
lf = length(filter);
%lnitialize constants for determining non-zero coefficients
nonzeros = 0; %Nonzeros before thresholding
nonzerosh = 0; %Used in hard thresholding
nonzeross = 0; %Used in soft thresholding
%Calculate noise from SNR input
var_s = (std2(X))A2;
var_n = 10^(log10(var_s) - (SNR/10));
eta = sqrt(var_n)*randn(M,N);
%Calculate standard deviation of the noise which will be used to...
%...calculate the threshold value used to denoise the image,
sigma = sqrt(var_n);
for i = 1:2
y = X + eta; %Add noise to signal
%Calculate hard threshold from S. Mallat, "A Wavelet Tour of Signal
%Processing", pp. 462, Academic Press, 1999, 2nd edition,
if i == 1 %Hard thresholding, calculate threshold as follows:
thr = 3*sigma;
end
%Calculate soft threshold from S. Mallat, "A Wavelet Tour of Signal
%Processing", pp. 462, Academic Press, 1999, 2nd edition,
if i == 2 %Soft thresholding, calculate threshold as follows:
thr = 3*sigma/2;
end
48


%Loop for decomposing image
for k = 1 :decomp_times
%Zero-pad the image
%Extend left and right of image with zeros
[M1,N1] = size(y);
right_ext = zeros(M1 ,lf-1);
left_ext = zeros(M1 ,lf-1);
y = [left_ext y right_ext];
%Extend upper and lower image with zeros
[M2,N2] = size(y);
upper_ext = zeros(lf-1,N2);
lower_ext = zeros(lf-1 ,N2);
y = [upper_ext; y; lower_ext];
%Filter image
%Convolve rows of the image with the low-pass filter, hJp
[M3,N3] = size(y);
for j = 1:M3
approx{k}(j,:) = conv(y(j,:),h_lp);
end
%Convolve columns of the resultant image with the...
%...low-pass filter, hjp
[A,B] = size(approx{k});
for j = 1:B
b1{k}(:,j) = conv(approx{k}(:,j)',h_lp)';
end
%Convolve columns of the same resultant image with the high-pass...
%...filter, h_hp
for j = 1:B
horiz_details{k}(:,j) = conv(approx{k}(:,j)',h_hp)';
end
%Determine the part of the convolved image to keep
s = (B-(M1+lf-1))/2;
first = floor(s)+ 1;
last = B-ceil(s);
%Down-sample...
49


%Save every other column and row for approximation
a{k} = b1{k}(first:2:last,first:2:last);
%Save every other column and row for horizontal details
h{k} = horiz_details{k}(first:2:last,first:2:last);
%Convolve rows of the image with the high-pass filter, h_hp
for j = 1:M3
details{k}(j,:) = conv(y(j,:),h_hp);
end
%Convolve columns of the resultant image with the...
%...high-pass filter, h_hp
for j = 1:B
c1{k}(:,j) = conv(details{k}(:,j)',h_hp)';
end
%Convolve columns of the same resultant image with the ...
%...low-pass filter, hjp
for j = 1:B
vert_details{k}(:,j) = conv(details{k}(:,j)',h_lp)';
end
%Down-sample...
%Save every other column and row for diagonal details
d{k} = c1{k}(first:2:last,first:2:last);
%Save every other column and row for vertical details
v{k} = vert_details{k}(first:2:last,first:2:last);
y = a{k};
[Q, R] = size(y);
q{k} = [Q R];
%Calculate amount of nonzero details
nonzeros = nonzeros + 3*Q*R;
%Hard thresholding
if i == 1
h{k} = h{k}.*(abs(h{k})>thr);
d{k} = d{k}.*(abs(d{k})>thr);
v{k} = v{k}.*(abs(v{k})>thr);
%Update nonzero coefficient count
nonzerosh = nonzerosh + nnz(h{k}) + nnz(d{k}) + nnz(v{k});
50


end
%Soft thresholding
if i == 2
x = abs(h{k});
h{k} = sign(h{k}).*(x >= thr).*(x - thr);
x = abs(d{k});
d{k} = sign(d{k}).*(x >= thr).*(x - thr);
x = abs(v{k});
v{k} = sign(v{k}).*(x >= thr).*(x - thr);
%Update nonzero coefficient count
nonzeross = nonzeross + nnz(h{k}) + nnz(d{k}) + nnz(v{k});
end
end
%Loop for reconstructing image
for k = decomp_times:-1:1
%Upsample and filter approximations
M = q{k}(1);
N = q{k}(2);
upsamp_a{k} = zeros(2*M,2*N);
for m = 1:M
for n = 1:N
upsamp_a{k}(2*m,2*n) = y(m,n);
end
end
%Convolve columns of upsampled approximation with low-pass filter
for j = 1:2*M
col_a{k}(:,j) = conv(upsamp_a{k}(:,j)',g_lp)';
end
%Convolve rows of resultant image with low-pass filter
[A,B] = size(col_a{k});
for j = 1:A
sub_a{k}(j,:) = conv(col_a{k}(j,:),g_lp);
end
%Keep only that part of the image which is needed
rec_a{k} = sub_a{k}(lf+1 :end,lf+1 :end);
51


%Upsample and filter horizontal details
upsamp_h{k} = zeros(2*M,2*N);
for m = 1 :M
for n = 1:N
upsamp_h{k}(2*m,2*n) = h{k}(m,n);
end
end
%Convolve columns of upsampled horizontal details...
%...with high-pass filter
for j = 1:2*M
col_h{k}(:,j) = conv(upsamp_h{k}(:,j)',g_hp)';
end
%Convolve rows of resultant image with low-pass filter
for j = 1:A
sub_h{k}(j,:) = conv(col_h{k}(j,:),g_lp);
end
%Keep only that part of the image which is needed
rec_h{k} = sub_h{k}(lf+1 :end,lf+1 :end);
%Upsample and filter diagonal details
upsamp_d{k} = zeros(2*M,2*N);
for m = 1 :M
for n = 1:N
upsamp_d{k}(2*m,2*n) = d{k}(m,n);
end
end
%Convolve columns of upsampled diagonal details with high-pass filter
for j = 1:2*M
col_d{k}(:,j) = conv(upsamp_d{k}(:,j)',g_hp)';
end
%Convolve rows of resultant image with high-pass filter
for j = 1:A
sub_d{k}(j,:) = conv(col_d{k}(j,:),g_hp);
end
%Keep only that part of the image which is needed
rec_d{k} = sub_d{k}(lf+1 :end,lf+1 :end);
52


%Upsample and filter vertical details
upsamp_v{k} = zeros(2*M,2*N);
for m = 1 :M
for n = 1:N
upsamp_v{k}(2*m,2*n) = v{k}(m,n);
end
end
%Convolve columns of upsampled vertical details with low-pass filter
for j = 1:2*M
col_v{k}(:,j) = conv(upsamp_v{k}(:,j)',g_lp)';
end
%Convolve rows of resultant image with high-pass filter
for j = 1:A
sub_v{k}(j,:) = conv(col_v{k}(j,:),g_hp);
end
%Keep only that part of the image which is needed
rec_v{k} = sub_v{k}(lf+1 :end,lf+1 :end);
%Add the approximation, horizontal, diagonal, and vertical ...
%...detail images
y = rec_a{k} + rec_h{k} + rec_d{k} + rec_v{k};
end
%Calculate signal-to-noise ratio of reconstructed image
recon_image{i} = y(1 :end-lf+1,1 :end-lf+1);
var_noise = (std2(X - recon_image{i}))^2;
SNR(i) = 10*log10(var_s/var_noise);
end
SNR_orig = 10*log10(var_s/var_n);
%Percentage of nonzero coefficients remain after thresholding
h_zeros = nonzerosh/nonzeros %For hard thresholding
s_zeros = nonzeross/nonzeros %For soft thresholding
%Display results
figure
subplot(2,2,1)
imshow(X);
53


xlabel('Original Image');
subplot(2,2,2)
imshow(X+eta);
xlabel(['Original Noise Image, SNR = ',num2str(SNR_orig),'dB']);
subplot(2,2,3)
imshow(recon_image{1},[]);
xlabel(['Estimation using Hard Thresholding, SNR = ',num2str(SNR(1)),'dB']);
subplot(2,2,4)
imshow(recon_image{2},[]);
xlabel(['Estimation using Soft Thresholding, SNR = ',num2str(SNR(2)),'dB']);
54


Appendix C
Algorithme A Trous MATLAB Computer Program
%atrous_recon(detail_level)
%This program implements the "algorithme a trous" using spline filters
%for decomposition and reconstruction of the medical image of the
%cross-section of a healthy neck. The user inputs the level of decomposition
%desired and the program generates horizontal and vertical detail images
%and an approximation for each level. Using the detail images and the last
%approximation image, the program reconstructs the original image. This
%program displays the approximation and horizontal and vertical detail
%images for each level. It also displays the original image as well as the
%reconstructed image along with the resultant signal-to-noise ratio.
function atrous_test(detail_level)
%Spline wavelet filters
hf=[.125 .375 .375 .125].*sqrt(2); %Low pass decomp. and recon. filter
gf=[.5 -.5].*sqrt(2); %High pass decomposition filter
%High pass recontruction filter:
gprime = [-0.03125 -0.21875 -0.6875 0.6875 0.21875 0.03125].*sqrt(2);
%Read in the image of the cross-section of a healthy neck
A=imread('normal85_f256.bmp');
A=double(A);
Xrgb=0.2990*A(:,:,1)+0.5870*A(:,:,2)+0.1140*A(:,:,3);
NbColors=256;
X=wcodemat(Xrgb,NbColors);
%Nomalize image
X = X./256;
[M,N] = size(X);
y = X;
save_orig = y; %Save original image to display later
%L=level of detail
55


L=detail_level;
[nr,nc]=size(y);
%-------------------------------
%Loop for decomposing image
%...............................
tic
for k = 1:L
%..............................
%Calculate approximations
%..............................
%Shift either image or latest approximation vertically
for i=1 :nc
a{k}(1:nr,i) = leftshift(y(1:nr,i)',2^(k-1))';
%Circular convolution of low pass filter, hf, with columns from.
%...vertically shifted image.
a{k}(1 :nr,i) = cconv(hf,a{k}(1 :nr,i)')';
end
%Shift either image or latest approximation horizontally
for i=1:nr
a{k}(i,1:nc) = leftshift(a{k}(i,1:nc),2^(k-1));
%Circular convolution of low pass filter, hf, with rows from...
%...horizontally shifted image
a{k}(i,1 :nc) = cconv(hf,a{k}(i,1 :nc));
end;
%.........................................
%Calculate horizontal and vertical details
%.........................................
%Circular convolution of high pass filter, gf, with columns from...
%... image or previous approximation,
for i=1 :nc
d1{k}(1 :nr,i)=cconv(gf,y(1 :nr,i)')';
%Shift resultant image vertically. The result is a horizontal...
%.. .detail image
d1{k}(1:nr,i)=leftshift(d1{k}(1:nr,i)',2^k)';
end;
%Circular convolution of high pass filter, gf, with rows from image
%...or previous approximation,
for i=1 :nr
d2{k}(i,1 :nc)=cconv(gf,y(i,1 :nc));
56


%Shift resultant image horizontally. The result is a vertical...
%...detail image
d2{k}(i,1:nc)=leftshift(d2{k}(i,1:nc),2^k);
end;
%Dilate filters
hf=[dyadup(hf,2) 0]; %a trous
hprime=hf;
gf=[dyadup(gf,2) 0]; %a trous
gprime=[dyadup(gprime,2) 0];
y = a{k};
end;
%------------------------------
%Loop for reconstructing image
for k = L:-1:1
%Decimate filters
gprime = dyaddown(gprime,3); %high pass filter
hprime = dyaddown(hprime,3); %low pass filter
%Circular convolve rows of approximation, y, with the...
%...low pass filter hprime
for i=1 :nr
rec_a{k}(i,1 :nc) = cconv(hprime,y(i,1 :nc));
%Shift resultant image horizontally
rec_a{k}(i,1:nc) = leftshift(rec_a{k}(i,1:nc),2^k);
end;
%Circular convolve columns of resultant image with the...
%...low pass filter hprime
for i=1:nc
rec_a{k}(1:nr,i) = cconv(hprime,rec_a{k}(1:nr,i)');
%Shift resultant image vertically
rec_a{k}(1:nr,i) = leftshift(rec_a{k}(1:nr,i)',2^k)';
end;
%Shift horizontal details vertically
for i=1:nc
rec_d1{k}(1:nr,i)=leftshift(d1{k}(1:nr,i)',2^(k-1))';
%Circular convolve columns with high pass filter, gprime
rec_d1{k}(1:nr,i)=cconv(gprime,rec_d1{k}(1:nr,i)');


end;
%Shift vertical details horizontally
for i=1:nr
rec_d2{k}(i,1:nc)=leftshift(d2{k}(i,1:nc),2^(k-1));
%Circular convolve rows with high pass filter, gprime
rec_d2{k}(i,1:nc)=cconv(gprime,rec_d2{k}(i,1:nc));
end;
%Add new approximation image to the new horizontal and vertical...
%...details and multiply results by 1/4. This generates the new...
%...approximation for the next level up. The final approximation...
%...will be the reconstructed image,
y = (1/4)*(rec_a{k} + rec_d1{k} + rec_d2{k});
end
toc
%Calculate nonzero coefficients
nonzeros = nnz(a{3});
for k = 1:L
nonzeros = nonzeros + nnz(d1{k}) + nnz(d2{k});
end
nonzeros
%Calculate signal-to-noise ratio
var_s = (std2(save_orig))^2;
var_n = (std2(save_orig - y))^2;
snr = 10*log10(var_s/var_n);
%Display results
figure;
%Display approximations
subplot(3,3,1)
imshow(a{1},[]);
subplot(3,3,4)
imshow(a{2},[]);
subplot(3,3,7)
imshow(a{3},[]);
set(gca,'Visible','on');
set(gca,'XTickLabel',[]);
set(gca,'YTickLabel',[]);
set(gca,'XTick',[]);


set(gca,'YTick',[]);
xlabel('(a)');
%Display horizontal details
subplot(3,3,2)
b = abs(1-(d1{1}./(max(max(d1{1})))));
J = imadjust(b, [],[], 10);
imshow(J);
set(gca,'Visible','on');
set(gca,'XTickLabel',[]);
set(gca,'YTickLabel',[]);
set(gca,'XTick',[]);
set(gca,'YTick',[]);
subplot(3,3,5)
b = abs(1-(d1{2}./(max(max(d1{2})))));
J = imadjust(b, [],[], 10);
imshow(J);
set(gca,'Visible','on');
set(gca,'XTickLabel',[]);
set(gca,'YTickLabel',[]);
set(gca,'XTick',[]);
set(gca,'YTick',[]);
subplot(3,3,8)
b = abs(1-(d1{3}./(max(max(d1{3})))));
J = imadjust(b, [],[], 10);
imshow(J);
set(gca,'Visible','on');
set(gca,'XTickLabel',[]);
set(gca,'YTickLabel',[]);
set(gca,'XTick',[]);
set(gca,'YTick',[]);
xlabel('(b)');
%Display vertical details
subplot(3,3,3)
b = abs(1-(d2{1}./(max(max(d2{1})))));
J = imadjust(b,[],[], 10);
imshow(J);
set(gca,'Visible','on');
set(gca,'XTickLabel',[]);
set(gca,'YTickLabel',[]);
set(gca,'XTick',[]);
set(gca,'YTick',[]);


subplot(3,3,6)
b = abs(1-(d2{2}./(max(max(d2{2})))));
J = imadjust(b,[],[],10);
imshow(J);
set(gca,'Visible','on');
set(gca,'XTickLabel',[]);
set(gca,'YTickLabel',[]);
set(gca,'XTick',[]);
set(gca,'YTick',[]);
subplot(3,3,9)
b = abs(1-(d2{3}./(max(max(d2{3})))));
J = imadjust(b,[],[],10);
imshow(J);
set(gca,'Visible','on');
set(gca,'XTickLabel',[]);
set(gca,'YTickLabel',[]);
set(gca,'XTick',[]);
set(gca,'YTick',[]);
xlabel('(c)');
figure
subplot(1,2,2)
imshow(y,[]);
xlabel(['Reconstructed Image SNR = ',num2str(snr),'dB']);
subplot(1,2,1)
imshow(save_orig,[]);
xlabel('Original Image');


Appendix D
Image Reconstruction from Modulus Maxima
Using the Conjugate Gradient Algorithm
%recon_mm(lvl,type)
%This program decomposes either the cross-section of a healthy neck or a
%tumorous neck, depending on the value of the variable, "type".
%If type = 't', the tumorous image will be analyzed.
%If type = 'h', the healthy neck image will be analyzed.
%Decomposition is done using the "algorithme a trous" method.
%The modulus maxima matrix masks are generated for each level where the
%number of levels is specified through the variable, "Ivl". Only those details
%that correspond to those masks are used to reconstruct the image through
%an iterative algorithm called the conjugate gradient algorithm. Modulus
%maxima matrix masks for each level are displayed. Original and
%reconstructed images are also displayed along with the resultant signal-to-
%noise ratio.
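%A typical invocation (assumed usage; three levels match the modulus maxima
%displays generated by mm_atrous) would be:
%   recon_mm(3,'h')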
function recon_mm(lvl,type)
if type == 't'
A=imread('tumor28_f256.bmp');
rmsr_err = 10e-3; %Relative mean-square reconstruction error
elseif type == 'h'
A=imread('normal85_f256.bmp');
rmsr_err = 3*10e-3; %Relative mean-square reconstruction error
else
error('Wrong type! Choose either t for tumorous image or h for healthy neck image');
end
%Convert RGB image to grayscale
A=double(A);
Xrgb=0.2990*A(:,:,1)+0.5870*A(:,:,2)+0.1140*A(:,:,3);
NbColors=256;
X=wcodemat(Xrgb,NbColors);
map=gray(NbColors);


%Normalize image
X = X./256;
y = X;
save_orig = y; %Save original image to display later
[nr,nc]=size(y);
%Convert 2-D image into a 1-D vector
orig_vector = y(1,:);
for i = 2:nr
orig_vector = [orig_vector y(i,:)];
end
%Decompose image
%a = last level approximation
%D1_MM = contains all levels of the horizontal details corresponding to the...
%...modulus maxima matrix masks
%D2_MM = contains all levels of the vertical details corresponding to the...
%...modulus maxima matrix masks
%gprime = reconstruction high pass filter
%hprime = reconstruction low pass filter
[a, D1_MM, D2_MM, gprime, hprime] = mm_atrous(lvl,y);
%Generate modulus maxima masks
u_d1 = (abs(D1_MM) > 0);
u_d2 = (abs(D2_MM) > 0);
%Initialize search direction, p, with the initial gradient which is similar to the...
%...first approximation of the original image,
p = atrous_up(lvl, hprime, gprime, a, D1_MM, D2_MM);
f = zeros(1,nr*nc); %Initialize f which is the image approximation
pold = zeros(1,nr*nc); %Initialize pold which is the previous search direction
%Initialize r which is the residual vector. This involves changing...
%...the p matrix into a vector.
r = p(1,:);
for i = 2:nr
r = [r p(i,:)];
end
%Initialize Lpold which is the previous orthogonal projection of p onto...


%...the image space
Lpold = zeros(1,nr*nc);
i = 0;
%Continue to iterate the conjugate gradient algorithm until the relative...
%...mean-square reconstruction error is below rmsr_err.
while ((norm(f - orig_vector))/(norm(orig_vector))) > rmsr_err
i = i+1;
%Invoke conjugate gradient algorithm
%fnew updated image vector
%pnew updated search direction
%rnew updated residual vector
%Lp orthogonal projection vector
[fnew,pnew,rnew,Lp] = ConjGrad_2d(f,p,pold,r,u_d1,u_d2,lvl,Lpold);
f = fnew;
%Transfer current search direction to previous search direction...
%...and change 2-D search direction into 1-D vector
pold = p(1,:);
for j = 2:nr
pold = [pold p(j,:)];
end
%Update search direction and change 1-D vector into 2-D matrix
p = pnew(1:nc);
for j = 1:nr-1
p = [p; pnew(1+(j*nc):(j+1)*nr)];
end
r = rnew; %Update residual vector
%Transfer current orthogonal projection to previous orthogonal projection:
Lpold = Lp;
end
%Transform image vector into 2-D image
f_image = f(1:nc);
for j = 1:nr-1
f_image = [f_image; f(1+(j*nc):(j+1)*nr)];
end
i
%Normalize image
f_image = f_image./(max(max(f_image)));


%Calculate signal-to-noise ratio
var_s = (std2(save_orig))^2;
var_n = (std2(save_orig - f_image))^2;
snr = 10*log10(var_s/var_n);
%Display results
figure
imshow(save_orig,[]);
xlabel('Original Image');
figure
imshow(f_image,[]);
xlabel(['Reconstructed Image from the Modulus Maxima SNR = ',num2str(snr),'dB']);


Appendix E
MATLAB Computer Program to Decompose
Image to be Reconstructed by the Conjugate
Gradient Algorithm
%mm_atrous(lvl,y)
%This function is called by recon_mm(lvl,type) to help generate the modulus
%maxima matrix masks needed to reconstruct an image with minimal data.
%This function generates the masked horizontal and vertical details along
%with the last approximation which is used to generate the initial search
%direction for the conjugate gradient algorithm.
%Inputs:
%lvl = number of levels of decomposition
%y = original image
%Outputs:
%y = last level approximation image
%D1_MM = contains masked horizontal details from every level
%D2_MM = contains masked vertical details from every level
%gprime = reconstruction high pass filter
%hprime = reconstruction low pass filter
%Finally the program displays the modulus maxima matrix masks for each
%level.
function [y, D1_MM, D2_MM, gprime, hprime] = mm_atrous(lvl,y)
%Spline wavelet filters
hf=[.125 .375 .375 .125].*sqrt(2); %Low pass decomposition and reconstruction filter
gf=[.5 -.5].*sqrt(2); %High pass decomposition filter
gprime = [0 0 -.5 .5 0 0].*sqrt(2); %High pass reconstruction filter
%L=level of detail
L=lvl;
[nr,nc]=size(y);
nonzeros = L*2*nr*nc;
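%(This counts all detail coefficients: two detail images of size nr x nc at
%each of the L levels. It is used at the end of this function to report the
%fraction of coefficients retained by the modulus maxima masks.)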


%Loop for decomposing image
for k = 1:L
%...............................
%calculate approximations
%-------------------------------
%shift either image or latest approximation vertically
for i=1:nc
a{k}(1:nr,i) = leftshift(y(1:nr,i)',2^(k-1))';
%circular convolution of hf with columns from vertically shifted image
a{k}(1:nr,i) = cconv(hf,a{k}(1:nr,i)')';
end
%shift either image or latest approximation horizontally
for i=1:nr
a{k}(i,1:nc) = leftshift(a{k}(i,1:nc),2^(k-1));
%circular convolution of hf with rows from horizontally shifted image
a{k}(i,1:nc) = cconv(hf,a{k}(i,1:nc));
end;
%...........................................
%calculate horizontal and vertical details
%...........................................
%Circular convolution of high pass filter, gf, with columns from image or...
%... previous approximation
for i=1:nc
d1{k}(1:nr,i)=cconv(gf,y(1:nr,i)')';
%Shift resultant image vertically. The result is a horizontal detail...
%... image
d1{k}(1:nr,i)=leftshift(d1{k}(1:nr,i)',2^k)';
end;
%Circular convolution of high pass filter, gf, with rows from image or...
%... previous approximation
for i=1:nr
d2{k}(i,1:nc)=cconv(gf,y(i,1:nc));
%Shift resultant image horizontally. The result is a vertical detail...
%... image
d2{k}(i,1:nc)=leftshift(d2{k}(i,1:nc),2^k);
end;


%Dilate filters
hf=[dyadup(hf,2) 0]; %a trous
hprime=hf;
gf=[dyadup(gf,2) 0]; %a trous
gprime = [dyadup(gprime,2) 0];
y = a{k};
end;
%Set threshold for modulus maxima detection
Threshold = 0.02;
%Calculate the modulus maxima for each level
for k = L:-1:1
wtm{k} = sqrt(d1{k}.^2 + d2{k}.^2); %Wavelet Transform Modulus
[mod_m,mod_n] = size(wtm{k});
%Calculate angle of the wavelet transform vector
for qt = 1:nr
for r = 1 :nc
alpha{k}(qt,r) = atan2(d2{k}(qt,r),d1{k}(qt,r));
if alpha{k}(qt,r) < 0
alpha{k}(qt,r)=2*pi+alpha{k}(qt,r);
end
end
end
%..................................
%calculate modulus maxima image
%..................................
for r=1:mod_m
for c=1:mod_n
mod_max{k}(r,c)=255; %Initialize modulus maxima array
end
end
%find local maximum of modulus
ang=pi/8;
for r=2:mod_m-1
for c=2:mod_n-1
if (alpha{k}(r,c)>=(15*ang)|...
alpha{k}(r,c)<=(1*ang))&...
wtm{k}(r,c)>=Threshold &...
wtm{k}(r,c)>=wtm{k}(r-1,c) & wtm{k}(r,c) >= wtm{k}(r+1,c)
mod_max{k}(r,c)=0;
elseif (alpha{k}(r,c)>=(1*ang)&...
alpha{k}(r,c)<=(3*ang))&...
wtm{k}(r,c)>=Threshold &...
wtm{k}(r,c)>=wtm{k}(r-1,c-1) & wtm{k}(r,c) >= wtm{k}(r+1,c+1)
mod_max{k}(r,c)=0;
elseif (alpha{k}(r,c)>=(3*ang)&...
alpha{k}(r,c)<=(5*ang))&...
wtm{k}(r,c)>=Threshold &...
wtm{k}(r,c)>=wtm{k}(r,c-1) & wtm{k}(r,c) >= wtm{k}(r,c+1)
mod_max{k}(r,c)=0;
elseif (alpha{k}(r,c)>=(5*ang)&...
alpha{k}(r,c)<=(7*ang))&...
wtm{k}(r,c)>=Threshold &...
wtm{k}(r,c)>=wtm{k}(r-1,c+1) & wtm{k}(r,c) >= wtm{k}(r+1,c-1)
mod_max{k}(r,c)=0;
elseif (alpha{k}(r,c)>=(7*ang)&...
alpha{k}(r,c)<=(9*ang))&...
wtm{k}(r,c)>=Threshold &...
wtm{k}(r,c)>=wtm{k}(r-1,c) & wtm{k}(r,c) >= wtm{k}(r+1,c)
mod_max{k}(r,c)=0;
elseif (alpha{k}(r,c)>=(9*ang)&...
alpha{k}(r,c)<=(11*ang))&...
wtm{k}(r,c)>=Threshold &...
wtm{k}(r,c)>=wtm{k}(r-1,c-1) & wtm{k}(r,c) >= wtm{k}(r+1,c+1)
mod_max{k}(r,c)=0;
elseif (alpha{k}(r,c)>=(11*ang)&...
alpha{k}(r,c)<=(13*ang))&...
wtm{k}(r,c)>=Threshold &...
wtm{k}(r,c)>=wtm{k}(r,c-1) & wtm{k}(r,c) >= wtm{k}(r,c+1)
mod_max{k}(r,c)=0;
elseif (alpha{k}(r,c)>=(13*ang)&...
alpha{k}(r,c)<=(15*ang))&...
wtm{k}(r,c)>=Threshold &...
wtm{k}(r,c)>=wtm{k}(r-1,c+1) & wtm{k}(r,c) >= wtm{k}(r+1,c-1)
mod_max{k}(r,c)=0;
end
end
end
end
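%For reference, a compact (hypothetical) restatement of the eight-sector test
%inside the loops above: quantize the angle into one of four sectors (multiples
%of pi/4) and compare the modulus with the corresponding pair of neighbors.
%   sector = mod(round(alpha{k}(r,c)/(pi/4)),4);
%   offs = [1 0; 1 1; 0 1; 1 -1];   %neighbor offsets used by the sectors above
%   dr = offs(sector+1,1); dc = offs(sector+1,2);
%   if wtm{k}(r,c) >= Threshold && ...
%      wtm{k}(r,c) >= wtm{k}(r-dr,c-dc) && wtm{k}(r,c) >= wtm{k}(r+dr,c+dc)
%       mod_max{k}(r,c) = 0;
%   end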


%Build vertical and horizontal detail matrices that include only those ...
%...coefficients that are located at the modulus maxima; otherwise, the...
%...coefficients are set to zero,
for k = 1:L
d1_mm{k} = zeros(nr,nc);
d2_mm{k} = zeros(nr,nc);
for i = 1:nr
for j = 1:nc
if mod_max{k}(i,j) == 0
d1_mm{k}(i,j) = d1{k}(i,j);
d2_mm{k}(i,j) = d2{k}(i,j);
end
end
end
D1_MM(:,:,k) = d1_mm{k};
D2_MM(:,:,k) = d2_mm{k};
end
mm_nonzeros = nnz(D1_MM) + nnz(D2_MM);
percent = mm_nonzeros/nonzeros
%Display modulus maxima matrix mask
figure
subplot(1,3,1)
imshow(mod_max{1},[]);
xlabel('Modulus Maxima for Level 1');
subplot(1,3,2)
imshow(mod_max{2},[]);
xlabel('Modulus Maxima for Level 2');
subplot(1,3,3)
imshow(mod_max{3},[]);
xlabel('Modulus Maxima for Level 3');


Appendix F
MATLAB Computer Program for Generation of
the Initial Search Direction Used in the
Conjugate Gradient Algorithm
%atrous_up(levels,hprime,gprime,a,D1_MM,D2_MM)
%This program is called by recon_mm() initially to generate the first search
%direction for the conjugate gradient algorithm. All subsequent calls to this
%program are done within the program, ConjGrad_2d(), to generate the
%orthogonal projection of p (the search direction) onto the image space. This
%program resembles the algorithme a trous image reconstruction algorithm.
%Inputs:
%levels = number of reconstruction levels
%hprime = low pass reconstruction filter
%gprime = high pass reconstruction filter
%D1_MM = horizontal detail matrices
%D2_MM = vertical detail matrices
%Output:
%The first time this program is called, p = search direction
%For subsequent calls, p = orthogonal projection of p onto the image plane
function p = atrous_up(levels,hprime,gprime,a,D1_MM, D2_MM)
%L=level of detail
L=levels;
[nr,nc]=size(a);
y = a;
%.......................................................................
%Loop for generating initial search direction/orthogonal projection for the...
%...conjugate gradient algorithm
%.......................................................................
for k = L:-1:1
%Decimate filters
gprime = dyaddown(gprime,3); %High pass filter
hprime = dyaddown(hprime,3); %Low pass filter


%Circular convolve rows of approximation, y, with the low pass filter hprime
for i=1:nr
rec_a{k}(i,1:nc) = cconv(hprime,y(i,1:nc));
%Shift resultant image horizontally
rec_a{k}(i,1:nc) = leftshift(rec_a{k}(i,1:nc),2^k);
end;
%Circular convolve columns of resultant image with the low pass filter,...
%... hprime
for i=1:nc
rec_a{k}(1:nr,i) = cconv(hprime,rec_a{k}(1:nr,i)')';
%Shift resultant image vertically
rec_a{k}(1:nr,i) = leftshift(rec_a{k}(1:nr,i)',2^k);
end;
%Shift horizontal details vertically
for i=1:nc
rec_d1{k}(1:nr,i)=leftshift(D1_MM(1:nr,i,k)',2^(k-1))';
%Circular convolve columns with high pass filter, gprime
rec_d1{k}(1:nr,i)=cconv(gprime,rec_d1{k}(1:nr,i)')';
end;
%Shift vertical details horizontally
for i=1:nr
rec_d2{k}(i,1:nc)=leftshift(D2_MM(i,1:nc,k),2^(k-1));
%Circular convolve rows with high pass filter, gprime
rec_d2{k}(i,1:nc)=cconv(gprime,rec_d2{k}(i,1:nc));
end;
%Add new approximation image to the new horizontal and vertical details.
%This generates the new approximation for the next level up. The final...
%...approximation will be the first search direction for the conjugate...
%...gradient algorithm,
y = rec_a{k} + rec_d1{k} + rec_d2{k};
end
p = y;


Appendix G
Conjugate Gradient MATLAB Computer Program
%ConjGrad_2d(f,q,pold,r,u_d1,u_d2,lvl,Lpold)
%This program is the implementation of the conjugate gradient algorithm.
%This routine gets invoked many times until the reconstructed image reaches
%a certain mean-squared error.
%Inputs:
%f = reconstructed image from previous iteration
%q = search direction in matrix form
%pold = previous search direction in vector form
%r = residual vector
%u_d1 = modulus maxima matrix masks for horizontal details at every level
%u_d2 = modulus maxima matrix masks for vertical details at every level
%lvl = number of decomposition/reconstruction levels
%Lpold = previous orthogonal projection of the search direction in vector form
%Outputs:
%fnew = new reconstructed image
%pnew = new search direction
%rnew = new residual vector
%Lp = current orthogonal projection of the search direction, p, onto the image
%plane.
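%As implemented below, the updates follow the standard conjugate gradient
%pattern (using <.,.> for the inner product): lambda = <r,p>/<p,Lp>,
%fnew = f + lambda*p, rnew = r - lambda*Lp, and
%pnew = Lp - (<Lp,Lp>/<p,Lp>)*p - (<Lp,Lpold>/<pold,Lpold>)*pold when a
%previous search direction exists.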
function [fnew,pnew,rnew,Lp] = ConjGrad_2d(f,q,pold,r,u_d1,u_d2,lvl,Lpold);
[nr,nc] = size(q);
%Decomposition of the search direction where only those details (D1_MM...
%...and D2_MM) that correspond to the modulus maxima matrix masks are...
%...kept. The variables, hprime and gprime, are the reconstruction filters...
%...to be used in atrous_up(). The variable, y, is the approximation of the...
%...last level also to be used in atrous_up().
[y, D1_MM, D2_MM, hprime, gprime] = atrous_down(q,lvl,u_d1,u_d2);
%Transform 2-D search direction into vector
p = q(1,:);
for i = 2:nr


p = [p q(i,:)];
end
%Calculate orthogonal projection
Lq = atrous_up(lvl, hprime, gprime, y, D1_MM, D2_MM);
%Transform 2-D projection into vector
Lp = Lq(1,:);
for i = 2:nr
Lp = [Lp Lq(i,:)];
end
lambda = (r*p')/(p*Lp'); %Calculate step size
fnew = f + lambda*p; %Update resultant image
rnew = r - lambda*Lp; %Update residual vector
%Update search direction
pnew = Lp - ((Lp*Lp')/(p*Lp'))*p;
if ~(pold == 0),
pnew = pnew - ((Lp*Lpold')/(pold*Lpold'))*pold;
end


Appendix H
MATLAB Computer Program for Decomposing
the Search Direction Used in the Conjugate
Gradient Algorithm
%atrous_down(y,lvl,u_d1,u_d2)
%This function is called by ConjGrad_2d() to decompose the search direction
%into horizontal and vertical details as well as the final approximation at
%level = Ivl. Only the details that correspond to the modulus maxima matrix
%masks are kept.
%Inputs:
%y = search direction
%lvl = number of decomposition levels
%u_d1 = modulus maxima matrix masks for horizontal details
%u_d2 = modulus maxima matrix masks for vertical details
%Outputs:
%y = final approximation
%D1_MM = masked horizontal detail matrix for each level
%D2_MM = masked vertical detail matrix for each level
%hprime = low pass reconstruction filter
%gprime = high pass reconstruction filter
function [y, D1_MM, D2_MM, hprime, gprime] = atrous_down(y,lvl,u_d1,u_d2)
%Spline wavelet filters
hf=[.125 .375 .375 .125].*sqrt(2); %Low pass decomp/recon filter
gf=[.5 -.5].*sqrt(2); %High pass decomposition filter
gprime = [0 0 -.5 .5 0 0].*sqrt(2); %High pass reconstruction filter
%L=level of detail
L=lvl;
[nr,nc]=size(y);
%..............................
%Loop for decomposing search direction
%......................................
for k = 1:L


%...............................
%calculate approximations
%...............................
%shift either image or latest approximation vertically
for i=1:nc
a{k}(1:nr,i) = leftshift(y(1:nr,i)',2^(k-1))';
%circular convolution of hf with columns from vertically shifted image
a{k}(1:nr,i) = cconv(hf,a{k}(1:nr,i)')';
end
%shift either image or latest approximation horizontally
for i=1:nr
a{k}(i,1:nc) = leftshift(a{k}(i,1:nc),2^(k-1));
%circular convolution of hf with rows from horizontally shifted image
a{k}(i,1:nc) = cconv(hf,a{k}(i,1:nc));
end;
%------------------------------------------
%calculate horizontal and vertical details
%..........................................
%Circular convolution of high pass filter, gf, with columns from image or...
%...previous approximation
for i=1:nc
d1{k}(1:nr,i)=cconv(gf,y(1:nr,i)')';
%Shift resultant image vertically. The results are horizontal details.
d1{k}(1:nr,i)=leftshift(d1{k}(1:nr,i)',2^k)';
end;
%Circular convolution of high pass filter, gf, with rows from image or...
%...previous approximation
for i=1:nr
d2{k}(i,1:nc)=cconv(gf,y(i,1:nc));
%Shift resultant image horizontally. The results are vertical details.
d2{k}(i,1:nc)=leftshift(d2{k}(i,1:nc),2^k);
end;
%Dilate filters
hf=[dyadup(hf,2) 0]; %a trous
hprime = hf;
gf=[dyadup(gf,2) 0]; %a trous
gprime = [dyadup(gprime,2) 0];
y = a{k};


D1_MM(:,:,k) = d1{k}; %Save horizontal details at level k
D2_MM(:,:,k) = d2{k}; %Save vertical details at level k
end;
%Save only the horizontal details that correspond to the modulus maxima...
%... masks.
D1_MM = u_d1.*D1_MM;
%Save only the vertical details that correspond to the modulus maxima...
%... masks.
D2_MM = u_d2.*D2_MM;


Appendix I
Utilities
%This function circularly shifts a vector to the left by LengthOfShift
%positions, moving values from the front of the vector to the end.
function ShiftedVector=leftshift(Vector,LengthOfShift)
LengthOfVector=length(Vector);
for i=1 :LengthOfShift
Temp=Vector(1);
TempVector=Vector(2:LengthOfVector);
Vector=[TempVector Temp];
end
ShiftedVector=Vector;
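%Example (traced from the loop above): leftshift([1 2 3 4],1) returns [2 3 4 1].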
%This function does a circular convolution between an input vector, x,
%and a discrete filter, f.
function y = cconv(f,x)
m = length(x);
r = length(f);
%If filter length is smaller than the input vector:
if r <= m,
x_extended = [x((m+1-r):m) x];
%If filter length is bigger than the input vector:
else
z = zeros(1,r);
for i=1:r,
q = r*m -r + i-1;
imod = 1 + rem(q,m);
z(i) = x(imod);
end
x_extended = [z x];
end
%Filter the signal
intermediate_y = filter(f,1,x_extended);
%Save only the part of the vector that has the circular convolution results:
y = intermediate_y(r+1:m+r);
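%Quick check (worked by hand from the code above): cconv([1 1],[1 2 3 4])
%should return [5 3 5 7], the circular convolution of the two sequences.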


REFERENCES
Donoho, D., Duncan, M. R., Huo, X., & Levi, O. (1999). Wavelab 802 for
MATLAB 5.x [Library of MATLAB software programs]. Retrieved from
http://www-stat.stanford.edu/~wavelab.
Grochenig, K. (1993). Acceleration of the frame algorithm. IEEE Transactions
on Signal Processing, 41(12), 3331-3340.
Hsung, T., Lun, D.P., & Siu, W. (1998). A deblocking technique for block-
transform compressed image using wavelet transform modulus
maxima. IEEE Transactions on Image Processing, 7(10), 1488-1496.
Hwang, W. L. & Mallat, S. (1992). Singularity detection and processing with
wavelets. IEEE Transactions on Information Theory, 38(2), 617-643.
Kawata, S. & Nalcioglu, O. (1985). Constrained iterative reconstruction by the
conjugate gradient method. IEEE Transactions on Medical Imaging,
MI-4(2), 65-71.
Mallat, S. (2001). A wavelet tour of signal processing (2nd ed.). New York:
Academic Press.
Mallat, S. & Zhong, S. (1992). Characterization of signals from multiscale
edges. IEEE Transactions on Pattern Analysis and Machine
Intelligence, 14(7), 710-732.
Polikar, R. (1999). The engineer's ultimate guide to wavelet analysis: The
wavelet tutorial. Retrieved October 13, 2002, from the Rowan
University College of Engineering Web site:
http://engineering.rowan.edu/~polikar/WAVELETS/WTtutorial.html.
Shensa, M. (1992). The discrete wavelet transform: Wedding the a trous and
Mallat algorithms. IEEE Transactions on Signal Processing, 40(10),
2464-2482.