Citation
Image denoising and compression based on wavelets

Material Information

Title:
Image denoising and compression based on wavelets
Creator:
Ly, Dieu M
Publication Date:
Language:
English
Physical Description:
xii, 103 leaves : illustrations ; 28 cm

Subjects

Subjects / Keywords:
Electronic noise ( lcsh )
Image processing -- Digital techniques ( lcsh )
Wavelets (Mathematics) ( lcsh )
Electronic noise ( fast )
Image processing -- Digital techniques ( fast )
Wavelets (Mathematics) ( fast )
Genre:
bibliography ( marcgt )
theses ( marcgt )
non-fiction ( marcgt )

Notes

Bibliography:
Includes bibliographical references (leaves 100-103).
General Note:
Department of Electrical Engineering
Statement of Responsibility:
by Dieu M. Ly.

Record Information

Source Institution:
University of Colorado Denver
Holding Location:
Auraria Library
Rights Management:
All applicable rights reserved by the source institution and holding location.
Resource Identifier:
656249389 ( OCLC )
ocn656249389
Classification:
LD1193.E54 2010m L9 ( lcc )

Full Text
IMAGE DENOISING AND COMPRESSION BASED ON WAVELETS
Dieu M. Ly
B.S., University of Colorado Denver, 2007
A thesis submitted to the
University of Colorado Denver
in partial fulfillment
of the requirements for the degree of
Master of Science
Electrical Engineering
2010


This thesis for the Master of Science
degree by
Dieu M. Ly
has been approved
by
Date


Ly, Dieu M. (M.S., Electrical Engineering)
Image Denoising and Compression Based on Wavelets.
Thesis directed by Professor Jan T. Bialasiewicz
ABSTRACT
In modern communication, a signal or image received through transmission is
always corrupted with noise. Noise removal, or denoising, involves manipulating the
image data to improve visual image quality. For the past two decades, wavelets have
been widely used to remove noise from transmitted visual information in the form of
digital images, and they remain a popular tool for other signal processing tasks
such as image compression and enhancement. First, this thesis presents an in-depth
review of the wavelet transform literature. Second, it introduces the basic concept of
Gaussian noise, which is widely used in signal and image processing. The thesis then
investigates denoising techniques based on the wavelet approach, namely
VisuShrink, SureShrink, and BayesShrink. The steps of the study are: corrupt the
image with Gaussian noise and compute its Discrete Wavelet Transform (DWT);
perform hard or soft thresholding on the wavelet coefficients of the noisy
image; and then apply the Inverse Discrete Wavelet Transform (IDWT) to reconstruct
the image. Lastly, the thesis focuses on an application to image compression, the
Wavelet Transform Modulus Maxima (WTMM). When implemented, the WTMM uses
the à trous algorithm to generate the approximation, horizontal and vertical detail
coefficients of the test image. Instead of down-sampling the filtered image, the à trous
lowpass and highpass filters are obtained by inserting zeros between the filter
coefficients at each decomposition level; this creates holes ("à trous" means "with
holes" in French). The Conjugate Gradient Accelerated Algorithm is then used to
reconstruct an image from its WTMM. The data collected from each denoising
technique and from the compression provide a comparative study. The performance of
each application is compared in terms of Root Mean-Square Error (RMSE), Peak
Signal-to-Noise Ratio (PSNR), compression rate, and visual image quality.
This abstract accurately presents the content of the candidate's thesis. I recommend
its publication.
Signed
Jan T. Bialasiewicz


DEDICATION
I dedicate this thesis to my parents, Phet and Saphine, and my wife, Sanh, for their
constant support, which gave me the opportunity to pursue higher education. My
deepest gratitude goes to them for believing that I could make it to graduate school.
Thanks to my wife, who has been patient and has given me the encouragement to
complete this thesis.


ACKNOWLEDGEMENT
My thanks to Jan Bialasiewicz, thesis advisor, for his contribution and guidance in
my research. Through his interest in the subject, I took classes such as digital signal
processing and digital image processing, which helped me understand wavelet
applications within image processing. I also thank all the members of my committee
for their participation.


TABLE OF CONTENTS
Figures....................................................................ix
Tables....................................................................xii
Chapter
1. Introduction............................................................1
1.1 Objective...............................................................1
1.2 Thesis Outline..........................................................2
2. Wavelet Transform.......................................................4
2.1 Continuous Wavelet Transform (CWT)......................................4
2.2 Multiresolution Analysis (MRA)..........................................6
2.3 Discrete Wavelet Transform (DWT).......................................11
2.4 Mallat's Decomposition and Reconstruction Algorithm...................16
2.5 Filter Banks...........................................................19
2.6 Summary................................................................23
3. Noise Model and Error Measurements.....................................24
3.1 Gaussian Noise.........................................................24
3.2 Error Measurements.....................................................25
3.3 Summary................................................................27
4. Wavelet Denoising Methods..............................................28
4.1 Thresholding...........................................................29


4.1.1 Hard Thresholding..................................................30
4.1.2 Soft Thresholding..................................................30
4.1.3 Universal Thresholding.............................................31
4.2 VisuShrink...........................................................32
4.3 SureShrink...........................................................34
4.4 BayesShrink.........................................................37
4.5 Summary.............................................................39
5. Wavelet Transform Modulus Maxima (WTMM)...............................41
5.1 The A Trous Algorithm................................................42
5.2 Reconstruction from Wavelet Transform Modulus Maxima (WTMM)..........49
5.3 Summary..............................................................53
6. Results and Conclusions...............................................54
6.1 Results..............................................................54
6.2 Conclusions..........................................................64
Appendix
A. VisuShrink Matlab Program............................................65
B. SureShrink Matlab Program............................................70
C. BayesShrink Matlab Program...........................................78
D. Wavelet Transform Modulus Maxima Matlab Programs.....................86
References..............................................................100


LIST OF FIGURES
Figure
2.1 The Morlet Wavelet Function...........................................5
2.2 The Relationship between Scaling and Wavelet Function Spaces..........7
2.3 Haar Scaling and Wavelet Function, (a) Haar scaling function,
(b) Haar wavelet function............................................9
2.4 Daubechies Wavelet for 4 Vanishing Moments, (a) Daubechies
scaling function, (b) Daubechies wavelet function....................11
2.5 The Analysis Filter Bank for Two-Dimensional Wavelet Transform.......15
2.6 One (left) and Two (right) Levels Two-Dimensional DWT................15
2.7 The Synthesis Filter Bank for Two-Dimensional Wavelet Transform......16
2.8 Three-Scale Forward Wavelet Transform Analysis Filter Bank...........19
2.9 Three-Scale Inverse Wavelet Transform Synthesis Filter Bank..........19
2.10 Two-Channel Filter Banks for One-Dimensional Subband Coding..........20
3.1 Graphical Plot of Gaussian Function..................................25
3.2 Coin Image with Gaussian Noise, (a) original image, (b) Gaussian noise
image (with default values of the mean and noise variance of 0 and
0.01 respectively)...................................................25
4.1 Block Diagram of Wavelet Denoising...................................28
4.2 Thresholding Curves, (a) original thresholding, (b) hard thresholding,
(c) soft thresholding...............................................31
4.3 Denoising Lena Image by VisuShrink with Hard and Soft Thresholding,
using σ = 20, (a) original Lena, (b) noisy Lena, (c) denoised Lena with
hard thresholding (PSNR = 27.29 dB, RMSE = 11.02), (d) denoised Lena
with soft thresholding (PSNR = 24.97 dB, RMSE = 14.39)........33


4.4 Denoising Mandrill Image by SureShrink with Hard and Soft
Thresholding, using σ = 25, (a) original image, (b) noisy image,
(c) denoised image with hard thresholding (PSNR = 22.74 dB,
RMSE = 18.60), (d) denoised image with soft thresholding
(PSNR = 23.56 dB, RMSE = 16.93).......................................36
4.5 Denoising Goldhill Image by BayesShrink with Hard and Soft
Thresholding, using σ = 10, (a) original image, (b) noisy image,
(c) denoised image with hard thresholding (PSNR = 30.48 dB,
RMSE = 7.63), (d) denoised image with soft thresholding
(PSNR = 30.60 dB, RMSE = 7.53)........................................39
5.1 Levels 0,...,2 of an A Trous Decomposition...........................44
5.2 Quadratic Spline Wavelet (left) and Scaling Function (right)..........46
5.3 Decomposition Filter Bank for the Algorithm A Trous...................48
5.4 Reconstruction Filter Bank for the Algorithm A Trous..................48
5.5 3- Levels of Decomposition of WTMM Representation of Lena Image.......51
5.6 Wavelet Transform Modulus Maxima Representation of Image Lena for
3- Levels Decomposition, (a) wavelet transform modulus, (b) angle of
wavelet transform modulus, (c) modulus maxima..................52
5.7 Image Lena using WTMM with a = 0.5, (a) original image,
(b) reconstructed image, PSNR = 47.45 dB, compression rate = 59.03%..53
6.1 Denoised Lena Images by VisuShrink, SureShrink and BayesShrink, using
σ = 10, (a) hard VisuShrink, (b) soft VisuShrink, (c) hard SureShrink,
(d) soft SureShrink, (e) hard BayesShrink, (f) soft BayesShrink......56
6.2 Denoised Goldhill Images by VisuShrink, SureShrink and BayesShrink,
using σ = 20, (a) hard VisuShrink, (b) soft VisuShrink, (d) hard SureShrink,
(c) soft SureShrink, (e) hard BayesShrink, (f) Soft BayesShrink......58
6.3 Denoised Mandrill Images by VisuShrink, SureShrink and BayesShrink


using σ = 25, (a) hard VisuShrink, (b) soft VisuShrink, (c) hard SureShrink,
(d) soft SureShrink, (e) hard BayesShrink, (f) soft BayesShrink........60
6.4 Reconstructed Head Image from WTMM with a = 2, (a) original image,
(b) reconstructed image................................................62
6.5 Reconstructed Peppers Image from WTMM with a = 0.9,
(a) original image, (b) reconstructed image............................62
6.6 Reconstructed Lena Image from WTMM with a = 10,
(a) original image, (b) reconstructed image............................63


LIST OF TABLES
Table
4.1 Comparisons of PSNR outputs of Lena image by VisuShrink with hard
and soft thresholding at different σ values.........................34
4.2 Comparisons of PSNR outputs of Mandrill image by SureShrink
with hard and soft thresholding at different σ values...............36
4.3 Comparisons of PSNR outputs of Goldhill image by BayesShrink using
hard and soft thresholding at different σ values....................38
5.1 Spline filter coefficients used in the algorithm a trous............47
6.1 Comparisons of PSNR and RMSE values of Lena image, using σ = 10....55
6.2 Comparisons of PSNR and RMSE values of Goldhill image,
using σ = 20.........................................................57
6.3 Comparisons of PSNR and RMSE values of Mandrill image,
using σ = 25.........................................................59
6.4 Comparisons of PSNR, RMSE values, and CRate (Compression Rate)
of Head, Peppers, and Lena images at different a values.............63


1. Introduction
Since the age of modern communication, a variety of digital images has been generated
and introduced into everyday life. They include natural images, geographical
information, astronomy, digital commercial television, and magnetic resonance
images. When images are transmitted, they become a major information source, but
they are corrupted by different kinds of noise. Denoising them requires sophisticated
algorithms from image processing and computer vision.
1.1 Objective
There are many approaches and techniques for denoising and compressing images.
The objective of this thesis is to denoise and compress images using wavelets. The
thesis assumes that images are corrupted by Gaussian noise. The denoising methods
are based on the existing techniques developed by Donoho and Johnstone [10, 11]
and Chang, Yu, and Vetterli [19, 20, 21, 22]. For compression, we use a technique
suggested by Mallat and Zhong [29].
Suggested by Donoho and Johnstone, VisuShrink is a technique that applies
both hard and soft thresholds, utilizing wavelet shrinkage with the universal
threshold: a single threshold applied globally to all coefficients at all levels. They
also proposed another method called SureShrink, which is based on a threshold
derived from the minimization of Stein's Unbiased Risk Estimate (SURE) for
one-dimensional signals. The choice of threshold in this technique depends on the
shrinkage function and the multiresolution level. For two dimensions, the threshold
can be applied per multiresolution level or per subband.
Chang, Yu, and Vetterli proposed BayesShrink, which is based on the
Bayesian risk. BayesShrink is a well-known method for obtaining higher
image quality. It is subband dependent and nearly optimal, favoring the soft
threshold over the hard threshold. The Bayesian thresholding is done at each subband
resolution in the wavelet decomposition.
Lastly, the wavelet transform modulus maxima (WTMM), one of the multiscale
signal representations, is used for image compression. Mallat and Zhong showed that
the wavelet transform is proportional to the gradient of a smoothed version of the
image when the detail coefficients are the derivatives of the approximation
coefficients. Thus multiscale edge detection is implemented by examining the modulus
of the wavelet detail coefficients to detect the local maxima. For this reason, the
WTMM is implemented by using the à trous algorithm instead of the standard
discrete wavelet transform.
1.2 Thesis Outline
This thesis is outlined as follows:
Chapter 2 reviews in depth the literature on wavelet transforms. It
introduces the continuous wavelet transform (CWT), multiresolution analysis (MRA),
and the discrete wavelet transform (DWT) respectively. Section 2.4 of this chapter
presents Mallat's decomposition and reconstruction algorithm. Section 2.5 explains
how two-channel filter banks can be used to implement the forward discrete wavelet
transform and the inverse discrete wavelet transform (IDWT).
Chapter 3 introduces basic concepts of Gaussian noise and error
measurements. The quantitative measurements of images are in terms of mean square
error (MSE), root mean square error (RMSE), signal-to-noise ratio (SNR), and peak
signal-to-noise ratio (PSNR).
Chapter 4 presents applications of wavelet denoising. First, the hard,
soft, and universal thresholds are introduced. Then the wavelet denoising
methods VisuShrink, SureShrink, and BayesShrink are presented and
implemented.
Chapter 5 describes an application of the wavelet transform modulus maxima
(WTMM). Because the WTMM is implemented by using the à trous algorithm, the basic
concepts of the à trous algorithm are presented as well. Lastly, this chapter reviews the
conjugate gradient accelerated algorithm, which is used to reconstruct an image from
its WTMM.
Chapter 6 provides results from additional experiments and concludes
the thesis. In this chapter, the collected data, in terms of compression rate, RMSE, and
PSNR values as well as image quality, provide a comparative study.


2. Wavelet Transform
Wavelets are mathematical functions that decompose a signal into components
of different frequencies and then analyze each component with a resolution
matched to its scale. They provide solutions to important signal processing tasks such
as noise removal, image enhancement, and compression. This chapter reviews in depth
the principles of wavelet analysis, including the continuous wavelet transform,
the discrete wavelet transform, and multiresolution analysis. Mallat's decomposition and
reconstruction algorithms and two-channel filter banks are presented in this chapter as
well.
The wavelet transform decomposes a function over dilated and translated wavelets
of various frequency components. Suppose that ψ(t) is a wavelet function in L²(ℝ).
It must satisfy the zero-average condition [1, 2, 3, 5, 6]. The zero integral of ψ is
written as

\int_{-\infty}^{\infty} \psi(t)\,dt = 0 \qquad (2.1)

At a scale (dilation variable) a and translation b, the function ψ_{a,b}(t) is defined by

\psi_{a,b}(t) = \frac{1}{\sqrt{a}}\,\psi\!\left(\frac{t-b}{a}\right) \qquad (2.2)

where 1/√a is the normalizing factor. Figure 2.1 shows the graphical plot of the Morlet
wavelet function, ψ(t) = e^{−t²/2} cos(5t).
2.1 Continuous Wavelet Transform (CWT)
Given a wavelet function ψ satisfying equation (2.1), the continuous wavelet
transform of a function f ∈ L²(ℝ) is W_f(a, b), given by

W_f(a,b) = \frac{1}{\sqrt{a}}\int_{-\infty}^{\infty} f(t)\,\psi^*\!\left(\frac{t-b}{a}\right) dt \qquad (2.3)

where ψ* is the complex conjugate of ψ and a, b are real numbers.
In short, equation (2.3) is written as

W_f(a,b) = \int_{-\infty}^{\infty} f(t)\,\psi^*_{a,b}(t)\,dt \qquad (2.4)

This equation shows that the function f(t) is decomposed over the set of functions defined
in equation (2.2).

- ' 1 1 m i\ J
- i \ i \
- A / 1 ^/ \ 1 A / \
/ \ ^ 4
- -
11
- 1 w
\j 1 T J I
,ti_________________________i________________________i_______________________i ________________________i________________________i________________________i________________________i________________________i
-4 -3 2 l 0 1 2 3 4
Figure 2.1: The Morlet Wavelet Function.
For any function f ∈ L²(ℝ), the function f(t) can be obtained by taking the
inverse continuous wavelet transform,

f(t) = \frac{1}{C_\psi}\int_{0}^{\infty}\int_{-\infty}^{\infty} W_f(a,b)\,\psi_{a,b}(t)\,db\,\frac{da}{a^2} \qquad (2.5)

provided the admissibility condition holds, where

C_\psi = \int_{0}^{\infty} \frac{|\hat{\psi}(\omega)|^2}{\omega}\,d\omega < \infty

is called the admissibility constant. Here \hat{\psi}(\omega) is the Fourier transform of ψ(t). The above
equations must satisfy the zero-average condition, which states that \hat{\psi}(0) = 0 and \hat{\psi}(\omega) \to 0
rapidly as ω → ∞, so that C_ψ < ∞. Equation (2.4) performs the analysis
(decomposition), while equation (2.5) serves as the synthesis (reconstruction) of the
signal.
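As a small numerical illustration of equation (2.3) (this sketch is an addition for this write-up, not one of the thesis's appendix listings; the test signal and the values of a and b are arbitrary), the CWT coefficient at a single scale and translation can be approximated by direct numerical integration with the Morlet wavelet of Figure 2.1:

% Numerical approximation of the CWT coefficient W_f(a,b) of eq. (2.3)
% using the real Morlet wavelet psi(t) = exp(-t.^2/2).*cos(5*t).
t   = linspace(-8, 8, 4096);                       % time grid
f   = sin(2*pi*1.5*t) + 0.5*sin(2*pi*4*t);         % arbitrary test signal
a   = 0.5;                                         % scale (dilation)
b   = 1.0;                                         % translation
psi = @(u) exp(-u.^2/2).*cos(5*u);                 % Morlet wavelet, psi* = psi (real)
Wf  = (1/sqrt(a)) * trapz(t, f .* psi((t - b)/a)); % eq. (2.3) by the trapezoidal rule
fprintf('W_f(a=%.2f, b=%.2f) = %.4f\n', a, b, Wf);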


2.2 Multiresolution Analysis (MRA)
Multiresolution analysis is an important concept in wavelet analysis. It decomposes a
signal into subsignals at different resolution levels, so a signal can be
represented at multiple resolutions. It could be said that the signal representation of
a wavelet function consists of a coarse overall approximation together with detail
coefficients, which influence the function at various scales. The different signal scales
represent distinct resolutions of the same signal. For these reasons, multiresolution
analysis is a natural framework for performing the DWT as well as the reconstruction
from wavelet coefficients. A multiresolution analysis is defined as a sequence
{V_j : j ∈ ℤ} of closed subspaces of L²(ℝ) satisfying the following conditions [1, 8, 27]:
i. The subspaces {V_j : j ∈ ℤ} of L²(ℝ) are nested in increasing scale:
V_j ⊂ V_{j+1} for all j ∈ ℤ.
ii. lim_{j→−∞} V_j = ∩_{j=−∞}^{∞} V_j = {0}, and lim_{j→∞} V_j = ∪_{j=−∞}^{∞} V_j is dense in
L²(ℝ). As the resolution approaches zero, the approximation
contains less and less information until it converges to zero. On the
contrary, as the resolution increases, the approximated signal converges to
the original signal in L²(ℝ).
iii. f(x) ∈ V_j ⇔ f(2x) ∈ V_{j+1} for all j ∈ ℤ. The spaces of approximated functions
are obtained from one another by scaling each approximated function
according to the ratio of their resolutions.
iv. There exists a unique function φ(x) ∈ L²(ℝ) such that, for each j ∈ ℤ, the
set {φ_{j,k}(x) = 2^{j/2} φ(2^j x − k) : k ∈ ℤ} is an orthonormal basis of V_j.
The same construction applies to the wavelet function ψ(x): there exists ψ(x) such
that {ψ_{j,k}(x) = 2^{j/2} ψ(2^j x − k) : j, k ∈ ℤ} is an orthonormal basis of L²(ℝ). The
scaling function

\varphi_{j,k}(x) = 2^{j/2}\,\varphi(2^{j}x - k) \qquad (2.7)

is obtained by binary dilations and dyadic translations and, as stated above, {φ_{j,k}(x)}
is an orthonormal basis of the subspace V_j. The translation k determines the position
of φ_{j,k}(x) along the x-axis, the scale j determines its width (how broad or narrow it is
along the x-axis), and 2^{j/2} controls its height. Given that the scaling functions span V_j, a
similar family of functions is defined by scaling and translating the wavelet function ψ(x),
which spans the difference between the subspaces V_j and V_{j+1}:

\psi_{j,k}(x) = 2^{j/2}\,\psi(2^{j}x - k) \qquad (2.8)
for all j, k ∈ ℤ. Figure 2.2 illustrates the nesting of the subspaces of L²(ℝ). If W_j is the
difference between two adjacent scaling subspaces V_j and V_{j+1}, then V_{j+1} is
obtained by combining V_j and W_j:

V_{j+1} = V_j \oplus W_j \qquad (2.9)

where ⊕ denotes the direct sum of the spaces. Because W_j is the orthogonal complement
of V_j in V_{j+1}, the subspaces V_j and W_j are orthogonal to each other. Thus we have

\langle \varphi_{j,k}(x),\, \psi_{j,l}(x) \rangle = 0 \qquad (2.10)

for all j, k, l ∈ ℤ.
Figure 2.2: The Relationship between Scaling and Wavelet Function Spaces.


The entire space L²(ℝ) can be expressed as

L^2(\mathbb{R}) = V_0 \oplus W_0 \oplus W_1 \oplus W_2 \oplus \cdots \qquad (2.11)

or it can even be written as

L^2(\mathbb{R}) = \cdots \oplus W_{-2} \oplus W_{-1} \oplus W_0 \oplus W_1 \oplus W_2 \oplus \cdots

The scaling function φ(x) ∈ V_0 ⊂ V_1 can be expanded in the basis of V_1 as

\varphi(x) = \sum_n h_\varphi(n)\,\varphi_{1,n}(x) \qquad (2.12)

where h_φ(n) = ⟨φ(x), φ_{1,n}(x)⟩ are the expansion coefficients. Equation (2.12) can be written as

\varphi(x) = \sqrt{2}\sum_n h_\varphi(n)\,\varphi(2x - n) \qquad (2.13)

where h_φ(n) is the scaling function coefficient. Applying the same analogy, the
wavelet function ψ(x) is obtained, since ψ(x) ∈ W_0 ⊂ V_1. It is a weighted sum of
shifted, double-resolution scaling functions and can be written as

\psi(x) = \sqrt{2}\sum_n h_\psi(n)\,\varphi(2x - n) \qquad (2.14)

where h_ψ(n) is the wavelet function coefficient.
The h_φ(n) and h_ψ(n) can be viewed as lowpass and highpass filter
coefficients respectively. According to [2], equation (2.13) is called the
multiresolution analysis (MRA) equation or the dilation equation, meaning that any
expansion function of any subspace can be built from double-resolution copies of itself;
that is, a function in a subspace at a lower resolution can be represented by shifted
copies at the next higher resolution. If the wavelets span the orthogonal complement
spaces and the integer translations of the wavelets are orthogonal, then the relation
between the wavelet function coefficients and the scaling function coefficients is

h_\psi(n) = (-1)^n\, h_\varphi(1 - n) \qquad (2.15)


Consider the Haar scaling function

\varphi(x) = \begin{cases} 1, & 0 \le x < 1 \\ 0, & \text{otherwise} \end{cases} \qquad (2.16)

whose filter coefficients h_φ(n), known as the lowpass filter of this function, are

h_\varphi(n) = \frac{1}{\sqrt{2}}\,[\,1,\ 1\,] \qquad (2.17)

The associated Haar wavelet function is defined by

\psi(x) = \begin{cases} 1, & 0 \le x < 0.5 \\ -1, & 0.5 \le x < 1 \\ 0, & \text{elsewhere} \end{cases} \qquad (2.18)

Applying the relation (2.15) between h_φ(n) and h_ψ(n), the filter coefficients h_ψ(n) are obtained as

h_\psi(n) = \frac{1}{\sqrt{2}}\,[\,1,\ -1\,] \qquad (2.19)

Figure 2.3 (a) shows the graphical representation of the Haar scaling function φ(x), and Figure 2.3 (b) the Haar wavelet function ψ(x).
Figure 2.3: Haar Scaling and Wavelet Function, (a) Haar scaling function, (b) Haar
wavelet function.
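As a minimal check of the Haar filters (an illustrative sketch added here, not part of the thesis's appendix listings), the coefficients of equations (2.17) and (2.19) and the relation (2.15) can be verified numerically:

% Haar analysis filters of eqs. (2.17) and (2.19), and the relation
% h_psi(n) = (-1)^n h_phi(1-n) of eq. (2.15).
h_phi = [1 1]/sqrt(2);                      % lowpass (scaling) filter
h_psi = [1 -1]/sqrt(2);                     % highpass (wavelet) filter
n = 0:1;
h_check = ((-1).^n) .* h_phi(2 - n);        % h_phi(1-n) with Matlab's 1-based indexing
disp(h_check)                               % [0.7071 -0.7071], equal to h_psi
disp([sum(h_phi.^2), sum(h_psi.^2)])        % both equal 1 (orthonormality)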
The Daubechies wavelets form a family of wavelet functions that have compact
support and orthogonality [9]. A compactly supported function is nonzero only over a
small region of its domain. For filter length M = L + 1, where L is an odd positive
integer, the lowpass filter coefficients h_φ of the Daubechies family members
can be found by using the following equations [4].
The normalization property of the scaling function is

\sum_{k=0}^{L} h_{\varphi,k} = \sqrt{2} \qquad (H(0) = \sqrt{2}) \qquad (2.20)

The wavelet function ψ is required to have vanishing moments,

\sum_{k=0}^{L} (-1)^{k}\,k^{m}\,h_{\varphi,k} = 0, \quad m = 0, 1, \ldots, \tfrac{L-1}{2} \qquad (H^{(m)}(\pi) = 0) \qquad (2.21)

where H^{(m)} denotes the m-th derivative of H(ω).
For orthogonality, the properties can be expressed as

\sum_{k=2m}^{L} h_{\varphi,k}\,h_{\varphi,k-2m} = 0, \quad m \ne 0 \qquad (2.22)

and

\sum_{k=0}^{L} h_{\varphi,k}^{2} = 1 \qquad (2.23)

Suppose that h_φ = (h_{φ0}, h_{φ1}, ..., h_{φL}) is a solution to the above equations.
Then its Fourier series is H(\omega) = \sum_{l=0}^{L} h_{\varphi,l}\,e^{il\omega}. Let z = e^{i\omega}; then we can consider
the polynomial

P(z) = \sum_{l=0}^{L} h_{\varphi,l}\,z^{l} \qquad (2.24)

The general polynomial solution used to compute the Daubechies coefficients is of the form

P(z) = (1 + z)^{\frac{L+1}{2}}\,Q(z) \qquad (2.25)

where Q(z) is a polynomial of degree (L−1)/2. For ω = π we have z = e^{iπ} = −1 and
H(π) = 0. This implies that P(−1) = 0; however, Q(−1) is not equal to zero, as
shown by Daubechies [9].
The lowpass filter coefficients of the four-tap Daubechies wavelet are given by

\{h_{\varphi 0}, h_{\varphi 1}, h_{\varphi 2}, h_{\varphi 3}\} = \frac{1}{4\sqrt{2}}\,\{(1+\sqrt{3}),\ (3+\sqrt{3}),\ (3-\sqrt{3}),\ (1-\sqrt{3})\} \qquad (2.26)

The corresponding highpass filter coefficients are obtained by applying
the relation between h_φ(n) and h_ψ(n):

\{h_{\psi 0}, h_{\psi 1}, h_{\psi 2}, h_{\psi 3}\} = \frac{1}{4\sqrt{2}}\,\{-(1-\sqrt{3}),\ (3-\sqrt{3}),\ -(3+\sqrt{3}),\ (1+\sqrt{3})\} \qquad (2.27)

For 4 vanishing moments, Figure 2.4 (a) shows the Daubechies scaling function while
Figure 2.4 (b) shows the Daubechies wavelet function.
Figure 2.4: Daubechies Wavelet for 4 Vanishing Moments, (a) Daubechies scaling
function, (b) Daubechies wavelet function.
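The four-tap coefficients of equations (2.26) and (2.27) can be checked against conditions (2.20), (2.22) and (2.23) with a few lines of Matlab (an illustrative sketch added here; the alternating-flip construction of the highpass filter below is one common sign convention and is not taken from the thesis's listings):

% Daubechies four-tap lowpass coefficients of eq. (2.26) and their checks.
s3    = sqrt(3);
h_phi = [1+s3, 3+s3, 3-s3, 1-s3] / (4*sqrt(2));
h_psi = ((-1).^(0:3)) .* fliplr(h_phi);      % alternating flip: a valid highpass mate
disp(sum(h_phi))                             % = sqrt(2), normalization, eq. (2.20)
disp(sum(h_phi.^2))                          % = 1, eq. (2.23)
disp(h_phi(1:2) * h_phi(3:4).')              % = 0, orthogonality to the 2-shift, eq. (2.22)
disp([sum(h_psi), sum(h_psi.^2)])            % = 0 and 1 for the highpass filter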
2.3 Discrete Wavelet Transform (DWT)
The series expansion of a function f(x) ∈ L²(ℝ) in terms of the wavelet function
ψ(x) and the scaling function φ(x) is

f(x) = \sum_k c_{j_0}(k)\,\varphi_{j_0,k}(x) + \sum_k \sum_{j=j_0}^{\infty} d_j(k)\,\psi_{j,k}(x) \qquad (2.28)

Since φ_{j_0,k}(x) = 2^{j_0/2} φ(2^{j_0}x − k) and ψ_{j,k}(x) = 2^{j/2} ψ(2^{j}x − k), the function f(x)
becomes

f(x) = \sum_k c_{j_0}(k)\,2^{j_0/2}\varphi(2^{j_0}x - k) + \sum_k \sum_{j=j_0}^{\infty} d_j(k)\,2^{j/2}\psi(2^{j}x - k) \qquad (2.29)
where c_{j_0}(k) are the scaling coefficients and d_j(k) are the wavelet detail coefficients.
The lowest level of resolution starts at scale j_0 = 0. The first sum approximates f(x),
providing the coarse representation of the signal by means of the scaling function [1, 2,
3, 5, 6]. With j ≥ j_0 and k any integer index at higher resolution, the wavelet functions
are added in the second sum to provide the detail. In other
words, the scaling function φ(x) provides the low-frequency portion of the signal while
the wavelet function ψ(x) represents the high-frequency portion of the signal. The
scaling coefficients c_{j_0}(k) and the wavelet detail coefficients d_j(k) can be obtained
by using the inner product as follows:

c_{j_0}(k) = \langle f(x), \varphi_{j_0,k}(x)\rangle = \int f(x)\,2^{j_0/2}\varphi(2^{j_0}x - k)\,dx \qquad (2.30)

d_j(k) = \langle f(x), \psi_{j,k}(x)\rangle = \int f(x)\,2^{j/2}\psi(2^{j}x - k)\,dx \qquad (2.31)

If the function f(x) being expanded is a sequence of numbers, the
resulting coefficients are called the discrete wavelet transform (DWT) of f(x). Thus
equations (2.30) and (2.31) become the DWT pair and are rewritten in the forms

c(j_0,k) = \frac{1}{\sqrt{M}}\sum_x f(x)\,\varphi_{j_0,k}(x) \qquad (2.32)

d(j,k) = \frac{1}{\sqrt{M}}\sum_x f(x)\,\psi_{j,k}(x) \qquad (2.33)

for j ≥ j_0, and the expansion of f(x) in equation (2.28) becomes

f(x) = \frac{1}{\sqrt{M}}\sum_k c(j_0,k)\,\varphi_{j_0,k}(x) + \frac{1}{\sqrt{M}}\sum_{j=j_0}^{\infty}\sum_k d(j,k)\,\psi_{j,k}(x) \qquad (2.34)

where 1/\sqrt{M} is the normalizing factor. Because c(j_0,k) and d(j,k) correspond to
the c_{j_0}(k) and d_j(k) of the series expansion in equations (2.30) and (2.31),
they are the approximation and detail coefficients respectively. Also, c(j_0,k) and
d(j,k), as well as f(x), are functions of the discrete variable x = 0, 1, 2, ..., M − 1, where
M = 2^J, j = 0, 1, 2, ..., J − 1, and k = 0, 1, 2, ..., 2^j − 1.
One-dimensional (1-D) transforms can be extended to two-dimensional (2-D)
functions. The 2-D transform is implemented by performing two 1-D transforms in
series. The 1-D scaling function extends to a 2-D scaling function φ(x, y), while the
wavelet function ψ(x) extends to three 2-D wavelets ψ^H(x, y), ψ^V(x, y), and ψ^D(x, y).
Each is a separable product of the 1-D scaling function φ(x) and wavelet function ψ(x):

\varphi(x,y) = \varphi(x)\,\varphi(y) \qquad (2.35)
\psi^{H}(x,y) = \psi(x)\,\varphi(y) \qquad (2.36)
\psi^{V}(x,y) = \varphi(x)\,\psi(y) \qquad (2.37)
\psi^{D}(x,y) = \psi(x)\,\psi(y) \qquad (2.38)

The wavelets ψ^H, ψ^V, and ψ^D measure the intensity variations of the image along
directions corresponding to horizontal, vertical, and diagonal edges respectively. The
dilation of the 2-D scaling function has the form

\varphi_{j,m,n}(x,y) = 2^{j/2}\,\varphi(2^{j}x - m,\ 2^{j}y - n) \qquad (2.39)

The corresponding dilation equations of the 2-D wavelet functions are

\psi^{i}_{j,m,n}(x,y) = 2^{j/2}\,\psi^{i}(2^{j}x - m,\ 2^{j}y - n), \quad i = H, V, D \qquad (2.40)

The discrete wavelet transform of a function f(x, y) of size M×N [2] is then

c_{j_0}(m,n) = \frac{1}{\sqrt{MN}}\sum_{x=0}^{M-1}\sum_{y=0}^{N-1} f(x,y)\,\varphi_{j_0,m,n}(x,y) \qquad (2.41)

d^{i}_{j+1}(m,n) = \frac{1}{\sqrt{MN}}\sum_{x=0}^{M-1}\sum_{y=0}^{N-1} f(x,y)\,\psi^{i}_{j,m,n}(x,y), \quad i = H, V, D \qquad (2.42)

where f(x, y) is recovered through the inverse discrete wavelet transform (IDWT) as


f(x,y) = \frac{1}{\sqrt{MN}}\sum_m\sum_n c_{j_0}(m,n)\,\varphi_{j_0,m,n}(x,y)
+ \frac{1}{\sqrt{MN}}\sum_{i=H,V,D}\ \sum_{j=j_0}^{J-1}\sum_m\sum_n d^{i}_{j+1}(m,n)\,\psi^{i}_{j,m,n}(x,y) \qquad (2.43)

where j_0 = 0, and M = N = 2^J is selected so that j = 0, 1, 2, ..., J − 1 and m, n =
0, 1, 2, ..., 2^j − 1. Normally, the coefficients c_{j_0}(m, n) define an approximation of the
function f(x, y) at the starting scale j_0 = 0, as in the 1-D case. The wavelet coefficients
d^i_{j+1}(m, n) add the horizontal, vertical, and diagonal details for scales j ≥ j_0.
The computational approach to the 2-D forward transform is shown in
Figure 2.5, which is also called the analysis filter bank. Note that because j_0 denotes
the scale j = 0, c_j(m, n) can also be written as c_{j_0}(m, n). At each decomposition,
the approximation coefficient c_{j+1}(m, n) is decomposed into c_j(m, n) and three
detail coefficients, d^H_j(m, n), d^V_j(m, n), and d^D_j(m, n). To obtain c_j(m, n),
c_{j+1}(m, n) is convolved with h_φ(−n) and down-sampled column-wise by 2
(2↓ indicates down-sampling), and the output is convolved again with the same
filter and down-sampled row-wise by 2. The same procedure, with the appropriate
combinations of h_φ(−n) and h_ψ(−n), computes d^H_j(m, n), d^V_j(m, n), and d^D_j(m, n).
Figure 2.6 shows the wavelet representation of an image: it illustrates the decomposition
levels of a 2-D image. Here LL, LH, HL, and HH are the four subbands, corresponding
to the approximation, horizontal, vertical, and diagonal edges respectively. The
superscripts of these subbands indicate the decomposition level. When further
decomposition is performed, only the LL subband is decomposed again.
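The subband decomposition of Figures 2.5 and 2.6 can be reproduced with the Wavelet Toolbox functions dwt2 and idwt2 (a short sketch assuming the standard toolbox functions and the cameraman test image; it is not one of the thesis's appendix programs):

% One level of the 2-D DWT of Figure 2.5: the image is split into the
% approximation and the horizontal, vertical and diagonal detail subbands,
% and the synthesis bank of Figure 2.7 reconstructs it.
X = im2double(imread('cameraman.tif'));      % any grayscale test image
[cA, cH, cV, cD] = dwt2(X, 'db4');           % analysis filter bank (LL, LH, HL, HH)
Xrec = idwt2(cA, cH, cV, cD, 'db4');         % synthesis filter bank
disp(max(abs(Xrec(:) - X(:))))               % ~0: perfect reconstruction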


Figure 2.5: The Analysis Filter Bank for Two-Dimensional Wavelet Transform.
Figure 2.6: One (left) and Two (right) Levels Two-Dimensional DWT


Figure 2.7: The Synthesis Filter Bank for Two-Dimensional Wavelet Transform.
Figure 2.7 shows the reconstruction of the 2-D discrete wavelet
transform. At each iteration, the approximation and the wavelet coefficients are first
up-sampled by 2 (2↑ indicates up-sampling by 2) in rows, then convolved with the
filters h_φ(m) and h_ψ(m), and added in pairs to produce two outputs. The
resulting two outputs are up-sampled in columns and then convolved with the filters
h_φ(n) and h_ψ(n). The sum of the two final outputs produces c_{j+1}(m, n). The
process is repeated until the original image is reconstructed.
2.4 Mallats Decomposition and Reconstruction Algorithm
To relate the coefficients of the DWT at a given scale to those at the adjacent
scale, we start by scaling x by 2^j, translating by k, and letting m = 2k + n. The scaling
function becomes

\varphi(2^{j}x - k) = \sqrt{2}\sum_m h_\varphi(m - 2k)\,\varphi(2^{j+1}x - m) \qquad (2.44)

Similarly, the wavelet function ψ(x) in equation (2.14) gives

\psi(2^{j}x - k) = \sqrt{2}\sum_m h_\psi(m - 2k)\,\varphi(2^{j+1}x - m) \qquad (2.45)

Substituting the right side of equation (2.7) into the DWT in equation (2.32), we have

c(j,k) = \frac{1}{\sqrt{M}}\sum_x f(x)\,2^{j/2}\varphi(2^{j}x - k) \qquad (2.46)

After inserting the right side of equation (2.44), interchanging and rearranging the
terms, this equation becomes

c(j,k) = \sum_m h_\varphi(m - 2k)\left[\frac{1}{\sqrt{M}}\sum_x f(x)\,2^{(j+1)/2}\varphi(2^{j+1}x - m)\right] \qquad (2.47)

where the bracketed term is identical to equation (2.32) with j_0 = j + 1. With this
substitution the bracketed approximation becomes c(j + 1, m); therefore,
equation (2.47) can be written as

c(j,k) = \sum_m h_\varphi(m - 2k)\,c(j+1, m) \qquad (2.48)

The derivation of the DWT detail coefficients d(j, k) is similar. Substitute the right
side of equation (2.45) into equation (2.8), insert the result into equation (2.33), and,
after interchanging and rearranging the terms as in
equation (2.47) with j_0 = j + 1, the detail coefficients d(j, k) are obtained as

d(j,k) = \sum_m h_\psi(m - 2k)\,c(j+1, m) \qquad (2.49)
This derivation uses Mallat's principle of MRA, also known as Mallat's fast
algorithm [1, 2, 8, 27]. Equations (2.48) and (2.49) show that the scale-j
approximation and detail coefficients can be determined by convolving c(j + 1, m),
the scale j + 1 approximation coefficients, with the time-reversed scaling and wavelet
coefficients h_φ(−n) and h_ψ(−n), and subsampling the results. It is not necessary to
know the scaling and wavelet functions themselves in order to obtain the approximations
c(j, k) and the detail coefficients d(j, k), as long as h_φ(n) and h_ψ(n) are known.
Equations (2.48) and (2.49) evaluate the convolution only at the even arguments
n = 2k for k ≥ 0; in other words, the scale j + 1 approximation coefficients
are filtered by the finite impulse response (FIR) filter h_φ(−n), and the results are then
down-sampled by a factor of 2. If x(n) is the output of a convolution
between a filter h_φ and c_{j+1}(m), then

x(n) = \sum_m h_\varphi(m - n)\,c_{j+1}(m) \qquad (2.50)

and the version of x(n) down-sampled by a factor of 2 is given by

x(2n) = \sum_m h_\varphi(m - 2n)\,c_{j+1}(m) \qquad (2.51)
Now denote c(j + 1, m) as c_{j+1}(m). It follows that equations (2.48) and (2.49)
become the decomposition steps and can be rewritten as

c_j(n) = h_\varphi(-n) * c_{j+1}(n) \qquad (2.52)

d_j(n) = h_\psi(-n) * c_{j+1}(n) \qquad (2.53)

evaluated at the even samples. Note that these two equations describe one level of
decomposition. To reconstruct c_{j+1}(n), the decomposition is reversed:

c_{j+1}(n) = h_\varphi(n) * c_j(n) + h_\psi(n) * d_j(n) \qquad (2.54)

meaning that c_j(n) and d_j(n) are first up-sampled by a factor of 2, then filtered by
h_φ(n) and h_ψ(n), and the results are added to reconstruct c_{j+1}(n). To clarify these
decomposition and reconstruction algorithms, we use the filter banks in Figure 2.8 and
Figure 2.9. Figure 2.8 illustrates three cascaded filter banks for computing the
DWT, also known as the analysis filters. Here h_0(n) and h_1(n) are the lowpass
and highpass filters of the decomposition, the same as the filters h_φ(n) and
h_ψ(n) respectively, and 2↓ indicates down-sampling. The signal c_{j+1}(n) is
decomposed into an approximation c_j(n) and a detail coefficient d_j(n). The
process is repeated on the approximation coefficients at each
decomposition level. In other words, c_{j+1}(n) is represented by c_j(n) and
d_j(n), then c_{j−1}(n) and d_{j−1}(n), and finally c_{j−2}(n) and d_{j−2}(n) for three levels of
decomposition.
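A one-level version of equations (2.52)-(2.54) can be written out directly (an illustrative sketch with the Haar pair added for this write-up; the down-sampling phase is chosen so that the reconstruction matches the input exactly):

% One level of Mallat's algorithm: analysis by filtering with the
% time-reversed filters and keeping every second sample, then synthesis by
% up-sampling, filtering and adding, eq. (2.54).
[LoD, HiD, LoR, HiR] = wfilters('haar');     % analysis/synthesis filter pairs
c1 = randn(1, 64);                           % scale j+1 approximation (test data)
a  = conv(c1, LoD);  a = a(2:2:end);         % c_j(n): lowpass then downsample by 2
d  = conv(c1, HiD);  d = d(2:2:end);         % d_j(n): highpass then downsample by 2
ua = zeros(1, 2*numel(a));  ua(1:2:end) = a; % up-sample by 2 (insert zeros)
ud = zeros(1, 2*numel(d));  ud(1:2:end) = d;
r  = conv(ua, LoR) + conv(ud, HiR);          % eq. (2.54)
disp(max(abs(r(1:numel(c1)) - c1)))          % ~0: perfect reconstruction (Haar)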


Figure 2.8: Three-Scale Forward Wavelet Transform Analysis Filter Bank.
Figure 2.9 shows the reconstruction procedure. Based on equation (2.54), the same
filters are used as in the analysis, since the scaling and wavelet functions are
orthonormal; thus the analysis filters of Figure 2.8 and the synthesis filters of
Figure 2.9 are invertible. In this case, g_0(n) and g_1(n) are the lowpass and highpass
synthesis filters, the time reversals of h_0(n) and h_1(n) respectively; that is,
h_0(−n) = g_0(n) and h_1(−n) = g_1(n).
Figure 2.9: Three-Scale Inverse Wavelet Transform Synthesis Filter Bank.
2.5 Filter Banks
The DWT can be implemented by using a two-channel subband coding scheme. The
two-channel subband filters decompose the input signal into wavelet coefficients.
This part of the coding scheme is called the analysis filter bank, and it is identical to the
forward DWT. The signal is passed through a lowpass and a highpass filter and
down-sampled by a factor of 2 (2↓). To perform the IDWT, the
decomposition process is reversed: at every level, the signals are up-sampled by 2 (2↑)
and passed through the synthesis lowpass filter g_0(n) and highpass filter g_1(n), and the
two outputs are added to reconstruct the original signal. Interestingly, the
analysis filters and the synthesis filters are identical to each other up to time reversal.
For a signal to be perfectly reconstructed, conditions on these filters need
to be defined.
to be defined.
Figure 2.10: Two-Channel Filter banks for One-Dimensional Subband Coding.
The z-transform of a sequence x(n), n = 0, ±1, ±2, ..., is defined as

X(z) = \sum_{n=-\infty}^{\infty} x(n)\,z^{-n} \qquad (2.55)

where z = e^{j\omega}. Note that if e^{j\omega} is inserted for z, equation (2.55) becomes the
discrete Fourier transform of x(n). When a sequence x(n) is down-sampled
by a factor of 2, the z-transform of x_d(n) = x(2n) is given by

X_d(z) = \tfrac{1}{2}\left[X(z^{1/2}) + X(-z^{1/2})\right] \qquad (2.56)

The sequence \hat{x}(n) = x_d(n/2) (for even n, zero otherwise) is the down-sampled sequence
up-sampled by a factor of 2. Its z-transform is

\hat{X}(z) = \tfrac{1}{2}\left[X(z) + X(-z)\right] \qquad (2.57)

so \hat{x}(n) = \mathcal{Z}^{-1}[\hat{X}(z)] describes the result of down-sampling followed by up-
sampling. For j = 0, the z-transforms of the decomposition outputs c_j(n) and
d_j(n) in Figure 2.10 are, as a pair,

C_0(z) = \tfrac{1}{2}\left[H_0(z^{1/2})F(z^{1/2}) + H_0(-z^{1/2})F(-z^{1/2})\right] \qquad (2.58)

D_0(z) = \tfrac{1}{2}\left[H_1(z^{1/2})F(z^{1/2}) + H_1(-z^{1/2})F(-z^{1/2})\right] \qquad (2.59)

According to Figure 2.10, if F(z) is the z-transform of the input signal f(n) and
\hat{F}(z) is the transform of the reconstructed signal \hat{f}(n), then

\hat{F}(z) = \tfrac{1}{2}\left[H_0(z)G_0(z) + H_1(z)G_1(z)\right]F(z) +
\tfrac{1}{2}\left[H_0(-z)G_0(z) + H_1(-z)G_1(z)\right]F(-z) \qquad (2.60)
For perfect reconstruction of the input, \hat{f}(n) = f(n) and \hat{F}(z) = F(z). With
H(z) and G(z) denoting the z-transforms of the analysis filters h(n) and synthesis
filters g(n) respectively, a two-channel filter bank must satisfy the following conditions
[34, 35]:

H_0(-z)G_0(z) + H_1(-z)G_1(z) = 0 \qquad (2.61)

H_0(z)G_0(z) + H_1(z)G_1(z) = 2 \qquad (2.62)

These two equations can be combined into a single matrix form as

[\,G_0(z)\ \ G_1(z)\,]\begin{bmatrix} H_0(z) & H_0(-z) \\ H_1(z) & H_1(-z) \end{bmatrix} = [\,2\ \ 0\,] \qquad (2.63)

Let H_m(z) = \begin{bmatrix} H_0(z) & H_0(-z) \\ H_1(z) & H_1(-z) \end{bmatrix} be the modulation matrix. Transposing
equation (2.63) and left-multiplying by the inverse (H_m^{T}(z))^{-1} gives

\begin{bmatrix} G_0(z) \\ G_1(z) \end{bmatrix} = \frac{2}{\det\!\left(H_m(z)\right)}\begin{bmatrix} H_1(-z) \\ -H_0(-z) \end{bmatrix} \qquad (2.64)


From here we obtain the relation between the functions H(z) and G(z): G_1(z) is a
function of H_0(z), while G_0(z) is a function of H_1(z). Therefore, the filters h_0(n),
h_1(n), g_0(n), and g_1(n) are related as g_0(n) = (−1)^n h_1(n) and g_1(n) = (−1)^{n+1} h_0(n).
Esteban and Galand [32] proposed the following family of filters, which
removes the aliasing term of equation (2.61):

H_1(z) = H_0(-z), \quad G_0(z) = H_0(z), \quad G_1(z) = -H_1(z) = -H_0(-z) \qquad (2.65)

Substituting these into equation (2.62) yields

H_0^2(z) - H_0^2(-z) = 2 \qquad (2.66)

Equations (2.65) and (2.66) define the quadrature mirror filter (QMF), since
H_1(z) is a mirror version of H_0(z).
Smith and Barnwell [33] suggested that the relationship between the filters H and G of
the family of equation (2.65) can be reconditioned for perfect reconstruction as follows:

H_1(z) = z^{-1}H_0(-z^{-1}), \quad G_0(z) = H_0(z^{-1}), \quad G_1(z) = H_1(z^{-1}) = z\,H_0(-z) \qquad (2.67)

Inserting equation (2.67) into equation (2.62) yields

H_0(z)H_0(z^{-1}) + H_0(-z)H_0(-z^{-1}) = 2 \qquad (2.68)

Equations (2.67) and (2.68) are known as the conjugate quadrature filter (CQF). If
we let z = e^{j\omega}, so that H_0(z^{-1}) = H_0^*(e^{j\omega}) for real h_0(n), equation
(2.68) becomes

|H_0(e^{j\omega})|^2 + |H_0(-e^{j\omega})|^2 = |H_0(\omega)|^2 + |H_0(\omega + \pi)|^2 = 2 \qquad (2.69)

In this case there exists an h_0(n) satisfying the conditions in equation (2.67); therefore,
the two-channel filter bank can be used for perfect reconstruction.
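The power-complementarity form (2.69) can be verified numerically for an orthogonal Daubechies lowpass filter (a sketch assuming the Wavelet Toolbox function wfilters; it is not part of the thesis's listings):

% Check |H0(w)|^2 + |H0(w+pi)|^2 = 2 of eq. (2.69) on a dense frequency grid.
Lo   = wfilters('db4');                       % orthogonal lowpass filter h0(n)
H    = fft(Lo, 512);                          % H0 evaluated at 512 frequencies
cond = abs(H).^2 + abs(circshift(H, 256)).^2; % shifting by 256 bins = shift by pi
disp([min(cond), max(cond)])                  % both ~2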
The two-channel filter bank in Figure 2.10 convolves an input signal f(n) with
a lowpass filter h_0(n) and a highpass filter h_1(n) and subsamples by 2. These
equations are known as the decomposition (analysis) part and are given in [1] by

c_j(n) = f(n) * h_0(2n) \quad\text{and}\quad d_j(n) = f(n) * h_1(2n) \qquad (2.70)

Under the orthonormality condition, the synthesis filters g_0(n) and g_1(n) are time-
reversed versions of the analysis filters h_0(n) and h_1(n). Zeros are inserted (the
up-sampling 2↑) before filtering with g_0(n) and g_1(n), and the results are added to
obtain the reconstruction:

\hat{f}(n) = c_j(n) * g_0(n) + d_j(n) * g_1(n) \qquad (2.71)

Note that the synthesis filters g_0(n) and g_1(n) cancel the aliasing term according
to equation (2.61) when the filters h_0(n) and h_1(n) are paired with g_0(n) and
g_1(n) as in Figure 2.10; therefore, these conditions guarantee perfect
reconstruction, that is, \hat{f}(n) = f(n).
2.6 Summary
This chapter has reviewed in depth the literature on wavelet transforms, which
includes the CWT, MRA, the DWT, Mallat's decomposition and reconstruction
algorithms, and filter banks. MRA provides the framework underlying the DWT as well as
the justification of Mallat's decomposition and reconstruction algorithms. Two-channel
filter banks are a practical method equivalent to the DWT: the analysis part is used to
perform the forward DWT while the synthesis part is used to perform the IDWT.
The next chapter introduces the basic concepts of the noise model and error
measurements. This thesis focuses only on Gaussian noise because it is commonly
found in the wavelet coefficients of a signal or an image. The quantitative
measurements of images, which will be presented, are in terms of mean-square error,
root mean-square error, signal-to-noise ratio, and peak signal-to-noise ratio.


3. Noise Model and Error Measurements
In image processing, noise is undesired information that contaminates an image,
for example during transmission. Noise can also be introduced by the camera sensor
under poor environmental conditions during image acquisition: light level and sensor
temperature are sources that can leave the resulting image with a large amount of noise.
Noise types include Gaussian, salt-and-pepper, Poisson, speckle, and uniform noise.
Knowing the type of noise present in an image plays a key role in image
processing. This thesis assumes that the image is corrupted by Gaussian noise. The
basic concept of Gaussian noise is introduced first, followed by error
measurements for images.
3.1 Gaussian Noise
Assume that the noise ξ(t) is a zero-mean Gaussian random process. The value
of ξ at any arbitrary time t is characterized by the Gaussian probability density
function, which is defined in [2] by

p_\xi(x) = \frac{1}{\sqrt{2\pi}\,\sigma}\,e^{-x^2/(2\sigma^2)} \qquad (3.1)

where σ² is called the variance of ξ. The normalized Gaussian density function of
zero mean is obtained by assuming σ = 1. In terms of the grey level z, the probability
density function (pdf) p(z) is expressed as

p(z) = \frac{1}{\sqrt{2\pi}\,\sigma}\,e^{-(z-\mu)^2/(2\sigma^2)} \qquad (3.2)

where z is the grey level, μ is the mean value of z, and σ is the
standard deviation. Figure 3.1 shows the graphical plot of p(z). About 70% of the values
of z lie in the range [(μ − σ), (μ + σ)], and about 95% lie in [(μ − 2σ), (μ + 2σ)]. Figure
3.2 (a) is the noiseless, grayscale Coin image. Figure 3.2 (b) shows the Coin image
corrupted by Gaussian noise with zero mean and noise variance 0.01.


Figure 3.1: Graphical Plot of Gaussian Function.
Figure 3.2: Coin Image with Gaussian Noise, (a) original image, (b) Gaussian noise
image (with default values of the mean and noise variance of 0 and 0.01 respectively).
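Figure 3.2 can be reproduced with a couple of lines of Matlab (an illustrative sketch assuming the Image Processing Toolbox function imnoise with its default mean and variance; the thesis's own programs are in the appendices):

% Adding zero-mean Gaussian noise with variance 0.01 to a grayscale image,
% i.e. the observation model y = f + noise of eq. (3.3).
I = im2double(imread('coins.png'));          % a coin test image shipped with Matlab
J = imnoise(I, 'gaussian', 0, 0.01);         % mean 0, variance 0.01 on the [0,1] scale
figure, subplot(1,2,1), imshow(I), title('original')
subplot(1,2,2), imshow(J), title('Gaussian noise')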
3.2 Error Measurements
The observed data is the sum of the signal and Gaussian noise, given by

y(i,j) = f(i,j) + \varepsilon(i,j) \qquad (3.3)

where y(i,j) is a noisy version of f(i,j) and ε(i,j) is independent and identically
distributed (i.i.d.) zero-mean Gaussian noise with standard deviation σ, i.e. N(0, σ²),
and variance σ². Note that instead of z and μ in equation (3.2), the
corresponding notations y and f are used here. The important objective is to obtain an
estimate \hat{f}(i,j) of the original signal f(i,j) from the observation y(i,j) such that the
mean square error is minimal. The difference between the estimate \hat{f}(i,j) and the
original signal f(i,j) [25] is

\delta(i,j) = \hat{f}(i,j) - f(i,j) \qquad (3.4)

The total difference between the signal f(i,j) and the estimate \hat{f}(i,j) is given by

\delta_T = \sum_{i=0}^{M-1}\sum_{j=0}^{N-1}\left[\hat{f}(i,j) - f(i,j)\right] \qquad (3.5)

where M×N is the image size. By averaging the squared differences of equation (3.4)
over the image, the mean square error (MSE) is obtained as

\mathrm{MSE} = \frac{1}{MN}\sum_{i=0}^{M-1}\sum_{j=0}^{N-1}\left[\hat{f}(i,j) - f(i,j)\right]^2 \qquad (3.6)

Taking the square root of the mean square error gives the root mean square error

\mathrm{RMSE} = \left[\frac{1}{MN}\sum_{i=0}^{M-1}\sum_{j=0}^{N-1}\left[\hat{f}(i,j) - f(i,j)\right]^2\right]^{1/2} \qquad (3.7)

This relates to the signal-to-noise ratio (SNR). The noise present in a signal or an
image is considered removed if the value of its signal-to-noise ratio is large enough; if the
SNR value is small, noise is still present in the image. The signal-to-noise ratio is
defined as the ratio of signal power to noise power,

\mathrm{SNR} = \frac{P_{\mathrm{signal}}}{P_{\mathrm{noise}}} \qquad (3.8)

The root-mean-square signal-to-noise ratio of the output image is computed as

\mathrm{SNR}_{\mathrm{rms}} = \left[\frac{\sum_{i=0}^{M-1}\sum_{j=0}^{N-1}\hat{f}(i,j)^2}{\sum_{i=0}^{M-1}\sum_{j=0}^{N-1}\left[\hat{f}(i,j) - f(i,j)\right]^2}\right]^{1/2} \qquad (3.9)

Equation (3.9) still relies on RMS values. When the measurement of
interest is in decibels (dB), the peak signal-to-noise ratio (PSNR) is used. The
PSNR is defined in [4] by

\mathrm{PSNR} = 20\log_{10}\!\left(\frac{255}{\mathrm{RMSE}}\right)\ \mathrm{dB} \qquad (3.10)

where 255 is the maximum pixel value for an 8-bit/pixel grayscale image and
RMSE is the root mean square error between the original image and the output image.
Equation (3.10) shows that the smaller the MSE, the larger the PSNR.
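The error measures above translate directly into Matlab (a short sketch added for illustration; the anonymous-function names are arbitrary):

% MSE, RMSE and PSNR of eqs. (3.6), (3.7) and (3.10) for 8-bit images.
mse     = @(f, fhat) mean((double(f(:)) - double(fhat(:))).^2);
rmse    = @(f, fhat) sqrt(mse(f, fhat));
psnr_db = @(f, fhat) 20*log10(255 ./ rmse(f, fhat));
% example with an arbitrary noisy version of a test image
f    = imread('cameraman.tif');
fhat = imnoise(f, 'gaussian', 0, 0.01);
fprintf('RMSE = %.2f, PSNR = %.2f dB\n', rmse(f, fhat), psnr_db(f, fhat));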
3.3 Summary
This chapter reviewed the Gaussian noise model and the error measures used to
evaluate a signal or an image. The MSE measures the error between the original image
and the output image; the RMSE is obtained by taking the square root of the MSE.
Finally, the quality of the output images can be evaluated using the SNR or the PSNR.
The next chapter will investigate the wavelet denoising techniques
VisuShrink, SureShrink, and BayesShrink. Before implementing these
methods, the choices of thresholding, namely hard and soft thresholds, will be defined.


4. Wavelet Denoising Methods
Noise removal is an important element of image processing, because many images
acquired are contaminated by significant amounts of noise. This chapter presents
some fundamental wavelet-based methods for removing noise from images based on
the works by Donoho and Johnstone [10, 11].
Recall from equation (3.3) that a noisy input signal is the sum of the original signal
and Gaussian noise with standard deviation σ and variance σ², i.e. N(0, σ²); that is,
y(i,j) = f(i,j) + ε(i,j). The estimate \hat{f}(i,j) of the original signal f(i,j) is the result
of wavelet denoising. The data is transformed into a different basis, the
wavelet basis, which should be selected to match the signal characteristics well.
It has been shown that the large wavelet coefficients correspond mostly to the signal
while the small ones mostly represent noise. Now let Y, F, and ε be the observed
data, the original data, and the error matrix respectively. When the DWT W is applied to
the data, the wavelet coefficients obtained from this transformation are ω = WY.
Thresholding is then performed on the detail coefficients of ω to obtain the estimate ω̂ of the
wavelet coefficients of F. When W is orthonormal, the estimate F̂ can
be recovered using the IDWT, that is, F̂ = W⁻¹ω̂. This is achieved by
applying successive decomposition levels of the DWT for multiresolution, which is the
reason why the multiresolution analysis introduced in Section 2.2 is such an important
concept in wavelet denoising. The more levels of decomposition are applied, the
better the signal is recovered.
Figure 4.1: Block Diagram of Wavelet Denoising.


Based on Figure 4.1, the denoising method consists of the following
steps:
i. Compute the wavelet coefficient matrix ω by applying the wavelet
transform W to the data:
ω = WY = WF + Wε
ii. Perform thresholding on the detail coefficients of ω to obtain the estimate
ω̂ of the wavelet coefficients of F:
ω → ω̂
iii. Apply the inverse DWT to the thresholded wavelet coefficients to obtain the
estimated (reconstructed) image:
F̂ = W⁻¹ω̂
In other words, the data is decomposed into wavelet coefficients, and
each detail coefficient is compared with a given threshold value: it is set to zero if its
magnitude is less than the threshold value and otherwise retained or shrunk, depending on
the thresholding rule. Lastly, the data is reconstructed from the modified coefficients.
Denoising is thus achieved by applying threshold values to the wavelet
coefficients and then reconstructing the data, so thresholding plays a major
role in removing noise from the data.
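Steps (i)-(iii) map onto the Wavelet Toolbox as in the following sketch (the helper name wavelet_denoise, and the db4/5-level choices matching the later experiments, are assumptions of this write-up rather than the thesis's appendix code; T is a threshold supplied by the caller and sorh selects 'h' for hard or 's' for soft thresholding):

% wavelet_denoise.m -- denoise an image Y by thresholding its DWT detail
% coefficients and inverting the transform.
function Fhat = wavelet_denoise(Y, T, sorh)
    [C, S] = wavedec2(Y, 5, 'db4');              % (i)   omega = W*Y
    nA = prod(S(1, :));                          % length of the approximation block
    C(nA+1:end) = wthresh(C(nA+1:end), sorh, T); % (ii)  threshold detail coeffs only
    Fhat = waverec2(C, S, 'db4');                % (iii) inverse DWT
end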
4.1 Thresholding
Thresholding is an important concept in data denoising and compression because, as a
result of thresholding, the wavelet coefficients of the estimated data are smoothed and
the effect of noise on the data is reduced over the coefficients at each
scale. Many thresholding choices have been proposed, but only a few have become
popular in wavelet denoising. Developed mainly by Donoho and Johnstone [10, 11],
wavelet thresholding schemes with near-optimal properties in the minimax
sense perform well in image denoising. They proposed two popular versions
of thresholding, the hard and soft thresholds, described as follows.
4.1.1 Hard Thresholding
Hard thresholding is the most common scheme in denoising. It compares each
coefficient to the threshold value: if the magnitude of the coefficient x is smaller than the
tolerance λ, it is set to zero; if it is greater than λ, the value is retained.
Hard thresholding is defined by

T_{\mathrm{hard}}(x) = \begin{cases} x, & |x| > \lambda \\ 0, & \text{otherwise} \end{cases} \qquad (4.1)

where λ is the tolerance or threshold value. The graphical plot of the hard threshold in
Figure 4.2 (b) shows that there is a discontinuity at x = λ. This discontinuity causes
ringing or blocking artifacts in the denoised signal. To avoid this problem, a second scheme,
soft thresholding, is used.
4.1.2 Soft Thresholding
Soft thresholding differs from hard thresholding in that its function is
continuous. When x is greater than λ, the value x of the hard rule is
replaced by x − λ; when x < −λ, x is replaced by x + λ; and when |x| ≤ λ the coefficient
is set to zero, as in hard thresholding. In other words, the signal x is shrunk toward
zero by λ. Soft thresholding is defined by

T_{\mathrm{soft}}(x) = \begin{cases} x - \lambda, & x > \lambda \\ x + \lambda, & x < -\lambda \\ 0, & \text{otherwise} \end{cases} \qquad (4.2)


Figure 4.2: Thresholding Curves, (a) original thresholding, (b) hard thresholding, (c)
soft thresholding.
Because it is continuous, the soft threshold provides smoother results than
the hard threshold. If more levels of decomposition are applied to an image, the
visual improvement is greater. However, it can sometimes be advantageous to apply the
soft threshold to a few levels of detail coefficients and the hard threshold to the rest in
order to better preserve edges. Figure 4.2 (c) shows the soft thresholding curve.
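Written out directly, the two rules of equations (4.1) and (4.2) are one line each (an illustrative sketch; the toolbox calls wthresh(x,'h',lambda) and wthresh(x,'s',lambda) compute the same thing):

% Hard and soft thresholding of a vector of coefficients x with threshold lambda.
hard = @(x, lambda) x .* (abs(x) > lambda);                 % eq. (4.1)
soft = @(x, lambda) sign(x) .* max(abs(x) - lambda, 0);     % eq. (4.2)
x = -3:0.5:3;
disp(hard(x, 1))
disp(soft(x, 1))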
4.1.3 Universal Thresholding
Proposed by Donoho and Johnstone [10, 11], the universal threshold is a single
threshold applied globally to all coefficients of all levels, defined by

\lambda_{\mathrm{univ}} = \sigma\sqrt{2\log N} \qquad (4.3)

where N is the sample size (the number of image pixels) and σ is the standard deviation
of the noise. The threshold therefore depends on the size N and the noise level σ. The σ is
approximated from the highpass (finest detail) coefficients of the discrete wavelet
transform, since this portion of the transform is almost entirely noise. The estimate of σ is
based on the median absolute deviation and has the form

\hat{\sigma} = \frac{\mathrm{Median}\left(\{|Y_{J-1,k}|\}\right)}{0.6745} \qquad (4.4)

where Y_{J−1,k} are the detail coefficients of the wavelet transform at the finest scale. The
idea is to remove all the wavelet coefficients that are smaller than the expected maximum of
an independent and identically distributed (i.i.d.) noise sequence of size N. If the
noise is Gaussian white noise with ε_i i.i.d. N(0, 1), it can be shown that

P\!\left(\max_i |\varepsilon_i| > \sqrt{2\log N}\right) \to 0, \quad N \to \infty \qquad (4.5)

This is where the universal threshold comes from.
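Equations (4.3) and (4.4) can be combined as follows (a sketch under the assumption that the finest-scale diagonal detail subband is used for the MAD estimate, as in the BayesShrink section later; the test image and σ = 20 are arbitrary choices for this write-up):

% Universal threshold with the noise level estimated by the MAD rule.
I  = double(imread('cameraman.tif'));
Y  = I + 20*randn(size(I));                       % noisy image, sigma = 20
[C, S]   = wavedec2(Y, 5, 'db4');
d1       = detcoef2('d', C, S, 1);                % level-1 diagonal detail coefficients
sigma    = median(abs(d1(:))) / 0.6745;           % eq. (4.4)
lambda_u = sigma * sqrt(2 * log(numel(Y)));       % eq. (4.3)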
4.2 VisuShrink
Originally developed by Donoho and Johnstone, VisuShrink is the thresholding method
that applies the universal threshold λ_univ of equation (4.3). Both hard
and soft thresholds can be applied, utilizing wavelet shrinkage with the universal threshold.
VisuShrink ensures, with high probability, that the estimate is as smooth as the
original signal; however, it is known for yielding recovered signals that are overly
smoothed. This is because the threshold value grows with larger values of σ;
therefore, it removes many signal coefficients along with the noise. Image denoising
using the VisuShrink technique is implemented as follows.
Figure 4.3 (a) shows a 512x512 pixel, noiseless, grayscale image Lena. Using
σ = 20, the Daubechies wavelet db4, and 5 decomposition levels, this image is
corrupted by Gaussian noise according to equation (3.3), as shown in Figure 4.3
(b). Next, the VisuShrink scheme described above is applied. The performance of this
method is evaluated by using the peak signal-to-noise ratio (PSNR) of equation (3.10)
from chapter 3. Figure 4.3 (c) shows the denoised Lena with hard thresholding,
whose PSNR value is 27.29 dB. Figure 4.3 (d) is the denoised Lena with soft
thresholding; this denoised image has a PSNR value of 24.97 dB. Table 4.1 shows the
comparison of the PSNR values of the denoised images by VisuShrink with hard and
soft thresholds for selected noise standard deviations σ from 5 to 40.
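Continuing the previous sketches (this reuses the noisy image Y, the original I and the threshold lambda_u from the universal-threshold sketch, together with the hypothetical wavelet_denoise helper from the introduction of this chapter; it is an illustration, not Appendix A itself):

% VisuShrink = wavelet shrinkage with the universal threshold.
Fhard = wavelet_denoise(Y, lambda_u, 'h');                   % hard-threshold estimate
Fsoft = wavelet_denoise(Y, lambda_u, 's');                   % soft-threshold estimate
rmse  = @(A, B) sqrt(mean((A(:) - B(:)).^2));
fprintf('hard: PSNR = %.2f dB\n', 20*log10(255/rmse(I, Fhard)));
fprintf('soft: PSNR = %.2f dB\n', 20*log10(255/rmse(I, Fsoft)));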
Figure 4.3: Denoising Lena Image by VisuShrink with Hard and Soft Thresholding,
using σ = 20, (a) original Lena, (b) noisy Lena (input PSNR = 22.11 dB), (c) denoised
Lena with hard thresholding (PSNR = 27.29 dB, RMSE = 11.02), (d) denoised Lena
with soft thresholding (PSNR = 24.97 dB, RMSE = 14.39).


Lena, VisuShrink, PSNR (dB)
σ     5      10     15     20     25     30     35     40
Hard  33.65  30.37  28.53  27.29  26.33  25.65  25.02  24.56
Soft  30.73  27.73  26.08  24.97  24.14  23.48  22.93  22.46
Table 4.1: Comparisons of PSNR outputs of Lena image by VisuShrink with hard
and soft thresholding at different σ values.
4.3 SureShrink
To overcome the problem of the universal threshold, Donoho and Johnstone [10, 11]
suggested a threshold derived from the minimization of Stein's Unbiased Risk
Estimate (SURE) for one-dimensional signals. This method is called SureShrink. The
threshold is assigned to each resolution level of the wavelet transform. SureShrink
combines the universal thresholding scheme with the SURE thresholding method;
therefore, it can provide better visual image quality than VisuShrink. The choice of
threshold depends on the shrinkage function and the multiresolution level. For
two-dimensional data, the threshold can be applied per multiresolution level or per
subband. The threshold on subband s is given by

\lambda_{\mathrm{SURE}} = \arg\min_{\lambda}\left[\mathrm{SURE}(\lambda, w_s)\right] \qquad (4.6)

where w_s are the detail coefficients of subband s and SURE(λ, w_s) is Stein's
Unbiased Risk Estimate of the corresponding shrinkage function. If the soft shrinkage
function is used on subband s, then the threshold λ_SURE in equation (4.6) becomes

\lambda_{\mathrm{SURE}} = \arg\min_{\lambda \ge 0}\left[\mathrm{SURE}_s(\lambda, w_s)\right] \qquad (4.7)

where

\mathrm{SURE}_s(\lambda, w_s) = N_s - 2\,\#\{k : |w_k| \le \lambda\} + \sum_{k=1}^{N_s}\left[\min(|w_k|, \lambda)\right]^2 \qquad (4.8)

Here N_s is the number of coefficients w_k in {w_k}, and the standard deviation σ is assumed to
be one. If the coefficients are used per decomposition level, the standard deviation is
computed from equation (4.4). For data with non-unit variance, the coefficients


are standardized by an appropriate d estimate before computing the thresholding with
equation (4.7). Use the coefficients on a level instead on a subband when
implementing the level dependency. In the case if the wavelet coefficient
decomposition is sparse, Dohono and Johnstone suggested that SureShrink utilizes
the combination of the universal and SURE thresholds as follow's:
The SureShrink technique is implemented as follows. A 512x512 pixel, noiseless,
grayscale Mandrill image is corrupted with noise of standard deviation σ = 25 and
decomposed over 5 decomposition levels using db4. Figure 4.4 (a) shows the original Mandrill while
Figure 4.4 (b) is the noisy Mandrill. Figure 4.4 (c) and Figure 4.4 (d) are the denoised
Mandrill images with hard and soft thresholds respectively. Note that both denoised
images are almost identical, so it is difficult to distinguish which one has better
image quality by eye. However, we can tell the difference by their PSNR values. Figure 4.4
(c) has a PSNR of 22.74 dB; Figure 4.4 (d) has 23.56 dB and therefore better
image quality. Table 4.2 provides the comparisons of the PSNR values of the denoised
Mandrill images by SureShrink with hard and soft thresholding, using standard
deviations of noise σ from 5 to 40. The most closely matched PSNR values occur
at σ = 15.
Figure 4.4: Denoising Mandrill Image by SureShrink with Hard and Soft
Thresholding, using σ = 25, (a) original image, (b) noisy image (PSNR input = 20.17 dB), (c) denoised image
with hard thresholding (PSNR = 22.74 dB, RMSE = 18.60), (d) denoised image with
soft thresholding (PSNR = 23.56 dB, RMSE = 16.93).
Mandrill - SureShrink PSNR (dB)
σ      5      10     15     20     25     30     35     40
Hard   34.08  28.33  25.52  23.84  22.74  21.91  21.83  20.88
Soft   30.42  27.69  25.80  24.51  23.56  22.81  22.24  21.77
Table 4.2: Comparisons of PSNR outputs of Mandrill image by SureShrink with hard
and soft thresholding at different σ values.
4.4 BayesShrink
Proposed by Chang, Yu and Vetterli [19, 20, 21, 22], BayesShrink is based on
minimizing the Bayesian risk. Its subband-dependent threshold is nearly optimal for
soft thresholding, which it therefore favors over hard thresholding. To adapt to the
local smoothness of the image, the Bayesian threshold is computed for each subband of the wavelet
decomposition. Across subbands and decomposition levels, the wavelet
coefficients of a large class of images can be described by a Generalized Gaussian
Distribution (GGD) as follows:
GG_{σ_X,β}(x) = C(β, σ_X) exp{ −[α(β, σ_X) |x|]^β },   −∞ < x < ∞,  β > 0,  σ_X > 0     (4.10)

where

α(β, σ_X) = σ_X^{−1} [ Γ(3/β) / Γ(1/β) ]^{1/2}   and   C(β, σ_X) = β α(β, σ_X) / ( 2 Γ(1/β) )     (4.11)

and Γ(t) = ∫_0^∞ e^{−u} u^{t−1} du is the gamma function. The parameter σ_X is the standard deviation
and β is the shape parameter.
Recall that the observation model of additive noise is given by

y(i,j) = f(i,j) + ε(i,j)     (4.12)

Let Y(i,j) be the wavelet coefficients of y(i,j), X(i,j) the wavelet coefficients of the
original signal f(i,j), and W(i,j) the wavelet coefficients of the noise ε(i,j). The f(i,j)
and ε(i,j) are independent of each other. Thus, for each subband, X(i,j) is modeled
as an independent sample of p_X(x) = GG_{σ_X,β}(x) and W(i,j) is an independent sample of
the Gaussian distribution of chapter 3, equation (3.1). Since Y = X + W, it can be stated that

σ_Y² = σ_X² + σ²     (4.13)

where σ_Y² is the variance of Y(i,j), σ² is the noise variance, and σ_X² is the signal
variance of the subband considered. The noise variance σ² is estimated from the median absolute deviation of the
diagonal detail coefficients on level 1 (i.e. subband LH1 in Figure 2.6). The estimated signal standard
deviation is computed by

σ̂_X = √( max( σ̂_Y² − σ̂², 0 ) )     (4.14)
where

σ̂_Y² = (1/N_s) Σ_{i,j} Y_{i,j}²     (4.15)

Here Y_{i,j} are the wavelet coefficients and N_s is the number of Y_{i,j} in the subband. From
here one finds the adaptive threshold that is near optimal, also called the Bayesian
threshold:

λ_s = σ̂² / σ̂_X     (4.16)

Note that the threshold is set to λ_s = max(|Y_{i,j}|), so that all coefficients of the subband
are set to zero, in the case σ̂² ≥ σ̂_Y².
Figure 4.5 (a) is a 512x512 pixel, noiseless, grayscale, original Goldhill image
and Figure 4.5 (b) shows its noisy version contaminated by noise of standard deviation
σ = 10. Using db4 and 5 levels of decomposition, the denoised images by
BayesShrink with hard and soft thresholding are shown in Figure 4.5 (c) and Figure
4.5 (d) respectively. Figure 4.5 (c) has PSNR = 30.48 dB while Figure 4.5 (d) has
30.60 dB. Table 4.3 shows the PSNR outputs resulting from BayesShrink with hard and
soft thresholding, using various standard deviations of noise σ. Note that the
PSNR outputs of the denoised images with hard and soft thresholding are very closely
matched when the standard deviation of noise σ is equal to 10.
Goldhill - BayesShrink PSNR (dB)
σ      5      10     15     20     25     30     35     40
Hard   34.58  30.48  28.75  27.61  26.79  26.19  25.70  25.29
Soft   33.15  30.60  29.01  27.97  27.23  26.65  26.17  25.77
Table 4.3: Comparisons of PSNR outputs of Goldhill image by BayesShrink with
hard and soft thresholding at different σ values.
Figure 4.5: Denoising Goldhill Image by BayesShrink with Hard and Soft
Thresholding, using σ = 10, (a) original image, (b) noisy image (PSNR input = 28.13 dB), (c) denoised image
with hard thresholding (PSNR = 30.48 dB, RMSE = 7.63), (d) denoised image with
soft thresholding (PSNR = 30.60 dB, RMSE = 7.53).
4.5 Summary
This chapter shows how to denoise images using wavelets. The denoising applications
have been reviewed and implemented. Hard and soft thresholding are selected because
they are simple and easy to implement. The universal threshold is used for the
implementation of the VisuShrink scheme so that the algorithms can be compared under
different noise levels using the same evaluation measures. SureShrink uses a
threshold derived from the minimization of Stein's Unbiased Risk Estimate
(SURE) for one-dimensional signals, while BayesShrink is based on minimizing the
Bayesian risk. As the simulations above show, each method produces different results.
The next chapter introduces an application of image compression, Wavelet
Transform Modulus Maxima (WTMM). WTMM is implemented by using a trous
algorithm. The conjugate gradient accelerated algorithm is then used to reconstruct a
signal or an image from its WTMM.
5. Wavelet Transform Modulus Maxima (WTMM)
Edge detection is one of the important tools for analyzing signals or images.
Because edges appear over a range of scales, a multiscale method is taken into
consideration. This chapter introduces the wavelet transform modulus maxima
(WTMM), one of the multiscale representations of a signal or an image. This representation
is obtained by detecting the local modulus maxima of the dyadic wavelet
transform across scales. A local modulus maximum, often referred to as a multiscale
edge, is a region in the wavelet transform domain with a high concentration of
energy. Thus, a signal or an image can be reconstructed from its WTMM. Because the a
trous algorithm is used to implement the WTMM, it is also presented in detail in this
chapter.
Mallat and Zhong [29] showed that the wavelet transform is proportional to
the gradient of a smoothed function f(x,y) when the detail coefficients are the
derivative of the approximation coefficients. Therefore, multiscale edge detection is
implemented by looking at the modulus of the wavelet detail coefficients to detect
the local maxima, or wavelet transform modulus maxima. The two wavelets, defined
as the partial derivatives of a two-dimensional (2-D) smoothing function θ(x,y) along
the x and y directions, are

ψ¹(x,y) = ∂θ(x,y)/∂x   and   ψ²(x,y) = ∂θ(x,y)/∂y     (5.1)

Let ψ¹_s(x,y) = (1/s²) ψ¹(x/s, y/s) and ψ²_s(x,y) = (1/s²) ψ²(x/s, y/s). The 2-D wavelet
transforms of an image f(x,y) with respect to ψ¹(x,y) and ψ²(x,y) become the
gradients of the image at dyadic scales and are obtained respectively by

W¹f(s,x,y) = f ∗ ψ¹_s(x,y)   and   W²f(s,x,y) = f ∗ ψ²_s(x,y)     (5.2)
If W¹f(2^j,x,y) and W²f(2^j,x,y) are defined to be the horizontal and vertical
detail coefficients of the function f(x,y) smoothed at scale s = 2^j respectively, then
the wavelet transform modulus is given by

Mf(2^j,x,y) = √( |W¹f(2^j,x,y)|² + |W²f(2^j,x,y)|² )     (5.3)

and the phase angle between them is

Af(2^j,x,y) = tan⁻¹( W²f(2^j,x,y) / W¹f(2^j,x,y) )     (5.4)
Note that equation (5.3) avoids the computational burden of the diagonal detail
coefficients. The local modulus maxima of W¹f(2^j,x,y) along x, for constant y, are
detected at the points of horizontal variation of the function f(x,y) smoothed at scale
2^j, which is where the edges of the image are located. Similarly, the local modulus maxima of
W²f(2^j,x,y) along y, for constant x, are detected at points of vertical variation at
scale 2^j. They are the points where the modulus image Mf(2^j,x,y) is locally maximum
along the phase direction Af(2^j,x,y), quantized to eight directions
{0, ±π/4, ±π/2, ±3π/4, π}, where 0 degrees lies along the horizontal axis and the
angle increases toward 90 degrees along the vertical [30].
5.1 The A Trous Algorithm
Instead of using the DWT described in section 2.3, the WTMM is implemented by using
the a trous algorithm. The a trous algorithm is the undecimated discrete wavelet
transform [1, 31, 39]. No down-sampling step takes place in this dyadic wavelet
transform; the transform is therefore redundant, meaning that the output coefficient
matrices have the same size as the input. If an image is decomposed into an
approximation signal and a detail signal at scale 2^j, then this detail signal has the
same dimensions as the original signal.
Suppose that a low pass filter H satisfies the condition

h(2k) = δ(k) / √2     (5.5)

where δ_{mk} is the Kronecker delta and δ(k) = δ_{k0}; then H is said to be an a trous
filter [39]. Corresponding to this a trous low pass filter H is the scaling function
φ(x). The scalar product of the function f(x) with this scaling function φ(x) gives
the first approximation at scale j_0 = 0:

c_{j0}(k) = ⟨ f(x), φ(x − k) ⟩     (5.6)

For scales j > j_0, the next level of the a trous decomposition of the approximation of f(x) is
obtained by

c_j(k) = ⟨ f(x), (1/2^j) φ( (x − k) / 2^j ) ⟩     (5.7)

The associated detail coefficients are computed as

d_j(k) = ⟨ f(x), (1/2^j) ψ( (x − k) / 2^j ) ⟩     (5.8)
Recall the two well-known two-scale relations for the scaling function and the wavelet
function, φ(x) = √2 Σ_n h(n) φ(2x − n) and ψ(x) = √2 Σ_n g(n) φ(2x − n). According to these equations,
the iterative a trous decomposition has the following form:

c_j(k) = Σ_n h(n) c_{j−1}(k + 2^{j−1} n)     (5.9)

Figure 5.1 shows the holes between samples: the sampling distance is increased by a
factor of 2 at each scale. It also shows that the signal difference {c_j(k) − c_{j+1}(k)},
containing the information between the two scales, is the discrete set associated with the
wavelet function.


Figure 5.1: Levels 0,...,2 of an A Trous Decomposition.
The associated mother wavelet is therefore the difference between the scaling function at
two successive scales,

ψ(x) = 2 φ(2x) − φ(x)     (5.10)

Once the set of approximations c_{j0}(k), ..., c_j(k) has been found, the computation of
the wavelet coefficients is straightforward. Instead of applying steps similar to those
described above, the wavelet coefficients can be found from the difference between
adjacent approximation levels:

d_j(k) = c_{j−1}(k) − c_j(k)     (5.11)

Note that for each decomposition level it is not necessary to subsample after the
convolution, because the low pass and high pass wavelet filters are obtained by simply
inserting 2^j − 1 zeros between each filter coefficient, where j is the current
decomposition level. This creates the holes (a trous in French). All filters are circularly
convolved to prevent discontinuities near the image boundaries. If n is the
number of decomposition levels, then the a trous algorithm outputs n + 1 images of the
same size: a single approximation image plus n detail images. Besides, the a trous
algorithm is also translation-invariant, meaning that a shift of the input simply shifts the
coefficients. Without applying any interpolation, the discrete transform values are
exactly identified at every pixel location. Furthermore, the correlation across scales
can be exploited from its inherent structure [25]. If one reconstructs a signal by
using the a trous algorithm, the reconstruction has the form

c_{j0}(k) = c_n(k) + Σ_{j=1}^{n} d_j(k)     (5.12)

where n is the number of decomposition levels, d_j(k) are the detail coefficient matrices, and c_n(k) is the
approximation.
When compressing an image using WTMM, the edges always tend to shift. To prevent
this, the mother wavelet should be symmetric so that the edge locations match the
WTMM at a given scale. For this reason, the quadratic spline filters are chosen; they are
used to create the holes instead of Daubechies or other filters. These filters are
non-orthogonal and have been shown to be well suited for image processing. Mallat [1]
suggested that the quadratic spline filters be designed as follows.

Suppose that φ̂ and ψ̂ are the Fourier transforms of the scaling function φ and the
wavelet function ψ respectively. The φ̂ of a box spline of degree m, which is a
translation of m + 1 convolutions of 1_{[0,1]} with itself, is given by equation (5.13).
This box spline is centered at t = 1/2 for m even and at t = 0 for m odd. The low pass and
high pass filters in the neighborhood of ω = 0 are designed from equations (5.14) and (5.15).
The Fourier transform of the resulting wavelet, equation (5.16), is associated
with a box spline of degree m + 2 centered at t = 1/2. Figure 5.2 shows the quadratic
spline wavelet function on the left and the scaling function on the right.
Figure 5.2: Quadratic Spline Wavelet (left) and Scaling Function (right)
If this spline is to satisfy perfect reconstruction, the dual scaling function and wavelet
function, as well as the associated dual low pass and high pass filters, must be defined;
they are denoted φ̃, ψ̃, h̃, g̃ respectively. For φ̃ and ψ̃ to be splines, we choose h̃ = h.
As a consequence of this condition, φ̃ = φ. Thus, g̃ is computed as

ĝ̃(ω) = −i √2 e^{iω/2} sin(ω/2) Σ_{n=0}^{2} ( cos(ω/2) )^{2n}     (5.17)
The quadratic spline filter coefficients are shown in Table 5.1. Here h(n) and g(n)
are the lowpass and highpass analysis filters while h̃(n) and g̃(n) are the lowpass and
highpass synthesis filters respectively.
46


n     h(n)     h̃(n)     g(n)       g̃(n)
-2                                  -0.0442
-1    0.1768   0.1768              -0.3094
 0    0.5303   0.5303   -0.7071    -0.9723
 1    0.5303   0.5303    0.7071     0.9723
 2    0.1768   0.1768               0.3094
 3                                   0.0442
Table 5.1: Spline filter coefficients used in the a trous algorithm.
Shensa [39] showed that the Mallat algorithm is closely related to the fully
sampled a trous algorithm built from the same filter bank. As stated earlier in this section, the low
pass and high pass filters at level j are obtained by inserting 2^j − 1 zeros between each sample
of h(n). For any filter h(n), we denote by h_j(n) the filter obtained by inserting 2^j − 1
zeros between each sample of h(n), whose Fourier transform is given by equation (5.14);
here j is the scale parameter (j ∈ Z). Figure 5.3 shows the decomposition filter
banks for the two-dimensional a trous algorithm while Figure 5.4 shows the reconstruction
[1]. In the decomposition process, the approximation is generated by first shifting the
columns of the input signal c_j(n) to the left 2^{j−1} times, where j is the current level of
decomposition. The shifted columns are filtered by circular convolution using the
lowpass filter h_j(n); the rows are then shifted to the left 2^{j−1} times and filtered
with h_j(n) to give the approximation at level j. The horizontal details are calculated by
first filtering the columns of c_j(n) with the high pass filter g_j(n) and then shifting
the columns 2^j times, while the vertical details are obtained by first filtering the rows of c_j(n)
with the high pass filter g_j(n) and then shifting the rows 2^j times.
To reconstruct the original image, the decomposition procedure is reversed,
with the exception that the shifting is altered. At each level of
reconstruction, the filters are dilated by inserting 2^j − 1 zeros between each
coefficient. Note that j decreases at each level of reconstruction. The outputs of the
horizontal detail coefficients filtered with the high pass filter, the vertical detail coefficients
filtered with the high pass filter, and the approximation filtered with the low pass filters
are added together. The result is then multiplied by 1/4.
Figure 5.3: Decomposition Filter Bank for the Algorithm A Trous.
Figure 5.4: Reconstruction Filter Bank for the Algorithm A Trous.
Let h̄_j(n) = h_j(−n) and ḡ_j(n) = g_j(−n). The proposition in [1] states that for any
j ≥ 0, the cascading convolution formulas to compute the dyadic wavelet transform
are given as follows:
c_{j+1}(n) = c_j ∗ (h_j h_j)(n)     (5.18)

d¹_{j+1}(n) = c_j ∗ (g_j δ)(n)   and   d²_{j+1}(n) = c_j ∗ (δ g_j)(n)     (5.19)

and the inversion is obtained by

c_j(n) = c_{j+1} ∗ (h̄_j h̄_j)(n) + d¹_{j+1} ∗ (ḡ_j δ)(n) + d²_{j+1} ∗ (δ ḡ_j)(n)     (5.20)

where h_j(n) is the low pass filter and g_j(n) is the high pass filter. The h̄_j(n) and ḡ_j(n) are
the time-reversed versions of h_j(n) and g_j(n) respectively. Note that δ(n) is the discrete
Dirac used to build separable filters, i.e., (x y)(n, m) = x(n) y(m).
5.2 Reconstruction from Wavelet Transform Modulus Maxima (WTMM)
For the 2-D WTMM of an image f to be recovered, the set of wavelets {ψ_{j,n}(x,y)}
that corresponds to the maxima locations must form a frame of a subspace of L²(Γ); that is,
there must exist two constants A, B with 0 < A ≤ B < ∞ such that

A ‖f‖² ≤ Σ_{j,n} |⟨ f, ψ_{j,n} ⟩|² ≤ B ‖f‖²     (5.21)

where A and B are the lower and upper frame bounds respectively [26]. The image can be
reconstructed from its WTMM if equation (5.21) is satisfied. When A = B, the
frame is said to be tight. The frame forms an orthonormal basis if A = B = 1. If not,
the predictions are based on the knowledge of frame bounds A and B. Therefore,
determining the frame bounds is a very crucial and difficult mathematical problem.
Grochenig [38] came up with a solution, known as the conjugate gradient accelerated
algorithm, that does not require a priori knowledge of the frame bounds. Using this
algorithm, an image can be iteratively recovered from its WTMM.
If {ψ_{j,n}} is a frame of the subspace L²(Γ), the vectors h, r
and p ∈ l²(Z) are initialized as

h_0 = 0,   r_0 = p_0 = S f,   p_{−1} = 0     (5.22)

Then, for n ≥ 0, f can be reconstructed iteratively from the coefficients {⟨ f, ψ_{j,n} ⟩} by
λ_n = ⟨ r_n, p_n ⟩ / ⟨ p_n, S p_n ⟩     (5.23)

h_{n+1} = h_n + λ_n p_n     (5.24)

r_{n+1} = r_n − λ_n S p_n     (5.25)

p_{n+1} = S p_n − ( ⟨ S p_n, S p_n ⟩ / ⟨ p_n, S p_n ⟩ ) p_n − ( ⟨ S p_n, S p_{n−1} ⟩ / ⟨ p_{n−1}, S p_{n−1} ⟩ ) p_{n−1}     (5.26)

where p_n is the search direction, initialized together with the residual vector r_n from the
initial gradient S f, and S p_n is the orthogonal projection of p_n onto the image space
defined by the inner products of the image f with the set of wavelets {ψ_{j,n}}. The λ_n is the
step size of the iteration in the p_n direction. If ‖S‖ = B, ‖S^{−1}‖^{−1} = A, and
σ = (√B − √A)/(√B + √A), the following error estimate holds in the S-norm:

‖ f − h_n ‖_S ≤ ( 2 σ^n / (1 + σ^{2n}) ) ‖ f ‖_S     (5.27)

where the S-norm is defined as

‖ f ‖_S = ⟨ f, S f ⟩^{1/2} = ‖ S^{1/2} f ‖     (5.28)

Let ρ = (B − A)/(B + A) be the convergence factor of the ordinary frame algorithm; equation (5.27) shows
that the conjugate gradient acceleration improves the convergence factor of the reconstruction from ρ
to σ = (√B − √A)/(√B + √A).
The coefficients in Table 5.1 are the spline filter coefficients used in the a trous
algorithm; these filter coefficients are used to implement the WTMM. Using 3 levels of
decomposition and a standard deviation of noise σ = 0.5, Figure 5.5 shows the
approximation, horizontal, and vertical detail coefficients of the WTMM representation
of the Lena image. From the left, the first column shows the approximations at the three levels.
The second and third columns are the horizontal and vertical detail
coefficients of the image at the three levels respectively. The decomposition levels are
arranged in rows. Figure 5.6 (a) shows the 3 levels of
decomposition of the wavelet transform modulus. Figure 5.6 (b) and Figure 5.6 (c) show
the corresponding angle of the wavelet transform modulus and the modulus maxima
respectively. Figure 5.7 (a) is the 256x256 pixel, noiseless, grayscale Lena image used
in the process. The reconstructed image is iteratively recovered from its WTMM by
using the conjugate gradient accelerated algorithm, as illustrated in Figure 5.7 (b); it
has a PSNR value of 47.45 dB. We can see that this reconstructed image is almost
as good as the original image. This is due to the small amount of noise present in the
image (σ = 0.5), which gives a small threshold value. Therefore, many
modulus maxima or edges, as shown in Figure 5.6 (c), are retained in the frame before the
image is reconstructed by the conjugate gradient accelerated algorithm. If a large
amount of noise is present in the image, the threshold value increases and fewer
modulus maxima are detected, which causes the reconstructed image to become
blurred. We show this case in the next chapter.
Figure 5.5: 3-Levels of Decomposition of WTMM Representation of Lena Image.
Figure 5.6: Wavelet Transform Modulus Maxima Representation of Image Lena for
3-Levels Decomposition, (a) wavelet transform modulus, (b) angle of wavelet
transform modulus, (c) modulus maxima.
Figure 5.7: Lena image using WTMM with σ = 0.5, (a) original image, (b)
reconstructed image, PSNR = 47.45 dB, compression rate = 59.03%.
5.3 Summary
This chapter reviews the WTMM and the a trous algorithm in detail. It also
shows how the WTMM uses the a trous algorithm to represent an image. To reconstruct an
image from its WTMM, the conjugate gradient accelerated algorithm is implemented.
More discussion of the application of the WTMM and its experimental data can
be found in the next chapter. We will also conduct more experiments on the
applications of VisuShrink, SureShrink, and BayesShrink. The collected data and
results will then be discussed and compared.
6. Results and Conclusions
This chapter provides more experiments with the wavelet denoising and
compression applications described in chapters 4 and 5. For wavelet denoising, we use 512x512
pixel, noiseless, grayscale images. We corrupt these images with Gaussian noise
and then use VisuShrink, SureShrink, and BayesShrink to remove the noise. The outputs
for each test image, in terms of RMSE and PSNR values as well as image quality, are
discussed and compared. WTMM, however, is applied only to 256x256 pixel,
noiseless, grayscale images. Using this method, images are compressed to the
desired compression rate and reconstructed by using the conjugate gradient accelerated
algorithm. The reconstructed images are then compared in terms of PSNR values and
compression rate.
6.1 Results
The 512x512 pixel, noiseless, grayscale Lena, Goldhill, and Mandrill images are the
standard test images chosen for the experiments with the three denoising methods -
VisuShrink, SureShrink, and BayesShrink - for the purpose of comparison. We first
start with the Lena image. Using a standard deviation of noise σ = 10, this image is
corrupted by Gaussian noise according to equation (3.3), decomposed over 5 levels,
and processed using sym8. We then apply the universal threshold, known as
VisuShrink, to remove the noise. The denoised image is then reconstructed by using the
IDWT. We also use equations (3.7) and (3.10) to calculate the RMSE and PSNR
values of the denoised images. For the SureShrink and BayesShrink techniques, the same
procedure is repeated. Figure 6.1 (a) and Figure 6.1 (b) show the
denoised Lena by VisuShrink with hard and soft thresholding. The qualities of these
denoised images are not as good as Figure 6.1 (c) and Figure 6.1 (d), which are the
denoised Lena by SureShrink with hard and soft thresholding respectively. Figure 6.1
(e) and Figure 6.1 (f) are the denoised Lena by BayesShrink with hard and soft
thresholding. These denoised images, however, have better image quality than those
by both VisuShrink and SureShrink. Compared to Figure 6.1 (b) and Figure 6.1 (d),
Figure 6.1 (f) by BayesShrink with soft thresholding has better image quality. The
resulting PSNR value in decibels and RMSE value for each tested image are tabulated
in Table 6.1. We find that the performances of SureShrink and BayesShrink with
soft thresholding are almost equal in terms of PSNR: SureShrink has
PSNR = 30.39 dB while BayesShrink has 30.34 dB. However, the PSNR value of the
denoised image by VisuShrink, which is 27.82 dB, is smaller than those by
SureShrink and BayesShrink. This shows that SureShrink gives the best PSNR value.
Lena              PSNR Input    PSNR Output (dB)     RMSE Value          σ
Method                          Hard      Soft       Hard      Soft
VisuShrink        28.13         30.58     27.82      7.54      10.36     10
SureShrink        28.13         32.55     30.39      6.01      7.71      10
BayesShrink       28.13         32.27     30.34      6.21      7.75      10
Table 6.1: Comparisons of PSNR and RMSE values of Lena image, using σ = 10.
Figure 6.1: Denoised Lena Images by VisuShrink, SureShrink, and BayesShrink,
using σ = 10, (a) hard VisuShrink, (b) soft VisuShrink, (c) hard SureShrink, (d) soft
SureShrink, (e) hard BayesShrink, (f) soft BayesShrink.
We now investigate the experimental results for the Goldhill image. In this
experiment we apply the same decomposition levels and sym8, but a
different standard deviation of noise, σ = 20. Figure 6.2 (a) and
Figure 6.2 (b) are the denoised Goldhill images by VisuShrink with hard and soft
thresholding. The quality of these two denoised images is very poor: they are
overly smoothed and blurred. This is due to the large standard deviation of noise
σ = 20. When a large amount of noise is present in the image, the threshold value
increases; with such a high threshold value, VisuShrink removes many
coefficients along with the noise and thus causes the image to become overly smoothed.
SureShrink and BayesShrink produce better image quality than VisuShrink. Figure
6.2 (c) and Figure 6.2 (d) illustrate the denoised images by SureShrink with hard and
soft thresholding while Figure 6.2 (e) and Figure 6.2 (f) are the denoised images by
BayesShrink with hard and soft thresholding respectively. The PSNR and RMSE
output values of the denoised Goldhill images are displayed in Table 6.2. From this
table, SureShrink with soft thresholding still has the best PSNR value, which is 28.20
dB, while VisuShrink and BayesShrink have 24.09 dB and 28.15 dB respectively.
Goldhill          PSNR Input    PSNR Output (dB)     RMSE Value          σ
Method                          Hard      Soft       Hard      Soft
VisuShrink        22.11         25.87     24.09      12.97     15.92     20
SureShrink        22.11         27.65     28.20      10.57     9.92      20
BayesShrink       22.11         27.60     28.15      10.63     9.98      20
Table 6.2: Comparisons of PSNR and RMSE values of Goldhill image, using σ = 20.
Figure 6.2: Denoised Goldhill Images by VisuShrink, SureShrink and BayesShrink,
using σ = 20, (a) hard VisuShrink, (b) soft VisuShrink, (c) hard SureShrink, (d) soft
SureShrink, (e) hard BayesShrink, (f) soft BayesShrink.
Using a standard deviation of noise σ = 25, the denoised Mandrill images
are shown in Figure 6.3. Not surprisingly, the denoised images of Figure 6.3 (c) and
Figure 6.3 (d) by SureShrink with hard and soft thresholding and of Figure 6.3 (e) and
Figure 6.3 (f) by BayesShrink with hard and soft thresholding are nearly identical in
terms of image quality because their PSNR values are approximately equal. However,
if we look closely at Figure 6.3 (f), this denoised image has a higher PSNR value than
the others, PSNR = 23.68 dB compared to 23.66 dB for SureShrink and 20.06
dB for VisuShrink, which is why it has slightly better image quality. Again, the
denoised images by VisuShrink are very poor in image quality compared to those
denoised by SureShrink and BayesShrink; Figure 6.3 (a) and Figure 6.3 (b) show
these two images with poor visual quality. Table 6.3 provides the PSNR and RMSE
values of the denoised Mandrill images by the three techniques described.
Mandrill          PSNR Input    PSNR Output (dB)     RMSE Value          σ
Method                          Hard      Soft       Hard      Soft
VisuShrink        20.17         20.98     20.06      22.78     25.33     25
SureShrink        20.17         22.81     23.66      18.44     16.74     25
BayesShrink       20.17         22.82     23.68      18.43     16.69     25
Table 6.3: Comparisons of PSNR and RMSE values of Mandrill image, using σ = 25.
Figure 6.3: Denoised Mandrill Images by VisuShrink, SureShrink and BayesShrink,
using σ = 25, (a) hard VisuShrink, (b) soft VisuShrink, (c) hard SureShrink, (d) soft
SureShrink, (e) hard BayesShrink, (f) soft BayesShrink.
The following are the experimental results for the WTMM. We use the spline filter coefficients in
Table 5.1 to implement the WTMM. The image compression rate is calculated as

Compression Rate (%) = 100 − 100 × (number of retained nonzero modulus maxima coefficients) / (M × N)     (6.1)

where M × N is the size of the image. Figure 6.4 (a) shows a 256x256 pixel, noiseless,
grayscale Head image of a human. For 3 levels of decomposition and a standard
deviation of noise σ = 2, this image is compressed at a rate of 63.28% and then
reconstructed from its WTMM by using the conjugate gradient accelerated algorithm, as
shown in Figure 6.4 (b). The reconstructed image has PSNR and RMSE output
values of 33.93 dB and 5.13 respectively.
Having the same size as the Head image, the Peppers image has a lighter grayscale, as
shown in Figure 6.5 (a). For 3 decomposition levels and a standard deviation of noise
σ = 0.9, the Peppers image is compressed at a rate of 46.13%. Figure 6.5 (b) is the
image reconstructed from its WTMM, whose PSNR and RMSE values are 31.09 dB
and 7.11. The quality of this reconstructed image is almost the same as the original
image.
Using a standard deviation of noise σ = 10, the original Lena image, Figure 6.6 (a),
is decomposed over 3 levels. The compression rate is 97.92%. Figure 6.6 (b) is the
reconstructed image, which has a PSNR value of 36.46 dB and RMSE of 3.83. This
image looks blurrier and smoother than the other reconstructed images, Head and Peppers
described above, due to the higher value of noise. In fact, using the same value of noise
σ = 10, this reconstructed Lena is even blurrier and smoother than the denoised
Lena images produced by VisuShrink, SureShrink, and BayesShrink in Figure 6.1. However,
note that this Lena has a size of 256x256 pixels while the denoised Lena images in
Figure 6.1 have a size of 512x512 pixels. Unlike the denoising methods, which
compare the wavelet coefficients with the threshold values, WTMM compares both the
wavelet transform modulus maxima and their angles to the threshold values. When the noise value
increases, the threshold value increases. This forces most of the modulus maxima, or edges,
out of bounds and makes them difficult to detect. Therefore, when reconstructing, the
conjugate gradient accelerated frame algorithm recovers only a few of the edges and the
reconstructed image becomes overly smoothed and blurred, as shown in Figure 6.6
(b). We can conclude that WTMM is at its best performance when the threshold value
is small. Table 6.4 provides the PSNR and RMSE values of the reconstructed images
with σ values varying from 0.5 to 20. For a standard deviation of noise σ = 20, the Lena
image has a high compression rate, 99.92%, while the Head and Peppers images have only
94.27% and 92.59%.
Figure 6.4: Reconstructed Head Image from WTMM with σ = 2, (a) original image,
(b) reconstructed image (PSNR = 33.93 dB).
Figure 6.5: Reconstructed Peppers Image from WTMM with σ = 0.9, (a) original
image, (b) reconstructed image (PSNR = 31.09 dB).
Figure 6.6: Reconstructed Lena Image from WTMM with σ = 10, (a) original
image, (b) reconstructed image (PSNR = 36.46 dB).
Head - WTMM
σ        0.5     0.9     2       5       10      15      20
PSNR     34.21   34.20   33.93   29.92   27.08   26.22   25.31
RMSE     4.97    4.97    5.13    8.14    11.28   12.46   13.83
CRate %  21.61   22.91   38.06   75.14   87.97   91.99   94.74

Peppers - WTMM
σ        0.5     0.9     2       5       10      15      20
PSNR     31.15   31.09   30.86   30.36   29.36   28.16   27.04
RMSE     7.07    7.11    7.31    7.73    8.68    9.96    11.34
CRate %  36.88   46.13   63.28   76.54   85.11   89.62   92.59

Lena - WTMM
σ        0.5     0.9     2       5       10      15      20
PSNR     47.45   46.68   44.34   39.44   36.46   35.29   34.82
RMSE     1.08    1.18    1.55    2.72    3.83    4.39    4.63
CRate %  59.03   68.37   80.73   92.47   97.92   99.57   99.92
Table 6.4: Comparisons of PSNR values, RMSE values, and CRate (compression rate) of
Head, Peppers, and Lena images at different σ values.
6.2 Conclusions
In this thesis, we have shown that the denoising methods VisuShrink, SureShrink,
and BayesShrink successfully remove Gaussian noise from corrupted images. We
have also shown that SureShrink produces the best PSNR values compared to both
VisuShrink and BayesShrink, as shown in Table 6.1. However, the denoised image by
BayesShrink has better visual quality than those by both VisuShrink and
SureShrink; evidently, Figure 6.1 (f) has better image quality than both Figure 6.1
(b) and Figure 6.1 (d). VisuShrink yields denoised images that are overly
smoothed; clearly, Figure 6.1 (b) is blurrier and smoother than both Figure 6.1 (d)
and Figure 6.1 (f).
In image compression, an image reconstructed from its WTMM by using the
Conjugate Gradient Accelerated Algorithm is almost as good as the original image when
only a small amount of noise is present, as shown in Figure 6.5 (b). When a large amount of
noise is present in an image, the reconstructed image becomes blurred, as shown in
Figure 6.6 (b). That image is even smoother and blurrier than the images
denoised by VisuShrink, SureShrink, or BayesShrink with the same value of
noise. However, with a small value of noise, WTMM produces a better reconstructed
image than VisuShrink, SureShrink, or BayesShrink; the reconstructed image in Figure
6.5 (b) is almost as good as the original image in Figure 6.5 (a).
From the experimental results, the denoised and compressed images are
shown to be favorable. The collected data, in terms of the PSNR and RMSE values
for each experimental image, are accurate and reliable.
In the future, I would like to corrupt images with salt-and-pepper and speckle
noise and then use VisuShrink, SureShrink, and BayesShrink to remove those noises
from the images. I would also like to use WTMM to denoise images, instead of
compressing them, and determine whether it is able to remove noise as well as
VisuShrink, SureShrink, and BayesShrink.
APPENDIX A
VisuShrink Matlab Program
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
% Program asks the user to input the image of size 512x512,         %
% standard deviation noise to create noisy image, and wavelet type. %
% Perform forward wavelet transform on the image. Apply hard and    %
% soft thresholds on wavelet domain. Then perform inverse wavelet   %
% transform for hard and soft thresholded images. Compute RMSE      %
% and PSNR values for the denoised images. Display the results:     %
% original image, noisy image, denoised image with hard threshold   %
% and denoised image with soft threshold.                           %
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
function VisuShrink(SIZE, sigma, wtype, num_levels, max_levels,...
filename)
clear all
close all
SIZE = 512;              % image size
dwtmode('per');          % discrete wavelet transform extension mode
num_levels = 5;          % number of decomposition levels
max_levels = 9;          % max number of decomposition levels
PLOT = 'on';             % if PLOT='on', the results are plotted
% Load the original noise-free image
display('Available images for selection');
display('    1:lena.tif');
display('    2:mandrill.tif');
display('    3:goldhill.tif');
display('    4:boat.tif');
display('    5:einstein.tif');
display('    6:peppers512.tif');
imag = input('Please enter image (Ex:1): ');
switch imag
    case 1
        filename = imread('lena.tif');
    case 2
        filename = imread('mandrill.tif');
    case 3
        filename = imread('goldhill.tif');
    case 4
        filename = imread('boat.tif');
    case 5
        filename = imread('einstein.tif');
    case 6
        filename = imread('peppers512.tif');
end
fprintf(1,'\n');
display('Standard deviation noise');
sigma = input('Please enter sigma value (Ex:5): ');
fprintf(1,'\n');
%-----------------------------------
filename = filename;
original = double(filename);
% Create input noisy image
%-------------------------
randn('state',0);
noise = randn(size(original));
noise = noise/std(noise(:));
noisy_im = original+sigma*noise;
Y=noisy_im;
display('Orthonormal wavelets');
display('enter 1 for haar wavelet');
display('enter 2 for db2 wavelet');
display('enter 3 for db4 wavelet');
display('enter 4 for db8 wavelet');
display('enter 5 for sym4 wavelet');
display('enter 6 for sym6 wavelet');
display('enter 7 for sym8 wavelet');
display('press any key to quit');
wavelet=input('enter your choice: ');
switch wavelet
case 1
wtype='haar';
case 2
wtype='db2';
case 3
wtype='db4';
case 4
wtype='db8';
case 5
wtype='sym4' ;
case 6
wtype ='sym6';
case 7
wtype='sym8';
otherwise
quit ;
end
fprintf(1,'\n');
start = clock;
%Forward wavelet transform on image Y,
%where num levels are wavelet numbers.
for i=1:num_levels
    sz=2^i;             % number of rows = number of columns
    J=SIZE/sz;
    [LL,LH,HL,HH] = dwt2(Y,wtype);
    coef(1:J,1:J)=LL;
    coef(J+1:2*J,1:J)=LH;
    coef(1:J,J+1:2*J)=HL;
    coef(J+1:2*J,J+1:2*J)=HH;
    Y=LL;
end
%Threshold used in denoising steps in wavelet domain
%--------------------------------------------------
Threshold=sqrt(2*log(SIZE*SIZE))*sigma; % universal thresholding
for i=l:SIZE
for j =1:SIZE
%performing hard threshold, keep coefficient above threshold
% anything below threshold kills or sets to zero,
if (abs(coef(i,j))>Threshold)
detcoef_hard(i,j)=coef(i,j);
else
detcoef_hard(i, j)=0;
end
%performing soft threshold, set to zero any coefficient below
%threshold in absolute value
if (coef(i,j)>Threshold)
detcoef_soft(i,j)=coef(i,j)-Threshold;
end
if (coef(i,j)<-Threshold)
detcoef_soft(i,j)=coef(i,j)+Threshold;
end
if (abs(coef(i,j))<Threshold)
detcoef_soft(i,j ) =0 ;
end
end
end
coef=detcoef_hard; % detail coefficient using hard threshold
coeff=detcoef_soft; % detail coefficient using soft threshold
%Inverse wavelet transform for hard threshold
%---------------------------------------------
for level=1:num_levels
    J=2^(max_levels-num_levels+level-1);
    LL=coef(1:J,1:J);
    LH=coef(J+1:2*J,1:J);
    HL=coef(1:J,J+1:2*J);
    HH=coef(J+1:2*J,J+1:2*J);
    Y=idwt2(LL,LH,HL,HH,wtype);
    coef(1:2*J,1:2*J)=Y;
    if (level<num_levels)
        clear Y;
    else
        Y_hard=Y;
    end
end
clear Y;
image_hard = Y_hard;
%Inverse wavelet transform for soft threshold
for level=1:num_levels
    J=2^(max_levels-num_levels+level-1);
    LL=coeff(1:J,1:J);
    LH=coeff(J+1:2*J,1:J);
    HL=coeff(1:J,J+1:2*J);
    HH=coeff(J+1:2*J,J+1:2*J);
    Y=idwt2(LL,LH,HL,HH,wtype);
    coeff(1:2*J,1:2*J)=Y;
    if (level<num_levels)
        clear Y;
    else
        Y_soft=Y;
    end
end
image_soft = Y_soft;
time = etime(clock, start);
%PSNR computation for denoising using hard and soft thresholds
%--------------------------------------------------------------
rmse_input=sqrt(sum(sum((noisy_im(:)-original(:)).^2)))/512;
psnr_input=20*log10(255/rmse_input);
rmse_hard=sqrt(sum(sum((image_hard(:)-original(:)).^2)))/512;
psnr_hard=20*log10(255/rmse_hard);
rmse_soft=sqrt(sum(sum((image_soft(:)-original(:)).^2)))/512;
psnr_soft=20*log10(255/rmse_soft);
%Display results
fprintf(1,'VisuShrink');
fprintf(1,['\nThe RMSE input = ' num2str(rmse_input,'%.2f')]);
fprintf(1,['\nThe PSNR input = ' num2str(psnr_input,'%.2f') ' [dB]']);
fprintf(1,'\n\n');
fprintf(1,'VisuShrink with hard threshold');
fprintf(1,['\nThe RMSE = ' num2str(rmse_hard,'%.2f')]);
fprintf(1,['\nThe PSNR = ' num2str(psnr_hard,'%.2f') ' [dB]']);
fprintf(1,'\n\n');
fprintf(1,'VisuShrink with soft threshold');
fprintf(1,['\nThe RMSE = ' num2str(rmse_soft,'%.2f')]);
fprintf(1,['\nThe PSNR = ' num2str(psnr_soft,'%.2f') ' [dB]']);
fprintf(['\nElapsed time : ' num2str(time,'%.2f') ' [s]\n\n']);
% Plot results
%-------------------
if(strcmp(PLOT,'on'))
figure;
%h = figure('Units','normalized','Position',[0 0.5 1 0.5])
%set(h,'name','on');
subplot(1, 2, 1)
imagesc(original,[0 255]); colormap gray(256);
axis image; axis off;
title('(a) Original image', 'fontsize',16);drawnow;
subplot(1, 2, 2)
imagesc(noisy_im,[0 255]); colormap gray(256);
axis image; axis off;
    title(['(b) Noisy image(PSNR Input=' num2str(psnr_input,...
        '%.2f') 'dB)'],'fontsize',16);drawnow;
figure;
subplot(1, 2,1 )
imagesc(image_hard,[0 255]); colormap gray(256);
axis image; axis off;
title('Denoised image with Hard');
    title(['(c) Denoised with Hard(PSNR=' num2str(psnr_hard,...
        '%.2f') 'dB)'],'fontsize',16);drawnow;
subplot(1, 2, 2)
imagesc(image_soft,[0 255]); colormap gray(256);
axis image; axis off;
title('Denoised image with Soft');
    title(['(d) Denoised with Soft(PSNR=' num2str(psnr_soft,...
        '%.2f') 'dB)'],'fontsize',16);drawnow;
    fprintf(1,'The program completed successfully\n');
end
APPENDIX B
SureShrink Matlab Program

% Program asks the user to input the image of size 512x512,
% standard deviation noise to create noisy image, and wavelet type.
% Perform forward wavelet transform on the image by assigning Sure
% threshold to horizontal subband. Check for sparse and then assign%
% Sure threshold minimum to hard and soft thresholds on wavelet %
% domain. Repeat process for vertical and diagonal subbands. Then %
% perform inverse wavelet transform for hard and soft thesholded %
% images. Compute for RMSE and PSNR values for the denoised images.%
% Display the results: original image, noisy image, denoised image %
% with hard threshold and denoised image with soft threshold %
function SureShrink(SIZE, sigma, wtype, filename, num_levels,...
max_levels)
clear all
close all
SIZE = 512;              % image size
%sigma = 40;             % noise standard deviation (entered below)
%wtype = 'db4';          % orthonormal wavelet filter (selected below)
dwtmode('per');          % discrete wavelet transform extension mode
num_levels = 5;          % number of decomposition levels
max_levels = 9;          % max number of decomposition levels
PLOT = 'on';             % if PLOT='on', the results are plotted
% Load the original noise-free image
%------------------------------------
display('Available images for selection');
display('    1:lena.tif');
display('    2:mandrill.tif');
display('    3:goldhill.tif');
display('    4:boat.tif');
display('    5:einstein.tif');
display('    6:peppers512.tif');
imag = input('Please enter image (Ex:1): ');
switch imag
    case 1
        filename = imread('lena.tif');
    case 2
        filename = imread('mandrill.tif');
    case 3
        filename = imread('goldhill.tif');
    case 4
        filename = imread('boat.tif');
    case 5
        filename = imread('einstein.tif');
    case 6
        filename = imread('peppers512.tif');
end
fprintf(1,'\n');
display('Standard deviation noise');
sigma = input('Please enter sigma value (Ex:5): ');
fprintf(1,'\n');
original = double(filename);    % use the image selected above
% Create input noisy image
%-------------------------
randn('state',0);
noise = randn(size(original));
noise = noise/std(noise(:));
noisy_im = original+sigma*noise;
X=noisy_im;
display('Orthonormal wavelets');
display('enter 1 for haar wavelet');
display('enter 2 for db2 wavelet');
display('enter 3 for db4 wavelet');
display('enter 4 for db8 wavelet');
display(enter 5 for sym4 wavelet');
display('enter 6 for sym6 wavelet');
display('enter 7 for sym8 wavelet');
display('press any key to quit');
wavelet=input('enter your choice: ');
switch wavelet
case 1
wtype='haar';
case 2
wtype='db2';
case 3
wtype='db4';
case 4
wtype='db8' ;
case 5
wtype='sym41;
case 6
wtype =1sym61;
case 7
wtype='sym8';
otherwise
quit ;
end
fprintf(1,'\n');
start = clock;
%Forward wavelet transform on image y,
%where num levels are wavelet numbers
for level=1:num_levels
    sz=2^level;
    J=SIZE/sz;
    N=J^2;
    [LL,LH,HL,HH] = dwt2(X,wtype);
    coef(1:J,1:J)=LL;
    decoef_hard(1:J,1:J)=LL;
    decoef_soft(1:J,1:J)=LL;
    % The horizontal subband
    coef(J+1:2*J,1:J)=LH;
    for j=1:J
        Y((j-1)*J+1:j*J)=LH(j,1:J)';
    end
    Y=sort(abs(Y));
    Y=Y(J*J:-1:1);
    sure_min=1.0e+8;
    ssum=Y*Y';
    for k=1:N
        t=Y(k);
        s=Y(k:N)*(Y(k:N))';
        sure=s-(N-k)*sigma^2+k*(sigma^2+t^2);
        if (sure<sure_min)
            tmin=t;
            sure_min=sure;
        end
    end
    epsilon=sigma^2*sqrt(N)*(log(N))^(3.0/2);
    if (ssum-N*sigma^2<=epsilon)
        thr_sure_hard=sigma*sqrt(2*log(N));
        thr_sure_soft=sigma*sqrt(2*log(N));
    else
        thr_sure_soft=tmin;
        thr_sure_hard=2*tmin;
    end
LH_hard=zeros(J,J);
LH_soft=zeros(J, J) ;
for i=l:J
for j=l:J
if (abs(LH(i,j))>thr_sure_hard)
LH_hard(i,j)=LH(i,j);
else
LH_hard(i,j)=0.0;
end
            if (LH(i,j)>thr_sure_soft)
                LH_soft(i,j)=LH(i,j)-thr_sure_soft;
            end
            if (LH(i,j)<-thr_sure_soft)
                LH_soft(i,j)=LH(i,j)+thr_sure_soft;
            end
            if (abs(LH(i,j))<thr_sure_soft)
                LH_soft(i,j)=0.0;
end
end
end
decoef_hard(J+l:2*J,1:J)=LH_hard;
decoef_SOft(J+l:2*J,1:J)=LH_soft;
% The vertical subband
coef(1:J,J+l:2*J)=HL;
    for j=1:J
        Y((j-1)*J+1:j*J)=HL(j,1:J)';
    end
    Y=sort(abs(Y));
    Y=Y(J*J:-1:1);
    sure_min=1.0e+9;
    ssum=Y*Y';
    for k=1:N
        t=Y(k);
        s=Y(k:N)*(Y(k:N))';
        sure=s-(N-k)*sigma^2+k*(sigma^2+t^2);
        if (sure<sure_min)
            tmin=t;
            sure_min=sure;
        end
    end
    epsilon=sigma^2*sqrt(N)*(log(N))^(3.0/2);
    if (ssum-N*sigma^2<=epsilon)
        thr_sure_hard=sigma*sqrt(2*log(N));
        thr_sure_soft=sigma*sqrt(2*log(N));
    else
        thr_sure_soft=tmin;
        thr_sure_hard=2*tmin;
    end
HL_hard=zeros(J,J);
HL_soft=zeros(J,J);
for i=l:J
for j =1:J
if (abs(HL(i,j))>thr_sure_hard)
HL_hard(i,j)=HL(i,j);
else
HL_hard(i,j)=0.0;
end
if (HL(i,j)>thr_sure_soft)
HL_soft(i,j)=HL(i,j)-thr_sure_soft
end
if (HL(i,j)<-thr_sure_soft)
HL_soft(i,j)=HL(i,j)+thr_sure_soft
end
            if (abs(HL(i,j))<thr_sure_soft)
                HL_soft(i,j)=0.0;
end
end
end
decoef_hard(1:J,J+l:2*J)=HL_hard;
decoef_soft(1:J,J+1:2*J)=HL_soft;
% The diagonal subband
coef(J+1:2*J,J+1:2*J)=HH;
    for j=1:J
        Y((j-1)*J+1:j*J)=HH(j,1:J)';
    end
    Y=sort(abs(Y));
    Y=Y(J*J:-1:1);
    sure_min=1.0e+10;
    ssum=Y*Y';
    for k=1:N
        t=Y(k);
        s=Y(k:N)*(Y(k:N))';
        sure=s-(N-k)*sigma^2+k*(sigma^2+t^2);
        if (sure<sure_min)
            tmin=t;
            sure_min=sure;
        end
    end
    epsilon=sigma^2*sqrt(N)*(log(N))^(3.0/2);
    if (ssum-N*sigma^2<=epsilon)
        thr_sure_hard=sigma*sqrt(2*log(N));
        thr_sure_soft=sigma*sqrt(2*log(N));
    else
        thr_sure_soft=tmin;
        thr_sure_hard=2*tmin;
    end
HH_hard=zeros(J, J);
HH_soft=zeros(J, J) ;
for i=l:J
for j=l:J
if (abs(HH(i,j))>thr_sure_hard)
HH_hard(i,j)=HH(i,j);
else
HH_hard(i,j)=0.0 ;
end
            if (HH(i,j)>thr_sure_soft)
                HH_soft(i,j)=HH(i,j)-thr_sure_soft;
            end
            if (HH(i,j)<-thr_sure_soft)
                HH_soft(i,j)=HH(i,j)+thr_sure_soft;
            end
            if (abs(HH(i,j))<thr_sure_soft)
                HH_soft(i,j)=0.0;
end
end
end
decoef_hard(J+l:2*J,J+l:2*J)=HH_hard;
decoef_soft(J+l:2*J,J+l:2*J)=HH_Soft;
clear X;
X=LL;
clear LL;
clear LH;
clear HL;
clear HH;
clear Y;
end
coef=decoef_hard;
coeff=decoef_soft ;
% inverse wavelet transform for hard threshold
for level=1:num_levels
    J=2^(max_levels-num_levels+level-1);
    LL=coef(1:J,1:J);
    LH=coef(J+1:2*J,1:J);
    HL=coef(1:J,J+1:2*J);
    HH=coef(J+1:2*J,J+1:2*J);
    X=idwt2(LL,LH,HL,HH,wtype);
    coef(1:2*J,1:2*J)=X;
    if (level<num_levels)
        clear X;
    else
        X_hard=X;
    end
    clear LL;
    clear LH;
    clear HL;
    clear HH;
end
% inverse wavelet transform for soft threshold
for level=1:num_levels
    J=2^(max_levels-num_levels+level-1);
    LL=coeff(1:J,1:J);
    LH=coeff(J+1:2*J,1:J);
    HL=coeff(1:J,J+1:2*J);
    HH=coeff(J+1:2*J,J+1:2*J);
    X=idwt2(LL,LH,HL,HH,wtype);
    coeff(1:2*J,1:2*J)=X;
    if (level<num_levels)
        clear X;
    else
        X_soft=X;
    end
end
image_hard=X_hard;
image_soft=X_soft,-
time = etime(clock, start);
%PSNR computation for denoising using hard and soft thresholds
%--------------------------------------------------------------
rmse_input=sqrt(sum(sum((noisy_im(:)-original(:)).^2)))/512;
psnr_input=20*log10(255/rmse_input);
rmse_hard=sqrt(sum(sum((image_hard(:)-original(:)).^2)))/512;
psnr_hard=20*log10(255/rmse_hard);
rmse_soft=sqrt(sum(sum((image_soft(:)-original(:)).^2)))/512;
psnr_soft=20*log10(255/rmse_soft);
%Display results
%-------------------------
fprintf(1,1SureShrink');
fprintf([1,'\nThe RMSE input = num2str(rmse_input,%.2f')])
fprintf([1,'\nThe PSNR input= ', num2str(psnr_input,...
' %.2 f 1 ) 1 [dB] '] ) ;
fprintf(1,1\n\n')
fprintf(1,'SureShrink with hard threshold');
fprintf([1\nThe RMSE = num2str(rmse_hard, 1 %.2f')] );
fprintf([1\nThe PSNR = num2str(psnr_hard,'%.2f') '[dB]'])
fprintf(1,'\n\n');
fprintf(1,'SureShrink with soft threshold');
fprintf([1,'\nThe RMSE = ', num2str(rmse_soft/'%.2f\n')]);
fprintf([1,'\nThe PSNR = ', num2str(psnr_soft,'%.2f') '[dB]'])
fprintf(['\nElapsed time : ' num2str(time,'%.2f') '[s]\n\n']);
% Plot results
%-------------------
if(strcmp(PLOT,'on'))
figure;
subplot(1, 2, 1)
imagesc(original,[0 255]); colormap gray(256);
axis image; axis off;
title(1(a) Original image', 'fontsize',16);drawnow;
subplot(1, 2, 2)
imagesc(noisy_im,[0 255]); colormap gray(256);
axis image; axis off;
title(['(b) Noisy image(PSNR Input=' num2str(psnr_input,
'%.2 f') 'dB)' ],'fontsize16);drawnow;
figure;
subplot(1, 2,1 )
imagesc(image_hard,[0 255]); colormap gray(256);
axis image; axis off;
title('Denoised image with Hard');
title(['(c) Denoised with Hard(PSNR=' num2str(psnr_hard,
'%.2f') 'dB)' ],'fontsize',16)jdrawnow;
subplot(1, 2, 2)
imagesc(image_soft,[0 255]); colormap gray(256);
axis image; axis off;
title('Denoised image with Soft');
title(['(d) Denoised with Soft(PSNR=' num2str(psnr_soft,
'%.2f') 'dB)'],'fontsize',16);drawnow;
fprintf(1,'The program completed successfully\n');
end
APPENDIX C

BayesShrink Matlab Program
%%%%%%%%%%%%%%%%%%!
% Program asks the user to input the image of size 512x512, %
% standard deviation noise to create noisy image, and wavelet type.%
% Perform forward wavelet transform on the image. Compute
% BayesShrink for hard and soft thresholds on subbands.
% Perform the resulted hard and soft threshold on horizontal %
% subband. Repeat the processes for vertical and diagonal subbands %
% Then perform inverse wavelet transform for hard and soft %
% thesholded images. Compute for RMSE and PSNR values for the %
% denoised images. Display the results: original image, noisy %
% image, denoised image with hard threshold and denoised image %
% with soft threshold %
function BayesShrink(SIZE, sigma, wtype, filename, num_levels,...
max levels)
clear all
close all
SIZE = 512;              % image size
%sigma = 10;             % noise standard deviation (entered below)
%wtype = 'db4';          % orthonormal wavelet filter (selected below)
dwtmode('per');          % discrete wavelet transform extension mode
num_levels = 5;          % number of decomposition levels
max_levels = 9;          % max number of decomposition levels
PLOT = 'on';             % if PLOT='on', the results are plotted
% Load the original noise-free image
%-----------------------------------
display('Available images for selection');
display('    1:lena.tif');
display('    2:mandrill.tif');
display('    3:goldhill.tif');
display('    4:boat.tif');
display('    5:einstein.tif');
display('    6:peppers512.tif');
imag = input('Please enter image (Ex:1): ');
switch imag
    case 1
        filename = imread('lena.tif');
    case 2
        filename = imread('mandrill.tif');
    case 3
        filename = imread('goldhill.tif');
    case 4
        filename = imread('boat.tif');
    case 5
        filename = imread('einstein.tif');
    case 6
        filename = imread('peppers512.tif');
end
fprintf(1,'\n');
display('Standard deviation noise');
sigma = input('Please enter sigma value (Ex:5): ');
fprintf(1,'\n');
filename = filename;
original = double(filename);
%filename = './Images/goldhill.tif';
%original = double(imread(filename));
% Create input noisy image
%-------------------------
randn('state' 0) ;
noise = randn(size(original));
noise = noise/std(noise(:));
noisy_im = original+sigma*noise;
X=noisy_im;
display('Orthonormal wavelets');
display('enter 1 for haar wavelet');
display('enter 2 for db2 wavelet');
display('enter 3 for db4 wavelet');
display('enter 4 for db8 wavelet');
display('enter 5 for sym4 wavelet');
display('enter 6 for sym6 wavelet');
display('enter 7 for sym8 wavelet');
display('press any key to quit');
wavelet=input('enter your choice: ');
switch wavelet
case 1
wtype='haar';
case 2
wtype='db2' ;
case 3
wtype='db4';
case 4
wtype='db8';
case 5
wtype='sym4';
case 6
wtype ='sym6';
case 7
wtype='sym8';
otherwise
quit ;
end
fprintf(1,'\n');
start = clock;
%Forward wavelet transform on image Y,
%where num_levels are wavelet numbers
%--------------------------------------
for i=1:num_levels
    sz=2^i;
    J=SIZE/sz;
    [LL,LH,HL,HH]=dwt2(X,wtype);
coef(1:J,1:J)=LL;
detcoef_hard(1:J,1:J)=LL;
detcoef_soft(1:J,1:J)=LL;
% The horizontal subband
coef(J+l:2*J,1:J)=LH;
for j =1:J
Y((j-1)*J+l:j *J)=LH(j,1:J) 1 ;
end
    sigma_y=var(Y);   % the observed variance on the subband (Eq. 4.15)
    % standard deviation on the subband (Eq. 4.14)
    sigma_x=sqrt(max((sigma_y-sigma^2),0));
    % compute BayesShrink hard and soft thresholds on the subband
    if (sigma_x==0)
        thr_bayes_soft=max(abs(Y));
    else
        % BayesShrink subband dependent thresholds are optimal
        % for soft thresholding.
        thr_bayes_soft=sigma^2/sigma_x;   % (Eq. 4.16)
    end
    % compute BayesShrink hard threshold
    thr_bayes_hard=2*thr_bayes_soft;
LH_hard=zeros(J,J);
LH_soft=zeros(J,J);
tperform BayesShrink hard and soft threshold
% on the horizontal subband
for i=l:J
for j=l:J
if (abs(LH(i,j))>thr_bayes_hard)
LH_hard(i,j)=LH(i, j ) ;
else
LH_hard(i,j)= 0.0;
end
if (LH(i,j)>thr_bayes_soft)
LH_soft(i,j)=LH(i,j)-thr_bayes_soft;
end
if (LH(i,j)<-thr_bayes_soft)
LH_soft(i,j)=LH(i,j)+thr_bayes_soft;
end
if (abs(LH(i,j)) LH_sof t (i, j ) =0.0,-
end
end
end
detcoef_hard(J+l:2*J,1:J)=LH_hard;
detcoef_SOft(J+1:2*J,1:J)=LH_SOft;
The vertical subband
coef(1:J,J+l:2*J)=HL;
for j =1:J
Y( (j-1)*J+l:j *J)=HL(j,1:J) ;
end
sigma_y=var (Y) , %the observed variance on the subband (Eq: 4.15)
%standard deviatin on the subband (Eq: 4.14 )
sigma_x=sqrt(max((sigma_y-sigma*2),0));
%compute Bayes hard and soft threshold on the subband
o.
O-- -- ----------- -
if (sigma_x==0)
thr_bayes_soft=max(abs(Y));
else
%BayesShrink subband dependent thresholds are optimal
%for soft thresholing.
thr_bayes_soft=sigmaA2/sigma_x; %Eq: (4.16)
end
%compute BayesShrink hard threshold
thr_baye s_hard=2 thr_baye s_so f t;
HL_hard=zeros(J,J);
HL_soft=zeros(J,J);
%perform BayesShrink hard and soft threshold
%on the horizontal subband
o, _
"O -- - --- - ~ -- - --
for i=l:J
for j=l:J
if (abs(HL(i,j))>thr_bayes_hard)
HL_hard(i, j )=HL(i,j);
else
HL_hard(i,j)=0.0 ;
end
if (HL(i,j)>thr_bayes_soft)
HL_soft(i,j)=HL(i,j)-thr_bayes_soft;
end
if (HL(i,j)<-thr_bayes_soft)
HL_soft(i,j)=HL(i,j)+thr_bayes_soft;
end
if (abs(HL(i,j)) HL_soft(i,j)=0.0;
end
end
end
detcoef_hard(1:J,J+l:2*J)=HL_hard;
detcoef_soft(1:J,J+1:2*J)=HL_soft;
The diagonal subband
coef(J+l:2*J,J+l:2*J)=HH;
for j =1:J
Y((j-l)*J+l:j*J)=HH(j,1:J)';
end
sigma_y=var(Y); %the observed variance on the subband (Eq: 4.15)
%standard deviatin on the subband (Eq: 4.14 )
sigma_x=sqrt(max((sigma_y-sigmaA2),0));
if (sigma_x==0)
thr_bayes_soft=max(abs(Y));
else
%BayesShrink subband dependent thresholds are optimal for soft
%thresholing, Eq: (4.16)
thr_bayes_soft=sigma^2/sigma_x;
end
%compute BayesShrink hard threshold
thr_bayes_hard=2*thr_bayes_soft;
HH_hard=zeros(J,J);
HH_soft=zeros(J,J);
for i=l:J
for j =1:J
if (abs(HH(i,j))>thr_bayes_hard)
HH_hard(i,j)=HH(i,j);
else
HH_hard(i,j)=0.0 ;
end
if (HH(i,j)>thr_bayes_soft)
HH_soft(i,j)=HH(i,j)-thr_bayes_soft;
end
if (HL(i,j)<-thr_bayes_soft)
HH_soft(i,j)=HH(i,j)+thr_bayes_soft;
end
if (abs(HH(i,j)) HH_soft(i,j)=0.0;
end
end
end
detcoef_hard(J+l:2*J,J+l:2*J)=HH_hard;
detCOef_Soft(J+l:2*J,J+l:2*J)=HH_soft;
clear X;
X=LL ;
clear LL;
clear LH;
clear HL;
clear HH;
clear Y;
end
coef=detcoef_hard; % detail coefficient using hard threshold
coeff=detcoef_soft; % detail coefficient using soft threshold
%Inverse wavelet transform for BayesShrink hard threshold
for level=1:num_levels
    J=2^(max_levels-num_levels+level-1);
    LL=coef(1:J,1:J);
    LH=coef(J+1:2*J,1:J);
    HL=coef(1:J,J+1:2*J);
    HH=coef(J+1:2*J,J+1:2*J);
    X=idwt2(LL,LH,HL,HH,wtype);
    coef(1:2*J,1:2*J)=X;
    if (level<num_levels)
        clear X;
    else
        X_hard=X;
    end
end
image_hard = X_hard;
%Inverse wavelet transform for BayeShrink soft threshold
%--------------------------------------------
for level=1:num_levels
    J=2^(max_levels-num_levels+level-1);
    LL=coeff(1:J,1:J);
    LH=coeff(J+1:2*J,1:J);
    HL=coeff(1:J,J+1:2*J);
    HH=coeff(J+1:2*J,J+1:2*J);
    X=idwt2(LL,LH,HL,HH,wtype);
    coeff(1:2*J,1:2*J)=X;
    if (level<num_levels)
        clear X;
    else
        X_soft=X;
    end
end
image_soft=X_soft;
time = etime(clock,start);
%PSNR computation for denoising using hard and soft thresholds
rmse_input=sqrt(sum(sum((noisy_im(:)-original(:)).^2)))/512;
psnr_input=20*log10(255/rmse_input);
rmse_hard=sqrt(sum(sum((image_hard(:)-original(:)).^2)))/512;
psnr_hard=20*log10(255/rmse_hard);
rmse_soft=sqrt(sum(sum((image_soft(:)-original(:)).^2)))/512;
psnr_soft=20*log10(255/rmse_soft);
%Display results
fprintf(1,'BayesShrink');
fprintf([1,'\nThe RMSE input = num2str(rmse_input,'%.2f')])
fprintf([1,'\nThe PSNR input= ', num2str(psnr_input,...
' %.2 f ') [dB] '] ) ;
fprintf(1,1\n\n')
fprintf(1,'BayesShrink with hard threshold');
fprintf([1,'\nThe RMSE = num2str(rmse_hard, '%.2f ')]
fprintf([1,'\nThe PSNR = ', num2str(psnr_hard, '% 2f ')
fprintf(1,'\n\n');
) ;
[dB]
] )
fprintf(1,'BayesShrink with soft threshold');
fprintf([1,'\nThe RMSE = ', num2str(rmse_soft,'%.2f\n')]);
fprintf([1,'\nThe PSNR = ', num2str(psnr_soft, '%.2f ') ' [dB] '])
fprintf(['\nElapsed time : ' num2str(time, '% 2f') '[s]\n\n']);
% Plot results
%-------------------
if(strcmp(PLOT,'on'))
figure;
subplot(1, 2, 1)
imagesc(original,[0 255]); colormap gray(256);
axis image; axis off;
title('(a) Original image', 'fontsize',16);drawnow;
subplot(1, 2, 2)
imagesc(noisy_im,[0 255]); colormap gray(256);
axis image; axis off;
title(['(b) Noisy image(PSNR Input=' num2str(psnr_input,..
' % 2f ') ' dB) ] fontsize 16) ,-drawnow;
figure;
subplot(1, 2,1 )
imagesc(image_hard,[0 255]); colormap gray(256);
axis image; axis off;
title(1Denoised image with Hard');
title(['(c) Denoised with Hard(PSNR=' num2str(psnr_hard,
'% 2f ) 'dB) ] 'fontsize',16) ;drawnow;
subplot(1, 2, 2)
imagesc(image_soft,[0 255]); colormap gray(256);
axis image; axis off;
title('Denoised image with Soft');
title(['(d) Denoised with Soft(PSNR=' num2str(psnr_soft,
' % 2f') 'dB) '], 'fontsize',16);drawnow;
fprintf(1,'The program completed successfully\n');
end
APPENDIX D
Wavelet Transform Modulus Maxima Matlab Programs
a. Image Reconstruction from Modulus Maxima by using the Conjugate Gradient
Algorithm
function recon_mm(x,lvl,sigma)
% Reconstruct image from WTMM using the conjugate gradient accerated
% algorithm. For lvl level decomposition, the image is reconstructed
% based on the the detial coefficients corresponding to the modulus
% maxima. Original and reconstructed images are also displayed along
% with the resultant peak signal-to-noise ratio (PSNR) and root mean
% square error (RMSE)
%------------------------------------------
% x: 256x256 input image
% lvl: decomposition level
% sigma: standard deviation noise
%------------------------------------------
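% Note: although x, lvl, and sigma appear as input arguments, the clear
% statement and interactive prompts below override them, so the routine
% can simply be run as recon_mm with no arguments, with the level, sigma,
% and test image selected at the prompts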
clear all
close all
display('Level of decomposition ');
lvl = input('Enter between 1 to 3: ');
fprintf(1,'\n');
display('Standard deviation noise ');
sigma = input('Please enter sigma value (Ex:0.5): ');
fprintf(1,'\n');
%Thresholding
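%Universal threshold sqrt(2*log(N))*sigma with N = 256*256 pixels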
Threshold=sqrt(2*log(256*256))*sigma;
%x = imread('Lena.bmp');
%use image with size of 256x256 only
display('Select the image');
display(' 1:lena.bmp')
display(' 2:coco.tif')
display(' 3:brain.tif');
display(' 4:house.tif');
display(' 5:peppers.tif');
display(' 6:cameraman.tif');
%display(' ');
imag=input('Please enter image (Ex:l): ');
switch imag
case 1
x=imread('lena.bmp');
case 2
x=imread('coco.tif');
case 3
x=imread('brain.tif');
case 4
x=imread('house.tif');
case 5
x=imread('peppers.tif');
case 6
x=imread('cameraman.tif');
end
fprintf(1,'\n');
%----------------------------------------
x=double(x);
y = x;
save_orig = y; %Save original image to display later
[nr,nc]=size(y);
%---------------------------------------------------
%Image decomposition
%a = last level approximation
%D1_MM = contains all levels of the horizontal details corresponding
%to the modulus maxima matrix masks
%D2_MM = contains all levels of the vertical details corresponding
%to the modulus maxima matrix masks
%gprime = reconstruction high pass filter
%hprime = reconstruction low pass filter
[a, D1_MM, D2_MM, gprime, hprime] = mm_atrous(y,lvl,Threshold);
%Generate modulus maxima masks
%u_a = (abs(a) > 0);
u_d1 = (abs(D1_MM) > 0);
u_d2 = (abs(D2_MM) > 0);
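%u_d1 and u_d2 are binary masks marking where horizontal and vertical
%detail coefficients are retained as modulus maxima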
%Initialize search direction, p, with the initial gradient
%which is similar to the first approximation of the original image,
p = atrous_up(lvl, hprime, gprime, a, D1_MM, D2_MM);
f = zeros(1, nr*nc); %Initialize the image approximation f
pold = zeros(1,nr*nc); %Initialize previous search direction pold
%Initialize residual vector r
%This involves changing the p matrix into a vector,
r = p(1,:);
for i = 2:nr
r = [r p(i,:)];
end
%Initialize Lpold, the projection of p onto the image space
Lpold = zeros(1,nr*nc);
imax = 16;
i = 0;
%Iterate the conjugate gradient algorithm for a fixed number of
%iterations (imax) to drive down the reconstruction error
while i<imax
i = i+1;
%Invoke conjugate gradient algorithm
%fnew updated image vector
%pnew updated search direction
%rnew updated residual vector
%Lp orthogonal projection vector
[fnew,pnew,rnew,Lp] = ...
ConjGrad_2d(f,p,pold,r,u_d1,u_d2,lvl,Lpold);
f = fnew;
%Transfer current search direction to previous search direction
%and change 2-D search direction into 1-D vector
pold = p(1,:);
for j = 2:nr
pold = [pold p(j,:)];
end
%Update search direction and change 1-D vector into 2-D matrix
p = pnew(1:nc);
for j = 1:nr-1
p = [p; pnew(1+(j*nc):(j+1)*nr)];
end
r = rnew;
%Transfer current orthogonal projection to previous orthogonal
%projection
Lpold = Lp;
end
%Reshape the reconstructed vector back into an nr-by-nc image
f_image = f(1:nc);
for j = 1:nr-1
f_image = [f_image; f(1+(j*nc):(j+1)*nr)];
end
%Compute SNR and MSE
%var_s = (std2(save_orig))^2;
%var_n = (std2(double(f_image) - save_orig))^2;
%psnr = 10*log10(var_s/var_n);
%Compute PSNR and RMSE
var_n=sqrt(sum(sum((f_image(:)-save_orig(:)).^2)))/256;
psnr=20*log10(255/var_n);
fprintf(1,['\nThe RMSE = ', num2str(var_n,'%.2f')]);
fprintf(1,['\nThe PSNR = ', num2str(psnr,'%.2f') ' [dB]']);
fprintf(1,'\n');