Citation
Implementation of a trous algorithm using lifting for the image processing

Material Information

Title:
Implementation of a trous algorithm using lifting for the image processing
Creator:
Patil, Kunal B
Publication Date:
Language:
English
Physical Description:
vii, 94 leaves : illustrations ; 28 cm

Subjects

Subjects / Keywords:
Image processing -- Digital techniques ( lcsh )
Digital images ( lcsh )
Electronic noise ( lcsh )
Data compression (Computer science) ( lcsh )
Algorithms ( lcsh )
Wavelets (Mathematics) ( lcsh )
Algorithms ( fast )
Data compression (Computer science) ( fast )
Digital images ( fast )
Electronic noise ( fast )
Image processing -- Digital techniques ( fast )
Wavelets (Mathematics) ( fast )
Genre:
bibliography ( marcgt )
theses ( marcgt )
non-fiction ( marcgt )

Notes

Bibliography:
Includes bibliographical references (leaves 92-94).
General Note:
Department of Electrical Engineering
Statement of Responsibility:
by Kunal B. Patil.

Record Information

Source Institution:
University of Colorado Denver
Holding Location:
Auraria Library
Rights Management:
All applicable rights reserved by the source institution and holding location.
Resource Identifier:
656396335 ( OCLC )
ocn656396335
Classification:
LD1193.E54 2010m P37 ( lcc )

Full Text
IMPLEMENTATION OF A TROUS ALGORITHM USING
LIFTING FOR THE IMAGE PROCESSING
by
Kunal B. Patil
A thesis submitted to the
University of Colorado Denver
in partial fulfillment
of the requirements for the degree of
Master of Science
Electrical Engineering
2010


This thesis for the Master of Science
Degree by
Kunal B. Patil
Dr. Hamid Fardi
Date


Patil, Kunal B. (M.S. Electrical Engineering)
Implementation of A Trous Algorithm using Lifting for the Image
Processing
Under the guidance of Dr. Jan Bialasiewicz
Abstract
Wavelet transforms have become useful tools for various image
processing applications. In this study we deal primarily with image
denoising and compression. The lifting wavelet transform is used for
the processing of real images. We have implemented the lifting wavelet
transform along with two algorithms, the a trous algorithm and the
conjugate gradient algorithm. In the a trous algorithm, unlike other
algorithms, the filtered image is not downsampled during decomposition;
instead, the filter is dilated by creating holes (trous is French for
holes). Only horizontal and vertical details, along with the
corresponding low frequency approximations, are generated by this
algorithm. Further, the a trous algorithm is used within an iterative
conjugate gradient technique that uses multiscale edge detection to
reconstruct an image.
Signed


ACKNOWLEDGEMENT
I am heartily thankful to my advisor, Dr. Jan Bialasiewicz, whose
guidance from the initial to the final level enabled me to develop an
understanding of the subject. Above all, and when most needed, he
provided me with unflinching encouragement and support in various ways.
I am thankful to him for introducing me to the world of wavelets and
opening up new opportunities in my professional career.
I would also like to thank my parents. They were always supporting me
and encouraging me with their best wishes.
Lastly, I offer my regards and blessings to all of those who supported
me in any respect during the completion of the project.


TABLE OF CONTENTS
Figures........................................................vii
Chapter
1. Wavelet Transforms............................................1
2. The Discrete Wavelet Transform................................4
3. Condition for Perfect Reconstruction..........................7
4. Lifting......................................................11
5. A trous Algorithm............................................41
6. Image Denoising..............................................50
7. Image Reconstruction using Modulus Maxima....................54
8. Conclusion...................................................61
Appendix
A. Image Analysis using Lifting Wavelet Transform.............62
B. Implementation of A Trous Algorithm using Lifting...........65
C. Image Denoising.............................................70
D. Modulus Maxima Mask Generation using A Trous Algorithm......74
E. Reconstruction of the Image using the Conjugate Gradient Algorithm
80


F. Modulus Maxima............................................82
G. Generation of the Initial Search Direction Used in the Conjugate
Gradient Algorithm...........................................87
H. Conjugate Gradient Program................................89
I. Program For Decomposing the Search Direction Used in the
Conjugate Gradient Algorithm.................................90
References...................................................92


LIST OF FIGURES
Figure 2.1 Filter Bank Implementations of the Wavelet Transform.......11
Figure 2.2. Filter Bank Implementations of the Inverse Wavelet
Transform..................................................12
Figure 3.1 Two channel Analysis and Synthesis..........................14
Figure 4.1. Lifting Steps, Split, Predict (P), and Update (U)..........19
Figure 4.2: DWT in the z-representation................................23
Figure 4.3. Level 1 Approximation and Details..........................39
Figure 4.4. Level 2 Approximation and Details..........................40
Figure 4.5. Level 3 Approximation and Details..........................40
Figure 5.1 .Decomposition Filter Bank for the A trous Algorithm.......43
Figure .5.2. Reconstruction Filter Bank for the A trous Algorithm......44
Figure 5.3. Scaling Function...........................................48
Figure 5.4.Wavelet Function............................................48
Figure 5.5.Three level Image Decomposition: Approximations, Horizontal,
and Vertical Details .......................................49
Figure 6.1 .Scaling and Wavelet functions of the Symlet filters used in the
Denoising operations........................................52
Figure 6.2. Image Denoising: Original Image, Noisy Image, Denoised
Image (From Left to Right)..................................53
Figure 7.1: Three Level Modulus Maxima Matrix of Image.................58
Figure 7.2: Original Image.............................................60
Figure 7.3: Reconstructed Image........................................60


1. Wavelet Transforms
Signal analysis has traditionally been the domain of Fourier-transform-based
techniques. Fourier analysis suffers from a fundamental lack
of time localization, and therefore is not suited to the analysis of
signals containing localized features such as transients and edges.
The Fourier transform's utility lies in its ability to analyze a signal in the
time domain for its frequency content. The transform works by first
translating a function in the time domain into a function in the
frequency domain. The signal can then be analyzed for its frequency
content because the Fourier coefficients of the transformed function
represent the contribution of each sine and cosine function at each
frequency. An inverse Fourier transform converts data from the frequency
domain back into the time domain.
For image analysis, the Fourier transform identifies the spectral
components present in an image but it does not provide information as
to where certain components occur within the image. If we are only
interested in stationary signals, the Fourier transform provides
complete spectral analysis because all spectral components exist at all
times. However, if we are interested in non-stationary signals, i.e. signals
with transient phenomena, the Fourier transform would only give us the
spectral components within an image but not their location [3]. The
wavelet transform solves this problem by using a variable-length
window called a wavelet.
The wavelet transform or wavelet analysis is probably the most recent
solution to overcome the shortcomings of the Fourier transform.
Wavelet analysis can provide both time and frequency localization,
which leads to efficient representations of the signal. Over the last
decade, wavelets have found applications in numerous areas of
mathematics, engineering, computer science, statistics, physics, etc.
[1]. The wavelet transform provides high time resolution and low frequency
resolution for high frequencies (low scales), and low time resolution
and high frequency resolution for low frequencies (high scales). The
wavelet transform itself offers great design flexibility. It uses two
functions, the scaling function and the wavelet function, for the transform. The
wavelet function is equivalent to a high pass filter and produces the
high frequency components (details) of the signal at its output. The
scaling function is equivalent to a low pass filter and passes the low
frequency components (approximations) of the signal. The wavelet
coefficients are retained and represent the details in the signal. The
scaling coefficients are decomposed further using another set of low
pass and high pass filters. Fast implementation of wavelet transforms
using a filter-bank framework enables real-time processing capability.


2. The Discrete Wavelet Transform
The Discrete Wavelet Transform (DWT) provides sufficient information
for both analysis and synthesis of the original signal, with a significant
reduction in the computation time and resources. The DWT, where the
signal is decomposed into so-called approximations and details, can be
realized by a multi-rate filter bank. Approximations represent the low
frequency signal content, and details correspond to the high frequency
components of the processed signal. Each set of components of a
signal can be decomposed further into another level of approximations
and details.
The DWT represents a signal in terms of shifts and dilations of a
low-pass scaling function. It is multiscale, in that it creates a set of
coarse coefficients that represent signal information at the lowest scale,
and sets of detail coefficients with increasingly finer resolution [2].
The discrete wavelet transform is implemented using a low pass
filter H_0(z) and a high pass filter H_1(z). Figures 2.1 and 2.2 show the
filter bank implementation of the wavelet transform and inverse wavelet
transform, respectively. The implementation of the transform as a filter
bank with analysis low pass filter H_0(z) and high pass filter H_1(z) is
shown in Figure 2.1. The inverse transform uses the synthesis low pass
filter G_0(z) and high pass filter G_1(z), as shown in Fig. 2.2. For special
choices of H_0(z), H_1(z), G_0(z) and G_1(z), the underlying wavelets and
scaling functions form a biorthogonal basis and provide perfect
reconstruction. The transform is typically iterated on the output of the
low pass band X_0(z) to create the series of detail coefficients at
different scales. Fig. 2.1 shows the downsampling of the
signal after passing through the analysis low pass filter H_0 and high
pass filter H_1, and Fig. 2.2 shows the upsampling of the signals before
passing through the respective synthesis low pass filter G_0 and high
pass filter G_1.
Figure 2.1 Filter Bank Implementations of the Wavelet Transform


Figure 2.2. Filter Bank Implementations of the Inverse Wavelet Transform
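To make the structure of Figures 2.1 and 2.2 concrete, the following is a minimal one-dimensional sketch using the MATLAB Wavelet Toolbox; the test signal and the 'db2' wavelet are illustrative choices, not values used elsewhere in this thesis.
% Minimal 1-D illustration of the analysis/synthesis filter banks of
% Figures 2.1 and 2.2 ('db2' is an illustrative choice).
x = rand(1,256); % test signal
[H0,H1,G0,G1] = wfilters('db2'); % analysis filters H0,H1 and synthesis filters G0,G1
[X0,X1] = dwt(x,'db2'); % analysis: filtering by H0,H1 followed by downsampling by 2
xr = idwt(X0,X1,'db2'); % synthesis: upsampling by 2 followed by G0,G1 and addition
max(abs(x - xr)) % reconstruction error, of the order of machine precision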


3. Condition for Perfect Reconstruction
In most wavelet transform applications, it is required that the original
signal be synthesized from the wavelet coefficients. To achieve perfect
reconstruction the analysis and synthesis filters have to satisfy certain
conditions. A filter is a linear map that maps a signal with finite
energy into another signal with finite energy [4]. In the time domain
the filtering is represented by convolution with a vector h. In the frequency
domain, we need to assume that the z-transform H(z) of this vector is
bounded on the unit circle. Convolution of the original
signal by the vector h is called filtering; in the time domain it is given by h*x,
and in the frequency domain by H(z)X(z). The vector h is called the
impulse response (IR) of the filter, and H(e^jw) the transfer function
(frequency response). If the vector h is a finite sequence, then h is a finite
impulse response (FIR) filter; if it is an infinite sequence, then h is an
infinite impulse response (IIR) filter.
Now we will consider the four filters, two analysis filters and two synthesis
filters, and their condition for perfect reconstruction. Fig. 3.1
shows the analysis and synthesis parts of the filter bank. In this figure,
the input X(z) applied to the analysis part produces the output pair
Y_0(z) and Y_1(z). The synthesis part then transforms this pair to the
output X̂(z). The filtering scheme has the perfect reconstruction
property if X̂(z) = X(z) for all possible inputs X(z).
Figure 3.1 Two channel Analysis and Synthesis
We first analyze the conditions needed on the filters in order to obtain
the perfect reconstruction property. We perform this analysis
in the z-transform representation. As shown in Fig. 3.1, the
input signal X(z) is down sampled by two after passing through the
filter H_0, which produces the output Y_0(z).
To explain the down sampling operation, let us consider the
sequence X in the time domain; this sequence down sampled by two,
denoted X_2↓, is defined by:

X_2↓[n] = X[2n],  n ∈ Z

In the z-transform representation, we find the following result:

X_2↓(z) = Σ_n X[2n] z^-n = (1/2)(X(z^1/2) + X(-z^1/2))

Now we use this down sampling result to obtain the outputs Y_0(z) and Y_1(z), i.e.,

Y_0(z) = (1/2)[H_0(z^1/2) X(z^1/2) + H_0(-z^1/2) X(-z^1/2)]   (3.1)

Y_1(z) = (1/2)[H_1(z^1/2) X(z^1/2) + H_1(-z^1/2) X(-z^1/2)]   (3.2)

In the synthesis part, the results Y_0(z) and Y_1(z) are
upsampled by two before being applied to the synthesis filters G_0 and G_1.
Given a sequence Y, the up sampling operation yields the sequence
Y_2↑, obtained in the time domain by:

Y_2↑[n] = 0 if n is odd,
Y_2↑[n] = Y[n/2] if n is even.

In the z-transform representation, we find, after a change of summation
variable from n to k = n/2,

Y_2↑(z) = Σ_n Y_2↑[n] z^-n = Σ_k Y[k] z^-2k = Y(z^2)


We now use the upsampling result in the synthesis part of Fig. 3.1:
upsampling by two, followed by the G-filters and addition of the results,
leads to the reconstructed signal

X̂(z) = G_0(z) Y_0(z^2) + G_1(z) Y_1(z^2)   (3.3)

Combining the above expressions and regrouping terms gives

X̂(z) = (1/2)[G_0(z)H_0(z) + G_1(z)H_1(z)] X(z) + (1/2)[G_0(z)H_0(-z) + G_1(z)H_1(-z)] X(-z)   (3.4)

As mentioned above, the condition for perfect reconstruction, X̂(z) = X(z), then
follows from the conditions:

G_0(z)H_0(z) + G_1(z)H_1(z) = 2   (3.5)

G_0(z)H_0(-z) + G_1(z)H_1(-z) = 0   (3.6)

It can be observed that if we switch the analysis and synthesis filters,
the perfect reconstruction conditions do not change. If we want
perfect reconstruction, these conditions mean that the four filters
cannot be chosen independently.
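As a quick numerical illustration of conditions (3.5) and (3.6), the following sketch checks them for the Haar ('db1') filters from the MATLAB Wavelet Toolbox, an illustrative choice; for longer filters the distortion term appears as 2 at a delayed position, i.e. 2z^-l.
% Numerical check of the perfect reconstruction conditions (3.5)-(3.6)
% for the Haar ('db1') filter quadruplet.
[H0,H1,G0,G1] = wfilters('db1'); % analysis and synthesis filters
s = (-1).^(0:length(H0)-1); % sign alternation realizes H(-z)
dist = conv(G0,H0) + conv(G1,H1); % G0(z)H0(z)+G1(z)H1(z) -> [0 2 0]
alias = conv(G0,H0.*s) + conv(G1,H1.*s); % G0(z)H0(-z)+G1(z)H1(-z) -> [0 0 0]
disp(dist); disp(alias);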


4. Lifting
The lifting scheme, introduced by W. Sweldens in 1996, is a new way
of constructing wavelet transforms. The lifting scheme is a technique for
both designing wavelets and performing the discrete wavelet
transform. Lifting allows the construction of wavelets entirely in the
spatial domain. The wavelets generated by applying lifting steps are
known as second generation wavelets. Second generation
wavelets are more general in the sense that all the classical wavelets
can be generated by the lifting scheme.
The inverse transform is trivial to find and always provides perfect
reconstruction of the original signal. Any traditional wavelet transform
can be factored into lifting steps. The lifting implementation of the wavelet
transform has many advantages, making it well suited to applications such
as image denoising and image compression. In comparison with
conventional wavelet transforms, the lifting scheme is faster and always
has the perfect reconstruction property, which enables adaptation of the
transform to the input signal by means of changing the filter coefficients.
As the lifting wavelet transform uses in-place computation, the memory
required for its implementation is reduced: the lifting coefficients replace
the image samples present in the respective memory locations. In
other words, no auxiliary memory is needed and the signal can be
replaced with its wavelet transform. Lifting also makes it possible to find the
inverse transform immediately, to easily manage boundary extension, and
to define a wavelet-like transform that maps integers to integers.


4.1. Lifting Steps

Figure 4.1: Lifting Steps, Split, Predict (P), and Update (U)
The lifting concept consists of three main stages, i.e. Split, Predict, and Update.
4.1.1. Split
We split the original data set X_j(n) into two sets, the even polyphase
components X_e(n) and the odd polyphase components X_o(n), i.e.,

X_e(n) = X_j(2n)  and  X_o(n) = X_j(2n+1).

If X_j(n) corresponds to the samples of an underlying smooth,
slowly varying function, the even and odd polyphase components are
highly correlated. This correlation structure is typically local, and thus
we should be able to accurately predict each odd element from the even
elements.


4.1.2. Predict
If the signal data contains some structure, then we can expect
correlation between a sample and its nearest neighbors. In this stage
we predict the odd polyphase coefficients X_o(n) from the even
polyphase components X_e(n). The predictor for each X_o(n) is a linear
combination of neighboring even coefficients.
Using X_j(2n) as the prediction of X_j(2n+1), we have the difference:

d_{j-1}(n) = X_j(2n+1) - X_j(2n)

In general, using a prediction procedure P, the predict step is

d_{j-1} = odd_{j-1} - P(even_{j-1})

Thus, in the d signal each entry is one odd sample minus some
prediction based on a number of even samples. This prediction
procedure is equivalent to applying a highpass filter to the original
signal X_j(n).
4.1.3. Update
The third lifting step transforms the even entries X_e(n) into a lowpass
filtered and subsampled version of X_j(n). We have predicted the odd
entries from the even entries and stored the differences. Then we
update the even entries to reflect this knowledge of the signal. We obtain the
coarse approximation by updating X_e(n) with a linear combination of the
prediction residuals, as represented by the following equation:

X_{j-1}(n) = X_j(2n) + d_{j-1}(n)/2

In general, using an update procedure U, the update step is:

X_{j-1} = even_{j-1} + U(d_{j-1})

The algorithm described is called one-step lifting. It requires the choice
of a prediction procedure P and an update procedure U.
In the lifting process, the predict step is a high pass filter, which computes
the wavelet function of the wavelet transform, while the update step
computes the scaling function. Each lifting step is always invertible: no
information is lost. Assuming that the same P and U are chosen for the
analysis and synthesis stages, the lifting construction produces perfect
reconstruction for any P and U. One can show that the update
function determines the properties of the dual wavelet and the primal scaling
function. The lifting scheme is a method for constructing wavelets entirely
by a spatial approach [6].
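The following is a minimal sketch of one lifting step (split, predict, update) as described above, using the simple choices P(even) = even and U(d) = d/2 from the example equations; the signal values are illustrative.
x = [4 6 10 12 14 14 16 18]; % example signal of even length
xe = x(1:2:end); % split: even polyphase samples X_j(2n)
xo = x(2:2:end); % odd polyphase samples X_j(2n+1)
d = xo - xe; % predict: detail coefficients d_{j-1}(n)
s = xe + d/2; % update: coarse approximation X_{j-1}(n)
% Each step is trivially inverted by undoing the operations in reverse order:
xe_rec = s - d/2; % recover the even samples
xo_rec = d + xe_rec; % recover the odd samples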
According to I. Daubechies and W. Sweldens, any DWT with finite filters
can be decomposed into a finite sequence of simple filtering steps,
called lifting steps. An FIR wavelet or filter bank can be decomposed
into lifting steps by writing the transform in the polyphase form.
Statements concerning lifting can then be made using matrices with
polynomial or Laurent polynomial entries.
The decomposition of arbitrary wavelet transforms into lifting steps
implies that we can gain, for all wavelet transforms, the traditional
advantages of lifting implementations, i.e.
1. Lifting leads to a speed-up when compared to the standard
implementation.
2. Lifting allows for an in-place implementation of the fast wavelet
transform. This means the wavelet transform can be calculated without
allocating auxiliary memory.
3. All operations within one lifting step can be done in parallel, while the
only sequential part is the order of the lifting operations.
4. Using lifting it is particularly easy to build non-linear wavelet
transforms. A typical example is wavelet transforms that map integers
to integers.


4.2. From Filters to Lifting Steps
There are a number of forms for representing the building block of a
DWT. The transform can be given as a set of equations, or in the
frequency domain as a factored matrix of Laurent polynomials. There
are three basic forms of representation of a wavelet transform: matrix,
equation and filter. These three forms can be converted into each
other. The equation form is needed for the implementation of the lifting
steps. If we want to implement existing filters as lifting steps, we need to
convert the filter form into the equation form. For the conversion of filters to
lifting steps, we need to factorize the polyphase matrix. For factoring the
polyphase matrix, we use the algorithm derived by
I. Daubechies and W. Sweldens [7].
A finite energy signal X = {x[n]}, n ∈ Z, in l^2(Z) has the associated
Fourier series

X(e^jw) = Σ_n x[n] e^-jnw   (4.2.1)

The z-transform of the sequence x is defined as

X(z) = Σ_n x[n] z^-n   (4.2.2)


Properties of the z-transform:
Shifting a given signal one time unit left or right can also be
implemented in the z-transform representation. Suppose X = {x[n]} is
a given signal.
(i) Let X_left = {x[n+1]} denote the signal shifted one time unit to the left.
From the definition of the z-transform we have

X_left(z) = Σ_n x[n+1] z^-n = Σ_n x[n+1] z^-(n+1)+1

X_left(z) = z X(z)   (4.2.3)

(ii) Similarly, let X_right = {x[n-1]} denote the signal shifted one time unit
to the right. Then

X_right(z) = z^-1 X(z)   (4.2.4)
(iii) Let us consider a sequence X in the time domain, and this
sequence down sampled by two, denoted X_2↓, defined by:

X_2↓[n] = X[2n],  n ∈ Z

In the z-transform representation, we find the following result:

X_2↓(z) = Σ_n X[2n] z^-n = (1/2)(X(z^1/2) + X(-z^1/2))   (4.2.5)

(iv) Given a sequence Y, the up sampling operation yields the
sequence Y_2↑, obtained in the time domain by:

Y_2↑[n] = 0 if n is odd,
Y_2↑[n] = Y[n/2] if n is even.

In the z-transform representation, we find, after a change of summation
variable from n to k = n/2,

Y_2↑(z) = Σ_n Y_2↑[n] z^-n = Σ_k Y[k] z^-2k = Y(z^2)   (4.2.6)
Now we will see how to implement the DWT, defined via the lifting
technique, in the z-transform representation. The first step is to split a
given signal into its even and odd components. In the z-transform
representation the splitting is obtained by

X(z) = X_0(z^2) + z^-1 X_1(z^2)   (4.2.7)

where

X_0(z) = Σ_n x[2n] z^-n,   (4.2.8)

X_1(z) = Σ_n x[2n+1] z^-n.   (4.2.9)

Using the formulas (4.2.3) and (4.2.5), we thus have

X_0(z) = (1/2)(X(z^1/2) + X(-z^1/2)),   (4.2.10)

X_1(z) = (z^1/2/2)(X(z^1/2) - X(-z^1/2)).   (4.2.11)
Now let us see the implementation of the prediction step in the
z-transform representation. The prediction technique is to form a linear
combination of the even entries and then subtract the result from the
odd entry.
The prediction step can be implemented as X_1(z) - T(z)X_0(z),
where T(z) = (1/2)(1 + z). The product T(z)X_0(z) corresponds to the convolution
t*x_0 in the time domain, which is exactly a linear combination of the
even entries with weights t[n].

T(z)X_0(z) = (1/2)(1 + z) Σ_n x[2n] z^-n
           = (1/2) Σ_n x[2n] z^-n + (1/2) Σ_n x[2n] z^-n+1
           = Σ_n (1/2)(x[2n] + x[2n+2]) z^-n

Thus, we have

X_1(z) - T(z)X_0(z) = Σ_n ( x[2n+1] - (1/2)(x[2n] + x[2n+2]) ) z^-n   (4.2.12)
The transition from (X_0(z), X_1(z)) to (X_0(z), X_1(z) - T(z)X_0(z)) can be
described by matrix multiplication.
We have

[ X_0(z)              ]   [  1     0 ] [ X_0(z) ]
[ X_1(z) - T(z)X_0(z) ] = [ -T(z)  1 ] [ X_1(z) ]

Similarly, if we define S(z) = (1/4)(1 + z^-1), then the update step can be
implemented in the z-transform representation as multiplication by the
following matrix:

[ 1  S(z) ]
[ 0   1   ]

The final normalization step can be implemented by multiplication by a
matrix of the form

[ K   0   ]
[ 0  K^-1 ]   with K > 0


In the general case, a prediction step is always given by multiplication by a
matrix of the form

P(z) = [  1     0 ]
       [ -T(z)  1 ]

and an update step by multiplication by a matrix of the form

U(z) = [ 1  S(z) ]
       [ 0   1   ]

Here T(z) and S(z) are both polynomials in z and z^-1. Such
polynomials are called Laurent polynomials.
The general DWT, with the normalization step included, is then given in the
z-transform representation as the matrix product

H(z) = [ K   0   ] [ 1  S_N(z) ] [  1       0 ] ... [ 1  S_1(z) ] [  1       0 ]
       [ 0  K^-1 ] [ 0    1    ] [ -T_N(z)  1 ]     [ 0    1    ] [ -T_1(z)  1 ]   (4.2.13)

The order of the factors is determined by the order in which we apply
the various steps: first a prediction step, then an update step, perhaps
repeated N times, and finally the normalization step.
Multiplying all the matrices in the product defining H(z) in equation
(4.2.13), we get a matrix whose entries are Laurent polynomials.
We use the notation

H(z) = [ H_11(z)  H_12(z) ]
       [ H_21(z)  H_22(z) ]   (4.2.14)
Figure 4.2 represents the implementation of the DWT in the
z-transform representation, based on the above equation (4.2.13).
The DWT in matrix form is given as

[ Y_0(z) ]   [ H_11(z)  H_12(z) ] [ X_0(z) ]
[ Y_1(z) ] = [ H_21(z)  H_22(z) ] [ X_1(z) ]   (4.2.15)

This representation of a two-channel filter bank, without any reference
to lifting, is called the polyphase representation in the signal analysis
literature [4].
It is an important result that the filtering approach and the one based on
lifting are actually identical. This means that they are just two different
ways of describing the same transformation.


The first step is to show that the analysis step in Figure 3.1 is
equivalent to the analysis step summarized in Figure 4.2 and
equation (4.2.15). Thus, we want to find the equations relating the
coefficients in the matrix (4.2.14) to the filters. The analysis step
performed by both methods should yield the same result. Let us start with the
equality

[ Y_0(z^2) ]            [ (1/2)(X(z) + X(-z)) ]
[ Y_1(z^2) ] = H(z^2) · [ (z/2)(X(z) - X(-z)) ]   (4.2.16)

where Y_0 and Y_1 are obtained from the filter bank approach, equations
(3.1) and (3.2), and the right hand side from the lifting approach (in the
polyphase form), with H from equation (4.2.14). After inserting the
equations (4.2.10) and (4.2.11) into the right hand side of (4.2.16), the
first equation can be written as

(1/2)(H_0(z)X(z) + H_0(-z)X(-z)) = H_11(z^2)·(1/2)(X(z) + X(-z)) + H_12(z^2)·(z/2)(X(z) - X(-z))

This leads to the relation

H_0(z) = H_11(z^2) + z H_12(z^2)   (4.2.17)

Similarly, we can write the following equations:


H_1(z) = H_21(z^2) + z H_22(z^2)   (4.2.18)

G_0(z) = G_11(z^2) + z G_12(z^2)   (4.2.19)

G_1(z) = G_21(z^2) + z G_22(z^2)   (4.2.20)

Thus in the polyphase representation we use

H(z) = [ H_11(z)  H_12(z) ]   (4.2.21)        G(z) = [ G_11(z)  G_21(z) ]   (4.2.22)
       [ H_21(z)  H_22(z) ]                          [ G_12(z)  G_22(z) ]

The requirement of perfect reconstruction in the polyphase formulation
is the requirement that G(z) be the inverse of H(z). If we start
with a filter bank, then we easily define H(z) using equations (4.2.17)
and (4.2.18). But to take H(z) to the lifting implementation, we need to
factor H(z) into lifting steps. As mentioned earlier, this factorization is obtained by I.
Daubechies and W. Sweldens in the paper [7]. The following section
shows the algorithm for converting filters into lifting steps by factoring the
polyphase matrix.


4.2.1. Theorem
Consider the following 2x2 matrix:

H(z) = [ H_11(z)  H_12(z) ]
       [ H_21(z)  H_22(z) ]

where the H_nk(z) are Laurent polynomials defining the filter
coefficients, and where

det H(z) = H_11(z)H_22(z) - H_12(z)H_21(z) = 1   (4.2.22)

Then there exist a constant K ≠ 0 and Laurent polynomials
S_1(z), ..., S_M(z) and T_1(z), ..., T_M(z)
such that

H(z) = [ K   0   ]   M   [ 1  S_j(z) ] [  1       0 ]
       [ 0  K^-1 ]   Π   [ 0    1    ] [ -T_j(z)  1 ]   (4.2.23)
                    j=1

This theorem requires the matrix to have determinant one, so in order
to apply it to an analysis wavelet filter pair, we need the H_nk of the filter to
fulfill equation (4.2.22).


4.2.2. The Euclidean Algorithm
The Euclidean algorithm is an algorithm for finding the greatest common
divisor of two Laurent polynomials. The z-transform of an FIR filter h is
a Laurent polynomial of the following form:

h(z) = Σ_{k=k_b}^{k_e} h[k] z^-k

where k_b and k_e are integers satisfying k_b ≤ k_e; the bounds can be
defined by assuming h[k_b] ≠ 0 and h[k_e] ≠ 0. Then
the degree of h(z) is defined as

|h| = k_e - k_b

The zero Laurent polynomial is assigned the degree -∞. A Laurent polynomial of
degree zero is also referred to as a monomial.
4.2.3. Euclidean algorithm for Laurent polynomial
Consider two Laurent polynomials a(z) and b(z) ≠ 0, such that
|a(z)| ≥ |b(z)|.
Let a_0(z) = a(z) and b_0(z) = b(z), and iterate the following steps starting from
j = 0:

a_{j+1}(z) = b_j(z),   b_{j+1}(z) = a_j(z) % b_j(z)

where % denotes the remainder after division by b_j(z). Let N denote the
smallest integer, with N ≤ |b(z)| + 1, for which b_N(z) = 0.
Then a_N(z) is a greatest common divisor of a(z) and b(z).
If we let q_{j+1}(z) = a_j(z) / b_j(z) denote the quotient, then the first step in the
algorithm is given by:

a_1(z) = b_0(z),
b_1(z) = a_0(z) - b_0(z) q_1(z),

and the next step can be written as:

a_2(z) = b_1(z),
b_2(z) = a_1(z) - b_1(z) q_2(z),

and after N steps,

a_N(z) = b_{N-1}(z),
0 = b_N(z) = a_{N-1}(z) - b_{N-1}(z) q_N(z).

The first step written in matrix form is:

[ a_1(z) ]   [ 0      1    ] [ a(z) ]
[ b_1(z) ] = [ 1  -q_1(z)  ] [ b(z) ]

So finally, we get the following equation for the N steps:

[ a_N(z) ]     1    [ 0      1    ] [ a(z) ]
[   0    ]  =  Π    [ 1  -q_j(z)  ] [ b(z) ]
              j=N
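The iteration can be sketched in MATLAB for ordinary polynomials (coefficient vectors, highest power first) using deconv for the division with remainder; the polynomials chosen here are illustrative, and the non-unique division of Laurent polynomials needed in the general factorization is not handled.
a = [1 0 -1]; % a(z) = z^2 - 1 (illustrative choice)
b = [1 -1]; % b(z) = z - 1
while any(abs(b) > 1e-12)
[q, r] = deconv(a, b); % a = q*b + r
idx = find(abs(r) > 1e-12, 1); % strip leading zeros of the remainder
if isempty(idx), r = 0; else r = r(idx:end); end
a = b; b = r; % a_{j+1} = b_j, b_{j+1} = a_j % b_j
end
disp(a) % greatest common divisor up to scaling: [1 -1], i.e. z - 1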


After these calculations, we can apply this to the low pass filter H_0(z).
The polyphase components of H_0(z) are H_11(z) and H_12(z), and if we
let a(z) = H_11(z) and b(z) = H_12(z), we get

[ H_11(z) ]     N   [ q_j(z)  1 ] [ K z^c ]
[ H_12(z) ]  =  Π   [   1     0 ] [   0   ]   (4.2.24)
               j=1

We know that

H_11(z)H_22(z) - H_12(z)H_21(z) = 1   (4.2.25)

Consider p(z), the greatest common divisor of H_11(z) and H_12(z). This
means that the left hand side of equation (4.2.25) can be divided by p(z). Since
the only Laurent polynomial that divides 1 is a monomial, we have p(z) = K z^c.
If we further multiply both sides of equation (4.2.24) by z^-c, we get

[ z^-c H_11(z) ]     N   [ q_j(z)  1 ] [ K ]
[ z^-c H_12(z) ]  =  Π   [   1     0 ] [ 0 ]
                    j=1

Multiplying H_11(z) by z^-c only shifts the indices in the z-transform,
and hence does not change the fact that it contains the even coefficients of
the low pass impulse response.


That means that, by choosing the right z-transform of the low-pass
impulse response, it is always possible to end up with a_N(z) = K.
Now let us assume that a_N(z) is a constant. Observe that

[ q(z)  1 ]   [ 1  q(z) ] [ 0  1 ]        [ q(z)  1 ]   [ 0  1 ] [  1     0 ]
[  1    0 ] = [ 0   1   ] [ 1  0 ]  and   [  1    0 ] = [ 1  0 ] [ q(z)   1 ]

Using the first identity for odd j and the second one for even j, the flip
matrices cancel in pairs and we get

[ H_11(z) ]    N/2  [ 1  q_{2j-1}(z) ] [    1        0 ] [ K ]
[ H_12(z) ] =   Π   [ 0       1      ] [ q_{2j}(z)   1 ] [ 0 ]
               j=1

If N is odd, we take q_{N+1}(z) = 0. If we replace the vector [K, 0]^T with the matrix

[ K   0   ]
[ 0  K^-1 ]

we get the following equation:

[ H_11(z)  H_21(z) ]    N/2  [ 1  q_{2j-1}(z) ] [    1        0 ] [ K   0   ]
[ H_12(z)  H_22(z) ] =   Π   [ 0       1      ] [ q_{2j}(z)   1 ] [ 0  K^-1 ]
                        j=1

After transposing the above equation we get

[ H_11(z)  H_12(z) ]   [ K   0   ]    1    [ 1  q_{2j}(z) ] [     1          0 ]
[ H_21(z)  H_22(z) ] = [ 0  K^-1 ]    Π    [ 0      1     ] [ q_{2j-1}(z)    1 ]   (4.2.26)
                                    j=N/2
Now, if the analysis filter pair (H_0(z), H_1(z)) has a polyphase
representation with determinant one, then another analysis filter pair
(H_0(z), H_1^new(z)), related by

H_1^new(z) = H_1(z) + H_0(z) t(z^2),

where t(z) is a Laurent polynomial, has the polyphase matrix

H^new(z) = [ 1     0 ] [ H_11(z)  H_12(z) ]
           [ t(z)  1 ] [ H_21(z)  H_22(z) ]

which further yields

det H^new(z) = det H(z) = 1   (4.2.27)

Therefore, applying these results to equation (4.2.26), we can find the high pass
filter H_1(z) in the following way. We seek the Laurent polynomial t(z) such that

[ H_11(z)  H_21(z) ] [ -t(z) ]   [ H'_21(z) ]
[ H_12(z)  H_22(z) ] [   1   ] = [ H'_22(z) ]

where H'_21(z) and H'_22(z) are the polyphase components of the desired high
pass filter. Multiplying both sides by the inverse of the 2x2 matrix gives

[ -t(z) ]   [  H_22(z)  -H_21(z) ] [ H'_21(z) ]
[   1   ] = [ -H_12(z)   H_11(z) ] [ H'_22(z) ]

which gives

t(z) = H_21(z) H'_22(z) - H_22(z) H'_21(z)   (4.2.28)
Multiplying equation (4.2.26) with

[  1     0 ]
[ -t(z)  1 ]

finally gives

[ H_11(z)  H_12(z) ]   [ K   0   ] [     1        0 ]    1    [ 1  q_{2j}(z) ] [     1          0 ]
[ H_21(z)  H_22(z) ] = [ 0  K^-1 ] [ -K^2 t(z)    1 ]    Π    [ 0      1     ] [ q_{2j-1}(z)    1 ]
                                                       j=N/2

where H_21(z) and H_22(z) are now the polyphase components of the desired
high pass filter. By appropriate reindexing of the q polynomials, we can identify
the S(z) and T(z) polynomials of the theorem. This lifting theorem is very useful
for the conversion of existing filters into lifting steps.


4.2.4. Example
In this section, we will factorize the Daubechies 4 filter into lifting steps. As
mentioned in the previous section, there are a number of ways of
representing the building block of a Discrete Wavelet Transform
(DWT), i.e. matrix form, equation form, and filter form. The transform
can be represented by a pair of low pass and high pass filters, which
have the perfect reconstruction property. It can be represented as
lifting steps, which are either written in the time domain as a set
of equations or in the frequency domain as a factored matrix of Laurent
polynomials. We will take Daubechies 4 as an example for the
explanation and conversion of filters to lifting steps. The following are the
different forms of the Daubechies 4 filters [4].
Matrix Form:

H(z) = [ (√3-1)/√2      0      ] [ 1  z ] [           1                 0 ] [ 1  √3 ]
       [ 0          (√3+1)/√2  ] [ 0  1 ] [ -√3/4 - ((√3-2)/4) z        1 ] [ 0   1 ]

Filter Form:


The Daubechies 4 filter taps are given by:

h_0 = [ (1+√3)/(4√2)   (3+√3)/(4√2)   (3-√3)/(4√2)   (1-√3)/(4√2) ]

The even and odd polyphase components are given by the following equations:

H_11(z) = a_0(z) = (3+√3)/(4√2) + ((1-√3)/(4√2)) z

H_12(z) = b_0(z) = (1+√3)/(4√2) + ((3-√3)/(4√2)) z

Our next step is to find q_1(z) from the leftmost terms of a_0(z) and b_0(z),
which gives

q_1(z) = ((3+√3)/(4√2)) / ((1+√3)/(4√2)) = (3+√3)/(1+√3) = √3

The remainder is given by

r_1(z) = a_0(z) - b_0(z) q_1(z)

which yields

r_1(z) = ((1-√3)/√2) z

This is the first iteration. In the next iteration q_2(z) will be of the form
c z^-1 + d, with

a_1(z) = b_0(z) = (1+√3)/(4√2) + ((3-√3)/(4√2)) z,

b_1(z) = r_1(z) = ((1-√3)/√2) z.

The constant term d in q_2(z) is given by

d = ((3-√3)/(4√2)) / ((1-√3)/√2) = (3-√3)/(4(1-√3)) = -√3/4

Since |b_1(z)| = 0, we have r_2(z) = 0, so q_2(z) must be such that
a_1(z) = b_1(z) q_2(z). This gives

(1+√3)/(4√2) + ((3-√3)/(4√2)) z = ((1-√3)/√2) z (c z^-1 + d)

so

c = ((1+√3)/(4√2)) / ((1-√3)/√2) = (1+√3)/(4(1-√3)) = -(2+√3)/4

Therefore

q_2(z) = -((2+√3)/4) z^-1 - √3/4,   r_2(z) = 0,   a_2(z) = b_1(z) = ((1-√3)/√2) z.
Now we will use equation (4.2.26) to find H_21 and H_22, in this case with the
monomial ((1-√3)/√2) z as the multiplier:

[ H_11(z)  H_12(z) ]   [ ((1-√3)/√2) z        0            ] [ 1  q_2(z) ] [  1   0 ]
[ H_21(z)  H_22(z) ] = [ 0             ((1+√3)/√2) z^-1    ] [ 0     1   ] [ √3   1 ]

Multiplying this out reproduces H_11(z) = (3+√3)/(4√2) + ((1-√3)/(4√2)) z and
H_12(z) = (1+√3)/(4√2) + ((3-√3)/(4√2)) z, and yields candidate polyphase
components H_21(z) and H_22(z) of the high pass filter.
We also need to find the desired H'_21(z) and H'_22(z). We know that

g_0[k] = h_0[-k] and g_1[k] = h_1[-k] for all k ∈ Z   (4.2.29)

and

G_0(z) = c z^(2k+1) H_1(-z) and G_1(z) = -c z^(2k+1) H_0(-z)   (4.2.30)

where G_1(z), G_0(z), H_1(z), H_0(z) are the high pass and low pass filters, k
is some integer, and c is some non-zero constant.
By combining the above two equations (4.2.29) and (4.2.30),

H_1(z) = -c z^(2k+1) H_0(-z^-1)   (4.2.31)

In this formulation we get

H_11(z) = h[1] + h[3]z and H_12(z) = h[0] + h[2]z

Now we use the z-transform representation of lifting. In the
z-transform representation the splitting of a given signal into its even and
odd components is given by

X(z) = X_0(z^2) + z^-1 X_1(z^2)   (4.2.32)

where

X_0(z) = Σ_n x[2n] z^-n,   X_1(z) = Σ_n x[2n+1] z^-n.

By using equation (4.2.32), we can find

H_0(z) = H_11(z^2) + z^-1 H_12(z^2) = h[0]z^-1 + h[1] + h[2]z + h[3]z^2

If we put k = 0 and c = 1 in equation (4.2.31),

H_1(z) = -h[0] + h[1]z^-1 - h[2]z^-2 + h[3]z^-3

and thus

H'_21(z) = -h[0] - h[2]z, and H'_22(z) = h[1] - h[3]z^-1   (4.2.33)


Further, we put H_21(z), H_22(z) together with H'_21(z), H'_22(z) into
equation (4.2.28):

t(z) = H_21(z) H'_22(z) - H_22(z) H'_21(z) = (2 + √3) z^-1

As we used ((1-√3)/√2) z as the multiplier, the entire H(z) can now be
reconstructed as

H(z) = [ z  0 ] [   1     0 ] [ 1   (2+√3)/4 + (√3/4) z ] [  1   0 ]
       [ 0  1 ] [ -z^-1   1 ] [ 0            1           ] [ √3   1 ]

Although the consequence is that the right hand side no longer equals the
left hand side, it is still a valid factorization into lifting steps. It just
results in a different z-transform of the even and odd low and high
pass analysis filters. From the above theorem and calculations, we
conclude that existing filters can be transformed into lifting steps.
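A quick numerical cross-check of such a factorization can be made with the same Wavelet Toolbox functions used in Appendix A; note that MATLAB's name for the 4-tap Daubechies filter of this example is 'db2'. This is only an inspection aid, not the derivation above.
lsdb2 = liftwave('db2'); % lifting scheme of the 4-tap Daubechies wavelet
displs(lsdb2); % display the predict/update Laurent polynomials and the normalization K
[LoD,HiD,LoR,HiR] = ls2filt(lsdb2); % convert back to the equivalent filter quadruplet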
Appendix (A) shows the lifting wavelet transform applied to a real image,
which produced the following results. After calculating the elapsed time
and the reconstruction error, we found an elapsed time of 13.153416
seconds and a reconstruction error of 1.7764e-015.
We observe that the time taken by lifting for image analysis is very
short; the lifting scheme is fast and always has the perfect reconstruction
property. We have decomposed the image up to three levels; the
approximations and details at each level follow.
Figure 4.3: Level 1 Approximation and Details


Figure 4.4: Level 2 Approximation and Details


5. A trous Algorithm
The a trous algorithm was put forward by M. Shen in 1992. The a trous
algorithm can be obtained by inserting appropriate zeros into the
filter coefficients and computing circular convolutions. The a trous
algorithm extends to two dimensions, its transform can
be realized through filtering, and it does not require downsampling or
interpolation in the calculation. The detail characteristics can be
obtained more easily and the storage requirement is reduced. It also has the
advantages of shorter calculation time and easy programming [8].
The a trous algorithm is an implementation of the wavelet transform for
non-orthogonal wavelets; it was developed for music synthesis and, more
generally, for signal processing applications [12]. The a trous algorithm
is another implementation of the discrete wavelet transform that
uses a filter bank structure; it has its particular
mathematical properties and leads to different image decompositions.
The a trous transform is a non-orthogonal, shift-invariant, dyadic, symmetric,
undecimated, redundant DWT algorithm [9]. The a trous algorithm
decomposes the image or signal into an approximation signal and detail
signals at each scale; a detail signal is called a wavelet plane and has
the same dimensions as the original image [10].


During decomposition, at every scale the filters are dilated by putting
2^L - 1 zeros between each of the coefficients of the analysis filters
used in the a trous algorithm. The filters of the a trous algorithm are
decimated at each level of reconstruction, with 2^L - 1 zeros
between each of the coefficients of the synthesis filters. Only horizontal
and vertical details, d_k^h and d_k^v respectively, as well as an
approximation a_k of the image at each level are created. In the
implementation of the a trous algorithm, separable circular convolutions
are used instead of separable straight convolutions to minimize border
problems. Figures 5.1 and 5.2 show the block diagrams of the
a trous algorithm decomposition and reconstruction, respectively.


Figure 5.1: Decomposition Filter Bank for the A Trous Algorithm
The biorthogonal filters used for the a trous algorithm are called spline
filters. These filters are not quadrature mirror filters. As mentioned
earlier, all filtering is done with circular convolutions to prevent the
discontinuity induced near image boundaries after each level of
decomposition. The filters are dilated by inserting zeros between each
coefficient; this creates holes. There is no downsampling, only
dilation of the filters.
In this process of decomposition of the image, approximations are
generated by first shifting the columns of a_k to the left 2^L - 1 times,
where L is the level of decomposition. Thereafter the shifted columns are
filtered by circular convolution using the low pass filter LoD. In the case of
the rows, the rows are shifted to the left 2^L - 1 times and filtered with the low
pass filter LoD. The result is the approximation at level L. Horizontal details are
generated by first filtering the columns of a_k using the high pass
filter HiD and then shifting the columns. Vertical details are calculated
by first filtering the rows of a_k using the high pass filter HiD and then
shifting the rows, 2^L times.
Figure 5.2: Reconstruction Filter Bank for the A Trous Algorithm
The process of reconstruction of the image follows the decomposition
procedure in reverse order, except that the shifting is altered. After
each level of reconstruction, the filters are decimated, leaving
2^L - 1 zeros between each coefficient. Note that L decreases at each
level of reconstruction. The synthesis filters are LoR and HiR. The
resultant images are added together and multiplied by 1/4.
For the one-dimensional case, the multiplication factor is 1/2. Therefore,
we have taken 1/4 for the multiplication in the two-dimensional case.
Instead of using the constant 1/4, perhaps a variable that changes
depending on scale would give better results [13].
As the a trous algorithm is implemented with a high pass filter and a
low pass filter, we generate these filters using lifting for the
implementation of the a trous algorithm. Therefore, for this
implementation of the a trous algorithm using lifting for image processing
we use biorthogonal filters, which have been shown to be optimum
filters for image processing.
The insertion of zeros into the filters during the decomposition
and reconstruction of the image or signal creates a series of holes in
the filters; this is why it is called the a trous algorithm [11].
The following equations give the convolution formulas that
are cascaded to calculate the discrete wavelet transform and its
inverse:

a_{k+1}[n] = (a_k * h_k h_k)[n]

d^h_{k+1}[n] = (a_k * g_k δ)[n]

d^v_{k+1}[n] = (a_k * δ g_k)[n]

Inverse discrete wavelet transform:

a_k[n] = (1/4)( (a_{k+1} * h̃_k h̃_k)[n] + (d^h_{k+1} * g̃_k δ)[n] + (d^v_{k+1} * δ g̃_k)[n] )

In the above equations, h, g, h̃ and g̃ are the analysis and synthesis filters
[13]; a pair such as (h_k h_k) denotes separable filtering along the two image
dimensions, and δ is the Dirac impulse (no filtering in that direction). The above
equations are used for the implementation of the a trous algorithm with the
analysis and synthesis filters shown above in Figures 5.1 and 5.2.
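The following one-dimensional sketch illustrates these cascaded convolutions for two levels: the signal is never downsampled, and the filters are dilated with dyadup before the next level. It assumes the Signal Processing Toolbox function cconv for circular convolution and uses the spline filter values listed below; the test signal is illustrative.
x = rand(1,256); % test signal (one image row, say)
LoD = [0.1768 0.5303 0.5303 0.1768]; % low pass analysis filter
HiD = [0.3536 1.0607 -1.0607 -0.3536]; % high pass analysis filter
a1 = cconv(LoD, x, numel(x)); % level 1 approximation (no downsampling)
d1 = cconv(HiD, x, numel(x)); % level 1 detail
LoD2 = dyadup(LoD, 2); % dilated low pass filter for level 2
HiD2 = dyadup(HiD, 2); % zeros inserted between taps create the "holes"
a2 = cconv(LoD2, a1, numel(a1)); % level 2 approximation
d2 = cconv(HiD2, a1, numel(a1)); % level 2 detail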
The Matlab program in Appendix (B), which implements the a trous
algorithm along with lifting, was used to generate the following
images. We have decomposed the original image into three
levels. Fig. 5.5 shows the images of the approximations and the
horizontal and vertical details for each decomposition level. In this
implementation, we have calculated the time elapsed for the execution,
which is 13.523580 seconds.
We have used the following four biorthogonal quadruplet filters for the
implementation of the a trous algorithm and the lifting scheme:
LoD: low pass decomposition filter
HiD: high pass decomposition filter
LoR: low pass reconstruction filter
HiR: high pass reconstruction filter
Coefficients of the quadruplet filters are:
LoD = [0.1768 0.5303 0.5303 0.1768]
HiD = [0.3536 1.0607 -1.0607 -0.3536]
LoR = [-0.3536 1.0607 1.0607 -0.3536]
HiR = [0.1768 -0.5303 0.5303 -0.1768]
Following Figures 5.3, and 5.4 are images of the wavelet functions,
and scaling functions of the biorthogonal quadruplet filters generated
during the implementation.


Figure 5.3: Scaling function


Figure 5.5: 3 level Image Decomposition: Approximations, Horizontal and
Vertical Details


6. Image Denoising
In image processing, wavelets are used, for instance, for compression,
denoising, edge detection, watermarking, texture detection, etc. We
have implemented lifting and the a trous algorithm for the image denoising
application. In this implementation, we have denoised the image using
thresholding. We can use either soft thresholding or hard
thresholding. The threshold plays an important role in the denoising
process, and finding an optimum threshold is a tedious process. A small
threshold value will retain the noisy coefficients, whereas a large threshold
value leads to the loss of coefficients that carry image signal details. Hard
thresholding is a keep-or-kill rule, whereas soft thresholding shrinks the
coefficients above the threshold in absolute value.
Image denoising using wavelets consists of three successive procedures,
namely image decomposition, thresholding of the coefficients, and
image reconstruction. First, we carried out the wavelet analysis of the
noisy image up to a chosen level n. Second, we performed
thresholding of the detail coefficients. Lastly, we synthesized the
image using the altered detail coefficients and the approximation
coefficients of the image. Hard thresholding takes any wavelet
coefficient whose absolute value is below a certain threshold T and sets
it to zero [13]. Hard thresholding is implemented in the following way:

d_j^n(x,y) = d_j(x,y)   if |d_j(x,y)| > T
d_j^n(x,y) = 0          if |d_j(x,y)| <= T      (6.1)
Soft thresholding is the attenuation of wavelet coefficients whose absolute
value is above a certain threshold T; coefficients whose absolute
value is below the threshold are set to zero [13].
The implementation of the soft thresholding procedure is as follows:

d_j^n(x,y) = d_j(x,y) - T   if d_j(x,y) > T
d_j^n(x,y) = d_j(x,y) + T   if d_j(x,y) < -T
d_j^n(x,y) = 0              if |d_j(x,y)| <= T      (6.2)
The Matlab program in Appendix (C) performs image denoising using lifting
and the a trous algorithm and produces the following images. Gaussian
noise is added to the original image, with a signal to noise ratio of
around 15 dB.
In this implementation of the lifting and a trous algorithms, we have
used the sym4 wavelet. After processing the noisy image through the
algorithm, we have calculated the signal to noise ratio of the reconstructed
image. The coefficients of the filters, with their scaling and wavelet
functions, follow.


Coefficients of the Sym4 filters:
LoD = [-0.0758 -0.0296 0.4976 0.8037 0.2979 -0.0992 -0.0126 0.0322]
HiD = [-0.0322 -0.0126 0.0992 0.2979 -0.8037 0.4976 0.0296 -0.0758]
LoR =[0.0322 -0.0126 -0.0992 0.2979 0.8037 0.4976 -0.0296 -0.0758]
HiR = [-0.0758 0.0296 0.4976 -0.8037 0.2979 0.0992 -0.0126 -0.0322]
Figure 6.1: Scaling and Wavelet functions of the Symlet filters used in the
denoising operations


Figure 6.2: Image Denoising: Original Image, Noisy Image, Denoised Image
(From Left to Right)


7. Image Reconstruction using Modulus Maxima
Edge detection within an image is very important in image
compression and reconstruction. By detecting the modulus maxima in
a two-dimensional dyadic wavelet transform across scales, edges are
detected. Locations of the modulus maxima, often referred to as
multiscale edges, determine regions in the wavelet transform domain
with a high concentration of energy. For this reason, modulus maxima
are used for reconstruction of the original image.
Initially, the a trous algorithm is used to generate the wavelet coefficients
at each level. Next, the wavelet transform modulus and the angle of the
wavelet transform modulus for each corresponding pixel location are
generated. The modulus maxima matrices are generated from the
wavelet transform modulus and the angle matrices. Finally, the
conjugate gradient algorithm is used to reconstruct the image.
To reconstruct an image f from its multiscale edges, the modulus
maxima matrices must first be generated from the wavelet transform
modulus and angle of the wavelet transform modulus for each
corresponding pixel location.
The wavelet transform modulus M_j f(x,y) and angle A_j f(x,y) are
calculated by the following equations (7.1) and (7.2), using the horizontal
details d_j^h and vertical details d_j^v generated at each level by the a trous
algorithm:

M_j f(x,y) = sqrt( (d_j^h(x,y))^2 + (d_j^v(x,y))^2 )   (7.1)

A_j f(x,y) = arctan( d_j^v(x,y) / d_j^h(x,y) )   (7.2)

For each pixel location within each scale, a modulus maximum is
detected at location (x,y) if M_j f(x,y) > M_j f(x_1,y_1) and
M_j f(x,y) > M_j f(x_2,y_2). The locations (x_1,y_1) and (x_2,y_2) are adjacent to
(x,y) in the direction indicated by the angle A_j f(x,y). The angle can be one of eight
quantized directions: 0, π/4, π/2, 3π/4, π, -π/4, -π/2, or
-3π/4 [18].
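A small sketch of equations (7.1) and (7.2) for one level is given below; dh and dv stand for the horizontal and vertical detail matrices d_j^h and d_j^v (filled with random values here for illustration), and atan2 is used in place of arctan so that the full range of directions is preserved before quantization.
dh = randn(64); dv = randn(64); % placeholders for d_j^h and d_j^v
M = sqrt(dh.^2 + dv.^2); % wavelet transform modulus M_j f(x,y), eq. (7.1)
A = atan2(dv, dh); % angle A_j f(x,y), eq. (7.2)
Aq = round(A/(pi/4))*(pi/4); % quantize to the eight directions 0, +/-pi/4, ...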
The conjugate gradient algorithm can be thought of as an acceleration
of the frame algorithm [19]. Frame theory states that, under certain
conditions, one can recover a vector f in a Hilbert space H from its
inner products with vectors {φ_n}, n ∈ Γ. The index set Γ may or may not
be finite.
A sequence {φ_n}, n ∈ Γ, in a Hilbert space H is called a frame if there exist
constants A, B > 0 such that for all f ∈ H

A ||f||^2 ≤ Σ_{n∈Γ} |<f, φ_n>|^2 ≤ B ||f||^2

where A and B are positive constants known as frame bounds.
The conjugate gradient algorithm as stated by Grochenig is as follows.
Let S be the frame operator associated with the frame {φ_n} for H, defined by
Sf = Σ_{n∈Γ} <f, φ_n> φ_n.
Initialize the vectors f_0 = 0, r_0 = p_0 = g, p_{-1} = 0, and iterate:

λ_n = <r_n, p_n> / <p_n, S p_n>

f_{n+1} = f_n + λ_n p_n

r_{n+1} = r_n - λ_n S p_n

p_{n+1} = S p_n - ( <S p_n, S p_n> / <p_n, S p_n> ) p_n - ( <S p_n, S p_{n-1}> / <p_{n-1}, S p_{n-1}> ) p_{n-1}

The vector p_n is the search direction; it is initialized, along with the
residual vector r_n, with the initial gradient g.
The initial gradient is the first approximation of the image, obtained using a
formula similar to the a trous algorithm IDWT and the non-zero
horizontal and vertical details corresponding to the modulus maxima
locations at every level.


Note that aQ[n] = Spn. The variable^, which dynamically changes
during each iteration, is the step size in the pn direction S/?n. The
orthogonal projection of pn on the image space defined by the inner
products of the image / with vectorsj^}^. Now the actual conjugate
gradient algorithm is executed by calculating Xn, fn+l, rn+l and pn+i using
the equations listed previously .The process of generating Spn,An,fn+l,
rn+1and pn+x is iterated until the relative mean-square reconstruction
error, ||/-/||/|/|| .becomes minimal.
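The iteration above can be sketched generically as follows, with the frame operator represented by a function handle S; here S, the initial gradient g and the iteration count are illustrative placeholders, whereas in the thesis S is realized by the decomposition and reconstruction programs restricted to the modulus maxima locations (Appendices G-I).
S = @(x) conv2(x, [0 .25 0; .25 1 .25; 0 .25 0], 'same'); % illustrative symmetric operator
g = randn(32); nIter = 20; % illustrative initial gradient and iteration count
ip = @(a,b) sum(a(:).*b(:)); % inner product for images stored as matrices
f = zeros(size(g)); r = g; p = g; % f_0 = 0, r_0 = p_0 = g
p_prev = zeros(size(g)); Sp_prev = zeros(size(g));
for n = 1:nIter
Sp = S(p);
lambda = ip(r,p)/ip(p,Sp); % step size lambda_n
f = f + lambda*p; % f_{n+1} = f_n + lambda_n p_n
r = r - lambda*Sp; % r_{n+1} = r_n - lambda_n S p_n
pn = Sp - (ip(Sp,Sp)/ip(p,Sp))*p; % new search direction
if any(p_prev(:)) % subtract the p_{n-1} correction after the first pass
pn = pn - (ip(Sp,Sp_prev)/ip(p_prev,Sp_prev))*p_prev;
end
p_prev = p; Sp_prev = Sp; p = pn;
end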
The Matlab program in Appendix (D) produces the following Figure 7.1,
the three level modulus maxima matrix of an image. The modulus
maxima mask is generated at a threshold value of 0.01. Wherever the
wavelet transform modulus is deemed to be maximum, its location is
noted by creating a modulus maxima mask matrix for each level. All
wavelet coefficients not located at the modulus maxima locations for
their corresponding level are set to zero.


Figure 7.1: Three Level Modulus Maxima Mask of Image
Finally, after generating modulus maxima mask of the image,
conjugate gradient algorithm is applied. In the MATLAB program,
Appendix (E), recon_mm.m, the image to be analyzed and synthesized
using modulus maxima is, initially decomposed into the three levels.
From the modulus maxima matrix masks generated above, only those
details that correspond with the masks are saved along with the final
approximation.
This is done within the program, mm_atrous.m, which is located in
Appendix (F). Next, the program atrous_up.m in Appendix (G)
executes an algorithm similar to a trous algorithm version of inverse


dyadic wavelet transform using the details and approximation. The
program atrous_up.m generates the first direction image, which is
transferred to the program, ConjGrad_2d.m, which is located in
Appendix (H).The ConjGrad_2d.m subroutine orchestrates the
conjugate gradient algorithm by first calling atrous_down.m. Using the
program atrous_down.m in Appendix (I), the direction image is
decomposed into the previously stated number of levels, saving only
those details that correspond to the modulus maxima matrix masks
initially generated in mm_atrous.m. Figure 7.3 is the reconstructed
image obtained from the conjugate gradient algorithm. The
reconstructed image is lighter in shade. The algorithm reduces the
number of wavelet coefficients by about 80% for an image. Thus, image
compression using the modulus maxima technique is quite good, as
shown by the following figures.


Figure 7.2: Original Image
Figure 7.3: Reconstructed Image


8. Conclusion
In this study, image processing based on two different wavelet
transform algorithms along with the lifting scheme has been
demonstrated. The results obtained from the lifting wavelet transform for
image analysis were very favorable. In addition, the time taken and the
reconstruction error of lifting for image analysis were very small,
which shows that the wavelet transform implementation by lifting is very
fast and has an almost perfect reconstruction property. The a
trous algorithm implemented with lifting and used for image
denoising also led to good results. In this thesis, we have developed the
conjugate gradient algorithm, which uses the modulus maxima
technique as well as a form of the a trous algorithm with lifting to
reconstruct an image. The results were shown to be favorable.


APPENDIX (A)
IMAGE ANALYSIS USING LIFTING WAVELET TRANSFORM
% Image Reading
x=imread('a.jpg');
image(x)
x=double(x); %Convert to double precision
RGB=0.2990*x(:,:,1)+0.5870*x(:,:,2)+0.1140*x(:,:,3);
NBCodes=256;
X=wcodemat(RGB,NBCodes);
X=X/256; % Normalization of image
y= X;
imshow(y)%display Image
L=3; %level of detail
%...................................................
tic
%...................................................
% Get the corresponding lifting scheme
lsdb4 = liftwave('db4');
% Visualize the obtained lifting scheme
displs(lsdb4);
% Add a primal ELS to the lifting scheme
els = {'p',[2.13181671 0.63642827],0};
lsnew = addlift(lsdb4,els);
%...................................................
% Perform LWT at level L of the image.
[cA,cH,cV,cD] = lwt2(y, lsnew, L);
xDec = lwt2(y, lsnew, L);
% Extract approximation coefficients of each level.
ca1 = lwtcoef2('ca',xDec,lsnew,L,1);
ca2 = lwtcoef2('ca',xDec,lsnew,L,2);
ca3 = lwtcoef2('ca',xDec,lsnew,L,3);
xRec = ilwt2(cA,cH,cV,cD,lsnew,3);
%...................................................
% Reconstruct approximations and details.
%...................................................
% Reconstruct approximations and details of level 1.
a1 = lwtcoef2('a',xDec,lsnew,L,1);
h1 = lwtcoef2('h',xDec,lsnew,L,1);
v1 = lwtcoef2('v',xDec,lsnew,L,1);
d1 = lwtcoef2('d',xDec,lsnew,L,1);
% Reconstruct approximations and details of level 2.
a2 = lwtcoef2('a',xDec,lsnew,L,2);
h2 = lwtcoef2('h',xDec,lsnew,L,2);
v2 = lwtcoef2('v',xDec,lsnew,L,2);
d2 = lwtcoef2('d',xDec,lsnew,L,2);
% Reconstruct approximations and details of level 3.
a3 = lwtcoef2('a',xDec,lsnew,L,3);
h3 = lwtcoef2('h',xDec,lsnew,L,3);
v3 = lwtcoef2('v',xDec,lsnew,L,3);
d3 = lwtcoef2('d',xDec,lsnew,L,3);
%................................................
toc
%................................................
% Generation of the Images
%------------------------------------------------
figure (1)
subplot(2,3,2);
image(wcodemat(a1,192));
title('Approximation a1')
colormap('gray')
subplot(2,3,4);
image(wcodemat(h1,192));
colormap('gray')
title('Horizontal Detail h1')
subplot(2,3,5);
image(wcodemat(v1,192));
colormap('gray')
title('Vertical Detail v1')
subplot(2,3,6);
image(wcodemat(d1,192));
colormap('gray')
title('Diagonal Detail d1')


figure(2)
subplot(2,3,2);
image(wcodemat(a2,192));
colormap('gray')
title('Approximation a2')
subplot(2,3,4);
image(wcodemat(h2,192));
colormap('gray')
title('Horizontal Detail h2')
subplot(2,3,5);
image(wcodemat(v2,192));
title('Vertical Detail v2')
subplot(2,3,6);
image(wcodemat(d2,192));
colormap('gray')
title('Diagonal Detail d2')
figure(3)
subplot(2,3,2);
image(wcodemat(a3,192));
title('Approximation a3')
colormap('gray')
subplot(2,3,4);
image(wcodemat(h3,192));
colormap('gray')
title('Horizontal Detail h3')
subplot(2,3,5);
image(wcodemat(v3,192));
title('Vertical Detail v3')
subplot(2,3,6);
image(wcodemat(d3,192));
colormap('gray')
title('Diagonal Detail d3')
% Check perfect reconstruction,
err = max(max(abs(y-xRec)))


APPENDIX (B)
IMPLEMENTATION OF A TROUS ALGORITHM USING LIFTING
% Image Reading
x=imread('a.jpg');
image(x)
x=double(x); %Convert to double precision
RGB=0.2990*x(:,:,1)+0.5870*x(:,:,2)+0.1140*x(:,:,3);
NBCodes=256;
X=wcodemat(RGB,NBCodes);
X=X./256; % Normalization of image
[M,N] = size(X);
y= X;
imshow(X);%display Image
L=3; %level of detail
[r,c]=size(y);
tic
%....................................................
% Lifting Scheme
%----------------------------------------------------
% Get the corresponding lifting scheme
lsbior = liftwave('bior3.1');
% Visualize the obtained lifting scheme
displs(lsbior);
% Transform the lifting scheme to biorthogonal filters quadruplet.
[LoD,HiD,LoR,HiR] = ls2filt(lsbior)
bswfun(LoD,HiD,LoR,HiR,'plot');
%....................................................
% OR
% Compute the four filters associated with spline biorthogonal wavelet
% bior3.1
% % Find the two scaling filters associated with bior3.1
% [Rf,Df] = biorwavf('bior3.1');
%
% % Compute the four filters needed.
% [LoD,HiD,LoR,HiR] = biorfilt(Df,Rf)
% bswfun(LoD,HiD,LoR,HiR,'plot');
%
% subplot(221); stem(LoD);
% title('Dec. low-pass filter bior3.1');
% subplot(222); stem(HiD);
% title('Dec. high-pass filter bior3.1');
% subplot(223); stem(LoR);
% title('Rec. low-pass filter bior3.1');
% subplot(224); stem(HiR);
% title('Rec. high-pass filter bior3.1');
%----------------------------------------------------
% A Trous Algorithm
%...................................
% Decomposition of the Image
for k = 1:L
%...................................
% Approximations Generation
%--------------------------------------------------
for i=1:c
a{k}(1:r,i) = leftshift(y(1:r,i)',2^(k-1))'; %Shift image vertically to the left
a{k}(1:r,i) = cconv(LoD,a{k}(1:r,i)')'; % Convolution of low pass filter with columns from vertically shifted image
end
for i=1:r
a{k}(i,1:c) = leftshift(a{k}(i,1:c),2^(k-1)); %Shift image horizontally to the left
a{k}(i,1:c) = cconv(LoD,a{k}(i,1:c)); % Convolution of low pass filter with rows from horizontally shifted image
end;
%--------------------------------------------------------------
%Horizontal and Vertical Details Generation
%--------------------------------------------------------------
for i=1:c
d1{k}(1:r,i)=cconv(HiD,y(1:r,i)')'; %Convolution of high pass filter with columns of the image
d1{k}(1:r,i)=leftshift(d1{k}(1:r,i)',2^k)'; % Horizontal detail image
end;
for i=1:r
d2{k}(i,1:c)=cconv(HiD,y(i,1:c)); %Convolution of high pass filter with rows of the image
d2{k}(i,1:c)=leftshift(d2{k}(i,1:c),2^k); % Vertical detail image
end;
%Dilate filters
LoD= dyadup(LoD,2); %Decomposition low pass filter
HiD= dyadup(HiD,2); %Decomposition high pass filter
y = a{k};
end;
%..........................................................
%Reconstruction of the Image
%..........................................................
for k = L:-1:1
%Decimate filters
HiR = dyaddown(HiR,1); %Reconstruction high pass filter
LoR = dyaddown(LoR,1); %Reconstruction low pass filter
for i=1:r
Ra{k}(i,1:c) = cconv(LoR,y(i,1:c)); %Convolution of rows of approximation with the reconstruction low pass filter
Ra{k}(i,1:c) = leftshift(Ra{k}(i,1:c),2^k); %Shift resultant image horizontally
end;
for i=1:c
Ra{k}(1:r,i) = cconv(LoR,Ra{k}(1:r,i)')'; %Convolution of the columns of resultant image with the low pass filter
Ra{k}(1:r,i) = leftshift(Ra{k}(1:r,i)',2^k)'; %Shift resultant image vertically
end;
for i=1:c
Rd1{k}(1:r,i)=leftshift(d1{k}(1:r,i)',2^(k-1))'; %Shift horizontal details vertically
Rd1{k}(1:r,i)=cconv(HiR,Rd1{k}(1:r,i)')'; %Convolution of columns with reconstruction high pass filter
end;
for i=1:r
Rd2{k}(i,1:c)=leftshift(d2{k}(i,1:c),2^(k-1)); %Shift vertical details horizontally
Rd2{k}(i,1:c)=cconv(HiR,Rd2{k}(i,1:c)); %Convolution of rows with reconstruction high pass filter
end;
y = (1/4)*(Ra{k} + Rd1{k} + Rd2{k}); % Reconstruction of the Final Image
end
toc
%Display results
figure;
%Display approximations
subplot(3,3,1)
imshow(a{1},[]);
xlabel('Approximation(1)');
subplot(3,3,2)
imshow(a{2},[]);
xlabel('Approximation(2)');
subplot(3,3,3)
imshow(a{3},[]);
xlabel('Approximation(3)');
%Display horizontal details
subplot(3,3,4)
b = abs(1-(d1{1}./(max(max(d1{1})))));
Hd1 = imadjust(b, [],[], 8);
imshow(Hd1);
xlabel('Horizontal Details(1)');
subplot(3,3,5)
b = abs(1-(d1{2}./(max(max(d1{2})))));
Hd2 = imadjust(b,[],[],8);
imshow(Hd2);
xlabel('Horizontal Details(2)');
subplot(3,3,6)
b = abs(1-(d1{3}./(max(max(d1{3})))));
Hd3 = imadjust(b,[],[],8);
imshow(Hd3);
xlabel('Horizontal Details(3)');
%Display vertical details
subplot(3,3,7)
b = abs(1-(d2{1}./(max(max(d2{1})))));
Vd1 = imadjust(b, [],[], 8);
imshow(Vd1);
xlabel('Vertical Details(1)');
subplot(3,3,8)
b = abs(1-(d2{2}./(max(max(d2{2})))));
Vd2= imadjust(b, [],[], 8);
imshow(Vd2);
xlabel('Vertical Details(2)');
subplot(3,3,9)
b = abs(1-(d2{3}./(max(max(d2{3})))));
Vd3= imadjust(b, [],[], 8);
imshow(Vd3);
xlabel('Vertical Details(3)');
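%------------------------------------------------------------------
% NOTE: the scripts in these appendices rely on two helper routines,
% leftshift and cconv. The sketch below is only a guess at the assumed
% behaviour (a circular left shift, and a periodic convolution returning
% a vector of the same length as the input signal); it is not
% necessarily the exact implementation used for the results in this
% thesis.
function y = leftshift(x,n)
% Circularly shift the row vector x to the left by n samples.
y = circshift(x,[0 -n]);
end
function y = cconv(h,x)
% Periodic (circular) convolution of the filter h with the signal x,
% wrapped to length(x). Assumes length(h) <= length(x). Note that this
% shadows the Signal Processing Toolbox function of the same name.
N = length(x);
y = real(ifft(fft(x) .* fft(h,N)));
end
%------------------------------------------------------------------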

APPENDIX (C)
IMAGE DENOISING
clear all
close all
% Image Reading
x=imread('a.jpg');
x=double(x);
RGB=0.3240*x(:,:,1)+0.2465*x(:,:,2)+0.3640*x(:,:,3);
NBCodes=256;
X=wcodemat(RGB,NBCodes);
X=X./256;
figure(1);
imshow(X)
title('Original')
[M,N]=size(X)
% y=zeros(size(X));
%[M1,N1]=size(y);
n=3; % Decomposition Levels
% % Noise Addition to the Image
% m=input('Enter the Value of Mean->')
% v=input('Enter the Value of Variance->')
% W=imnoise(X,'gaussian',m,v);
% imshow(W)
%[M1,N1]=size(W);
%or
SNR=input('Enter the Value of SNR->')
var_s = (std2(X))^2;
var_n = 10^(log10(var_s) - (SNR/10));
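% (This follows from the definition SNR(dB) = 10*log10(var_s/var_n),
%  so that var_n = var_s/10^(SNR/10).)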
eta = sqrt(var_n)*randn(M,N);
W = X + eta;
[M1,N1]=size(W);
figure(2);
imshow(W)
title('Noisy Image')
% Lifting Wavelet Transform
lssym4=liftwave('sym4');
displs(lssym4);
[LoD,HiD,LoR,HiR]=ls2filt(lssym4);
l=length(LoD);
l=length(HiD);
l=length(LoR);
l=length(HiR);
% Zero Padding of the Image
%Extend left and right of image with zeros
[M1,N1] = size(W);
right_ext = zeros(M1,l-1);
left_ext = zeros(M1,l-1);
W= [left_ext W right_ext];
%Extend upper and lower image with zeros
[M2,N2] = size(W);
upper_ext = zeros(l-1,N2);
lower_ext = zeros(l-1,N2);
W = [upper_ext; W; lower_ext];
[R,C]=size(W)
% Decomposition of the Image
%Loop for decomposing image
for k = 1:n
for i=1:C
a{k}(1:R,i) = leftshift(W(1:R,i)',2^(k-1))';
a{k}(1:R,i) = cconv(LoD,a{k}(1:R,i)')';
end
for i=1:R
a{k}(i,1:C) = leftshift(a{k}(i,1:C),2^(k-1));
a{k}(i,1:C) = cconv(LoD,a{k}(i,1:C)); % Approximations Generation
end;
for i=1:C
d1{k}(1:R,i)=cconv(HiD,W(1:R,i)')';
d1{k}(1:R,i)=leftshift(d1{k}(1:R,i)',2^k)'; % Vertical Details Generation
end;
for i=1:R
d2{k}(i,1:C)=cconv(HiD,W(i,1:C));
d2{k}(i,1:C)=leftshift(d2{k}(i,1:C),2^k); %Horizontal Details Generation
end;
%Dilate filters
LoD= dyadup(LoD,2);
HiD= dyadup(HiD,2);
z=a{k};
end;
%Enter the values for Threshold and type of Thresholding
Thrsh=input('Enter The Value of Threshold->')
%Soft and Hard Thresholding
choice=input('Enter choice 0 for Hard and 1 for Soft Thresholding -> ') % Type 0 for Hard, Type 1 for Soft
% Soft Thresholding
if choice==1
% x = abs(d1{k})
d1{k} = sign(d1{k}).*(abs(d1{k}) >= Thrsh).*(abs(d1{k}) - Thrsh);
% x = abs(d2{k})
d2{k} = sign(d2{k}).*(abs(d2{k}) >= Thrsh).*(abs(d2{k}) - Thrsh);
%Hard Thresholding
elseif choice==0
d1{k} = d1{k}.*(abs(d1{k})>=Thrsh);
d2{k} = d2{k}.*(abs(d2{k})>=Thrsh);
end
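% As a quick illustration of the two rules on a toy vector
% (hypothetical values, not taken from the experiments):
% w = [-0.5 -0.1 0.05 0.3]; T = 0.2;
% hard: w.*(abs(w)>=T)                   -> [-0.5  0  0  0.3]
% soft: sign(w).*(abs(w)>=T).*(abs(w)-T) -> [-0.3  0  0  0.1]
% Hard thresholding keeps the surviving coefficients unchanged, while
% soft thresholding also shrinks them toward zero by the threshold value.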
%......................................................
%Reconstruction of the Image
%......................................................
for k = n:-1:1
%Decimate filters
HiR = dyaddown(HiR,1);
LoR = dyaddown(LoR,1);
for i=1:R
Ra{k}(i,1:C) = cconv(LoR,z(i,1:C));
Ra{k}(i,1:C) = leftshift(Ra{k}(i,1:C),2^k);
end;
for i=1:C
Ra{k}(1:R,i) = cconv(LoR,Ra{k}(1:R,i)')';
Ra{k}(1:R,i) = leftshift(Ra{k}(1:R,i)',2^k)';
end;
for i=1:C
Rd1{k}(1:R,i)=leftshift(d1{k}(1:R,i)',2^(k-1))';
Rd1{k}(1:R,i)=cconv(HiR,Rd1{k}(1:R,i)')';
end;
for i=1:R
Rd2{k}(i,1:C)=leftshift(d2{k}(i,1:C),2^(k-1));
Rd2{k}(i,1:C)=cconv(HiR,Rd2{k}(i,1:C));
end;
Recon_Image = (1/4)*(Ra{k} + Rd1{k} + Rd2{k}); % Reconstruction of the Final Image
end;
% Reconstructed Image after Denoising
figure(3);
imshow(Recon_Image,[]);
xlabel(['Reconstructed Image after Denoising']);

APPENDIX (D)
MODULUS MAXIMA MASK GENERATION USING A TROUS
ALGORITHM
% Image Reading
x=imread('a.jpg');
image(x)
x=double(x); %Convert to double precision
RGB=0.2990*x(:,:,1)+0.5870*x(:,:,2)+0.1140*x(:,:,3);
NBCodes=256;
X=wcodemat(RGB,NBCodes);
X=X./256; % Normalization of image
[M,N] = size(X);
y=X;
% imshow(X);%display Image
L=3; %level of detail
[nr,nc]=size(y);
nonzeros = L*2*nr*nc;
%......................................................
% Lifting Scheme
%------------------------------------------------------
% Get the corresponding lifting scheme
lsbior = liftwave('bior3.1');
% Visualize the obtained lifting scheme
displs(lsbior);
% Transform the lifting scheme to biorthogonal filters quadruplet.
[LoD,HiD,LoR,HiR] = ls2filt(lsbior);
%LoD=Low Pass decomposition filter
%HiD=High Pass decomposition filter
%LoR=Low Pass reconstruction filter
%HiR=High Pass reconstruction filter
bswfun(LoD,HiD,LoR,HiR,'plot');
%......................................................
% A Trous Algorithm
%......................................................
% Decomposition of the Image
for k = 1 :L
%............................................................
% Approximations Generation
%............................................................
for i=1:nc
a{k}(1:nr,i) = leftshift(y(1:nr,i)',2^(k-1))'; %Shift either image vertically to the Left
a{k}(1:nr,i) = cconv(LoD,a{k}(1:nr,i)')'; % convolution of low pass filter with columns from vertically shifted image
end
for i=1:nr
a{k}(i,1:nc) = leftshift(a{k}(i,1:nc),2^(k-1)); %Shift either image horizontally to the Left
a{k}(i,1:nc) = cconv(LoD,a{k}(i,1:nc)); % convolution of low pass filter with rows from horizontally shifted image
end;
%......................................
%Horizontal and Vertical Details Generations
%......................................
for i=1:nc
d1{k}(1:nr,i)=cconv(HiD,y(1:nr,i)')'; %convolution of high pass filter with columns of the image
d1{k}(1:nr,i)=leftshift(d1{k}(1:nr,i)',2^k)'; % horizontal detail image
end;
for i=1:nr
d2{k}(i,1:nc)=cconv(HiD,y(i,1:nc)); %convolution of high pass filter with rows of the image
d2{k}(i,1:nc)=leftshift(d2{k}(i,1:nc),2^k); % Vertical detail image
end;
%Dilate filters
LoD= dyadup(LoD,2); %Decomposition low pass filter
HiD= dyadup(HiD,2); %Decomposition high pass filter
y = a{k};
end;
%Set threshold
Threshold = 0.01;
%Calculate the modulus maxima for each level
for k = L:-1:1
wtm{k} = sqrt(d1{k}.^2 + d2{k}.^2); %Wavelet Transform Modulus
[mod_m,mod_n] = size(wtm{k});
%Calculate angle of the wavelet transform vector
for qt = 1 :nr
for r = 1:nc
alpha{k}(qt,r) = atan2(d2{k}(qt,r),d1{k}(qt,r));
if alpha{k}(qt,r) < 0
alpha{k}(qt,r)=2*pi+alpha{k}(qt,r);
end
end
end
%.....................-............
%calculate modulus maxima image
%..................................
for r=1 :mod_m
for c=1 :mod_n
mod_max{k}(r,c)=255; %Initialize modulus maxima array
end
end
%find local maximum of modulus
ang=pi/8;
for r=2:mod_m-1
for c=2:mod_n-1
if (alpha{k}(r,c)>=(15*ang)|...
alpha{k}(r,c)<=(1 *ang))&...
wtm{k}(r,c)>=Threshold &...
wtm{k}(r,c)>=wtm{k}(r-1,c) & wtm{k}(r,c) >= wtm{k}(r+1 ,c)
mod_max{k}(r,c)=0;
elseif (alpha{k}(r,c)>=(1 *ang)&...
alpha{k}(r,c)<=(3*ang))&...
wtm{k}(r,c)>=Threshold &...
wtm{k}(r,c)>=wtm{k}(r-1,c-1) & wtm{k}(r,c) >= wtm{k}(r+1,c+1)
mod_max{k}(r,c)=0;
elseif (alpha{k}(r,c)>=(3*ang)&...
alpha{k}(r,c)<=(5*ang))&...
wtm{k}(r,c)>=Threshold &...
wtm{k}(r,c)>=wtm{k}(r,c-1) & wtm{k}(r,c) >= wtm{k}(r,c+1)
mod_max{k}(r,c)=0;
elseif (alpha{k}(r,c)>=(5*ang)&...
alpha{k}(r,c)<=(7*ang))&...
wtm{k}(r,c)>=Threshold &...
wtm{k}(r,c)>=wtm{k}(r-1,c+1) & wtm{k}(r,c) >= wtm{k}(r+1,c-1)
mod_max{k}(r,c)=0;
elseif (alpha{k}(r,c)>=(7*ang)&...
alpha{k}(r,c)<=(9*ang))&...
wtm{k}(r,c)>=Threshold &...
wtm{k}(r,c)>=wtm{k}(r-1,c) & wtm{k}(r,c) >= wtm{k}(r+1 ,c)
mod_max{k}(r,c)=0;
elseif (alpha{k}(r,c)>=(9*ang)&...
alpha{k}(r,c)<=(11 *ang))&...
wtm{k}(r,c)>=Threshold &...
wtm{k}(r,c)>=wtm{k}(r-1 ,c-1) & wtm{k}(r,c) >= wtm{k}(r+1 ,c+1)
mod_max{k}(r,c)=0;
elseif (alpha{k}(r,c)>=(11*ang)&...
alpha{k}(r,c)<=(13*ang))&...
wtm{k}(r,c)>=Threshold &...
wtm{k}(r,c)>=wtm{k}(r,c-1) & wtm{k}(r,c) >= wtm{k}(r,c+1)
mod_max{k}(r,c)=0;
elseif (alpha{k}(r,c)>=(13*ang)&...
alpha{k}(r,c)<=(15*ang))&...
wtm{k}(r,c)>=Threshold &...
wtm{k}(r,c)>=wtm{k}(r-1,c+1) & wtm{k}(r,c) >= wtm{k}(r+1,c-1)
mod_max{k}(r,c)=0;
end
end
end
end
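% In the chain of tests above, the gradient angle alpha is quantized
% into pi/4-wide sectors bounded by odd multiples of pi/8 (ang = pi/8).
% Each sector selects one of four neighbour pairs of the current pixel:
% (r-1,c)/(r+1,c), (r-1,c-1)/(r+1,c+1), (r,c-1)/(r,c+1), or
% (r-1,c+1)/(r+1,c-1). A pixel is marked as a modulus maximum
% (mod_max set to 0) only when its modulus wtm exceeds the Threshold
% and is not smaller than both neighbours in the selected pair, i.e. a
% non-maximum suppression along the direction of the transform vector.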
%Build vertical and horizontal detail matrices that include only those
%coefficients that are located at the modulus maxima; otherwise, the
%coefficients are set to zero.
for k = 1:L
d1_mm{k} = zeros(nr,nc);
d2_mm{k} = zeros(nr,nc);
for i = 1:nr
for j = 1:nc
if mod_max{k}(i,j) == 0
d1_mm{k}(i,j) = d1{k}(i,j);
d2_mm{k}(i,j) = d2{k}(i,j);
end
end
end
D1_MM(:,:,k) = d1_mm{k};
D2_MM(:,:,k) = d2_mm{k};
end
mm_nonzeros = nnz(D1_MM) + nnz(D2_MM);
percent = mm_nonzeros/nonzeros
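% percent is the fraction of the L*2*nr*nc detail coefficients that
% remain non-zero after keeping only the modulus maxima locations.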
%Display modulus maxima matrix mask
figure
subplot(1,3,1)
imshow(mod_max{1},[]);
xlabel('Modulus Maxima for Level 1');
subplot(1,3,2)
imshow(mod_max{2},[]);
xlabel('Modulus Maxima for Level 2');
subplot(1,3,3)
imshow(mod_max{3},[]);
xlabel('Modulus Maxima for Level 3');
decomp_times=3;
for k = 1 :decomp_times
figure;
subplot(3,1,1)
imshow(a{k},[]);
title(['Approximation for level ',num2str(k)]);
subplot(3,1,2)
imshow(d1{k},[]);
title(['Horizontal details for level ',num2str(k)]);
subplot(3,1,3)
imshow(d2{k},[]);
title(['Vertical details for level ',num2str(k)]);
end
figure;
for k = 1 :decomp_times
subplot(decomp_times,1 ,k)
imshow(wtm{k},[]);
if k == 1
title('Wavelet Transform Modulus');
end
end
figure;
for k = 1 :decomp_times
subplot(decomp_times,1 ,k)
imshow(alpha{k},[]);
if k == 1
title('Angle of the Wavelet Transform Vector');
end
end
figure;
zero_len = 0;
for k = 1 :decomp_times
subplot(decomp_times,1 ,k)
imshow(mod_max{k},[]);
zero_len = zero_len + length(find(mod_max{k}==0));
if k == 1
title('Modulus Maximum');
end
end
[xsize,ysize] = size(mod_max{1});
data_len = xsize*ysize;
compressionRate = 100 - 100*(zero_len/data_len)

APPENDIX (E)
RECONSTRUCTION OF THE IMAGE USING THE CONJUGATE
GRADIENT ALGORITHM
function recon_mm(lvl,Threshold)
% Image Reading
x=imread('a.jpg');
image(x)
x=double(x); %Convert to double precision
RGB=0.2990*x(:,:,1)+0.5870*x(:,:,2)+0.1140*x(:,:,3);
NBCodes=256;
X=wcodemat(RGB,NBCodes);
X=X./256; % Normalization of image
[M,N] = size(X);
y= X(100:400,100:400);
save_orig = y;
[nr,nc]=size(y);
[a, D1_MM, D2_MM, gprime, hprime] = mm_atrous(lvl,Threshold);
%u_a = (abs(a) > 0);
u_d1 = (abs(D1_MM) > 0);
u_d2 = (abs(D2_MM) > 0);
p = atrous_up(lvl, hprime, gprime, a, D1_MM, D2_MM);
f = zeros(1, nr*nc);
pold = zeros(1,nr*nc);
r = p(1,:);
for i = 2:nr
r = [r p(i,:)];
end
Lpold = zeros(1 ,nr*nc);
imax = 16;
i = 0;
while i<imax
i = i+1;
[fnew,pnew,rnew,Lp] = ConjGrad_2d(f,p,pold,r,u_d1 ,u_d2,lvl, Lpold);
f = fnew;
pold = p(1,:);
for j = 2:nr
pold = [pold p(j,:)];
end
p = pnew(1:nc);
for j = 1:nr-1
p = [p; pnew(1+(j*nc):(j+1)*nr)];
end
r = rnew;
Lpold = Lp;
end
%Calculate signal-to-noise ratio
f_image = f(1:nc);
for j = 1:nr-1
f_image = [f_image; f(1+(j*nc):(j+1)*nr)];
end
var_s = (std2(save_orig))^2;
var_n = (std2(double(save_orig) - f_image))^2;
snr = 10*log10(var_s/var_n);
figure
imshow(save_orig,[]);
title('Original Image');
figure
imshow(f_image,[]);
title(['Reconstructed Image from the Modulus Maxima, SNR = ',num2str(snr),' dB']);

APPENDIX (F)
MODULUS MAXIMA
function [y, D1_MM, D2_MM, gprime, hprime] = mm_atrous(lvl,Threshold)
% Get the corresponding lifting scheme
lsbior = liftwave('bior3.1');
% Visualize the obtained lifting scheme
displs(lsbior);
% Transform the lifting scheme to biorthogonal filters quadruplet.
[hf,gf, hprime, gprime] = ls2filt(lsbior);
%bswfun(LoD,HiD,LoR,HiR,'plot');
% or
% Image Reading
x=imread('a.jpg');
image(x)
x=double(x); %Convert to double precision
RGB=0.2990*x(:,:,1)+0.5870*x(:,:,2)+0.1140*x(:,:,3);
NBCodes=256;
X=wcodemat(RGB,NBCodes);
X=X./256; % Normalization of image
[M,N] = size(X);
y= X(100:400,100:400);
save_orig = y;
L=lvl;
%L=level of detail
[nr,nc]=size(y);
%.......................
%Loop for decomposing
%.......................
for k = 1:L
%..............................
%calculate approximations
%..............................
%shift either image or latest approximation vertically
for i=1:nc
a{k}(1:nr,i) = leftshift(y(1:nr,i)',2^(k-1))';
%circular convolution of hd with columns from vertically shifted image
a{k}(1:nr,i) = cconv(hf,a{k}(1:nr,i)')';
end
%shift either image or latest approximation horizontally
for i=1:nr
a{k}(i,1:nc) = leftshift(a{k}(i,1:nc),2^(k-1));
%circular convolution of hd with rows from horizontally shifted image
a{k}(i,1 :nc) = cconv(hf,a{k}(i,1 :nc));
end;
%---------------------------
%calculate details d1 and d2
%...........................
%shift image or previous approximation vertically
for i=1:nc
d1{k}(1:nr,i)=cconv(gf,y(1:nr,i)')';
d1{k}(1:nr,i)=leftshift(d1{k}(1:nr,i)',2^k)';
%d1 comes from convolving columns of shifted image
end;
%shift image or previous approximation horizontally
for i=1:nr
d2{k}(i,1:nc)=cconv(gf,y(i,1:nc));
d2{k}(i,1:nc)=leftshift(d2{k}(i,1:nc),2^k);
%d2 comes from convolving rows of shifted image
end;
%calculate filters
hf=[dyadup(hf,2) 0]; %a trous
gf=[dyadup(gf,2) 0]; %a trous
hprime=[dyadup(hprime,2) 0];
gprime = [dyadup(gprime,2) 0];
y = a{k};
end;
%Threshold = 0;
%Calculate the modulus maxima for each level
for k = L:-1:1
wtm{k} = sqrt(d1{k}.^2 + d2{k}.^2); %Wavelet Transform Modulus
[mod_m,mod_n] = size(wtm{k});
%Calculate angle of the wavelet transform vector
for qt = 1 :nr
for r = 1:nc
alpha{k}(qt,r) = atan2(d2{k}(qt,r),d1{k}(qt,r));
if alpha{k}(qt,r) < 0
alpha{k}(qt,r)=2*pi+alpha{k}(qt,r);
end
end
end
%....................
%calculate modulus max image
%....................
for r=1 :mod_m
for c=1 :mod_n
mod_max{k}(r,c)=255; %Initialize modulus maxima array
end
end
%find local maximum of modulus
ang=pi/8;
for r=2:mod_m-1
for c=2:mod_n-1
if (alpha{k}(r,c)>=(15*ang)|...
alpha{k}(r,c)<=(1 *ang))&...
wtm{k}(r,c)>=Threshold &...
wtm{k}(r,c)>=wtm{k}(r-1,c) & wtm{k}(r,c) >= wtm{k}(r+1 ,c)
mod_max{k}(r,c)=0;
elseif (alpha{k}(r,c)>=(1 *ang)&...
alpha{k}(r,c)<=(3*ang))&...
wtm{k}(r,c)>=Threshold &...
wtm{k}(r,c)>=wtm{k}(r-1,c-1) & wtm{k}(r,c) >= wtm{k}(r+1,c+1)
mod_max{k}(r,c)=0;
elseif (alpha{k}(r,c)>=(3*ang)&...
alpha{k}(r,c)<=(5*ang))&...
wtm{k}(r,c)>=Threshold &...
wtm{k}(r,c)>=wtm{k}(r,c-1) & wtm{k}(r,c) >= wtm{k}(r,c+1)
mod_max{k}(r,c)=0;
elseif (alpha{k}(r,c)>=(5*ang)&...
alpha{k}(r,c)<=(7*ang))&...
wtm{k}(r,c)>=Threshold &...
wtm{k}(r,c)>=wtm{k}(r-1,c+1) & wtm{k}(r,c) >= wtm{k}(r+1,c-1)
mod_max{k}(r,c)=0;
elseif (alpha{k}(r,c)>=(7*ang)&...
alpha{k}(r,c)<=(9*ang))&...
wtm{k}(r,c)>=Threshold &...
wtm{k}(r,c)>=wtm{k}(r-1,c) & wtm{k}(r,c) >= wtm{k}(r+1 ,c)
mod_max{k}(r,c)=0;
elseif (alpha{k}(r,c)>=(9*ang)&...
alpha{k}(r,c)<=(11 *ang))&...
wtm{k}(r,c)>=Threshold &...
wtm{k}(r,c)>=wtm{k}(r-1,c-1) & wtm{k}(r,c) >= wtm{k}(r+1,c+1)
mod_max{k}(r,c)=0;
elseif (alpha{k}(r,c)>=(11*ang)&...
alpha{k}(r,c)<=(13*ang))&...
wtm{k}(r,c)>=Threshold &...
wtm{k}(r,c)>=wtm{k}(r,c-1) & wtm{k}(r,c) >= wtm{k}(r,c+1)
mod_max{k}(r,c)=0;
elseif (alpha{k}(r,c)>=(13*ang)&...
alpha{k}(r,c)<=(15*ang))&...
wtm{k}(r,c)>=Threshold &...
wtm{k}(r,c)>=wtm{k}(r-1 ,c+1) & wtm{k}(r,c) >= wtm{k}(r+1 ,c-1)
mod_max{k}(r,c)=0;
end
end
end
end
%Build vertical and horizontal detail matrices that include only those
%coefficients that are located at the modulus maxima; otherwise, the
%coefficients are set to zero.
for k = 1:L
d1_mm{k} = zeros(nr,nc);
d2_mm{k} = zeros(nr,nc);
for i = 1:nr
for j = 1 :nc
if mod_max{k}(i,j) == 0
d1_mm{k}(i,j) = d1{k}(i,j);
d2_mm{k}(i,j) = d2{k}(i,j);
end
end
end
D1_MM(:,:,k) = d1_mm{k};
D2_MM(:,:,k) = d2_mm{k};
end


APPENDIX (G)
GENERATION OF THE INITIAL SEARCH DIRECTION USED IN THE
CONJUGATE GRADIENT ALGORITHM
%This program is called by recon_mm() initially to generate the first
%search direction for the conjugate gradient algorithm. All subsequent
%calls to this program are done within the program, ConjGrad_2d(), to
%generate the orthogonal projection of p (the search direction) onto
%the image space. This program resembles the algorithme a trous image
%reconstruction algorithm.
%Output:
%The first time this program is called, p = search direction
%For subsequent calls, p = orthogonal projection of p onto the image
%plane
function p = atrous_up(levels,hprime,gprime,a,D1_MM, D2_MM)
L=levels;
%L=level of detail
[nr,nc]=size(a);
y = a;
%.......................
%Loop for reconstructing
%.......................
for k = L:-1:1
%Calculate filters
gprime = dyaddown(gprime,3);
hprime = dyaddown(hprime,3);
%Shift and convolve the last approximation that was computed before
%the modulus maxima were calculated
for i=1:nr
rec_a{k}(i,1 :nc) = cconv(hprime,y(i,1 :nc));
rec_a{k}(i,1:nc) = leftshift(rec_a{k}(i,1:nc),2^k);
end;
for i=1 :nc
rec_a{k}(1:nr,i) = cconv(hprime,rec_a{k}(1:nr,i)')';
rec_a{k}(1:nr,i) = leftshift(rec_a{k}(1:nr,i)',2^k)';
end;
for i=1:nc
rec_d1{k}(1:nr,i)=leftshift(D1_MM(1:nr,i,k)',2^(k-1))';
rec_d1{k}(1:nr,i)=cconv(gprime,rec_d1{k}(1:nr,i)')';
end;
for i=1:nr
rec_d2{k}(i,1:nc)=leftshift(D2_MM(i,1:nc,k),2^(k-1));
rec_d2{k}(i,1 :nc)=cconv(gprime,rec_d2{k}(i,1 :nc));
end;
y = rec_a{k} + rec_d1{k} + rec_d2{k};
end
p = y;

APPENDIX (H)
CONJUGATE GRADIENT PROGRAM
function [fnew,pnew,rnew,Lp] = ConjGrad_2d(f,q,pold,r,u_d1,u_d2,lvl,Lpold)
%This program is the implementation of the conjugate gradient
%algorithm. This routine gets invoked many times until the
%reconstructed image reaches a certain mean-squared error.
[nr,nc] = size(q);
[y, D1_MM, D2_MM, hprime, gprime] = atrous_down(q,lvl,u_d1,u_d2);
p = q(1,:);
for i = 2:nr
p = [p q(i,:)];
end
Lq = atrous_up(lvl, hprime, gprime, y, D1_MM, D2_MM);
Lp = Lq(1,:);
for i = 2:nr
Lp = [Lp Lq(i,:)];
end
lambda = (r*p')/(p*Lp');
fnew = f + lambda*p;
rnew = r - lambda*Lp;
pnew = Lp - ((Lp*Lp')/(p*Lp'))*p;
if ~(pold == 0),
pnew = pnew - ((Lp*Lpold')/(pold*Lpold'))*pold;
end
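% In summary, one call of this routine performs the conjugate gradient
% updates (Lp denotes the result of reconstructing the masked
% decomposition of the current search direction p):
%   lambda = <r,p>/<p,Lp>    step length
%   fnew   = f + lambda*p    updated image estimate
%   rnew   = r - lambda*Lp   updated residual
%   pnew   = Lp - (<Lp,Lp>/<p,Lp>)*p - (<Lp,Lpold>/<pold,Lpold>)*pold
%            next search direction, conjugated against p and pold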

APPENDIX (I)
PROGRAM FOR DECOMPOSING THE SEARCH DIRECTION USED
IN THECONJUGATE GRADIENT ALGORITHM
function [y, D1_MM, D2_MM, hprime, gprime] = atrous_down(y,lvl,u_d1,u_d2)
%The function is called by ConjGrad_2d() to decompose the search
%direction into horizontal and vertical details as well as the final
%approximation at level = lvl. Only the details that correspond to the
%modulus maxima matrix masks are kept.
% Get the corresponding lifting scheme
Isbior = liftwave('bior3.1');
% Visualize the obtained lifting scheme
displs(lsbior);
% Transform the lifting scheme to biorthogonal filters quadruplet.
[hf,gf,hprime,gprime] = ls2filt(lsbior);
L=lvl;
%L=level of detail
[nr,nc]=size(y);
%......................
%Loop for decomposing the search direction
%----------------------
for k = 1 :L
%.............................
%calculate approximations
%-----------------------------
%shift either image or latest approximation vertically
for i=1 :nc
a{k}(1:nr,i) = leftshift(y(1:nr,i)',2^(k-1))';
%circular convolution of hd with columns from vertically shifted image
a{k}(1:nr,i) = cconv(hf,a{k}(1:nr,i)')';
end
%shift either image or latest approximation horizontally
for i=1:nr
a{k}(i,1:nc) = leftshift(a{k}(i,1:nc),2^(k-1));
%circular convolution of hd with rows from horizontally shifted image
a{k}(i,1:nc) = cconv(hf,a{k}(i,1:nc));
end;
%............................
%calculate details d1 and d2
%............................
%shift image or previous approximation vertically
for i=1 :nc
d1{k}(1:nr,i)=cconv(gf,y(1:nr,i)')';
d1{k}(1:nr,i)=leftshift(d1{k}(1:nr,i)',2^k)';
end;
%shift image or previous approximation horizontally
for i=1:nr
d2{k}(i,1 :nc)=cconv(gf,y(i,1 :nc));
d2{k}(i,1:nc)=leftshift(d2{k}(i,1:nc),2^k);
end;
%calculate filters
hf=[dyadup(hf,2) 0]; %a trous
hprime = [dyadup(hprime,2) 0];
gf=[dyadup(gf,2) 0]; %a trous
gprime = [dyadup(gprime,2) 0];
y = a{k};
D1_MM(:,:,k) = d1{k};
D2_MM(:,:,k) = d2{k};
end;
D1_MM = u_d1.*D1_MM;
D2_MM = u_d2.*D2_MM;

REFERENCES
[1] W. Sweldens. The lifting scheme: A construction of second
generation wavelets. SIAM J. Math. Anal., 29(2):511-546, 1997.
[2] Roger L. Claypoole, Jr., Geoffrey M. Davis, Wim Sweldens, and
Richard G. Baraniuk (2003).Nonlinear Wavelet Transforms for Image
Coding via Lifting. IEEE Transactions on Signal Processing, 12(12),
1449-1459.
[3] Polikar, R. (1999). The engineer's ultimate guide to wavelet
analysis: The wavelet tutorial. Retrieved October 13, 2002, from the
Rowan University College of Engineering Web site:
http://engineering.rowan.edu/~polikar/WAVELETS/WTtutorial.html.
[4] Arne Jensen and Anders la Cour-Harbo. Ripples in Mathematics:
The Discrete Wavelet Transform. Springer Publications, Berlin, 2001.
[5] W. Sweldens, The lifting scheme: A new philosophy in biorthogonal
wavelet constructions, in Wavelet Applications in Signal and Image
Processing III, A. F. Laine and M. Unser, Eds. New York: SPIE, 1995,
vol. 2569, pp. 68-79.
[6] W. Sweldens, The lifting scheme: A custom-design construction of
biorthogonal wavelets, J. Appl. Comput. Harmonic Analysis, 3 (1996),
pp. 186-200.
[7] Daubechies I, Sweldens W. Factoring Wavelet Transforms Into
Lifting Steps. J Fourier Anal Appl, 1998,4(3): 247-269.
[8] Jianfang Shi, Baofeng Hao, and Fujun Zhang. (2009). Track
Correlation Based on Atrous Algorithm. Sixth International Conference
on Fuzzy Systems and Knowledge Discovery. 7(14-16).392-396.
[9] M. González-Audícana, X. Otazu, O. Fors, and A. Seco. (2005).
Comparison between Mallat's and the à trous discrete wavelet
transform based algorithms for the fusion of multispectral and
panchromatic images. International Journal of Remote Sensing, 26(3),
595-614.
[10] Xiaodang Zhang and Deren Li. (2001). A trous wavelet
decomposition applied to image edge detection. Annals of GIS, 7(2),
119-123.
[11] Lei Hong, Yujing Guan, and Lei Zhang. (2007). An a-trous algorithm
based threshold shrinkage denoising method for blood oxygen signal.
ICWAPR '07, International Conference on Wavelet Analysis and
Pattern Recognition, 4(2-4), 1669-1673.
[12] Shensa, M. (1992). The discrete wavelet transform: Wedding the
a trous and Mallat algorithms. IEEE Transactions on Signal
Processing, 40(10), 2464-2482.