Citation
Image de-noising using wavelet shrinkage and non-linear filtering

Material Information

Title:
Image de-noising using wavelet shrinkage and non-linear filtering
Creator:
Hart, Thomas O
Place of Publication:
Denver, CO
Publisher:
University of Colorado Denver
Publication Date:
Language:
English
Physical Description:
xii, 90 leaves : illustrations ; 29 cm

Subjects

Subjects / Keywords:
Image processing ( lcsh )
Imaging systems -- Image quality ( lcsh )
Wavelets (Mathematics) ( lcsh )
Filters (Mathematics) ( lcsh )
Filters (Mathematics) ( fast )
Image processing ( fast )
Imaging systems -- Image quality ( fast )
Wavelets (Mathematics) ( fast )
Genre:
bibliography ( marcgt )
theses ( marcgt )
non-fiction ( marcgt )

Notes

Bibliography:
Includes bibliographical references (leaves 89-90).
Thesis:
Electrical engineering
General Note:
Department of Electrical Engineering
Statement of Responsibility:
by Thomas O. Hart.

Record Information

Source Institution:
University of Colorado Denver
Holding Location:
Auraria Library
Rights Management:
All applicable rights reserved by the source institution and holding location.
Resource Identifier:
40283065 ( OCLC )
ocm40283065
Classification:
LD1190.E54 1998m .H37 ( lcc )



Full Text
Image De-Noising
Using
Wavelet Shrinkage
And
Non-Linear Filtering
by
Thomas O. Hart
B.S. University of Colorado, 1980
A thesis submitted to the
University of Colorado at Denver
in partial fulfillment
of the requirement for the degree of
Master of Science
Electrical Engineering
1998


1998 by Thomas O. Hart
All Rights Reserved


This Thesis for the Master of Science
degree by
Thomas O. Hart
has been approved
by
Christopher Smith


Hart, Thomas O. (M.S., Electrical Engineering)
Image De-Noising Using Wavelet Shrinkage And Non-Linear Filtering
Thesis Directed by Associate Professor Tamal Bose
Abstract
Many clever techniques to reduce noise while preserving the content of sensed
data have been devised. Recent publications derive a Wavelet transform based
method of noise reduction for multi-dimensional signals. The focus of this thesis is to
quantify the performance of the Wavelet transform in its ability to reduce Gaussian
noise while retaining salient features within the data stream. The analysis herein uses
simulated signals and images, with Gaussian noise artificially added, to quantify
performance of Wavelets in their ability to reduce white noise. To contrast the
Wavelet method, a variant of Unsharp Masking is applied to the same data set.
Figures of merit for both techniques are presented to gauge the strengths and
weaknesses of their noise reduction capacity.
This abstract accurately represents the content of the candidate's thesis. I
recommend its publication.


Signed
Tamal Bose


CONTENTS
Chapter
1. Introduction.........................................................1
2. Noise Measurement Techniques.........................................4
2.1 Signal To Noise Ratio.............................................6
2.2 Background Variance Detail Variance.............................7
3. Wavelet Theory......................................................10
3.1 The Wavelet Transform............................................10
3.2 Wavelet De-Noising..............................................24
4. Polynomial Filters..................................................30
4.1 Gradient Based Filtering.........................................30
4.2 Cubic Sharpening.................................................33
4.3 Modified Cubic Sharpening........................................35
5. Computer Simulations and Comparative Analysis.......................40
5.1 One Dimensional De-Noising Results...............................40
5.2 Two Dimensional De-Noising Results...............................44
5.2.1 Analysis of tire...........................................47
5.2.2 Analysis of woman..........................................52
5.2.3 Analysis of julia..........................................56
6. Conclusions........................................................60
Appendix A................................................................62
References................................................................89


FIGURES
Figure
2.1 Image tire........................................................9
2.2 Image finger.........................................................9
3.1 Analysis Filter Bank..................................................11
3.2 Scaling and Dilation..................................................13
3.3 Wavelet Filter Bank...................................................16
3.4 Frequency vs. Time Wavelet Digraph....................................20
3.5 Simulated Three Tone Signal...........................................21
3.6 Two Dimensional Wavelet Filter Bank...............................22
3.7 Two Dimensional Wavelet Digraph.......................................23
3.8a Image wbarb.........................................................23
3.8b Wavelet Decomposition of Image wbarb................................23
3.9 Spectrum of Four Level Decomposition..................................28
3.10 Signal Reconstruction vs. 7n........................................ 29
4.1 Edge Determination....................................................31
4.2 Laplacian of Image woman............................................33
4.3 Cubic Filtering Block Diagram.........................................35
4.4 Modified Cubic Sharpening.............................................36
5.1 One Dimensional De-Noising Signal Set.................................41
5.2a Wavelet De-Noising of Blocks........................................43
5.2b Wavelet De-Noising of Heavisine.....................................43
5.2c Wavelet De-Noising of Bumps.........................................43
5.2d Wavelet De-Noising of Doppler.......................................43
5.3a Image tire..........................................................44
5.3b Image woman.........................................................44
5.3c Image julia.........................................................45
5.4a Original Image tire.................................................47
5.4b Original Image tire + Noise.........................................47
5.5a SNRI of Row Wise De-Noising tire....................................47
5.5b Row De-Noised Image tire............................................47
5.6a SNRI of Column Wise De-Noising tire.................................48
5.6b Column De-Noised Image tire.........................................48
5.7a SNRI of Row/Column De-Noising tire..................................48
5.7b Row/Column De-Noised Image tire...............................48
5.8a SNRI of Non-Separable De-Noising................................49
5.8b Non-Separable De-Noised Image tire.........................49
5.9a SNRI of Modified Cubic Filtering De-Noising.....................49
5.9b Modified Cubic Filtering De-Noised Image tire.................49
5.10a Original Image woman..........................................52
5.10b Original Image woman + Noise..................................52
5.11a SNRI of Row Wise De-Noising woman.............................52
5.11b Row De-Noised Image woman.....................................52
5.12a SNRI of Column Wise De-Noising woman..........................53
5.12b Column De-Noised Image woman..................................53
5.13a SNRI of Row/Column De-Noising woman.......................53
5.13b Row/Column De-Noised Image woman..........................53
5.14a SNRI of Non-Separable De-Noising..........................54
5.14b Non-Separable De-Noised Image woman...........................54
5.15a SNRI of Modified Cubic Filtering De-Noising.....................54
5.15b Modified Cubic Filtering De-Noised Image woman..................54
5.16a Original Image julia............................................56
5.16b Original Image julia + Noise....................................56
5.17a SNRI of Row Wise De-Noising julia...............................56
5.17b Row De-Noised Image julia.......................................56
5.18a SNRI of Column Wise De-Noising julia............................57
5.18b Column De-Noised Image julia....................................57
5.19a SNRI of Row/Column De-Noising julia.......................57
5.19b Row/Column De-Noised Image julia..........................57
5.20a SNRI of Non-Separable De-Noising..................................58
5.20b Non-Separable De-Noised Image julia.............................58
5.21a SNRI of Modified Cubic Filtering De-Noising.......................58
5.21b Modified Cubic Filtering De-Noised Image julia..................58


TABLES
Table
2.1 BVDV Comparison..................................................... 9
3.1 RiskShrink Coefficients.............................................25
5.1 Performance Summary for Image tire................................50
5.2 Performance Summary for Image woman...............................55
5.3 Performance Summary for Image julia...............................59
6.1 Performance Summary with Ranking....................................61


1.
Introduction
The focus of image filtering and restoration is to improve the quality of an
image that has been degraded via some systematic or random process. For a signal
corrupted by Gaussian or impulsive noise, the goal is to reconstruct from the noisy
image a representation of the true image. Since all restoration techniques produce an
approximation of the original image, an element of uncertainty is ever present. Some
methods of restoration develop a formal bound on reconstructing the image. For
example, wavelet de-noising minimizes the error in a least squares sense, providing
some mathematical basis for the quality of reconstruction. Other methods rely on data
content alone and leave it to the viewer to judge restoration quality.
In a broader sense, image restoration is a subset in the diverse field of image
processing. It is closely related to data compression and image enhancement. The
intent of image enhancement is to accentuate features in an image or portion of an
image. This can potentially degrade other aspects of the image, but that may be
acceptable. Image enhancement presupposes the existence of a feature, a step toward
pattern recognition, which is beyond the scope of this thesis but is a consideration
when applying enhancement. As is the case with image restoration,
enhancement can be done spatially or in other domains, most notably the frequency


domain via the Fourier transform. This has the advantage of efficient
implementations.
Data compression is tied to image enhancement and restoration since many
applications of data compression are lossy and introduce errors in the decompression
process. Although lossless techniques are available, the advantage to lossy algorithms
lies in larger compression ratios. If an imperfect reconstruction can be tolerated, then
lossy compression algorithms substantially reduce file sizes.
The goal of this thesis is to examine a relatively new method of image
restoration by application of the wavelet transform. Six chapters comprise this
thesis; a brief synopsis follows.
Chapter 2 introduces noise and its effects on m-dimensional signals.
Alternative methods of noise compensation are briefly mentioned and references
given. Although noise is easily distinguished by the human senses, it is harder to
quantify numerically. In Chapter 2, some quantitative measures of noise are
described and examples given to reinforce the concepts.
In Chapter 3 the wavelet transform is explained in a heuristic fashion. The
central tenet to wavelet theory is multi-resolution which is demonstrated within the
heuristic model via the Haar wavelet. The Haar wavelet is nothing more than a
highpass ( (x_i − x_{i−1})/2 ) and lowpass ( (x_i + x_{i−1})/2 ) filter which conveniently fits the
wavelet criteria. Other wavelet concepts are given and two examples provided. The


first example decomposes a one dimensional signal with the wavelet transform into a
time vs. frequency digraph exemplifying the virtues of multi-resolution. The second
example uses the wavelet transform to decompose an image into its constituent
highpass and lowpass components. With the fundamentals of wavelet transforms
exposed, Chapter 3 moves on to de-noising a multi-dimensional signal with wavelet
shrinkage, that is, selective adjustment of wavelet coefficient magnitudes to reject
noise in the signal.
A more traditional approach to image restoration is with non-linear filtering.
Chapter 4 presents a filtering scheme centered on Laplacian filtering of an image.
Modifications to the basic Laplacian filter are made to reduce the filter's sensitivity to
noise. To highlight the advantages of the modified Laplacian filter some example
images processed with the Laplacian based methods are shown.
In Chapter 5 computer simulations, implemented with MATLAB, take an
image, add Gaussian noise and then process the noisy image with the algorithms from
Chapters 3 and 4. Results are presented in tabular form and interpreted. Note that all
images used in this thesis are available in the MATLAB Wavelet Toolbox.
Finally, Chapter 6 concludes the thesis with some remarks about the tradeoffs
between wavelet de-noising and Laplacian filtering.


2.
Noise Measurement Techniques
The ability to continuously measure the state of any system, actively,
passively, remotely or locally, is inevitably corrupted by noise. The degree to which
this corrupts meaningful data varies from insignificant to catastrophic. In between
these two extremes lies the challenge of removing noise from sensed data to produce
an accurate measurement of the observed system. The dilemma does not end here;
not all noise is the same. Impulsive, white, colored, shot and thermal noise comprise
many of the common types encountered. Additionally, noise characteristics could be
altered as one steps through the system. Any band limiting process modifies true
white noise (infinite frequency content) to colored noise (finite frequency content).
Clearly, the corruptive influence of noise is not trivial and has prompted copious
amounts of research spanning many decades.
Classical techniques geared towards either noise reduction or extraction of
sensor measurements from noisy data includes:
1. Kalman filtering: strictly speaking not a noise reduction technique, Kalman
filtering [1] is a state estimator. It estimates the states of a system from the system
inputs and outputs.
2. Matched bandwidth filtering: a classical noise reduction technique for 1-D
signals that achieves an SNR processing gain by narrowband filtering around a
spectral feature of interest.
3. Frame averaging: a conceptually simple 2-D technique whereby an SNR
processing gain is achieved by averaging successive video frames [2].
4. Mean/Median filtering: a kernel driven technique for 2-D data that determines,
for data within an n x n window, whether the center pixel should be ascribed a
mean or median value based on the window statistics [3].
5. Segmentation: a method of segmenting the image into regions of similar
structure. Simple implementations use thresholding criteria for segmentation,
followed by filtering and then reconstruction of the image [4].
6. Histogram equalization: to make more efficient use of under-utilized gray
scales, the distribution of the image histogram is reorganized. A typical application
of histogram equalization would result in a straight line distribution of pixels [2].
7. Homomorphic processing: an image enhancement technique which
simultaneously compresses dynamic range and increases image contrast [5].
Not intended to be an exhaustive compendium of noise reduction algorithms, these
examples illustrate the variety of methods and approaches one could use to process
the data at hand.


2.1 Signal to Noise Ratio
Based on the qualitative nature of noise just described, a logical question is:
how can noise be quantitatively characterized? One measure, the Signal to Noise
Ratio, or SNR, expressed in decibels (dB), is defined as

SNR = 10 log10( σ²_Signal / σ²_Noise ).   (2.1)
An observation about SNR concerns its interpretation in 1-D and 2-D
applications. One dimensional signals, e.g., communications, provide a more
intuitive understanding of SNR. Typically, SNRs are well controlled as a signal
propagates along a constrained and well defined path. To begin with, signals of this
type are generated and carried in an electronic medium facilitating control and
measurement with finely tuned electronic devices. Corruptive influences are well
understood and compensated for as a signal jumps from one relay station to the next.
Because SNRs can be accurately measured, signals are maintained at high SNRs,
allowing incredibly low bit error rates (10⁻⁶ or less) as a result.
In contrast, SNR characterization for two dimensional signals is much more
subjective. The medium is not necessarily electronic; typically pixels are the unit of
measurement and absolute measurements are not always known. For images the
SNR is measured in dB and is defined relative to the original and noisy
representations as such:


SNR = 10 log10( Σ_j Σ_k I(j,k)² / Σ_j Σ_k (I(j,k) − I_n(j,k))² )   (2.2)

In this expression I is the original image and I_n the noisy version of I.
This definition of SNR is extended to gauge how well a given restoration
technique has improved the image from a corrupted version of itself. In this sense the
SNR Improvement (SNRI) is computed from I, I_n and I_r, where I_r is the output of the
restoration process. With these, the SNRI is given by:
SNRI = 10 log10( Σ_j Σ_k (I − I_n)² / Σ_j Σ_k (I − I_r)² )   (2.3)
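As a concrete illustration, the two measures above map directly onto pixel data. The following Python sketch (the simulations in this thesis were done in MATLAB) flattens the double sums over j and k into a single sum over a pixel list; the function names snr_db and snri_db are illustrative, not from the thesis:

```python
import math

def snr_db(ref, test):
    """SNR in dB per (2.2): reference signal energy over error energy."""
    sig = sum(r * r for r in ref)
    err = sum((r - t) ** 2 for r, t in zip(ref, test))
    return 10 * math.log10(sig / err)

def snri_db(original, noisy, restored):
    """SNR Improvement in dB per (2.3): noisy-error energy over
    restored-error energy; positive when restoration helped."""
    e_noisy = sum((o - n) ** 2 for o, n in zip(original, noisy))
    e_rest = sum((o - r) ** 2 for o, r in zip(original, restored))
    return 10 * math.log10(e_noisy / e_rest)
```

For an image, the rows are simply concatenated into one list before the call; the double sum in (2.2) and (2.3) is unchanged by this flattening.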
2.2 Background Variance Detail Variance
While the SNR measure is universally accepted and straightforward to
compute, it does have a major deficiency. Specifically, it provides no information
about local features within an image. To characterize local features in an image a
technique known as the Background Variance Detail Variance (BVDV) can be
implemented. BVDV categorizes pixels into background or detail regions according
to their local statistics. As part of the BVDV process two accumulation registers are
set up for the background and detail measures. This algorithm works by computing


the variance in an n x n window within the image and comparing the variance value to
an arbitrary threshold. If the variance exceeds the threshold the center pixel of the
kernel is considered as belonging to a detail region. Its coordinates are kept along
with all other detail pixel coordinates and the variance added to the detail register.
Kernel variances beneath the threshold are handled analogously, with the center
pixel assigned to the background region. This process is applied to
every pixel in the image. In the end, the background and detail registers are divided
by the respective number of elements comprising the register sum to yield an average
BV and DV measure.
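The accumulation procedure just described can be sketched in Python as follows; the window size and threshold are arbitrary parameters, as in the text, and the function name bvdv is illustrative:

```python
def bvdv(img, win=3, thresh=100.0):
    """Classify each interior pixel as background or detail by the
    variance of its win x win neighborhood, then average the
    accumulated variances per class (BV, DV)."""
    h = win // 2
    rows, cols = len(img), len(img[0])
    sums = {"bv": 0.0, "dv": 0.0}
    counts = {"bv": 0, "dv": 0}
    for r in range(h, rows - h):
        for c in range(h, cols - h):
            window = [img[r + dr][c + dc]
                      for dr in range(-h, h + 1) for dc in range(-h, h + 1)]
            mean = sum(window) / len(window)
            var = sum((p - mean) ** 2 for p in window) / len(window)
            key = "dv" if var > thresh else "bv"   # threshold test
            sums[key] += var
            counts[key] += 1
    return (sums["bv"] / counts["bv"] if counts["bv"] else 0.0,
            sums["dv"] / counts["dv"] if counts["dv"] else 0.0)
```

A flat image yields (0, 0), while an image containing a sharp edge produces a large DV relative to BV, consistent with the intuition developed below.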
To exemplify, two images are shown below. The first image, Figure 2.1, is of
a tire; it is characterized by expansive regions of uniform pixelation and well defined
edges or areas of sharp contrast. The second image, Figure 2.2, is a section of
fingerprint. Relative to Figure 2.1 its features are much less distinct, there is more
texture in the image and the edges, while visually apparent, are much smoother than
the former. Applying the BVDV algorithm provides more information about the
content of these images than does SNR. Intuitively, images with uniform regions
would have a lower background variance than one with a textured appearance.
Conversely, features with well defined boundaries and sharp contrast would exhibit a
large detail variance. The BVDV numbers for these two images are given in Table
2.1. Consistent with the intuitive notion discussed above, the BVDV values for these
images fit this paradigm.


Figure 2.1 Image tire
Figure 2.2 Image finger
          Tire               Fingerprint
       BV      DV         BV      DV
       3.9     413.9      8.4     157.2

Table 2.1 BVDV Comparison


3.
Wavelet Theory
Transformation of a time domain signal into its fundamental temporal and
frequency components is achieved with the wavelet transform. At the heart of
wavelet theory is multi-resolution whereby a signal is decomposed at varying scales.
Scales are on the order of 2^j, where j is typically no greater than 5 or 6 but could,
theoretically, be infinite. The wavelet implementation discussed in this thesis is built
around a cascaded filter bank and the interaction of the lowpass and highpass filters
comprising the wavelet filter bank. Central to construction of a successful wavelet
filter bank is the concept of perfect reconstruction, the ability to reconstruct a
distortionless, un-aliased version of the input signal.
In the following sections the dilation and wavelet equations are given. To
solidify the meaning of dilation and translation, the Haar wavelet is introduced to
graphically illustrate the interplay between scale and dilation/translation. To show
that the wavelet transform really works, a contrived example of a multi-tone signal
and an image are transformed and the wavelet coefficients visualized. Finally, signal
de-noising is presented for removal of Gaussian noise.
3.1 The Wavelet Transform
A conceptual approach to the Wavelet transform is diagrammed below in
Figure 3.1. It shows a block diagram of a cascaded filter bank upon which this


heuristic of the Wavelet transform is built [6,7]. This figure will be subsequently
amended but for now it suffices to diagram the analysis portion of the complete
Figure 3.1 Analysis Filter Bank
filter bank. Each level of decomposition in this figure results in two filtering
operations on the input. At the first level (by convention the first level of
decomposition starts at J and progresses downward to 1), j = J, x is lowpass and
highpass filtered by H0 and H1, respectively. After decimating the filter outputs by
two, the highpass output d_j is stored for later use and the lowpass outputs are passed
into the next level of filtering, j = J − 1. This iterative process of filtering and
decimation will be termed decomposition. If the process stops after J iterations then
it can be said that x has undergone a J level decomposition. The purpose is to
decompose x in time and frequency to isolate the details of the signal. This is the crux
of multi-resolution and is achieved with the dilation equation φ(t) defined as:


Dilation Equation

φ(t) = 2 Σ_k h0(k) φ(2t − k)   (3.1)
A distinguishing characteristic of φ(t) is its bounded nature: it is supported (non-
zero) over the interval [0, N] and zero elsewhere. This is because φ(t) is a
simultaneous two time scale function. Without the two time scale feature φ(t)
would have an exponential solution and would be excluded from multi-resolution
analysis. A final note about φ(t) involves the influence of j and k: j results in
dilation of the time scale and k translates φ(t).
It can be seen from Figure 3.1 that the highpass branch is associated with the
wavelet equation w(t). Through an appropriate choice for the dilation equation
φ(t), the decomposition of x produces coefficients d_j whose scale is proportional to
2^j. For future reference the wavelet equation is defined as:

Wavelet Equation

w(t) = 2 Σ_k h1(k) φ(2t − k)   (3.2)

It is important to understand the significance of the translation and dilation
operations. Multi-resolution is fundamental to the wavelet paradigm and is a direct
result of (3.1) and (3.2). To illustrate the point, consider the box function


diagrammed in Figure 3.2. It shows φ(t) in its basic form (j = 0), plus the
dilation of φ(t) (j = 1) and its corresponding wavelet equation. Of interest in this
figure is the support of the box function, non-zero on [0, 1) and zero
elsewhere. From (3.1),

φ(t) = 2[h0(0)φ(2t) + h0(1)φ(2t − 1)];   (3.3)

assume for argument's sake the lowpass filter h0(k) to be [½, ½]; then (3.3) reduces
to


φ(t) = φ(2t) + φ(2t − 1)   (3.4)

which confirms the premise in (3.1) as well as the graphical model in Figure 3.2.
Similarly, from (3.2) with an h1(k) of [½, −½],

w(t) = 2[h1(0)φ(2t) + h1(1)φ(2t − 1)] = φ(2t) − φ(2t − 1)   (3.5)
again confirming the premise of (3.2) and Figure 3.2. To confirm the assumption of
the filtering effects of h0 and h1, take the Z transform of h0 and h1:

H0(z) = ½(1 + z⁻¹)   (3.6)

H1(z) = ½(1 − z⁻¹)   (3.7)

Clearly, the net effect of the former is to average x_i and x_{i−1} while the latter
differences x_i and x_{i−1}. This is nothing more than a set of lowpass and highpass
filters. Another observation to make comes from writing h0 and h1 in vector
notation:

h0 = [ 1/√2    1/√2 ]
h1 = [ 1/√2   −1/√2 ]   (3.8)

Taking the dot product of these two vectors yields a null result, implying h0 ⊥ h1,
which satisfies another criterion for the wavelet filter bank. The choice of a box
function for φ(t) is not coincidental. It turns out to be the Haar wavelet, a simple but
useful wavelet for demonstrating the virtues of scaling and dilation.
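One Haar analysis/synthesis step can be sketched in Python using the orthonormal 1/√2 taps (the thesis itself used MATLAB; function names here are illustrative):

```python
S = 2 ** -0.5  # orthonormal Haar tap, 1/sqrt(2)

def haar_step(x):
    """One analysis level: lowpass averages (a) and highpass
    differences (d), each decimated by two. len(x) must be even."""
    a = [S * (x[i] + x[i + 1]) for i in range(0, len(x), 2)]
    d = [S * (x[i] - x[i + 1]) for i in range(0, len(x), 2)]
    return a, d

def haar_inverse(a, d):
    """Synthesis bank: perfect reconstruction of x from (a, d)."""
    x = []
    for ai, di in zip(a, d):
        x.append(S * (ai + di))
        x.append(S * (ai - di))
    return x
```

Iterating haar_step on the returned approximation while keeping each d gives a J level decomposition as in Figure 3.1; the orthonormal scaling also preserves signal energy across the transform.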


The question now is, how do we define φ(t) in general? This is answered by
applying the continuous Fourier transform to both sides of (3.1):

F{φ(t)} = F{ 2 Σ_k h0(k) φ(2t − k) }   (3.9)

Φ(ω) = 2 Σ_k h0(k) ∫ φ(2t − k) e^(−jωt) dt   (3.10a)

Φ(ω) = Σ_k h0(k) e^(−jωk/2) ∫ φ(u) e^(−jωu/2) du   (3.10b)

With application of variable substitution and some more reduction, the final form of
Φ(ω) is given by

Φ(ω) = H0(ω/2) Φ(ω/2).   (3.11)
This is an interesting result in that it defines a recursive process for solving Φ(ω)!
By repeating the same steps,

Φ(ω) = H0(ω/2) H0(ω/4) Φ(ω/4)   (3.12)

= H0(ω/2) H0(ω/4) H0(ω/8) Φ(ω/8)   (3.12a)

= Π_{j=1}^{J} H0(ω/2^j) Φ(ω/2^J)   (3.12b)

= Π_{j=1}^{∞} H0(ω/2^j)   (3.12c)
So φ(t) is derived from an infinite product of sums. Clearly, the filter
coefficients h0 are fundamental to the development of φ(t), warranting examination
of h0 and what its properties might be. From Figure 3.1 it is apparent that h0 and
h1 are filtering operations, but what are the characteristics of these filters? If and how
are they related? x(t) enters from the left, is acted upon in some fashion (filtering)
to produce detail coefficients d_j and an input stream for the next filtering level, but
then what? To help answer these questions Figure 3.3 shows an amended Figure 3.1
which introduces a synthesis filter bank receiving the detail and approximation
coefficients, d_j and a_j respectively, from the analysis portion of the filter bank.
Figure 3.3 Wavelet Filter Bank


The complete set of filters is comprised of h0, h1, f0 and f1. It also shows
the reconstruction x̂ from passing x through the filter bank. Ideally, x̂ = x; that
is, x̂ is nothing more than a delayed version of x. This is called perfect
reconstruction, and x̂ = x when:
1. There is no aliasing of X(ω),
2. There is no distortion of x̂.
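The two perfect reconstruction conditions can be verified numerically for the Haar filters by multiplying filter polynomials. This Python sketch (illustrative, not from the thesis) assumes the orthonormal 1/√2 scaling, under which the no-distortion product comes out to exactly 2z⁻¹:

```python
def conv(a, b):
    """Coefficients of the polynomial product A(z)B(z)."""
    out = [0.0] * (len(a) + len(b) - 1)
    for i, ai in enumerate(a):
        for j, bj in enumerate(b):
            out[i + j] += ai * bj
    return out

def negate_z(h):
    """Coefficients of H(-z): odd-indexed taps change sign."""
    return [(-1) ** k * c for k, c in enumerate(h)]

s = 2 ** -0.5
h0 = [s, s]    # Haar lowpass (orthonormal scaling assumed)
h1 = [s, -s]   # Haar highpass

# Synthesis filters from the alias-cancellation relations
f0 = negate_z(h1)                 # F0(z) = H1(-z)
f1 = [-c for c in negate_z(h0)]   # F1(z) = -H0(-z)

# No-distortion condition: F0(z)H0(z) + F1(z)H1(z) = 2 z^{-l}
no_distortion = [a + b for a, b in zip(conv(f0, h0), conv(f1, h1))]

# Alias-cancellation condition: F0(z)H0(-z) + F1(z)H1(-z) = 0
no_alias = [a + b for a, b in
            zip(conv(f0, negate_z(h0)), conv(f1, negate_z(h1)))]
```

Here no_distortion evaluates to a pure unit delay scaled by 2, and no_alias to the zero polynomial, confirming both conditions for the Haar bank.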
The focus is now on the relationship between h0, h1, f0 and f1. Consider
the Z transform of h0,

H0(z) = Σ_k h0(k) z^(−k);   (3.13)
the Z transforms of the remaining filters are similarly defined. Perfect reconstruction
is guaranteed when the following criteria are met:
Perfect Reconstruction
F0(z)H0(z) + F1(z)H1(z) = 2z^(−l)   (3.14a)

F0(z)H0(−z) + F1(z)H1(−z) = 0   (3.14b)

where l is the delay of the filter. From these equations, it can be shown that alias
cancellation occurs when

F0(z) = H1(−z)   (3.15a)


and
F1(z) = −H0(−z).   (3.15b)
Finally, to complete the analysis, from (3.14a) and (3.14b) define the product filters
P0 and P1:

P0(z) = F0(z)H0(z)   (3.16a)

P1(z) = F1(z)H1(z)   (3.16b)

When defining F0 and F1 from (3.15a) and (3.15b), a fallout of the product filters is
the relation between P0 and P1, namely,

P1(z) = −P0(−z).   (3.17)

The relevance here is that P0(z) is in a natural form for spectral factorization [8] as
well as satisfying the perfect reconstruction criterion P0(z) − P0(−z) = 2z^(−l).
A final note about choosing H0 was put forth by Smith and Barnwell [9],
providing guidance on how H1 (and hence F0 and F1) can be found from H0. For an
H0 correctly chosen to satisfy the above criteria, H1, F0 and F1 are directly obtained
from (3.15a) and (3.15b). The bonus offered by the Smith-Barnwell criteria of
(3.15a) and (3.15b) is an orthogonal filter bank, one of the fundamental concepts
defining a successful wavelet based filter bank.
Referring back to Figure 3.1, the output of the wavelet equation (3.2) is a set
of wavelet coefficients d_j for each reconstruction level 1 ≤ j ≤ J. These


coefficients comprise the time and frequency content of the input signal. A common
method of analysis is plotting the coefficient magnitude versus index. Another more
intuitive method constructs a layered matrix shown in Figure 3.4. In this way each
coefficient assumes a pixel like character where the amplitude is proportional to the
magnitude of the coefficient. This graph shows the inverse relationship between time
and frequency; increased resolution in one dimension invariably leads to decreased
resolution in the other. The advantage, though, lies in multi-resolution. The
corresponding spectrum of a Fourier transformed signal would indicate the presence
of signal energy and frequency content but it would give no temporal identification.
Observe that the number of wavelet coefficients at a given level of reconstruction is
roughly equal to the size of the dataset divided by 2 raised to the power of the
reconstruction level, i.e., approximately n/2^j coefficients at level j.
To finalize the foregoing discussion, two simple but illustrative examples will
be given. The first example shows one dimensional decomposition and the second
two dimensional decomposition. In Figure 3.5, a signal s(t) with three separate tones
and a low level of Gaussian noise added is shown. A five level wavelet transform of
s(t) is taken and shown immediately below the graph of s(t). In the wavelet domain
the three tones are seen both temporally and in frequency (the vertical axis is
reconstruction level). As can be seen the three tones correlate in time to the graph of


s(t). Note that there is a temporal delay in the wavelet domain due to the delay
through the filters. For the two dimensional case, Figure 3.1 must be extended to two
dimensions. Conceptually, the wavelet transform still performs the same
decomposition at varying scales except now the input is an image and there are four
outputs from the filter bank as shown in Figure 3.6. This figure shows four branches
through the filter bank resulting in four filtering operations on the image:


1. Lowpass-Lowpass (LL): these coefficients are passed on to the next level of the
wavelet transform.
2. Lowpass-Highpass (LH): these coefficients indicate horizontal details in the
image.
3. Highpass-Lowpass (HL): these coefficients indicate vertical details in the image.
4. Highpass-Highpass (HH): these coefficients indicate diagonal details in the
image.
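One level of the separable two dimensional decomposition just described can be sketched in Python (Haar filters assumed for brevity; the function name and the LL/LH/HL/HH orientation naming follow the convention above, though conventions vary between implementations):

```python
def haar2d_level(img):
    """One separable 2-D Haar analysis level: filter and decimate the
    rows, then the columns, yielding the LL, LH, HL, HH quadrants."""
    s = 2 ** -0.5

    def step(x):
        a = [s * (x[i] + x[i + 1]) for i in range(0, len(x), 2)]
        d = [s * (x[i] - x[i + 1]) for i in range(0, len(x), 2)]
        return a, d

    def transpose(m):
        return [list(c) for c in zip(*m)]

    lo, hi = [], []
    for row in img:                   # row-wise (horizontal) pass
        a, d = step(row)
        lo.append(a)
        hi.append(d)

    def column_pass(half):            # column-wise (vertical) pass
        ca, cd = [], []
        for col in transpose(half):
            a, d = step(col)
            ca.append(a)
            cd.append(d)
        return transpose(ca), transpose(cd)

    ll, lh = column_pass(lo)
    hl, hh = column_pass(hi)
    return ll, lh, hl, hh
```

On a constant image only LL is non-zero, while a horizontal edge excites the LH quadrant, matching the detail-orientation description in the list above.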
Similar to Figure 3.4, the convention for visualizing the four sets of coefficients is
Figure 3.5 Simulated Three Tone Signal
shown below in Figure 3.7. The varying scales are seen in Figure 3.7 as the


Figure 3.6 Two Dimensional Wavelet Filter Bank
reconstruction progresses from level to level. The high frequency components are
seen in level 3, the medium range components in level two and, finally, the lowest
frequency components in level 1. Also shown in level one are the lowpass lowpass
coefficients known as the approximation coefficients. Using the image wbarb,
shown in Figure 3.8a, as the input to the wavelet transform results in the
decomposition seen in Figure 3.8b. Features of interest can be seen in the respective
filter outputs. This image has obvious vertical, diagonal and horizontal content that is
seen in the HL, HH and LH regions respectively of levels 1 and 2.


Figure 3.7 Two Dimensional Wavelet Digraph
Figure 3.8a Image wbarb
Figure 3.8b Wavelet Decomposition of Image wbarb


3.2 Wavelet De-Noising
A consequence of transforming an evenly sampled, time varying function into
the wavelet domain is suppression of noise via judicious shrinkage of certain wavelet
coefficients. The theory was put forth by Donoho and Johnstone in [10,11] as a way
to reject Gaussian noise resulting in a reconstruction that is at least as smooth, in a
probabilistic sense, as the true signal. Donoho and Johnstone develop the model as
follows. Let

$$y_i = f(t_i) + \sigma e_i, \qquad i = 1, \ldots, n,$$

be the sample set with $e_i$ independently distributed Gaussian noise of zero mean and unit variance. The estimate $\hat{f}$ is chosen so that the expected squared error between $\hat{f}$ and $f$ is minimized. Thus, de-noising with wavelet shrinkage is a mean squared error approach for signal estimation.

The minutiae of wavelet shrinkage are presented in detail in [10]. In essence, the authors develop their theory around a probabilistic model in which, through the appropriate choice of a threshold, signal reconstruction and noise suppression are optimized in a mean squared error sense. The probabilistic model does constrain this statement to the condition that $\hat{f}$ is at least as smooth as $f$ with high probability. Admittedly, this is a nebulous statement. In [11] it is more rigorously defined:

equivalent smoothness with high probability means $\hat{f}$ cannot oscillate more than $f$. (3.17)
The cornerstone to wavelet shrinkage is selection of the right threshold. Two
methods proposed by Donoho and Johnstone are Riskshrink and Visushrink in [10].
A brief summary follows:
Riskshrink Threshold values for the Riskshrink technique were presented by
the authors in tabulated format for specific values of n, the number of points
in f. They are repeated below in Table 3.1 and are valid only for the given
values of n:
Riskshrink Thresholds
n        λ          n        λ
64       1.474      4096     2.594
128      1.669      8192     2.773
256      1.860      16384    2.952
512      2.048      32768    3.131
1024     2.232      65536    3.310
2048     2.414
Table 3.1 Riskshrink Coefficients
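Because the Riskshrink thresholds are tabulated rather than computed, a lookup keyed on n is the natural implementation. A Python sketch of Table 3.1:

```python
# Tabulated Riskshrink thresholds of Table 3.1, keyed by the number of
# points n; the table is valid only for these values of n.
RISKSHRINK = {64: 1.474, 128: 1.669, 256: 1.860, 512: 2.048,
              1024: 2.232, 2048: 2.414, 4096: 2.594, 8192: 2.773,
              16384: 2.952, 32768: 3.131, 65536: 3.310}

def riskshrink_threshold(n):
    """Return the tabulated threshold, or raise for an untabulated n."""
    if n not in RISKSHRINK:
        raise ValueError("Riskshrink thresholds are tabulated only for "
                         "n in %s" % sorted(RISKSHRINK))
    return RISKSHRINK[n]
```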
Visushrink: A more flexible technique for choosing a threshold since it is not limited to specific values of n. The threshold is given by

$$\lambda = \sigma \,(2 \ln(n))^{1/2} \tag{3.18}$$

where σ is equal to the median absolute deviation of the wavelet coefficients at the finest scale of decomposition divided by 0.6745.
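A sketch of equation (3.18) in Python, with σ estimated from the finest-scale detail coefficients as described. One assumption is flagged in the comment: the median absolute deviation is taken about zero, a common convention since detail coefficients are nominally zero-mean.

```python
import numpy as np

def visushrink_threshold(finest_details):
    """Visushrink threshold of equation (3.18): sigma * sqrt(2 ln n).

    sigma is the median absolute deviation of the finest-scale wavelet
    coefficients divided by 0.6745; the deviation is measured about
    zero (an assumption, reasonable for zero-mean detail coefficients).
    """
    d = np.asarray(finest_details, dtype=float)
    sigma = np.median(np.abs(d)) / 0.6745
    return sigma * np.sqrt(2.0 * np.log(d.size))
```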
Thresholding the wavelet coefficients is the next step after selecting the thresholding technique. Two methods are available to do this: soft or hard thresholding. Hard thresholding keeps all coefficients above the threshold intact and sets coefficients below the threshold to zero. Soft thresholding also sets coefficients below the threshold to zero, but coefficients above the threshold are reduced in value by the threshold, hence the term wavelet shrinkage. The net effect is to remove a bias from legitimate wavelet coefficients due to Gaussian noise. Accordingly, the two thresholding techniques are described by

$$\eta_H(w) = \begin{cases} w, & |w| > \lambda \\ 0, & |w| \le \lambda \end{cases} \qquad \text{hard thresholding;} \tag{3.19}$$

$$\eta_S(w) = \begin{cases} \operatorname{sgn}(w)\,(|w| - \lambda), & |w| > \lambda \\ 0, & |w| \le \lambda \end{cases} \qquad \text{soft thresholding.} \tag{3.20}$$
The choice of which method to use, hard or soft thresholding, is not well documented.
Consistent with most literature contributing to this thesis, soft thresholding will be
used for computer simulations.
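Both rules, (3.19) and (3.20), are one-line operations on the coefficient vector. A Python sketch (the thesis's own simulations use MATLAB):

```python
import numpy as np

def hard_threshold(w, t):
    """Equation (3.19): keep coefficients with |w| > t, zero the rest."""
    w = np.asarray(w, dtype=float)
    return np.where(np.abs(w) > t, w, 0.0)

def soft_threshold(w, t):
    """Equation (3.20): zero below t, shrink survivors toward zero by t."""
    w = np.asarray(w, dtype=float)
    return np.sign(w) * np.maximum(np.abs(w) - t, 0.0)
```

Note how soft thresholding leaves no jump at the threshold boundary, which is one reason it tends to produce smoother reconstructions.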
The final step in the wavelet shrinkage algorithm concerns choosing the appropriate decomposition level at which to start the thresholding process. As with threshold selection, there are no clear criteria from which a choice can be made. It is also true that the best starting level for one image may not be the best choice for
another image. This can be explained rather heuristically. Consider the spectra of an arbitrary signal shown in Figure 3.9. The spectrum of the original signal x is shown along with the spectra from a four level decomposition of x. Generally speaking, most of the energy content of x lies in the lower end of the frequency spectrum. This corresponds to low contrast regions of x; conversely, higher frequencies correspond to edges and high contrast artifacts in x. So, based on the content of x, thresholding at higher decomposition levels affects the details contained in the reconstructed x. Conceptually speaking, consider a signal, x_1, with a high DV measure implying regions of high contrast (high frequency content). Consider a second signal, x_2, with a relatively low DV measure implying regions of moderate contrast (mid-range frequency content) compared to x_1. If both signals are de-noised with a four level decomposition and thresholding of level 4, then one would expect the reconstruction of x_2 to be better than that of x_1. In this sense, a better reconstruction means the DV measure of the reconstructed x_2 is closer to its original than that of the reconstructed x_1. This is attributed to manipulation of the level 4 detail coefficients, to which x_1 contributes more than x_2. To illustrate the effects of thresholding at various levels, Figure 3.10 shows a set of signals with a 7 dB SNR. Each signal underwent a 4 level decomposition followed by thresholding and reconstruction. The first plot
shows the original signal and the next four correspond to thresholding at level J_n = 4, 3, 2 and 1.
Figure 3.9 Spectrum of Four Level Decomposition (panels X(ω), X(ω/2), X(ω/4), X(ω/8), X(ω/16))
Figure 3.10 Signal Reconstruction vs. J_n (panels: original S(k) at SNR = 7 dB, followed by reconstructions thresholded at J = 4, 3, 2 and 1)
4. Polynomial Filters
The legacy of polynomial filters is rooted in an image enhancement technique for
edge detection. This chapter highlights the migration of gradient based edge detection
to Laplacian filters to polynomial filters. Functionally, polynomial filters are designed for image enhancement; however, they can also exhibit a side benefit of noise rejection.
4.1 Gradient Based Filtering
Of interest in image enhancement is accentuating details in the image without
a corresponding gain in background features or noise artifacts. Accentuating details is
akin to edge detection, a method of categorizing what is and is not an edge.
Typically, once an edge is found its pixels are artificially manipulated to produce a
sharper edge by increasing the contrast between edge and background pixels. As
usual, there is no hard rule defining what is an edge and where it begins. One method
[12] of edge determination is diagrammed in Figure 4.1. The advantage of this method is its reliance on image content at a given point x_0. For a two dimensional function f(x,y), an edge point is determined from the gradient,

$$\nabla f(x,y) = \left[ \frac{\partial f(x,y)}{\partial x}, \; \frac{\partial f(x,y)}{\partial y} \right]^T. \tag{4.1}$$
Figure 4.1 Edge Determination
For images with unit step size the above partials can be replaced with first differences,

$$\frac{\partial f(x,y)}{\partial x} \approx f(x,y) - f(x-1,y), \tag{4.2a}$$

$$\frac{\partial f(x,y)}{\partial y} \approx f(x,y) - f(x,y-1). \tag{4.2b}$$
It is noteworthy that the difference equations (4.2a) and (4.2b) are the same as
convolution with a filter h(x,y), the nature of which will soon be defined.
Another method of edge determination is evaluation of f(x,y) to determine if the first derivative is at a local maximum, meaning the second derivative passes through zero. Accordingly, x_0 is judged an edge point when

$$\nabla^2 f(x,y) = \frac{\partial^2 f(x,y)}{\partial x^2} + \frac{\partial^2 f(x,y)}{\partial y^2} = 0. \tag{4.3}$$

Similar to the first partials of f, the second order partials can be computed from difference equations:

$$\nabla^2 f(x,y) = 4f(x,y) - f(x+1,y) - f(x-1,y) - f(x,y+1) - f(x,y-1). \tag{4.4}$$

Viewing this as a convolution operation, the filter h(x,y) realizing $\nabla^2 f(x,y)$ will sense the presence of horizontal and vertical edges. It turns out that h(x,y) is a common filter referred to as a Laplacian with coefficients

$$L(x,y) = \begin{bmatrix} 0 & -1 & 0 \\ -1 & 4 & -1 \\ 0 & -1 & 0 \end{bmatrix}. \tag{4.5}$$
The disadvantage of Laplacian filters is their sensitivity to noise. Any zero
crossing is deemed an edge point leading to many false edges contaminating the edge
detected image, see Lim [12] for methods of reducing false edges. This is shown in
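The difference form (4.4) is easy to apply directly. A Python sketch (the kernel is that of equation (4.5); borders are simply left at zero for brevity):

```python
import numpy as np

# The plus-shaped Laplacian kernel of equation (4.5).
LAPLACIAN = np.array([[ 0, -1,  0],
                      [-1,  4, -1],
                      [ 0, -1,  0]], dtype=float)

def laplacian_filter(img):
    """Apply equation (4.4) to the interior pixels; the border stays zero."""
    img = np.asarray(img, dtype=float)
    out = np.zeros_like(img)
    out[1:-1, 1:-1] = (4.0 * img[1:-1, 1:-1]
                       - img[2:, 1:-1] - img[:-2, 1:-1]
                       - img[1:-1, 2:] - img[1:-1, :-2])
    return out
```

On a noiseless step edge the response is an antisymmetric pair of values either side of the edge, and the zero crossing between them marks the edge; once noise is added, spurious zero crossings appear throughout the image, which is exactly the sensitivity discussed above.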
Figure 4.2 where the Laplacian of woman is shown.
Figure 4.2 Laplacian of Image woman
4.2 Cubic Sharpening
To overcome the introduction of false edges, G. Ramponi proposed [13] a
method using a filter similar to the Laplacian for sharpening an image. In addition to
image sharpening, it demonstrates useful image restoration properties due to its
inherent noise suppression.
Underlying Ramponi's method is unsharp masking, a well known technique which operates by adding a high pass filtered version of the image onto itself. The drawback to unsharp masking is its vulnerability to noise created by the action of the high pass filter. The basic form of the unsharp masking filter is defined by

$$y_{j,k} = x_{j,k} + \lambda z_{j,k} \tag{4.6}$$
where $z_{j,k}$, the linear high pass filter, is defined as

$$z_{j,k} = (2x_{j,k} - x_{j+1,k} - x_{j-1,k}) + (2x_{j,k} - x_{j,k+1} - x_{j,k-1}) \tag{4.7}$$

and λ is a data dependent scaling factor judiciously selected by the viewer. Without modification the unsharp masking filter can increase the image's DV measure due to
its noise sensitivity. Ramponi mitigates this sensitivity by incorporating a quadratic term into $z_{j,k}$ as such:

$$z_{j,k} = (x_{j+1,k} - x_{j-1,k})^2 (2x_{j,k} - x_{j+1,k} - x_{j-1,k}) + (x_{j,k+1} - x_{j,k-1})^2 (2x_{j,k} - x_{j,k+1} - x_{j,k-1}). \tag{4.8}$$
Now, the output of his filtering scheme is the sum of the original image plus a scaled,
cubic high pass filtered version of the image. Ramponi calls the quadratic term an
edge sensor (the linear term keeps its identity as the high pass filter). In this scheme
the edge sensor acts like a switch. Uniform regions of x have a small output from the
edge sensor while contrasting regions have a much larger response from the edge
detector due to its non-linear behavior. The resulting effect of the cubic filter is to
alleviate noise perturbations while strongly responding to true image details. A
block diagram is shown in Figure 4.3. The relevance of X now becomes apparent
since the output of the cubic filter can far exceed the dynamic range of a grayscale
image. To deal with this a limiter is inserted into the filter block as shown. The choice of saturation limits is arbitrary, as is the value of λ. In [13] a saturation limit of 50000 was used with λ = 0.001.
Figure 4.3 Cubic Filtering Block Diagram
A last note about the cubic operator z: the Laplacian operator just described used a plus shaped pattern in the 3 x 3 window. A cross shaped window can also be used but should be scaled by 1/√2 due to the distance from the center pixel.
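Putting the pieces of this section together, the cubic sharpening path can be sketched as follows. This is a Python illustration under stated assumptions: the quadratic edge sensor multiplies the linear highpass term in each direction, the default `lam` and `sat` are the values quoted from [13], and borders are left unmodified for brevity.

```python
import numpy as np

def cubic_unsharp(img, lam=0.001, sat=50000.0):
    """Cubic unsharp masking sketch: y = x + lam * clip(z, -sat, sat).

    z multiplies a quadratic edge sensor by the linear highpass term in
    each direction (an assumed form of Ramponi's cubic operator); the
    default lam and sat are the values quoted from [13].
    """
    x = np.asarray(img, dtype=float)
    z = np.zeros_like(x)
    c = x[1:-1, 1:-1]                     # centre pixels
    up, dn = x[:-2, 1:-1], x[2:, 1:-1]    # vertical neighbours
    lf, rt = x[1:-1, :-2], x[1:-1, 2:]    # horizontal neighbours
    z[1:-1, 1:-1] = ((rt - lf) ** 2 * (2.0 * c - lf - rt)
                     + (dn - up) ** 2 * (2.0 * c - up - dn))
    return x + lam * np.clip(z, -sat, sat)
```

In uniform regions the edge sensor is near zero, so noise perturbations pass through nearly unchanged; across a genuine edge the sensor switches on and the contrast is increased.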
4.3 Modified Cubic Sharpening
In another paper by Ramponi [14], the modified unsharp masking algorithm
described in Section 4.2 is further refined to act simultaneously in four directions.
This is achieved by decomposing the 3 x 3 Laplacian into four 1x3 vectors
corresponding to the four orientations within a 3 x 3 window. After filtering with the
uni-directional Laplacians, another filtering operation is performed from which the
output determines correlation in the four orientations. To put this scheme into
context, a block diagram is shown in Figure 4.4.
Figure 4.4 Modified Cubic Sharpening
Of concern in this figure are the lowpass, uni-directional highpass and S
filters. The first of these, the lowpass filter, is a simple 3x3 kernel with the
following coefficients:
Lowpass Filter:

$$\begin{bmatrix} a & a & a \\ a & 1-8a & a \\ a & a & a \end{bmatrix} \tag{4.9}$$

In this filter, a controls the degree of smoothing administered by the kernel and is bounded by 0 < a < 0.1.
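As a small check of equation (4.9), the kernel can be constructed so that its nine coefficients always sum to one, which is why the centre tap is 1 - 8a. A Python sketch, with the bound on a taken from the text:

```python
import numpy as np

def smoothing_kernel(a):
    """3 x 3 lowpass kernel of equation (4.9); requires 0 < a < 0.1."""
    if not 0.0 < a < 0.1:
        raise ValueError("a must satisfy 0 < a < 0.1")
    k = np.full((3, 3), a)
    k[1, 1] = 1.0 - 8.0 * a   # centre tap keeps the coefficient sum at 1
    return k
```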
The W filters are the uni-directional Laplacian operators previously
mentioned. They function by sensing the presence of edges in a direction orthogonal
to the orientation of the filter; large outputs correspond to image edges, zero or near
zero outputs result from uniform regions. The filters are similar to those in Section
4.2 and are given by:

$$W_h = 2x_{j,k} - x_{j,k+1} - x_{j,k-1} \tag{4.10a}$$

$$W_v = 2x_{j,k} - x_{j+1,k} - x_{j-1,k} \tag{4.10b}$$

$$W_{45} = 2x_{j,k} - x_{j-1,k+1} - x_{j+1,k-1} \tag{4.10c}$$

$$W_{135} = 2x_{j,k} - x_{j+1,k+1} - x_{j-1,k-1}. \tag{4.10d}$$
The third, and final, filters of interest are the 3 x 3 S filters. Like the W filters
they are directional and act to determine the presence of correlated data in a direction
orthogonal to the orientation of the particular S filter. Each directional S filter is
applied to the corresponding directional W matrix where the correlation action is
manifested by seeking edges. A subtlety of the technique lies in rearranging each 3 x 3 W matrix into a 1 x 9 vector. This is Ramponi's method; presumably this is done for intuitive reasons. The discussion of how the S filters work will be more understandable by defining them first:

$$S_h = \tfrac{1}{3}\left[ w_v(5)|w_v(4)||w_v(6)| + |w_v(5)|\big( w_v(4)|w_v(6)| + |w_v(4)|\,w_v(6) \big) \right] \tag{4.11a}$$

$$S_v = \tfrac{1}{3}\left[ w_h(5)|w_h(2)||w_h(8)| + |w_h(5)|\big( w_h(2)|w_h(8)| + |w_h(2)|\,w_h(8) \big) \right] \tag{4.11b}$$

$$S_{45} = \tfrac{1}{3}\left[ w_{135}(5)|w_{135}(1)||w_{135}(9)| + |w_{135}(5)|\big( w_{135}(1)|w_{135}(9)| + |w_{135}(1)|\,w_{135}(9) \big) \right] \tag{4.11c}$$

$$S_{135} = \tfrac{1}{3}\left[ w_{45}(5)|w_{45}(3)||w_{45}(7)| + |w_{45}(5)|\big( w_{45}(3)|w_{45}(7)| + |w_{45}(3)|\,w_{45}(7) \big) \right] \tag{4.11d}$$
Each filter behaves in a curious way by outputting zeros except when the sign of each
W component is the same. In this way the non-linear behavior of each S filter
produces a strong response when correlated data in the form of edges are present in
the W matrices. Note that the response is from edge data orthogonal to the particular
S filter, so a large response from the horizontal S filter indicates the presence of a
vertical edge. This underscores the purpose of the modified cubic filter. In its basic
form the cubic filter responds strongly to large differences within the 3 x 3 data
kernel. The modified cubic filter also exhibits a strong response but only when the
directional content within the 3 x 3 data kernel is correlated, that is, adjacent cells
within the kernel are of the same sign for a given orientation.
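The sign-agreement behavior can be sketched for a single co-linear triple of W samples. This Python fragment is written under an assumption about the exact form of the S filters, namely that each combines the three co-linear samples as S = (1/3)[w_c|w_a||w_b| + |w_c|(w_a|w_b| + |w_a|w_b)]; the essential property it demonstrates is the one described above, a full-magnitude response when all three samples agree in sign and a reduced response otherwise.

```python
def s_response(w_left, w_center, w_right):
    """Correlation measure for three co-linear W samples (assumed form
    of the S filters): full magnitude when all three samples share a
    sign, a reduced output when the signs disagree."""
    return (w_center * abs(w_left) * abs(w_right)
            + abs(w_center) * (w_left * abs(w_right)
                               + abs(w_left) * w_right)) / 3.0
```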
5. Computer Simulations and Comparative Analysis
Wavelet de-noising is an active area of research, prompting the question: how does wavelet shrinkage compare to other noise reduction strategies? This chapter provides some insight into this question by comparing wavelet shrinkage to modified cubic sharpening. An initial evaluation of wavelet shrinkage is performed on a set of one dimensional signals commonly seen in Donoho's de-noising literature. This is followed by an analysis of image data where each algorithm is optimized for peak SNRI performance.
5.1 One Dimensional De-Noising Results
Although not a standard set of signals, equations (5.1a-d) are used throughout Donoho's research and are representative of typical phenomena seen in many signals:

(5.1a) Blocks (a piecewise constant signal)

(5.1b) Bumps (a sum of narrow peaks)

(5.1c) Heavisine: $f(t) = 4\sin(4\pi t) - \operatorname{sgn}(t - 0.3) - \operatorname{sgn}(0.72 - t)$

(5.1d) Doppler: $f(t) = [t(1-t)]^{1/2} \sin[2\pi(1+\varepsilon)/(t+\varepsilon)], \quad \varepsilon = 0.05$

From Figure 5.1 one can see these signals contain discontinuities, instantaneous jumps, low to high frequency content and non-periodic features.
Using this signal set as a basis, performance of wavelet shrinkage is measured by
adding Gaussian noise to the signal, taking the wavelet transform, soft thresholding
and then reconstructing the signal by taking the inverse wavelet transform. With this
approach SNRI can be measured since all three components needed for SNRI
computation are present, namely, the original, noisy and reconstructed signals.
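Assuming SNRI denotes the output SNR minus the input SNR in dB (both measured against the original signal), the measurement can be sketched in Python (the thesis's own simulations use MATLAB):

```python
import numpy as np

def snr_db(clean, estimate):
    """SNR of an estimate relative to the clean reference, in dB."""
    clean = np.asarray(clean, dtype=float)
    err = np.asarray(estimate, dtype=float) - clean
    return 10.0 * np.log10(np.sum(clean ** 2) / np.sum(err ** 2))

def snri(clean, noisy, denoised):
    """SNR improvement: SNR of the reconstruction minus SNR of the input."""
    return snr_db(clean, denoised) - snr_db(clean, noisy)
```

A positive SNRI means the de-noising step moved the signal closer to the original than the noisy input was.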
The dilemma arises when choosing the reconstruction level and threshold. In
this analysis, each signal set contains 2048 points and a decomposition depth of six
was used. Based on the observation that reconstruction quality degrades when thresholding at lower levels (see Figure 3.10), a somewhat arbitrary choice of levels five and six was made for application of soft thresholding. The other unknown
concerns selecting the right threshold for wavelet shrinkage. To answer this a series
of reconstructions were performed at varying thresholds to build a Threshold vs.
SNRI performance curve for each of the four signals above. In the subsequent
analysis, Gaussian noise was added to produce an SNR of 7 dB. In Figures 5.2a-d the
Threshold vs. SNRI performance can be seen. Shown along with each figure is the
noisy signal and the reconstruction.
Each reconstruction used the Daubechies DB4 mother wavelet. From these
four pictures the following observations can be made:
1. A positive SNR improvement was achieved validating the use of wavelet
shrinkage as a method of noise reduction.
2. The amount of SNR improvement hovers around 8 dB with the exception of the
Heavisine signal. This is not surprising given the low frequency content of the
signal. In fact, the only high frequency content to the signal is from the added
noise and the two discontinuities seen in the theoretical signal.
Figure 5.2a Wavelet De-Noising of Blocks with Threshold of 0.6.
Figure 5.2b Wavelet De-Noising of Bumps with Threshold of 0.2.
Figure 5.2c Wavelet De-Noising of Heavisine with Threshold of 1.5.
Figure 5.2d Wavelet De-Noising of Doppler with Threshold of 0.1.
3. High frequency components suffer the most from wavelet shrinkage. Since these
are the coefficients targeted for shrinkage, artificial manipulation is bound to have an
effect.
5.2 Two Dimensional De-Noising Results
Measuring the performance of two
dimensional wavelet shrinkage and
modified cubic filtering is closely
aligned to the one dimensional method
of Section 5.1. The process begins with
the selection of three test images shown
in Figures 5.3a, 5.3b and 5.3c. These
images were chosen due to their
contrasting features. While the image
tire is more uniform with hard edges,
the image woman is rich with texture
and soft edges. This is characterized by
the DV measure of each figure; DV for
woman is 598.4 while the DV for tire
is 413.9.
Figure 5.3a Image tire
The model for measuring performance of image de-noising begins by adding Gaussian noise to the image to produce an SNR of 7 dB. The next step is optimization of SNRI performance through parameter selection. The optimization procedure is
different for the two algorithms. For
wavelet de-noising a threshold vs. SNRI performance curve is generated just like in
the 1-D case. This curve dictates the choice of threshold for maximum SNRI. In the
analysis wavelet de-noising is executed two ways. The first approach assumes
separability of the image and measures performance of row wise de-noising, column
wise de-noising and row followed by column de-noising. The second approach
invokes a true two dimensional filtering scheme to produce the wavelet coefficients.
Both methods use the Daubechies db4 mother wavelet.
For modified cubic filtering there are two parameters affecting SNRI, the smoothing factor a and the scaling factor λ shown in Figure 4.4. Similar to the wavelet SNRI curve, an SNRI performance surface is generated, yielding the proper choice of a and λ for the best overall SNRI. In this implementation of modified cubic filtering the scaling factor λ is applied such that the maximum value in the T matrix is scaled to the value of λ itself. In other words, if the SNRI performance surface indicates a
peak at λ = 100, then T, which contains the detail pixels to be added back into the lowpass filtered image, has a dynamic range between 0 and 100. Effectively, this acts as the limiter in the highpass filter path but with an altered interpretation. Whereas choosing a (λ, a) pair coupled with the limiter in Figure 4.4 assures a bound on T, there is no assurance that the optimum (λ, a) pair was employed. The method used in
this analysis circumvents the trial and error involved in gauging SNRI performance.
At this point, an extension to the BVDV figure of merit will be introduced.
After characterizing pixels within an image as either background or detail it is useful
to compute the SNRI of the de-noised image as well as the SNRI of the background
and detail regions. In this way a direct measure of overall gain is realized and, more
importantly, a qualitative characterization for how well a process improves (a positive
SNRI) or degrades (a negative SNRI) an image is achieved.
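A sketch of this region-wise extension in Python, assuming the BVDV classification supplies a boolean mask selecting the pixels of interest (`mask` is a stand-in for the index sets the thesis computes with its `bvdv_idx` routine):

```python
import numpy as np

def region_snri(clean, noisy, denoised, mask):
    """SNRI restricted to the pixels selected by a boolean mask, so the
    gain can be reported separately for background and detail regions."""
    clean = np.asarray(clean, dtype=float)
    m = np.asarray(mask, dtype=bool)

    def snr(estimate):
        err = np.asarray(estimate, dtype=float)[m] - clean[m]
        return 10.0 * np.log10(np.sum(clean[m] ** 2) / np.sum(err ** 2))

    return snr(denoised) - snr(noisy)
```

Calling this once with the background mask and once with the detail mask yields the SNRIbv and SNRIdv figures reported in the tables below.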
The analysis results are presented in the next three sections. Section 5.2.1 is
for the image tire, Section 5.2.2 is for image woman and Section 5.2.3 is for
julia.
5.2.1 Analysis of tire
Figure 5.4a Original Image tire
Figure 5.4b Original Image tire +
Noise,SNR = 7 dB
Separable Wavelet De-Noising
Figure 5.5a Ensemble average of SNRI
row wise de-noising
Figure 5.5b Row de-noised image with
a peak SNRI of 4.6 using a threshold of
52 as seen in Figure 5.5a. The SNRIbv =
5.6 and SNRIdv = 3.9.
Figure 5.6a Ensemble average of SNRI
column wise de-noising.
Figure 5.7a Ensemble average of SNRI
row column de-noising.
Figure 5.6b Column de-noised image with a peak SNRI of 4.5 using a threshold of 52 as seen in Figure 5.6a. The SNRIbv = 5.6 and SNRIdv = 3.7.
Figure 5.7b Row/Column de-noised image with a peak SNRI of 7.0 using a threshold of 23 as seen in Figure 5.7a. The SNRIbv = 10.6 and SNRIdv = 5.3.
Non-Separable Wavelet De-Noising
Figure 5.8a Ensemble average SNRI of
non-separable de-noising.
Figure 5.8b Non-separable de-noised image with a peak SNRI of 7.1 using a threshold of 52 as seen in Figure 5.8a. The SNRIbv = 10.4 and SNRIdv = 5.4.
Modified Cubic Filtering
Figure 5.9a Ensemble SNRI performance surface for modified cubic sharpening, 5 iterations in the ensemble.
Figure 5.9b Modified cubic filtering with a peak SNRI of 7.5 using λ = 15 and a = 0.1 as seen in Figure 5.9a. The SNRIbv = 9.0 and SNRIdv = 6.5.
Visually, there is a clear distinction between the wavelet restored images and
the image out of the modified cubic filter. The most apparent distinction to note is the
resolvability between the differing techniques. While the wavelet filtered images are
smoother, it comes at the price of less detail in the image relative to the modified
cubic filter image. This is attributed to filter size. The polynomial filter uses a 3 x 3
kernel while the Daubechies 4 filter is 16 x 16. The net result of a smaller filter is
less smoothing relative to a larger filter. This accounts for the smoother (less detail)
appearance seen in the wavelet images and the coarser (more detail) appearance in the
modified cubic filter image.
Recall that the desired restoration process should reduce the BV as much as
possible while maintaining DV near that of the original. It can be seen, from Table
5.1, that this is indeed true. With this criterion it is clear that row wise and column wise
Algorithm BV DV SNRI SNRIbv SNRIdv
Original 3.9 413.9
Original + Noise 766.0 1178.0
Row De-Noising 172.4 495.0 4.6 5.6 3.9
Col. De-Noising 172.3 480.0 4.5 5.6 3.7
R/C De-Noising 32.1 289.5 7.0 10.6 5.3
2-D De-Noising 37.1 306.7 7.1 10.4 5.4
Mod. Cubic Filter 57.6 301.5 7.5 9.0 6.5
Table 5.1 Performance summary for image tire
BV lagged behind the BV measures of row/column, non-separable and modified
cubic filtering values. Conversely, DV for row and column measures were superior
since frequency components in the orthogonal direction went untouched. The SNRI
components complete the picture by measuring overall performance gain and the
corresponding gain in background and details. Simply stated, higher SNRIs directly
translate to better restoration. The SNRI numbers for this image show that row wise
and column wise processing perform poorly when compared to the other methods. As
a whole, Table 5.1 shows non-separable processing to be the best performer for this
image. Even though it is not the best performer in any category, it demonstrates
consistently good behavior across all categories. This is not true for any of the other
techniques.
5.2.2 Analysis of woman
Figure 5.10a Original Image woman
Figure 5.10b Original Image + Noise, SNR = 7 dB
Separable Wavelet De-Noising
Figure 5.11a Ensemble average SNRI of row wise de-noising.
Figure 5.11b Row de-noised image with a peak SNRI of 1.6 using a threshold of 20 as seen in Figure 5.11a. The SNRIbv = 3.6 and SNRIdv = 1.6.
Figure 5.12a Ensemble average SNRI of column wise de-noising.
Figure 5.12b Column de-noised image with a peak SNRI of 2.1 using a threshold of 15 as seen in Figure 5.12a. The SNRIbv = 3.0 and SNRIdv = 2.1.
Figure 5.13a Ensemble average SNRI of row column de-noising.
Figure 5.13b Row/Column de-noised image with a peak SNRI of 2.7 using a threshold of 13 as seen in Figure 5.13a. The SNRIbv = 7.0 and SNRIdv = 2.7.
Non-Separable Wavelet De-Noising
Figure 5.14a Ensemble average SNRI of non-separable de-noising.
Figure 5.14b Non-separable de-noised image with SNRI = 2.9 using a threshold of 23 as seen in Figure 5.14a. The SNRIbv = 6.3 and SNRIdv = 2.9.
Modified Cubic Filtering
Figure 5.15a Ensemble average SNRI surface for modified cubic sharpening.
Figure 5.15b Modified cubic filtering with SNRI = 3.0 using λ = 70 and a = 0.075 as seen in Figure 5.15a. The SNRIbv = 5.8 and SNRIdv = 3.0.
This image was chosen due to the prevalence of edges and contrast.
Comparing the appearance of the processed image reveals visually similar results to
tire in terms of resolution. This image is more suitable for visual inspection of
detail retention. To this viewer modified cubic filtering retains more detail than non-
separable or row/column processing. Again, this is attributable to the 3 x 3 kernel
used for polynomial filtering. Numerically, Table 5.2 shows row/column processing to have the best BV and SNRIbv but the second worst DV. It is curious that the DV
Algorithm BV DV SNRI SNRIbv SNRIdv
Original 1.1 598.4
Original + Noise 517.3 1101.8
Row De-Noising 198.7 543.9 1.6 3.6 1.6
Col. De-Noising 244.6 729.2 2.1 3.0 2.1
R/C De-Noising 75.6 343.9 2.7 7.0 2.7
2-D De-Noising 83.0 426.8 2.9 6.3 2.9
Mod. Cubic Filter 119.2 379.1 3.0 5.8 3.0
Table 5.2 Performance summary for image woman
for column processing is so large on an image with substantial vertically oriented
content. Overall, Table 5.2 indicates that non-separable processing is the best
performer. Like its performance with the image tire, it shows consistently good
behavior across all measurement categories.
5.2.3 Analysis of julia
Figure 5.16a Original Image
Figure 5.16b Original Image + Noise,
SNR = 7 dB.
Separable Wavelet De-Noising
Figure 5.17a Ensemble average SNRI
of row wise de-noising.
Figure 5.17b Row de-noised image
with a peak SNRI of 3.1 using a
threshold of 9 as seen in Figure 5.17a.
The SNRIbv = 4.8 and SNRIDV = -0.4.
Figure 5.18a Ensemble average SNRI
of column wise de-noising.
Figure 5.19a Ensemble average SNRI
of row/column de-noising.
Figure 5.18b Column de-noised image
with a peak SNRI of 2.9 using a
threshold of 10 as seen in Figure 5.18a.
The SNRIbv = 5.0 and SNRIDV = -1.0.
Figure 5.19b Row/Column de-noised
image with a peak SNRI of 4.2 using a
threshold of 5 as seen in Figure 5.19a.
The SNRIbv = 9.1 and SNRIDV = -1.4.
Non-Separable Wavelet De-Noising
Figure 5.20a Ensemble average SNRI
of non-separable de-noising.
Figure 5.20b Non-separable de-noised
image with SNRI = 4.2 using a threshold
of 9 as seen in Figure 5.20a. The
SNRIbv = 7.7 and SNRIDV = -0.7.
Modified Cubic Filtering
Figure 5.21a Ensemble average SNRI
surface for modified cubic sharpening.
Figure 5.21b Modified cubic filtering with SNRI = 3.1 using λ = 70 and a = 0.075 as seen in Figure 5.21a. The SNRIbv = 6.4 and SNRIdv = -1.8.
The image julia was used for the third test case because of its color coding
and uniform background. The advantage of color is revealed in the processed images
in that it is easier to see filtering effects in the background regions. The effect of
filtering in one dimension can be seen in the row wise and column wise images. The
row processed image contains a horizontal grain due to filtering in the horizontal
direction. The same effect manifests itself in the column processed image. The
remaining images, which have all undergone processing in both dimensions, have a
smoother appearance in the background regions. In accordance with results from the
previous images, Table 5.3 indicates that non-separable filtering demonstrates the best
overall improvement. It has the best SNRI with consistently good behavior across the
remaining categories.
Algorithm BV DV SNRI SNRIbv SNRIdv
Original 1.1 293.6
Original + Noise 55.8 135.5
Row De-Noising 16.4 210.4 3.1 4.8 -0.4
Col. De-Noising 15.5 189.2 2.9 5.0 -1.0
R/C De-Noising 4.6 142.3 4.2 9.1 -1.4
2-D De-Noising 7.3 170.5 4.2 7.7 -0.7
Mod. Cubic Filter 10.5 135.5 3.1 6.4 -1.8
Table 5.3 Performance summary for image julia
6. Conclusions
In Chapter 5 the wavelet de-noising and modified cubic filtering algorithms
were applied to three images. Prior to processing, Gaussian noise was added to each
image producing an SNR of 7 dB. Using the theory from Section 3.2 wavelet de-
noising was performed two ways. The first method treated the image as a separable
sequence measuring performance of row only processing, column only processing
and row/column processing. The second method assumed a non-separable condition
applying the two dimensional wavelet transform as diagrammed in Figure 3.6.
Modified cubic filtering, described in Chapter 4, was used as an alternative technique
to wavelet shrinkage for comparison purposes. The performance of each technique is
repeated in Table 6.1 with an addition. Shown beside each entry in the table is a ranking of each technique's score for each performance parameter. In this ranking scheme the best score gets a 1, the worst a 5 (or equivalent
if there is a tie). By summing the ranks for each technique a total score is computed.
In this scenario the lowest score wins. Examination of the totals forces one to the
conclusion that, for the algorithms employed in this analysis, non-separable wavelet
de-noising is the best performer. This is confirmed in a second way. It was stated
earlier that any de-noising technique should lower BV as much as possible and
maintain DV. This means the best performer should score a 1 (ideally) for

Algorithm BV DV SNRI SNRIbv SNRIdv
Original 3.9 413.9
Original + Noise 766.0 1178.0 Total
Row De-Noising 172.4 (5) 495.0 (2) 4.6 (4) 5.6 (4) 3.9 (4) 19
Col. De-Noising 172.3 (4) 480.0 (1) 4.5 (5) 5.6 (4) 3.7 (5) 19
R/C De-Noising 32.1 (1) 289.5 (5) 7.0 (3) 10.6 (1) 5.3 (3) 13
2-D De-Noising 37.1 (2) 306.7 (3) 7.1 (2) 10.4 (2) 5.4 (2) 11
Mod. Cubic Filter 57.6 (3) 301.5 (4) 7.5 (1) 9.0 (3) 6.5 (1) 12
Performance summary for image tire
Algorithm BV DV SNRI SNRIbv SNRIdv
Original 1.1 598.4
Original + Noise 517.3 1101.8 Total
Row De-Noising 198.7 (4) 543.9 (1) 1.6 (5) 3.6 (4) 1.6 (5) 19
Col. De-Noising 244.6 (5) 729.2 (2) 2.1 (4) 3.0 (5) 2.1 (4) 20
R/C De-Noising 75.6 (1) 343.9 (5) 2.7 (3) 7.0 (1) 2.7 (3) 13
2-D De-Noising 83.0 (2) 426.8 (3) 2.9 (2) 6.3 (2) 2.9 (2) 11
Mod. Cubic Filter 119.2 (3) 379.1 (4) 3.0 (1) 5.8 (3) 3.0 (1) 12
Performance summary for image woman
Algorithm BV DV SNRI SNRIbv SNRIdv
Original 1.1 293.6
Original + Noise 55.8 135.5 Total
Row De-Noising 16.4 (5) 210.4 (1) 3.1 (2) 4.8 (5) -0.4 (1) 14
Col. De-Noising 15.5 (4) 189.2 (2) 2.9 (3) 5.0 (4) -1.0 (3) 16
R/C De-Noising 4.6 (1) 142.3 (4) 4.2 (1) 9.1 (1) -1.4 (4) 11
2-D De-Noising 7.3 (2) 170.5 (3) 4.2 (1) 7.7 (2) -0.7 (2) 10
Mod. Cubic Filter 10.5 (3) 135.5 (5) 3.1 (2) 6.4 (3) -1.8 (5) 18
Performance summary for image julia
Table 6.1 Performance summary with ranking
background and detail variance. By looking at the BVDV ranks, non-separable
wavelet de-noising comes closer to the ideal than any of the other methods.
APPENDIX A
Separable De-Noising Source Code
function dnrc( img, w_name, snr, n_sim, t_low, t_high, t_points, j_0, b_thresh );
% dn2d( img, w_name, snr, n_sim, t_low, t_high, t_points, j_0, b_thresh );
% Function to de-noise an image non-coherently by computing the wavelet
% transfer musing the mother wavelet w_name, applying soft thresholding
% and inverse transforming the signal to get the de-noised version of the
% 1-D signal. This procedure is repeated across the rows and columns
% of the image to produce the final form of the de-noised image. Note that
% the de-noised images for row, column and row+column are saved as a
% bitmap file.
%
% Inputs
% img = Input image for de-noising, assumed to be a .mat file
% w_name = Wavelet filter to be used
% snr = Requested SNR
% n_sim = Number of simulations to run
% t_low = Minimum threshold
% t_high = Maximum threshold
% t_points = Number of threshold points
% j_0 = Beginning reconstruction level
% b_thresh = BVDV threshold (Std. deviation, not variance)
%
% Outputs
% Threshold vs SNRI plot
%
% Get the image and add noise
load( img );
check = 'd_idx';
s = whos( check );
if( isempty(s) == 1 )
[bv,b_idx,dv,d_idx] = bvdv_idx( X, 3, b_thresh );
save( img, 'X', 'map', 'bv', 'dv', 'b_idx', 'd_idx' )
end
[m,n] = size( X );
imshow( X, map );
txt = sprintf( 'Original Image, Bv = %4.1f, Dv = %5.1f, BvDv Threshold = %2d', bv, dv, b_thresh );
title( txt );
% Get the threshold table and noise floor variance
if( t_low ~= t_high )
thresh = vs_thresh( t_low, t_high, t_points );
else
thresh = t_low;
end
n_thresh = length( thresh );
n_sigma = n_floor( X, snr );
fname = sprintf( 'dnrc_%1c_n%03d_t%02.0f_%02.0f.txt', img(1), n_sim, t_low, t_high )
fid = fopen( fname, 'wt' );
fprintf( fid, 'Thresh    BV     DV    SNRI  SNRI_BV  SNRI_DV\n' );
fprintf( fid, '%6.1f %6.1f Original Image\n', bv, dv );
% Get the number of decomposition levels
levels = wmaxlev( min(m,n), w_name );
j_0 = levels - 1;
snri_r = zeros(n_thresh,3);
snri_c = zeros(n_thresh,3);
snri_rc = zeros(n_thresh,3);
img_bd = zeros(n_thresh,4,2);
for i = 1:n_thresh
fprintf( 1, 'Thresh: %d of %d\n', i, n_thresh );
for j = 1:n_sim
thresh(i) = 5.0;
dnr_img = zeros(m,n);
randn( 'state', sum( 100*clock ) );
gn = randn( m, n );
Xpn = X + n_sigma * gn;
[bv,dv] = bvdv( Xpn, 3, b_thresh, b_idx, d_idx );
img_bd(i,1,1) = img_bd(i,1,1) + bv;
img_bd(i,1,2) = img_bd(i,1,2) + dv;
fprintf( 1, 'R: sim = %d of %d\n', j, n_sim );
for k = 1:m
s = Xpn(k,1:n);
[C,L] = wavedec( s, levels, w_name );
% Now threshold the signal
[C, L] = vs_den( C, L, j_0, thresh(i) );
% And do the inverse wavelet transform
dnr_img(k,1:n) = waverec( C, L, w_name );
end
[bv,dv] = bvdv( dnr_img, 3, b_thresh, b_idx, d_idx );
img_bd(i,2,1) = img_bd(i,2,1) + bv;
img_bd(i,2,2) = img_bd(i,2,2) + dv;
[tmp,tmp_b,tmp_d] = meas2_snri( X, Xpn, dnr_img, b_idx, d_idx );
snri_r(i,1) = snri_r(i,1) + tmp;
snri_r(i,2) = snri_r(i,2) + tmp_b;
snri_r(i,3) = snri_r(i,3) + tmp_d;
fprintf( fid, 'row: %6.2f %6.2f %6.2f %6.2f %6.2f\n', bv, dv, tmp, tmp_b, tmp_d );
% De-noise the columns of the row filtered image
dnrc_img = zeros(m,n);
fprintf( 1, 'RC: sim = %d of %d\n', j, n_sim );
thresh(i) = 3.8;
for k = 1:n
s = dnr_img(1:m,k);
[C,L] = wavedec( s, levels, w_name );
% Now threshold the signal
[C, L] = vs_den( C, L, j_0, thresh(i) );
% And do the inverse wavelet transform
dnrc_img(1:m,k) = waverec( C, L, w_name );
end
[bv,dv] = bvdv( dnrc_img, 3, b_thresh, b_idx, d_idx );
img_bd(i,3,1) = img_bd(i,3,1) + bv;
img_bd(i,3,2) = img_bd(i,3,2) + dv;
[tmp,tmp_b,tmp_d] = meas2_snri( X, Xpn, dnrc_img, b_idx, d_idx );
snri_rc(i,1) = snri_rc(i,1) + tmp;
snri_rc(i,2) = snri_rc(i,2) + tmp_b;
snri_rc(i,3) = snri_rc(i,3) + tmp_d;
fprintf( fid, 'r/c: %6.2f %6.2f %6.2f %6.2f %6.2f\n', bv, dv, tmp, tmp_b, tmp_d );
% De-noise the columns only
dnc_img = zeros(m,n);
fprintf( 1, 'C: sim = %d of %d\n', j, n_sim );
thresh(i) = 5.5;
for k = 1:n
s = Xpn(1:m,k);
[C,L] = wavedec( s, levels, w_name );
% Now threshold the signal
[C, L] = vs_den( C, L, j_0, thresh(i) );
% And do the inverse wavelet transform
dnc_img(1:m,k) = waverec( C, L, w_name );
end
[bv,dv] = bvdv( dnc_img, 3, b_thresh, b_idx, d_idx );
img_bd(i,4,1) = img_bd(i,4,1) + bv;
img_bd(i,4,2) = img_bd(i,4,2) + dv;
[tmp,tmp_b,tmp_d] = meas2_snri( X, Xpn, dnc_img, b_idx, d_idx );
snri_c(i,1) = snri_c(i,1) + tmp;
snri_c(i,2) = snri_c(i,2) + tmp_b;
snri_c(i,3) = snri_c(i,3) + tmp_d;
fprintf( fid, 'c: %6.2f %6.2f %6.2f %6.2f %6.2f\n', bv, dv, tmp, tmp_b, tmp_d );
end % End n_sim loop
% clear dnr_img dnrc_img dnc_img;
end % End thresh loop
snri_r = snri_r / n_sim;
snri_rc = snri_rc / n_sim;
snri_c = snri_c / n_sim;
img_bd = img_bd / n_sim;
fprintf( fid, '%6.1f %6.1f Image + Noise\n', img_bd(1,1,1), img_bd(1,1,2) );
for i = 1:n_thresh
fprintf( fid, 'Rows: %7.4f %6.2f %6.1f %8.4f %8.4f %8.4f\n', thresh(i), ...
img_bd(i,2,1), img_bd(i,2,2), snri_r(i,1), snri_r(i,2), snri_r(i,3) );
end
for i = 1:n_thresh
fprintf( fid, 'Columns: %7.4f %6.2f %6.1f %8.4f %8.4f %8.4f\n', thresh(i), ...
img_bd(i,4,1), img_bd(i,4,2), snri_c(i,1), snri_c(i,2), snri_c(i,3) );
end
for i = 1:n_thresh
fprintf( fid, 'Row/Column: %7.4f %6.2f %6.1f %8.4f %8.4f %8.4f\n', thresh(i), ...
img_bd(i,3,1), img_bd(i,3,2), snri_rc(i,1), snri_rc(i,2), snri_rc(i,3) );
end
fclose( fid );
if( n_sim > 1 )
figure;
plot( thresh, snri_r(:,1), 'k' );
xlabel( 'Threshold' );
ylabel( 'SNRI' )
txt = sprintf( 'Row Reconstruction, SNRI vs Threshold, # Sim = %d', n_sim );
title( txt );
figure;
plot( thresh, snri_c(:,1), 'k' );
xlabel( 'Threshold' );
ylabel( 'SNRI' )
txt = sprintf( 'Column Reconstruction, SNRI vs Threshold, # Sim = %d', n_sim );
title( txt );
figure;
plot( thresh, snri_rc(:,1), 'k' );
xlabel( 'Threshold' );
ylabel( 'SNRI' )
txt = sprintf( 'RC Reconstruction, SNRI vs Threshold, # Sim = %d', n_sim )
title( txt );
end
save dnrc_j;
function [C, L] = vs_den( C, L, j_0, thresh );
% Function: vs_den
% [C, L] = vs_den( C, L, j_0, thresh );
% Function to de-noise the wavelet transform of the signal s. This algorithm
% uses Donoho and Johnstone's method, titled RiskShrink, to do hard or
% soft thresholding of the signal. It assumes the noise floor is known.
% Hard thresholding sets to zero all coefficients above level j_0 that fall
% below the threshold. Soft thresholding subtracts the threshold level
% from the coefficients above level j_0.
%
% Inputs
% C = Wavelet decomposition coefficients returned by wavedec
% L = Bookkeeping vector returned by wavedec
% j_0 = Lowest decomposition level to apply thresholding, default = jmax/3
% thresh = Threshold value for de-noising
%
% Outputs
% C = De-noised wavelet coefficient vector C
% L = Bookkeeping vector returned by wavedec
% First set the jmin and jmax values
N = length( L );
type = 'soft';
if( nargin == 2 )
j_0 = floor( (N - 2) / 3 );
end
% Figure out array indices for wavelet shrinkage
% r_start = L(1)+1;
% r_stop = r_start + sum( L(2:j_0+1) ) - 1;
% r_start = sum( L(1:N-j_0-1) ) + 1;
r_start = sum( L(1:j_0) ) + 1;
r_stop = length( C );
% Get the length of the vector s for selection of the threshold lambda
l_indx = log( L(N) ) / log( 2 );
% load lambda;
if( type == 'soft' )
z_indx = find( abs( C(r_start:r_stop) ) <= thresh ) + r_start - 1;
s_indx = find( abs( C(r_start:r_stop) ) > thresh ) + r_start - 1;
C(z_indx) = 0.;
C(s_indx) = sign( C(s_indx) ) .* ( abs( C(s_indx) ) - thresh );
else
l_indx = find( abs( C(r_start:r_stop) ) <= thresh );
C(l_indx) = 0.;
end
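For a quick cross-check of the two shrinkage rules, here is a minimal Python sketch of the same soft and hard thresholding operators (definitions only, not the thesis code):

```python
import math

def soft_threshold(coeffs, t):
    # Shrink each magnitude toward zero by t; anything at or below t goes to zero
    return [math.copysign(max(abs(c) - t, 0.0), c) for c in coeffs]

def hard_threshold(coeffs, t):
    # Keep coefficients whose magnitude exceeds t; zero the rest
    return [c if abs(c) > t else 0.0 for c in coeffs]

c = [-3.0, -0.5, 0.2, 1.0, 4.0]
soft = soft_threshold(c, 1.0)
hard = hard_threshold(c, 1.0)
```

Soft thresholding is continuous in the coefficient value, which is why it tends to produce fewer visual artifacts than the hard rule.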
Non-Separable De-Noising Source Code
function dn2d_bv( img, w_name, snr, n_sim, t_low, t_high, t_points, j_0, b_thresh );
% dn2d_bv( img, w_name, snr, n_sim, t_low, t_high, t_points, j_0, b_thresh);
% Function to de-noise an image using the 2-D theory of Donoho and Johnstone.
% This function computes the threshold, thresholding type and approximation
% coefficients from ddencmp to be used in the function wdencmp.
% De-noising is achieved by calling wdencmp.
%
% Inputs
% img = Input image for de-noising, assumed to be a .mat file
% w_name = Wavelet filter to be used
% snr = Requested SNR
% n_sim = Number of simulations to run
% t_low = Minimum threshold
% t_high = Maximum threshold
% t_points = Number of threshold points
% j_0 = Beginning reconstruction level
% b_thresh = BvDv threshold (Std. deviation, not variance )
%
% Outputs
% Threshold vs SNRI plot
% Get the image and add noise
load( img );
check = 'd_idx';
s = whos( check );
if( isempty(s) == 1 )
[bv,b_idx,dv,d_idx] = bvdv_idx( X, 3, b_thresh );
save( img, 'X', 'map', 'bv', 'dv', 'b_idx', 'd_idx' )
end
[m,n] = size( X );
imshow( X, map );
txt = sprintf( 'Original Image, Bv = %4.1f, Dv = %5.1f, BvDv Threshold = %2d', bv, dv, b_thresh );
title( txt );
fname = sprintf( 'dn2d_%s_n%03d_t%02.0f_%02.0f.txt', img, n_sim, t_low, t_high );
fid = fopen( fname, 'w' );
fprintf( fid, 'Thresh    Bv     Dv    SNRI  SNRI_Bv  SNRI_Dv\n' );
fprintf( fid, '%6.2f %6.1f Original Image\n', bv, dv );
% Get the threshold table and noise floor variance
if( t_low ~= t_high )
thresh = vs_thresh( t_low, t_high, t_points );
else
thresh = t_low;
end
n_thresh = length( thresh );
n_sigma = n_floor( X, snr );
k_app = 1;
% Plot a representative noisy image
gn = randn( m, n );
Xpn = X + n_sigma * gn;
[bv,dv] = bvdv( Xpn, 3, b_thresh, b_idx, d_idx );
figure;
imshow( Xpn, map );
txt = sprintf( 'Image + Noise, SNR = %2d, Bv = %4.1f, Dv = %5.1f, BvDv = %2d', snr, bv, dv, b_thresh );
title( txt );
clear gn;
fprintf( fid, 'Simulations: %d, BvDv Threshold: %d\n', n_sim, b_thresh );
fprintf( fid, '%6.2f %6.1f Image + Noise, SNR = %d\n', bv, dv, snr );
snri = zeros(n_thresh,3);
% De-noise the image in a 2-D proper sense
for i = 1:n_thresh
fprintf( 1, 'Thresh %d of %d\n', i, n_thresh );
for j = 1:n_sim
fprintf( 1, 'Sim %d of %d\n', j, n_sim );
randn( 'state', sum( 100*clock ) );
gn = randn( m, n );
Xpn = X + n_sigma * gn;
dn_img = wdencmp( 'gbl', Xpn, w_name, 2, thresh(i), 's', k_app );
[bv,dv] = bvdv( dn_img, 3, b_thresh, b_idx, d_idx );
[tmp,tmp_b,tmp_d] = meas2_snri( X, Xpn, dn_img, b_idx, d_idx );
snri(i,1) = snri(i,1) + tmp;
snri(i,2) = snri(i,2) + tmp_b;
snri(i,3) = snri(i,3) + tmp_d;
end
snri(i,:) = snri(i,:) / n_sim;
fprintf( fid, '%7.4f %6.2f %6.1f %8.4f %8.4f %8.4f\n', thresh(i), bv, dv, snri(i,1), ...
snri(i,2), snri(i,3) );
end
txt = sprintf( 'SNRI = %4.1f, SNRI_B_v = %4.1f, SNRI_D_v = %4.1f, # Sim = %3d', ...
snri(1,1), snri(1,2), snri(1,3), n_sim );
xlabel( txt );
fclose( fid );
figure;
hold on;
plot( thresh, snri(:,1) );
xlabel( 'Threshold' );
ylabel( 'SNRI' )
title( ['SNRI vs Threshold, ', num2str(n_sim), ' Simulations'] );
save dn2d_j;
Modified Cubic Filtering Source Code
function polyf(img,n_sim,snr,t_low,t_high,t_step,b_thresh,a_low,a_high,a_step)
% polyf(img,n_sim,snr,t_low,t_high,t_step,b_thresh,a_low,a_high,a_step)
% Function to enhance an image using the polynomial filter described
% by A. Vanzo, G. Ramponi and G. L. Sicuranza. This algorithm is
% similar to unsharp masking except four 3x1 directional Laplacian
% filters act on the image instead of the UM 3x3 kernel. An S
% filter is applied to indicate the direction of the edge, which is then
% fed into the feedback loop of the highpass filter path.
%
% Inputs
% img = Input image for de-noising, assumed to be a .mat file
% n_sim = Number of simulations
% snr = Requested Signal to Noise Ratio
% t_low = Minimum threshold
% t_high = Maximum threshold
% t_step = Step size in pixels between low and high
% b_thresh = BVDV threshold (Std. deviation, not variance)
% a_low = Minimum smoothing factor for the low pass filter 0. <= a <= 0.1
% a_high = Maximum smoothing factor for the low pass filter 0. <= a <= 0.1
% a_step = Step size between a_low and a_high
%
% Outputs
% BV, DV, SNRI, SNRI_BV, SNRI_DV
% Get the image and compute the noise floor
load( img );
check = 'd_idx';
s = whos( check );
if( isempty(s) == 1 )
% Compute the BVDV and get the background and detail pixels
[bv,b_idx,dv,d_idx] = bvdv_idx( X, 3, b_thresh );
save( img, 'X', 'map', 'bv', 'dv', 'b_idx', 'd_idx' )
end
[m,n] = size( X );
n_sigma = n_floor( X, snr );
% Set the detail pixel threshold values
if( t_low ~= t_high )
max_pix = t_low:t_step:t_high;
else
max_pix = t_low;
end
n_mxpx = length( max_pix );
% Set the smoothing threshold values
if( a_low ~= a_high )
a = a_low:a_step:a_high;
else
a = a_low;
end
n_a = length( a );
p_snri = zeros(n_a,n_mxpx,3);
p_bv = zeros(n_a,n_mxpx);
p_dv = zeros(n_a,n_mxpx);
n_bv = 0.;
n_dv = 0.;
tmp_b = 0.;
tmp_d = 0.;
fname = sprintf( ...
'p_%1c_n%02d_a%02d_%02d_t%03d_%03d.txt', img(1), n_sim, fix(100*a(1)), fix(100*a(n_a)), max_pix(1), max_pix(n_mxpx) );
fid = fopen( fname, 'wt' );
fprintf( fid, 'Original Image: BV = %6.2f DV = %7.1f\n', bv, dv );
imshow( X, map );
txt = sprintf( 'Original Image, Bv = %4.1f, Dv = %5.1f', bv, dv );
title( txt );
for h = 1:n_a
fprintf( 1, 'a: %d of %d\n', h, n_a );
for i = 1:n_sim
randn( 'state', sum( 100*clock ) );
gn = randn( m, n );
Xpn = X + n_sigma * gn;
[tmp_b,tmp_d] = bvdv( Xpn, 3, b_thresh, b_idx, d_idx );
n_bv = n_bv + tmp_b;
n_dv = n_dv + tmp_d;
% Get the directional laplacian of the noisy image
[w_h, w_v, w_45, w_135] = laplacian( Xpn );
% S filter the directional laplacian
[t_h, t_v, t_45, t_135] = s_filter( w_h, w_v, w_45, w_135 );
t = ( t_h + t_v + t_45 + t_135 );
tmp = max( max( abs(t) ) );
lambda = tmp ./ max_pix;
clear w_h w_v w_45 w_135 t_h t_v t_45 t_135;
% Lowpass filter the image
fltrd_img = lowpass( Xpn, a(h) );
for j = 1:n_mxpx
fprintf( 1, 'Sim: %d of %d lambda: %d of %d\n', i, n_sim, j, length(lambda) );
tmp_t = t / lambda(j);
% Processed image = fltrd_img + tmp_t
p_img = fltrd_img + tmp_t;
[bv,dv] = bvdv( p_img, 3, b_thresh, b_idx, d_idx );
[tmp,tmp_b,tmp_d] = meas2_snri( X, Xpn, p_img, b_idx, d_idx );
p_snri(h,j,1) = p_snri(h,j,1) + tmp;
p_snri(h,j,2) = p_snri(h,j,2) + tmp_b;
p_snri(h,j,3) = p_snri(h,j,3) + tmp_d;
p_bv(h,j) = p_bv(h,j) + bv;
p_dv(h,j) = p_dv(h,j) + dv;
end
end % End sim loop
end % End a loop
p_snri = p_snri / n_sim;
p_bv = p_bv / n_sim;
p_dv = p_dv / n_sim;
n_bv = n_bv / n_sim;
n_dv = n_dv / n_sim;
for h = 1:n_a
fprintf( fid, '\n\n  a   pix   Bv    Dv    SNRI SNRI_Bv SNRI_Dv\n' );
for i = 1:n_mxpx
fprintf( fid, '%5.3f %3d %4.1f %5.1f %6.2f %6.1f %6.1f\n', a(h), ...
max_pix(i), p_bv(h,i), p_dv(h,i), p_snri(h,i,1), p_snri(h,i,2), p_snri(h,i,3) );
end
end
% Do these figures only if you want them
% randn( 'state', sum( 100*clock ) );
% gn = randn( m, n );
% Xpn = X + n_sigma * gn;
% figure;
% imshow( Xpn, map );
% txt = sprintf( 'Image+Noise, Bv = %4.1f, Dv = %5.1f, BvDv Thresh = %2d', n_bv, n_dv, b_thresh );
% title( txt );
% figure;
% imshow( p_img, map );
% txt = sprintf( 'Filtered Image, Bv = %4.1f, Dv = %5.1f, BvDv Thresh = %2d', bv, dv, b_thresh );
% title( txt );
% txt = sprintf( 'Lambda = %d, SNRI = %4.1f, SNRI_B_v = %4.1f, SNRI_D_v = %5.1f', t_low, ...
% p_snri(1,1), p_snri(1,2), p_snri(1,3) );
% xlabel( txt );
% txt = sprintf( 'f:\db4_pictures\polyf_%s_n%02d_t%03d', img, n_sim, t_low );
% fname = sprintf( ...
% 'f:\db4_pictures\p_%1c_n%02d_a%0.2f_t%03d_', img(1), n_sim, a(1), max_pix(1) );
% bmpwrite( p_img, map, fname );
fclose( fid );
save p_snri;
if( n_mxpx ~= 1 )
figure;
mesh( max_pix, a, p_snri(:,:,1) );
ylabel( 'Smoothing Factor' );
xlabel( 'Lambda' );
zlabel( 'SNRI' );
txt = sprintf( 'SNRI vs lambda For Image %s', img );
title( txt );
end
function [w_h,w_v,w_45,w_135] = laplacian( img_tmp );
% [w_h,w_v,w_45,w_135] = laplacian( img_tmp )
% Function to compute the directional Laplacians of the input image.
% The algorithm is from A. Vanzo, G. Ramponi and G. Sicuranza's
% paper "An Image Enhancement Technique Using Polynomial Filters".
% The Laplacian algorithm defines a highpass filter whose output
% is 2*img(j,k) - img(j-1,k) - img(j+1,k). This filter is run
% over the entire image to form the w_h matrix. The edge points
% are arbitrarily duplicated from the first row or column to provide
% a value at the negative index positions.
%
% Inputs
% img_tmp = Input matrix
%
% Outputs
% w_h = Horizontal Laplacian matrix
% w_v = Vertical Laplacian matrix
% w_45 = Diagonal Laplacian matrix along the 45 degree axis
% w_135 = Diagonal Laplacian matrix along the 135 degree axis
[m,n] = size( img_tmp );
w_h = zeros(m,n);
w_v = zeros(m,n);
w_45 = zeros(m,n);
w_135 = zeros(m,n);
% Enlarge the image to deal with the edges.
m = m + 2;
n = n + 2;
img = zeros(m,n);
img(2:m-1,2:n-1) = img_tmp;
% First and last columns excluding corners
img(2:m-1,1) = img_tmp(1:m-2,2); img(2:m-1,n) = img_tmp(1:m-2,n-3);
% First and last rows excluding corners
img(1,2:n-1) = img_tmp(2,1:n-2); img(m,2:n-1) = img_tmp(m-3,1:n-2);
% Corners
img(1,1) = img_tmp(2,2); img(m,1) = img_tmp(m-3,2);
img(1,n) = img_tmp(2,n-3); img(m,n) = img_tmp(m-3,n-3);
for j = 2:m-1
for k = 2:n-1
% Do the horizontal laplacian
w_h(j-1,k-1) = 2 * img(j,k) - img(j-1,k) - img(j+1,k);
% Do the vertical laplacian
w_v(j-1,k-1) = 2 * img(j,k) - img(j,k-1) - img(j,k+1);
% Do the diagonal laplacians
w_45(j-1,k-1) = 2 * img(j,k) - img(j-1,k-1) - img(j+1,k+1);
w_135(j-1,k-1) = 2 * img(j,k) - img(j-1,k+1) - img(j+1,k-1);
end
end
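The interior part of the four directional second differences can be cross-checked with array shifts. This NumPy sketch ignores the border replication handled above and keeps the thesis's naming convention (w_h differences the neighbours above and below, w_v those left and right):

```python
import numpy as np

def directional_laplacians(x):
    # Interior-only second differences in four directions, mirroring
    # 2*img(j,k) - img(j-1,k) - img(j+1,k) etc.; border replication omitted
    c = x[1:-1, 1:-1]
    w_h   = 2 * c - x[:-2, 1:-1] - x[2:, 1:-1]   # neighbours above/below
    w_v   = 2 * c - x[1:-1, :-2] - x[1:-1, 2:]   # neighbours left/right
    w_45  = 2 * c - x[:-2, :-2]  - x[2:, 2:]
    w_135 = 2 * c - x[:-2, 2:]   - x[2:, :-2]
    return w_h, w_v, w_45, w_135
```

On a quadratic ramp along the rows (x[j,k] = j^2), every difference taken across rows is the constant -2 while w_v vanishes, which makes a convenient sanity check.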
function [s_h,s_v,s_45,s_135] = s_filter( w_h, w_v, w_45, w_135 );
% [s_h,s_v,s_45,s_135] = s_filter( w_h, w_v, w_45, w_135 )
% Function to compute the S filtering of the directional Laplacian.
% The algorithm is from A. Vanzo, G. Ramponi and G. Sicuranza's
% paper "An Image Enhancement Technique Using Polynomial Filters".
% The S filter algorithm detects correlated data in a direction
% orthogonal to the w matrix it is operating on. This filter is run
% over the entire image to form the s_ matrices. The edge points
% are arbitrarily duplicated from the first row or column to provide
% a value at the negative index positions.
%
% Inputs
% w_h = Horizontal directional Laplacian matrix
% w_v = Vertical directional Laplacian matrix
% w_45 = Directional Laplacian matrix along the 45 degree axis
% w_135 = Directional Laplacian matrix along the 135 degree axis
%
% Outputs
% s_h = Horizontal S filtered matrix
% s_v = Vertical S filtered matrix
% s_45 = Diagonal S filtered matrix along the 45 degree axis
% s_135 = Diagonal S filtered matrix along the 135 degree axis
[m,n] = size( w_h );
m = m + 2;
n = n + 2;
t_h = zeros(m,n);
t_h(2:m-1,2:n-1) = w_h;
% First and last columns excluding corners
t_h(2:m-1,1) = w_h(1:m-2,2); t_h(2:m-1,n) = w_h(1:m-2,n-3);
% First and last rows excluding corners
t_h(1,2:n-1) = w_h(2,1:n-2); t_h(m,2:n-1) = w_h(m-3,1:n-2);
% Corners
t_h(1,1) = w_h(2,2); t_h(m,1) = w_h(m-3,2);
t_h(1,n) = w_h(2,n-3); t_h(m,n) = w_h(m-3,n-3);
t_v = zeros(m,n);
t_v(2:m-1,2:n-1) = w_v;
% First and last columns excluding corners
t_v(2:m-1,1) = w_v(1:m-2,2); t_v(2:m-1,n) = w_v(1:m-2,n-3);
% First and last rows excluding corners
t_v(1,2:n-1) = w_v(2,1:n-2); t_v(m,2:n-1) = w_v(m-3,1:n-2);
% Corners
t_v(1,1) = w_v(2,2); t_v(m,1) = w_v(m-3,2);
t_v(1,n) = w_v(2,n-3); t_v(m,n) = w_v(m-3,n-3);
t_45 = zeros(m,n);
t_45(2:m-1,2:n-1) = w_45;
% First and last columns excluding corners
t_45(2:m-1,1) = w_45(1:m-2,2); t_45(2:m-1,n) = w_45(1:m-2,n-3);
% First and last rows excluding corners
t_45(1,2:n-1) = w_45(2,1:n-2); t_45(m,2:n-1) = w_45(m-3,1:n-2);
% Corners
t_45(1,1) = w_45(2,2); t_45(m,1) = w_45(m-3,2);
t_45(1,n) = w_45(2,n-3); t_45(m,n) = w_45(m-3,n-3);
t_135 = zeros(m,n);
t_135(2:m-1,2:n-1) = w_135;
% First and last columns excluding corners
t_135(2:m-1,1) = w_135(1:m-2,2); t_135(2:m-1,n) = w_135(1:m-2,n-3);
% First and last rows excluding corners
t_135(1,2:n-1) = w_135(2,1:n-2); t_135(m,2:n-1) = w_135(m-3,1:n-2);
% Corners
t_135(1,1) = w_135(2,2); t_135(m,1) = w_135(m-3,2);
t_135(1,n) = w_135(2,n-3); t_135(m,n) = w_135(m-3,n-3);
% Do the S filter computation
for j = 2:m-1
for k = 2:n-1
% Do the vertical S filter
tmp = [t_h(j-1,k-1:k+1) t_h(j,k-1:k+1) t_h(j+1,k-1:k+1)];
s_v(j-1,k-1) = tmp(5) * abs( tmp(2) * abs( tmp(8) ) + ...
abs( tmp(2) ) * tmp(8) ) + ...
abs( tmp(5) ) * ( tmp(2) * abs( tmp(8) ) + ...
abs( tmp(2) ) * tmp(8) );
% Do the horizontal S filter
tmp = [t_v(j-1,k-1:k+1) t_v(j,k-1:k+1) t_v(j+1,k-1:k+1)];
s_h(j-1,k-1) = tmp(5) * abs( tmp(4) * abs( tmp(6) ) + ...
abs( tmp(4) ) * tmp(6) ) + ...
abs( tmp(5) ) * ( tmp(4) * abs( tmp(6) ) + ...
abs( tmp(4) ) * tmp(6) );
% Do the 135 degree S filter
tmp = [t_45(j-1,k-1:k+1) t_45(j,k-1:k+1) t_45(j+1,k-1:k+1)];
s_135(j-1,k-1) = tmp(5) * abs( tmp(1) * abs( tmp(9) ) + ...
abs( tmp(1) ) * tmp(9) ) + ...
abs( tmp(5) ) * ( tmp(1) * abs( tmp(9) ) + ...
abs( tmp(1) ) * tmp(9) ) / 2.;
% Do the 45 degree S filter
tmp = [t_135(j-1,k-1:k+1) t_135(j,k-1:k+1) t_135(j+1,k-1:k+1)];
s_45(j-1,k-1) = tmp(5) * abs( tmp(3) * abs( tmp(7) ) + ...
abs( tmp(3) ) * tmp(7) ) + ...
abs( tmp(5) ) * ( tmp(3) * abs( tmp(7) ) + ...
abs( tmp(3) ) * tmp(7) ) / 2.;
end
end
function f_img = lowpass( img, a );
% f_img = lowpass( img, a );
% Function to compute the low pass filtered version of img.
% The filter uses A. Vanzo, G. Ramponi and G. Sicuranza's
% lowpass filter, whose coefficients are given by
%   [ a    a    a ]
%   [ a  1-8a   a ]
%   [ a    a    a ]
% as documented in their paper "An Image Enhancement Technique
% Using Polynomial Filters".
%
% Inputs
% img = Image to be lowpass filtered
% a = Filtering constant, 0. <= a <= 0.1
%
% Outputs
% f_img = Lowpass filtered version of img
[m,n] = size( img );
m = m + 2;
n = n + 2;
t_img = zeros(m,n);
t_img(2:m-1,2:n-1) = img;
% First and last columns excluding corners
t_img(2:m-1,1) = img(1:m-2,2); t_img(2:m-1,n) = img(1:m-2,n-3);
% First and last rows excluding corners
t_img(1,2:n-1) = img(2,1:n-2); t_img(m,2:n-1) = img(m-3,1:n-2);
% Corners
t_img(1,1) = img(2,2); t_img(m,1) = img(m-3,2);
t_img(1,n) = img(2,n-3); t_img(m,n) = img(m-3,n-3);
% Initialize the lowpass filter
lpf = [a a a; a 1-8*a a; a a a];
% Do the lowpass filter computation
for j = 2:m-1
for k = 2:n-1
tmp = [t_img(j-1,k-1:k+1); t_img(j,k-1:k+1); t_img(j+1,k-1:k+1)];
f_img(j-1,k-1) = sum( sum( lpf .* tmp ) );
end
end
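A quick way to sanity-check the kernel: its coefficients sum to one for any a, so constant regions pass through unchanged. A Python sketch (interior pixels only, border replication omitted):

```python
import numpy as np

def lowpass_kernel(a):
    # 3x3 kernel: a everywhere except 1-8a at the centre; coefficients sum to 1
    k = np.full((3, 3), a)
    k[1, 1] = 1.0 - 8.0 * a
    return k

def lowpass_interior(img, a):
    # Plain 3x3 correlation over interior pixels (no border replication)
    k = lowpass_kernel(a)
    m, n = img.shape
    out = np.empty((m - 2, n - 2))
    for j in range(1, m - 1):
        for i in range(1, n - 1):
            out[j - 1, i - 1] = np.sum(k * img[j - 1:j + 2, i - 1:i + 2])
    return out
```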
One Dimensional De-Noising Source Code
function [mean,thresh,spn,spn_r] = dnld( s_type, w_name, snr, n_sim, t_low, t_high, t_points, j_0 )
% dnld( s_type, w_name, snr, n_sim, t_low, t_high, t_points, j_0 );
% dnld adds noise to the input signal to achieve the requested SNR
% and de-noises the signal using Donoho's soft thresholding algorithm.
% It uses w_name for the mother wavelet to decompose the signal.
%
% Inputs
% s_type = Data type: Blocks, Bumps, Doppler, Heavisine
% w_name = Wavelet filter to be used
% snr = Requested SNR
% n_sim = Number of simulations to run
% t_low = Minimum threshold
% t_high = Maximum threshold
% t_points = Number of threshold points
% j_0 = Beginning reconstruction level
%
% Outputs
% Threshold vs. SNR plot
% Generate the signal and add noise
if( strcmp( s_type, 'Blocks' ) == 1 )
s = blocks;
elseif( strcmp( s_type, 'Bumps' ) == 1 )
s = bumps;
elseif( strcmp( s_type, 'Heavisine' ) == 1 )
s = heavisine;
elseif( strcmp( s_type, 'Doppler' ) == 1 )
s = doppler;
else
error( 'Could not match data type, check spelling' );
end
m = length( s );
% Get the threshold table and noise floor variance
if( t_low ~= t_high )
thresh = vs_thresh( t_low, t_high, t_points );
else
thresh = t_low;
end
n_thresh = length( thresh );
n_sigma = n_floor( s, snr );
% levels = wmaxlev( m, w_name );
levels = 6;
randn( 'state', sum( 100*clock ) );
gn = randn( 1, m );
spn = s + n_sigma * gn;
figure;
subplot(3,1,2);
plot( spn, 'k' );
[thr,sorh,keepapp] = ddencmp( 'den', 'wv', spn );
clear thr sorh gn;
% Output some results
fid = fopen( 'f:\report\simfoml.doc', 'w' );
fprintf( fid, 'Threshold SNRI Mean SNRI Sigma\n' );
for i = 1:n_thresh
fprintf( 1, 'Processing Loop %d\n', i );
fprintf( fid, 'Threshold = %5.3f\n', thresh(i) );
snri = zeros(1, n_sim);
for j = 1:n_sim
randn( 'state', sum( 100*clock ) );
gn = randn( 1, m );
spn = s + n_sigma * gn;
[thr,sorh,keepapp] = ddencmp( 'den', 'wv', spn );
clear gn
% Do the inverse wavelet transform
spn_r = wdencmp( 'gbl', spn, w_name, j_0, thresh(i), 's', keepapp );
% Compute the SNRI
snri(j) = snri(j) + meas1_snri( s, spn, spn_r, fid );
if( j == 1 )
myplot( spn_r, thresh(i), snri(j), j_0 );
end
end
mean(i) = sum( snri ) / n_sim;
sig(i) = sqrt( sum( (snri - mean(i)).^2 ) / (n_sim - 1) );
clear snri
fprintf( fid, 'Mean SNRI: %6.4f Sigma SNRI: %6.4f\n\n', mean(i), sig(i) );
end
fclose( fid );
% figure;
subplot(3,1,1);
plot( thresh, mean, 'k' );
xlabel( 'Threshold' );
ylabel( 'SNR Improvement' );
title( ['SNRI vs Threshold for ', s_type, ', M = ', num2str(m), ', J_0 = ', num2str(j_0), ', ', ...
num2str(n_sim), ' Simulations'] );
function s = blocks( dur, cnt );
% s = blocks( dur, cnt );
% Function to compute a blocky wave using Donoho and Johnstone's function:
% f(t) = sum( h_j * K(t - t_j) ), K(t) = (1 + sign(t)) / 2
% where h = [4 -5 3 -4 5 -4.2 2.1 4.3 -3.1 2.1 -4.2]
% t_j = [0.1 0.13 0.15 0.23 0.25 0.40 0.44 0.65 0.76 0.78 0.81]
% Inputs
% dur = time duration of sampled signal, default = 1 sec
% cnt = number of data points to compute, default = 2048
%
% Outputs
% s = Signal
% First let's deal with the defaults
if nargin == 0
dur = 1.0;
cnt = 2048;
elseif nargin == 1
cnt = 2048;
end
% Initialize the vectors h and t_j
h = [4 -5 3 -4 5 -4.2 2.1 4.3 -3.1 2.1 -4.2];
t_j = [0.1 0.13 0.15 0.23 0.25 0.40 0.44 0.65 0.76 0.78 0.81];
% Now define the sample rate f_s
f_s = cnt / dur;
% Now initialize the time vector t
t = 0:cnt-1;
t = t / f_s;
% Now compute s over the intervals j = 1,...,J
s = zeros(1,cnt);
for j = 1:length(t_j)
k = (1 + sign( t - t_j(j) )) / 2;
s = s + h(j) * k;
end
function s = bumps( dur, cnt );
% Function to compute a bumpy wave using Donoho and Johnstone's function:
% f(t) = sum( h_j * K( (t - t_j) / w_j ) ), K(t) = (1 + abs(t))^-4
% where h = [4 5 3 4 5 4.2 2.1 4.3 3.1 5.1 4.2]
% w = [0.005 0.005 0.006 0.01 0.01 0.03 0.01 0.01 0.005 0.008 0.005]
% t_j = [0.1 0.13 0.15 0.23 0.25 0.40 0.44 0.65 0.76 0.78 0.81]
% dur = time duration of sampled signal, default = 1 sec
% cnt = number of data points to compute, default = 2048
% First let's deal with the defaults
if nargin == 0
dur = 1.0;
cnt = 2048;
elseif nargin == 1
cnt = 2048;
end
% Initialize the vectors h, w and t_j
h = [4 5 3 4 5 4.2 2.1 4.3 3.1 5.1 4.2];
w = [0.005 0.005 0.006 0.01 0.01 0.03 0.01 0.01 0.005 0.008 0.005];
t_j = [0.1 0.13 0.15 0.23 0.25 0.40 0.44 0.65 0.76 0.78 0.81];
% Now define the sample rate f_s
f_s = cnt / dur;
% Now initialize the time vector t
t = 0:cnt-1;
t = t / f_s;
% Now compute s over the intervals j = 1,...,J
s = zeros(1,cnt);
for j = 1:length(t_j)
k = (1 + abs( (t - t_j(j)) ./ w(j) )).^-4;
s = s + h(j) * k;
end
function s = heavisine( dur, cnt );
% s = heavisine( dur, cnt );
% Function to compute a goofy sine wave using Donoho and Johnstone's function:
% f(t) = 4*sin(4*pi*t) - sign(t - 0.3) - sign(0.72 - t)
% dur = time duration of sampled signal, default = 1 sec
% cnt = number of data points to compute, default = 2048
%
% Inputs
% dur = Signal duration in seconds
% cnt = Number of samples to compute
%
% Outputs
% s = Signal
% First let's deal with the defaults
if nargin == 0
dur = 1.0;
cnt = 2048;
elseif nargin == 1
cnt = 2048;
end
% Now define the sample rate f_s
f_s = cnt / dur;
% Now initialize the time vector t
t = 0:cnt-1;
t = t / f_s;
s = 4*sin(4*pi*t) - sign(t - 0.3) - sign(0.72 - t);
function s = doppler( dur, cnt, epsilon );
% s = doppler( dur, cnt, epsilon );
% Function to compute a chirped signal using Donoho and Johnstone's function:
% f(t) = ( t*(1-t) )^0.5 * sin( 2*pi*(1+epsilon) ./ (t+epsilon) ), epsilon = 0.05
% Inputs
% dur = time duration of sampled signal, default = 1 sec
% cnt = number of data points to compute, default = 2048
% epsilon = damping factor, default = 0.05
% Outputs
% s = Doppler signal
% First let's deal with the defaults
if nargin == 0
dur = 1.0;
cnt = 2048;
epsilon = 0.05;
elseif nargin == 1
cnt = 2048;
epsilon = 0.05;
elseif nargin == 2
epsilon = 0.05;
end
% Now define the sample rate f_s
f_s = cnt / dur;
% Now initialize the time vector t
t = 0:cnt-1;
t = t / f_s;
s = ( t .* (dur - t) ).^0.5 .* sin( 2*pi*(1+epsilon) ./ (t+epsilon) );
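The same Doppler test signal can be reproduced in a few NumPy lines (unit duration assumed, matching the defaults above):

```python
import numpy as np

def doppler(cnt=2048, epsilon=0.05):
    # f(t) = sqrt(t*(1-t)) * sin(2*pi*(1+eps)/(t+eps)) sampled on t in [0, 1)
    t = np.arange(cnt) / float(cnt)
    return np.sqrt(t * (1.0 - t)) * np.sin(2.0 * np.pi * (1.0 + epsilon) / (t + epsilon))

s = doppler(2048)
```

The envelope sqrt(t*(1-t)) bounds the signal by 0.5 in magnitude, while the instantaneous frequency grows without bound as t approaches zero, which is what makes it a demanding de-noising test case.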
Utilities
function [bv,dv] = bvdv( img, w_size, thresh, b_idx, d_idx );
% [bv, dv] = bvdv( img, w_size, thresh, b_idx, d_idx );
% bvdv computes the background variance / detail variance figure
% of merit for the input image. The variance is computed local to
% every pixel in the image using an odd window size (square) of w_size
% input by the user. The variance for every pixel is computed and a
% histogram of the variances is displayed. Based on this histogram
% the user is queried for a threshold which defines whether a given
% pixel is assigned to the background or detail category.
%
% Inputs
% img = Input image
% w_size = Window size for computing local variance
% thresh = Variance threshold between background and detail regions
% b_idx = Optional input for computing the BV measure
% d_idx = Optional input for computing the DV measure
%
% Outputs
% bv = Background variance
% dv = Detail variance
% Initialize parameters
[m,n] = size( img );
mid = floor( w_size / 2 );
w2 = w_size^2;
bv = 0.;
n_bv = 0.;
dv = 0.;
n_dv = 0.;
% Enlarge the image to deal with the edges.
m = m + 2;
n = n + 2;
t_h = zeros(m,n);
t_h(2:m-1,2:n-1) = img;
% First and last columns excluding corners
t_h(2:m-1,1) = img(1:m-2,2); t_h(2:m-1,n) = img(1:m-2,n-3);
% First and last rows excluding corners
t_h(1,2:n-1) = img(2,1:n-2); t_h(m,2:n-1) = img(m-3,1:n-2);
% Corners
t_h(1,1) = img(2,2); t_h(m,1) = img(m-3,2);
t_h(1,n) = img(2,n-3); t_h(m,n) = img(m-3,n-3);
% Compute the background variance
for j = 1:length(b_idx)
tmp = b_idx(j);
idx = [tmp-m-1 tmp-m tmp-m+1 tmp-1 tmp tmp+1 tmp+m-1 tmp+m tmp+m+1];
var = std( t_h(idx) )^2;
bv = bv + var;
end
bv = bv / length(b_idx);
% Compute the detail variance
for j = 1:length(d_idx)
tmp = d_idx(j);
idx = [tmp-m-1 tmp-m tmp-m+1 tmp-1 tmp tmp+1 tmp+m-1 tmp+m tmp+m+1];
var = std( t_h(idx) )^2;
dv = dv + var;
end
dv = dv / length(d_idx);
function [bv,b_idx,dv,d_idx] = bvdv_idx( img, w_size, thresh );
% [bv,b_idx,dv,d_idx] = bvdv_idx( img, w_size, thresh );
% bvdv_idx computes the background variance / detail variance figure
% of merit for the input image. The variance is computed local to
% every pixel in the image using an odd window size (square) of w_size
% input by the user. The variance for every pixel is computed and a
% histogram of the variances is displayed. Based on this histogram
% the user is queried for a threshold which defines whether a given
% pixel is assigned to the background or detail category.
%
% Inputs
% img = Input image
% w_size = Window size for computing local variance
% thresh = Variance threshold between background and detail regions
%
% Outputs
% bv = Background variance
% b_idx = Background variance pixel indices
% dv = Detail variance
% d_idx = Detail variance pixel indices
% Initialize parameters
[m,n] = size( img );
mid = floor( w_size / 2 );
w2 = w_size^2;
bv = 0.;
n_bv = 0;
dv = 0.;
n_dv = 0;
% Get the background and detail pixels.
% Enlarge the image to deal with the edges.
m = m + 2;
n = n + 2;
t_h = zeros(m,n);
t_h(2:m-1,2:n-1) = img;
% First and last columns excluding corners
t_h(2:m-1,1) = img(1:m-2,2); t_h(2:m-1,n) = img(1:m-2,n-3);
% First and last rows excluding corners
t_h(1,2:n-1) = img(2,1:n-2); t_h(m,2:n-1) = img(m-3,1:n-2);
% Corners
t_h(1,1) = img(2,2); t_h(m,1) = img(m-3,2);
t_h(1,n) = img(2,n-3); t_h(m,n) = img(m-3,n-3);
% Compute the local variance
for j = mid+1:m-mid
for k = mid+1:n-mid
var = reshape( t_h(j-mid:j+mid,k-mid:k+mid), 1, w2 );
var = std( var )^2;
if( var >= thresh )
dv = dv + var;
n_dv = n_dv + 1;
d_idx(n_dv) = (k-1) * m + j;
else
bv = bv + var;
n_bv = n_bv + 1;
b_idx(n_bv) = (k-1) * m + j;
end
end
end
bv = bv / n_bv;
dv = dv / n_dv;
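The classification rule above (local variance against a threshold) can be sketched compactly in Python. This illustrative version uses the sample variance (ddof = 1) to match MATLAB's std, and handles only interior pixels rather than replicating the border:

```python
import numpy as np

def bvdv(img, thresh):
    # Classify each interior pixel by its local 3x3 sample variance
    # (ddof = 1, matching MATLAB's std): below thresh -> background,
    # otherwise -> detail. Returns the mean variance of each class
    # (NaN when a class is empty).
    m, n = img.shape
    bg, dt = [], []
    for j in range(1, m - 1):
        for k in range(1, n - 1):
            v = img[j - 1:j + 2, k - 1:k + 2].var(ddof=1)
            (dt if v >= thresh else bg).append(v)
    avg = lambda xs: float(np.mean(xs)) if xs else float('nan')
    return avg(bg), avg(dt)
```

On a flat image every window is background; an image with a strong step edge pushes the windows straddling the edge into the detail class.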
function [snri,snri_bv,snri_dv] = meas2_snri( X, Xpn, Xpn_r, b_idx, d_idx )
% [snri,snri_bv,snri_dv] = meas2_snri( X, Xpn, Xpn_r, b_idx, d_idx )
% Function to compute the signal to noise ratio improvement given by:
% snri = 10*log10( sum( (img - img_c).^2 ) / sum( (img - img_f).^2 ) );
%
% Inputs
% X = Uncorrupted image
% Xpn = Corrupted signal
% Xpn_r = Filtered signal
%
% Outputs
% snri = Signal to noise ratio improvement
% snri_bv = SNRI of background pixels
% snri_dv = SNRI of detail pixels
snri = 10*log10( sum( sum( (X-Xpn).^2 ) ) / sum( sum( (X-Xpn_r).^2 ) ) );
snri_bv = 0.;
snri_dv = 0.;
if( nargin == 5 )
[m,n] = size( X );
m = m + 2;
n = n + 2;
tmpX = X;
X = zeros(m,n);
X(2:m-1,2:n-1) = tmpX;
clear tmpX;
tmpXpn = Xpn;
Xpn = zeros(m,n);
Xpn(2:m-1,2:n-1) = tmpXpn;
clear tmpXpn;
tmpXpn_r = Xpn_r;
Xpn_r = zeros(m,n);
Xpn_r(2:m-1,2:n-1) = tmpXpn_r;
clear tmpXpn_r;
snri_bv = 10*log10( sum( ( X(b_idx)-Xpn(b_idx) ).^2 ) / sum( ( X(b_idx)-Xpn_r(b_idx) ).^2 ) );
snri_dv = 10*log10( sum( ( X(d_idx)-Xpn(d_idx) ).^2 ) / sum( ( X(d_idx)-Xpn_r(d_idx) ).^2 ) );
end
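The SNRI definition reduces to one line. A Python sketch with a worked check (the residual error energy drops by a factor of 100, i.e. 20 dB):

```python
import math

def snri(clean, noisy, filtered):
    # SNR improvement in dB: residual error energy before vs after filtering
    before = sum((c - x) ** 2 for c, x in zip(clean, noisy))
    after = sum((c - y) ** 2 for c, y in zip(clean, filtered))
    return 10.0 * math.log10(before / after)

# Filtering shrinks the per-sample error from 1.0 to 0.1: a 20 dB improvement
gain = snri([0.0] * 10, [1.0] * 10, [0.1] * 10)
```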
function n_sigma = n_floor( img, snr );
% n_sigma = n_floor( img, snr )
% Function to compute the noise floor sigma for a given SNR.
% SNR = 20*log10( [signal sigma] / [noise sigma] )
%
% Inputs
% img = Uncorrupted signal
% snr = Requested SNR
%
% Outputs
% n_sigma = Noise floor sigma for requested SNR
[m,n] = size( img );
tmp = reshape( img, 1, m*n );
img_sigma = std( tmp );
n_sigma = img_sigma / 10.^(snr/20.);
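The same relation inverted for the noise sigma, as a Python sketch (sample standard deviation with ddof = 1, to match MATLAB's std):

```python
import math

def noise_sigma(signal, snr_db):
    # Noise standard deviation that yields the requested SNR in dB,
    # using SNR = 20*log10(sigma_signal / sigma_noise)
    m = sum(signal) / len(signal)
    s = math.sqrt(sum((x - m) ** 2 for x in signal) / (len(signal) - 1))
    return s / 10.0 ** (snr_db / 20.0)
```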
function thresh = vs_thresh( t_low, t_high, t_points );
% thresh = vs_thresh( t_low, t_high, t_points );
% Function to compute a table of thresholds around the VisuShrink value:
% thresh = sqrt( 2 * log( m ) )
%
% Inputs
% t_low = Minimum threshold
% t_high = Maximum threshold
% t_points = Number of threshold points
%
% Outputs
% thresh = Table of VisuShrink thresholds
step = (t_high - t_low) / t_points;
thresh = t_low:step:t_high;
PAGE 1

Image De-Noising Using Wavelet Shrinkage And Non-Linear Filtering by Thomas O. Hart B.S. University of Colorado, 1980 A thesis submitted to the University of Colorado at Denver in partial fulfillment of the requirement for the degree of Master of Science Electrical Engineering 1998

PAGE 2

1998 by Thomas O. Hart All Rights Reserved

PAGE 3

This Thesis for the Master of Science degree by Thomas O. Hart has been approved by Tamal Bose iii

PAGE 4

Hart, Thomas O. (M.S. Electrical Engineering) Image De-Noising Using Wavelet Shrinkage And Non-Linear Filtering Thesis Directed by Associate Professor Tamal Bose Abstract Many clever techniques to reduce noise while preserving the content of sensed data have been devised. Recent publications derive a Wavelet transform based method of noise reduction for multi-dimensional signals. The focus of this thesis is to quantify the performance of the Wavelet transform in its ability to reduce Gaussian noise while retaining salient features within the data stream. The analysis herein uses simulated signals and images, with Gaussian noise artificially added, to quantify performance of Wavelets in their ability to reduce white noise. To contrast the Wavelet method, a variant of Unsharp Masking is applied to the same data set. Figures of merit for both techniques are presented to gauge the strengths and weaknesses of their noise reduction capacity. This abstract accurately represents the content of the candidate's thesis. I recommend its publication. iv

PAGE 5

Signed Tamal Bose v

PAGE 6

CONTENTS
Chapter
1. Introduction ................................................. 1
2. Noise Measurement Techniques ................................. 4
2.1 Signal To Noise Ratio ....................................... 6
2.2 Background Variance Detail Variance ......................... 7
3. Wavelet Theory ............................................... 10
3.1 The Wavelet Transform ....................................... 10
3.2 Wavelet De-Noising .......................................... 24
4. Polynomial Filters ........................................... 30
4.1 Gradient Based Filtering .................................... 30
4.2 Cubic Sharpening ............................................ 33
4.3 Modified Cubic Sharpening ................................... 35
5. Computer Simulations and Comparative Analysis ................ 40
5.1 One Dimensional De-Noising Results .......................... 40
5.2 Two Dimensional De-Noising Results .......................... 44
5.2.1 Analysis of 'tire' ........................................ 47
5.2.2 Analysis of 'woman' ....................................... 52
vi

PAGE 7

CONTENTS (continued)
5.2.3 Analysis of 'julia' ....................................... 56
6. Conclusions .................................................. 60
Appendix A ...................................................... 62
References ...................................................... 89
vii

PAGE 8

FIGURES
Figure
2.1 Image 'tire' ................................................ 9
2.2 Image 'finger' .............................................. 9
3.1 Analysis Filter Bank ........................................ 11
3.2 Scaling and Dilation ........................................ 13
3.3 Wavelet Filter Bank ......................................... 16
3.4 Frequency vs. Time Wavelet Digraph .......................... 20
3.5 Simulated Three Tone Signal ................................. 21
3.6 Two Dimensional Wavelet Filter Bank ......................... 22
3.7 Two Dimensional Wavelet Digraph ............................. 23
3.8a Image 'wbarb' .............................................. 23
3.8b Wavelet Decomposition of Image 'wbarb' ..................... 23
3.9 Spectrum of Four Level Decomposition ........................ 28
3.10 Signal Reconstruction vs. In ............................... 29
4.1 Edge Determination .......................................... 31
4.2 Laplacian of Image 'woman' .................................. 33
4.3 Cubic Filtering Block Diagram ............................... 35
4.4 Modified Cubic Sharpening ................................... 36
viii

PAGE 9

FIGURES (continued)
Figure
5.1 One Dimensional De-Noising Signal Set ....................... 41
5.2a Wavelet De-Noising of 'Blocks' ............................. 43
5.2b Wavelet De-Noising of 'Heavisine' .......................... 43
5.2c Wavelet De-Noising of 'Bumps' .............................. 43
5.2d Wavelet De-Noising of 'Doppler' ............................ 43
5.3a Image 'tire' ............................................... 44
5.3b Image 'woman' .............................................. 44
5.3c Image 'julia' .............................................. 45
5.4a Original Image 'tire' ...................................... 47
5.4b Original Image 'tire' + Noise .............................. 47
5.5a SNRI of Row Wise De-Noising 'tire' ......................... 47
5.5b Row De-Noised Image 'tire' ................................. 47
5.6a SNRI of Column Wise De-Noising 'tire' ...................... 48
5.6b Column De-Noised Image 'tire' .............................. 48
5.7a SNRI of Row/Column De-Noising 'tire' ....................... 48
ix

PAGE 10

FIGURES (continued)
Figure
5.7b Row/Column De-Noised Image 'tire' .......................... 48
5.8a SNRI of Non-Separable De-Noising ........................... 49
5.8b Non-Separable De-Noised Image 'tire' ....................... 49
5.9a SNRI of Modified Cubic Filtering De-Noising ................ 49
5.9b Modified Cubic Filtering De-Noised Image 'tire' ............ 49
5.10a Original Image 'woman' .................................... 52
5.10b Original Image 'woman' + Noise ............................ 52
5.11a SNRI of Row Wise De-Noising 'woman' ....................... 52
5.11b Row De-Noised Image 'woman' ............................... 52
5.12a SNRI of Column Wise De-Noising 'woman' .................... 53
5.12b Column De-Noised Image 'woman' ............................ 53
5.13a Row/Column De-Noised Image 'woman' ........................ 53
5.13b SNRI of Row/Column De-Noising 'woman' ..................... 53
5.14a SNRI of Non-Separable De-Noising .......................... 54
5.14b Non-Separable De-Noised Image 'woman' ..................... 54
5.15a SNRI of Modified Cubic Filtering De-Noising ............... 54
x

PAGE 11

FIGURES (continued)
Figure
5.15b Modified Cubic Filtering De-Noised Image 'woman' .......... 54
5.16a Original Image 'julia' .................................... 56
5.16b Original Image 'julia' + Noise ............................ 56
5.17a SNRI of Row Wise De-Noising 'julia' ....................... 56
5.17b Row De-Noised Image 'julia' ............................... 56
5.18a SNRI of Column Wise De-Noising 'julia' .................... 57
5.18b Column De-Noised Image 'julia' ............................ 57
5.19a Row/Column De-Noised Image 'julia' ........................ 57
5.19b SNRI of Row/Column De-Noising 'julia' ..................... 57
5.20a SNRI of Non-Separable De-Noising .......................... 58
5.20b Non-Separable De-Noised Image 'julia' ..................... 58
5.21a SNRI of Modified Cubic Filtering De-Noising ............... 58
5.21b Modified Cubic Filtering De-Noised Image 'julia' .......... 58
xi

PAGE 12

TABLES
Table
2.1 BVDV Comparison ............................................. 9
3.1 RiskShrink Coefficients ..................................... 25
5.1 Performance Summary for Image 'tire' ........................ 50
5.2 Performance Summary for Image 'woman' ....................... 55
5.3 Performance Summary for Image 'julia' ....................... 59
6.1 Performance Summary with Ranking ............................ 61
xii

PAGE 13

1. Introduction The focus of image filtering and restoration is to improve the quality of an image that has been degraded via some systematic or random process. For a signal corrupted by Gaussian or impulsive noise, the goal is to reconstruct from the noisy image a representation of the true image. Since all restoration techniques produce an approximation of the original image, an element of uncertainty is ever present. Some methods of restoration develop a formal bound on reconstructing the image. For example, wavelet de-noising minimizes the error in a least squares sense, providing some mathematical basis for the quality of reconstruction. Other methods rely on data content alone and leave it to the viewer to judge restoration quality. In a broader sense, image restoration is a subset of the diverse field of image processing. It is closely related to data compression and image enhancement. The intent of image enhancement is to accentuate features in an image or portion of an image. This can potentially degrade other aspects of the image, but that may be acceptable. Image enhancement presupposes the existence of a feature, leading to pattern recognition, which is beyond the scope of this thesis but is a consideration when applying image enhancement. As is the case with image restoration, enhancement can be done spatially or in other domains, most notably the frequency 1

PAGE 14

domain via the Fourier transform. This has the advantage of efficient implementations. Data compression is rooted to image enhancement and restoration since many applications of data compression are lossy and introduce errors in the decompression process. Although lossless techniques are available, the advantage of lossy algorithms lies in larger compression ratios. If an imperfect reconstruction can be tolerated, then lossy compression algorithms substantially reduce file sizes. The goal of this thesis is to examine a relatively new method of image restoration by application of the wavelet transform. There are six chapters comprising this thesis; a brief synopsis follows. Chapter 2 introduces noise and its effects on m-dimensional signals. Alternative methods of noise compensation are briefly mentioned and references given. Although noise is easily distinguished by the human senses, it is harder to quantify numerically. In Chapter 2, some quantitative measures of noise are described and examples given to reinforce the concepts. In Chapter 3 the wavelet transform is explained in a heuristic fashion. The central tenet of wavelet theory is multi-resolution, which is demonstrated within the heuristic model via the Haar wavelet. The Haar wavelet is nothing more than a highpass ( (x_i - x_(i-1))/2 ) and lowpass ( (x_i + x_(i-1))/2 ) filter which conveniently fits the wavelet criteria. Other wavelet concepts are given and two examples provided. The 2

PAGE 15

first example decomposes a one dimensional signal with the wavelet transform into a time vs. frequency digraph exemplifying the virtues of multi-resolution. The second example uses the wavelet transform to decompose an image into its constituent highpass and lowpass components. With the fundamentals of wavelet transforms exposed, Chapter 3 moves on to de-noising a multi-dimensional signal with wavelet shrinkage, that is, selective adjustment of wavelet coefficient magnitudes to reject noise in the signal. A more traditional approach to image restoration is with non-linear filtering. Chapter 4 presents a filtering scheme centered on Laplacian filtering of an image. Modifications to the basic Laplacian filter are made to reduce the filter's sensitivity to noise. To highlight the advantages of the modified Laplacian filter some example images processed with the Laplacian based methods are shown. In Chapter 5 computer simulations, implemented with MATLAB, take an image, add Gaussian noise and then process the noisy image with the algorithms from Chapters 3 and 4. Results are presented in tabular form and interpreted. Note that all images used in this thesis are available in the MATLAB wavelets toolkit. Finally, Chapter 6 concludes the thesis with some remarks about the tradeoffs between wavelet de-noising and Laplacian filtering. 3

PAGE 16

2. Noise Measurement Techniques The ability to continuously measure the state of any system, actively, passively, remotely or locally, is inevitably corrupted by noise. The degree to which this corrupts meaningful data varies from insignificant to catastrophic. In between these two extremes lies the challenge of removing noise from sensed data to produce an accurate measurement of the observed system. The dilemma does not end here; not all noise is the same. Impulsive, white, colored, shot and thermal noise comprise many of the common types encountered. Additionally, noise characteristics could be altered as one steps through the system. Any band limiting process modifies true white noise (infinite frequency content) to colored noise (finite frequency content). Clearly, the corruptive influence of noise is not trivial and has prompted copious amounts of research spanning many decades. Classical techniques geared towards either noise reduction or extraction of sensor measurements from noisy data include:
1. Kalman filtering - strictly speaking not a noise reduction technique, Kalman filtering [1] is a state estimator. It estimates the states of a system from the system inputs and outputs. 4

PAGE 17

2. Matched bandwidth filtering - a classical noise reduction technique for 1-D signals that achieves an SNR processing gain by narrowband filtering around a spectral feature of interest.
3. Frame averaging - a conceptually simple 2-D technique whereby an SNR processing gain is achieved by averaging successive video frames [2].
4. Mean/Median filtering - a kernel driven technique for 2-D data that determines, for data within an n x n window, whether the center pixel should be ascribed a mean or median value based on the window statistics [3].
5. Segmentation - a method of segmenting the image into regions of similar structure. Simple implementations use thresholding criteria for segmentation, followed by filtering and then reconstruction of the image [4].
6. Histogram equalization - to more efficiently use under-utilized gray scales, the distribution of the image histogram is reorganized. A typical application of histogram equalization would result in a straight line distribution of pixels [2].
7. Homomorphic processing - an image enhancement technique which simultaneously compresses dynamic range and increases image contrast [5].
Not intended to be an exhaustive compendium of noise reduction algorithms, these examples illustrate the variety of methods and approaches one could use to process the data at hand. 5
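The mean/median filter of item 4 can be sketched in a few lines. The following is an illustrative NumPy sketch, not code from the thesis; the 3x3 window and the variance threshold used to choose between mean and median are demonstration values.

```python
import numpy as np

def mean_median_filter(img, var_thresh=100.0):
    """Kernel-driven 2-D filter: for each interior pixel, examine its 3x3
    window and assign the mean in smooth regions or the median where the
    local variance is high (edges, impulses)."""
    out = img.astype(float).copy()
    rows, cols = img.shape
    for r in range(1, rows - 1):
        for c in range(1, cols - 1):
            win = img[r-1:r+2, c-1:c+2].astype(float)
            if win.var() >= var_thresh:
                out[r, c] = np.median(win)   # preserve edges / reject impulses
            else:
                out[r, c] = win.mean()       # smooth background noise
    return out
```

An isolated impulse lands in the median branch and is removed, while uniform regions pass through the mean branch unchanged.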

PAGE 18

2.1 Signal to Noise Ratio Based on the qualitative nature of noise just described, a logical question is: how can noise be quantitatively characterized? One measure, the Signal to Noise Ratio, or SNR, expressed in decibels (dB), is defined as
SNR = 10 log10( σ²_Signal / σ²_Noise ).   (2.1)
An observation about SNR concerns its interpretation in 1-D and 2-D applications. One dimensional signals, e.g., communications, provide a more intuitive understanding of SNR. Typically, SNRs are well controlled as a signal propagates along a constrained and well defined path. To begin with, signals of this type are generated and carried in an electronic medium, facilitating control and measurement with finely tuned electronic devices. Corruptive influences are well understood and compensated for as a signal jumps from one relay station to the next. Because SNRs can be accurately measured, signals are maintained at high SNRs, allowing incredibly low bit error rates (10^-6 or less) as a result. In contrast, SNR characterization for two dimensional signals is much more subjective. The medium is not necessarily electronic; typically pixels are the unit of measurement and absolute measurements are not always known. For images the SNR is measured in dB and is defined relative to the original and noisy representations as such: 6

PAGE 19

SNR = 10 log10( Σ_j Σ_k I² / Σ_j Σ_k (I - I_N)² ).   (2.2)
In this expression I is the original image and I_N the noisy version of I. This definition of SNR is extended to gauge how well a given restoration technique has improved the image from a corrupted version of itself. In this sense the SNR Improvement (SNRI) is computed from I, I_N and I_R, where I_R is the output of the restoration process. With these, the SNRI is given by:
SNRI = 10 log10( Σ_j Σ_k (I - I_N)² / Σ_j Σ_k (I - I_R)² ).   (2.3)
2.2 Background Variance Detail Variance While the SNR measure is universally accepted and straightforward to compute, it does have a major deficiency. Specifically, it provides no information about local features within an image. To characterize local features in an image a technique known as the Background Variance Detail Variance (BVDV) can be implemented. BVDV categorizes pixels into background or detail regions according to their local statistics. As part of the BVDV process two accumulation registers are set up for the background and detail measures. This algorithm works by computing 7

PAGE 20

the variance in an n x n window within the image and comparing the variance value to an arbitrary threshold. If the variance exceeds the threshold, the center pixel of the kernel is considered as belonging to a detail region. Its coordinates are kept along with all other detail pixel coordinates and the variance added to the detail register. The same is true for kernel variances beneath the threshold. This process is applied to every pixel in the image. In the end, the background and detail registers are divided by the respective number of elements comprising the register sum to yield an average BV and DV measure. To exemplify, two images are shown below. The first image, Figure 2.1, is of a tire; it is characterized by expansive regions of uniform pixelation and well defined edges or areas of sharp contrast. The second image, Figure 2.2, is a section of fingerprint. Relative to Figure 2.1 its features are much less distinct: there is more texture in the image and the edges, while visually apparent, are much smoother than the former. Applying the BVDV algorithm provides more information about the content of these images than does SNR. Intuitively, images with uniform regions would have a lower background variance than one with a textured appearance. Conversely, features with well defined boundaries and sharp contrast would exhibit a large detail variance. The BVDV numbers for these two images are given in Table 2.1. Consistent with the intuitive notion discussed above, the BVDV values for these images fit this paradigm. 8
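The BVDV bookkeeping described above condenses to a few lines. This is an illustrative NumPy sketch (the thesis' own MATLAB implementation appears in Appendix A); the 3x3 window and the threshold of 50 are demonstration values, and the sample variance matches MATLAB's std with N-1 normalization.

```python
import numpy as np

def bvdv(img, win=3, thresh=50.0):
    """Classify each pixel as background or detail by the variance of its
    win x win neighborhood; return average background variance (BV) and
    average detail variance (DV)."""
    mid = win // 2
    bg, dt = [], []
    rows, cols = img.shape
    for r in range(mid, rows - mid):
        for c in range(mid, cols - mid):
            v = img[r-mid:r+mid+1, c-mid:c+mid+1].astype(float).var(ddof=1)
            (dt if v >= thresh else bg).append(v)   # detail vs. background register
    bv = sum(bg) / len(bg) if bg else 0.0
    dv = sum(dt) / len(dt) if dt else 0.0
    return bv, dv
```

A uniform image yields BV = DV = 0, while a high-contrast pattern pushes every window into the detail register, mirroring the intuition behind Table 2.1.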

PAGE 21

Table 2.1 - BVDV Comparison

            Tire             Fingerprint
         BV      DV         BV       DV
        3.9   413.9        8.4    157.2
9

PAGE 22

3. Wavelet Theory Transformation of a time domain signal into its fundamental temporal and frequency components is achieved with the wavelet transform. At the heart of wavelet theory is multi-resolution, whereby a signal is decomposed at varying scales. Scales are on the order of 2^j, where j is typically no greater than 5 or 6 but could, theoretically, be infinite. The wavelet implementation discussed in this thesis is built around a cascaded filter bank and the interaction of the lowpass and highpass filters comprising the wavelet filter bank. Central to construction of a successful wavelet filter bank is the concept of perfect reconstruction, the ability to reconstruct a distortionless, un-aliased version of the input signal. In the following sections the dilation and wavelet equations are given. To solidify the meaning of dilation and translation, the Haar wavelet is introduced to graphically illustrate the interplay between scale and dilation/translation. To show that the wavelet transform really works, a contrived example of a multi-tone signal and an image are transformed and the wavelet coefficients visualized. Finally, signal de-noising is presented for removal of Gaussian noise. 10
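The cascaded filter bank previewed here can be sketched compactly. The following is an illustrative NumPy sketch of the filtering-and-decimation recursion developed in the next section, using the Haar filter pair; it is my own sketch, not the thesis' MATLAB code.

```python
import numpy as np

# Haar analysis pair (lowpass h0, highpass h1), as in the filter bank heuristic.
h0 = np.array([0.5, 0.5])
h1 = np.array([0.5, -0.5])

def decompose(x, J):
    """Filter and decimate J times: keep each level's detail coefficients d_j,
    pass the lowpass output down to the next level."""
    details, a = [], np.asarray(x, float)
    for _ in range(J):
        d = np.convolve(a, h1)[1::2]   # highpass, then downsample by 2
        a = np.convolve(a, h0)[1::2]   # lowpass, then downsample by 2
        details.append(d)
    return a, details
```

Each pass roughly halves the data, so level j holds on the order of N/2^j coefficients, which is the multi-resolution structure the chapter develops.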

PAGE 23

heuristic of the Wavelet transform is built [6,7]. This figure will be subsequently amended, but for now it suffices to diagram the 'analysis' portion of the complete filter bank.
[Figure 3.1 - Analysis Filter Bank]
Each level of decomposition in this figure results in two filtering operations on the input. At the first level (by convention the first level of decomposition starts at J and progresses downward to 1), j = J, x is lowpass and highpass filtered by H0 and H1 respectively. After decimating the filter outputs by two, the highpass output d_j is stored for later use and the lowpass outputs are passed into the next level of filtering, j = J-1. This iterative process of filtering and decimation will be termed decomposition. If the process stops after J iterations then it can be said that x has undergone a J level decomposition. The purpose is to decompose x in time and frequency to isolate the details of the signal. This is the crux of multi-resolution and is achieved with the dilation equation φ(t) defined as: 11

PAGE 24

Dilation Equation:
φ(t) = 2 Σ_k h0(k) φ(2t - k).   (3.1)
A distinguishing characteristic of φ(t) is its bounded nature; it is supported (nonzero) over the interval [0,N] and zero elsewhere. This is because φ(t) is a simultaneous two time scale function. Without the two time scale feature, φ(t) would have an exponential solution and would be excluded from multi-resolution analysis. A final note about φ(t) involves the influence of j and k: j results in dilation of the time scale (φ(2^j t)) and k translates φ(t). It can be seen from Figure 3.1 that the highpass branch is associated with the wavelet equation w(t). Through an appropriate choice for the dilation equation φ(t), the decomposition of x produces coefficients d_j whose scale is proportional to 2^j. For future reference the wavelet equation is defined as:
Wavelet Equation:
w(t) = 2 Σ_k h1(k) φ(2t - k).   (3.2)
It is important to understand the significance of the translation and dilation operations. Multi-resolution is fundamental to the wavelet paradigm and is a direct result of (3.1) and (3.2). To illustrate the point, consider the box function 12

PAGE 25

diagrammed in Figure 3.2. It shows φ(t) in its basic form (j=0), plus the dilation of φ(t) (j=1) and its corresponding wavelet equation.
[Figure 3.2 - Scaling and Dilation: φ(t), φ(2t), φ(2t-1), w(t), w(2t) and w(2t-1)]
Of interest in this diagram are several features. By definition, φ(t) = 1 over the interval [0,1] and zero elsewhere. From (3.1),
φ(t) = 2[ h0(0) φ(2t) + h0(1) φ(2t-1) ];   (3.3)
assume for argument's sake the lowpass filter h0(k) to be [1/2, 1/2], then 13

PAGE 26

φ(t) = φ(2t) + φ(2t-1)   (3.4)
which confirms the premise in (3.1) as well as the graphical model in Figure 3.2. Similarly, from (3.2) with an h1(k) of [1/2, -1/2],
w(t) = 2[ h1(0) φ(2t) + h1(1) φ(2t-1) ] = φ(2t) - φ(2t-1)   (3.5)
again confirming the premise of (3.2) and Figure 3.2. To confirm the assumption of the filtering effects of h0 and h1, take the Z transform of h0 and h1:
H0(z) = (1/2)(1 + z^(-1))   (3.6)
H1(z) = (1/2)(1 - z^(-1)).   (3.7)
Clearly, the net effect of the former is to average x_i and x_(i-1) while the latter differences x_i and x_(i-1). This is nothing more than a set of lowpass and highpass filters. Another observation comes from writing h0 and h1 in vector notation:
h0 = [1/2  1/2],  h1 = [1/2  -1/2].   (3.8)
Taking the dot product of these vectors yields a null result, implying h0 ⊥ h1, which satisfies another criterion for the wavelet filter bank. The choice of a box function for φ(t) is not coincidental. It turns out to be the Haar wavelet, a simple but useful wavelet for demonstrating the virtues of scaling and dilation. 14
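The orthogonality of the Haar pair and its perfect reconstruction property are easy to exercise numerically. The sketch below is mine, not the thesis' code; the synthesis filters and their scaling by 2 are assumptions chosen so the analysis/synthesis cascade has unit gain with this decimation phase.

```python
import numpy as np

h0 = np.array([0.5, 0.5])    # H0(z) = (1 + z^-1)/2, lowpass (averager)
h1 = np.array([0.5, -0.5])   # H1(z) = (1 - z^-1)/2, highpass (differencer)
f0 = np.array([1.0, 1.0])    # synthesis lowpass, 2*H1(-z)
f1 = np.array([-1.0, 1.0])   # synthesis highpass, -2*H0(-z)

def analyze(x):
    a = np.convolve(x, h0)[1::2]     # lowpass, decimate by 2
    d = np.convolve(x, h1)[1::2]     # highpass, decimate by 2
    return a, d

def synthesize(a, d, n):
    ua = np.zeros(2 * len(a)); ua[::2] = a   # upsample approximations
    ud = np.zeros(2 * len(d)); ud[::2] = d   # upsample details
    return (np.convolve(ua, f0) + np.convolve(ud, f1))[:n]
```

Running a random signal through one analysis/synthesis stage recovers it exactly, the perfect reconstruction property developed in the pages that follow.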

PAGE 27

The question now is: how do we define φ(t) in general? This is answered by applying the continuous Fourier transform to both sides of φ(t):
Φ(ω) = ∫ φ(t) e^(-jωt) dt   (3.9)
Φ(ω) = 2 ∫ Σ_k h0(k) φ(2t - k) e^(-jωt) dt   (3.10a)
Φ(ω) = 2 Σ_k h0(k) ∫ φ(2t - k) e^(-jωt) dt.   (3.10b)
With application of variable substitution and some more reduction, the final form of Φ(ω) is given by
Φ(ω) = H0(ω/2) Φ(ω/2).   (3.11)
This is an interesting result in that it defines a recursive process for solving Φ(ω). By repeating the same steps,
Φ(ω) = H0(ω/2) H0(ω/4) Φ(ω/4)   (3.12)
     = H0(ω/2) H0(ω/4) H0(ω/8) Φ(ω/8)   (3.12a)
15

PAGE 28

     = ∏_(j=1..∞) H0(ω/2^j) Φ(0)   (3.12b)
     = ∏_(j=1..∞) H0(ω/2^j).   (3.12c)
So φ(t) is derived from an infinite product of sums. Clearly, the filter coefficients h0 are fundamental to the development of φ(t), warranting examination of h0 and what its properties might be. From Figure 3.1 it is apparent that h0 and h1 are filtering operations, but what are the characteristics of these filters? If and how are they related? x(t) enters from the left and is acted upon in some fashion (filtering) to produce detail coefficients d_J and an input stream for the next filtering level, but then what? To help answer these questions, Figure 3.3 shows an amended Figure 3.1 which introduces a 'synthesis' filter bank receiving the detail and approximation coefficients, d_J and a_J respectively, from the analysis portion of the filter bank.
[Figure 3.3 - Wavelet Filter Bank]
16
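The recursion behind the infinite product can be checked numerically by iterating the two-scale relation (3.1) starting from a single unit sample (often called the cascade algorithm; the name and implementation are my own illustration, not from the thesis). For the Haar h0 the iteration reproduces the box function.

```python
import numpy as np

def cascade(h0, iters):
    """Iterate phi <- 2 * sum_k h0(k) phi(2t - k) on a dyadic grid:
    upsample the current samples (t -> 2t) and convolve with 2*h0."""
    phi = np.array([1.0])
    g = 2.0 * np.asarray(h0)          # the factor 2 from eq. (3.1)
    for _ in range(iters):
        up = np.zeros(2 * len(phi))
        up[::2] = phi                 # dilate the grid
        phi = np.convolve(up, g)      # combine the translates h0(k) phi(2t-k)
    return phi
```

After four iterations with h0 = [1/2, 1/2] the samples are 1 on [0,1) and 0 elsewhere, confirming that the Haar two-scale relation converges to the box function of Figure 3.2.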

PAGE 29

The complete set of filters is comprised of h0, h1, f0 and f1. Figure 3.3 also shows the reconstruction x̂ obtained from passing x through the filter bank. Ideally x̂ = x, that is, x̂ is nothing more than a delayed version of x. This is called perfect reconstruction, and x̂ = x when: 1. There is no aliasing of X(ω). 2. There is no distortion of x. The focus is now on the relationship between h0, h1, f0 and f1. Consider the Z transform of h0,
H0(z) = Σ_k h0(k) z^(-k);   (3.13)
the Z transforms of the remaining filters are similarly defined. Perfect reconstruction is guaranteed when the following criteria are met:
Perfect Reconstruction:
F0(z) H0(z) + F1(z) H1(z) = 2z^(-l)   (3.14a)
F0(z) H0(-z) + F1(z) H1(-z) = 0   (3.14b)
where l is the delay of the filter. From these equations, it can be shown that alias cancellation occurs when
F0(z) = H1(-z)   (3.15a)
17

PAGE 30

and
F1(z) = -H0(-z).   (3.15b)
Finally, to complete the analysis, from (3.14a) and (3.14b) define the product filters P0 and P1:
P0(z) = F0(z) H0(z)   (3.16a)
P1(z) = F1(z) H1(z).   (3.16b)
When defining F0 and F1 from (3.15a) and (3.15b), a fallout of the product filters is the relation between P0 and P1, namely,
P1(z) = -P0(-z).   (3.17)
The relevance here is that P0(z) is in a natural form for spectral factorization [8], as well as satisfying the perfect reconstruction criterion P0(z) - P0(-z) = 2z^(-l). A final note about choosing H0 was put forth by Smith and Barnwell [9], providing guidance on how H1 (and hence F0 and F1) can be found from H0. For an H0 correctly chosen to satisfy the above criteria, H1, F0 and F1 are directly obtained from (3.15a) and (3.15b). The bonus offered by the Smith-Barnwell criteria of (3.15a) and (3.15b) is an orthogonal filter bank, one of the fundamental concepts defining a successful wavelet based filter bank. Referring back to Figure 3.1, the output of the wavelet equation (3.2) is a set of wavelet coefficients d_j for each reconstruction level 1 <= j <= J. These 18

PAGE 31

coefficients comprise the time and frequency content of the input signal. A common method of analysis is plotting the coefficient magnitude versus index. Another, more intuitive, method constructs the layered matrix shown in Figure 3.4. In this way each coefficient assumes a pixel like character where the amplitude is proportional to the magnitude of the coefficient. This graph shows the inverse relationship between time and frequency; increased resolution in one dimension invariably leads to decreased resolution in the other. The advantage, though, lies in multi-resolution. The corresponding spectrum of a Fourier transformed signal would indicate the presence of signal energy and frequency content but it would give no temporal identification. Observe that the number of wavelet coefficients at a given level of reconstruction is roughly equal to the size of the dataset divided by 2 raised to the power of the reconstruction level. To finalize the foregoing discussion, two simple but illustrative examples will be given. The first example shows one dimensional decomposition and the second two dimensional decomposition. In Figure 3.5, a signal s(t) with three separate tones and a low level of added Gaussian noise is shown. A five level wavelet transform of s(t) is taken and shown immediately below the graph of s(t). In the wavelet domain the three tones are seen both temporally and in frequency (the vertical axis is reconstruction level). As can be seen the three tones correlate in time to the graph of 19

PAGE 32

[Figure 3.4 - Frequency vs. Time Wavelet Digraph]
s(t). Note that there is a temporal delay in the wavelet domain due to the delay through the filters. For the two dimensional case, Figure 3.1 must be extended to two dimensions. Conceptually, the wavelet transform still performs the same decomposition at varying scales except now the input is an image and there are four outputs from the filter bank as shown in Figure 3.6. This figure shows four branches through the filter bank resulting in four filtering operations on the image: 20


1. Lowpass-Lowpass (LL): these coefficients are passed on to the next level of the wavelet transform.
2. Lowpass-Highpass (LH): these coefficients indicate horizontal details in the image.
3. Highpass-Lowpass (HL): these coefficients indicate vertical details in the image.
4. Highpass-Highpass (HH): these coefficients indicate diagonal details in the image.

Figure 3.5 - Simulated Three Tone Signal

Similar to Figure 3.4, the convention for visualizing the four sets of coefficients is shown below in Figure 3.7.


Figure 3.6 - Two Dimensional Wavelet Filter Bank

The varying scales are seen in Figure 3.7 as the reconstruction progresses from level to level. The high frequency components are seen in level 3, the medium range components in level 2 and, finally, the lowest frequency components in level 1. Also shown in level 1 are the lowpass-lowpass coefficients, known as the approximation coefficients. Using the image 'wbarb', shown in Figure 3.8a, as the input to the wavelet transform results in the decomposition seen in Figure 3.8b. Features of interest can be seen in the respective filter outputs. This image has obvious vertical, diagonal and horizontal content that is seen in the HL, HH and LH regions, respectively, of levels 1 and 2.
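A one-level version of the Figure 3.6 filter bank can be sketched with two-tap Haar averaging and differencing (an illustrative stand-in for the Daubechies filters used later in the thesis); the orientation claims for the LH/HL/HH bands can then be checked on a synthetic edge. The function and variable names are assumptions, not the thesis's code:

```python
import numpy as np

def haar_analysis_2d(img):
    """One level of the Figure 3.6 filter bank: filter and downsample the
    row direction, then the column direction, on each branch."""
    x = img.astype(float)
    # Row direction: lowpass = pairwise average, highpass = pairwise difference.
    lo_r = (x[:, 0::2] + x[:, 1::2]) / 2.0
    hi_r = (x[:, 0::2] - x[:, 1::2]) / 2.0
    # Column direction on each branch.
    LL = (lo_r[0::2, :] + lo_r[1::2, :]) / 2.0   # passed to the next level
    LH = (lo_r[0::2, :] - lo_r[1::2, :]) / 2.0   # horizontal details
    HL = (hi_r[0::2, :] + hi_r[1::2, :]) / 2.0   # vertical details
    HH = (hi_r[0::2, :] - hi_r[1::2, :]) / 2.0   # diagonal details
    return LL, LH, HL, HH

# A purely vertical edge excites the HL band but not LH or HH.
img = np.zeros((8, 8))
img[:, 3:] = 1.0
LL, LH, HL, HH = haar_analysis_2d(img)
```

As the band names suggest, each output is a quarter-size image, which is why the layered Figure 3.7 arrangement tiles exactly.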


Figure 3.7 - Two Dimensional Wavelet Digraph

Figure 3.8b - Wavelet decomposition of 'wbarb'
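The coefficient count noted earlier (dataset size divided by 2 raised to the level) is easy to verify with a toy one-dimensional decomposition; the Haar averaging and differencing below is an illustrative stand-in for the thesis's Daubechies filters, and the function name is an assumption:

```python
import numpy as np

def haar_detail_counts(x, levels):
    """Return the number of detail coefficients produced at each level of
    a Haar decomposition (finest level first)."""
    a = np.asarray(x, dtype=float)
    counts = []
    for _ in range(levels):
        d = (a[0::2] - a[1::2]) / 2.0   # detail coefficients at this level
        a = (a[0::2] + a[1::2]) / 2.0   # approximation passed downward
        counts.append(d.size)
    return counts

counts = haar_detail_counts(np.arange(1024), 5)
# Each level holds n / 2**level coefficients: [512, 256, 128, 64, 32].
```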


3.2 Wavelet De-Noising

A consequence of transforming an evenly sampled, time-varying function into the wavelet domain is suppression of noise via judicious shrinkage of certain wavelet coefficients. The theory was put forth by Donoho and Johnstone in [10,11] as a way to reject Gaussian noise, resulting in a reconstruction that is at least as smooth, in a probabilistic sense, as the true signal. Donoho and Johnstone develop the model as follows. Let

  y_i = f(t_i) + e_i,  i = 1, ..., n  (3.17)

be the sample set, with e_i independently distributed Gaussian noise of zero mean and variance σ². The goal is to estimate from y_i an f̂ such that ||f̂ − f||² is minimized. Thus, de-noising with wavelet shrinkage is a mean squared error approach to signal estimation. The minutiae of wavelet shrinkage are presented in detail in [10]. In essence, the authors develop their theory around a probabilistic model in which, through the appropriate choice of a threshold, signal reconstruction and noise suppression are optimized in a mean squared error sense. The probabilistic model does constrain this statement to the condition that f̂ is at least as smooth as f with high probability. Admittedly, this is a nebulous statement. In [11] this is more rigorously defined: equivalent smoothness with high probability means f̂ cannot oscillate more than f.


The cornerstone of wavelet shrinkage is selection of the right threshold. Two methods proposed by Donoho and Johnstone in [10] are Riskshrink and Visushrink. A brief summary follows:

Riskshrink - Threshold values for the Riskshrink technique were presented by the authors in tabulated format for specific values of n, the number of points in f. They are repeated below in Table 3.1 and are valid only for the given values of n:

  n      threshold    n       threshold
  64     1.474        4096    2.594
  128    1.669        8192    2.773
  256    1.860        16384   2.952
  512    2.048        32768   3.131
  1024   2.232        65536   3.310
  2048   2.414

  Table 3.1 - Riskshrink Thresholds

Visushrink - A more flexible technique for choosing a threshold, since it is not limited to specific values of n. The threshold is given by

  t = σ (2 ln(n))^(1/2)  (3.18)

where σ is equal to the median absolute deviation of the wavelet coefficients at the finest scale of decomposition divided by 0.6745.
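Equation (3.18) is straightforward to compute. The sketch below follows the text's recipe, estimating σ from the median absolute deviation of the finest-scale coefficients; the function name and the choice of taking the MAD about zero (appropriate because detail coefficients of pure noise have zero median) are illustrative assumptions:

```python
import numpy as np

def visushrink_threshold(finest_details, n):
    """Universal threshold t = sigma * sqrt(2 ln n), equation (3.18),
    with sigma estimated from the finest-scale detail coefficients."""
    w = np.abs(np.asarray(finest_details, dtype=float))
    sigma = np.median(w) / 0.6745      # MAD-based noise estimate
    return sigma * np.sqrt(2.0 * np.log(n))

# Sanity check on pure unit-variance noise: the estimate of sigma should be
# close to 1, so t should land near sqrt(2 ln 4096), roughly 4.08.
np.random.seed(0)
t = visushrink_threshold(np.random.randn(4096), 4096)
```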


Thresholding the wavelet coefficients is the next step after selecting the thresholding technique. Two methods are available to do this: soft or hard thresholding. Hard thresholding is implemented by keeping all coefficients above the threshold intact and setting coefficients below the threshold to zero. Soft thresholding sets coefficients below the threshold to zero, but coefficients above the threshold are handled differently. These coefficients are reduced in value by the threshold, hence the term 'wavelet shrinkage'. The net effect is to remove a bias from legitimate wavelet coefficients due to Gaussian noise. Accordingly, the two thresholding techniques are described by

  w_i = w_i,  |w_i| ≥ t
  w_i = 0,    |w_i| < t                    hard thresholding;  (3.19)

  w_i = sgn(w_i)(|w_i| − t),  |w_i| ≥ t
  w_i = 0,                    |w_i| < t    soft thresholding.  (3.20)

The choice of which method to use, hard or soft thresholding, is not well documented. Consistent with most of the literature contributing to this thesis, soft thresholding will be used for the computer simulations. The final step in the wavelet shrinkage algorithm concerns choosing the appropriate decomposition level at which to start the thresholding process. As with thresholding, there are no clear criteria from which a choice can be made. It is also true that a starting decomposition level for one image may not be the best choice for


another image. This can be explained rather heuristically. Consider the spectra of an arbitrary signal shown in Figure 3.9. The spectrum of the original signal x is shown along with the spectra from a four-level decomposition of x. Generally speaking, most of the energy content of x lies in the lower end of the frequency spectrum. This corresponds to low contrast regions of x; conversely, higher frequencies correspond to edges and high contrast artifacts in x. So, based on the content of x, thresholding at higher decomposition levels affects the details contained in the reconstructed x̂. Conceptually speaking, consider a signal x1 with a high DV measure implying regions of high contrast (high frequency content). Consider a second signal x2 with a relatively low DV measure implying regions of moderate contrast (mid-range frequency content) compared to x1. If both signals are de-noised with a four-level decomposition and thresholding of level 4, then one would expect that the reconstruction x̂2 would be a better reconstruction than x̂1. In this sense, a better reconstruction means the x̂2 DV measure is closer to its original than the x̂1 DV. This is attributed to manipulation of the d4 wavelet coefficients, to which x1 contributes more than x2. To illustrate the effects of thresholding at various levels, Figure 3.10 shows a set of signals with a 7 dB SNR. Each signal underwent a 4 level decomposition followed by thresholding and reconstruction. The first plot


shows the original signal and the next four correspond to thresholding at levels l = 4, 3, 2 and 1.

Figure 3.9 - Spectrum of Four Level Decomposition


Figure 3.10 - Signal Reconstruction vs. Thresholding Level (original signal at 7 dB SNR, followed by reconstructions with thresholding at l = 4, 3, 2 and 1)
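Putting Sections 3.1 and 3.2 together, the whole shrinkage chain — decompose, soft-threshold the details, reconstruct — can be sketched end to end. Haar filters, the test tone, and the noise level are illustrative stand-ins (the simulations in Chapter 5 use 'db4'), while the threshold follows the Visushrink recipe of equation (3.18):

```python
import numpy as np

def haar_dwt(x, levels):
    """Multi-level Haar analysis: returns the final approximation and the
    detail vectors, ordered coarsest first."""
    a, details = np.asarray(x, dtype=float), []
    for _ in range(levels):
        d = (a[0::2] - a[1::2]) / np.sqrt(2.0)
        a = (a[0::2] + a[1::2]) / np.sqrt(2.0)
        details.append(d)
    return a, details[::-1]

def haar_idwt(a, details):
    """Invert haar_dwt (details ordered coarsest first)."""
    for d in details:
        up = np.empty(2 * a.size)
        up[0::2] = (a + d) / np.sqrt(2.0)
        up[1::2] = (a - d) / np.sqrt(2.0)
        a = up
    return a

def soft(w, t):
    """Soft thresholding, equation (3.20)."""
    return np.sign(w) * np.maximum(np.abs(w) - t, 0.0)

np.random.seed(1)
n = 2048
clean = np.sin(2.0 * np.pi * 2.0 * np.arange(n) / n)   # a smooth test tone
noisy = clean + 0.3 * np.random.randn(n)
approx, det = haar_dwt(noisy, 6)
# Visushrink threshold from the finest-scale details, equation (3.18).
t = np.median(np.abs(det[-1])) / 0.6745 * np.sqrt(2.0 * np.log(n))
den = haar_idwt(approx, [soft(d, t) for d in det])
```

For a smooth signal like this, the mean squared error of `den` against `clean` comes out well below that of `noisy`, which is the positive SNRI reported throughout Chapter 5.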


4. Polynomial Filters

The legacy of polynomial filters is rooted in an image enhancement technique for edge detection. This chapter highlights the migration from gradient based edge detection to Laplacian filters to polynomial filters. Functionally, polynomial filters are designed for the purposes of image enhancement; however, they can also exhibit a side benefit of noise rejection.

4.1 Gradient Based Filtering

Of interest in image enhancement is accentuating details in the image without a corresponding gain in background features or noise artifacts. Accentuating details is akin to edge detection, a method of categorizing what is and is not an edge. Typically, once an edge is found its pixels are artificially manipulated to produce a sharper edge by increasing the contrast between edge and background pixels. As usual, there is no hard rule defining what is an edge and where it begins. One method [12] of edge determination is diagrammed in Figure 4.1. The advantage of this method is the reliance on image content at a given point x0. For a two dimensional function f(x, y) an edge point is determined from the gradient,

  ∇f(x, y) = ∂f(x, y)/∂x + ∂f(x, y)/∂y  (4.1)


Figure 4.1 - Edge Determination

For images with unit step size the above partials can be replaced with first differences,

  ∂f(x, y)/∂x = f(x, y) − f(x−1, y),  (4.2a)
  ∂f(x, y)/∂y = f(x, y) − f(x, y−1).  (4.2b)
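Equations (4.2a) and (4.2b) amount to differencing each pixel with its left and upper neighbours. A short sketch follows; the border handling and the L1-style combination of the two partials are illustrative choices, not part of the thesis's method:

```python
import numpy as np

def gradient_magnitude(f):
    """First-difference gradient, equations (4.2a) and (4.2b); the first
    row and column are set to zero rather than wrapped around."""
    f = np.asarray(f, dtype=float)
    dx = f - np.roll(f, 1, axis=1)  # f(x,y) - f(x-1,y)
    dy = f - np.roll(f, 1, axis=0)  # f(x,y) - f(x,y-1)
    dx[:, 0] = 0.0                  # discard the wrapped-around column
    dy[0, :] = 0.0                  # ... and the wrapped-around row
    return np.abs(dx) + np.abs(dy)  # |.|_1 magnitude, a common shortcut

# A vertical step edge: the response is confined to the edge column.
img = np.zeros((5, 5))
img[:, 3:] = 1.0
g = gradient_magnitude(img)
```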


It is noteworthy that the difference equations (4.2a) and (4.2b) are the same as convolution with a filter h(x, y), the nature of which will soon be defined. Another method of edge determination is evaluation of f′(x, y) to determine if the derivative is at a local maximum, meaning the second derivative f″(x, y) is at a minimum. Accordingly, x_{j,k} is judged an edge point when

  ∇²f(x_{j,k}) = 0.  (4.3)

Similar to the first partials of f, the second order partials can be computed from difference equations:

  ∇²f(x, y) = 4f(x, y) − f(x+1, y) − f(x−1, y) − f(x, y+1) − f(x, y−1)  (4.4)

Viewing this as a convolution operation, the filter h(x, y) realizing ∇²f(x, y) will sense the presence of horizontal and vertical edges. It turns out that h(x, y) is a common filter referred to as a Laplacian, with coefficients

             0  −1   0
  L(x, y) = −1   4  −1   (4.5)
             0  −1   0

The disadvantage of Laplacian filters is their sensitivity to noise. Any zero crossing is deemed an edge point, leading to many false edges contaminating the edge detected image; see Lim [12] for methods of reducing false edges. This is shown in


Figure 4.2, where the Laplacian of 'woman' is shown.

Figure 4.2 - Laplacian of Image 'woman'

4.2 Cubic Sharpening

To overcome the introduction of false edges, G. Ramponi proposed [13] a method using a filter similar to the Laplacian for sharpening an image. In addition to image sharpening, it demonstrates useful image restoration properties due to its inherent noise suppression. Underlying Ramponi's method is unsharp masking, a well known technique which operates by adding a high pass filtered version of the image onto itself. The drawback to unsharp masking is its vulnerability to noise created by the action of the high pass filter. The basic form of the unsharp masking filter is defined by

  y_{j,k} = x_{j,k} + λ z_{j,k},  (4.6)


where z_{j,k}, the linear high pass filter, is defined as

  z_{j,k} = (2x_{j,k} − x_{j+1,k} − x_{j−1,k}) + (2x_{j,k} − x_{j,k+1} − x_{j,k−1})  (4.7)

and λ is a data dependent scaling factor judiciously selected by the viewer. Without modification the unsharp masking filter can increase the image's DV measure due to its noise sensitivity. Ramponi mitigates this sensitivity by incorporating a quadratic term into z_{j,k} as such:

  z_{j,k} = (x_{j+1,k} − x_{j−1,k})² (2x_{j,k} − x_{j+1,k} − x_{j−1,k})
          + (x_{j,k+1} − x_{j,k−1})² (2x_{j,k} − x_{j,k+1} − x_{j,k−1})  (4.8)

Now, the output of this filtering scheme is the sum of the original image plus a scaled, cubic high pass filtered version of the image. Ramponi calls the quadratic term an edge sensor (the linear term keeps its identity as the high pass filter). In this scheme the edge sensor acts like a switch. Uniform regions of x have a small output from the edge sensor, while contrasting regions have a much larger response from the edge detector due to its non-linear behavior. The resulting effect of the cubic filter is to alleviate noise perturbations while strongly responding to 'true' image details. A block diagram is shown in Figure 4.3. The relevance of λ now becomes apparent, since the output of the cubic filter can far exceed the dynamic range of a grayscale


image.

Figure 4.3 - Cubic Filtering Block Diagram

To deal with this a limiter is inserted into the filter block as shown. The choice of saturation limits is arbitrary, as is the value of lambda. In [13] a saturation limit of 50000 was used with λ = 0.001. A last note about the cubic operator z_{j,k}: the Laplacian operator just described used a plus shaped pattern in the 3 x 3 window. A cross shaped window can also be used but should be scaled by 1/√2 due to the distance from the center pixel.

4.3 Modified Cubic Sharpening

In another paper by Ramponi [14], the modified unsharp masking algorithm described in Section 4.2 is further refined to act simultaneously in four directions. This is achieved by decomposing the 3 x 3 Laplacian into four 1 x 3 vectors corresponding to the four orientations within a 3 x 3 window. After filtering with the uni-directional Laplacians, another filtering operation is performed, from which the


output determines correlation in the four orientations. To put this scheme into context, a block diagram is shown in Figure 4.4.

Figure 4.4 - Modified Cubic Sharpening

Of concern in this figure are the lowpass, uni-directional highpass and S filters. The first of these, the lowpass filter, is a simple 3 x 3 kernel with the following coefficients:


Lowpass Filter:

  a    a    a
  a  1−8a   a   (4.9)
  a    a    a

In this filter a controls the degree of smoothing administered by the kernel and is bounded by 0 ≤ a ≤ 0.1. The W filters are the uni-directional Laplacian operators previously mentioned. They function by sensing the presence of edges in a direction orthogonal to the orientation of the filter; large outputs correspond to image edges, zero or near zero outputs result from uniform regions. The filters are similar to those in Section 4.2 and are given by:

  W_h = 2x_{j,k} − x_{j,k+1} − x_{j,k−1}  (4.10a)
  W_v = 2x_{j,k} − x_{j+1,k} − x_{j−1,k}  (4.10b)
  W_45 = 2x_{j,k} − x_{j−1,k+1} − x_{j+1,k−1}  (4.10c)
  W_135 = 2x_{j,k} − x_{j+1,k+1} − x_{j−1,k−1}  (4.10d)

The third, and final, filters of interest are the 3 x 3 S filters. Like the W filters they are directional and act to determine the presence of correlated data in a direction orthogonal to the orientation of the particular S filter. Each directional S filter is applied to the corresponding directional W matrix, where the correlation action is


manifested by seeking edges. A subtlety of the technique lies in rearranging each 3 x 3 W matrix into a 1 x 9 vector. This is Ramponi's method; presumably this is done for intuitive reasons. The discussion of how the S filters work will be more understandable by defining them first:

  s_h = w_v(5) (w_v(4) w_v(6) + |w_v(4) w_v(6)|)  (4.11a)
  s_v = w_h(5) (w_h(2) w_h(8) + |w_h(2) w_h(8)|)  (4.11b)
  s_45 = w_135(5) (w_135(1) w_135(9) + |w_135(1) w_135(9)|)  (4.11c)
  s_135 = w_45(5) (w_45(3) w_45(7) + |w_45(3) w_45(7)|)  (4.11d)

Each filter behaves in a curious way by outputting zeros except when the sign of each W component is the same. In this way the non-linear behavior of each S filter produces a strong response when correlated data in the form of edges are present in the W matrices. Note that the response is from edge data orthogonal to the particular S filter, so a large response from the horizontal S filter indicates the presence of a


vertical edge. This underscores the purpose of the modified cubic filter. In its basic form the cubic filter responds strongly to large differences within the 3 x 3 data kernel. The modified cubic filter also exhibits a strong response, but only when the directional content within the 3 x 3 data kernel is correlated, that is, when adjacent cells within the kernel are of the same sign for a given orientation.
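The operators of this chapter are small enough to sketch directly. Below, `cubic_z` implements the edge-sensor product of equation (4.8), `w_filters` the directional Laplacians (4.10a-d), and `s_gate` the sign-correlation behaviour of the S filters (4.11). This is a sketch of the gating behaviour only — the full scheme of Figure 4.4 also lowpass-filters the image, rearranges each 3 x 3 W matrix into a 1 x 9 vector, and scales with 1/λ, all of which is omitted here, and every name is an assumption:

```python
import numpy as np

def cubic_z(win):
    """Edge sensor times highpass term, equation (4.8), for the centre
    pixel of a 3x3 window `win` (rows index j, columns index k)."""
    c = win[1, 1]
    dv = win[2, 1] - win[0, 1]          # x[j+1,k] - x[j-1,k]
    dh = win[1, 2] - win[1, 0]          # x[j,k+1] - x[j,k-1]
    return (dv ** 2 * (2 * c - win[2, 1] - win[0, 1]) +
            dh ** 2 * (2 * c - win[1, 2] - win[1, 0]))

def w_filters(win):
    """Uni-directional Laplacians (4.10a-d) at the centre of `win`."""
    c = win[1, 1]
    return {'h':    2 * c - win[1, 2] - win[1, 0],
            'v':    2 * c - win[2, 1] - win[0, 1],
            'd45':  2 * c - win[0, 2] - win[2, 0],
            'd135': 2 * c - win[2, 2] - win[0, 0]}

def s_gate(w5, wa, wb):
    """Sign-correlation gate of the S filters (4.11): zero unless the two
    outer W samples agree in sign."""
    p = wa * wb
    return w5 * (p + abs(p))

flat = np.full((3, 3), 7.0)             # uniform region
spike = flat.copy()
spike[1, 1] += 0.1                      # isolated noise perturbation
step = np.array([[0., 0., 8.],          # strong vertical edge
                 [0., 0., 8.],
                 [0., 0., 8.]])
```

Note that the lone centre spike excites no cubic response at all — the edge sensor sees no neighbour contrast — while the step edge produces a large one, which is exactly the noise-rejection behaviour claimed for the cubic filter.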


5. Computer Simulations and Comparative Analysis

Wavelet de-noising is an active area of research, prompting the question: how does wavelet shrinkage compare to other noise reduction strategies? This chapter provides some insight into this question by comparing wavelet shrinkage to modified cubic sharpening. An initial evaluation of wavelet shrinkage is performed on a set of one dimensional signals commonly seen in Donoho's de-noising literature. This is followed by an analysis of image data where each algorithm is optimized for peak SNRI performance.

5.1 One Dimensional De-Noising Results

Although not a 'standard' set of signals, equations (5.1a-d) are used

(5.1a) - Blocks
  f(t) = Σ_j h_j K(t − t_j),  K(t) = (1 + sgn(t))/2
  t_j = (0.10, 0.13, 0.15, 0.23, 0.25, 0.40, 0.44, 0.65, 0.76, 0.78, 0.81)
  h_j = (4, −5, 3, −4, 5, −4.2, 2.1, 4.3, −3.1, 2.1, −4.2)

(5.1b) - Bumps
  f(t) = Σ_j h_j K((t − t_j)/w_j),  K(t) = (1 + |t|)^−4
  h_j = (4, 5, 3, 4, 5, 4.2, 2.1, 4.3, 3.1, 5.1, 4.2)
  w_j = (0.005, 0.005, 0.006, 0.01, 0.01, 0.03, 0.01, 0.01, 0.005, 0.008, 0.005)

(5.1c) - Heavisine
  f(t) = 4 sin(4πt) − sgn(t − 0.3) − sgn(0.72 − t)


(5.1d) - Doppler
  f(t) = [t(1 − t)]^(1/2) sin(2π(1 + ε)/(t + ε)),  ε = 0.05

These signals are used throughout Donoho's research and are representative of typical phenomena seen in many signals. From Figure 5.1 one can see these signals contain discontinuities, instantaneous jumps, low to high frequency content and non-periodic features.

Figure 5.1 - One Dimensional De-Noising Signal Set

Using this signal set as a basis, performance of wavelet shrinkage is measured by adding Gaussian noise to the signal, taking the wavelet transform, soft thresholding and then reconstructing the signal by taking the inverse wavelet transform. With this


approach, SNRI can be measured since all three components needed for SNRI computation are present, namely, the original, noisy and reconstructed signals. The dilemma arises when choosing the reconstruction level and threshold. In this analysis, each signal set contains 2048 points and a decomposition depth of six was used. Based on the observation that reconstruction quality degrades when thresholding at lower levels (see Figure 3.10), a somewhat arbitrary choice of levels five and six was made for application of soft thresholding. The other unknown concerns selecting the right threshold for wavelet shrinkage. To answer this, a series of reconstructions was performed at varying thresholds to build a Threshold vs. SNRI performance curve for each of the four signals above. In the subsequent analysis, Gaussian noise was added to produce an SNR of 7 dB. In Figures 5.2a-d the Threshold vs. SNRI performance can be seen. Shown along with each figure is the noisy signal and the reconstruction. Each reconstruction used the Daubechies 'DB4' mother wavelet. From these four pictures the following observations can be made:

1. A positive SNR improvement was achieved, validating the use of wavelet shrinkage as a method of noise reduction.

2. The amount of SNR improvement hovers around 8 dB with the exception of the Heavisine signal. This is not surprising given the low frequency content of the


signal. In fact, the only high frequency content in the signal is from the added noise and the two discontinuities seen in the theoretical signal.

Figure 5.2a - Wavelet De-Noising of 'Blocks' with Threshold of 0.6.
Figure 5.2b - Wavelet De-Noising of 'Bumps' with Threshold of 0.2.
Figure 5.2c - Wavelet De-Noising of 'Heavisine' with Threshold of 1.5.
Figure 5.2d - Wavelet De-Noising of 'Doppler' with Threshold of 0.1.


3. High frequency components suffer the most from wavelet shrinkage. Since these are the coefficients targeted for shrinkage, artificial manipulation is bound to have an effect.

5.2 Two Dimensional De-Noising Results

Measuring the performance of two dimensional wavelet shrinkage and modified cubic filtering is closely aligned with the one dimensional method of Section 5.1. The process begins with the selection of three test images shown in Figures 5.3a, 5.3b and 5.3c. These images were chosen due to their contrasting features. While the image 'tire' is more uniform with hard edges, the image 'woman' is rich with texture and soft edges. This is characterized by the DV measure of each figure; DV for 'woman' is 598.4 while the DV for 'tire' is 413.9.


The model for measuring performance of image de-noising begins by adding Gaussian noise to the image to produce an SNR of 7 dB. The next step is optimization of SNRI performance through parameter selection. The optimization procedure is different for the two algorithms. For wavelet de-noising a threshold vs. SNRI performance curve is generated just like in the 1-D case. This curve dictates the choice of threshold for maximum SNRI. In the analysis wavelet de-noising is executed two ways. The first approach assumes separability of the image and measures performance of row wise de-noising, column wise de-noising and row followed by column de-noising. The second approach invokes a true two dimensional filtering scheme to produce the wavelet coefficients. Both methods use the Daubechies 'db4' mother wavelet. For modified cubic filtering there are two parameters affecting SNRI: the smoothing factor a and the threshold λ shown in Figure 4.4. Similar to the wavelet SNRI curve, an SNRI performance surface is generated, yielding the proper choice of a and λ for the best overall SNRI. In this implementation of modified cubic filtering the scaling factor λ is applied such that the maximum value in the T matrix is scaled to the value of λ itself. In other words, if the SNRI performance surface indicates a


peak at λ = 100, then T, which contains the detail pixels to be added back into the lowpass filtered image, has a dynamic range between 0 and 100. Effectively, this acts as the limiter in the highpass filter path but with an altered interpretation. Whereas choosing a (λ, a) pair coupled with the limiter in Figure 4.4 assures a bound on T, there is no assurance that the optimum (λ, a) pair was employed. The method used in this analysis circumvents the trial and error involved in gauging SNRI performance. At this point, an extension to the BVDV figure of merit will be introduced. After characterizing pixels within an image as either background or detail, it is useful to compute the SNRI of the de-noised image as well as the SNRI of the background and detail regions. In this way a direct measure of overall gain is realized and, more importantly, a qualitative characterization of how well a process improves (a positive SNRI) or degrades (a negative SNRI) an image is achieved. The analysis results are presented in the next three sections. Section 5.2.1 is for the image 'tire', Section 5.2.2 is for the image 'woman' and Section 5.2.3 is for 'julia'.
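The per-image procedure — add noise at a prescribed SNR, de-noise, then score the result — can be sketched as below. The SNRI definition used here (output SNR minus input SNR, in dB) is an assumption consistent with how the tables read, and the 3 x 3 moving average is only a stand-in de-noiser, not one of the thesis's methods:

```python
import numpy as np

def snr_db(clean, test):
    """SNR of `test` against the reference `clean`, in dB."""
    err = test - clean
    return 10.0 * np.log10(np.sum(clean ** 2) / np.sum(err ** 2))

def add_noise_at_snr(clean, target_db, rng):
    """Scale white Gaussian noise so the noisy image sits at target_db."""
    noise = rng.standard_normal(clean.shape)
    scale = np.sqrt(np.sum(clean ** 2) /
                    (np.sum(noise ** 2) * 10.0 ** (target_db / 10.0)))
    return clean + scale * noise

rng = np.random.default_rng(0)
clean = np.tile(np.linspace(0.0, 255.0, 64), (64, 1))   # smooth ramp image
noisy = add_noise_at_snr(clean, 7.0, rng)

# Stand-in de-noiser: 3x3 moving average via shifted, edge-padded slices.
k = np.ones((3, 3)) / 9.0
pad = np.pad(noisy, 1, mode='edge')
den = sum(pad[i:i + 64, j:j + 64] * k[i, j]
          for i in range(3) for j in range(3))

snri = snr_db(clean, den) - snr_db(clean, noisy)        # positive = gain
```

Splitting the same computation over background and detail pixel masks would give the SNRI_BV and SNRI_DV figures reported in the tables.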


5.2.1 Analysis of 'tire'

Separable Wavelet De-Noising

Figure 5.5a - Ensemble average of SNRI, row wise de-noising.

Figure 5.5b - Row de-noised image with a peak SNRI of 4.6 using a threshold of 52, as seen in Figure 5.5a. The SNRI_BV = 5.6 and SNRI_DV = 3.9.


Figure 5.6a - Ensemble average of SNRI, column wise de-noising.

Figure 5.6b - Column de-noised image with a peak SNRI of 4.5 using a threshold of 52, as seen in Figure 5.6a. The SNRI_BV = 5.6 and SNRI_DV = 3.7.

Figure 5.7a - Ensemble average of SNRI, row/column de-noising.

Figure 5.7b - Row/column de-noised image with a peak SNRI of 7.0 using a threshold of 23, as seen in Figure 5.7a. The SNRI_BV = 10.6 and SNRI_DV = 5.3.


Non-Separable Wavelet De-Noising

Figure 5.8a - Ensemble average SNRI of non-separable de-noising.

Figure 5.8b - Non-separable de-noised image with a peak SNRI of 7.1 using a threshold of 52, as seen in Figure 5.8a. The SNRI_BV = 10.4 and SNRI_DV = 5.4.

Modified Cubic Filtering

Figure 5.9a - Ensemble SNRI performance surface for modified cubic sharpening, 5 iterations in the ensemble.

Figure 5.9b - Modified cubic filtered image using a = 0.1, as seen in Figure 5.9a. The SNRI_BV = 9.0 and SNRI_DV = 6.5.


Visually, there is a clear distinction between the wavelet restored images and the image out of the modified cubic filter. The most apparent distinction to note is the resolvability between the differing techniques. While the wavelet filtered images are smoother, it comes at the price of less detail in the image relative to the modified cubic filter image. This is attributed to filter size. The polynomial filter uses a 3 x 3 kernel while the Daubechies 4 filter is 16 x 16. The net result of a smaller filter is less smoothing relative to a larger filter. This accounts for the smoother (less detail) appearance seen in the wavelet images and the coarser (more detail) appearance in the modified cubic filter image. Recall that the desired restoration process should reduce the BV as much as possible while maintaining DV near that of the original. It can be seen, from Table 5.1, that this is indeed true. With this criterion it is clear that row wise and column wise

  Algorithm          BV      DV      SNRI   SNRI_BV   SNRI_DV
  Original           3.9     413.9
  Original + Noise   766.0   1178.0
  Row De-Noising     172.4   495.0   4.6    5.6       3.9
  Col. De-Noising    172.3   480.0   4.5    5.6       3.7
  R/C De-Noising     32.1    289.5   7.0    10.6      5.3
  2-D De-Noising     37.1    306.7   7.1    10.4      5.4
  Mod. Cubic Filter  57.6    301.5   7.5    9.0       6.5

  Table 5.1 - Performance summary for image 'tire'

BV lagged behind the BV measures of row/column, non-separable and modified cubic filtering. Conversely, DV for the row and column measures was superior since frequency components in the orthogonal direction went untouched. The SNRI


components complete the picture by measuring overall performance gain and the corresponding gain in background and details. Simply stated, higher SNRIs directly translate to better restoration. The SNRI numbers for this image show that row wise and column wise processing perform poorly when compared to the other methods. As a whole, Table 5.1 shows non-separable processing to be the best performer for this image. Even though it is not the best performer in any category, it demonstrates consistently good behavior across all categories. This is not true for any of the other techniques.


5.2.2 Analysis of 'woman'

Separable Wavelet De-Noising

Figure 5.11a - Ensemble average SNRI of row wise de-noising.

Figure 5.11b - Row de-noised image with a peak SNRI of 1.6 using a threshold of 20, as seen in Figure 5.11a. The SNRI_BV = 3.6 and SNRI_DV = 1.6.


Figure 5.12a - Ensemble average SNRI of column wise de-noising.

Figure 5.13a - Ensemble average SNRI of row/column de-noising.

Figure 5.13b - Row/column de-noised image with a peak SNRI of 2.7 using a threshold of 13, as seen in Figure 5.13a. The SNRI_BV = 7.0 and SNRI_DV = 2.7.


Non-Separable Wavelet De-Noising

Figure 5.14a - Ensemble average SNRI of non-separable de-noising.

Modified Cubic Filtering

Figure 5.15a - Ensemble average SNRI surface for modified cubic sharpening.

Figure 5.15b - Modified cubic filtering with SNRI = 3.0 using λ = 70 and a = 0.075, as seen in Figure 5.15a. The SNRI_BV = 5.8 and SNRI_DV = 3.0.


This image was chosen due to the prevalence of edges and contrast. Comparing the appearance of the processed images reveals visually similar results to 'tire' in terms of resolution. This image is more suitable for visual inspection of detail retention. To this viewer, modified cubic filtering retains more detail than non-separable or row/column processing. Again, this is attributable to the 3 x 3 kernel used for polynomial filtering. Numerically, Table 5.2 shows row/column processing to have the best BV and SNRI_BV but the second worst DV. It is curious that the DV

  Algorithm          BV      DV      SNRI   SNRI_BV   SNRI_DV
  Original           1.1     598.4
  Original + Noise   517.3   1101.8
  Row De-Noising     198.7   543.9   1.6    3.6       1.6
  Col. De-Noising    244.6   729.2   2.1    3.0       2.1
  R/C De-Noising     75.6    343.9   2.7    7.0       2.7
  2-D De-Noising     83.0    426.8   2.9    6.3       2.9
  Mod. Cubic Filter  119.2   379.1   3.0    5.8       3.0

  Table 5.2 - Performance summary for image 'woman'

for column processing is so large on an image with substantial vertically oriented content. Overall, Table 5.2 indicates that non-separable processing is the best performer. Like its performance with the image 'tire', it shows consistently good behavior across all measurement categories.


5.2.3 Analysis of 'julia'

Separable Wavelet De-Noising

Figure 5.17a - Ensemble average SNRI of row wise de-noising.

Figure 5.17b - Row de-noised image with a peak SNRI of 3.1 using a threshold of 9, as seen in Figure 5.17a. The SNRI_BV = 4.8 and SNRI_DV = -0.4.


Figure 5.18a - Ensemble average SNRI of column wise de-noising.

Figure 5.18b - Column de-noised image with a peak SNRI of 2.9 using a threshold of 10, as seen in Figure 5.18a. The SNRI_BV = 5.0 and SNRI_DV = -1.0.

Figure 5.19a - Ensemble average SNRI of row/column de-noising.

Figure 5.19b - Row/column de-noised image with a peak SNRI of 4.2 using a threshold of 5, as seen in Figure 5.19a. The SNRI_BV = 9.1 and SNRI_DV = -1.4.


Non-Separable Wavelet De-Noising

Figure 5.20a - Ensemble average SNRI of non-separable de-noising.

Figure 5.20b - Non-separable de-noised image with SNRI = 4.2 using a threshold of 9, as seen in Figure 5.20a. The SNRI_BV = 7.7 and SNRI_DV = -0.7.

Modified Cubic Filtering

Figure 5.21a - Ensemble average SNRI surface for modified cubic sharpening.

Figure 5.21b - Modified cubic filtering with SNRI = 3.1 using λ = 70 and a = 0.075, as seen in Figure 5.21a. The SNRI_BV = 6.4 and SNRI_DV = -1.8.


The image 'julia' was used for the third test case because of its color coding and uniform background. The advantage of color is revealed in the processed images in that it is easier to see filtering effects in the background regions. The effect of filtering in one dimension can be seen in the row wise and column wise images. The row processed image contains a horizontal grain due to filtering in the horizontal direction. The same effect manifests itself in the column processed image. The remaining images, which have all undergone processing in both dimensions, have a smoother appearance in the background regions. In accordance with results from the previous images, Table 5.3 indicates that non-separable filtering demonstrates the best overall improvement. It has the best SNRI with consistently good behavior across the remaining categories.

  Algorithm          BV     DV      SNRI   SNRI_BV   SNRI_DV
  Original           1.1    293.6
  Original + Noise   55.8   135.5
  Row De-Noising     16.4   210.4   3.1    4.8       -0.4
  Col. De-Noising    15.5   189.2   2.9    5.0       -1.0
  R/C De-Noising     4.6    142.3   4.2    9.1       -1.4
  2-D De-Noising     7.3    170.5   4.2    7.7       -0.7
  Mod. Cubic Filter  10.5   135.5   3.1    6.4       -1.8

  Table 5.3 - Performance summary for image 'julia'


6. Conclusions

In Chapter 5 the wavelet de-noising and modified cubic filtering algorithms were applied to three images. Prior to processing, Gaussian noise was added to each image producing an SNR of 7 dB. Using the theory from Section 3.2, wavelet de-noising was performed two ways. The first method treated the image as a separable sequence, measuring performance of row only processing, column only processing and row/column processing. The second method assumed a non-separable condition, applying the two dimensional wavelet transform as diagrammed in Figure 3.6. Modified cubic filtering, described in Chapter 4, was used as an alternative technique to wavelet shrinkage for comparison purposes. The performance of each technique is repeated in Table 6.1 with an addition. Shown in the upper right corner of each cell in the table is a ranking of the score for each technique for each performance parameter. In this ranking scheme the best score gets a 1, the worst a 5 (or equivalent if there is a tie). By summing the ranks for each technique a total score is computed. In this scenario the lowest score wins. Examination of the totals forces one to the conclusion that, for the algorithms employed in this analysis, non-separable wavelet de-noising is the best performer. This is confirmed in a second way. It was stated earlier that any de-noising technique should lower BV as much as possible and maintain DV. This means the best performer should score a 1 (ideally) for


background and detail variance. By looking at the BV/DV ranks, non-separable wavelet de-noising comes closer to the ideal than any of the other methods.

Algorithm           BV          DV          SNRI      SNRI_BV    SNRI_DV    Total
Original            3.9         413.9
Original + Noise    766.0       1178.0
Row De-Noising      172.4 (5)   495.0 (2)   4.6 (4)   5.6 (4)    3.9 (4)    19
Col. De-Noising     172.3 (4)   480.0 (1)   4.5 (5)   5.6 (4)    3.7 (5)    18
R/C De-Noising      32.1 (1)    289.5 (5)   7.0 (3)   10.6 (1)   5.3 (3)    13
2-D De-Noising      37.1 (2)    306.7 (3)   7.1 (2)   10.4 (2)   5.4 (2)    11
Mod. Cubic Filter   57.6 (3)    301.5 (4)   7.5 (1)   9.0 (3)    6.5 (1)    12

Performance summary for image 'tire'

Algorithm           BV          DV          SNRI      SNRI_BV    SNRI_DV    Total
Original            1.1         598.4
Original + Noise    517.3       1101.8
Row De-Noising      198.7 (4)   543.9 (1)   1.6 (5)   3.6 (4)    1.6 (5)    19
Col. De-Noising     244.6 (5)   729.2 (2)   2.1 (4)   3.0 (5)    2.1 (4)    20
R/C De-Noising      75.6 (1)    343.9 (5)   2.7 (3)   7.0 (1)    2.7 (3)    13
2-D De-Noising      83.0 (2)    426.8 (3)   2.9 (2)   6.3 (2)    2.9 (2)    11
Mod. Cubic Filter   119.2 (3)   379.1 (4)   3.0 (1)   5.8 (3)    3.0 (1)    12

Performance summary for image 'woman'

Algorithm           BV          DV          SNRI      SNRI_BV    SNRI_DV    Total
Original            1.1         293.6
Original + Noise    55.8        135.5
Row De-Noising      16.4 (5)    210.4 (1)   3.1 (2)   4.8 (5)    -0.4 (1)   14
Col. De-Noising     15.5 (4)    189.2 (2)   2.9 (3)   5.0 (4)    -1.0 (3)   16
R/C De-Noising      4.6 (1)     142.3 (4)   4.2 (1)   9.1 (1)    -1.4 (4)   11
2-D De-Noising      7.3 (2)     170.5 (3)   4.2 (1)   7.7 (2)    -0.7 (2)   10
Mod. Cubic Filter   10.5 (3)    135.5 (5)   3.1 (2)   6.4 (3)    -1.8 (5)   18

Performance summary for image 'julia'

Table 6.1 - Performance summary with ranking (rank in parentheses; best = 1)
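The rank-sum scoring used in Table 6.1 assigns each algorithm a rank per metric (1 for the best value, with tied values sharing a rank) and sums the ranks; the lowest total wins. A small Python sketch of that scheme, for illustration only (the function names are assumptions, not part of the thesis source):

```python
def rank_scores(values, higher_is_better=False):
    # Dense competition ranks: best value gets 1, ties share a rank,
    # the next distinct value gets the following rank.
    order = sorted(set(values), reverse=higher_is_better)
    rank_of = {v: i + 1 for i, v in enumerate(order)}
    return [rank_of[v] for v in values]

def total_scores(metrics):
    # metrics: list of (values, higher_is_better) columns, one entry
    # per performance parameter; returns per-algorithm rank sums.
    totals = None
    for values, hib in metrics:
        ranks = rank_scores(values, hib)
        totals = ranks if totals is None else [t + r for t, r in zip(totals, ranks)]
    return totals
```

For the 'julia' SNRI column (3.1, 2.9, 4.2, 4.2, 3.1, higher is better) this reproduces the ranks 2, 3, 1, 1, 2 shown in the table.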


APPENDIX A

Separable De-Noising Source Code

function dnrc( img, w_name, snr, n_sim, t_low, t_high, t_points, j_0, b_thresh );
% dnrc( 'img', 'w_name', snr, n_sim, t_low, t_high, t_points, j_0, b_thresh );
% Function to de-noise an image non-coherently by computing the wavelet
% transform using the mother wavelet 'w_name', applying soft thresholding
% and inverse transforming the signal to get the de-noised version of the
% 1-D signal. This procedure is repeated across the rows and columns
% of the image to produce the final form of the de-noised image. Note that
% the de-noised images for row, column and row+column are saved as a
% bitmap file.
%
% Inputs
%   img      = Input image for de-noising, assumed to be a .mat file
%   w_name   = Wavelet filter to be used
%   snr      = Requested SNR
%   n_sim    = Number of simulations to run
%   t_low    = Minimum threshold
%   t_high   = Maximum threshold
%   t_points = Number of threshold points
%   j_0      = Beginning reconstruction level
%   b_thresh = BVDV threshold (std. deviation, not variance)
%
% Outputs
%   Threshold vs SNRI plot

% Get the image and add noise
load( img );
check = 'd_idx';
s = whos( check );
if( isempty(s) == 1 )
    [bv,b_idx,dv,d_idx] = bvdv_idx( X, 3, b_thresh );
    save( img, 'X', 'map', 'bv', 'dv', 'b_idx', 'd_idx' )
end
[m,n] = size( X );
imshow( X, map );
txt = sprintf( 'Original Image, Bv = %4.1f, Dv = %5.1f, BvDv Threshold = %2d', bv, dv, b_thresh );
title( txt );

% Get the threshold table and noise floor variance


if( t_low ~= t_high )
    thresh = vs_thresh( t_low, t_high, t_points );
else
    thresh = t_low;
end
n_thresh = length( thresh );
n_sigma = n_floor( X, snr );
fname = sprintf( 'dnrc_%1c_n%03d_t%02.0f_%02.0f.txt', img(1), n_sim, t_low, t_high );
fid = fopen( fname, 'wt' );
fprintf( fid, 'Thresh     BV     DV   SNRI SNRI_BV SNRI_DV\n' );
fprintf( fid, '       %6.1f %6.1f Original Image\n', bv, dv );

% Get the number of decomposition levels
levels = wmaxlev( min(m,n), w_name );
j_0 = levels - 1;
snri_r = zeros(n_thresh,3);
snri_c = zeros(n_thresh,3);
snri_rc = zeros(n_thresh,3);
img_bd = zeros(n_thresh,4,2);
for i = 1:n_thresh
    fprintf( 1, 'Thresh: %d of %d\n', i, n_thresh );
    for j = 1:n_sim
        thresh(i) = 5.0;
        dnr_img = zeros(m,n);
        randn('state',sum(100*clock));
        gn = randn( m, n );
        Xpn = X + n_sigma * gn;
        [bv,dv] = bvdv( Xpn, 3, b_thresh, b_idx, d_idx );
        img_bd(i,1,1) = img_bd(i,1,1) + bv;
        img_bd(i,1,2) = img_bd(i,1,2) + dv;
        fprintf( 1, 'R: sim = %d of %d\n', j, n_sim );
        for k = 1:m
            s = Xpn(k,1:n);
            [C,L] = wavedec( s, levels, w_name );
            % Now threshold the signal
            [C,L] = vs_den( C, L, j_0, thresh(i) );
            % And do the inverse wavelet transform
            dnr_img(k,1:n) = waverec( C, L, w_name );
        end
        [bv,dv] = bvdv( dnr_img, 3, b_thresh, b_idx, d_idx );
        img_bd(i,2,1) = img_bd(i,2,1) + bv;
        img_bd(i,2,2) = img_bd(i,2,2) + dv;
        [tmp,tmp_b,tmp_d] = meas2_snri( X, Xpn, dnr_img, b_idx, d_idx );


        snri_r(i,1) = snri_r(i,1) + tmp;
        snri_r(i,2) = snri_r(i,2) + tmp_b;
        snri_r(i,3) = snri_r(i,3) + tmp_d;
        fprintf( fid, 'row: %6.2f %6.2f %6.2f %6.2f %6.2f\n', bv, dv, tmp, tmp_b, tmp_d );

        % De-noise the columns of the row filtered image
        dnrc_img = zeros(m,n);
        fprintf( 1, 'RC: sim = %d of %d\n', j, n_sim );
        thresh(i) = 3.8;
        for k = 1:n
            s = dnr_img(1:m,k);
            [C,L] = wavedec( s, levels, w_name );
            % Now threshold the signal
            [C,L] = vs_den( C, L, j_0, thresh(i) );
            % And do the inverse wavelet transform
            dnrc_img(1:m,k) = waverec( C, L, w_name );
        end
        [bv,dv] = bvdv( dnrc_img, 3, b_thresh, b_idx, d_idx );
        img_bd(i,3,1) = img_bd(i,3,1) + bv;
        img_bd(i,3,2) = img_bd(i,3,2) + dv;
        [tmp,tmp_b,tmp_d] = meas2_snri( X, Xpn, dnrc_img, b_idx, d_idx );
        snri_rc(i,1) = snri_rc(i,1) + tmp;
        snri_rc(i,2) = snri_rc(i,2) + tmp_b;
        snri_rc(i,3) = snri_rc(i,3) + tmp_d;
        fprintf( fid, 'r/c: %6.2f %6.2f %6.2f %6.2f %6.2f\n', bv, dv, tmp, tmp_b, tmp_d );

        % De-noise the columns only
        dnc_img = zeros(m,n);
        fprintf( 1, 'C: sim = %d of %d\n', j, n_sim );
        thresh(i) = 5.5;
        for k = 1:n
            s = Xpn(1:m,k);
            [C,L] = wavedec( s, levels, w_name );
            % Now threshold the signal
            [C,L] = vs_den( C, L, j_0, thresh(i) );
            % And do the inverse wavelet transform
            dnc_img(1:m,k) = waverec( C, L, w_name );
        end
        [bv,dv] = bvdv( dnc_img, 3, b_thresh, b_idx, d_idx );
        img_bd(i,4,1) = img_bd(i,4,1) + bv;
        img_bd(i,4,2) = img_bd(i,4,2) + dv;
        [tmp,tmp_b,tmp_d] = meas2_snri( X, Xpn, dnc_img, b_idx, d_idx );
        snri_c(i,1) = snri_c(i,1) + tmp;
        snri_c(i,2) = snri_c(i,2) + tmp_b;


        snri_c(i,3) = snri_c(i,3) + tmp_d;
        fprintf( fid, 'c:   %6.2f %6.2f %6.2f %6.2f %6.2f\n', bv, dv, tmp, tmp_b, tmp_d );
    end % End n_sim loop
    clear dnr_img dnrc_img dnc_img;
end % End thresh loop
snri_r = snri_r / n_sim;
snri_rc = snri_rc / n_sim;
snri_c = snri_c / n_sim;
img_bd = img_bd / n_sim;
fprintf( fid, '       %6.1f %6.1f Image + Noise\n', img_bd(1,1,1), img_bd(1,1,2) );
for i = 1:n_thresh
    fprintf( fid, 'Rows:       %7.4f %6.2f %6.1f %8.4f %8.4f %8.4f\n', thresh(i), img_bd(i,2,1), img_bd(i,2,2), snri_r(i,1), snri_r(i,2), snri_r(i,3) );
end
for i = 1:n_thresh
    fprintf( fid, 'Columns:    %7.4f %6.2f %6.1f %8.4f %8.4f %8.4f\n', thresh(i), img_bd(i,4,1), img_bd(i,4,2), snri_c(i,1), snri_c(i,2), snri_c(i,3) );
end
for i = 1:n_thresh
    fprintf( fid, 'Row/Column: %7.4f %6.2f %6.1f %8.4f %8.4f %8.4f\n', thresh(i), img_bd(i,3,1), img_bd(i,3,2), snri_rc(i,1), snri_rc(i,2), snri_rc(i,3) );
end
fclose( fid );
if( n_sim > 1 )
    figure;
    plot( thresh, snri_r(:,1), 'k' );
    xlabel( 'Threshold' ); ylabel( 'SNRI' )
    txt = sprintf( 'Row Reconstruction, SNRI vs Threshold, # Sim = %d', n_sim );
    title( txt );
    figure;
    plot( thresh, snri_c(:,1), 'k' );
    xlabel( 'Threshold' ); ylabel( 'SNRI' )
    txt = sprintf( 'Column Reconstruction, SNRI vs Threshold, # Sim = %d', n_sim );
    title( txt );
    figure;
    plot( thresh, snri_rc(:,1), 'k' );
    xlabel( 'Threshold' ); ylabel( 'SNRI' )


    txt = sprintf( 'RC Reconstruction, SNRI vs Threshold, # Sim = %d', n_sim );
    title( txt );
end
save dnrcj;

function [C, L] = vs_den( C, L, j_0, thresh );
% Function: vs_den
% [C, L] = vs_den( C, L, j_0 );
% Function to de-noise the wavelet transform of the signal s. This algorithm
% uses Donoho and Johnstone's method titled 'RiskShrink' to do hard or
% soft thresholding of the signal. It assumes the noise floor is known.
% Hard thresholding sets to zero all coefficients above level j_0 that fall
% below the threshold. Soft thresholding subtracts out the threshold level
% from the coefficients above level j_0.
%
% Inputs
%   C      = Wavelet decomposition coefficients returned by wavedec
%   L      = Bookkeeping vector returned by wavedec
%   j_0    = Lowest decomposition level to apply thresholding, default = jmax/3
%   thresh = Threshold value for de-noising
%
% Outputs
%   C = De-noised wavelet coefficient vector C
%   L = Bookkeeping vector returned by wavedec

% First set the jmin and jmax values
N = length(L);
type = 'soft';
if( nargin == 2 )
    j_0 = floor( (N-2) / 3 );
end
% Figure out array indices for wavelet shrinkage
% r_start = L(1) + 1;
% r_stop = r_start + sum( L(2:j_0+1) ) - 1;
% r_start = sum( L(1:N-j_0-1) ) + 1;
r_start = sum( L(1:j_0) ) + 1;
r_stop = length( C );
% Get the length of the vector s for selection of the threshold lambda
l_indx = log(L(N)) / log( 2 );
% load lambda;
if( type == 'soft' )
    z_indx = find( abs( C(r_start:r_stop) ) <= thresh ) + r_start - 1;
    s_indx = find( abs( C(r_start:r_stop) ) > thresh ) + r_start - 1;
    C(z_indx) = 0.;
    C(s_indx) = sign( C(s_indx) ) .* ( abs( C(s_indx) ) - thresh );


else
    l_indx = find( abs( C(r_start:r_stop) ) <= thresh ) + r_start - 1;
    C(l_indx) = 0.;
end
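The shrinkage rules implemented by vs_den can be summarized compactly. The following Python sketch is illustrative only and is not part of the thesis MATLAB source; it applies the same soft rule (zero coefficients at or below the threshold, shrink the rest toward zero by the threshold) and hard rule (zero small coefficients, keep the rest unchanged):

```python
import numpy as np

def soft_threshold(c, t):
    # Soft shrinkage: sign(c) * max(|c| - t, 0), as vs_den does for 'soft'.
    return np.sign(c) * np.maximum(np.abs(c) - t, 0.0)

def hard_threshold(c, t):
    # Hard shrinkage: zero coefficients with |c| <= t, keep the rest.
    return np.where(np.abs(c) > t, c, 0.0)
```

For example, with threshold 1.0 the coefficient -3.0 becomes -2.0 under the soft rule but stays -3.0 under the hard rule, while 0.5 is zeroed by both.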


Non-Separable De-Noising Source Code

function dn2d_bv( img, w_name, snr, n_sim, t_low, t_high, t_points, j_0, b_thresh );
% dn2d_bv( 'img', 'w_name', snr, n_sim, t_low, t_high, t_points, j_0, b_thresh );
% Function to de-noise an image using the 2-D theory of Donoho and Johnstone.
% This function computes the threshold, thresholding type and approximation
% coefficients from 'ddencmp' to be used in the function 'wdencmp'.
% De-noising is achieved by calling 'wdencmp'.
%
% Inputs
%   img      = Input image for de-noising, assumed to be a .mat file
%   w_name   = Wavelet filter to be used
%   snr      = Requested SNR
%   n_sim    = Number of simulations to run
%   t_low    = Minimum threshold
%   t_high   = Maximum threshold
%   t_points = Number of threshold points
%   j_0      = Beginning reconstruction level
%   b_thresh = BvDv threshold (std. deviation, not variance)
%
% Outputs
%   Threshold vs SNRI plot

% Get the image and add noise
load( img );
check = 'd_idx';
s = whos( check );
if( isempty(s) == 1 )
    [bv,b_idx,dv,d_idx] = bvdv_idx( X, 3, b_thresh );
    save( img, 'X', 'map', 'bv', 'dv', 'b_idx', 'd_idx' )
end
[m,n] = size( X );
imshow( X, map );
txt = sprintf( 'Original Image, Bv = %4.1f, Dv = %5.1f, BvDv Threshold = %2d', bv, dv, b_thresh );
title( txt );
fname = sprintf( 'dn2d_%s_n%03d_t%02.0f_%02.0f.txt', img, n_sim, t_low, t_high );
fid = fopen( fname, 'w' );
fprintf( fid, 'Thresh     Bv     Dv   SNRI SNRI_Bv SNRI_Dv\n' );
fprintf( fid, '       %6.2f %6.1f Original Image\n', bv, dv );

% Get the threshold table and noise floor variance
if( t_low ~= t_high )
    thresh = vs_thresh( t_low, t_high, t_points );
else
    thresh = t_low;
end


n_thresh = length( thresh );
n_sigma = n_floor( X, snr );
k_app = 1;

% Plot a representative noisy image
gn = randn( m, n );
Xpn = X + n_sigma * gn;
[bv,dv] = bvdv( Xpn, 3, b_thresh, b_idx, d_idx );
figure;
imshow( Xpn, map );
txt = sprintf( 'Image + Noise, SNR = %2d, Bv = %4.1f, Dv = %5.1f, BvDv = %2d', snr, bv, dv, b_thresh );
title( txt );
clear gn;
fprintf( fid, 'Simulations: %d, BvDv Threshold: %d\n', n_sim, b_thresh );
fprintf( fid, '       %6.2f %6.1f Image + Noise, SNR = %d\n', bv, dv, snr );
snri = zeros(n_thresh,3);

% De-noise the image in a 2-D proper sense
for i = 1:n_thresh
    fprintf( 1, 'Thresh %d of %d\n', i, n_thresh );
    for j = 1:n_sim
        fprintf( 1, 'Sim %d of %d\n', j, n_sim );
        randn( 'state', sum( 100*clock ) );
        gn = randn( m, n );
        Xpn = X + n_sigma * gn;
        dn_img = wdencmp( 'gbl', Xpn, w_name, 2, thresh(i), 's', k_app );
        [bv,dv] = bvdv( dn_img, 3, b_thresh, b_idx, d_idx );
        [tmp,tmp_b,tmp_d] = meas2_snri( X, Xpn, dn_img, b_idx, d_idx );
        snri(i,1) = snri(i,1) + tmp;
        snri(i,2) = snri(i,2) + tmp_b;
        snri(i,3) = snri(i,3) + tmp_d;
    end
    snri(i,:) = snri(i,:) / n_sim;
    fprintf( fid, '%7.4f %6.2f %6.1f %8.4f %8.4f %8.4f\n', thresh(i), bv, dv, snri(i,1), snri(i,2), snri(i,3) );
end
txt = sprintf( 'SNRI = %4.1f, SNRI_B_v = %4.1f, SNRI_D_v = %4.1f, # Sim = %3d', snri(1,1), snri(1,2), snri(1,3), n_sim );
xlabel( txt );
fclose( fid );
figure; hold on;
plot( thresh, snri(:,1) );
xlabel( 'Threshold' ); ylabel( 'SNRI' )


title( ['SNRI vs Threshold, ', num2str(n_sim), ' Simulations'] );
save dn2d_j;


Modified Cubic Filtering Source Code

function polyf( img, n_sim, snr, t_low, t_high, t_step, b_thresh, a_low, a_high, a_step )
% polyf( 'img', n_sim, snr, t_low, t_high, t_step, b_thresh, a_low, a_high, a_step )
% Function to enhance an image using the polynomial filter described
% by A. Vanzo, G. Ramponi and G. L. Sicuranza. This algorithm is
% similar to unsharp masking except four 3 x 1 directional Laplacian
% filters act on the image instead of the UM 3 x 3 kernel. A
% filter is applied to indicate direction of the edge which is then
% fed into the feedback loop of the highpass filter path.
%
% Inputs
%   img      = Input image for de-noising, assumed to be a .mat file
%   n_sim    = Number of simulations
%   snr      = Requested signal to noise ratio
%   t_low    = Minimum threshold
%   t_high   = Maximum threshold
%   t_step   = Step size in pixels between low and high
%   b_thresh = BVDV threshold (std. deviation, not variance)
%   a_low    = Minimum smoothing factor for the low pass filter, 0. <= a <= 0.1
%   a_high   = Maximum smoothing factor for the low pass filter, 0. <= a <= 0.1
%   a_step   = Step size between a_low and a_high
%
% Outputs
%   BV, DV, SNRI, SNRI_BV, SNRI_DV

% Get the image and compute the noise floor
load( img );
check = 'd_idx';
s = whos( check );
if( isempty(s) == 1 )
    % Compute the BVDV and get the background and detail pixels
    [bv,b_idx,dv,d_idx] = bvdv_idx( X, 3, b_thresh );
    save( img, 'X', 'map', 'bv', 'dv', 'b_idx', 'd_idx' )
end
[m,n] = size( X );
n_sigma = n_floor( X, snr );

% Set the detail pixel threshold values
if( t_low ~= t_high )
    max_pix = t_low:t_step:t_high;
else
    max_pix = t_low;
end
n_mxpx = length( max_pix );


% Set the smoothing threshold values
if( a_low ~= a_high )
    a = a_low:a_step:a_high;
else
    a = a_low;
end
n_a = length( a );
p_snri = zeros(n_a,n_mxpx,3);
p_bv = zeros(n_a,n_mxpx);
p_dv = zeros(n_a,n_mxpx);
n_bv = 0.;  n_dv = 0.;
tmp_b = 0.; tmp_d = 0.;
fname = sprintf( 'p_%1c_n%02d_a%02d_%02d_t%03d_%03d.txt', img(1), n_sim, fix(100*a(1)), fix(100*a(n_a)), max_pix(1), max_pix(n_mxpx) );
fid = fopen( fname, 'wt' );
fprintf( fid, 'Original Image: BV = %6.2f DV = %7.1f\n', bv, dv );
imshow( X, map );
txt = sprintf( 'Original Image, Bv = %4.1f, Dv = %5.1f', bv, dv );
title( txt );
for h = 1:n_a
    fprintf( 1, 'a: %d of %d\n', h, n_a );
    for i = 1:n_sim
        randn( 'state', sum( 100*clock ) );
        gn = randn( m, n );
        Xpn = X + n_sigma * gn;
        [tmp_b,tmp_d] = bvdv( Xpn, 3, b_thresh, b_idx, d_idx );
        n_bv = n_bv + tmp_b;
        n_dv = n_dv + tmp_d;
        % Get the directional Laplacian of the noisy image
        [w_h, w_v, w_45, w_135] = laplacian( Xpn );
        % S filter the directional Laplacian
        [t_h, t_v, t_45, t_135] = s_filter( w_h, w_v, w_45, w_135 );
        t = ( t_h + t_v + t_45 + t_135 );
        tmp = max( max( abs(t) ) );
        lambda = tmp ./ max_pix;
        clear w_h w_v w_45 w_135 t_h t_v t_45 t_135;
        % Lowpass filter the image
        fltrd_img = lowpass( Xpn, a(h) );
        for j = 1:n_mxpx;


            fprintf( 1, 'Sim: %d of %d  lambda: %d of %d\n', i, n_sim, j, length(lambda) );
            tmp_t = t / lambda(j);
            % Processed image = fltrd_img + tmp_t
            p_img = fltrd_img + tmp_t;
            [bv,dv] = bvdv( p_img, 3, b_thresh, b_idx, d_idx );
            [tmp,tmp_b,tmp_d] = meas2_snri( X, Xpn, p_img, b_idx, d_idx );
            p_snri(h,j,1) = p_snri(h,j,1) + tmp;
            p_snri(h,j,2) = p_snri(h,j,2) + tmp_b;
            p_snri(h,j,3) = p_snri(h,j,3) + tmp_d;
            p_bv(h,j) = p_bv(h,j) + bv;
            p_dv(h,j) = p_dv(h,j) + dv;
        end
    end % End sim loop
end % End a loop
p_snri = p_snri / n_sim;
p_bv = p_bv / n_sim;
p_dv = p_dv / n_sim;
n_bv = n_bv / n_sim;
n_dv = n_dv / n_sim;
for h = 1:n_a
    fprintf( fid, '\n\n    a pix    Bv    Dv   SNRI SNRI_Bv SNRI_Dv\n' );
    for i = 1:n_mxpx;
        fprintf( fid, '%5.3f %3d %4.1f %5.1f %6.2f %6.1f %6.1f\n', a(h), max_pix(i), p_bv(h,i), p_dv(h,i), p_snri(h,i,1), p_snri(h,i,2), p_snri(h,i,3) );
    end
end
% Do these figures only if you want them
% randn( 'state', sum( 100*clock ) );
% gn = randn( m, n );
% Xpn = X + n_sigma * gn;
% figure;
% imshow( Xpn, map );
% txt = sprintf( 'Image + Noise, Bv = %4.1f, Dv = %5.1f, BvDv Thresh = %2d', n_bv, n_dv, b_thresh );
% title( txt );
% figure;
% imshow( p_img, map );
% txt = sprintf( 'Filtered Image, Bv = %4.1f, Dv = %5.1f, BvDv Thresh = %2d', bv, dv, b_thresh );
% title( txt );
% txt = sprintf( 'Lambda = %d, SNRI = %4.1f, SNRI_B_v = %4.1f, SNRI_D_v = %5.1f', t_low, p_snri(1,1), p_snri(1,2), p_snri(1,3) );
% xlabel( txt );
% txt = sprintf( 'f:\db4_pictures\polyf_%s_n%02d_t%03d', img, n_sim, t_low );


% fname = sprintf( 'f:\db4_pictures\p_%1c_n%02d_a%0.2f_t%03d_', img(1), n_sim, a(1), max_pix(1) );
% bmpwrite( p_img, map, fname );
fclose( fid );
save p_snri;
if( n_mxpx ~= 1 )
    figure;
    mesh( max_pix, a, p_snri(:,:,1) );
    ylabel( 'Smoothing Factor' ); xlabel( 'Lambda' ); zlabel( 'SNRI' );
    txt = sprintf( 'SNRI vs lambda For Image "%s"', img );
    title( txt );
end

function [w_h,w_v,w_45,w_135] = laplacian( img_tmp );
% [w_h,w_v,w_45,w_135] = laplacian( img_tmp )
% Function to compute the directional Laplacians of the input image.
% The algorithm is from A. Vanzo, G. Ramponi and G. Sicuranza's
% paper "An Image Enhancement Technique Using Polynomial Filters".
% The Laplacian algorithm defines a highpass filter whose output
% is 2*img(j,k) - img(j-1,k) - img(j+1,k). This filter is run
% over the entire image to form the w_h matrix. The edge points
% are arbitrarily duplicated from the first row or column to provide
% a value at the negative index positions.
%
% Inputs
%   img_tmp = Input matrix
%
% Outputs
%   w_h   = Horizontal Laplacian matrix
%   w_v   = Vertical Laplacian matrix
%   w_45  = Diagonal Laplacian matrix along the 45 degree axis
%   w_135 = Diagonal Laplacian matrix along the 135 degree axis

[m,n] = size( img_tmp );
w_h = zeros(m,n);
w_v = zeros(m,n);
w_45 = zeros(m,n);
w_135 = zeros(m,n);
% Enlarge the image to deal with the edges.
m = m + 2;
n = n + 2;
img = zeros(m,n);
img(2:m-1,2:n-1) = img_tmp;


% First and last columns excluding corners
img(2:m-1,1) = img_tmp(1:m-2,2);
img(2:m-1,n) = img_tmp(1:m-2,n-3);
% First and last rows excluding corners
img(1,2:n-1) = img_tmp(2,1:n-2);
img(m,2:n-1) = img_tmp(m-3,1:n-2);
% Corners
img(1,1) = img_tmp(2,2);
img(m,1) = img_tmp(m-3,2);
img(1,n) = img_tmp(2,n-3);
img(m,n) = img_tmp(m-3,n-3);
for j = 2:m-1
    for k = 2:n-1
        % Do the horizontal Laplacian
        w_h(j-1,k-1) = 2 * img(j,k) - img(j-1,k) - img(j+1,k);
        % Do the vertical Laplacian
        w_v(j-1,k-1) = 2 * img(j,k) - img(j,k-1) - img(j,k+1);
        % Do the diagonal Laplacians
        w_45(j-1,k-1) = 2 * img(j,k) - img(j-1,k-1) - img(j+1,k+1);
        w_135(j-1,k-1) = 2 * img(j,k) - img(j-1,k+1) - img(j+1,k-1);
    end
end

function [s_h,s_v,s_45,s_135] = s_filter( w_h, w_v, w_45, w_135 );
% [s_h,s_v,s_45,s_135] = s_filter( w_h, w_v, w_45, w_135 )
% Function to compute the S filtering of the directional Laplacian.
% The algorithm is from A. Vanzo, G. Ramponi and G. Sicuranza's
% paper "An Image Enhancement Technique Using Polynomial Filters".
% The S filter algorithm detects correlated data in a direction
% orthogonal to the w matrix it is operating on. This filter is run
% over the entire image to form the s_ matrices. The edge points
% are arbitrarily duplicated from the first row or column to provide
% a value at the negative index positions.
%
% Inputs
%   w_h   = Horizontal directional Laplacian matrix
%   w_v   = Vertical directional Laplacian matrix
%   w_45  = Directional Laplacian matrix along the 45 degree axis
%   w_135 = Directional Laplacian matrix along the 135 degree axis
%
% Outputs
%   s_h   = Horizontal S filtered matrix
%   s_v   = Vertical S filtered matrix
%   s_45  = Diagonal S filtered matrix along the 45 degree axis
%   s_135 = Diagonal S filtered matrix along the 135 degree axis

[m,n] = size( w_h );


m = m + 2;
n = n + 2;
t_h = zeros(m,n);
t_h(2:m-1,2:n-1) = w_h;
% First and last columns excluding corners
t_h(2:m-1,1) = w_h(1:m-2,2);
t_h(2:m-1,n) = w_h(1:m-2,n-3);
% First and last rows excluding corners
t_h(1,2:n-1) = w_h(2,1:n-2);
t_h(m,2:n-1) = w_h(m-3,1:n-2);
% Corners
t_h(1,1) = w_h(2,2);    t_h(m,1) = w_h(m-3,2);
t_h(1,n) = w_h(2,n-3);  t_h(m,n) = w_h(m-3,n-3);

t_v = zeros(m,n);
t_v(2:m-1,2:n-1) = w_v;
% First and last columns excluding corners
t_v(2:m-1,1) = w_v(1:m-2,2);
t_v(2:m-1,n) = w_v(1:m-2,n-3);
% First and last rows excluding corners
t_v(1,2:n-1) = w_v(2,1:n-2);
t_v(m,2:n-1) = w_v(m-3,1:n-2);
% Corners
t_v(1,1) = w_v(2,2);    t_v(m,1) = w_v(m-3,2);
t_v(1,n) = w_v(2,n-3);  t_v(m,n) = w_v(m-3,n-3);

t_45 = zeros(m,n);
t_45(2:m-1,2:n-1) = w_45;
% First and last columns excluding corners
t_45(2:m-1,1) = w_45(1:m-2,2);
t_45(2:m-1,n) = w_45(1:m-2,n-3);
% First and last rows excluding corners
t_45(1,2:n-1) = w_45(2,1:n-2);
t_45(m,2:n-1) = w_45(m-3,1:n-2);
% Corners
t_45(1,1) = w_45(2,2);    t_45(m,1) = w_45(m-3,2);
t_45(1,n) = w_45(2,n-3);  t_45(m,n) = w_45(m-3,n-3);

t_135 = zeros(m,n);
t_135(2:m-1,2:n-1) = w_135;
% First and last columns excluding corners
t_135(2:m-1,1) = w_135(1:m-2,2);
t_135(2:m-1,n) = w_135(1:m-2,n-3);
% First and last rows excluding corners
t_135(1,2:n-1) = w_135(2,1:n-2);
t_135(m,2:n-1) = w_135(m-3,1:n-2);
% Corners
t_135(1,1) = w_135(2,2);    t_135(m,1) = w_135(m-3,2);
t_135(1,n) = w_135(2,n-3);  t_135(m,n) = w_135(m-3,n-3);

% Do the S filter computation
for j = 2:m-1
    for k = 2:n-1
        % Do the vertical S filter


        tmp = [t_h(j-1,k-1:k+1) t_h(j,k-1:k+1) t_h(j+1,k-1:k+1)];
        s_v(j-1,k-1) = tmp(5) * abs( tmp(2) * abs( tmp(8) ) + ...
                       abs( tmp(2) ) * tmp(8) ) + ...
                       abs( tmp(5) ) * ( tmp(2) * abs( tmp(8) ) + ...
                       abs( tmp(2) ) * tmp(8) );
        % Do the horizontal S filter
        tmp = [t_v(j-1,k-1:k+1) t_v(j,k-1:k+1) t_v(j+1,k-1:k+1)];
        s_h(j-1,k-1) = tmp(5) * abs( tmp(4) * abs( tmp(6) ) + ...
                       abs( tmp(4) ) * tmp(6) ) + ...
                       abs( tmp(5) ) * ( tmp(4) * abs( tmp(6) ) + ...
                       abs( tmp(4) ) * tmp(6) );
        % Do the 135 degree S filter
        tmp = [t_45(j-1,k-1:k+1) t_45(j,k-1:k+1) t_45(j+1,k-1:k+1)];
        s_135(j-1,k-1) = ( tmp(5) * abs( tmp(1) * abs( tmp(9) ) + ...
                         abs( tmp(1) ) * tmp(9) ) + ...
                         abs( tmp(5) ) * ( tmp(1) * abs( tmp(9) ) + ...
                         abs( tmp(1) ) * tmp(9) ) ) / 2.;
        % Do the 45 degree S filter
        tmp = [t_135(j-1,k-1:k+1) t_135(j,k-1:k+1) t_135(j+1,k-1:k+1)];
        s_45(j-1,k-1) = ( tmp(5) * abs( tmp(3) * abs( tmp(7) ) + ...
                        abs( tmp(3) ) * tmp(7) ) + ...
                        abs( tmp(5) ) * ( tmp(3) * abs( tmp(7) ) + ...
                        abs( tmp(3) ) * tmp(7) ) ) / 2.;
    end
end

function f_img = lowpass( img, a );
% f_img = lowpass( img, a );
% Function to compute the lowpass filtered version of the input image.
% The filter uses A. Vanzo, G. Ramponi and G. Sicuranza's
% lowpass filter whose coefficients are given by
%   | a    a   a |
%   | a  1-8a  a |
%   | a    a   a |
% as documented in their paper "An Image Enhancement Technique
% Using Polynomial Filters".
%
% Inputs
%   img = Image to be lowpass filtered
%   a   = Filtering constant, 0. <= a <= 0.1
%
% Outputs
%   f_img = Lowpass filtered version of the input image


[m,n] = size( img );
m = m + 2;
n = n + 2;
t_img = zeros(m,n);
t_img(2:m-1,2:n-1) = img;
% First and last columns excluding corners
t_img(2:m-1,1) = img(1:m-2,2);
t_img(2:m-1,n) = img(1:m-2,n-3);
% First and last rows excluding corners
t_img(1,2:n-1) = img(2,1:n-2);
t_img(m,2:n-1) = img(m-3,1:n-2);
% Corners
t_img(1,1) = img(2,2);    t_img(m,1) = img(m-3,2);
t_img(1,n) = img(2,n-3);  t_img(m,n) = img(m-3,n-3);
% Initialize the lowpass filter
lpf = [a a a; a 1-8*a a; a a a];
% Do the lowpass filter computation
for j = 2:m-1
    for k = 2:n-1
        tmp = [t_img(j-1,k-1:k+1); t_img(j,k-1:k+1); t_img(j+1,k-1:k+1)];
        f_img(j-1,k-1) = sum( sum( lpf .* tmp ) );
    end
end


One Dimensional De-Noising Source Code

function [mean,thresh,spn,spn_r] = dn1d( s_type, w_name, snr, n_sim, t_low, t_high, t_points, j_0 );
% dn1d( s_type, 'w_name', snr, n_sim, t_low, t_high, t_points, j_0 );
% 'dn1d' adds noise to the input signal to achieve the requested SNR
% and de-noises the signal using Donoho's soft thresholding algorithm.
% It uses 'w_name' for the mother wavelet to decompose the signal.
%
% Inputs
%   s_type   = Data type: 'Blocks', 'Bumps', 'Doppler', 'Heavisine'
%   w_name   = Wavelet filter to be used
%   snr      = Requested SNR
%   n_sim    = Number of simulations to run
%   t_low    = Minimum threshold
%   t_high   = Maximum threshold
%   t_points = Number of threshold points
%   j_0      = Beginning reconstruction level
%
% Outputs
%   Threshold vs. SNR plot

% Generate the signal and add noise
if( strcmp( s_type, 'Blocks' ) == 1 )
    s = blocks;
elseif( strcmp( s_type, 'Bumps' ) == 1 )
    s = bumps;
elseif( strcmp( s_type, 'Heavisine' ) == 1 )
    s = heavisine;
elseif( strcmp( s_type, 'Doppler' ) == 1 )
    s = doppler;
else
    error( 'Could not match data type, check spelling' );
end
m = length( s );
% Get the threshold table and noise floor variance
if( t_low ~= t_high )
    thresh = vs_thresh( t_low, t_high, t_points );
else
    thresh = t_low;
end
n_thresh = length( thresh );
n_sigma = n_floor( s, snr );
% levels = wmaxlev( m, w_name );
levels = 6;


randn('state',sum(100*clock));
gn = randn( 1, m );
spn = s + n_sigma * gn;
figure;
subplot(3,1,2);
plot( spn, 'k' );
[thr,sorh,keepapp] = ddencmp('den','wv',spn);
clear thr sorh gn;
% Output some results
fid = fopen( 'f:\report\simfoml.doc', 'w' );
fprintf( fid, 'Threshold SNRI Mean SNRI Sigma\n' );
for i = 1:n_thresh
    fprintf( 1, 'Processing Loop %d\n', i );
    fprintf( fid, 'Threshold = %5.3f\n', thresh(i) );
    snri = zeros(1, n_sim);
    for j = 1:n_sim
        randn('state',sum(100*clock));
        gn = randn( 1, m );
        spn = s + n_sigma * gn;
        [thr,sorh,keepapp] = ddencmp('den','wv',spn);
        clear gn
        % Do the inverse wavelet transform
        spn_r = wdencmp('gbl',spn,w_name,j_0,thresh(i),'s',keepapp);
        % Compute the SNRI
        snri(j) = snri(j) + meas1_snri( s, spn, spn_r, fid );
        if( j == 1 )
            myplot( spn_r, thresh(i), snri(j), j_0 );
        end
    end
    mean(i) = sum( snri ) / n_sim;
    sig(i) = sqrt( sum( ( snri - mean(i) ).^2 ) / (n_sim-1) );
    clear snri
    fprintf( fid, 'Mean SNRI: %6.4f Sigma SNRI: %6.4f\n\n', mean(i), sig(i) );
end
fclose( fid );
% figure;
subplot(3,1,1);
plot( thresh, mean, 'k' );
xlabel('Threshold'); ylabel('SNR Improvement');
title( ['SNRI vs Threshold for ',s_type,', M = ',num2str(m),', J_0 = ',num2str(j_0),', ',num2str(n_sim),' Simulations'] );


function s = blocks( dur, cnt );
% s = blocks( dur, cnt );
% Function to compute a blocky wave using Donoho and Johnstone's function:
%   f(t) = sum( h*K( t-tj ) ),  K(t) = ( 1 + sign(t) ) / 2
% where h  = [4 -5 3 -4 5 -4.2 2.1 4.3 -3.1 2.1 -4.2]
%       tj = [0.1 0.13 0.15 0.23 0.25 0.40 0.44 0.65 0.76 0.78 0.81]
%
% Inputs
%   dur = time duration of sampled signal, default = 1 sec
%   cnt = number of data points to compute, default = 2048
%
% Outputs
%   s = Signal

% First let's deal with the defaults
if nargin == 0
    dur = 1.0;
    cnt = 2048;
elseif nargin == 1
    cnt = 2048;
end
% Initialize the vectors h and tj
h = [4 -5 3 -4 5 -4.2 2.1 4.3 -3.1 2.1 -4.2];
tj = [0.1 0.13 0.15 0.23 0.25 0.40 0.44 0.65 0.76 0.78 0.81];
% Now define the sample rate f_s
f_s = cnt / dur;
% Now initialize the time vector t
t = 0:cnt-1;
t = t / f_s;
% Now compute s over the intervals j = 1,...,J
s = zeros(1,cnt);
for j = 1:length(tj)
    k = ( 1 + sign( t - tj(j) ) ) / 2;
    s = s + h(j) * k;
end

function s = bumps( dur, cnt );
% s = bumps( dur, cnt );
% Function to compute a bumpy wave using Donoho and Johnstone's function:
%   f(t) = sum( h*K( (t-tj)/w ) ),  K(t) = ( 1 + abs(t) )^-4
% where h  = [4 5 3 4 5 4.2 2.1 4.3 3.1 5.1 4.2]
%       w  = [0.005 0.005 0.006 0.01 0.01 0.03 0.01 0.01 0.005 0.008 0.005]
%       tj = [0.1 0.13 0.15 0.23 0.25 0.40 0.44 0.65 0.76 0.78 0.81]
%   dur = time duration of sampled signal, default = 1 sec
%   cnt = number of data points to compute, default = 2048


% First let's deal with the defaults
if nargin == 0
    dur = 1.0;
    cnt = 2048;
elseif nargin == 1
    cnt = 2048;
end
% Initialize the vectors h, w and tj
h = [4 5 3 4 5 4.2 2.1 4.3 3.1 5.1 4.2];
w = [0.005 0.005 0.006 0.01 0.01 0.03 0.01 0.01 0.005 0.008 0.005];
tj = [0.1 0.13 0.15 0.23 0.25 0.40 0.44 0.65 0.76 0.78 0.81];
% Now define the sample rate f_s
f_s = cnt / dur;
% Now initialize the time vector t
t = 0:cnt-1;
t = t / f_s;
% Now compute s over the intervals j = 1,...,J
s = zeros(1,cnt);
for j = 1:length(tj)
    k = ( 1 + abs( (t - tj(j)) ./ w(j) ) ).^-4;
    s = s + h(j) * k;
end

function s = heavisine( dur, cnt );
% s = heavisine( dur, cnt );
% Function to compute a goofy sine wave using Donoho and Johnstone's function:
%   f(t) = 4*sin(4*pi*t) - sign(t-0.3) - sign(0.72-t)
%
% Inputs
%   dur = Signal duration in seconds, default = 1 sec
%   cnt = Number of samples to compute, default = 2048
%
% Outputs
%   s = Signal

% First let's deal with the defaults
if nargin == 0
    dur = 1.0;
    cnt = 2048;
elseif nargin == 1
    cnt = 2048;
end


% Now define the sample rate f_s
f_s = cnt / dur;
% Now initialize the time vector t
t = 0:cnt-1;
t = t / f_s;
s = 4*sin(4*pi*t) - sign(t-0.3) - sign(0.72-t);

function s = doppler( dur, cnt, epsilon );
% s = doppler( dur, cnt, epsilon );
% Function to compute a chirped signal using Donoho and Johnstone's function:
%   f(t) = ( t*(1-t) )^.5 * sin( 2*pi*(1+epsilon)/(t+epsilon) ),  epsilon = 0.05
%
% Inputs
%   dur     = time duration of sampled signal, default = 1 sec
%   cnt     = number of data points to compute, default = 2048
%   epsilon = damping factor, default = 0.05
%
% Outputs
%   s = Doppler signal

% First let's deal with the defaults
if nargin == 0
    dur = 1.0;
    cnt = 2048;
    epsilon = 0.05;
elseif nargin == 1
    cnt = 2048;
    epsilon = 0.05;
elseif nargin == 2
    epsilon = 0.05;
end
% Now define the sample rate f_s
f_s = cnt / dur;
% Now initialize the time vector t
t = 0:cnt-1;
t = t / f_s;
s = ( t .* (dur-t) ).^.5 .* sin( 2*pi*(1+epsilon)./(t+epsilon) );


Utilities

function [bv,dv] = bvdv( img, w_size, thresh, b_idx, d_idx );
% [bv, dv] = bvdv( img, w_size, thresh, b_idx, d_idx );
% 'bvdv' computes the background variance / detail variance figure
% of merit for the input image. The variance is computed local to
% every pixel in the image using an odd window size (square) of 'w_size'
% input by the user. The variance for every pixel is computed and a
% histogram of the variances is displayed. Based on this histogram
% the user is queried for a threshold which defines whether a given
% pixel is assigned to the background or detail category.
%
% Inputs
%   img    = Input image
%   w_size = Window size for computing local variance
%   thresh = Variance threshold between background and detail regions
%   b_idx  = Optional input for computing the BV measure
%   d_idx  = Optional input for computing the DV measure
%
% Outputs
%   bv = Background variance
%   dv = Detail variance

% Initialize parameters
[m,n] = size( img );
mid = floor( w_size / 2 );
w2 = w_size^2;
bv = 0.; n_bv = 0.;
dv = 0.; n_dv = 0.;
% Enlarge the image to deal with the edges.
m = m + 2;
n = n + 2;
t_h = zeros(m,n);
t_h(2:m-1,2:n-1) = img;
% First and last columns excluding corners
t_h(2:m-1,1) = img(1:m-2,2);
t_h(2:m-1,n) = img(1:m-2,n-3);
% First and last rows excluding corners
t_h(1,2:n-1) = img(2,1:n-2);
t_h(m,2:n-1) = img(m-3,1:n-2);
% Corners
t_h(1,1) = img(2,2);    t_h(m,1) = img(m-3,2);
t_h(1,n) = img(2,n-3);  t_h(m,n) = img(m-3,n-3);
% Compute the background variance
for j = 1:length(b_idx)
    tmp = b_idx(j);


    idx = [tmp-m-1 tmp-m tmp-m+1 tmp-1 tmp tmp+1 tmp+m-1 tmp+m tmp+m+1];
    var = std( t_h(idx) )^2;
    bv = bv + var;
end
bv = bv / length(b_idx);
% Compute the detail variance
for j = 1:length(d_idx)
    tmp = d_idx(j);
    idx = [tmp-m-1 tmp-m tmp-m+1 tmp-1 tmp tmp+1 tmp+m-1 tmp+m tmp+m+1];
    var = std( t_h(idx) )^2;
    dv = dv + var;
end
dv = dv / length(d_idx);

function [bv,b_idx,dv,d_idx] = bvdv_idx( img, w_size, thresh );
% [bv,b_idx,dv,d_idx] = bvdv_idx( img, w_size, thresh );
% 'bvdv_idx' computes the background variance / detail variance figure
% of merit for the input image. The variance is computed local to
% every pixel in the image using an odd window size (square) of 'w_size'
% input by the user. The variance for every pixel is computed and a
% histogram of the variances is displayed. Based on this histogram
% the user is queried for a threshold which defines whether a given
% pixel is assigned to the background or detail category.
%
% Inputs
%   img    = Input image
%   w_size = Window size for computing local variance
%   thresh = Variance threshold between background and detail regions
%
% Outputs
%   bv    = Background variance
%   b_idx = Background variance pixel indices
%   dv    = Detail variance
%   d_idx = Detail variance pixel indices

% Initialize parameters
[m,n] = size( img );
mid = floor( w_size / 2 );
w2 = w_size^2;
bv = 0.; n_bv = 0;
dv = 0.; n_dv = 0;
% Get the background and detail pixels.
% Enlarge the image to deal with the edges.

PAGE 98

m = m + 2;
n = n + 2;
t_h = zeros(m,n);
t_h(2:m-1,2:n-1) = img;
% First and last columns excluding corners
t_h(2:m-1,1) = img(1:m-2,2);
t_h(2:m-1,n) = img(1:m-2,n-3);
% First and last rows excluding corners
t_h(1,2:n-1) = img(2,1:n-2);
t_h(m,2:n-1) = img(m-3,1:n-2);
% Corners
t_h(1,1) = img(2,2);
t_h(m,1) = img(m-3,2);
t_h(1,n) = img(2,n-3);
t_h(m,n) = img(m-3,n-3);

% Compute the local variance
for j = mid+1:m-mid
  for k = mid+1:n-mid
    var = reshape( t_h(j-mid:j+mid,k-mid:k+mid), 1, w2 );
    var = std( var )^2;
    if( var >= thresh )
      dv = dv + var;
      n_dv = n_dv + 1;
      d_idx(n_dv) = (k-1)*m + j;
    else
      bv = bv + var;
      n_bv = n_bv + 1;
      b_idx(n_bv) = (k-1)*m + j;
    end
  end
end
bv = bv / n_bv;
dv = dv / n_dv;

function [snri,snri_bv,snri_dv] = meas2_snri( X, Xpn, Xpn_r, b_idx, d_idx )
% [snri,snri_bv,snri_dv] = meas2_snri( X, Xpn, Xpn_r, b_idx, d_idx )
% Function to compute the signal to noise ratio improvement given by:
%   snri = 10*log10( sum(img-img_c)^2 / sum(img-img_f)^2 );
%
% Inputs
%   X     = Uncorrupted image
%   Xpn   = Corrupted signal
%   Xpn_r = Filtered signal
%
% Outputs
%   snri    = Signal to noise ratio improvement
%   snri_bv = SNRI of background pixels
%   snri_dv = SNRI of detail pixels

snri = 10*log10( sum( sum( (X-Xpn).^2 ) ) / sum( sum( (X-Xpn_r).^2 ) ) );
snri_bv = 0.;
snri_dv = 0.;
if( nargin == 5 )
  [m,n] = size( X );
  m = m + 2;
  n = n + 2;
  tmpX = X;
  X = zeros(m,n);
  X(2:m-1,2:n-1) = tmpX;
  clear tmpX;
  tmpXpn = Xpn;
  Xpn = zeros(m,n);
  Xpn(2:m-1,2:n-1) = tmpXpn;
  clear tmpXpn;
  tmpXpn_r = Xpn_r;
  Xpn_r = zeros(m,n);
  Xpn_r(2:m-1,2:n-1) = tmpXpn_r;
  clear tmpXpn_r;
  snri_bv = 10*log10( sum( ( X(b_idx)-Xpn(b_idx) ).^2 ) / sum( ( X(b_idx)-Xpn_r(b_idx) ).^2 ) );
  snri_dv = 10*log10( sum( ( X(d_idx)-Xpn(d_idx) ).^2 ) / sum( ( X(d_idx)-Xpn_r(d_idx) ).^2 ) );
end

function n_sigma = n_floor( img, snr );
% n_sigma = n_floor( img, snr )
% Function to compute the noise floor sigma for a given SNR.
%   SNR = 20*log10( [signal sigma] / [noise sigma] )
%
% Inputs
%   img = Uncorrupted signal
%   snr = Requested SNR
%
% Outputs
%   n_sigma = Noise floor sigma for requested SNR

[m,n] = size( img );
tmp = reshape( img, 1, m*n );
img_sigma = std( tmp );
n_sigma = img_sigma / 10.^(snr/20.);

function thresh = vs_thresh( t_low, t_high, t_points );
% thresh = vs_thresh( t_low, t_high, t_points );
% Function to compute the visushrink threshold given as:
%   thresh = sqrt( 2*log( m ) )
%
% Inputs
%   t_low    = Minimum threshold
%   t_high   = Maximum threshold
%   t_points = Number of threshold points
%
% Outputs
%   thresh = Table of visushrink thresholds

step = ( t_high - t_low ) / t_points;
thresh = t_low:step:t_high;
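The three figures of merit used by the listings above reduce to one-line formulas: the VisuShrink universal threshold quoted in the vs_thresh header, the noise-floor sigma of n_floor, and the SNRI of meas2_snri. A minimal Python sketch of these formulas (an illustration only, not part of the thesis MATLAB code; the function names and flattened-list interface are assumptions):

```python
import math

def visushrink_threshold(sigma, n):
    """Universal VisuShrink threshold, t = sigma * sqrt(2*log(n)),
    for a signal of n samples with noise standard deviation sigma."""
    return sigma * math.sqrt(2.0 * math.log(n))

def noise_floor_sigma(img_sigma, snr_db):
    """Noise sigma that yields the requested SNR in dB, as in n_floor:
    SNR = 20*log10(signal_sigma / noise_sigma)."""
    return img_sigma / (10.0 ** (snr_db / 20.0))

def snri_db(clean, corrupted, filtered):
    """SNR improvement in dB, as in meas2_snri:
    snri = 10*log10( sum((X-Xpn)^2) / sum((X-Xpn_r)^2) )."""
    num = sum((c - p) ** 2 for c, p in zip(clean, corrupted))
    den = sum((c - f) ** 2 for c, f in zip(clean, filtered))
    return 10.0 * math.log10(num / den)
```

For a 256x256 image with unit noise sigma, visushrink_threshold(1.0, 256*256) is about 4.71; vs_thresh instead sweeps a user-supplied table of thresholds around such a value.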