Citation
Adaptive filtering for speckle noise cancellation

Material Information

Title:
Adaptive filtering for speckle noise cancellation
Creator:
Sanchez, Joseph A
Place of Publication:
Denver, Colo.
Publisher:
University of Colorado Denver
Publication Date:
1995
Language:
English
Physical Description:
112 leaves : illustrations ; 29 cm

Thesis/Dissertation Information

Degree:
Master's (Master of Science)
Degree Grantor:
University of Colorado Denver
Degree Divisions:
Department of Electrical Engineering, CU Denver
Degree Disciplines:
Electrical Engineering
Committee Chair:
Bose, Tamal
Committee Members:
Radenkovic, Miloje
Goggin, Shelly

Subjects

Subjects / Keywords:
Adaptive filters ( lcsh )
Adaptive filters ( fast )
Genre:
bibliography ( marcgt )
theses ( marcgt )
non-fiction ( marcgt )

Notes

Bibliography:
Includes bibliographical references (leaves 110-112).
General Note:
Submitted in partial fulfillment of the requirements for the degree, Master of Science, Electrical Engineering.
General Note:
Department of Electrical Engineering
Statement of Responsibility:
by Joseph A. Sanchez.

Record Information

Source Institution:
University of Colorado Denver
Holding Location:
Auraria Library
Rights Management:
All applicable rights reserved by the source institution and holding location.
Resource Identifier:
35326452 ( OCLC )
ocm35326452
Classification:
LD1190.E54 1995m .S36 ( lcc )

Full Text
ADAPTIVE FILTERING FOR SPECKLE NOISE CANCELLATION
by
Joseph A. Sanchez
B.S. Colorado School of Mines, 1989
A thesis submitted to the
University of Colorado at Denver
in partial fulfillment
of the requirements for the degree of
Master of Science
Electrical Engineering
1995


This thesis for the Master of Science
degree by
Joseph A. Sanchez
has been approved for the
Department of
Electrical Engineering
by
Miloje Radenkovic
Date


Sanchez, Joseph A. (M.S., Electrical Engineering)
Adaptive Filtering for Speckle Noise Cancellation
Thesis directed by Professor Tamal Bose
ABSTRACT
Adaptive Filtering in signal processing has become more useful
through the use of fast computer systems. Algorithms are becoming more
complex and computationally intensive. Since computers are capable of
large data storage with significant speed, it is feasible to apply adaptive
methods to improve quality of data that may contain noisy effects because of
the equipment or the nature of the system. One particular usage would be
Synthetic Aperture Radar (SAR). SAR images are used in a variety of ways
involving many different areas of study. Some of the areas are weather
analysis and prediction, planetary mapping, geology, and military
intelligence. Although collecting SAR data is very effective in covering large
land masses, it contains problems that are inherent to the way the data is
processed. SAR data contains speckle noise that often corrupts the images.
The speckle noise is introduced into the image during the processing
of the return radar signal. The speckle noise has been explained to be
multiplicative in nature because of the phasing errors introduced when
processing the return radar signal. Since the nature of the speckle noise is
multiplicative, it is somewhat difficult to apply linear adaptive filtering
schemes to remove the noise. Most adaptive filtering does not account for
the multiplicative nature of noise. The question is how effective the existing
adaptive filtering methods are at removing speckle noise from SAR data.
This thesis investigates the use of current parameter estimation and
adaptive algorithms for speckle noise reduction. The Minimum Mean
Squared Error (MMSE) estimation algorithm and the Extended Recursive
Least Squares (ERLS) adaptive algorithm are investigated for noise
removal. The third method investigated is a statistical algorithm termed the
Sigma Filter. These methods are studied for the removal of additive noise,
multiplicative noise, and SAR characteristic noise.
This abstract accurately represents the content of the candidate's thesis. I
recommend its publication.
Signed
Tamal Bose


Table of Contents
Chapter Page
1. INTRODUCTION..................................................1
1.1 THESIS OUTLINE...............................................9
1.2 NOTATION AND TERMINOLOGY....................................11
2. MAXIMUM A POSTERIORI (MAP) AND MAXIMUM LIKELIHOOD (ML) ESTIMATION
METHODS.........................................................12
2.2 MINIMUM MEAN-SQUARED ERROR (MMSE) ESTIMATION................15
2.3 IMPLEMENTATION OF THE ADAPTIVE MMSE ALGORITHM...............17
3. RECURSIVE LEAST SQUARES ALGORITHM............................22
3.1 EXTENDED LEAST SQUARES ALGORITHM............................25
3.2 IMPLEMENTATION OF THE ERLS FOR IMAGE PROCESSING.............26
4. NOISE MODELS AND THE SIGMA FILTER............................30
4.1 MULTIPLICATIVE (SPECKLE) NOISE MODEL PRESENTED..............30
4.2 ADDITIVE NOISE MODEL PRESENTED..............................33
4.3 SIGMA FILTER DESCRIPTION AND DERIVATION.....................34
5. PRACTICAL EXAMPLES...........................................37
5.1 VARIATIONS WITH THE SIGMA FILTER............................37
5.2 VARIATIONS FOR THE MMSE FILTER..............................38
5.3 VARIATIONS FOR THE ERLS ALGORITHM...........................39
5.4 THE RESULTS COMPARED........................................40
5.5 RESULTS WITH ADDITIVE NOISE.................................42
5.6 RESULTS WITH MULTIPLICATIVE NOISE...........................57
5.7 RESULTS WITH ACTUAL SAR.....................................67
6. CONCLUSIONS..................................................78


Appendix
A. MATLAB PROGRAMS..............................................80
References......................................................110


Figures
Page
FIGURE 1.1 MOTION OF ANTENNA..........................................3
FIGURE 1.2 TOTAL VECTOR AND INDIVIDUAL COMPONENTS.....................6
FIGURE 1.3 SAR SYSTEM AND SPECKLE FORMATION...........................6
FIGURE 1.4 SPECKLE PATTERN............................................7
FIGURE 2.1 FILTER A GIVEN SIGNAL WITH STATIONARY MULTIPLICATIVE NOISE.17
FIGURE 3.1 RLS OPTIMAL FILTERING. F(T) IS DESIGNED TO MINIMIZE THE MEAN-
SQUARED ERROR BETWEEN THE DESIRED SIGNAL D(T) AND OUTPUT Y(T).....22
FIGURE 3.2 ERLS BLOCK DIAGRAM.........................................27
FIGURE 3.3 MATRIX REPRESENTATION OF A LARGER IMAGE MATRIX.............28
FIGURE 4.1 OPTICAL COHERENT IMAGING SYSTEM............................31
FIGURE 4.2 BLOCK DIAGRAM OF SPECKLED IMAGE............................32
FIGURE 5.1 3X3 WINDOW SHOWING SHARP SPOT NOISE........................38
FIGURE 5.2 SIGMA FILTER, SNR = 5 DB, 5X5 WINDOW, SNRI = 6.26 DB,
SNRI MOD. = 3.99 DB...................................................43
FIGURE 5.3 SIGMA FILTER, SNR = 10 DB, 5X5 WINDOW, SNRI = 4.90 DB,
SNRI MOD. = 1.91 DB...................................................44
FIGURE 5.4 SIGMA FILTER, SNR = 5 DB, 7X7 WINDOW, SNRI = 6.54 DB,
SNRI MOD. = 3.91 DB...................................................45
FIGURE 5.5 SIGMA FILTER, SNR = 10 DB, 7X7 WINDOW, SNRI = 4.76 DB,
SNRI MOD. = 1.47 DB...................................................46
FIGURE 5.6 MMSE, SNR = 5 DB, 5X5 WINDOW, K1=5.3, SNRI = 5.54 DB.......47
FIGURE 5.7 MMSE, SNR = 10 DB, 5X5 WINDOW, K1=5.3, SNRI = 4.29 DB..48
FIGURE 5.8 MMSE, SNR = 5 DB, 5X5 WINDOW, K1=7.3, SNRI = 5.74 DB...49
FIGURE 5.9 MMSE, SNR = 10 DB, 5X5 WINDOW, K1=7.3, SNRI = 4.24 DB..50
FIGURE 5.10 MMSE, SNR = 5 DB, 7X7 WINDOW, K1=7.3, SNRI = 5.48 DB..51
FIGURE 5.11 MMSE, SNR = 10 DB, 5X5 WINDOW, K1=7.3, SNRI = 4.24 DB.....52


FIGURE 5.12 ERLS, SNR = 5 DB, λ = 0.97, SNRI = 5.53 DB..............53
FIGURE 5.13 ERLS, SNR = 10 DB, λ = 0.97, SNRI = 5.66 DB.............54
FIGURE 5.14 ERLS, SNR = 5 DB, λ = 1.0, SNRI = 6.55 DB...............55
FIGURE 5.15 ERLS, SNR = 10 DB, λ = 1.0, SNRI = 4.71 DB..............56
FIGURE 5.16 SIGMA FILTER, SNR = 10 DB, 5X5 WINDOW, SNRI = 4.7 DB,
SNRI MOD. = 0.22 DB.................................................59
FIGURE 5.17 SIGMA FILTER, SNR = 10 DB, 7X7 WINDOW, SNRI = 5.04 DB,
SNRI MOD. = 0.22 DB.................................................60
FIGURE 5.18 MMSE, SNR = 10 DB, 5X5 WINDOW, K1=5.3, SNRI = 4.95 DB...61
FIGURE 5.19 MMSE, SNR = 10 DB, 5X5 WINDOW, K1=7.3, SNRI = 4.81 DB...62
FIGURE 5.20 MMSE, SNR = 10 DB, 5X5 WINDOW, K1=17.3, SNRI = 4.91 DB..63
FIGURE 5.21 MMSE, SNR = 10 DB, 7X7 WINDOW, K1=7.3, SNRI = 4.24 DB...64
FIGURE 5.22 ERLS, SNR = 5 DB, λ = 0.97, SNRI = 2.32 DB..............65
FIGURE 5.23 ERLS, SNR = 5 DB, λ = 1.0, SNRI = 4.97 DB...............66
FIGURE 5.24 ACTUAL SAR IMAGE, SIGMA FILTER, 5X5 WINDOW, K=1.........70
FIGURE 5.25 ACTUAL SAR IMAGE, SIGMA FILTER, 7X7 WINDOW, K=1.........71
FIGURE 5.26 ACTUAL SAR IMAGE, MMSE, 5X5 WINDOW, K1=5.3..............72
FIGURE 5.27 ACTUAL SAR IMAGE, MMSE, 5X5 WINDOW, K1=7.3..............73
FIGURE 5.28 ACTUAL SAR IMAGE, MMSE, 5X5 WINDOW, K1=10.3.............74
FIGURE 5.29 ACTUAL SAR IMAGE, MMSE, 7X7 WINDOW, K1=7.3..............75
FIGURE 5.30 ACTUAL SAR IMAGE, ERLS, λ = 0.97........................76
FIGURE 5.31 ACTUAL SAR IMAGE, ERLS, λ = 1.0.........................77


Tables
Page
TABLE 5.1 LIST OF THE RESULTS....................................41
TABLE 5.2 SPECIFICATION OF SAR SYSTEM............................66


1. Introduction
An adaptive filter is one whose coefficients can be adjusted in
response to changing signal properties[1]. Adaptive filters are very useful in
analyzing systems whose parameters are unknown a priori. Since the
parameters of such a system are random prior to any measurements, it is
useful to have a filter that adjusts to the system as it changes in response to
the changing input data. Another important use of adaptive filters is with
systems whose properties change with time, a situation common in
industrial applications. Adaptive filtering is useful for adjusting to a
dynamically changing system to provide observable filtered data.
Adaptive filters may be developed through the evaluation or
minimization of a cost function. Most often the cost function is the Minimum
Mean Squared Error (MMSE) between the estimate and the desired
parameter. The adaptive filter can be viewed as a procedure that iteratively
adapts the coefficients of the filter toward the optimal MMSE solution and
tracks changes in the optimal solution[2]. The MMSE estimation method is a
random parameter estimator which uses a priori knowledge of the
parameter behavior to produce an estimate based on both a priori and a
posteriori information[3].
Two other estimation methods similar to the MMSE are the Maximum
A Posteriori (MAP) estimator and the Maximum Likelihood (ML). The MAP
uses the same information as the MMSE in its derivation but the cost
function finds the peak estimate and not the mean value. The Maximum
Likelihood (ML) function estimation method differs from the MAP and MMSE
by requiring no prior knowledge of parameter behavior. The ML estimator is
often more useful because of this characteristic. The MMSE method is the
most useful of the three methods in developing adaptive algorithms because
its cost function is a well behaved quadratic function with one minimum
value. The Least Mean Squares (LMS) and Recursive Least Squares (RLS)
adaptive derivations use the MMSE cost function.
The estimation methods mentioned above are most often derived
from the assumption that the noise varies randomly according to a Gaussian
distribution and is added to the parameter

x = θ + w,                                              (1.1)

where x is a single sample, θ is the single parameter, and w is the additive
noise with Gaussian characteristics. Also, the noise is not correlated to the
signal which allows for simplifications. These assumptions are carried
through the derivation of the adaptive algorithms such as the LMS and RLS.
Therefore the algorithms are strategic for systems in the presence of
Gaussian additive noise.
It is of interest to apply these methods not only to systems with
additive noise, but also to systems with multiplicative noise. In applying the
methods to the multiplicative noise, modifications must be made in the way
the algorithms are applied to make useful the derivation assumptions of the
algorithms. Most of the modifications are applying the filters to sections of a
pixeled image with multiplicative noise and assuming stationarity for both
the signal and noise within that section. Applying the filters section by
section allows us to assume that the methods remain consistent with their
derivations under additive Gaussian noise.
One industrial application is the reduction of speckle noise present in
Synthetic Aperture Radar (SAR). SAR data is used extensively in the
aerospace industry to observe planetary land-masses, weather phenomena,
and military intelligence. Its primary purpose is to form an image of very
large objects such as areas of terrain[4]. SAR imaging is done by using an
antenna with a dual function that directs radiated waveforms onto a specific
land-mass and then receives the returned signal scattered and reflected by
the land mass. As the vehicle with the antenna traverses, the motion is used
to synthesize a very long antenna whose length is along the flight path as
shown in figure 1.1 [5].
-----------------------Synthetic Aperture Length, L---------------------
Figure 1.1 Motion of Antenna


A pulse is transmitted at spaced-time intervals and return echoes pass
through the receiver and are recorded digitally. Doppler frequency
variations for the different objects on the ground provide a radar picture of
the objects. SAR processing then involves matching the Doppler frequency
variations by adjusting frequency variations in the returned echoes for each
reflected point on the ground. The receiver is used to convert the waveforms
received from a very high frequency band to a lower frequency band, more
suitable for viewing and analyzing. The radar's ability to distinguish, or
resolve, finely spaced or small objects is dependent on the length of the
waveform, termed the pulse[6]. The waveform is a function of the size of the
antenna and pulsed energy. The general shape of an antenna is
rectangular, which yields images in a Cartesian coordinate system and
makes them simpler to process. Circular and various other antenna shapes
are also used. The size and general shape of the antenna determine
the waveform and the area covered during a scan. The smaller the antenna,
the more detailed the SAR image, but more recording medium is required
and less terrain is covered.
Some advantages to using SAR data are its ability to collect
information over remote observation locations with very good resolution at
varying distances. Also, large land-masses can be covered very quickly
because of the rapid speed of SAR vehicles, which are satellites or other
airborne vehicles. SAR systems are not affected by changes in weather or
lighting as is a photographic system. SAR is an imaging system that is very
useful in different areas of study with advantages that make it important
enough to understand the science involved with the system.
The disadvantages of using SAR are the expense associated with the
collection and processing of the data. Highly trained personnel and image
evaluators are needed in the collection and analysis of SAR data. Also,
expensive satellites and computer systems are used in the collection of SAR
data. The nature of SAR and the methods of processing always create an
image that contains speckle. Speckle degrades the image quality, making it
difficult to distinguish the finer details and edges of the studied land mass.
This creates the need to study and apply filtering methods to make the data
more useful.
Speckle noise is a prevalent characteristic with coherent SAR
systems. Many of the surfaces or objects are very rough on a scale smaller
or equivalent to the optical wavelength. This phenomenon creates fully
developed speckle. Coherent light reflected by rough surfaces and objects
creates a wave consisting of contributions from each independent scattering
element. This resultant reflected wave is a summation of all the individual
scattered components with differing delays ranging from a few to many
wavelengths depending on the shape of the surface. Figure 1.2 shows a
vector sum of many individual dephased coherent wavelets.


Figure 1.2 Total Vector and Individual Components
Changing the point of observation shown in figure 1.3 would produce a
slightly different total vector because the path length and phasing of the
individual vectors have changed. The total vector may not change as
drastically, but the individual components of the total may have drastic
changes and may be described as random behavior. If the total vector
represents the intensity, we may have the speckle pattern shown in figure
1.4. The picture consists of a number of bright spots and dark spots. The
dark spots are points where interference of the dephased and coherent
wavelets is destructive and the bright spots are where the interference is
constructive.
Figure 1.4 Speckle Pattern


Speckle noise exhibits random behavior that may be explained using
a statistical model. Consider a detector placed at point (x,y,z) in figure 1.3.
Assuming that the field incident at (x,y,z) is perfectly polarized and
monochromatic, we may represent the field as a complex-valued analytical
signal shown below:

u(x,y,z;t) = A(x,y,z) exp(i2πνt),                       (1.2)

where ν is the optical frequency and A(x,y,z) is a complex phasor amplitude

A(x,y,z) = |A(x,y,z)| exp[iθ(x,y,z)].                   (1.3)
The irradiance at (x,y,z) is given by

I(x,y,z) = lim_{T→∞} (1/T) ∫_{−T/2}^{T/2} |u(x,y,z;t)|² dt = |A(x,y,z)|².   (1.4)
The complex amplitude of the field at (x,y,z) may be described by

A(x,y,z) = Σ_{k=1}^{N} |a_k| exp(iφ_k),                 (1.5)

where |a_k| and φ_k represent the amplitude and phase of the contribution
from the kth scattering area and N is the total number of contributions. Again,
figure 1.2 shows the individual vectors and the total sum.
Two assumptions are made concerning a_k and φ_k. First, the amplitude
a_k and the phase φ_k of the kth vector are statistically independent of each
other and of the other vector elements that are summed to obtain a total.
Second, the phases φ_k of the individual vector components are equally likely
to lie anywhere in the interval (−π,π). If N (the number of individual vector
components) is large, we conclude that the real and imaginary parts of the
field at (x,y,z) are independent, zero-mean, identically distributed Gaussian
random variables
and the irradiance I(x,y,z) obeys negative exponential statistics. The
probability density function may be expressed as

p(I) = (1/Ī) exp(−I/Ī)  for I ≥ 0,   and   p(I) = 0 otherwise,   (1.6)

where Ī is the mean or expected irradiance. It is interesting to note that the
standard deviation is equal to the mean for the negative exponential
probability density function. The contrast of the speckle image may be
defined by

C = σ_I / Ī,                                            (1.7)

which is always unity.
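This unit-contrast property is easy to verify numerically. The following short
MATLAB sketch (an illustration in the spirit of the Appendix A programs; the
scatterer count N and the trial count M are arbitrary choices) sums random
phasors per (1.5) and estimates the contrast (1.7):

% Monte Carlo check of fully developed speckle: contrast C -> 1
N = 100;                          % scatterers per resolution cell (assumed)
M = 100000;                       % independent speckle realizations (assumed)
phi = 2*pi*(rand(N,M) - 0.5);     % phases uniform on (-pi, pi)
a = rand(N,M);                    % independent scatterer amplitudes
A = sum(a .* exp(1i*phi), 1);     % resultant complex phasor, eq. (1.5)
I = abs(A).^2;                    % irradiance, eq. (1.4)
C = std(I) / mean(I)              % approaches unity, eq. (1.7)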
Since the characteristics of speckle vary across a land-mass, the
noise is not easily separated from the image through common linear filtering
techniques; spatially varying (adaptive) methods must be examined. Algorithms are
often developed uniquely for each situation depending on the characteristics
of the multiplicative noise which vary from one SAR image to another or
throughout different areas within the image. This thesis will study
conventional estimation and adaptive methods to understand how effective
the methods are with actual SAR data.
1.1 Thesis Outline
Chapter 2 will present three well known estimation methods, the
Maximum A Posteriori (MAP), the Maximum Likelihood (ML) and the
Minimum Mean-Squared Error (MMSE) method. A derivation of the
methods will be presented, providing insight into how the algorithms are
developed. Also discussed will be specific applications for the three
estimation methods and differences that exist between the methods. Along
with the derivations will be a derivation of the MMSE estimate that was
simulated to observe the effectiveness of the algorithm with different types of
noise applied.
In chapter 3 the Recursive Least Squares (RLS) and the Extended
Recursive Least Squares (ERLS) adaptive algorithms will be presented.
Again, an in-depth derivation will be given discussing aspects of the
algorithms. A comparison of the two algorithms will be made addressing
some applications for each adaptive algorithm. The actual ERLS simulation
will be derived and explained.
Chapter 4 will discuss the additive and multiplicative noise models
tested with the simulations. A speckle noise model will be derived. This is
used to create speckling in clean images. Having the original clean image
without speckle allows us to determine how effective the above mentioned
estimation methods and adaptive algorithms are. Also, the Sigma Filter
used in the simulations will be explained and derived.
Chapter 5 presents actual computer simulations of the MMSE
estimation method, the RLS and ERLS algorithms, and a Sigma statistical
filter. A comparison
is made among the computer simulations to determine effectiveness with
additive noise, modeled multiplicative noise, and actual SAR speckle noise.
Conclusions are made between the theory presented in chapters 2, 3
and 4 and the simulations that were made in chapter 5. The conclusion is
presented in chapter 6.


1.2 Notation and Terminology
Abbreviations
CCRS Canada Centre for Remote Sensing
C/X C and X-band frequency
HH horizontal horizontal
ERLS extended recursive least squares
FIR finite impulse response
MAP maximum a posteriori
MMSE minimum mean squared error
ML maximum likelihood
MS mean-square
RLS recursive least squares
SAR synthetic aperture radar
Notations
E[·] expectation or mean value
x vector of elements
θ parameter vector
θ̂ estimated parameter vector


2. Maximum A Posteriori (MAP) and Maximum Likelihood (ML)
Estimation Methods
In this section a tutorial is presented on the MAP and ML estimation
methods based on the presentation given by P.M. Clark [7]. Let us examine
the MAP and ML estimation methods by estimating a single parameter given
one or more measurements. First, consider the MAP estimation of a single
random parameter. Suppose we have M samples from a discrete random
sequence x(n) for n = 0, 1, ..., M−1. Let us examine the random parameter θ,
which depends on x(n) in some manner. Now, let us assume that the
parameter θ is characterized by a probability density function f(θ), which is
an a priori density function. We may think of θ as being a constant signal
part of the noisy measurements x(n), or it may be a parameter of a signal
generating model. We may represent the observed data by the following
x = [x(0), x(1), ..., x(M−1)]ᵀ.                         (2.1)
Consider the probability density function f(θ|x), which is the density
function of θ conditioned by the observed values of x(n) and is the
a posteriori density for θ. The MAP estimate of θ is the peak value of this
conditional density function. The maximum value will correspond to the most
probable value of θ given the observed data. This maximum may be found
by taking the derivative of f(θ|x) with respect to θ, equating to zero, and then
solving for θ̂_MAP:

θ̂_MAP:  ∂/∂θ { f(θ|x) } = 0.                            (2.2)
If we assume the probability density function to be unimodal, we will obtain
the true maximum by solving (2.2). When solving the actual function, one
should verify that the value is the global maximum of the known function.
Since we have a random model for θ, we may classify the estimation
method as Bayesian. This random model for θ is often characterized by an
exponential distribution function, which lends itself to considering the natural
logarithm of that function. The maximum value of θ found with f(θ|x) is also
found with ln[f(θ|x)] because the logarithm is a monotonic function:

θ̂_MAP:  ∂/∂θ { ln[f(θ|x)] } = 0.                        (2.3)
In determining f(θ|x) or ln[f(θ|x)], one may apply Bayes' Rule[8] to f(θ|x)
to obtain another form which may be easier to work with:

f(θ|x) = f(x|θ) f(θ) / f(x),                            (2.4)

where f(θ) is the a priori density function of the parameter θ. Now taking the
natural logarithm of expression (2.4) we obtain

ln[f(θ|x)] = ln[f(x|θ)] + ln[f(θ)] − ln[f(x)],          (2.5)
and taking the partial derivative with respect to θ we get

∂/∂θ { ln[f(x|θ)] + ln[f(θ)] − ln[f(x)] } = 0.          (2.6)

Since f(x) is not dependent on the parameter θ, we may simplify the
expression as

θ̂_MAP:  ∂/∂θ { ln[f(x|θ)] + ln[f(θ)] } = 0.             (2.7)
The θ̂_MAP estimate may be obtained from the solutions of equations (2.2),
(2.3), or (2.7).
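As a short worked example (added here for illustration; it is not part of the
thesis derivation), suppose x(n) = θ + w(n) with w(n) zero-mean Gaussian
noise of variance σ², and assume the Gaussian prior θ ~ N(μ₀, σ₀²).
Substituting into (2.7),

∂/∂θ { −Σ_{n=0}^{M−1} (x(n) − θ)² / (2σ²) − (θ − μ₀)² / (2σ₀²) } = 0,

which yields

θ̂_MAP = ( σ₀² Σ_{n=0}^{M−1} x(n) + σ² μ₀ ) / ( Mσ₀² + σ² ),

a weighted combination of the data and the prior mean; as σ₀² → ∞ (a
vague prior) the estimate approaches the sample mean.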
Similarly, the ML estimation algorithm is derived as follows. Given M
samples from a discrete random sequence x(n), where n = 0, 1, ..., M−1,
assume θ is an unknown constant. The Maximum Likelihood (ML) estimate
is obtained by selecting the value θ̂_ML which maximizes the density of the
observed data x with respect to the parameter θ:

θ̂_ML = argmax_θ { L_θ = f(x; θ) }.                      (2.8)

L_θ is the density of the observations x. The expression f(x;θ) describes the
dependence of the density on θ:

L_θ = f(x; θ).                                          (2.9)

Equation (2.9) is the likelihood function of θ. The inequality f(x;θ₁) > f(x;θ₂)
shows that θ₁ is a more plausible value of θ than θ₂. Since the ML estimate
treats θ as an unknown deterministic constant, we may categorize the
estimation method as a classical estimation method. This is not an absolute
classification of the ML estimator. It is also possible to derive the ML
estimator similar to the MAP estimator by describing θ with a random (a
priori) probability density function. Taking the logarithm of the likelihood
function,

ln(L_θ) = ln[f(x; θ)].                                  (2.10)

Again, assuming f(x;θ) is unimodal, the maximum is easily found from

θ̂_ML:  ∂/∂θ { ln L_θ } = 0.                             (2.11)
Observe that L_θ and ln(L_θ) are functionally dependent on the random data
and are also random variables. The solutions to equations (2.8) or (2.11)
give the Maximum Likelihood estimate.
Considering the differences between the two estimators, one can see
that the MAP estimate is derived using a combination of a priori and a
posteriori knowledge. We assume that something is known about 0 which is
shown by the probability density function f(6). The ML estimator does not
require a priori knowledge of the parameter 0. Both procedures do require
information about the joint a posteriori density of the observations. If the
density function f(0) is not accurate, the ML estimator will prove more useful
then the MAP. If the density function is modeled accurately then the MAP
will provide better accuracy.
2.2 Minimum Mean-Squared Error (MMSE) Estimation
Consider the random parameter θ as a function of an observed
random variable x. Let us construct an estimate θ̂_MS = h(x) which minimizes
the mean-square error between the estimate and the actual or "desired"
parameter θ. We may begin by choosing θ̂_MS to minimize the following
quadratic cost function J:

J = E{[θ − h(x)]²}.                                     (2.12)
The cost function is evaluated using the expected value:

J = ∫_{−∞}^{∞} ∫_{−∞}^{∞} [θ − h(x)]² f(x,θ) dθ dx,     (2.13)

where f(x,θ) is the joint density of x and θ, which may also be expressed as

f(x,θ) = f(θ|x) f(x).                                   (2.14)

Substituting (2.14) into (2.13) we obtain

J = ∫_{−∞}^{∞} f(x) [ ∫_{−∞}^{∞} [θ − h(x)]² f(θ|x) dθ ] dx.   (2.15)
Simplification of the above integral is possible since both the inner and outer
integrands are positive everywhere. Also, h(x) does not affect the outer
integral, and therefore we may simply minimize just the inner integral

∫_{−∞}^{∞} [θ − h(x)]² f(θ|x) dθ.                       (2.16)
Minimization is achieved by differentiating with respect to h(x) and equating
to zero to obtain

−2 ∫_{−∞}^{∞} [θ − h(x)] f(θ|x) dθ = 0,                 (2.17)

h(x) ∫_{−∞}^{∞} f(θ|x) dθ = ∫_{−∞}^{∞} θ f(θ|x) dθ.     (2.18)

Since the density function integrates to one,

∫_{−∞}^{∞} f(θ|x) dθ = 1,                               (2.19)

from (2.19) and (2.18) we obtain the minimum mean-squared error estimate

θ̂_MS = h(x) = E{θ|x}.                                   (2.20)
The MMSE estimator is shown above in equation (2.20), which is obtained
by choosing h(x) as the conditional expectation of θ given the observed
data. A very important observation is that, unlike the MAP and ML methods,
the MMSE requires the conditional mean value of the a posteriori density but
does not rely on specific knowledge of the density function itself. Also, note
that θ̂_MS = E{θ|x} is generally a nonlinear function of the data except when
the joint density is Gaussian. In this situation θ̂_MS is a linear function of x.
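As a brief illustration (an added example, not in the original text): if θ and x
are jointly Gaussian with means μ_θ and μ_x, variances σ_θ² and σ_x², and
covariance σ_θx, then

θ̂_MS = E{θ|x} = μ_θ + (σ_θx / σ_x²)(x − μ_x),

the familiar linear (Wiener) form of the MMSE estimator.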
The MMSE and MAP methods can both be classified as Bayesian
since both require knowledge of the a priori density function f(θ). Both
methods produce estimates of the random parameter θ that are a function
of the a posteriori density f(θ|x). The difference between the two methods
is that the MAP method finds the peak of f(θ|x) while the MMSE method
finds the mean. If f(θ|x) is a symmetric function, then the two estimators will
produce the same results.
2.3 Implementation of the Adaptive MMSE Algorithm
A MMSE adaptive algorithm was implemented to remove speckle
noise from images. The algorithm models the noise as multiplicative. A
MMSE derivation of the algorithm is shown below.
Figure 2.1 Filter a given signal with stationary multiplicative noise.
From the above figure we may model the recorded image as

I(x,y) = [r(x,y) · n(x,y)] ∗ h(x,y),                    (2.21)

where h(x,y) is the coherent point spread function, r(x,y) is the desired
image, n(x,y) is the multiplicative noise, and ∗ denotes two-dimensional
convolution.
The adaptive MMSE algorithm used and simulated was developed by
Frost, Stiles, Shanmugan, and Holtzman [9]. The algorithm models the
image I(x,y) as being corrupted by multiplicative-convolved noise. The MMSE
filter is predicated on the assumption that the signal and noise are both
stationary. The assumption is valid when small sections of the image are
observed, but in a global sense stationarity is not valid. The MMSE filter that
provides an estimate for the desired image r(x,y) is obtained by minimizing
the following cost function (minimum mean square error):

J = E[(r(t) − I(t) ∗ m(t))²],                           (2.22)
where t = (x,y) is the spatial coordinate and m(t) is the impulse response of
the filter. The MMSE solution leads to the transfer function [10]

M(f) = { n̄ S_r(f) H*(f) / [S_r(f) ⋆ S_n(f)],   f ≠ 0,
       { 1/n̄,                                  f = 0,   (2.23)

where n̄ = E{n(t)}, f = (f_x, f_y) is the spatial frequency coordinate, and ⋆
denotes convolution. S_r(f) and S_n(f) are the power spectral densities of the
surface reflectivity and the noise process, respectively. The filter derived in
(2.23) is valid for processing data only within homogeneous areas, which
makes r(t) a stationary random process. Inspecting (2.23), one finds the
transfer function H*(f) is not data dependent and may be assumed to remain
constant over some finite bandwidth. Rewriting (2.23) to obtain only the data
dependent portion of the transfer function, we obtain

M′(f) = n̄ S_r(f) / [S_r(f) ⋆ S_n(f)].                   (2.24)
The model used for r(t) is an autoregressive process. The autocorrelation
function R_r(k) and the two-sided power spectral density S_r(f) are
represented below:

R_r(k) = σ_r² e^{−a|k|} + r̄²,                           (2.25)

S_r(f) = 2σ_r² a / (a² + 4π²f²) + r̄² δ(f),              (2.26)

where the variables σ_r², r̄², and "a" have different values for different target
categories (ex: mountains, ocean, or corn field)[11]. Now the model for the
multiplicative noise, given in the same fashion, is shown below as

R_n(k) = σ_n² δ(k) + n̄²,                                (2.27)

S_n(f) = σ_n² + n̄² δ(f),                                (2.28)

where the parameters σ_n² and n̄² depend on the sensors but not on the
scene. This is an assumption used during the derivation, but in reality σ_n²
and n̄² may depend somewhat on the terrain imaged. The more complex the
terrain, the more backscatter will be experienced by the received signal and
the more speckle present in the image. Substituting equations (2.26) and
(2.28) into (2.24), one obtains the impulse response filter [12]

m′(t) = K₁ α e^{−α|t|},                                  (2.29)

α = √( 2a (n̄/σ_n)² [ (σ_r/r̄)² / (1 + (σ_r/r̄)²) ] + a² ),   (2.30)

where K₁ is a normalizing constant.


The MMSE filter found in (2.29) was derived assuming that both r(t)
and n(t) moments are shift-invariant or wide-sense stationary. Given the
multiplicative nature of the noise, we cannot assume stationarity for the
image r(t) unless we obtain homogeneous areas within the image. If we
estimate α from observed data within a local area centered around the
position (x,y), the MMSE filter will adapt as the local area is varied
throughout the entire image. The decay constant α is a function of σ_r², r̄²,
and "a". As different sections within an image are examined, σ_r² and r̄² will
experience change, but "a" will vary only slightly. For our purpose we will
keep "a" constant while calculating σ_r² and r̄². σ_r² and r̄² are the variance
and squared mean of the desired image, which we do not have readily
available. Therefore, it is possible to use σ_I² and Ī² calculated from the
noisy image, which we do have. These parameters can be used to properly
adapt the filter to local changes within the image. We may then simplify
(2.30) by

α = K₂ σ_I² / Ī²,                                       (2.31)

where K₂ is a constant.
The algorithm implementation is done by obtaining the parameters Ī²
and σ_I² using data from local sections of the image (say a 7X7 window)
centered at (x₀,y₀). α is then obtained across the image given the statistics
for the local sections. Once α is obtained for the image, the adaptive filter is
easily calculated using (2.29). The adaptive filter is then convolved per
localized section for the individual pixel to perform a weighted average of
the data. The simulation results presented were obtained using a 5X5 and a
7X7 window to obtain the local neighborhood statistics and to perform the
filtering. The window used for the local neighborhood statistics and the
filtering could be any size chosen, but it must be of odd size to truly have a
center pixel. All the simulation results are presented in chapter 5, where
they are compared to the other algorithms.
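A compact MATLAB sketch of the procedure is given below. This is a
minimal illustration, not the thesis program from Appendix A: the window
size W and the constant K2 are inputs, border pixels are replicate-padded,
and the normalizing constant K₁ of (2.29) is absorbed by explicitly
normalizing the kernel weights.

function out = mmse_filter(I, W, K2)
% Adaptive MMSE (Frost-type) filter sketch: local statistics in a WxW
% window set the decay alpha via eq. (2.31); eq. (2.29) gives the kernel.
I = double(I);
box = ones(W) / W^2;
Ibar = conv2(I, box, 'same');                 % local mean
varI = conv2(I.^2, box, 'same') - Ibar.^2;    % local variance
alpha = K2 * varI ./ max(Ibar.^2, eps);       % eq. (2.31)
half = (W - 1) / 2;
[dx, dy] = meshgrid(-half:half);
d = sqrt(dx.^2 + dy.^2);                      % distance from window center
ri = [ones(1,half) 1:size(I,1) size(I,1)*ones(1,half)];
ci = [ones(1,half) 1:size(I,2) size(I,2)*ones(1,half)];
Ip = I(ri, ci);                               % replicate-padded image
out = zeros(size(I));
for r = 1:size(I,1)
  for c = 1:size(I,2)
    a = max(alpha(r,c), eps);
    m = a * exp(-a * d);                      % eq. (2.29), K1 absorbed
    m = m / sum(m(:));                        % normalized weights
    out(r,c) = sum(sum(m .* Ip(r:r+W-1, c:c+W-1)));
  end
end

With this normalization the kernel approaches a uniform average where the
local coefficient of variation is small (homogeneous areas) and becomes
sharply peaked where it is large (edges), which is the adaptive behavior
described above.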


3. Recursive Least Squares Algorithm
The RLS algorithm uses the mean-squared error cost function to find
the optimum filter coefficients. The RLS algorithm uses all the data available
up to the point of the estimate to solve for the optimum filter. The ERLS
algorithm differs only slightly from the RLS by introducing a weighting
variable placed on each term of the cost function, which allows one to
"weight" the most recent data more heavily. The ERLS algorithm proves to
be useful with nonstationary
data.
The RLS and ERLS adaptive algorithms are presented below with the
derivation as in [13]. Consider the one-dimensional system in figure 3.1.

Figure 3.1 RLS optimal filtering. f(t) is designed to minimize the mean-
squared error between the desired signal d(t) and output y(t).

In the figure, d(t) represents a reference signal with additive noise, and the
parameters of the system are represented by the parameter vector

θᵀ = [f_t(0) f_t(1) ... f_t(L)].                         (3.1)
The measurement or observation vector is given by

φᵀ(t) = [x(t) x(t−1) ... x(t−L)].                        (3.2)

The output of the system may be defined as

y(t) = θᵀ φ(t).                                          (3.3)
Now the error of the system may be described as follows:

ε = Σ_{t=1}^{N} (d(t) − y(t)).                           (3.4)

Let us consider a quadratic cost function and minimize it to obtain the
optimal parameter estimate θ̂(t):

l(θ) = Σ_{t=1}^{N} [d(t) − θᵀφ(t)]².                     (3.5)

Taking the derivative of (3.5) and setting it equal to zero we obtain

dl/dθ = Σ_{t=1}^{N} 2(−φ(t))(d(t) − θᵀφ(t)) = 0.         (3.6)
Now solving for θ̂ gives

θ̂ = [ Σ_{t=1}^{N} φ(t)φᵀ(t) ]⁻¹ Σ_{t=1}^{N} φ(t)d(t).    (3.7)

Equation (3.7) is the non-recursive form of the RLS. We shall continue and
derive the recursive form of the RLS. First we will define the matrix

P(t)⁻¹ = Σ_{k=1}^{t} φ(k)φᵀ(k),                          (3.8)

and the matrix P(t−1)⁻¹ is given by

P(t−1)⁻¹ = Σ_{k=1}^{t−1} φ(k)φᵀ(k).                      (3.9)

From equations (3.8) and (3.9) one may write

P(t)⁻¹ = P(t−1)⁻¹ + φ(t)φᵀ(t).                           (3.10)

Now observe that equations (3.7) and (3.8) give

θ̂(t) = P(t) Σ_{k=1}^{t} φ(k)d(k),                        (3.11)

which is the least-squares estimate obtained at time t. The estimate
obtained at time (t−1) is given below:

θ̂(t−1) = P(t−1) Σ_{k=1}^{t−1} φ(k)d(k).                  (3.12)

Rearranging (3.11) and (3.12) one obtains

P(t)⁻¹ θ̂(t) = Σ_{k=1}^{t} φ(k)d(k),                      (3.13)

P(t−1)⁻¹ θ̂(t−1) = Σ_{k=1}^{t−1} φ(k)d(k).                (3.14)

Now subtracting (3.14) from (3.13) we obtain

P(t)⁻¹ θ̂(t) − P(t−1)⁻¹ θ̂(t−1) = φ(t)d(t),                (3.15)

or

P(t)⁻¹ θ̂(t) = P(t−1)⁻¹ θ̂(t−1) + φ(t)d(t).                (3.16)

Continuing from equation (3.16), but adding and then subtracting
φ(t)φᵀ(t)θ̂(t−1) on the right hand side of the equation, we get

P(t)⁻¹ θ̂(t) = (P(t−1)⁻¹ + φ(t)φᵀ(t)) θ̂(t−1) + φ(t)[d(t) − φᵀ(t)θ̂(t−1)].   (3.17)

Now using (3.10) we arrive at

P(t)⁻¹ θ̂(t) = P(t)⁻¹ θ̂(t−1) + φ(t)[d(t) − φᵀ(t)θ̂(t−1)],   (3.18)

which we multiply by P(t) to get

θ̂(t) = θ̂(t−1) + P(t)φ(t)[d(t) − φᵀ(t)θ̂(t−1)],            (3.19)

and now we have the recursive form of the least-squares parameter
estimator. Consider that in (3.19) and (3.8) the matrix P(t)⁻¹ must be inverted
for each calculation. We can avoid this by applying the Matrix Inversion
Lemma to find a recursive relation for P(t).

Lemma 3.1.1 [14]: Let F be a regular matrix of dimension (n×n) and φ a
vector of dimension n; then

(F + φφᵀ)⁻¹ = F⁻¹ − (F⁻¹φφᵀF⁻¹) / (1 + φᵀF⁻¹φ).          (3.20)

From equations (3.10) and (3.20) one may write

P(t) = P(t−1) − [P(t−1)φ(t)φᵀ(t)P(t−1)] / [1 + φᵀ(t)P(t−1)φ(t)].   (3.21)

The RLS algorithm is summarized as:

θ̂(t) = θ̂(t−1) + P(t)φ(t)[d(t) − φᵀ(t)θ̂(t−1)]
θ̂(0) => initial conditions, usually set to zero
P(t) = P(t−1) − [P(t−1)φ(t)φᵀ(t)P(t−1)] / [1 + φᵀ(t)P(t−1)φ(t)]
P(0) = P₀I, where I is the identity matrix and P₀ > 0
3.1 Extended Least Squares Algorithm
The Extended Recursive Least Squares (ERLS) algorithm is
very similar to the RLS algorithm. The ERLS method may be developed by
minimizing the following cost function:

l(θ) = Σ_{t=1}^{N} λ^{N−t} [d(t) − θᵀφ(t)]².             (3.22)
From this point on, the derivation of the ERLS follows the same steps as the
RLS and will not be presented. The ERLS algorithm is summarized as [15]:

θ̂(t) = θ̂(t−1) + P(t)φ(t)[d(t) − φᵀ(t)θ̂(t−1)]
θ̂(0) => initial conditions, usually set to zero
P(t) = (1/λ)·( P(t−1) − [P(t−1)φ(t)φᵀ(t)P(t−1)] / [λ + φᵀ(t)P(t−1)φ(t)] )
P(0) = P₀I, where I is the identity matrix and P₀ > 0

λ is a weighting factor, sometimes termed the forgetting factor, with values
between 0 and 1. If we use λ = 1, then the ERLS looks exactly like the RLS.
It is suggested to use values between 0.95 and 0.9995 for λ. Values lower
than 0.95 may create instabilities with the algorithm.
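A minimal MATLAB sketch of one ERLS iteration follows (an illustration, not
the Appendix A code); setting lambda = 1 recovers the RLS summarized in
the previous section.

function [theta, P] = erls_update(theta, P, phi, d, lambda)
% One ERLS iteration: phi is the measurement vector, d the desired
% sample, and lambda the forgetting factor (lambda = 1 gives the RLS).
e = d - phi' * theta;                 % a priori prediction error
Pphi = P * phi;
g = Pphi / (lambda + phi' * Pphi);    % gain vector
theta = theta + g * e;                % parameter update
P = (P - g * Pphi') / lambda;         % P(t) update via the inversion lemma

Initialization follows the summary above: theta = zeros(L,1) and
P = P0*eye(L) with P0 > 0.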
3.2 Implementation of the ERLS for Image Processing
An Extended Recursive Least Squares algorithm was implemented
for use with two-dimensional images for the reduction of noise. The ERLS
implementation uses a Finite Impulse Response (FIR) filter model. The FIR
model implementation provides a stable algorithm. A larger number of
coefficients and more computer calculations were needed for the FIR
model, but fast computer workstations provided adequate speed to run the
simulation. The ERLS algorithm gave the quickest results among
the three algorithms simulated.
Figure 3.2 is a block diagram showing the nature of the ERLS
implementation.


Figure 3.2 ERLS Block Diagram
The ERLS algorithm used in the simulation has the same form as
described above, but was implemented for a two-dimensional system. First,
the parameter vector was chosen to be of length (L = 15) with the
parameters shown below, where t = (x,y):

θᵀ = [f_t(0) f_t(1) ... f_t(L)].                         (3.23)

This defines the P(t) matrix to be an identity matrix of size 15X15. The initial
gain P₀ for the simulations was set to 20.
Although the matrix we are working with is two-dimensional, the
operation of the ERLS is similar to its operation with one-dimensional data.
The two-dimensional sequence may be considered a very long one-
dimensional sequence with a particular path used to traverse the sequence.
Let us consider figure 3.3 which is part of a large matrix image.
(grid of sample pixel values, rows i = 1..8 by columns j = 1..7)
Figure 3.3 Matrix representation of a larger image matrix
The pixel of interest may be i = 4 and j = 4, whose amplitude is 1. The
measurement vector is formed by taking values in a 4X4 block with the pixel
of interest at the lower right of the block. Using pixel i = 4 and j = 4 we may
form the measurement vector as

φᵀ = [9 7 9 8 7 1 7 4 2 1 2 4 1 5 1].                    (3.24)

Once we have the measurement vector, we may calculate the algorithm
gain P(t) and compute the filter parameters using the equation below:

θ̂(t) = θ̂(t−1) + P(t)φ(t)[d(t) − φᵀ(t)θ̂(t−1)].           (3.25)

We then compute the next P(t) from the previous P(t−1) using

P(t) = (1/λ)·( P(t−1) − [P(t−1)φ(t)φᵀ(t)P(t−1)] / [λ + φᵀ(t)P(t−1)φ(t)] ).   (3.26)


The computations are done recursively as the 4X4 block slides along the
entire image. Note that the data near the right and top edges will not be
useful, since it is used as initial values to the algorithm.
Simulations were made with the ERLS algorithm for various λ
values. The performance of the RLS was observed by making λ = 1. The
simulations were compared to the other algorithms discussed in chapters 2
and 4. The comparisons were made for images with additive noise,
multiplicative noise, and actual SAR data. The results are discussed in
chapter 5.
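A hedged MATLAB sketch of this two-dimensional scan is shown below,
reusing the erls_update function sketched in section 3.1 (the traversal order
and the handling of the unfiltered borders are illustrative assumptions):

% Slide a 4X4 block over the noisy image z; the 15 neighbors form the
% measurement vector and the lower-right pixel z(i,j) is the desired d.
lambda = 0.97;                    % forgetting factor (0.97 or 1.0 used here)
theta = zeros(15, 1);
P = 20 * eye(15);                 % initial gain P0 = 20, as in the simulations
zf = z;                           % filtered image (borders left unfiltered)
[nr, nc] = size(z);
for i = 4:nr
  for j = 4:nc
    blk = z(i-3:i, j-3:j);        % 4X4 block, pixel of interest at lower right
    phi = blk(1:15).';            % the 15 neighbors (blk(16) is z(i,j))
    d = blk(16);
    [theta, P] = erls_update(theta, P, phi, d, lambda);
    zf(i,j) = theta.' * phi;      % filtered estimate of the pixel
  end
end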


4. Noise Models and the Sigma Filter
It was necessary to learn how to generate noise models to determine
how useful the simulated algorithms are in reducing noise present in
images. If we only tested the algorithms on SAR images, the only measure
of the effectiveness would be the observed filtered image since speckle
noise is always present in SAR. By obtaining multiplicative and additive
noise models, one can determine the signal-to-noise ratio improvement
(SNRI), which gives us an indication of the performance of the adaptive filters.
Therefore, multiplicative and additive noise models were created to use with
the simulations in determining usefulness of the adaptive filters.
The third algorithm tested was the Sigma Filter algorithm, and a
comparison was made to the other algorithms in chapter 5. The
Sigma Filter is a simpler statistical model which proved to be very effective in
the removal of noise. A discussion and derivation about the Sigma Filter will
be presented below.
4.1 Multiplicative (Speckle) Noise Model Presented
Speckle noise is very different from additive noise but similar to
multiplicative noise. It is signal-dependent and spatially correlated, which
distinguishes it from purely multiplicative noise. The multiplicative noise
model for speckle is only an approximation that ignores the correlation of
the speckle. If we
model speckle noise as multiplicative, we can observe the speckled image
as formed from the amplitude transmittance f^{1/2}(x,y) and the random
phasor exp(jφ(x,y)), which is due to the roughness of the terrain surface. A
block diagram is given in figure 4.2 showing how we obtain a speckled
image.
Figure 4.2 Block diagram of Speckled Image
h(x,y) is the coherent point spread function, shown below:

h(x,y) = [ sin(πx/R_x) / (πx/R_x) ] · [ sin(πy/R_y) / (πy/R_y) ],   (4.1)
where (R_x, R_y) are the pixel point resolutions for point P. If the terrain
surface is rough compared to the wavelength, we will obtain "fully
developed" speckle, which is described with exp(jφ(x,y)), where φ(x,y) is an
uncorrelated random field uniformly distributed between 0 and 2π. We
assume that the phase φ(x,y) is statistically independent of the original
image f(x,y). b(x,y) is the complex amplitude speckle image, expressed as

b(x,y) = Σ_i Σ_k h(x−i, y−k) f^{1/2}(i,k) exp(jφ(i,k)).   (4.2)

Now g(x,y) is the speckle intensity, shown as

g(x,y) = |b(x,y)|².                                      (4.3)
g(x,y) is our image containing multiplicative noise. The simulations used the
multiplicative noise model described above to corrupt images. Although the
model only approximates speckle noise, it allows us to determine the
amount of noise used to corrupt the images. The simulations were
compared in chapter 5 to determine how effective the algorithms were in the
removal of multiplicative noise.
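A minimal MATLAB sketch of this speckle model is shown below (the clean
intensity image f is assumed given; the kernel support and the resolutions
R_x, R_y are arbitrary illustrative choices):

% Corrupt a clean intensity image f with modeled speckle, eqs. (4.1)-(4.3)
Rx = 2;  Ry = 2;                         % pixel point resolutions (assumed)
u = -3:3;                                % kernel support (assumed)
hx = sin(pi*u/Rx) ./ (pi*u/Rx);  hx(u == 0) = 1;
hy = sin(pi*u/Ry) ./ (pi*u/Ry);  hy(u == 0) = 1;
h = hy.' * hx;                           % separable coherent PSF, eq. (4.1)
phi = 2*pi*rand(size(f));                % uniform random phase on (0, 2*pi)
b = conv2(sqrt(f) .* exp(1i*phi), h, 'same');   % complex amplitude, eq. (4.2)
g = abs(b).^2;                           % speckled intensity image, eq. (4.3)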
4.2 Additive Noise Model Presented
Performance of the adaptive algorithms was tested by corrupting the
images with additive noise. The additive noise model is a simpler model
than the multiplicative model and is discussed below. We first consider a
clean pixel x(i,j) and add noise w(i,j) as follows:

z(i,j) = x(i,j) + w(i,j),                                (4.4)

where z(i,j) is now the corrupted image pixel. We choose w(i,j) to be a white
random sequence with

E[w(i,j)] = 0                                            (4.5)

and

E[w(i,j)²] = 1.                                          (4.6)

The variance of w(i,j) was modified to add more or less noise by multiplying
by some constant.
The above model is the additive noise model we used to corrupt
images for the simulations given in chapter 5. This gave us an indication of
how well the adaptive filters function in the presence of additive noise.
Again, the results and comparisons are given in chapter 5.
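In MATLAB the additive model is only a few lines (a sketch; x is the clean
image and the scale constant c is an arbitrary choice controlling the SNR):

w = randn(size(x));       % white noise with E[w] = 0 and E[w.^2] = 1
c = 10;                   % constant scaling the added noise (assumed)
z = x + c * w;            % corrupted image, eq. (4.4)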


4.3 Sigma Filter Description and Derivation
The third algorithm tested is a statistical filter which we have termed
the "Sigma Filter". The Sigma Filter is a speckle smoothing algorithm based
on local statistics [17] which is designed to reduce speckle noise present in
SAR. The derivation of the filter assumes the reflectivity of the return signal
to be Gaussian distributed. The algorithm demonstrated effective
removal of speckle noise without degrading the subtle details of an image,
as presented at the end of chapter 5.
In order to develop the algorithm, a thorough understanding of an
assumed speckle model is needed. As mentioned previously, speckle noise
is characteristic to SAR images due to the phasing errors introduced when
processing the data. Given the nature of speckle noise, it is shown that
speckling obeys approximately a negative exponential distribution and is
somewhat multiplicative in nature. For the sigma derivation, we assume the
reflectivity's standard deviation is equal to its mean. The derivation below
follows from a paper written by Jong-Sen Lee [18].
First let z(i,j) be the intensity of the noisy image pixel and x(i,j) be the
desired or noise-free pixel. Then we may relate the two by

z(i,j) = x(i,j) · v(i,j),                                (4.7)

where v(i,j) represents the multiplicative noise with mean of unity and
variance σ_v². σ_v² is independent of x, and therefore we can assume that
x(i,j) and v(i,j) are uncorrelated. We then express the mean of z as

z̄ = x̄ v̄ = x̄,                                            (4.8)

and the variance of z as

var(z) = x̄² σ_v² + σ_x² σ_v² + σ_x².                     (4.9)

After obtaining statistical information from the image, we use a windowed
area which is stationary and say

E[x²] = x̄²,                                              (4.10)

and

var(z) = x̄² σ_v².                                        (4.11)

The standard deviation may be expressed as

σ_z = √var(z) = x̄ σ_v = z̄ σ_v.                           (4.12)

The algorithm takes a typical window of size (2n+1) X (2m+1), perhaps
a 7X7 window, and computes the statistics associated with that window to
filter the center pixel (i,j). We assume that the a priori mean z̄(i,j) is the
same as x̄(i,j). An average is determined from the noisy image, and then a
variance and standard deviation are computed. Once the statistics for the
window have been determined, a filtered average is computed where only
pixel values that lie within the two-sigma range
(z(i,j) − 2σ_v z(i,j), z(i,j) + 2σ_v z(i,j)) of the center pixel (i,j) are included in
the average. The center pixel (i,j) is then replaced by the newly computed
average.
A variation of this algorithm is also simulated which computes two
averages. One average is taken over the higher intensity range
(z(i,j), z(i,j) + 2σ_v z(i,j)) and another over the lower intensity range
(z(i,j) − 2σ_v z(i,j), z(i,j)). The center pixel is then replaced by whichever
average is closer to the pixel value at (i,j). This algorithm should reduce blur
and contrast loss.


Often when smoothing an image, one may encounter sharp spot
noise. Sharp noise occurs when the gray level is significantly different from
neighboring pixels within the window. If this occurs it will not be averaged
because most of the neighboring pixels will lie outside the two-sigma range.
One way to correct this is to check if the number of pixels within the
2σ_v z(i,j) range is smaller than or equal to some threshold. If this occurs, the
immediate four-neighbor average will replace the center pixel (i,j) as the
filtered value and not the two-sigma average. One should choose a
reasonable threshold otherwise the detail features of the image could be
compromised or blurred.
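A hedged MATLAB sketch of the complete procedure, including the
four-neighbor fallback, is given below (an illustration rather than the
Appendix A program; n is the window half-size, sigv the noise standard
deviation σ_v, and T the pixel-count threshold):

function out = sigma_filter(z, n, sigv, T)
% Sigma Filter sketch: average only the window pixels lying within two
% sigma of the center pixel; fall back to the four-neighbor average
% when too few pixels qualify (sharp spot noise).
[nr, nc] = size(z);
out = z;
for i = 1+n : nr-n
  for j = 1+n : nc-n
    w = z(i-n:i+n, j-n:j+n);
    zc = z(i,j);
    sel = w(abs(w - zc) <= 2*sigv*zc);   % pixels inside the two-sigma range
    if numel(sel) <= T                   % sharp spot: too few neighbors
      out(i,j) = (z(i-1,j) + z(i+1,j) + z(i,j-1) + z(i,j+1)) / 4;
    else
      out(i,j) = mean(sel);
    end
  end
end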
The Sigma Filter algorithm was simulated and tested with
multiplicative noise, additive noise, and actual SAR data to determine its
usefulness in the presence of noise. The results show that the sigma filter
performed well and did not blur the image excessively. The modified sigma
filter using two averages performed better, with less blur, and preserved the
edges quite well, though for the actual SAR image only. The results are
compared to the other filters and presented in chapter 5.


5. Practical Examples
Practical examples of the discussed algorithms in chapters 2, 3, and 4
are simulated and studied in this chapter. The MMSE, ERLS and the Sigma
algorithms were simulated using the software package MATLAB. The
algorithms were studied and observed in the presence of different noise.
Multiplicative and additive noise models discussed in chapter 4 were
simulated and used to corrupt the images. Knowing the amount of noise
used to corrupt the images was essential to determine the effectiveness of
the filter methods. Also, the performance of the algorithms was observed
with actual SAR data that was obtained from Radarsat International of
Richmond, Canada. The SAR image is a fine-mode Radarsat simulation of
the Saint John River in New Brunswick, which is C-band, HH polarized, from
the CCRS C/X SAR [15]. A signal-to-noise ratio improvement (SNRI) calculation
was determined for the simulations if the amount of noise used was known.
For the actual SAR image, the filter performance was evaluated by
observing the filtered image and comparing it to the original. Within each of
the algorithms tested were variations. These slight variations to the
algorithms made it possible to obtain slightly different results. The variations
are discussed below.
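The SNRI formula is not spelled out in the text; one common definition,
assumed here for illustration, compares the input and output signal-to-noise
ratios measured against the known clean image. In MATLAB, with x the
clean image, z the noisy input, and y the filtered output:

snr_in  = 10*log10( sum(x(:).^2) / sum((z(:) - x(:)).^2) );   % input SNR, dB
snr_out = 10*log10( sum(x(:).^2) / sum((y(:) - x(:)).^2) );   % output SNR, dB
snri = snr_out - snr_in;                                      % SNRI, in dB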


5.1 Variations with the Sigma Filter
The Sigma Filter was presented in section 4.3 and shown to be a
statistical filter. The filtering was based on local statistics within a window
size. The window size is a variation possible with the Sigma Filter. The
window sizes used with the Sigma Filter were a 5X5 window and a 7X7
window. Another variation for the Sigma Filter is its ability to reject sharp
spot noise. Sharp spot noise occurs when the gray level is significantly
different from neighboring pixels within the window. The algorithm
incorporates a threshold value that is checked against the 2σ_v z(i,j) range to
determine if spot noise is present. For example, consider a 3X3 window of
data with the pixel values shown below in figure 5.1.
1 1 1
2 255 3
4 5 6
Figure 5.1 3X3 window showing sharp spot noise
All values will lie outside the 2σ_v z(i,j) range, and therefore the center pixel
will not be changed through averaging. If the threshold value chosen is
zero, then the four-neighbor average would replace the center pixel.
Again, the threshold is a variation to the algorithm, but for the simulations
presented below only a threshold of one was used.


5.2 Variations for the MMSE Filter
The MMSE filter algorithm was discussed in section 2.3. A variation
of the algorithm was the K₁ constant shown in equation (2.29). The K₁
constant is a normalizing constant that was varied to filter the image more or
less. Using different K₁ values showed different results. To determine an
optimal K₁ constant for the additive noise and multiplicative noise, many
simulations were made with different values. Similar to the Sigma Filter,
another variation present with the MMSE algorithm was the window size
used to calculate the impulse function that filtered the noise. The algorithm
was flexible enough to use any odd sized window. It was necessary to use
odd sized windows to filter the middle pixel of the window. Window sizes
used for the simulations presented in this chapter were a 5X5 window and a
7X7 window.
5.3 Variations for the ERLS Algorithm
The Extended Recursive Least Squares (ERLS) algorithm was
discussed in section 3.2. The adaptive filter was designed as an FIR filter
with a variable number of coefficients. All simulations presented here used
15 coefficients, but it was possible to increase or decrease the number.
Another variation present with the ERLS design was the implementation to
allow flexibility in weighting the filtered parameters. λ is a weighting factor
sometimes termed the forgetting factor. By varying λ, we could observe the
performance of a pure RLS implementation by making λ = 1, or of an ERLS
implementation by making λ less than 1. λ was suggested to be varied
between 0.95 and 0.9995 but not lower than 0.95. Values of λ lower than
0.95 may create instabilities with the algorithm. The simulations presented
in this chapter used λ = 1 and λ = 0.97.
5.4 The Results Compared
The table of results is presented below to summarize and compare
the performance of the algorithms. Comparisons are made among the three
algorithms. The variations simulated with each algorithm are also shown in
the table.


Table 5.1 List of the Results

Filter              Additive Noise   Additive Noise   Mult. Noise
                    SNR = 5 dB       SNR = 10 dB      SNR = 10 dB
                    SNRI (dB)        SNRI (dB)        SNRI (dB)

Sigma Filter
  5X5, K=1          Std. 6.26        Std. 4.90        Std. 4.70
                    Mod. 3.99        Mod. 1.91        Mod. 0.22
  7X7, K=1          Std. 6.54        Std. 4.76        Std. 5.04
                    Mod. 3.91        Mod. 1.47        Mod. 0.22

MMSE Filter
  5X5, K1=5.3       5.54             4.29             4.95
  5X5, K1=7.3       5.74             4.24             4.81
  5X5, K1=17.3      -                -                4.91
  7X7, K1=7.3       5.48             4.24             4.81

ERLS
  λ=0.97            5.53             5.66             2.32
  λ=1.0             6.55             4.71             4.97


5.5 Results with Additive Noise
Comparing the results of the simulations with additive noise, we can
readily observe that the algorithms all performed about the same according
to the SNRI. The pure RLS (λ = 1.0) algorithm gave the best SNRI of 6.55
dB, and the modified Sigma Filter tended to give the worst results, with an
SNRI of 3.91 dB. If we observe the actual images, we do notice the RLS
(λ = 1.0) algorithm gave better results visually. The MMSE algorithm did
not perform well and gave results only slightly better than the modified
Sigma Filter. Comparing the two window sizes used with the Sigma Filter,
we observe the 7X7 window gave better results than the 5X5 window. The
MMSE algorithm did blur the filtered images significantly more than the
Sigma Filter or the ERLS. The Sigma Filter and the pure RLS methods
tended to preserve edges and detail better than the MMSE method. The
MMSE method produced big contrast differences, showing many black and
white pixels but not many gray levels in between. The RLS and Sigma Filter
results contained many more gray levels.
The additive noise simulations can be observed in figures 5.2 through
5.15. The order for the list of figures is the same as shown in the table.
Figures 5.2 through 5.5 are the simulation results from the Sigma Filter.
Figures 5.6 through 5.11 are the simulation results from the MMSE
algorithm. Figures 5.12 through 5.15 are the results of the ERLS
implementation.


Figure 5.2 Sigma Filter, SNR = 5 dB, 5X5 window, SNRI = 6.26 dB, SNRI Mod. = 3.99 dB


Figure 5.3 Sigma Filter, SNR = 10 dB, 5X5 window, SNRI = 4.90 dB, SNRI Mod. = 1.91 dB


Figure 5.4 Sigma Filter, SNR = 5 dB, 7X7 window, SNRI = 6.54 dB, SNRI Mod. = 3.91 dB


Figure 5.5 Sigma Filter, SNR = 10 dB, 7X7 window, SNRI = 4.76 dB, SNRI Mod. = 1.47 dB


Figure 5.6 MMSE, SNR = 5 dB, 5X5 window, K₁=5.3, SNRI = 5.54 dB


Figure 5.7 MMSE, SNR = 10 dB, 5X5 window, K₁=5.3, SNRI = 4.29 dB


Figure 5.8 MMSE, SNR = 5 dB, 5X5 window, K₁=7.3, SNRI = 5.74 dB


Figure 5.9 MMSE, SNR = 10 dB, 5X5 window, K₁=7.3, SNRI = 4.24 dB


Figure 5.10 MMSE, SNR = 5 dB, 7X7 window, K₁=7.3, SNRI = 5.48 dB


Figure 5.11 MMSE, SNR = 10 dB, 5X5 window, K₁=7.3, SNRI = 4.24 dB


Figure 5.12 ERLS, SNR = 5 dB, λ = 0.97, SNRI = 5.53 dB


Figure 5.13 ERLS, SNR = 10 dB, λ = 0.97, SNRI = 5.66 dB


Figure 5.14 ERLS, SNR = 5 dB, λ = 1.0, SNRI = 6.55 dB


Figure 5.15 ERLS, SNR = 10 dB, λ = 1.0, SNRI = 4.71 dB


5.6 Results with Multiplicative Noise
Observing and comparing the results of the simulations with
multiplicative noise, we again notice that the algorithms perform almost
equally according to SNRI. Visually there is not much difference among
the algorithms, since the severity of the noise corrupts the images
thoroughly. The images became very distorted in the presence of
multiplicative noise and appeared worse than an actual speckled SAR
image. A SAR image contains the necessary information even though the
total of the dephased wavelets is not phased correctly: certain pixels
are misphased, but neighboring pixels within the filter window may carry
correct information, so the filtering is more useful there than shown
here. The multiplicative noise model, in contrast, distorts all pixels
severely, with a higher concentration of distortion in the higher
intensity regions (lower pixel magnitudes), and therefore the statistical
filters do not respond well. The noisy image appears heavily spotted with
white, and as a result the filtered image appears less noisy but very
light in color. Perhaps a statistical filter giving more weight to the
darker pixel values would equalize the image better.
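To make that suggestion concrete, a hypothetical sketch of such a
weighting follows; the 5X5 window and the linear dark-favoring weights
are illustrative choices only, and this filter was not among those
simulated.
% Hypothetical dark-weighted local mean (illustrative only, not simulated).
% a1 is the noisy image; w is the window half-width.
w=2; % 5X5 window
[m1,n1]=size(a1);
xw=a1;
for i=w+1:m1-w
for j=w+1:n1-w
blk=a1(i-w:i+w,j-w:j+w);
wt=max(blk(:))-blk+1; % darker pixels receive larger weights
xw(i,j)=sum(sum(wt.*blk))/sum(wt(:));
end
end
Whether such a weighting would in fact correct the light bias left by the
multiplicative noise would have to be tested.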
Each of the filtered images showed poor results compared to the
original. Observing the SNRI, we see values similar to the additive noise
cases. This reflects the improvement over the noise-corrupted image, but
the noise due to random phasing is still present in the filtered image.
Again, as in the additive noise cases, the MMSE method tended to blur the
images more. Even though more blur is present with the MMSE, observation
of the filtered images shows that it performed better than the other
algorithms; figure 5.20 (MMSE, 5X5, K1 = 17.3) showed the best results of
the three.
The multiplicative noise simulations can be observed in figures 5.16
through 5.23, again in the same order as in the table. Figures 5.16 and
5.17 show the results for the Sigma Filter applied to images containing
multiplicative noise, figures 5.18 through 5.21 the results of the MMSE
algorithm, and figures 5.22 and 5.23 the results of the ERLS algorithm.
Figure 5.16 Sigma Filter, SNR = 10 dB, 5X5 window, SNRI = 4.7 dB, SNRI Mod. = 0.22 dB
Figure 5.17 Sigma Filter, SNR = 10 dB, 7X7 window, SNRI = 5.04 dB, SNRI Mod. = 0.22 dB
Figure 5.18 MMSE, SNR = 10 dB, 5X5 window, K1 = 5.3, SNRI = 4.95 dB
Figure 5.19 MMSE, SNR = 10 dB, 5X5 window, K1 = 7.3, SNRI = 4.81 dB
Figure 5.20 MMSE, SNR = 10 dB, 5X5 window, K1 = 17.3, SNRI = 4.91 dB
Figure 5.21 MMSE, SNR = 10 dB, 7X7 window, K1 = 7.3, SNRI = 4.24 dB
Figure 5.22 ERLS, SNR = 10 dB, λ = 0.97, SNRI = 2.32 dB
Figure 5.23 ERLS, SNR = 10 dB, λ = 1.0, SNRI = 4.97 dB
5.7 Results with Actual SAR
The algorithms tested above were also tested on actual SAR data.
The SAR data was obtained through Radarsat International, Inc. and is
C-band, HH polarized data from the CCRS C/X SAR. The SAR image observed
is of the Saint John River in New Brunswick, Canada. The specifications
of the SAR system are provided below in table 5.2 [16].
Table 5.2 Specification of SAR System

PARAMETER                        C-BAND SYSTEM
Transmitter
  Frequency                      5.30 GHz
  Wavelength                     5.66 cm
  Radiated Peak Power            34 kW
  Polarization                   Horizontal or Vertical
Receiver
  A/D Converter Dynamic Range    30 dB
  Compressed Pulse Width         High: 40 ns     Low: 120 ns
  Noise Figure                   High: 5.2 dB    Low: 3.7 dB
Antenna                          Horiz.          Vert.
  Polarization                   Horizontal      Vertical
  Azimuth Beam Width             3 deg.          3 deg.
  Elevation Beam Width           28 deg.         28 deg.
  Peak Gain                      24 dB           24 dB
The SAR data was collected by a Convair 580 aircraft fully equipped
for remote sensing. In addition to the C/X-SAR system, it can carry two
scatterometers for measuring backscatter intensities. It is also equipped
with a variety of cameras, a laser altimeter, an infrared radiometer, and
state-of-the-art flight control and navigation equipment that includes a
Global Positioning System unit.
The SAR imagery was collected during the 1987 flooding of the Saint
John River to study the potential of SAR in delineating flooded areas and
to assess flood-induced damage [16]. Satellite images were simulated
using the Convair 580 airborne system described above. The SAR data is
used here to determine the effectiveness of the estimation methods and
adaptive filtering in removing the speckle noise that is inherent to SAR
data. There are many aspects to the collection of SAR data and many ways
to collect it; other types of SAR data could be obtained and the filtering
methods described in this thesis applied to them. The filtering methods
studied can be very computationally intensive, however, so more time
would be needed to run the simulations on other SAR data types.
The MMSE algorithm again produced more blur than the other tested
algorithms but, as seen in the filtered results, performed better than
the other two. The MMSE method with large K1 produced better results than
with smaller K1. The larger window size improved the results slightly for
both the Sigma Filter and the MMSE method. The Sigma Filter performed
better than the ERLS but not as well as the MMSE method. The modified
Sigma Filter preserved edges very well but retained slightly more speckle
noise.
The ERLS algorithm performed worst among the algorithms even when the
weighting λ was changed.
The three algorithms tested on the SAR imagery of the Saint John
River are shown in figures 5.24 through 5.31. The Sigma Filter
simulations are shown in figures 5.24 and 5.25, the MMSE simulations in
figures 5.26 through 5.29, and the ERLS simulations in figures 5.30 and
5.31.
Figure 5.24 Actual SAR image, Sigma Filter, 5X5 window, K=1
Figure 5.25 Actual SAR image, Sigma Filter, 7X7 window, K=1
Figure 5.26 Actual SAR image, MMSE, 5X5 window, K1 = 5.3
Figure 5.27 Actual SAR image, MMSE, 5X5 window, K1 = 7.3
Figure 5.28 Actual SAR image, MMSE, 5X5 window, K1 = 10.3
Figure 5.29 Actual SAR image, MMSE, 7X7 window, K1 = 7.3
Figure 5.30 Actual SAR image, ERLS, λ = 0.97
Figure 5.31 Actual SAR image, ERLS, λ = 1.0
6. Conclusions
Synthetic Aperture Radar (SAR) has many uses in industry today.
Image quality problems exist with SAR because speckle noise is an inherent
characteristic of the data. Speckle noise may be approximated as
multiplicative noise, as discussed in chapter 1 and section 4.1. In
this thesis we investigated adaptive signal processing methods for removing
additive, multiplicative, and SAR-inherent noise in images. A tutorial
explanation of the MMSE, MAP, and ML estimation methods was presented
in chapter 2. Also, the MMSE method that was implemented and simulated
was discussed in section 2.2. Explaining the estimation methods then led to
an investigation of the ERLS and RLS adaptive algorithms. The ERLS and
RLS derivations were presented in chapter 3. To determine the
effectiveness of the ERLS algorithm, it was simulated for use with image
processing and presented in section 3.2.
The usefulness of the adaptive filters was determined by knowing the
amount of noise used to corrupt clean images and by observing the filtered
image. Multiplicative and additive noise models were developed in chapter
4. Using these models allowed us to corrupt images with a known amount of
noise and then to determine an SNRI. The Sigma Filter was another
algorithm tested alongside the MMSE and ERLS to determine noise-filtering
performance; it is a statistical filter and was presented in section 4.3.
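For the additive case, the corruption step in the Appendix A programs
amounts to adding zero-mean Gaussian noise and recording the resulting
SNR. A condensed sketch is given below; the programs fix the noise
standard deviation, so the scaling to a target SNR shown here is an
illustrative variant.
% Corrupt a clean image a with additive Gaussian noise near snr_db.
snr_db=10; % hypothetical target SNR in dB
[m1,n1]=size(a);
nn=randn(m1,n1);
nn=nn*sqrt(sum(a(:).^2)/(sum(nn(:).^2)*10^(snr_db/10)));
a1=a+nn; % corrupted image
sn=10*log10(sum(a(:).^2)/sum(nn(:).^2)) % realized SNR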
The final results were presented in chapter 5, showing a comparison
among the simulated methods. Filter performance could be understood
through the calculated SNRI for the images corrupted with additive or
multiplicative noise. The additive results showed improvement over the
corrupted image for each algorithm, with the RLS performing best. The
multiplicative results did not show good performance for any of the
algorithms, perhaps due to the assumptions under which the algorithms
were developed or to the multiplicative noise model itself. The MMSE
responded to the multiplicative noise slightly better than the other
algorithms but did not reduce the noise significantly. It appears that
the multiplicative noise model is much more extreme than the actual
speckle found in SAR. Again, the multiplicative noise model is only an
approximation to speckle and ignores the correlation that exists between
the noise and the signal. For the actual SAR images the only gauge of
performance was comparison of the filtered image with the original SAR
image. The filtered images did show significant improvement over the
original image.
APPENDIX A MATLAB Programs
MATLAB SIMULATION PROGRAMS
Sigma Filter for Additive Noise
% sigmaadd.m
% Speckle Noise Project, Demonstrate Sigma Filter
% Joseph Sanchez
%
% Clean image corrupted with Additive Noise
% Additive Noise Simulation with Sigma Filter
%
%
%
% > Generate a fixed point 2-D array of %s <
%
clear
clg
load clown
%a=X(149:199,50:100);
a=X;
[m1,n1]=size(a);
% > add additive noise to the image <
%ivn=input('noise');
ivn=12;
nn=(ivn*randn(m1,n1));
a1=a+nn;
%
% > Calculate the statistics for a given window size <
%
%dim=input('Enter Window Size in Matrix Form. ')
dim=[3 3]; % 7X7 window
m=dim(1);n=dim(2);
[m1,n1]=size(a1);
a=a(1:m1,1:n1);
%thres=input('Enter the K value to remove isolated noise. ')
thres=1;
% Calculate the SNR
%sn=10*log10((sum(sum(a.^2)))/(sum(sum((a-a1).^2))))
sn=10*log10((sum(sum(a.^2)))/(sum(sum(nn.^2))))
%
% > Calculate the Average, Variance and Sigma <
for i=m+1:(m1-m)
for j=n+1:(n1-n)
zave(i,j)=mean(mean(a1(i-m:i+m,j-n:j+n)));
temp=a1(i-m:i+m,j-n:j+n);
sigm(i,j)=std1(temp(:));
var(i,j)=sigm(i,j)*sigm(i,j);
end
end
% > Calculate the Delta Matrix <
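% The Sigma Filter flags every window pixel whose value lies within two
% local standard deviations of the center pixel (delta(k,l)=1) and
% replaces the center with the average of the flagged pixels. The
% updelta/lowdelta masks split the window into above-center and
% below-center halves for the modified (bias-corrected) version.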
xesttot=0;
deltatot=0;
xestup=0;xestlow=0;updeltatot=0;lowdeltatot=0;
for i=n+1:m1-n
for j=m+1:n1-m
for k=(i-n):(i+n)
for l=(j-m):(j+m)
if a1(k,l) >= (a1(i,j)-2*sigm(i,j)) & a1(k,l) <= (a1(i,j)+2*sigm(i,j))
delta(k,l)=1;
else
delta(k,l)=0;
end
if a1(k,l) >= a1(i,j) & a1(k,l) <= (a1(i,j)+2*sigm(i,j))
updelta(k,l)=1;
else
updelta(k,l)=0;
end
if a1(k,l) <= a1(i,j) & a1(k,l) >= (a1(i,j)-2*sigm(i,j)) % compare to noisy center pixel a1(i,j), not the clean image
lowdelta(k,l)=1;
else
lowdelta(k,l)=0;
end
xestup=updelta(k,l)*a1(k,l)+xestup;
xestlow=lowdelta(k,l)*a1(k,l)+xestlow;
updeltatot=updelta(k,l)+updeltatot;
lowdeltatot=lowdelta(k,l)+lowdeltatot;
xesttot=delta(k,l)*a1(k,l) + xesttot;
deltatot=delta(k,l)+deltatot;
end
end
deltot(i,j)=deltatot;
if deltatot <= thres
xest(i,j)=(a1(i-1,j)+a1(i+1,j)+a1(i,j-1)+a1(i,j+1))/4;
else
xest(i,j)=xesttot/deltatot;
end
xesttot=0;
deltatot=0;
if updeltatot <= thres | lowdeltatot <= thres
xestbias(i,j)=(a1(i-1,j)+a1(i+1,j)+a1(i,j-1)+a1(i,j+1))/4;
else
upave=xestup/updeltatot;
lowave=xestlow/lowdeltatot;
diffup=abs(a1(i,j)-upave);
difflow=abs(a1(i,j)-lowave);
if diffup >= difflow
xestbias(i,j)=upave;
else
xestbias(i,j)=lowave;
end
end
xestup=0;xestlow=0;updeltatot=0;lowdeltatot=0;
end
end
% > Calculate the Signal to Noise Improvement <
iei=(a(m+1:m1-m,n+1:n1-n)-a1(m+1:m1-m,n+1:n1-n)).^2;
iei2=(a(m+1:m1-m,n+1:n1-n)-xest(m+1:m1-m,n+1:n1-n)).^2;
snri=10*log10((sum(sum(iei)))/(sum(sum(iei2))))
iei2=(a(m+1:m1-m,n+1:n1-n)-xestbias(m+1:m1-m,n+1:n1-n)).^2;
snribias=10*log10((sum(sum(iei)))/(sum(sum(iei2))))
% Call joehist to Equalize the Original Image
[p,psum,aeq]=joehist1(a,6);
a=aeq;
% Call joehist to Equalize the Noisy Image
[p1,psum,aeq]=joehist1(a1,6);
%a1=aeq;
%> Display Images to Screen and Write to Postscript <
figure(1)
colormap(gray)
subplot(2,2,1),image(a)
title ('Equalized Original Image')
subplot(2,2,2),image(a1)
title ('Image + Noise')
[p2,psum2,xesteq]=joehist1(xest,6);
subplot(2,2,3),image(xesteq)
title ('Equalized Sigma Filtered Image')
[p3,psum3,xestbiaseq]=joehist1(xestbias,6);
subplot(2,2,4),image(xestbiaseq)
title ('Equalized Modified Sigma Filtered Image')
print -dps sigmaadd
figure(2)
colormap (gray)
subplot(221), bar(p)
title ('Histogram Clean Image')
subplot(222), bar(p1)
title ('Histogram Image + Noise')
subplot(223), bar(p2)
title('Histogram Sigma Filter')
subplot(224), bar(p3)
title('Histogram Modified Sigma Filter')
print -dps -append sigmaadd
save sigmaadd7_710db
Sigma Filter for Multiplicative Noise
% sigmamult.m
% Speckle Noise Project, Demonstrate Sigma Filter
% Joseph Sanchez
%
% Clean image corrupted with Multiplicative Noise
% Multiplicative Noise Simulation with Sigma Filter
%
%
%
% > Generate a fixed point 2-D array of %s <
%
clear
clg
load clown
%a=X(149:199,50:100);
a=X;
% Call fspeckle to Create Multiplicative noise
% 2m+1 and 2n+1 are size of Coherent Spread Function Window m=1 n=1
[a1]=fspeckle(a,1,1);
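% a1 is the speckled image; per the comment above, the arguments (1,1)
% give a 3X3 (2m+1 by 2n+1) coherent spread function window.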
% Call joehist to Equalize the Original Image
[p,psum,aa]=joehist1(a,6);
% Call joehist to Equalize the Noisy Image
[p1,psum,aa1]=joehist1(a1,6);
%
% > Calculate the statistics for a given window size <
%
%dim=input('Enter Window Size in Matrix Form.')
dim=[3 3]; % 7X7 window
m=dim(1);n=dim(2);
[m1,n1]=size(aa1);
aa=aa(1:m1,1:n1);
[m1,n1]=size(a1);
%thres=input('Enter the K value to remove isolated noise. ')
thres=1;
% Calculate the SNR
sn=10*log10((sum(sum(aa.^2)))/(sum(sum((aa-aa1).^2))))
%
% > Calculate the Average, Variance and Sigma <
for i=m+1:(m1-m)
for j=n+1:(n1-n)
zave(i,j)=mean(mean(a1(i-m:i+m,j-n:j+n)));
temp=a1(i-m:i+m,j-n:j+n);
sigm(i,j)=std1(temp(:));
var(i,j)=sigm(i,j)*sigm(i,j);
end
end
% > Calculate the Delta Matrix <
xesttot=0;
deltatot=0;
xestup=0;xestlow=0;updeltatot=0;lowdeltatot=0;
for i=n+1:m1-n
for j=m+1:n1-m
for k=(i-n):(i+n)
for l=(j-m):(j+m)
if a1(k,l) >= (a1(i,j)-2*sigm(i,j)) & a1(k,l) <= (a1(i,j)+2*sigm(i,j))
delta(k,l)=1;
else
delta(k,l)=0;
end
if a1(k,l) >= a1(i,j) & a1(k,l) <= (a1(i,j)+2*sigm(i,j))
updelta(k,l)=1;
else
updelta(k,l)=0;
end
if a1(k,l) <= a1(i,j) & a1(k,l) >= (a1(i,j)-2*sigm(i,j)) % compare to noisy center pixel a1(i,j), not the clean image
lowdelta(k,l)=1;
else
lowdelta(k,l)=0;
end
xestup=updelta(k,l)*a1(k,l)+xestup;
xestlow=lowdelta(k,l)*a1(k,l)+xestlow;
updeltatot=updelta(k,l)+updeltatot;
lowdeltatot=lowdelta(k,l)+lowdeltatot;
xesttot=delta(k,l)*a1(k,l) + xesttot;
deltatot=delta(k,l)+deltatot;
end
end
deltot(i,j)=deltatot;
if deltatot <= thres
xest(i,j)=(a1(i-1,j)+a1(i+1,j)+a1(i,j-1)+a1(i,j+1))/4;
else
xest(i,j)=xesttot/deltatot;
end
xesttot=0;
deltatot=0;
if updeltatot <= thres | lowdeltatot <= thres
xestbias(i,j)=(a1(i-1,j)+a1(i+1,j)+a1(i,j-1)+a1(i,j+1))/4;
else
upave=xestup/updeltatot;
lowave=xestlow/lowdeltatot;
diffup=abs(a1(i,j)-upave);
difflow=abs(a1(i,j)-lowave);
if diffup >= difflow
xestbias(i,j)=upave;
else
xestbias(i,j)=lowave;
end
end
xestup=0;xestlow=0;updeltatot=0;lowdeltatot=0;
end
end
% > Calculate the Signal to Noise Improvement <
iei=(a(m+1:m1-m,n+1:n1-n)-a1(m+1:m1-m,n+1:n1-n)).^2;
iei2=(a(m+1:m1-m,n+1:n1-n)-xest(m+1:m1-m,n+1:n1-n)).^2;
snri=10*log10((sum(sum(iei)))/(sum(sum(iei2))))
iei2=(a(m+1:m1-m,n+1:n1-n)-xestbias(m+1:m1-m,n+1:n1-n)).^2;
snribias=10*log10((sum(sum(iei)))/(sum(sum(iei2))))
%> Display Images to Screen and Write to Postscript <
figure(1)
colormap(gray)
subplot(2,2,1),image(a)
title ('Original Image')
subplot(2,2,2),image(a1)
title ('Noisy Image')
[p2,psum2,xesteq]=joehist1(xest,6);
subplot(2,2,3),image(xest)
title ('Sigma Filtered Image')
[p3,psum3,xestbiaseq]=joehist1(xestbias,6);
subplot(2,2,4),image(xestbias)
title ('Modified Sigma Filtered Image')
print -dps sigmamult
figure(2)
colormap (gray)
subplot(2,2,1),image(xesteq)
title ('Equalized Sigma Filtered Image')
subplot(2,2,3),image(xestbiaseq)
title ('Equalized Modified Sigma Filtered Image')
print -dps -append sigmamult
% > Calculate the Signal to Noise Improvement <
iei=(aa(m+1:m1-m,n+1:n1-n)-aa1(m+1:m1-m,n+1:n1-n)).^2;
iei2=(aa(m+1:m1-m,n+1:n1-n)-xesteq(m+1:m1-m,n+1:n1-n)).^2;
snri=10*log10((sum(sum(iei)))/(sum(sum(iei2))))
iei2=(aa(m+1:m1-m,n+1:n1-n)-xestbiaseq(m+1:m1-m,n+1:n1-n)).^2;
snribias=10*log10((sum(sum(iei)))/(sum(sum(iei2))))
figure(3)
colormap (gray)
subplot(221), bar(p)
title ('Histogram Clean Image')
subplot(222), bar(p1)
title ('Histogram Image X Noise')
subplot(223), bar(p2)
title('Histogram Sigma Filter')
subplot(224), bar(p3)
title('Histogram Modified Sigma Filter')
print -dps -append sigmamult
save sigmamult7_7
Sigma Filter for SAR Images
% EE6637 Two-Dimensional Digital Signal Processing
% Speckle Noise Project, Demonstrate Sigma Filter
% Joseph Sanchez
%
% Matlab test of the Sigma Algorithm to be used on
% SAR images or simulated SAR images
%
% > Generate a fixed point 2-D array of %s <
%
clear
clg
load -ascii imasc
%a=imasc(1:25,1:25);
a=imasc;
%
% > Calculate the statistics for a given window size <
%
%dim=input('Enter Window Size in Matrix Form. ')
dim=[3 3]; % 7X7 window
m=dim(1);n=dim(2);
dimimage=size(a);
%thres=input('Enter the K value to remove isolated noise. ')
thres=1;
% Calculate the SNR
%sn=10*log10((sum(sum(a.^2)))/(sum(sum((a-a1).^2))))
%
% > Calculate the Average, Variance and Sigma <
for i=m+1 :(dimimage(1)-m)
for j=n+1 :(dimimage(2)-n)
zave(i,j)=mean(mean(a(i-m:i+m,j-n:j+n)));
temp=a(i-m:i+m,j-n:j+n);
sigm(i,j)=std1(temp(:));
var(i,j)=sigm(i,j)*sigm(i,j);
end
end
% > Calculate the Delta Matrix <
xesttot=0;
deltatot=0;
xestup=0;xestlow=0;updeltatot=0;lowdeltatot=0;
for i=n+1:dimimage(1)-n
for j=m+1:dimimage(2)-m
for k=(i-n):(i+n)
for l=(j-m):(j+m)
if a(k,l) >= (a(i,j)-2*sigm(i,j)) & a(k,l) <= (a(i,j)+2*sigm(i,j))
delta(k,l)=1;
else
delta(k,l)=0;
end
if a(k,l) >= a(i,j) & a(k,l) <= (a(i,j)+2*sigm(i,j))
updelta(k,l)=1;
else
updelta(k,l)=0;
end
if a(k,l) <= a(i,j) & a(k,l) >= (a(i,j)-2*sigm(i,j))
lowdelta(k,l)=1;
else
lowdelta(k,l)=0;
end
xestup=updelta(k,l)*a(k,l)+xestup;
xestlow=lowdelta(k,l)*a(k,l)+xestlow;
updeltatot=updelta(k,l)+updeltatot;
lowdeltatot=lowdelta(k,l)+lowdeltatot;
xesttot=delta(k,l)*a(k,l) + xesttot;
deltatot=delta(k,l)+deltatot;
end
end
deltot(i,j)=deltatot;
if deltatot <= thres
xest(i,j)=(a(i-1,j)+a(i+1,j)+a(i,j-1)+a(i,j+1))/4;
else
xest(i,j)=xesttot/deltatot;
end
xesttot=0;
deltatot=0;
if updeltatot <= thres | lowdeltatot <= thres
xestbias(i,j)=(a(i-1,j)+a(i+1,j)+a(i,j-1)+a(i,j+1))/4;
else
upave=xestup/updeltatot;
lowave=xestlow/lowdeltatot;
diffup=abs(a(i,j)-upave);
difflow=abs(a(i,j)-lowave);
if diffup >= difflow
xestbias(i,j)=upave;
else
xestbias(i,j)=lowave;
end
end
xestup=0;xestlow=0;updeltatot=0;lowdeltatot=0;
end
end
% > Calculate the Signal to Noise Improvement <
%iei=(a(m+1:m1-m,n+1:n1-n)-a1(m+1:m1-m,n+1:n1-n)).^2;
%iei2=(a(m+1:m1-m,n+1:n1-n)-rimage(m+1:m1-m,n+1:n1-n)).^2;
%snri=10*log10((sum(sum(iei)))/(sum(sum(iei2))))
%> Display Images to Screen and Write to Postscript <
figure(1)
colormap(gray)
[p,psum,imasceq]=joehist((imasc+1),6); %Need to add one for zero value
subplot(2,2,1),image(imasceq)
title ('Original Image')
[p1,psum2,xesteq]=joehist((xest+1),6); %Need to add one for zero value
subplot(2,2,2),image(xesteq)
title ('Sigma Filtered Image')
[p2,psum3,xestbiaseq]=joehist((xestbias+1),6); %Need to add one for zero value
subplot(2,2,3),image(xestbiaseq)
title ('Modified Sigma Filtered Image')
print -dps sarltxta
figure(2)
colormap (gray)
subplot(221), bar(p)
title ('Histogram Original Image')
subplot(222), bar(p1)
title('Histogram Sigma Filter')
subplot(223), bar(p2)
title('Histogram Modified Sigma Filter')
print -dps -append sarltxta
save sar1txt7_7
MMSE for Additive Noise
% frostadapt4.m
% Speckle Noise Project, Demonstrate MMSE Adaptive Filter
% Described by Frost in Paper
% "A Model for Radar Images and Its Application to
% Adaptive Digital Filtering of additive Noise
% frostadapt4.m
%
% Joseph A. Sanchez
%
% > Generate a fixed point 2-D array of %s <
%
clear
clg
%
% > Calculate the statistics for a given window size <
%
load clown
%X=X(149:199,50:100);
a=X;
[m1,n1]=size(a);
r=zeros(m1,n1);
rimage=zeros(m1 ,n1);
% > add additive noise to the image <
%ivn=input('noise');
ivn=12;
nn=(ivn*randn(m1,n1));
a=a+nn;
sn=10*log10((sum(sum(X.^2)))/(sum(sum(nn.^2))))
dim=[2 2];
%dim=input('Enter Window Size in Matrix Form. ')
m=dim(1);n=dim(2);
mimp=zeros(m1,n1);
%
% > Calculate the Average, Variance and Sigma <
for i=m+1:(m1-m)
for j=n+1:(n1-n)
zave(i,j)=mean(mean(a(i-m:i+m,j-n:j+n)));
temp=a(i-m:i+m,j-n:j+n);
sigm(i,j)=std1(temp(:));
var(i,j)=sigm(i,j)*sigm(i,j);
end
end
% > Calculate the adaptive constant alpha <
zave2=zave.*zave;
alpha2(m+1:m1-m,n+1:n1-n)=var(m+1:m1-m,n+1:n1-n)./zave2(m+1:m1-m,n+1:n1-n);
alpha=sqrt(alpha2);
% > Calculate the Impulse Response Filter <
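% alpha = sqrt(var/zave^2) computed above is the local coefficient of
% variation: small in flat regions, so the exponential impulse response
% below is wide and smooths heavily; large near edges, so the response
% narrows and detail is preserved. Each window is filtered by
% multiplying 2-D FFTs, i.e. circular convolution over the window.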
%for i=m+1:(2*m+1):m1-m
% for j=n+1:(2*n+1):n1-n
for i=m+1:m1-m
for j=n+1:n1-n
for k=(i-m):(i+m)
for l=(j-n):(j+n)
mimp(k,l)=alpha(i,j)*exp(-alpha(i,j)*sqrt((i-k)^2+(j-l)^2));
end
end
mimp=mimp/(max(max(mimp))*7.3);
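% The constant 7.3 in the normalization above is the K1 gain quoted in
% the chapter 5 figure captions; the K1 = 5.3, 10.3, and 17.3 runs
% presumably substitute those values here.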
mimpfft(i-m:i+m,j-n:j+n)=fft2(mimp(i-m:i+m,j-n:j+n));
afft(i-m:i+m,j-n:j+n)=fft2(a(i-m:i+m,j-n:j+n));
rfft(i-m:i+m,j-n:j+n)=mimpfft(i-m:i+m,j-n:j+n).*afft(i-m:i+m,j-n:j+n);
r(i-m:i+m,j-n:j+n)=ifft2(rfft(i-m:i+m,j-n:j+n));
% r(i-m:i+m,j-n:j+n)=conv2(mimp(i-m:i+m,j-n:j+n),a(i-m:i+m,j-n:j+n),'same');
rimage(i,j)=r(i,j);
mimp=zeros(m1,n1);
end
end
% > Calculate the Signal to Noise Improvement <
rimage=abs(rimage);
iei=(X(m+1:m1-m,n+1:n1-n)-a(m+1:m1-m,n+1:n1-n)).^2;
iei2=(X(m+1:m1-m,n+1:n1-n)-rimage(m+1:m1-m,n+1:n1-n)).^2;
snri=10*log10((sum(sum(iei)))/(sum(sum(iei2))))
% Call joehist to Equalize the Image
[p,psum,aeq]=joehist1(X,6);
X1=aeq;
% Call joehist to Equalize the Image
[p1,psum,aeq]=joehist1(a,6);
a1=aeq;
% Call joehist to Equalize Image
[p2,psum,aeq]=joehist1(rimage,6);
rimageq=aeq;
iei=(X1(m+1:m1-m,n+1:n1-n)-a1(m+1:m1-m,n+1:n1-n)).^2;
iei2=(X1(m+1:m1-m,n+1:n1-n)-rimageq(m+1:m1-m,n+1:n1-n)).^2;
snri=10*log10((sum(sum(iei)))/(sum(sum(iei2))))
% > Plots <
figure(1)
colormap (gray)
subplot(2,2,1), image(X)
title ('Original Image')
subplot(2,2,2), image(a)
title ('Image + Noise')
subplot(2,2,3), image(rimage)
title ('Filtered Image')
subplot(2,2,4), image(rimageq)
title ('Equalized Filtered Image')
print -dps frostadapt4
figure(2)
colormap (gray)
subplot(221), bar(p)
title ('Histogram Original Image')
subplot(222), bar(p1)
title ('Histogram Image+Noise')
subplot(223), bar(p2)
title('Histogram Filtered Image')
print -dps -append frostadapt4