Citation
Preprocessing based adaptive filtering for image restoration

Material Information

Title:
Preprocessing based adaptive filtering for image restoration
Creator:
Rusyn, Teresa Lyn
Publication Date:
Language:
English
Physical Description:
vi, 77 leaves : illustrations ; 28 cm

Subjects

Subjects / Keywords:
Adaptive filters ( lcsh )
Image processing -- Digital techniques ( lcsh )
Adaptive filters ( fast )
Image processing -- Digital techniques ( fast )
Genre:
bibliography ( marcgt )
theses ( marcgt )
non-fiction ( marcgt )

Notes

Bibliography:
Includes bibliographical references (leaves 76-77).
General Note:
Submitted in partial fulfillment of the requirements for the degree, Master of Science, Electrical Engineering.
General Note:
Department of Electrical Engineering
Statement of Responsibility:
by Teresa Lyn Rusyn.

Record Information

Source Institution:
University of Colorado Denver
Holding Location:
Auraria Library
Rights Management:
All applicable rights reserved by the source institution and holding location.
Resource Identifier:
36398378 ( OCLC )
ocm36398378
Classification:
LD1190.E54 1996m .R87 ( lcc )

Full Text
PREPROCESSING BASED ADAPTIVE FILTERING
FOR IMAGE RESTORATION
by
Teresa Lyn Rusyn
B.S., University of Colorado at Denver, 1991
A thesis submitted to the
University of Colorado at Denver
in partial fulfillment
of the requirements for the degree of
Master of Science
Electrical Engineering
1996


This thesis for the Master of Science
degree by
Teresa Lyn Rusyn
has been approved
by
Tamal Bose
Jan Bialasiewicz
Miloje Radenkovic

Date


Rusyn, Teresa Lyn (M.S., Electrical Engineering)
Preprocessing Based Adaptive Filtering for Image Restoration
Thesis directed by Associate Professor Tamal Bose
ABSTRACT
The Recursive Conjugate Gradient (RCG) algorithm for adaptive filtering
converges faster than the Least Mean Squares (LMS) algorithm, but does not have
the high computational cost of the well known Recursive Least Squares (RLS)
algorithm. This thesis presents four preprocessing methods to improve the
performance of the RCG adaptive filter and shows the results. The thesis also
compares the performance of the RCG filter to the 2-D Block Diagonal Least
Mean Squares (TDBDLMS) filter. Two comparisons are made with an image
corrupted with normally distributed random noise and with an image corrupted
with "salt and pepper" noise. The results of these two comparisons are also shown.
This abstract accurately represents the content of the candidate's thesis. I
recommend its publication.
Signed
Tamal Bose


DEDICATION
I dedicate this thesis to my father who taught me that I can always reach for higher
goals.


CONTENTS
Chapter
1. Introduction......................................................1
2. The Optimal Filter................................................3
2.1 Review of Statistical Concepts....................................3
2.2 FIR and IIR Filters................................................6
2.3 Normal Equations and the Wiener Filter............................8
3. Adaptive Filters.................................................12
3.1 Steepest Descent.................................................13
3.2 The LMS Algorithm................................................14
3.3 The RLS Algorithm................................................19
3.4 Two Dimensional Adaptive Filters................................21
3.4.1 Two Dimensional FIR and IIR Filters...............................21
3.4.2 Two Dimensional LMS Filters......................................22
4. Recursive Conjugate Gradient Adaptive Filter.....................28
4.1 The Conjugate Gradient Filter....................................28
4.2 The Recursive Conjugate Gradient Algorithm.......................30


4.3 2-D Recursive Conjugate Gradient Algorithm.....................32
5. Preprocessing Techniques and Results...........................35
5.1 Preprocessing Methods..........................................42
5.2 RCG and TDBDLMS Comparison......................................56
6. Conclusion.....................................................64
Appendix
A.1 Rcg.m..........................................................66
A.2 Tdbdlms.m......................................................68
A.3 Filbank.m......................................................73
A.4 Segment.m......................................................74
References...........................................................76


1. Introduction
Adaptive filtering has become an important tool in image processing for
eliminating noise in the image. A low pass filter can effectively eliminate noise
from an image, but the filter will often blur the edges of the image to an
unacceptable degree in the process. An adaptive filter can preserve some of the
edges that low pass filtering blurs while it filters out noise in the image. The
adaptive filter does this by calculating new filter coefficients at each iteration of
the filter. The new values for the filter are usually based on minimizing some
objective function at every iteration. This thesis presents work based on several
sample based adaptive filtering algorithms.
The beginning of this thesis works through a development of adaptive
filters from the optimum filter algorithm, through the gradient and least mean
squares type filters, to the Recursive Conjugate Gradient (RCG) filter. During the
portion of the thesis that develops adaptive filters, we use FIR filter models and
introduce the IIR filter model just before we introduce the RCG filter. After the
development of the filters, the paper presents the preprocessing methods used and
the results from the output of the RCG filter using those methods. We also
compare the RCG filter with the Two Dimensional Block Diagonal Least Mean


Squares (TDBDLMS) filter. The noise in the images is a normally distributed
white noise for most of the paper although we do show some results for an impulse
(salt and pepper) type noise.
This thesis deals entirely with digital signal processing. Therefore, all the
signals in this paper are sampled signals. Each sample is represented as x(n),
where n is the index of the sample in signal x.
The next chapter develops the concepts behind the optimum filter, the
Wiener filter, starting with a short review of the statistics that relate directly to
adaptive filtering. This section defines the objective function for noise reduction
that filters such as the least mean squares (LMS) filter try to minimize. Chapter 3
first develops the one dimensional LMS filter and then expands the algorithm to a
two dimensional LMS filter. The section goes on to discuss the Recursive Least
Squares (RLS) filter. Chapter 4 develops the one dimensional RCG filter and then
the two dimensional RCG filter.
The thesis discusses four methods of preprocessing and presents some
results on those preprocessing methods in chapter 5. The comparison between the
RCG filter and the TDBDLMS filter are also in chapter 5. Conclusions are in
chapter 6.


2. The Optimal Filter
This chapter defines the optimal filter, the filter that attains the minimum mean
squared error. The first section of this chapter is a short
overview of statistics and stationarity. Section 2.2 introduces the FIR filter and IIR
filter models. The third section develops the Wiener filter as the optimal filter.
2.1 Review of Statistical Concepts
Adaptive filtering depends on a statistical analysis of signals. Many
algorithms use statistical tools directly or indirectly. This chapter touches on the
basic concepts of the first and second moments of a signal and then introduces
stationarity and the effect stationarity has on estimating moments [1,2].
The first moment of a variable is the mean, $\mu_n$, which is the expected value
of the variable. One method for finding the mean is to average the sample over a
large number of realizations of the signal:

$\mu_n = \lim_{N \to \infty} \frac{1}{N} \sum_{i=1}^{N} x_i(n).$   (2.1)
This average does not assume that points next to each other in the
signal necessarily have the same statistics.


A second order moment of the signal is the correlation. The correlation,
R, is the expected value of the product of two samples, given by
$R(n,m) = \lim_{N \to \infty} \frac{1}{N} \sum_{i=1}^{N} x_i(n)\, x_i(m).$   (2.2)
This formula actually determines the autocorrelation which compares samples
from the same signal. Cross correlation, P, compares samples from different
signals as

$P(n,m) = \lim_{N \to \infty} \frac{1}{N} \sum_{i=1}^{N} x_i(n)\, y_i(m).$   (2.3)
A problem that often arises is that these sample based equations require
several realizations of the signal to calculate the values of the moments for each
sample. Often there are only a few realizations of the signal, or even just one.
This problem can be overcome when the signal is wide
sense stationary.
In a strict sense stationary (sss) signal, the distribution function is shift
invariant. A weaker, but more useful characteristic of stationarity states that the
first and second moments of a wide sense stationary (wss) signal are finite and are
independent of the index of the sample. The mean value of a wss signal is


constant; $\mu_n = \mu_m$. The correlation becomes dependent only on the distance
between the samples being compared; $R(m,n) = R(n-m)$.
The characteristics of the wide sense stationary signal provide a new
method to estimate the first two moments of the signal. First, since the mean is
constant for all the samples of the signal, a time average rather than a sample
average can estimate the mean:

$\hat{\mu} = \frac{1}{N} \sum_{n=1}^{N} x(n).$   (2.4)

The correlation in a wss signal depends only on the distance between the two
samples, not on which two particular samples are being compared. Again, a time
average can estimate the value. However, two possible scaling factors to use are

$r_{ta}(m) = \frac{1}{N-m} \sum_{n=1}^{N-m} x(n)\, x(n+m)$   (2.5)

and

$r'_{ta}(m) = \frac{1}{N} \sum_{n=1}^{N-m} x(n)\, x(n+m).$   (2.6)

Estimate $r_{ta}$ is not biased, that is, the expected value of the estimate tends toward


the true correlation value. However, its variance can be large. The second
estimate, $r'_{ta}$, is biased, but can have a lower variance than $r_{ta}$. The preferred
estimate is often $r'_{ta}$ because its mean squared error, MSE, is usually lower than the
mean squared error for $r_{ta}$. Equations (2.5) and (2.6) refer to the autocorrelation,
but the formulas are valid for the cross correlation as well.
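For illustration, a minimal MATLAB sketch of these time-average estimates is given below; the signal length, test signal, and lag are hypothetical and only demonstrate equations (2.4) through (2.6).

% Minimal sketch of the time-average estimates of equations (2.4)-(2.6).
N = 1000;                        % number of samples in the single realization
x = randn(N,1);                  % hypothetical wss test signal
mu_hat = sum(x)/N;               % time-average estimate of the mean, eq. (2.4)
m = 5;                           % lag at which to estimate the autocorrelation
prods = x(1:N-m) .* x(1+m:N);    % products x(n)x(n+m)
r_unbiased = sum(prods)/(N-m);   % unbiased estimate r_ta, eq. (2.5)
r_biased = sum(prods)/N;         % biased estimate r'_ta, eq. (2.6)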
The first and second moments of a signal are very important in adaptive
filtering. Many adaptive filters use the mean, the autocorrelation, and the cross
correlation to determine the minimum mean squared error which is the standard to
rate the filters.
2.2 FIR and IIR Filters
Two of the most common filter models in general are the finite impulse
response (FIR) filter and the infinite impulse response (IIR) filter. Both filters can
be defined by the z-transform equation

$\frac{Y(z)}{U(z)} = \frac{\sum_{n=0}^{N-1} b_n z^{-n}}{1 + \sum_{m=1}^{M} a_m z^{-m}},$   (2.7)

where Y(z) is the output of the filter and U(z) is the input to the filter.


The equivalent difference equation for a FIR filter is

$y(k) = \sum_{n=0}^{N-1} b_n\, u(k-n).$   (2.8)

In the FIR filter, all the $a_m$'s in equation (2.7) are equal to zero [3]. FIR filters are
stable, but may need a large number of terms to define the filter accurately. These
filters are also called nonrecursive filters because the output depends only on
present and past values of the input signal.
The equivalent difference equation of the IIR filter from equation (2.7) is

$y(k) = -\sum_{m=1}^{M} a_m\, y(k-m) + \sum_{n=0}^{N-1} b_n\, u(k-n).$   (2.9)

An IIR filter generally needs fewer coefficients than a similar FIR filter, but the IIR
filter can become unstable unless the stability is closely monitored. The IIR filter
is called a recursive filter because it depends on past values of the output of the
filter as well as the present and past inputs of the filter [3].
The vector forms of the filter coefficients and the data inputs are both now
(M+N) by 1 vectors, given by
$\mathbf{h}_{IIR} = [a(1), a(2), \ldots, a(M), b(0), b(1), \ldots, b(N-1)]'$   (2.10)


and
$\mathbf{c}_k = [y(k-1), y(k-2), \ldots, y(k-M), u(k), u(k-1), \ldots, u(k-N+1)]'.$   (2.11)

Using these forms, the output of the filter is $y(k) = \mathbf{h}'_{IIR}\,\mathbf{c}_k$.
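As a brief illustration, the difference equation (2.9) can be realized with Matlab's built-in filter function; the coefficients and input below are hypothetical, and the feedback vector follows the same $1 + \sum a_m z^{-m}$ convention as equation (2.7).

% Minimal sketch of equations (2.8) and (2.9) using filter(b, a, u).
b = [0.5 0.3 0.2];      % hypothetical feedforward coefficients b(0), b(1), b(2)
a = [1 -0.4 0.1];       % hypothetical feedback coefficients; a = 1 gives an FIR filter
u = randn(100,1);       % hypothetical input signal
y = filter(b, a, u);    % output of the IIR filter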
2.3 Normal Equations and the Wiener Filter
The signal at the input to a practical filter is composed of two signals, an
ideal signal and a noise signal which is added to the ideal signal. If u (n) is the
input to the filter, it is composed of an ideal signal component, s (n), and a noise
component, w (n). The following figure shows a diagram of a general filtering
circuit that has y (n) as the output signal.
Figure 2.1 General Filter Diagram (the ideal signal s(n) and the noise w(n) sum to form the filter input u(n); the filter output is y(n))
The first filter this thesis develops is not an adaptive filter, but an optimum
filter known as the Wiener filter [1]. The Wiener filter uses the assumption that
both the ideal signal, s (n), and the noise signal, w (n), are wide sense stationary.
This thesis considers nonstationary signals in the next chapter.


Referring back to figure 2.1, the output, y (n), is a convolution sum of the
input signal and the filter coefficients, h (n). The error signal, e (n), of the filter is
the difference of the desired signal and the output signal. The objective of the
adaptive filters is noise reduction and, therefore, the desired signal of the filter, d
(n), is the ideal signal, s (n). The error signal is given by
$e(n) = d(n) - y(n).$
(2.12)
The error signal is used to define the mean squared error which is the objective for
noise reduction. The mean squared error, MSE, is the expected value of the square
of the error signal:
$\varepsilon = E[e^2(n)].$
(2.13)
It can be shown [1,2] that the MSE is a quadratic function which implies that it has
a unique minimum or maximum. The derivative of the MSE with respect to the
filter coefficients produces the normal equations that define the extreme points of
the function [1,2]:


$\frac{\partial \varepsilon}{\partial h(k)} = -2p(k-1) + 2\sum_{m=1}^{M} r(m-k)\, h(m) = 0,$   (2.14)
which becomes
$\sum_{m=1}^{M} r(m-k)\, h_o(m) = p(k-1), \quad k = 1, 2, \ldots, M.$   (2.15)
The r(m-k) term is the autocorrelation, which depends only on the difference
between the two samples for stationary signals, and the p(k-1) term is the cross
correlation term. The second derivative of the MSE equals the autocorrelation
factor, r(m-k). Since the autocorrelation matrix, R, is positive semidefinite, r(m)
is nonnegative. Therefore, the second derivative of the MSE is nonnegative
everywhere, so the error surface is concave up. This implies that the
extremum is a minimum point. This defines a minimum error for the filter to
achieve. Having a single minimum point of the MSE is a very attractive property,
because there are no local minimum points to fall into.
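As an illustration of the normal equations, the short MATLAB sketch below estimates R and p from data and solves equation (2.15) in matrix form; the filter length, signals, and use of xcorr (Signal Processing Toolbox) are assumptions for the example only.

% Minimal sketch: estimate R and p, then solve the normal equations h = R\p.
M = 8;                                 % filter length (assumed)
u = randn(5000,1);                     % received signal u(n)
d = filter([1 0.5 0.25], 1, u);        % hypothetical desired signal d(n)
[r, lags] = xcorr(u, M-1, 'biased');   % autocorrelation estimates for lags -(M-1)..(M-1)
R = toeplitz(r(lags >= 0));            % M-by-M autocorrelation matrix
p = xcorr(d, u, M-1, 'biased');        % cross correlation between d(n) and u(n)
p = p(lags >= 0);                      % keep lags 0..M-1
h_opt = R \ p;                         % Wiener solution of equation (2.15)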
In the optimum solution for the Wiener filter, the expected value of the
error and the input signal values equals zero, that is
$E[e_o(n)\, u(n-k+1)] = 0, \quad k = 1, 2, \ldots, M.$   (2.16)


This implies that the error of the optimum solution is orthogonal to any sample of
the received signal, u (n). In addition, the expected value of the error of the
optimum Wiener filter and the optimum output is zero,
$E[e_o(n)\, y_o(n)] = 0,$   (2.17)
which implies that $e_o(n)$ and $y_o(n)$ are orthogonal. When e(n) is orthogonal to
y(n), the mean squared error is at its minimum value. This defines the condition that
many adaptive filters work to achieve.
The next chapter introduces nonstationary signals and the adaptive filters
that try to track the change in the statistics of the input signal. One of the most
common adaptive filters, the least mean squares filter, uses a gradient of the error
signal to find the minimum mean squared error.


3. Adaptive Filters
Using the Wiener filter produces the optimum filtering for a stationary
signal with stationary noise added to it. Unfortunately, many signals are not
stationary. This presents two problems for the Wiener filter. The first problem is
that the minimum mean squared error is not a fixed value. Second, the time
average estimates for the statistical moments of the signal are no longer valid. As
a consequence, these signal processing problems require a different solution.
Adaptive filters can provide a solution, although the result will not be an optimum
solution as in the case of the Wiener filter. Figure 3.1 shows the general diagram
for an adaptive filter.
Figure 3.1 General adaptive filter diagram
The signal s + n0 is the signal plus noise. This signal is delayed and becomes
s + n1. The noise component of the delayed signal is labeled differently because the
noise is decorrelated by the delay. The image (s), on the other hand, is still highly correlated.


The output of the filter (s') is subtracted from the desired signal, which is s + n0.
The difference is the error signal (e), which adjusts the coefficients of the filter.
3.1 Steepest Descent
Adaptive filters still use the minimum mean squared error as a goal. The
MSE is still a quadratic function. The minimum, unfortunately, is not constant,
but there is only one minimum for any one iteration. With a quadratic function,
the obvious solution to find the minimum point is to use the gradient in some
manner. The most straightforward adaptive algorithm that uses the gradient is the
steepest descent method.
The steepest descent method for updating the values of the filter
coefficients starts with the normal equation [1,2], which is equation (2.15) from
chapter 2. The matrix equivalent of equation (2.15) that solves for the filter
coefficients is

$\mathbf{h}_{opt} = \mathbf{R}^{-1}\mathbf{p}.$   (3.1)
The vector hopt represents the optimum filter coefficients. The steepest descent
method finds this optimum point iteratively using the negative of the direction of
the gradient,


$\mathbf{h}_{n+1} = \mathbf{h}_n - a_n\,\mathbf{F}_n\,\nabla\varepsilon.$   (3.2)
The gradient of the mean squared error, $\varepsilon$, is a vector of partial derivatives with
respect to the filter coefficients. $\mathbf{F}_n$ is a weighting matrix, which for steepest
descent is equal to the identity matrix. The term $a_n$ is a step size parameter whose
value changes according to the values of the other variables in the equation.
Referring to equation (2.14), the gradient of $\varepsilon$ in matrix form is

$\nabla\varepsilon = 2(\mathbf{R}\mathbf{h}_n - \mathbf{p}).$   (3.3)

Equation (3.2) then becomes

$\mathbf{h}_{n+1} = \mathbf{h}_n - 2a_n(\mathbf{R}\mathbf{h}_n - \mathbf{p}).$   (3.4)
The steepest descent method does converge toward the optimum value as long as $a_n$ is
a reasonable value, but it converges slowly, especially once it reaches the vicinity
of the optimum value.
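A minimal sketch of the steepest descent iteration of equation (3.4) is shown below; the R, p, step size, and iteration count are hypothetical.

% Minimal sketch of the steepest descent update, equations (3.3) and (3.4).
R = [2 0.5; 0.5 1];        % assumed autocorrelation matrix
p = [1; 0.5];              % assumed cross correlation vector
h = zeros(2,1);            % initial filter coefficients
a = 0.1;                   % step size parameter
for n = 1:200
    grad = 2*(R*h - p);    % gradient of the MSE, equation (3.3)
    h = h - a*grad;        % steepest descent update, equation (3.4)
end
% h now approximates the optimum solution h_opt = R\p of equation (3.1).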
3.2 The LMS Algorithm
The steepest descent algorithm does solve the first problem that occurs
when the signal is not stationary, namely that the minimum squared error is no


longer a constant value. However, the computational cost is very high. One has to
recalculate the autocorrelation and cross correlation for each iteration of the filter.
In addition, the steepest descent does not solve the second problem associated with
nonstationary signals. When a signal is not wide sense stationary one cannot
assume that the correlation is based only on the difference between two samples of
a signal. Therefore, the time averages of the autocorrelation and the cross
correlation are no longer valid. The least mean squares, LMS, filter solves both
problems by introducing an estimate of the gradient of the mean squared error.
Instead of taking the gradient of the expected value of the square of the
error function as in the steepest descent filter, the LMS filter [1,2] estimates that
value by taking the gradient of the instantaneous value of the square of the error
function:

$\hat{\nabla}_n = \frac{\partial e^2(n)}{\partial \mathbf{h}}.$   (3.5)

Using the definition of the error signal in equation (2.12) and the fact that y(n) is
the convolution sum of the input to the filter and the filter coefficients, the
gradient of the square of the error signal becomes

$\frac{\partial e^2(n)}{\partial \mathbf{h}} = \frac{\partial\left(d(n) - \mathbf{h}'\mathbf{u}(n)\right)^2}{\partial \mathbf{h}} = -2e(n)\,\mathbf{u}(n).$   (3.6)
The term on the right side of the equation times the step size parameter, a, is the
update for the filter coefficients. The entire LMS update equation turns out to be
computationally simple, and is shown by
$\mathbf{h}_{n+1} = \mathbf{h}_n + a\, e(n)\, \mathbf{u}(n).$   (3.7)
In equation (3.7) the step size variable, a, includes the -2 coefficient from equation
(3.6).
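A minimal sketch of a sample-by-sample FIR LMS loop built around equation (3.7) is given below; the filter length, step size, and signals are hypothetical.

% Minimal sketch of the FIR LMS update, equation (3.7).
M = 8;                           % filter length (assumed)
a = 0.01;                        % step size parameter
h = zeros(M,1);                  % filter coefficients
u = randn(5000,1);               % input signal
d = filter(ones(M,1)/M, 1, u);   % hypothetical desired signal
for n = M:length(u)
    un = u(n:-1:n-M+1);          % vector of the M most recent input samples
    e = d(n) - h'*un;            % error signal, equation (2.12)
    h = h + a*e*un;              % LMS update, equation (3.7)
end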
Equations (3.6) and (3.7) refer to LMS FIR filters. The gradient of the
instantaneous value of $e^2$ for an IIR filter is not as straightforward as it is for the
FIR filter. From equation (2.9), the y(k-n) terms are dependent on the filter
coefficients and are not canceled out in the derivative. The gradient takes the form

$\frac{\partial e^2(k)}{\partial \mathbf{h}} = -2e(k)\frac{\partial y(k)}{\partial \mathbf{h}},$   (3.8)
where h (i) is the vector containing the filter coefficients for both the input data
terms and the output data terms. The gradient equation is no longer a simple


quadratic with one global minimum point. There are possible local minimum
points that the algorithm can find instead of the global minimum point [1]. One
way of avoiding this problem is to replace the y(k-n) terms in equation (2.9) with
d(k-n) terms. The algorithm is then called the equation-error method as opposed to
the output-error method. The results of this method are biased, but the estimated
mean squared error for the equation-error method is still just a quadratic equation
and has only one minimum point [4].
The size of a is another important consideration in the LMS filter. As a
increases, the step size of the filter increases and the filter converges faster.
However, increasing a also increases the noise included in the gradient of the
instantaneous error squared. This gradient estimate fluctuates about the true value
of the gradient of the mean squared error. These fluctuations increase as a
increases, up to the point of causing instability in the filter. A general range for a
is [1,2]

$0 < a < \frac{2}{\lambda_{max}},$   (3.9)

where $\lambda_{max}$ is the largest eigenvalue in the correlation matrix, R.
The LMS filter tends to converge only slowly to the optimum solution.
Also, one great disadvantage of the LMS algorithm occurs when the input signal,


u (n), is random. The filter coefficients, h (n), become random and the filter does
not converge smoothly to the minimum mean squared error. In fact, the LMS filter
does not converge to exactly the optimum filter coefficients. This causes the LMS
filter to reach a steady state MSE that is larger than the minimum mean squared
error. The difference between the minimum MSE and the steady state MSE of the
LMS filter is called the excess mean squared error of the LMS filter.
There are many variations of the LMS filter. One basic algorithm is for the
block LMS (BLMS) filter [1]. This algorithm is designed to decrease the number
of computations for the filter updates. In the BLMS filter, the filter is updated only
once for a block of N sample points. The gradient is an average of the gradients
over the block and the output, y, is a vector. The update equation becomes

$\mathbf{h}_{k+1} = \mathbf{h}_k + \frac{a}{N}\sum_{j=0}^{N-1} e(kN+j)\,\mathbf{u}(kN+j).$   (3.10)
The BLMS, in general, has a small computational advantage over the LMS, but the
real computational savings is when the algorithm for the BLMS takes advantage of
such techniques as the FFT to make the convolution operation more efficient. The
speed of convergence is slower than for the LMS by a factor of N, but the excess
mean squared error is reduced by the same factor of N. These two conditions tend


to offset each other to make the convergence of the BLMS comparable to the
convergence of the LMS.
3.3 The RLS Algorithm
Another algorithm for finding a solution to the normal equations introduced
in chapter 2 is based on Newton's method for finding the zero of a function [1,2].
The recursive least squares, RLS, filter converges to the optimum filter much faster
than the LMS filter. Ideally, the RLS filter will converge to the optimum filter in
just one step. This, of course, is not possible in a realizable filter.
The Newton method uses an estimate of the tangent of a function at point
xn to find a point closer to the zero point of the function:
$f'(x_n) = \frac{f(x_n)}{x_n - x_{n+1}}.$   (3.11)
A simple rearranging of the values in the formula solves the problem for the next
point in the function, $x_{n+1}$. Adaptive filters try to find the zero of the gradient of
the mean squared error. Applying Newton's method to the gradient produces an
interesting solution. Since the gradient is linear, the tangent of the gradient at any
point is identical to the gradient and, therefore, Newton's method can find the zero
point in one iteration.


The equivalent equation for an RLS adaptive filter is [1]

$\mathbf{h}_{n+1} = \mathbf{h}_n - \frac{1}{2}\mathbf{R}^{-1}\frac{\partial\varepsilon}{\partial\mathbf{h}},$   (3.12)
which, ideally, will produce the optimum filter coefficients in one iteration.
Unfortunately, we must be satisfied with estimates for the gradient term and for the
inverse of the correlation matrix. A reasonable estimate for the gradient is the
LMS estimate from equations (3.6) and (3.7). This term introduces noise into the
ideal RLS equation which leads to some of the drawbacks in the LMS algorithm.
It might be prudent to include the step size variable, a, to control the effect of the
gradient estimate. The estimate for the correlation matrix, R, can be a recursive
estimate based on a time average, as

$\mathbf{R}(n+1) = \mathbf{R}(n) + \mathbf{x}(n+1)\,\mathbf{x}'(n+1).$   (3.13)
We still need the inverse of the correlation matrix. We can use the matrix
inversion lemma to save computations. This lemma requires a square,
nonsingular matrix and an update term of the form of the term on the right side of
equation (3.13). Using the lemma, this estimate of the inverse of the matrix never
needs to be explicitly calculated:


$\mathbf{R}^{-1}(n+1) = \left[\mathbf{R}(n) + \mathbf{x}(n+1)\mathbf{x}'(n+1)\right]^{-1} = \mathbf{R}^{-1}(n) - \frac{\mathbf{R}^{-1}(n)\,\mathbf{x}(n+1)\,\mathbf{x}'(n+1)\,\mathbf{R}^{-1}(n)}{1 + \mathbf{x}'(n+1)\,\mathbf{R}^{-1}(n)\,\mathbf{x}(n+1)}.$   (3.14)
All the RLS algorithm needs is the initial filter coefficients and correlation matrix.
The RLS algorithm is much more computationally intensive than the LMS
algorithm, but the convergence is much faster.
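The sketch below shows one way to organize an RLS-style loop so that the inverse correlation matrix is updated with the matrix inversion lemma of equation (3.14) and never inverted directly; the initialization, signals, and filter length are hypothetical.

% Minimal sketch of an RLS-type update using the matrix inversion lemma.
M = 8;                            % filter length (assumed)
h = zeros(M,1);                   % filter coefficients
Rinv = 100*eye(M);                % initial estimate of the inverse correlation matrix
u = randn(5000,1);                % input signal
d = filter(ones(M,1)/M, 1, u);    % hypothetical desired signal
for n = M:length(u)
    x = u(n:-1:n-M+1);                    % current data vector
    k = (Rinv*x) / (1 + x'*Rinv*x);       % gain vector from the lemma
    Rinv = Rinv - k*(x'*Rinv);            % update of the inverse, equation (3.14)
    e = d(n) - h'*x;                      % a priori error
    h = h + Rinv*x*e;                     % Newton-type update, cf. equation (3.12)
end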
3.4 Two Dimensional Adaptive Filters
Now the thesis moves on to filters that are appropriate for image
processing, two dimensional filters. We start with a quick introduction to the two
dimensional FIR and IIR filters. Next we discuss the two dimensional least mean
squares filter and then expand the definition to two dimensional block least mean
squares filters.
3.4.1 Two Dimensional FIR and IIR Filters
In the second section of chapter 2 is a discussion of FIR and IIR filters in
one dimension. This section expands those definitions to two dimensional filter
models. Section 2.2 starts with equation (2.7) which is the basic definition for
both filter models using the z-transform. Equation (2.7) expanded to two
dimensions becomes


$\frac{Y(z_1,z_2)}{U(z_1,z_2)} = \frac{\displaystyle\sum\sum_{(n_1,n_2)\in R_b} b(n_1,n_2)\, z_1^{-n_1} z_2^{-n_2}}{\displaystyle 1 + \sum\sum_{(m_1,m_2)\in R_a-(0,0)} a(m_1,m_2)\, z_1^{-m_1} z_2^{-m_2}}.$   (3.15)
Ra and Rb are the regions of support for the a terms and the b terms respectively.
The difference equation for a 2-D FIR filter is

$y(k_1,k_2) = \sum\sum_{(n_1,n_2)\in R_b} b(n_1,n_2)\, u(k_1-n_1,\, k_2-n_2).$   (3.16)

The difference equation for a 2-D IIR filter is

$y(k_1,k_2) = -\sum\sum_{(m_1,m_2)\in R_a-(0,0)} a(m_1,m_2)\, y(k_1-m_1,\, k_2-m_2) + \sum\sum_{(n_1,n_2)\in R_b} b(n_1,n_2)\, u(k_1-n_1,\, k_2-n_2).$   (3.17)
Again, the a terms in equation (3.15) are all equal to zero for the FIR filter. As in
the one dimensional case, the FIR filter may require a large number of coefficients
but is stable, while the IIR filter needs fewer coefficients but may be unstable [5].
3.4.2 Two Dimensional LMS Filters
The two dimensional least mean squares, TDLMS[6,7], filter is based on a
similar update equation as the one dimensional LMS filter is based on. However,


the filter coefficients and the input signal are now in matrix form. Since the filter
coefficients are in matrix form, the gradient of the square of the error signal is also
in matrix form, given by
$\mathbf{H}_{j+1} = \mathbf{H}_j + a\, e_j\, \mathbf{U}_j.$   (3.18)
In equation (3.18) above, H is the matrix of filter coefficients, $e_j$ is the error at realization j,
and U is the input data matrix.
The TDLMS filter retains most of the characteristics of the one
dimensional LMS filter. There are no matrix computations more complicated than
addition. There is no averaging. The algorithm's convergence is independent of
the initial conditions. Also, the value of a can decrease the speed at which the filter
converges if it is too small, or increase the excess MSE if it is too large, just as the a
in the one dimensional LMS does.
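A minimal sketch of one TDLMS update in the matrix form of equation (3.18) is given below; the window size, step size, and data are hypothetical.

% Minimal sketch of a single TDLMS update, equation (3.18).
N1 = 4; N2 = 4;                 % filter size (assumed)
H = zeros(N1,N2);               % filter coefficient matrix
a = 1e-3;                       % step size parameter
U = randn(N1,N2);               % input data matrix under the window at iteration j
d = 1.0;                        % desired output sample at iteration j
y = sum(sum(H .* U));           % filter output for this window
e = d - y;                      % error at iteration j
H = H + a*e*U;                  % TDLMS update, equation (3.18)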
Just as there is an algorithm for a two dimensional LMS filter, there is a
basic algorithm for two dimensional block adaptive filters that are based on least
mean squares. The input signal, the desired signal, and the filter are all the same
matrices as in an algorithm that does not use blocks. The output signal, however,
is now a matrix as well [7,8]. In the basic two dimensional block adaptive filter
algorithm, the two dimensional signal, U, using a fixed block of data at each row


is

$y_j(l_1, l_2) = \sum_{m=0}^{N_1-1}\sum_{n=0}^{N_2-1} h(m,n)\, u(l_1-m,\ l_2+(j-1)L-n), \qquad l_2 = 1, 2, \ldots, L.$   (3.19)
The size of the filter weight matrix is $N_1$ by $N_2$ and the size of the block is L. The j
subscript is the number of the block that is being processed. For the 2D block
algorithm [6,9], the filter weights are arranged in a row ordered vector which is $N^2$
by 1 when $N_1 = N_2 = N$. The error is also in a row ordered vector form. The input
signal matrix, U, is L by $N^2$, where the elements are ordered as

$\mathbf{U}_j = [\mathbf{u}_1\ \ \mathbf{u}_2\ \ \cdots\ \ \mathbf{u}_L]',$   (3.20)

where each $\mathbf{u}_i$ is an $N^2$ by 1 vector of input signal samples. The update equation
becomes

$\mathbf{h}_{j+1} = \mathbf{h}_j + \frac{2 a_B}{L}\,\mathbf{U}'_j\,\mathbf{e}_j.$   (3.21)
There are many variations of this basic algorithm. One obvious variation is to
change the convergence factor, $a_B$, according to some optimizing criterion, as is done in
the 2-D Optimum Block Adaptive (TDOBA) filter. The update equation for the
step size parameter is

$a_B(j) = \frac{\mathbf{e}'_j\,\mathbf{U}_j\,\mathbf{U}'_j\,\mathbf{e}_j}{2\,\mathbf{e}'_j\,\mathbf{U}_j\,\mathbf{U}'_j\,\mathbf{U}_j\,\mathbf{U}'_j\,\mathbf{e}_j}.$   (3.22)


When the step size parameter is constant for all the blocks in this basic algorithm,
the method is the two dimensional block least mean squares, TDBLMS.
A different type of variation for the basic 2D block adaptive filter is the
Two Dimensional Block Diagonal Least Mean Squares (TDBDLMS) filter. In this
filter, the output signal is calculated in a 2-D form [8]. The matrix form of the
output equation is

$\mathbf{Y}_{i,j} = \mathbf{U}_{i,j}\,\mathbf{H}_{i,j}.$   (3.23)
The output signal is in a row ordered vector form and is KL by 1 for a filter that is
K by L. The filter weights, h, are also in a KL by 1 row ordered vector form. The
input signal is in the following form:
$\mathbf{U}_{i,j} = \begin{bmatrix} \mathbf{U}_{iK} & \mathbf{U}_{iK-1} & \cdots & \mathbf{U}_{iK-K+1} \\ \mathbf{U}_{iK+1} & \mathbf{U}_{iK} & \cdots & \mathbf{U}_{iK-K+2} \\ \vdots & \vdots & & \vdots \\ \mathbf{U}_{iK+K-1} & \mathbf{U}_{iK+K-2} & \cdots & \mathbf{U}_{iK} \end{bmatrix},$   (3.24)

where


$\mathbf{U}_p = \begin{bmatrix} u(p,jL) & u(p,jL-1) & \cdots & u(p,jL-L+1) \\ u(p,jL+1) & u(p,jL) & \cdots & u(p,jL-L+2) \\ \vdots & \vdots & & \vdots \\ u(p,jL+L-1) & u(p,jL+L-2) & \cdots & u(p,jL) \end{bmatrix}.$   (3.25)
The algorithm puts the desired signal and the error signal into a KL by 1
row ordered matrix form. With the signals in these forms, the calculations for the
adaptive filter become very straightforward, as given by
$\mathbf{E}_{i,j} = \mathbf{D}_{i,j} - \mathbf{Y}_{i,j} = \mathbf{D}_{i,j} - \mathbf{U}_{i,j}\,\mathbf{H}_{i,j}$   (3.26)
and
$\mathbf{H}_{i,j+1} = \mathbf{H}_{i,j} + \frac{2 a_B}{KL}\,\mathbf{U}'_{i,j}\,\mathbf{E}_{i,j}.$   (3.27)
Substituting the error equation into the update equation provides the update
equation necessary for the TDBDLMS algorithm.
In these equations, the subscripts i and j refer to the block position in the
image. The blocks do not overlap. The general algorithm is for a block least mean
squares filter, but the order that the blocks are filtered in is along diagonals in the
image instead of along rows or columns. This is to reduce continuity problems in
the adaptation process at the end of a row or column in the BLMS.
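The block ordering can be illustrated with a short sketch that lists the block indices diagonal by diagonal; the grid size is hypothetical, and Appendix A.2 walks the same kind of path with an alternating (zigzag) direction along each diagonal.

% Minimal sketch: visit non-overlapping blocks along the diagonals of the block grid.
imax = 4; jmax = 5;                       % number of blocks down and across (assumed)
order = [];
for s = 2:(imax + jmax)                   % blocks (i,j) with i + j = s share a diagonal
    for i = max(1, s-jmax):min(imax, s-1)
        order = [order; i, s-i];          % append block index (i,j)
    end
end
% 'order' now lists every block exactly once, diagonal by diagonal.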


This algorithm assumes that the statistics do not change much in the
individual blocks. Outside of the block, the filter depends on the step size
parameter, $a_B$, to adjust the convergence speed, the tracking ability, and the error
in nonstationary images. One way to improve the performance of the filter is to
adjust $a_B$ for each block according to

$a_B(i,j) = \frac{\mathbf{E}'_{i,j}\,\mathbf{U}_{i,j}\,\mathbf{U}'_{i,j}\,\mathbf{E}_{i,j}}{2\,\mathbf{E}'_{i,j}\,\mathbf{U}_{i,j}\,\mathbf{U}'_{i,j}\,\mathbf{U}_{i,j}\,\mathbf{U}'_{i,j}\,\mathbf{E}_{i,j}}.$   (3.28)
Adjusting $a_B$ has drawbacks. Obviously, the computational cost increases when
adjusting the step size, and the value of the step size can vary beyond the
convergence condition for the step size [7].


4. Recursive Conjugate Gradient Adaptive Filter
In this chapter we develop the sample based conjugate gradient, RCG,
filter. First, we discuss the one dimensional RCG. After that the one dimensional
case is extended to a two dimensional case.
The RCG algorithm, like the other algorithms developed in this paper, is
based on finding the minimum point of an objective function that is quadratic for a
stationary signal and is close to quadratic if the signal is not stationary. The
algorithm converges faster than the LMS filter without the computational
complexity of the RLS filter.
4.1 The Conjugate Gradient Filter
The LMS filter is very popular because it is simple to implement.
Unfortunately, the LMS filter is also very slow to converge. The RLS filter
converges faster, but is computationally expensive. In this section, we discuss an
intermediate method that converges faster than the LMS filter, but is less
computationally intense than the RLS filter.
The conjugate gradient algorithm [10] solves the quadratic problem of
minimizing the function


$f(\mathbf{h}) = \frac{1}{2}\mathbf{h}'\mathbf{Q}\mathbf{h} - \mathbf{b}'\mathbf{h},$   (4.1)
where Q is a symmetric positive definite matrix of size n by n. Therefore, the
quantity $\mathbf{a}'\mathbf{Q}\mathbf{a}$ is greater than zero for any nonzero vector a of length n. The negative gradient
of equation (4.1) generates an estimate of a direction vector (x) that is conjugate to
all the previous direction vectors with respect to the matrix Q. The filter
coefficient vector is of the form
$\mathbf{h}_{k+1} = \mathbf{h}_k + \alpha_k\,\mathbf{x}_k,$   (4.2)
where $\alpha_k$ is a step size parameter. The first step in the algorithm is the same as the
steepest descent algorithm. The initial direction vector is equal to the negative of
the gradient of equation (4.1). Succeeding values of the direction vector are a
combination of the gradient of the present iteration of equation (4.1) and the
preceding directions:
$\mathbf{x}_k = -\mathbf{g}_k + \beta_{k-1}\,\mathbf{x}_{k-1}.$   (4.3)

The value $\beta$ keeps the new direction conjugate to Q.


For $\mathbf{U}_i$, a vector of n input and output data samples in the IIR filter
form, Q is the n by n symmetric matrix equal to $2\mathbf{U}_i\mathbf{U}'_i$. This basic algorithm is
solved in block form to give Q a rank equal to n in order to keep Q a positive
definite matrix. However, the larger the block size, the higher the computational
cost of the algorithm [10].
4.2 The Recursive Conjugate Gradient Algorithm
The block conjugate gradient, BCG, algorithm converges faster than the
LMS algorithm, but can become computationally expensive for larger blocks. One
way to reduce the computational cost is to use a sample based algorithm[4] which
we develop here.
The objective function for the RCG algorithm to minimize is

$J(\mathbf{h}_n) = \sum_{i=0}^{n} \lambda^{n-i}\left(\mathbf{C}'_i\,\mathbf{h}_n - d_i\right)^2.$   (4.4)

In this equation, C is the data set in the form of equation (2.11) for an IIR filter.
The vectors h and d are the filter coefficients and the desired signal respectively.
The $\lambda$ is the forgetting factor. Q is the second derivative of J(h) with respect to h,
and is given by


$\mathbf{Q}_{n+1} = \lambda\,\mathbf{Q}_n + 2\,\mathbf{C}_{n+1}\mathbf{C}'_{n+1}.$   (4.5)
Equation (4.5) shows the recursive form of the derivative. The $\alpha$ and $\beta$ terms are
calculated with the gradient as
$\alpha_n = -\frac{\mathbf{g}'_n\,\mathbf{x}_n}{\mathbf{x}'_n\,\mathbf{p}_n + \varepsilon}$   (4.6)
and
$\beta_n = \frac{\mathbf{g}'_{n+1}\,\mathbf{g}_{n+1}}{\mathbf{g}'_n\,\mathbf{g}_n + \varepsilon},$   (4.7)
where $\mathbf{p}_n = \mathbf{Q}_n\mathbf{x}_n$. The factor $\varepsilon$ is included in the equation to avoid divide by zero
errors, but it is small enough not to affect the calculations.
The algorithm introduces a vector, $\mathbf{D}_n$, for simplicity:

$\mathbf{D}_{n+1} = \lambda\,\mathbf{D}_n + 2\,d_{n+1}\,\mathbf{C}_{n+1}.$   (4.8)
The gradient, g, now becomes
$\mathbf{g}_n = 2\sum_{i=0}^{n}\lambda^{n-i}\left(\mathbf{C}'_i\,\mathbf{h}_n - d_i\right)\mathbf{C}_i = \mathbf{Q}_n\,\mathbf{h}_n - \mathbf{D}_n.$   (4.9)


The background computations for the RCG algorithm are complete. To make the
algorithm more efficient takes a few more steps. The error can be written as
$e_{n+1} = \mathbf{C}'_{n+1}\,\mathbf{h}_{n+1} - d_{n+1}.$   (4.10)
The algorithm can now calculate the gradient recursively, as
$\mathbf{g}_{n+1} = \lambda\left(\mathbf{g}_n + \alpha_n\,\mathbf{p}_n\right) + e_{n+1}\,\mathbf{C}_{n+1}.$   (4.11)
With the recursive equations for the matrix, Q, and the gradient, the RCG can
update the filter coefficients according to equation (4.2). The algorithm calculates
the new direction vector using equation (4.3) [4].
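As a compact illustration, the sketch below runs the one dimensional RCG recursion of equations (4.2) through (4.11) on a hypothetical FIR identification problem; Appendix A.1 lists the two dimensional image version actually used in this thesis, and the data and filter length here are assumptions.

% Minimal sketch of the 1-D RCG recursion, equations (4.2)-(4.11).
M = 4; lam = 0.999; epsilon = 1e-34;
u = randn(2000,1);                          % hypothetical input signal
d = filter([0.5 0.3 -0.2 0.1], 1, u);       % hypothetical desired signal
h = zeros(M,1); g = zeros(M,1); x = -g; Qn = zeros(M,M);
for n = M:length(u)
    C = u(n:-1:n-M+1);                      % regressor, cf. equation (2.11)
    p = Qn*x;
    alpha = -(g'*x)/(x'*p + epsilon);       % step size, equation (4.6)
    h = h + alpha*x;                        % coefficient update, equation (4.2)
    Qn = lam*Qn + 2*(C*C');                 % recursive Q, equation (4.5)
    e = C'*h - d(n);                        % error, equation (4.10)
    gn = lam*(g + alpha*p) + e*C;           % recursive gradient, equation (4.11)
    beta = (gn'*gn)/(g'*g + epsilon);       % beta, equation (4.7)
    if (gn'*gn) > (g'*g)
        x = -gn;                            % reset the direction, cf. equation (4.20)
    else
        x = -gn + beta*x;                   % new conjugate direction, equation (4.3)
    end
    g = gn;
end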
4.3 2-D Recursive Conjugate Gradient Algorithm
The two dimensional form of the error function for the conjugate gradient
is

(4.12)

The vector C is of the form in equation (2.11) for the IIR filter input and vector h
is in the form of equation (2.10) for the IIR filter coefficients. All the parameters
in the algorithm have two indices for position. The equations are essentially the


same as the one dimensional equations. The difference is that the algorithm only
changes one index at a time. The equivalent equations for the two dimensional
case are
$\alpha_{n_1,n_2} = -\frac{\mathbf{g}'_{n_1,n_2}\,\mathbf{x}_{n_1,n_2}}{\mathbf{x}'_{n_1,n_2}\,\mathbf{p}_{n_1,n_2} + \varepsilon}$   (4.13)

$\mathbf{Q}_{n_1+1,n_2} = \lambda\,\mathbf{Q}_{n_1,n_2} + 2\,\mathbf{C}_{n_1+1,n_2}\mathbf{C}'_{n_1+1,n_2}$ or $\mathbf{Q}_{n_1,n_2+1} = \lambda\,\mathbf{Q}_{n_1,n_2} + 2\,\mathbf{C}_{n_1,n_2+1}\mathbf{C}'_{n_1,n_2+1}$   (4.14)

$e_{n_1+1,n_2} = \mathbf{C}'_{n_1+1,n_2}\,\mathbf{h}_{n_1+1,n_2} - d_{n_1+1,n_2}$ or $e_{n_1,n_2+1} = \mathbf{C}'_{n_1,n_2+1}\,\mathbf{h}_{n_1,n_2+1} - d_{n_1,n_2+1}$   (4.15)

$\mathbf{g}_{n_1+1,n_2} = \lambda\left(\mathbf{g}_{n_1,n_2} + \alpha_{n_1,n_2}\,\mathbf{p}_{n_1,n_2}\right) + e_{n_1+1,n_2}\,\mathbf{C}_{n_1+1,n_2}$ or $\mathbf{g}_{n_1,n_2+1} = \lambda\left(\mathbf{g}_{n_1,n_2} + \alpha_{n_1,n_2}\,\mathbf{p}_{n_1,n_2}\right) + e_{n_1,n_2+1}\,\mathbf{C}_{n_1,n_2+1}$   (4.16)

and

$\beta_{n_1,n_2} = \frac{\mathbf{g}'_{n_1+1,n_2}\,\mathbf{g}_{n_1+1,n_2}}{\mathbf{g}'_{n_1,n_2}\,\mathbf{g}_{n_1,n_2} + \varepsilon}$ or $\beta_{n_1,n_2} = \frac{\mathbf{g}'_{n_1,n_2+1}\,\mathbf{g}_{n_1,n_2+1}}{\mathbf{g}'_{n_1,n_2}\,\mathbf{g}_{n_1,n_2} + \varepsilon}.$   (4.17)

The update equation for the filter coefficients becomes


$\mathbf{h}_{n_1+1,n_2} = \mathbf{h}_{n_1,n_2} + \alpha_{n_1,n_2}\,\mathbf{x}_{n_1,n_2}$ or $\mathbf{h}_{n_1,n_2+1} = \mathbf{h}_{n_1,n_2} + \alpha_{n_1,n_2}\,\mathbf{x}_{n_1,n_2}.$   (4.18)
The calculation of the new direction vector depends on the value of the gradient.
On one hand, if
$\mathbf{g}'_{n_1+1,n_2}\,\mathbf{g}_{n_1+1,n_2} > \mathbf{g}'_{n_1,n_2}\,\mathbf{g}_{n_1,n_2}$ or $\mathbf{g}'_{n_1,n_2+1}\,\mathbf{g}_{n_1,n_2+1} > \mathbf{g}'_{n_1,n_2}\,\mathbf{g}_{n_1,n_2},$   (4.19)
then the direction vector is
$\mathbf{x}_{n_1+1,n_2} = -\mathbf{g}_{n_1+1,n_2}$ or $\mathbf{x}_{n_1,n_2+1} = -\mathbf{g}_{n_1,n_2+1}.$   (4.20)
Otherwise, the direction vector becomes
$\mathbf{x}_{n_1+1,n_2} = -\mathbf{g}_{n_1+1,n_2} + \beta_{n_1,n_2}\,\mathbf{x}_{n_1,n_2}$ or $\mathbf{x}_{n_1,n_2+1} = -\mathbf{g}_{n_1,n_2+1} + \beta_{n_1,n_2}\,\mathbf{x}_{n_1,n_2}.$   (4.21)
The number of computations for the RCG is $3n^2+8n+2$ multiplications and
$2n^2+5n+2$ additions. This compares to $4n^2+4n$ multiplications and $3n^2+n-1$
additions for the RLS algorithm. As n increases, the RCG filter has a lower
computational cost than the RLS filter [4,11]. The RCG filter is, therefore, an
intermediate filter that can converge faster than the TDLMS filter with a lower
computational cost than the RLS filter.
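As a concrete check of these counts, with n = 12 coefficients (the size used by the filter in Appendix A.1) the RCG needs 3(12)² + 8(12) + 2 = 530 multiplications and 2(12)² + 5(12) + 2 = 350 additions per sample, against 4(12)² + 4(12) = 624 multiplications and 3(12)² + 12 − 1 = 443 additions for the RLS.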


5. Preprocessing Techniques and Results
This chapter introduces the experimental work behind the thesis. Here we
present several preprocessing techniques for the RCG filter using a normally
distributed white noise. We also show a comparison between the RCG filter and
the two dimensional block diagonal least mean squares, TDBDLMS, filter. The
images we use for examples are the flujet and the clown demo images from
Matlab™. Figure 5.1 shows the original flujet image and figure 5.2 shows the
original clown image.
We use two types of noise in the filter comparison, one that is a normally
distributed white noise and another that is an impulse ("salt and pepper") type of
noise. The signal to noise ratio of each of the original images corrupted by the
white noise is 2.903 dB for the flujet and 2.672 dB for the clown. Figures 5.3 and
5.4 show, respectively, the flujet image corrupted by the white noise and the clown
image corrupted by the white noise. The signal to noise ratio for the images
corrupted by the "salt and pepper" type noise is 3.746 dB for the flujet and 3.773
dB for the clown. Figure 5.5 shows the flujet image corrupted with "salt and
pepper" noise and figure 5.6 shows the clown image corrupted with "salt and
pepper" noise.


Figure 5.1 Original flujet image


Figure 5.2 Original clown image


Figure 5.3 Flujet image with white noise


Figure 5.4 Clown image with white noise


Figure 5.5 Flujet image with "salt & pepper" noise


Figure 5.6 Clown image with "salt & pepper" noise


The next section discusses the preprocessing methods and presents the
results of each method for both images. The last section of this chapter presents
the results of the comparison between the RCG filter and the TDBDLMS filter.
5.1 Preprocessing Methods
The different types of preprocessing we show here are using a filter bank,
segmenting the image, and a combination of segmenting and filtering. We only
used these preprocessing techniques on the normally distributed white noise and
only with the RCG filter. However, we used all four preprocessing techniques on
each of the two images. Figure 5.7 shows the flujet after it has been filtered by
the RCG adaptive filter without any preprocessing. Figure 5.8 presents the clown
after the RCG adaptive filter and without any preprocessing.
The filter bank is designed to take advantage of the frequency spectrum of
the images. The filter bank consists of five filters that cover the entire spectrum of
the images. Even the high frequency components of the images have been kept in
spite of the fact that the noise is mostly in the higher frequencies. This is to help
preserve the edges as much as possible. All five filters are two dimensional,
circularly symmetric, FIR filters made with Kaiser windows. Table 5.1 shows
the bands for each filter. Figures 5.9 and 5.10 show the images after being
preprocessed with the filter bank and then filtered with the RCG adaptive filter.


Figure 5.7 Flujet image after the RCG filter


Figure 5.8 Clown image after the RCG filter


Figure 5.9 Flujet image after filter bank


Figure 5.10 Clown image after filter bank


The sampling rate for all of the filters is 10 rad/s and the maximum ripple is 0.07.
Using a filter bank adds a large amount of computational cost to processing the
image. However, this particular preprocessing method can be run very efficiently
in a parallel processing algorithm.
Filter        Pass Band 1 (rad/s)   Stop Band 1 (rad/s)   Pass Band 2 (rad/s)   Stop Band 2 (rad/s)
Low Pass      0.15                  0.35                  --                    --
Band Pass 1   0.35                  0.15                  0.40                  0.60
Band Pass 2   0.60                  0.40                  0.65                  0.85
Band Pass 3   0.85                  0.65                  0.90                  1.10
High Pass     1.10                  0.90                  --                    --
Table 5.1 Filter bands
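One possible way to design a single band of such a filter bank is sketched below: a 1-D Kaiser-window FIR prototype is transformed to an approximately circularly symmetric 2-D filter. The prototype order, Kaiser beta, and the normalization to the 5 rad/s Nyquist rate (half the 10 rad/s sampling rate stated above) are assumptions, and ftrans2 belongs to the Image Processing Toolbox.

% Minimal sketch: one band pass member of a 2-D Kaiser-window filter bank.
n = 32;                                     % 1-D prototype order (assumed)
beta = 4;                                   % Kaiser window parameter (assumed)
wn = [0.40 0.60]/5;                         % band edges normalized to the 5 rad/s Nyquist rate
b1 = fir1(n, wn, 'bandpass', kaiser(n+1, beta));   % 1-D band pass prototype
h1 = ftrans2(b1);                           % nearly circularly symmetric 2-D FIR filter
% The noisy image would then be filtered with conv2(image, h1, 'same').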
We selected the segments of the images manually. The flujet image has
large areas of background where the statistics of the area do not change.
Therefore, for the flujet, the segments of the image consist of rectangular areas of
the background and one rectangular area which contains the part of the image
which changes statistically. Figure 5.11 is of the flujet after it has been segmented
and has been through the adaptive filter. The clown image has been divided into


Figure 5.11 Flujet image after segmenting


nine separate rectangular regions. Each region has been selected to contain a
section of the image that is somewhat similar across the section statistically.
Figure 5.12 shows the results of segmenting the clown image and then running the
RCG adaptive filter.
The final preprocessing methods are a combination of the first two
methods. First, we filtered the image and then segmented each filtered image
before using the adaptive filter. Next, we segmented the image and filtered the
segments before using the adaptive filter. Figures 5.13 and 5.14 present the flujet
image and the clown image respectively. The two images have been through the
filter bank, then segmented, and then filtered with the RCG filter. Figure 5.15
shows the flujet after segmenting, then filtering with the filter bank, and then
filtering with the RCG filter. Figure 5.16 shows the clown image after the same
procedure.
Table 5.2 shows the resulting signal to noise ratios for the different
preprocessing methods. All these examples are for the image with the random
noise and using the RCG filter.
The filter bank can generally increase the signal to noise ratio by more than
1 dB. However, this gain is at a much higher computational cost, although using
parallel processing makes this type of method much more efficient. For the filter


Figure 5.12 Clown image after segmenting


Figure 5.13 Flujet image after filter bank then segmenting


Figure 5.14 Clown image after filter bank and then segmenting


Figure 5.15 Flujet image after segmenting and then filter bank


Figure 5.16 Clown image after segmenting and then filter bank


bank method, the image is filtered five times and is adaptive filtered five times.
Segmenting does not increase the signal to noise ratio as dramatically, but the
computational cost is much lower. This is because the segmenting method uses
Preprocessing Method           Signal to Noise Ratio, Flujet (dB)   Signal to Noise Ratio, Clown (dB)
None                           13.045                               10.177
Filter Bank                    15.088                               11.600
Segmenting                     13.744                               10.479
Filter Bank then Segmenting    15.095                               11.556
Segmenting then Filter Bank    15.325                               11.453
Table 5.2 Signal to noise ratios for different preprocessing methods
the adaptive filter once for each segment, but the size of the pieces that are filtered
with the adaptive filter each time is much smaller than the entire image. Using
a filter bank on the image after segmenting gives only a small additional increase
in the signal to noise ratio over using a filter bank on the image alone or segmenting
the image alone for the flujet. Segmenting the image after filtering likewise does not
produce a large increase in the signal to noise ratio for the flujet. The last two preprocessing
methods did not improve the clown image by much. In fact, using a filter bank
then segmenting decreased the signal to noise ratio from using a filter bank alone.


Segmenting and then using a filter bank also gave a lower signal to noise ratio than
using a filter bank alone for the clown image.
5.2 RCG and TDBDLMS Comparison
In this section of the thesis, we give a description of how we implemented
the adaptive filters and then show the results for the two types of noise. We used
each filter on each of the two types of noise.
The RCG filter uses an IIR filter and delays the input image by 1 sample in
both directions. The delay is to help decorrelate the noise. The forgetting factor is
0.999. All of the initial values for the filter coefficients, the output vector, the
gradient, and the matrix Q are set to zero.
For the TDBDLMS filter, we use a FIR filter algorithm. The filter size is 4
by 4. We first run the filter for 3600 blocks for the flujet and for 500 blocks for
the clown with the filter coefficients initialized at zero. Then we run the filter with
the filter coefficients from the end of the partial run. The step size is 7 x 10^-9
throughout the run.
Table 5.3 shows the results of the comparison. Figures 5.7 and 5.8 show
the images corrupted with white noise and filtered with the RCG filter. Figures
5.17 and 5.18 show the flujet image and the clown image respectively with "salt
and pepper" noise after running the RCG filter. The last four figures show the


results of using the TDBDLMS filter. Figure 5.19 shows the flujet with normally
distributed white noise and figure 5.20 shows the clown corrupted with normally
distributed white noise. Figures 5.21 and 5.22 show the flujet image and the clown
image respectively. Each image was corrupted with "salt and pepper" noise and
then filtered with the TDBDLMS adaptive filter.
Noise Type       RCG Flujet (dB)   RCG Clown (dB)   TDBDLMS Flujet (dB)   TDBDLMS Clown (dB)
Random, Normal   13.045            10.177           7.803                 6.576
Salt & Pepper    6.173             5.751            4.230                 3.902
Table 5.3 Comparison between RCG and TDBDLMS filters
Neither filter improves the "salt and pepper" corrupted image by very
much. However, in both cases the RCG filter increases the signal to
noise ratio by more than the TDBDLMS filter. The TDBDLMS algorithm uses a
variation of the simple LMS algorithm and can take advantage of FFT methods to
decrease the computational cost of the algorithm, but running the filter through the
partial runs to calculate the initial filter weights increases the computational cost.


Figure 5.17 Flujet image with "salt & pepper" noise
after RCG filter


Figure 5.18 Clown image with "salt & pepper" noise
after RCG filter


Figure 5.19 Flujet image with white noise
after TDBDLMS filter


Figure 5.20 Clown image with white noise
after TDBDLMS filter


Figure 5.21 Flujet image with "salt & pepper" noise
after TDBDLMS filter


Figure 5.22 Clown image with "salt & pepper" noise
after TDBDLMS filter


6. Conclusion
Adaptive filtering is very important for image processing. This type of
filtering can attenuate the noise corrupting the image while preserving some of the
edges. The statistical properties of most images change from one area to another
and an adaptive filter can change to filter the different areas of the image more
effectively than a fixed filter.
In adaptive filtering there are several functions different filters can use to
reach some minimum error objective. For the most part, the choices come down to
a trade-off between computational complexity and performance. The least mean
squares type of filters are generally simple, but do not converge very quickly while
an algorithm such as the Recursive Least Squares (RLS) filter is much more
complex, but converges much faster. The Recursive Conjugate Gradient(RCG)
filter stands in between the LMS and the RLS filters.
Processing the image before using an adaptive filter involves some of the
same trade-offs. Is the better performance worth the computational cost?
Segmenting does not increase the computations substantially and increases the
signal to noise ratio by a small amount. Using a filter bank can improve the
performance of the adaptive filter. Even though the additional computational cost


could make using a bank of filters prohibitive, a parallel processing algorithm can make
the filter bank method more efficient.
The RCG algorithm we used in this thesis performed better than the
TDBDLMS algorithm. The computational complexity was generally the same for
the implementations we used. However, neither algorithm performed well with the
"salt and pepper" noise.


APPENDIX
MATLAB CODE FOR FILTER PROGRAMS
A.1 Rcg.m
% This program runs the RCG adaptive filter.
% The input to the program is the image corrupted by noise.
% The output of the program is the output of the adaptive filter.
function y = rcg(L1)
% Forgetting factor.
lam = 0.999;
% Size of the input image.
[m1, n1] = size(L1);
% Delay the input signal by one sample in each direction.
L2 = L1(2:m1, 2:n1);
% Set all the initial values.
epsilon = 1e-34;
Qn = zeros(12, 12);
H = zeros(12, 1);
g0 = H;
y = zeros(m1, n1);
x0 = -g0;
% Begin the filter.
for j = 4:n1-1
  for i = 4:m1-1
    p = Qn * x0;
    alpha = -(g0' * x0) / (x0' * p + epsilon);
    H = H + alpha * x0;
    % Set the IIR filter regressor (past outputs and delayed inputs).
    C1 = [y(i,j-1) y(i-1,j) y(i-1,j-1) y(i-2,j)];
    C2 = [L2(i-1,j) L2(i-1,j-1) L2(i,j-1) L2(i-2,j-1)];
    C3 = [L2(i-1,j-2) L2(i-2,j-2) L2(i-2,j) 1];
    C = [C1 C2 C3]';
    % Determine the recursive values.
    Qn1 = lam * Qn + C * C';
    e(i,j) = C' * H - L2(i,j);
    g1 = lam * g0 + lam * alpha * p + e(i,j) * C;
    % Set the new values for the next iteration of the filter.
    beta = (g1' * g1) / (g0' * g0 + epsilon);
    % Calculate the new direction vector.
    if (g1' * g1) > (g0' * g0)
      x1 = -g1;
    else
      x1 = -g1 + beta * x0;
    end
    x0 = x1;
    g0 = g1;
    Qn = Qn1;
    % Calculate the value of the output.
    y(i,j) = C' * H;
  end;
end;


A.2 Tdbdlms.m
% This program runs the TDBDLMS filter.
% The inputs to the program are the original image, the image with noise,
% and the matrices with the index points (i,j) for the initial input matrix.
% The output of the program is the output of the adaptive filter.
% The program runs the filter for a set number of blocks with an initial
% filter of all zeros. The values of the filter coefficients after the partial
% run of the filter is the initial values of the filter coefficients for the second
% and complete run of the filter.
function Yf = tdbdlms (U,Un,Xi,Xj)
% Set the starting point of the image for filtering.
qi = 4;
qj = 4;
% Set the step size and the number of blocks for the first run.
uB = 7e-9;
nb1 = 1000;
[ml,nl] = size (Un);
% Make working copies of the index matrices.
Xia = Xi;
Xja = Xj;
% Set the initial values of the filter, the output, and the desired signal vectors.
H = zeros (16,1);
Y = H;
D = H;
Yf = zeros (ml,nl);
imax = (ml - 4)/4;
jmax = (nl - 4)/4;
ibl = 1;


jbl = 1;
% Start the partial run of the filter.
for q = 1:nb1
% Update the output matrix and the desired signal vector,
for i = 1:4
for j = 1:4
a = (i-1)*4 + j;
Yf((qi+i),(qj+j)) = Y(a);
D(a) = Un((qi+i),(qj+j));
end
end
% Enter the input matrix,
for i = 1:16
for j = 1:16
iim = Xia(i,j);
jim = Xja(i,j);
X(i,j) = Un(iim,jim);
end
end
% Update the output signal and the filter coefficients.
Y = X*H;
H = H + uB*(X'*D - X'*X*H);
% Determine the next block to process,
iinc = 0;
jinc = 0;
r = rem((ibl + jbl),2);
if (r == 0)
if (ibl == imax)
jbl = jbl +1;
jinc = 4;
else
ibl = ibl +1;


iinc = 4;
if (jbl > 1)
jbl = jbl-1;
jinc = -4;
end
end
else
if (jbl == jmax)
ibl = ibl + 1;
iinc = 4;
else
jbl = jbl + 1;
jinc = 4;
if (ibl >1)
ibl = ibl-1;
iinc = -4;
end
end
end
% Set the index values for the next input matrix,
blk = iinc*ones(16,16);
Xia = Xia + blk;
blk = jinc*ones(16,16);
Xja = Xja + blk;
clear blk
qi = qi + iinc;
qj = qj + jinc;
end
% Initialize the values for the second run of the filter.
qi = 4;
qj =4;
% Make working copies of the index matrices.
Xia = Xi;
Xja = Xj;


% Initialize the output signal and the desired signal.
Y = zeros(16,1);
D = Y;
Yf = zeros (ml,nl);
imax = (ml - 4)/4;
jmax = (nl - 4)/4;
ibl = 1;
jbl = 1;
% Start the second run of the filter,
for q = 1:(imax*jmax)
% Update the output matrix and the desired signal vector,
for i = 1:4
for j = 1:4
a = (i-1)*4 + j;
Yf((qi+i),(qj+j)) = Y(a);
D(a) = Un((qi+i),(qj+j));
end
end
% Enter the input matrix,
for i = 1:16
for j = 1:16
iim = Xia(i,j);
jim = Xja(i,j);
X(i,j) = Un(iim,jim);
end
end
% Update the output signal and the filter coefficients.
Y = X*H;
H = H + uB*(X'*D - X'*X*H);
% Determine the next block to process,
iinc = 0;
jinc = 0;


r = rem((ibl + jbl),2);
if (r == 0)
if (ibl == imax)
jbl = jbl +1;
jinc = 4;
else
ibl = ibl +1;
iinc = 4;
if (jbl > 1)
jbl = jbl-1;
jinc = -4;
end
end
else
if (jbl == jmax)
ibl = ibl + 1;
iinc = 4;
else
jbl = jbl+1;
jinc = 4;
if (ibl >1)
ibl = ibl-1;
iinc = -4;
end
end
end
blk = iinc*ones(16,16);
Xia = Xia + blk;
blk = jinc*ones(16,16);
Xja = Xja + blk;
clear blk
qi = qi + iinc;
qj = qj + jinc;
end


A.3 Filbank.m
% This Matlab program loads a filter bank, filters the image with
% each filter, runs the filtered images through the RCG adaptive
% filter, and recombines the filtered images.
% The program is designed for a specific image and bank of filters. The size of
% the image and the filters is known beforehand.
% Load the filter bank,
load msfilbn2
% Filter the image and run the RCG program for each filtered image.
Ff = conv2(lp1,Fn1);
Ff = Ff(31:430,31:330);
y1 = rcg(Ff);
Ff = conv2(bp1,Fn1);
Ff = Ff(31:430,31:330);
y2 = rcg(Ff);
Ff = conv2(bp2,Fn1);
Ff = Ff(31:430,31:330);
y3 = rcg(Ff);
Ff = conv2(bp3,Fn1);
Ff = Ff(31:430,31:330);
y4 = rcg(Ff);
Ff = conv2(hp,Fn1);
Ff = Ff(31:430,31:330);
y5 = rcg(Ff);
% Combine the filtered images into a final image,
yf = y1+y2+y3+y4+y5;


A.4 Segment.m
% This program segments the image and processes each segment
% with the RCG adaptive filter. The program then recombines the
% segments into a processed image. The segments overlap to eliminate
% lines between the segments.
% The input to the program is the image corrupted by noise.
% The output of the program is the recombined image.
% The segments are determined manually and fit one particular image.
function yf = segscg(Ff)
% Segment the image,
f1 = Ff(1:30,:);
f2 = Ff(25:90,1:70);
f3 = Ff(25:90,65:115);
f4 = Ff(25:90,110:160);
f5 = Ff(25:90,155:200);
f6 = Ff(25:90,195:260);
f7 = Ff(85:176,1:80);
f8 = Ff(85:176,75:200);
f9 = Ff(85:176,195:260);
% Run the RCG adaptive filter for each segment.
ys1 = scglb(f1);
ys2 = scglb(f2);
ys3 = scglb(f3);
ys4 = scglb(f4);
ys5 = scglb(f5);
ys6 = scglb(f6);
ys7 = scglb(f7);
ys8 = scglb(f8);
ys9 = scglb(f9);
% Resize the segments to fit the size of the original image,
ys1 = ys1(1:29,:);
ys2 = ys2(6:65,1:69);
ys3 = ys3(6:65,6:50);
ys4 = ys4(6:65,6:50);
ys5 = ys5(6:65,6:45);
ys6 = ys6(6:65,6:66);
ys7 = ys7(6:92,1:79);
ys8 = ys8(6:92,6:125);
ys9 = ys9(6:92,6:66);
% Recombine the segments,
ysf1 = [ys2 ys3 ys4 ys5 ys6];
ysf2 = [ys7 ys8 ys9];
yf = [ys1;ysf1;ysf2];


References
[1] P.M. Clarkson, Optimal and Adaptive Signal Processing, Boca Raton:
CRC, 1993.
[2] S. Haykin, Introduction to Adaptive Filters, New York: Macmillan, 1984.
[3] K. Ogata, Discrete-Time Control Systems, Englewood Cliffs: Prentice Hall,
1987.
[4] K.S. Joo and T. Bose, "A Fast Conjugate Gradient Algorithm for 2-D
Nonlinear Adaptive Filtering," Proc. of the International Conference on
Digital Signal Processing, pp.314-319, June 1995 (invited paper).
[5] J.S. Lim, Two-Dimensional Signal and Image Processing, Englewood
Cliffs: Prentice Hall, 1990.
[6] M.M. Hadhoud and D.W. Thomas, "The Two-Dimensional Adaptive LMS
(TDLMS) Algorithm," IEEE Transactions on Circuits and Systems, vol. 35,
no. 5, pp. 485-494, May 1988.
[7] W.B. Mikhael and S.M. Ghosh, "Two-Dimensional Block Adaptive
Filtering Algorithms with Optimum Convergence Factors," IEEE
Transactions on Circuits and Systems-II: Analog and Digital Signal
Processing, vol. 42, no. 8, pp. 505-515, August 1995.
[8] M.R. Azimi-Sadjadi and H. Pan, "Two-Dimensional Block Diagonal LMS
Adaptive Filtering," IEEE Transactions on Signal Processing, vol. 42, no.
9, pp. 2422-2429, September 1994.
[9] M.R. Azimi-Sadjadi and R.A. King, "Two-Dimensional Block Processors -
Structures and Implementations," IEEE Transactions on Circuits and
Systems, vol. CAS-33, no. 1, pp. 42-50, January 1986.
[10] G.K. Boray and M.D. Srinath, "Conjugate Gradient Techniques for
Adaptive Filtering," IEEE Transactions on Circuits and Systems I:


Fundamental Theory and Applications, vol. 39, no. 1, pp. 1-10, January
1992.
[11] K.S. Joo, T. Bose, and J.W. Hall, Jr., "2-D Nonlinear Adaptive Filtering for
Image Restoration," Proc. of the IEEE Workshop on Nonlinear Signal and
Image Processing, pp. 947-950, June 1995 (invited paper).