
Material Information

Title:
Wavelet domain adaptive filtering in signal processing
Creator:
Chang, Su-Wei
Publication Date:
Language:
English
Physical Description:
vii, 94 leaves : illustrations ; 28 cm

Subjects

Subjects / Keywords:
Signal processing ( lcsh )
Adaptive filters ( lcsh )
Wavelets (Mathematics) ( lcsh )
Adaptive filters ( fast )
Signal processing ( fast )
Wavelets (Mathematics) ( fast )
Genre:
bibliography ( marcgt )
theses ( marcgt )
non-fiction ( marcgt )

Notes

Bibliography:
Includes bibliographical references (leaf 94).
Statement of Responsibility:
by Su-Wei Chang.

Record Information

Source Institution:
University of Florida
Rights Management:
All applicable rights reserved by the source institution and holding location.
Resource Identifier:
42611475 ( OCLC )
ocm42611475
Classification:
LD1190.E54 1999m .C43 ( lcc )

Full Text
WAVELET DOMAIN ADAPTIVE FILTERING
IN SIGNAL PROCESSING
by
Su-Wei Chang
B.S.E.E., University of Colorado, 1994
A thesis submitted to the
University of Colorado at Denver
in partial fulfillment
of the requirements for the degree of
Master of Science
Electrical Engineering
1999


This thesis for the Master of Science
degree by
Su-Wei Chang
has been approved
by
Jan Bialasiewicz
Miloje Radenkovic
DATE


Chang, Su-Wei (M. S. Electrical Engineering)
Wavelet Domain Adaptive Filtering in Signal Processing
Thesis Directed by Associate Professor Tamal Bose
ABSTRACT
The standard time domain least-mean-square (LMS) algorithm is a widely
used adaptive finite impulse response (FIR) filtering algorithm. This
thesis presents and studies a recently developed wavelet domain LMS adaptive FIR
filtering algorithm. The proposed algorithm uses the inverse of the Cholesky factor
in the wavelet domain to pre-whiten the input. The pre-whitened input has a lower
eigenvalue spread than the time domain input. Therefore, the convergence rate of the
LMS algorithm is improved. This thesis shows that the wavelet domain LMS
algorithm can be used to improve the convergence rate for the system identification
problem and for the adaptive differential pulse code modulation (ADPCM)
coding/compression method, as compared to the standard time domain LMS FIR
filtering algorithm.
This abstract accurately presents the content of the candidate's thesis. I recommend
its publication.
Signed
Tamal Bose


CONTENTS
Figures.......................................................................v
Tables......................................................................vii
Chapter
1. Introduction...............................................................1
2. Wavelet Transform and Filter Banks.........................................3
2.1 Wavelet Transform...................................................... 3
2.2 Wavelet Filter Banks....................................................6
2.2.1 Two-Band Wavelet Filter Banks..........................................10
Perfect Reconstruction..............................................10
Haar Wavelet Filter Banks.............................................12
Daubechies Compactly Supported Wavelets Filter Banks..................13
2.2.2 M-Band Regular Perfect Reconstruction Filter Banks.....................14
3. FIR Adaptive Filtering Using Least Mean Square (LMS) Algorithm...........15
3.1 Wiener Filter..........................................................15
3.2 Steepest Descent Algorithm.............................................18
3.3 LMS Algorithm..........................................................21
3.3.1 Standard Time Domain LMS Algorithm.....................................21
3.3.2 Self-Orthogonalizing Adaptive LMS Filtering Algorithm..................23
3.3.3 Frequency Domain LMS Algorithm.........................................24
3.3.4 Wavelet Domain LMS Algorithm...........................................25
4. System Identification.....................................................29
4.1 System Identification Using the Time Domain LMS Filtering Algorithm...29
4.2 System Identification Using the Wavelet Domain LMS Filtering Algorithm 39
4.3 Comparisons and Discussions............................................49
5. Adaptive Differential Pulse Code Modulation (ADPCM).......................51
5.1 ADPCM Using the Time Domain LMS Filtering Algorithm....................55
5.2 ADPCM Using the Wavelet Domain LMS Filtering Algorithm.................59
5.3 Comparisons and Discussions............................................62
6. Conclusions...............................................................64
Appendix.....................................................................66
References...................................................................94


FIGURES
Figure
2-1 Basis Functions and Time-Frequency Resolution of the STFT..................5
2-2 Basis Functions and Time-Frequency Resolution of the WT....................6
2-3 Generic Two-Band Wavelet Analysis Filter Bank..............................7
2-4 Example of Orthonormal Wavelet Bases.......................................9
2-5 Two-Band Perfect Reconstruction Wavelet Filter Banks......................10
2-6 Two-Band Perfect Reconstruction Filter Coefficients.......................12
2-7 M-Band Perfect Reconstruction Filter Banks................................14
3-1 Block Diagram of a Generic Statistical Filtering Problem..................15
3-2 Block Diagram of a Generic Steepest Descent FIR Filtering Algorithm.......20
3-3 Block Diagram of a General LMS FIR Filtering Algorithm....................23
3-4 Block Diagram of a Wavelet Domain LMS FIR Adaptive Filter.................25
4-1 Block Diagram of a System Identification Model Using Time Domain LMS
Filtering Algorithm.......................................................30
4-2 The Desired Output d(n) vs. the Filter Output y(n) (Time Domain LMS
Algorithm, μ = 0.05)......................................................36
4-3 The Estimated Error (Time Domain LMS Algorithm, μ = 0.05).................36
4-4 Adaptive Filter Coefficients h0, h1, h2, and h3...........................37
4-5 Adaptive Filter Coefficients h4, h5, h6, and h7...........................37
4-6 Adaptive Filter Coefficients h8, h9, h10, and h11.........................38
4-7 Adaptive Filter Coefficients h12, h13, h14, and h15.......................38
4-8 Adaptive Filter Impulse Response After 500 Iterations (Time Domain LMS
Algorithm, μ = 0.05)......................................................39
4-9 Block Diagram of a System Identification Model Using Wavelet Domain LMS
Filtering Algorithm.......................................................40
4-10 The Desired Output d(n) vs. the Filter Output y(n) (Wavelet Domain LMS
Algorithm, μ = 0.05)......................................................45
4-11 The Estimated Error (Wavelet Domain LMS Algorithm, μ = 0.05).............45
4-12 Wavelet Domain Filter Impulse Response After 500 Iterations (Wavelet
Domain LMS Algorithm, μ = 0.05)...........................................46


4-13 Time Domain Filter Impulse Response After 500 Iterations (Wavelet Domain
LMS Algorithm, μ = 0.05)............................................46
4-14 The Desired Output d(n) vs. the Filter Output y(n) (Wavelet Domain LMS
Algorithm, μ = 0.3).................................................47
4-15 The Estimated Error (Wavelet Domain LMS Algorithm, μ = 0.3)..............47
4-16 Wavelet Domain Filter Impulse Response After 500 Iterations (Wavelet
Domain LMS Algorithm, μ = 0.3)......................................48
4-17 Time Domain Filter Impulse Response After 500 Iterations (Wavelet Domain
LMS Algorithm, μ = 0.3).............................................48
4-18 Mean-Squared Error of an Ensemble of 100 Runs (μ = 0.05).................50
4-19 Mean-Squared Error of an Ensemble of 100 Runs (Time Domain LMS μ = 0.05,
Wavelet Domain μ = 0.3).............................................50
5-1 Generic PCM Encoder......................................................51
5-2a Generic ADPCM Encoder.................................................53
5-2b Generic ADPCM Decoder.................................................53
5-3 Generic Time Domain ADPCM System.......................................55
5-4 Time Domain ADPCM Applied to a Sine Wave.............................58
5-5 Time Domain ADPCM Applied to a Sine Wave with a Sudden Change........59
5-6 Generic Wavelet Domain ADPCM System....................................59
5-7 Wavelet Domain ADPCM Applied to a Sine Wave............................61
5-8 Wavelet Domain ADPCM Applied to a Sine Wave with a Sudden Change.... 62
5-9 MSE of an Ensemble of 50 Runs (Input is a Sine Wave).................63
5-10 MSE of an Ensemble of 50 Runs (Sine Wave with a Sudden Change).......63


TABLES
Table
2-1 The h0 Coefficients of Daubechies Compactly Supported Wavelets.............13
5-1 Jayant Quantizer...........................................................54


1. Introduction
Adaptive filtering algorithms may be classified into two groups: linear
adaptive filtering and nonlinear adaptive filtering. Simply described, a filter is
considered linear if the filter output varies linearly as a function of the filter input;
otherwise, the filter is nonlinear. Some of the common linear adaptive filtering
algorithms are the steepest descent algorithm, the least-mean-square (LMS)
algorithm, the recursive least-squares (RLS) algorithm, and the square root (QR)
adaptive algorithm. Some of the common nonlinear adaptive filtering algorithms
include blind deconvolution, the back-propagation learning network, and the
radial basis function network. Detailed descriptions of the adaptive filtering
algorithms mentioned above can be found in [2].
A recently developed wavelet domain LMS adaptive FIR filtering algorithm
[1] shows that the autocorrelation matrix of the wavelet domain FIR filter input has a
special sparse structure. The correlation matrices of the details (see Chapter
2 for the definition of details) of the wavelet domain FIR filter input also have
special sparse structures. From [1], the autocorrelation matrix of the filter input can
be estimated and updated at each recursive step using the correlation matrices of the
input details to reduce the computational complexity of the algorithm.


The purpose of this thesis is to study, simulate, and demonstrate that the
wavelet domain LMS adaptive FIR filtering algorithm provides a convergence rate
improvement over the time domain LMS adaptive FIR filtering algorithm. This
thesis also shows that the wavelet domain algorithm can be applied to the system
identification problem and to the ADPCM coding/compression method.
Chapter 2 provides an introduction to the theory of the wavelet transform
and filter banks. The chapter focuses on the details of two-band perfect
reconstruction filter bank theory, which is used in the simulations of subsequent
chapters. M-Band perfect reconstruction filter bank theory is discussed briefly
in this chapter to introduce the reader to alternate techniques.
Chapter 3 presents the derivation of the LMS FIR filtering algorithm from the
steepest descent algorithm. Time domain, frequency domain, and wavelet domain
LMS FIR filtering algorithms are included.
Chapters 4 and 5 describe two applications of the LMS adaptive algorithms:
system identification and ADPCM. Simulations of these applications using time
domain and wavelet domain LMS adaptive filtering algorithms are also presented.
The thesis concludes with Chapter 6, which summarizes and compares the
results of the simulations. Also, advantages and disadvantages of the time domain
and wavelet domain LMS FIR filtering algorithms are presented.


2. Wavelet Transform and Filter Banks
An overview comparing the Fourier transform and the wavelet
transform is given in section 2.1. The traditional Fourier transform bases come from
differential equations, which do not have a dilation equation. Wavelets include the
scaling function, which is the solution of the dilation equation. The dilation equation
involves two time scales; therefore, the solution of a dilation equation, φ(t), is
finitely bounded. The scaling function φ(t) is nonzero in a finite interval, and
therefore φ(2t) is zero outside of half of that interval. This bounded
characteristic causes the wavelet bases to be localized. To describe the
relationships between the low pass filter h0 and the scaling function, and the high
pass filter h1 and the wavelets, section 2.2 shows the dilation and wavelet equations
and the derivation of the solutions to these equations.
2.1 Wavelet Transform
The Fourier transform is a traditional way of presenting a signal in the
frequency domain. The standard Fourier transform of a continuous time signal x(t) is

X(ω) = (1/√(2π)) ∫_{−∞}^{∞} x(t) e^{−iωt} dt.    (2.1)


The Fourier transform described above works fine for stationary signals (e.g.,
sinusoidal waves), where the frequency content of the signal does not vary with time.
However, in many applications (e.g., audio and video signal processing), the signals
are nonstationary and one is interested in obtaining localized frequency information in a
specific period of time. Another difficulty with the standard Fourier transform, even
with a stationary signal, is when there is a sudden change in the signal (e.g., noise
transients). The result is seen as broadband energy over the entire frequency axis
because of the infinite extent of the basis functions [3]. To overcome these
shortcomings of the standard Fourier transform, the windowed or short-time Fourier
transform (STFT) was developed. The STFT obtains the time-localized interval of
the x(t)-of-interest and takes the Fourier transform of it as

X_window(ω, τ) = ∫_{−∞}^{∞} x(t) g(t − τ) e^{−iωt} dt.    (2.2)

Another alternative is to use the wavelet transform to localize the time-frequency
characteristics of a signal. The wavelet transform of x(t) is

X_WT(a, b) = (1/√|a|) ∫_{−∞}^{∞} x(t) ψ((t − b)/a) dt,    (2.3)

where a is the scaling parameter. Small values of the absolute value of the scaling
parameter |a| represent high frequencies and large values of |a| represent low
frequencies. The time localization is controlled by the translation parameter b. The
functions ψ_{a,b} are the wavelets and the function ψ is the mother wavelet.
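The wavelet transform of equation (2.3) can be evaluated numerically as a Riemann sum. The Python sketch below (the Haar mother wavelet, the sampling grid, and the constant test signal are illustrative choices, not taken from the thesis) shows that a zero-frequency signal transforms to zero, because every mother wavelet has zero mean:

```python
import numpy as np

def haar_mother(t):
    # Haar mother wavelet psi(t): +1 on [0, 0.5), -1 on [0.5, 1), 0 elsewhere
    return (np.where((t >= 0) & (t < 0.5), 1.0, 0.0)
            - np.where((t >= 0.5) & (t < 1.0), 1.0, 0.0))

def wavelet_transform(x, t, a, b):
    # Riemann-sum approximation of (1/sqrt|a|) * integral x(t) psi((t-b)/a) dt
    dt = t[1] - t[0]
    return np.sum(x * haar_mother((t - b) / a)) * dt / np.sqrt(abs(a))

t = np.linspace(0.0, 1.0, 1000, endpoint=False)
x = np.ones_like(t)                              # a constant (stationary) signal
print(wavelet_transform(x, t, a=1.0, b=0.0))     # -> 0.0
```

Varying a and b scans the time-frequency plane: small |a| probes short, high frequency features near t = b, and large |a| probes long, low frequency ones.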


Figure 2-1 shows a comparison between the windowed Fourier transform
and the wavelet transform [4]. According to Figure 2-1, the windowed Fourier
transform provides mainly frequency information of the signal, since the time-axis
bandwidth is constant and only limited time domain information is available [3].
Figure 2-2 shows that for the wavelet transform, a higher frequency corresponds to a
wider bandwidth along the frequency axis but a narrower bandwidth along the time
axis. A lower frequency corresponds to a narrower bandwidth along the frequency
axis but a wider bandwidth along the time axis [3]. Therefore, the wavelet transform
(WT) is more capable of presenting the multiresolution time-frequency
characteristics of a signal than the STFT.

Figure 2-1 Basis Functions and Time-Frequency Resolution of the STFT


Figure 2-2 Basis Functions and Time-Frequency Resolution of the WT
2.2 Wavelet Filter Banks
Filter banks are at the heart of the wavelet domain algorithm simulations. The signal is
decomposed, or transformed into the wavelet domain, using the analysis filter bank and
reconstructed using the synthesis filter bank. Figure 2-3 shows the block diagram of
a generic two-band analysis filter bank. In Figure 2-3, H0 represents the low pass
filter and H1 represents the high pass filter in the analysis bank [7]. The output
signal of each filtering stage is downsampled by a factor of two.


Figure 2-3 Generic Two-Band Wavelet Analysis Filter Bank
From Figure 2-3, the outputs of the filters (a3, d3, a2, d2, a1, d1, etc.) are
called the details of the input signal x(t). To describe the relations between the
highpass and lowpass filters in Figure 2-3 and the scaling and wavelet functions, the
dilation equation (equation 2.4) and the wavelet equation (equation 2.5) are
introduced as
φ(t) = 2 Σ_{k=0}^{N} h0(k) φ(2t − k), and    (2.4)

ψ(t) = 2 Σ_{k=0}^{N} h1(k) φ(2t − k).    (2.5)
The dilation equation can be solved by taking the continuous-time Fourier
transform of both sides of the equation [5]. The results (equations 2.6-2.9) show that
the solution of the dilation equation is a product of cascaded filters.


Φ(ω) = H0(ω/2) Φ(ω/2)    (2.6)

Φ(ω) = H0(ω/2) H0(ω/4) Φ(ω/4)    (2.7)

Φ(ω) = H0(ω/2) H0(ω/4) H0(ω/8) Φ(ω/8)    (2.8)

Φ(ω) = [ ∏_{j=1}^{∞} H0(ω/2^j) ] Φ(0)    (2.9)
To obtain φ(t), perform the inverse Fourier transform of Φ(ω) in
equation (2.9). In general, continued filtering and downsampling of x(t) until the
output of the low pass filter is out of resolution will lead to the scaling function
φ(t). The scaling function depends on the values of the low pass filter
coefficients. The wavelet equation (equation 2.5) can be solved after obtaining
φ(t). Figure 2-4 shows some examples of orthonormal wavelet bases [4]. A
wavelet basis is orthonormal when the scaling and translation parameters are
non-overlapping. The finite-interval-bounded characteristics of φ(t) and ψ(t) are
shown clearly in Figure 2-4 [4].
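The cascaded-filter solution (2.9) can be checked numerically by truncating the infinite product. A minimal Python sketch, using the Haar low pass filter as an illustrative example, normalized so that H0(0) = Σ_k h0(k) = 1 as the dilation equation requires:

```python
import numpy as np

def H0(w, h):
    # frequency response H0(w) = sum_k h(k) e^{-iwk}
    return sum(hk * np.exp(-1j * w * k) for k, hk in enumerate(h))

def Phi(w, h, terms=40):
    # truncated product of equation (2.9), assuming Phi(0) = 1
    out = 1.0 + 0.0j
    for j in range(1, terms + 1):
        out *= H0(w / 2.0**j, h)
    return out

h_haar = [0.5, 0.5]                 # Haar low pass filter with H0(0) = 1
print(abs(Phi(0.0, h_haar)))        # -> 1.0
print(abs(Phi(np.pi, h_haar)))      # -> 2/pi, since the Haar |Phi(w)| = |sin(w/2)/(w/2)|
```

Inverse transforming Φ(ω) sampled on a fine frequency grid would then recover an approximation of the scaling function φ(t) itself.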


[Figure 2-4 plots example orthonormal scaling functions and wavelets in panels (a) through (e), including Battle-Lemarié, Meyer, and Haar bases; reproduced from [4].]
Figure 2-4 Example of Orthonormal Wavelet Bases


2.2.1 Two-Band Wavelet Filter Banks
Two-band wavelet filter banks are used for the simulations in this thesis because of their
simplicity. The Haar filter bank is used for the system identification problem in
Chapter 4 to verify the convergence rate improvement of the wavelet domain LMS
FIR filtering algorithm. The Haar basis function is orthonormal, but it is not
continuous; therefore, the Haar basis is in general not adequate for signal processing
[3]. The Daubechies compactly supported wavelet filter banks have continuous
basis functions and are widely used. Daubechies compactly supported wavelet
filters of length three and six were used for the ADPCM simulations in this thesis.
2.2.1.1 Perfect Reconstruction
Figure 2-5 shows a block diagram of a generic two-band perfect
reconstruction filter bank [6]. In Figure 2-5, the gap between the analysis bank
and the synthesis bank indicates that the downsampled signal can be transmitted,
compressed, coded, or stored with a lower computational complexity. The signal is
subsequently reconstructed using the synthesis filter bank.
Figure 2-5 Two-Band Perfect Reconstruction Wavelet Filter Banks


Two crucial requirements make perfect reconstruction possible: the alias
cancellation condition (equation 2.10) and the no distortion condition (equation
2.11), given by

F0(z) H0(−z) + F1(z) H1(−z) = 0,    (2.10)

F0(z) H0(z) + F1(z) H1(z) = 2z^{−l}.    (2.11)

The designs for the filters H0(z), H1(z), F0(z), and F1(z) are as follows:
1) To satisfy equation (2.10), choose F0(z) = H1(−z) and F1(z) = −H0(−z).
2) Let P0(z) = F0(z) H0(z). P0(z) is a low pass filter since both F0(z) and
H0(z) are low pass filters.
3) Let P1(z) = F1(z) H1(z). Substituting the values of F0(z) and F1(z) from step 1),
P1(z) = −H0(−z) H1(z) = −F0(−z) H0(−z) = −P0(−z).
4) Now equation (2.11) is simplified to

P0(z) − P0(−z) = 2z^{−l}.    (2.12)

5) Design a low pass filter P0(z) that satisfies equation (2.12) and use step 1) to
derive the other filters. Figure 2-6 shows the relations between the filter coefficients.
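The alias cancellation and no distortion conditions can be verified numerically, since multiplying transfer functions corresponds to convolving coefficient sequences. A short Python check, using the standard Haar filters as an illustrative example of the step 1) design rules:

```python
import numpy as np

s = 1 / np.sqrt(2)
h0, h1 = np.array([s, s]), np.array([s, -s])    # Haar analysis filters
f0, f1 = np.array([s, s]), np.array([-s, s])    # F0(z) = H1(-z), F1(z) = -H0(-z)

def alternate(h):
    # coefficients of H(-z): negate the odd powers of z^{-1}
    return h * (-1.0) ** np.arange(len(h))

# alias cancellation, equation (2.10): F0(z)H0(-z) + F1(z)H1(-z) = 0
alias = np.convolve(f0, alternate(h0)) + np.convolve(f1, alternate(h1))
print(np.round(alias, 10))     # -> [0. 0. 0.]

# no distortion, equation (2.11): F0(z)H0(z) + F1(z)H1(z) = 2 z^{-1}
dist = np.convolve(f0, h0) + np.convolve(f1, h1)
print(np.round(dist, 10))      # -> [0. 2. 0.], i.e., 2 z^{-1} with delay l = 1
```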


Figure 2-6 Two-Band Perfect Reconstruction Filter Coefficients
2.2.1.2 Haar Wavelet Filter Banks
The Haar filter banks consist of four simple filters: the low pass (equation
2.13) and high pass (equation 2.14) filters in the analysis filter bank, and the low pass
(equation 2.15) and high pass (equation 2.16) filters in the synthesis bank.

h0 = [1/√2, 1/√2]    (2.13)

h1 = [1/√2, −1/√2]    (2.14)

f0 = [1/√2, 1/√2]    (2.15)

f1 = [−1/√2, 1/√2]    (2.16)
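As a sketch of how these four Haar filters act in the structure of Figure 2-5, the following Python example (an illustrative implementation using NumPy convolution with simple keep-every-other-sample downsampling and zero-insertion upsampling) reconstructs a test signal exactly, up to the one-sample delay of equation (2.11):

```python
import numpy as np

s = 1 / np.sqrt(2)
h0, h1 = np.array([s, s]), np.array([s, -s])     # analysis filters (2.13), (2.14)
f0, f1 = np.array([s, s]), np.array([-s, s])     # synthesis filters (2.15), (2.16)

def analysis(x):
    # filter, then downsample by 2 (keep every other sample)
    return np.convolve(x, h0)[::2], np.convolve(x, h1)[::2]

def synthesis(a, d, n):
    # upsample by 2 (insert zeros), filter, and sum the two branches
    ua = np.zeros(2 * len(a)); ua[::2] = a
    ud = np.zeros(2 * len(d)); ud[::2] = d
    y = np.convolve(ua, f0) + np.convolve(ud, f1)
    return y[1:n + 1]    # compensate the one-sample reconstruction delay

x = np.array([1.0, 2.0, 3.0, 4.0, 5.0, 6.0])
a, d = analysis(x)
print(synthesis(a, d, len(x)))    # -> [1. 2. 3. 4. 5. 6.]
```

The branch a holds the low pass approximation and d the detail, matching the a/d labeling of Figure 2-3.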


2.2.1.3 Daubechies Compactly Supported
Wavelets Filter Banks
The h0 (low pass filter) coefficients of the Daubechies compactly supported
wavelet filter banks are shown in Table 2-1 [4]. The filter coefficients for h1, f0, and f1
can be derived using Figure 2-6.
Table 2-1 The h0 (Low Pass) Coefficients of Daubechies Compactly Supported
Wavelets

N = 2   0    .4829629131445341
        1    .8365163037378077
        2    .2241438680420134
        3   −.1294095225512604
N = 3   0    .3326705529500825
        1    .8068915093110924
        2    .4598775021184914
        3   −.1350110200102546
        4   −.0854412738820267
        5    .0352262918857095
(The h0 coefficients for N = 4 through N = 10 are tabulated in [4].)


2.2.2 M-Band Regular Perfect Reconstruction
Filter Banks
Section 2.1 shows that the two-band wavelet transform works well for representing
the localized time-frequency characterizations of transients in a low frequency
signal. However, a long-duration high frequency signal (wide bandwidth along the
frequency axis and narrow bandwidth along the time axis) is not well decomposed
using the two-band wavelet transform described in the previous sections. M-Band
wavelets provide the capability of zooming in on narrow-bandwidth (along the time
axis) high frequency components of a signal.
Figure 2-7 shows a generic M-Band wavelet filter bank [8]. The design of M-Band
wavelet filters for perfect reconstruction is not as straightforward as for the two-band
wavelet filter bank. Please refer to [8] for details of M-Band K-regular perfect
reconstruction filter banks.
Figure 2-7 M-Band Perfect Reconstruction Filter Banks


3. FIR Adaptive Filtering Using Least Mean
Square (LMS) Algorithm
The LMS algorithm is a widely used linear adaptive filtering algorithm. In
order to understand the derivation of the LMS algorithm, the Wiener filter and the
steepest descent algorithm are introduced in sections 3.1 and 3.2. Section 3.3
describes the time domain LMS algorithm, the frequency domain LMS algorithm,
and the wavelet domain LMS algorithm. Please note that only real-valued systems
are considered in this thesis. Therefore, the conjugate parts are eliminated from the
equations. In order to distinguish between scalar, vector and matrix variables, the
vector or matrix variables will be presented in boldface characters.
3.1 Wiener Filter
Figure 3-1 shows the block diagram of a general statistical filtering problem
[2]. The purpose of the linear discrete filter is to estimate the desired response
d(n).
[Figure 3-1 shows the input samples u(0), u(1), u(2), …, the linear discrete-time filter with coefficients w0, w1, w2, …, its output y(n), the desired response d(n), and the estimation error e(n) at time n.]
Figure 3-1 Block Diagram of a Generic Statistical Filtering Problem


The filter output y(n) and the estimated error e(n) of the system shown in Figure
3-1 are defined as

y(n) = Σ_{k=0}^{∞} w_k u(n − k), and    (3.1)

e(n) = d(n) − y(n).    (3.2)
For the optimum filter design shown in Figure 3-1, the mean-square error of the
system needs to be minimized. The cost function (J) is the function that needs to be
minimized for the system to reach the optimum solution. Therefore, the cost
function of the system shown in Figure 3-1 can be defined as the mean-square error
of the system. Equation (3.3) shows the definition of the mean-square error of the
system, where E[·] is the expectation operator:

J = E[|e(n)|^2].    (3.3)
The optimum solution of the problem shown in Figure 3-1 occurs when the output of
the linear filter, y(n), becomes very close to the desired response d(n); in other
words, the minimum mean-squared error of the system is obtained. Let y_o(n)
be the output of the optimized filter w_o. The minimum mean-squared error J_min
is defined by

y_o(n) = Σ_{k=0}^{∞} w_ok u(n − k),    (3.4)

e_o(n) = d(n) − y_o(n), and    (3.5)

J_min = E[|e_o(n)|^2],    (3.6)

where e_o(n) is the optimum estimation error and y_o(n) is the optimum filter
output.
The principle of orthogonality may be summarized as follows: "The necessary and
sufficient condition for the cost function J to attain its minimum value is that the
corresponding value of the estimated error e_o(n) is orthogonal to each input sample
that enters into the estimation of the desired response at time n" [2]. The principle
of orthogonality provides a method to verify that the optimum filter design is
reached. Equation (3.7) states the principle of orthogonality:

E[u(n − k) e_o(n)] = 0, k = 0, 1, 2, ….    (3.7)
Substituting equations (3.4) and (3.5) into equation (3.7), the Wiener-Hopf equations
are

E[u(n − k)(d(n) − Σ_{i=0}^{∞} w_oi u(n − i))] = 0, k = 0, 1, 2, ….    (3.8)

Rearranging equation (3.8) yields

Σ_{i=0}^{∞} w_oi E[u(n − k) u(n − i)] = E[u(n − k) d(n)], k = 0, 1, 2, ….    (3.9)
Let R be the autocorrelation matrix of the filter input and p be the cross-correlation
vector of the filter input and the desired response, where R and p are defined as

R = E[u(n) u^T(n)], and    (3.10)

p = E[u(n) d(n)].    (3.11)

Combining equations (3.9), (3.10), and (3.11), the Wiener-Hopf equations are
reduced to the compact matrix form

R w_o = p.    (3.12)

The solution to the Wiener-Hopf equations is

w_o = R^{−1} p.    (3.13)
Please note that the filter input of the Wiener filter is assumed to be zero mean. The
solution shown in equation (3.13) is straightforward, but it is computationally
expensive, especially for a large number of tap weights (the inverse of R has to be
computed). Another way to obtain the minimum mean-squared error of the system is
to use the steepest descent algorithm.
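A minimal numerical sketch of the Wiener solution (3.13): estimate R and p from data for a hypothetical 4-tap FIR system driven by white noise (the system coefficients and input are illustrative assumptions, not from the thesis), then solve R w_o = p:

```python
import numpy as np

rng = np.random.default_rng(0)
M = 4
w_true = np.array([0.8, -0.5, 0.3, 0.1])       # hypothetical unknown system
u = rng.standard_normal(5000)                  # zero mean white input
d = np.convolve(u, w_true)[:len(u)]            # desired response d(n)

# tap-input vectors u(n) = [u(n), u(n-1), ..., u(n-M+1)]
U = np.array([u[n - M + 1:n + 1][::-1] for n in range(M - 1, len(u))])
D = d[M - 1:]

R = U.T @ U / len(U)         # sample estimate of E[u(n) u^T(n)], equation (3.10)
p = U.T @ D / len(U)         # sample estimate of E[u(n) d(n)],   equation (3.11)
w_o = np.linalg.solve(R, p)  # Wiener solution w_o = R^{-1} p,    equation (3.13)
print(np.round(w_o, 2))      # -> recovers w_true
```

Solving the linear system directly is exactly the expensive step the text refers to; the algorithms that follow avoid forming R^{-1}.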
3.2 Steepest Descent Algorithm
The method of steepest descent is an iterative gradient algorithm. The step
size can either be chosen to accomplish the maximum decrease of the cost function
in each iterative step or can be chosen to be a constant (fixed step size) [10]. For the
purposes of this thesis, the fixed step-size algorithm will be used. This algorithm uses
the negative gradient of the cost function as the direction vector to reach the


minimum point of the cost function. The algorithm starts with some initial condition
assumptions, computes the gradient vector using the initial conditions, and updates
filter coefficients based on the negative gradient as the direction vector. The
recursive process continues until the minimum of the cost function is reached.
The cost function J of the system shown in Figure 3-1 can be expressed as a
function of the filter tap weights. Rearranging equation (3.3), using equations (3.1)
and (3.2), yields [9]

J(w) = E[|e(n)|^2]
     = E[e(n) e(n)]
     = E[d(n) d(n)] − 2 E[d(n) Σ_{k=0}^{M−1} w_k u(n − k)] + E[Σ_{k=0}^{M−1} Σ_{i=0}^{M−1} w_k w_i u(n − k) u(n − i)]
     = σ_d^2 − 2 Σ_{k=0}^{M−1} w_k E[d(n) u(n − k)] + Σ_{k=0}^{M−1} Σ_{i=0}^{M−1} w_k w_i E[u(n − k) u(n − i)]
     = σ_d^2 − 2 p^T w(n) + w^T(n) R w(n),    (3.14)
where σ_d^2 is the variance of the desired response, assuming that the desired
response d(n) is zero mean. The minimum mean-squared error of the system, or the
minimum point of the cost function J, can be determined by obtaining the gradient
of J and setting it to zero. From equation (3.14), the gradient of J is

∇J(n) = −2p + 2R w(n).    (3.15)

Setting ∇J(n) = 0 to find the minimum point of J(n),

∇J(n) = −2p + 2R w_o(n) = 0, and    (3.16)


w_o = R^{−1} p.    (3.17)
This yields the same solution found in equation (3.13), which is the solution of the
Wiener-Hopf equations based on the principle of orthogonality.
Figure 3-2 shows a generic block diagram of the steepest descent FIR filtering
algorithm.
Figure 3-2 Block Diagram of a Generic Steepest Descent FIR Filtering Algorithm
Equation (3.19) is the update equation of the algorithm shown in Figure 3-2, where
μ is the fixed step size of the algorithm. The update equation is used to update the
filter coefficients at each subsequent recursive step of the algorithm.

w(n+1) = w(n) + ½ μ (−∇J(n))    (3.18)

w(n+1) = w(n) + μ (p − R w(n)), n = 0, 1, 2, ….    (3.19)
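The fixed step-size recursion (3.19) can be sketched on a small example with R and p assumed known (the 2-tap values below are hypothetical, chosen only so the iteration is easy to follow):

```python
import numpy as np

# Steepest descent on a known quadratic cost, equation (3.19).
R = np.array([[1.0, 0.5],
              [0.5, 1.0]])      # hypothetical input autocorrelation matrix
p = np.array([0.7, 0.3])        # hypothetical cross-correlation vector
w = np.zeros(2)                 # initial condition w(0) = 0
mu = 0.5                        # must satisfy 0 < mu < 2 / lambda_max = 2 / 1.5
for n in range(200):
    w = w + mu * (p - R @ w)    # update equation (3.19)
print(np.round(w, 4))           # -> [ 0.7333 -0.0667], i.e., R^{-1} p as in (3.17)
```

Each iteration moves w along the negative gradient −∇J(n)/2 = p − R w(n), so the fixed point of the recursion is exactly the Wiener solution.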


3.3 LMS Algorithm
To proceed with the steepest descent algorithm described in section 3.2, the
statistical characteristics of the filter input u(n) and the desired response d(n) have
to be known to determine R and p. In reality, a priori knowledge of the statistical
characteristics of the system is generally not available. The simplest solution is to
estimate R and p using the available data. The estimates R̂ and p̂ are shown in
equations (3.20) and (3.21).

R̂ = u(n) u^T(n)    (3.20)

p̂ = u(n) d(n)    (3.21)
3.3.1 Standard Time Domain LMS Algorithm
Substituting the estimates R̂ and p̂ from equations (3.20) and (3.21) into
equation (3.19) results in

w(n+1) = w(n) + μ (u(n) d(n) − u(n) u^T(n) w(n)),    (3.22)

w(n+1) = w(n) + μ u(n)(d(n) − u^T(n) w(n)),    (3.23)

w(n+1) = w(n) + μ u(n)(d(n) − y(n)), and    (3.24)

w(n+1) = w(n) + μ u(n) e(n), n = 0, 1, 2, ….    (3.25)
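The recursion (3.25) is straightforward to implement. The sketch below identifies a hypothetical 4-tap system from noiseless data; the system coefficients, input, and step size are illustrative assumptions, not the thesis simulation setup:

```python
import numpy as np

def lms(u, d, M, mu):
    # standard time domain LMS FIR filter, equation (3.25)
    w = np.zeros(M)
    for n in range(M - 1, len(u)):
        un = u[n - M + 1:n + 1][::-1]   # tap-input vector u(n)
        y = w @ un                      # filter output y(n)
        e = d[n] - y                    # estimation error e(n)
        w = w + mu * un * e             # w(n+1) = w(n) + mu u(n) e(n)
    return w

rng = np.random.default_rng(1)
w_true = np.array([0.5, -0.4, 0.2, 0.1])    # hypothetical unknown system
u = rng.standard_normal(4000)
d = np.convolve(u, w_true)[:len(u)]
w = lms(u, d, M=4, mu=0.05)
print(np.round(w, 2))    # -> approaches w_true
```

Unlike the steepest descent sketch, no R or p appears: each step uses only the current sample pair (u(n), d(n)).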


Equation (3.25) is the standard time domain LMS FIR filtering algorithm. The
simplicity of the LMS algorithm comes from the use of the estimated values of R and p.
For stability of the system, the fixed step size μ ranges between 0 and 2/λ_max,
where λ_max is the maximum eigenvalue of the autocorrelation matrix of the filter
input. The difference between the final value of the cost function J obtained by the
LMS algorithm, J(∞), and the minimum value of J obtained by the Wiener filter,
J_min, is called the excess mean-squared error. The ratio of the excess mean-squared
error to the minimum mean-squared error obtained by the Wiener filter is called the
misadjustment. The misadjustment can be controlled by the value assigned to the
fixed step size μ. A large value of μ leads the system to converge faster but also
leads to a larger misadjustment. This is because the LMS algorithm uses estimated
values of R and p at each recursive step to compute the estimated gradient vector,
and the filter tap weights suffer from gradient noise. With a large step size μ, the
system progresses faster, but the gradient noise is not filtered out as much as when a
small value of μ is assigned to the system. A small value of μ makes the system
converge more slowly, but reduces the misadjustment. Figure 3-3 shows a generic
block diagram of the LMS FIR filtering algorithm.


Figure 3-3 Block Diagram of a General LMS FIR Filtering Algorithm
Please note that, to distinguish the time domain adaptive filter from the wavelet
domain adaptive filter discussed in section 3.3.4, the time domain adaptive filter in
equations (3.22) through (3.25) will be denoted as h(n).
3.3.2 Self-Orthogonalizing Adaptive LMS Filtering Algorithm
The self-orthogonalizing adaptive LMS algorithm is a modification of the LMS
algorithm that improves its convergence rate. The filter input is prewhitened, i.e.,
transformed into a less correlated (or uncorrelated) vector. This transformed vector
is then used as the LMS filter input. The convergence rate of the LMS algorithm
is improved because the eigenvalue spread of the filter input is reduced. The
update equation for this algorithm is [2]
w(n+1) = w(n) + αR⁻¹u(n)e(n),  n = 0, 1, 2, ...,  (3.26)

where α is a constant between 0 and 1. According to [4], α can be set to 1/M,
where M is the filter length. A statement in [2] makes this modified LMS
algorithm very powerful: "An important property of the self-orthogonalizing
filtering algorithm is that, in theory, it guarantees a constant rate of convergence,
irrespective of input statistics." For detailed information on this algorithm, please
refer to [2].
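A minimal sketch of the update with α = 1/M follows. The input autocorrelation R is assumed known here purely for illustration (in practice it must be estimated, which is the computational burden discussed in the next section); the AR(1) input and the unknown system are hypothetical choices:

```python
import numpy as np

rng = np.random.default_rng(1)
N, M = 4000, 4
v = rng.standard_normal(N)
x = np.zeros(N)
for n in range(1, N):                 # correlated AR(1) input, pole at 0.9
    x[n] = 0.9 * x[n - 1] + v[n]

sigma2 = 1.0 / (1.0 - 0.9 ** 2)      # AR(1) variance
lags = np.abs(np.subtract.outer(np.arange(M), np.arange(M)))
R_inv = np.linalg.inv(sigma2 * 0.9 ** lags)   # true input autocorrelation

g = np.array([0.4, 0.3, 0.2, 0.1])   # unknown system to identify
d = np.convolve(x, g)[:N]
alpha, w = 1.0 / M, np.zeros(M)      # alpha = 1/M per [4]
for n in range(M - 1, N):
    u = x[n - M + 1:n + 1][::-1]     # regressor [x(n), ..., x(n-M+1)]
    e = d[n] - u @ w
    w = w + alpha * (R_inv @ u) * e  # prewhitened update, equation (3.26)
```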
3.3.3 Frequency Domain LMS Algorithm
The self-orthogonalizing filtering algorithm described in section 3.3.2
improves the convergence rate of the LMS algorithm. Its disadvantage is that it is
computationally expensive. To reduce the computational complexity while keeping
the fast convergence of the self-orthogonalizing filtering algorithm, the frequency
domain LMS algorithm was developed. The frequency domain LMS algorithm uses
the discrete cosine transform (DCT) to transform the input vector to the frequency
domain, estimates the eigenvalues using the estimated autocorrelation of the input,
and updates the filter coefficients. The outcome is a more efficient algorithm than
the time domain algorithm. The frequency domain LMS algorithm is outside the
scope of this thesis; for a detailed description, please refer to [2].
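For orientation only, a common variant of this idea (not necessarily the exact algorithm of [2]) transforms each regressor with an orthonormal DCT and normalizes each bin by a running power estimate, which plays the role of the per-bin eigenvalue estimate; all parameter values below are illustrative assumptions:

```python
import numpy as np

M = 8
k, i = np.meshgrid(np.arange(M), np.arange(M), indexing="ij")
T = np.sqrt(2.0 / M) * np.cos(np.pi * (2 * i + 1) * k / (2 * M))
T[0, :] /= np.sqrt(2.0)              # orthonormal DCT-II matrix, T T^T = I

rng = np.random.default_rng(2)
N = 6000
v = rng.standard_normal(N)
x = np.zeros(N)
for n in range(1, N):
    x[n] = 0.9 * x[n - 1] + v[n]     # correlated input

g = np.array([0.1, 0.2, 0.3, 0.4, 0.4, 0.3, 0.2, 0.1])
d = np.convolve(x, g)[:N]
wt = np.zeros(M)                     # weights in the transform domain
p = np.ones(M)                       # per-bin power estimates
for n in range(M - 1, N):
    u = x[n - M + 1:n + 1][::-1]
    vt = T @ u                       # DCT of the regressor
    e = d[n] - wt @ vt
    p = 0.99 * p + 0.01 * vt ** 2    # running power per bin
    wt = wt + 0.01 * e * vt / (p + 1e-6)   # power-normalized update

h = T.T @ wt                         # equivalent time domain filter
```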
3.3.4 Wavelet Domain LMS Algorithm
The main idea of the wavelet domain LMS algorithm is similar to that of the
frequency domain LMS algorithm: to reduce computational complexity. A recently
developed wavelet domain LMS adaptive filtering algorithm [1] uses the sparse
structure of matrices in the wavelet domain to estimate and update the autocorrelation
matrix of the filter input. Figure 3-4 shows the block diagram of a wavelet domain LMS
FIR adaptive filter.
Figure 3-4 Block Diagram of a Wavelet Domain LMS FIR Adaptive Filter
Let y(n) be the wavelet transform of the filter input x(n). Let Q be the wavelet
transformation matrix, so that y(n) = Qx(n). The matrix Q contains the high pass and low
pass filters and decimators described in Chapter 2. For example, let a two-band, one-level
wavelet be used to decompose the filter input x(n). Let a be the output of the
low pass filter and decimator of the analysis filter bank, and let b be the output of
the high pass filter and decimator of the analysis filter bank. The wavelet
transform of the filter input x(n) is y(n) = [a b], where a and b are vectors of
half the size of x(n).
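For the Haar case, the matrix Q can be built explicitly; the sketch below (a Python illustration, with the sign convention of the high pass row as an assumption) verifies that y = Qx stacks a on top of b and that Q is unitary:

```python
import numpy as np

N = 8
h0 = np.array([1.0, 1.0]) / np.sqrt(2)    # low pass
h1 = np.array([1.0, -1.0]) / np.sqrt(2)   # high pass
Q = np.zeros((N, N))
for m in range(N // 2):
    Q[m, 2 * m:2 * m + 2] = h0            # rows producing a
    Q[N // 2 + m, 2 * m:2 * m + 2] = h1   # rows producing b

x = np.arange(1.0, 9.0)
y = Q @ x
a, b = y[:N // 2], y[N // 2:]             # y = [a b]
```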
According to [1], the update equation for the wavelet domain adaptive filter
coefficients is

w(n+1) = w(n) − μĝ(n),  n = 0, 1, 2, ...,  (3.26)

R̂_y(n)ĝ(n) = −2e(n)y(n),  (3.27)

where w(n) is the wavelet transform of the time domain adaptive filter h(n), that
is, w(n) = Qh(n), and ĝ(n) is the estimated gradient. R̂_y is the estimated
autocorrelation matrix of the wavelet domain filter input, R̂_y(n) = y(n)y^T(n).
The estimated error e(n) has a similar definition as in the time domain LMS
algorithm:

e(n) = d(n) − z(n), and  (3.28)
z(n) = w^T(n)y(n).  (3.29)

Expanding (3.29) yields

z(n) = (Qh(n))^T Qx(n), and  (3.30)

z(n) = h^T(n)Q^T Qx(n).  (3.31)

From Figure 3-4, the wavelet transformation is a unitary transformation. Therefore,

Q^T Q = I,  (3.32)

z(n) = h^T(n)x(n).  (3.33)
Equation (3.33) shows that the output of the wavelet domain LMS adaptive filter is
actually the same as the output of the time domain LMS adaptive filter. This implies
that the time domain desired response d(n) can be used for the wavelet domain LMS
algorithm. Combining and rearranging equations (3.26) and (3.27) yields the final
update equation used for the simulations in this thesis,

w(n+1) = w(n) + 2μR̂_y⁻¹e(n)y(n),  n = 0, 1, 2, ....  (3.34)

A difficulty was encountered when using equation (3.34) for simulations. When the
autocorrelation matrix R̂_y is either singular or badly scaled (in other words, the
eigenvalue spread of R̂_y is large), the inverse of R̂_y cannot be computed, an error
occurs, and the recursive procedure stops. According to [1], when such a situation
occurs, a diagonal loading methodology can be used to ensure that R̂_y is not
ill-conditioned. Let i be the iteration step of the algorithm and c a small positive
constant; the diagonal loading can be implemented as

R̂_y = (c/i)I + R̂_y.  (3.35)
The emphasis of this thesis is to study, simulate, and apply the wavelet domain LMS
FIR adaptive algorithm to different applications. A more efficient way of running the
wavelet domain LMS algorithm, which requires more complex estimation
methodologies that exploit the sparse structure of the wavelet domain matrices, is
documented in [1].
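Equations (3.34) and (3.35) can be sketched together as follows (a Python illustration; the rank-one estimate R̂_y = yy^T is singular, so the loading term makes the solve well posed; signal and system are hypothetical):

```python
import numpy as np

Q = np.array([[1, 1, 0, 0],
              [0, 0, 1, 1],
              [1, -1, 0, 0],
              [0, 0, 1, -1]]) / np.sqrt(2)   # 2-band, one-level Haar, M = 4

def wavelet_lms_step(w, y_in, d, mu, i, c=1e-4):
    """Update (3.34) with the diagonal loading of (3.35)."""
    e = d - w @ y_in
    Ry = np.outer(y_in, y_in) + (c / i) * np.eye(len(y_in))
    return w + 2 * mu * np.linalg.solve(Ry, e * y_in), e

rng = np.random.default_rng(3)
N = 3000
x = rng.standard_normal(N)
g = np.array([0.4, 0.3, 0.2, 0.1])           # unknown time domain system
d = np.convolve(x, g)[:N]
w = np.zeros(4)                              # wavelet domain weights
for n in range(3, N):
    u = x[n - 3:n + 1][::-1]
    w, e = wavelet_lms_step(w, Q @ u, d[n], mu=0.2, i=n + 1)

h = Q.T @ w                                  # back to the time domain
```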
4. System Identification
System identification is used to provide a linear mathematical model of an
unknown dynamic process or plant. There are many ways to obtain the mathematical
model of an unknown system. Adaptive control using pole placement design
with or without zero cancellation (deterministic self-tuning regulators) [11], and
adaptive filtering using the LMS or recursive least-squares (RLS) algorithms, are
examples of commonly used methodologies. System identification using the
adaptive LMS filtering algorithm is described in the subsequent sections.
4.1 System Identification Using the Time Domain LMS Filtering Algorithm
Figure 4-1 shows the block diagram of a system identification model using the
standard time domain adaptive LMS algorithm. The unknown system and the
adaptive filter share the same input u(n). The error e(n) is the difference between
the output of the unknown system d(n) and the output of the adaptive filter y(n).
The ideal mathematical model of the unknown system is obtained when e(n) = 0.
Therefore, the error e(n) is used to control the adaptive filter. The cost function that
needs to be minimized is J = E[|e(n)|²], where e(n) = d(n) − y(n). Please note
that in reality the error e(n) will never reach zero, because the LMS algorithm uses
estimated values of R and p, where R is the autocorrelation matrix of the adaptive filter
input and p is the cross-correlation of the filter input and the unknown system output.
Figure 4-1 Block Diagram of a System Identification Model Using Time Domain
LMS Filtering Algorithm
The procedure for system identification using a standard time domain LMS filtering
algorithm is described below.

1. Determining the unknown system.

The unknown system to be modeled by system identification has to be
determined, so that the input u(n) can be applied to the unknown system and the output of
the unknown system d(n) can be obtained. The purpose of the system identification
model shown in this thesis is to demonstrate that the adaptive FIR filter correctly
models the unknown system. The unknown system is chosen to be a 16-tap
FIR filter (we will assume that we do not know the system). The filter coefficients
are

g(n) = [0.1, 0.2, 0.3, 0.4, 0.5, 0.6, 0.7, 0.8, 0.8, 0.7, 0.6, 0.5, 0.4, 0.3, 0.2, 0.1].  (4.1)

After the appropriate number of iterations, the adaptive FIR filter coefficients
should converge to the values shown in equation (4.1).
2. Choosing an FIR filter tap length for system identification.

Generally, for real applications, the system is unknown. Therefore, the filter
tap length is normally assumed to be 2^k, where k = 1, 2, 3, .... In this thesis, the
adaptive filter tap length is chosen to be 16. Let h(n) be the adaptive filter impulse
response; the initial condition of this filter is

h(n) = 0.  (4.2)

When the system identification works properly, h(n) will identify with g(n) as the
system converges (when the error e(n) is close to zero and maintained at a constant
value).
3. Defining an input u and assuming a total iteration number.

The input u used for the system identification simulation in this thesis is a random
signal. A random signal is chosen because it can model a non-stationary input, so
the tracking capability of the adaptive filter can be verified. The input u is a vector,
and the length of u has to be equal to or greater than the total number of iteration
steps plus the filter tap length minus one. For example, if a 16-tap FIR filter
is used and 500 iterations are planned for the simulation, the length of the vector u must
be equal to or greater than 515, i.e.,

u = [u(499), u(498), u(497), ..., u(0), u(-1), u(-2), ..., u(-15)].  (4.3)

Let n = 500 be the number of iteration steps of the system identification model and let 16
be the adaptive filter tap length. The regressor u(n) is determined as

u(0) = [u(0), u(-1), u(-2), u(-3), ..., u(-15)]  (4.4)

u(1) = [u(1), u(0), u(-1), u(-2), ..., u(-14)]  (4.5)

u(2) = [u(2), u(1), u(0), u(-1), ..., u(-13)]  (4.6)

...

u(499) = [u(499), u(498), u(497), u(496), ..., u(484)].  (4.7)
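Equations (4.4) through (4.7) amount to sliding a window over the input, with samples before time zero taken as zeros; a small helper (names hypothetical) makes this concrete:

```python
import numpy as np

def regressor(u_sig, n, M):
    """Return u(n) = [u(n), u(n-1), ..., u(n-M+1)], zeros before time 0."""
    idx = n - np.arange(M)
    return np.where(idx >= 0, u_sig[np.maximum(idx, 0)], 0.0)

u_sig = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
```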
4. Using the LMS algorithm to update the adaptive filter tap weights and choosing a
fixed step size μ for the LMS algorithm.

From equation (3.25), the update equation for a standard time domain LMS
filtering algorithm is h(n+1) = h(n) + μu(n)e(n), n = 0, 1, 2, ..., where μ is the step
size of the LMS algorithm and e(n) = d(n) − y(n). As discussed earlier in section
3.3.1, system stability requires that the fixed step size μ lie between 0 and 2/λ_max.
For the fastest LMS convergence rate, the ideal choice is a fixed step
size of 2/λ_max − ε, where ε is a small positive number. The problem is that a priori
information about the filter input is generally not available, so the maximum
eigenvalue λ_max of the autocorrelation matrix of the filter input cannot be
determined. In general, the system is evaluated with various μ values, e.g., 0.01,
0.05, 0.1, 0.5, etc. If the value of μ is too large, the system will become unstable,
which means the μ value used probably exceeds 2/λ_max. Try smaller μ values until
the system becomes stable and converges nicely.
5. Recursively updating the adaptive filter tap weights using step 4, and adjusting
the number of iterations if necessary.

Depending on the convergence rate of the system, adjust the iteration number. It
is desirable to choose a number that is beyond the convergence of the system, to show
that the convergence is stable.
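The five steps above can be sketched directly in Python (the thesis simulations use MATLAB), using the thesis parameters: the 16-tap unknown system of (4.1), zero initial weights per (4.2), a random input of length 500 + 16 − 1, and μ = 0.05:

```python
import numpy as np

rng = np.random.default_rng(4)
g = np.array([0.1, 0.2, 0.3, 0.4, 0.5, 0.6, 0.7, 0.8,
              0.8, 0.7, 0.6, 0.5, 0.4, 0.3, 0.2, 0.1])   # step 1: unknown system
M, iters, mu = 16, 500, 0.05                             # steps 2-4
u_sig = rng.standard_normal(iters + M - 1)               # step 3: length >= 515
h = np.zeros(M)                                          # initial condition (4.2)
for n in range(iters):                                   # step 5: recursion
    u = u_sig[n:n + M][::-1]                             # regressor for step n
    e = g @ u - h @ u                                    # e(n) = d(n) - y(n)
    h = h + mu * u * e                                   # LMS update (3.25)
```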
Now, following the above 5 steps, an example of the time domain LMS algorithm is
shown below.
1. The unknown system is a 16-tap FIR filter, and the impulse response of this FIR filter
is g(n) = [0.1, 0.2, 0.3, 0.4, 0.5, 0.6, 0.7, 0.8, 0.8, 0.7, 0.6, 0.5, 0.4, 0.3, 0.2, 0.1].

2. The adaptive FIR filter is a 16-tap FIR filter, and the initial condition of this filter
is h(n) = 0.

3. Assume an iteration number of two; the length of u has to be equal to or greater
than 2 + 16 − 1 = 17. Let u be a random signal and a vector of length 17, e.g.,
u = [1, 2, 3, -1, 4, 7, -5, 3, 9, 20, -9, 1, -3, 8, 0, 2, 10].

4. Since the iteration number is two, n = 0, 1. Let the fixed step size μ be 0.05.

n = 0:

d(0) = g·u^T(0) = (0.1·1 + 0.2·2 + 0.3·3 + 0.4·(-1) + ... + 0.1·2) = 23.8  (4.8)

y(0) = h(0)·u^T(0) = (0·1 + 0·2 + 0·3 + 0·(-1) + ... + 0·2) = 0  (4.9)

e(0) = d(0) − y(0) = 23.8 − 0 = 23.8  (4.10)

h(1) = h(0) + μu(0)e(0)
     = [0, 0, ..., 0] + 0.05·[1, 2, 3, -1, 4, 7, -5, 3, 9, 20, -9, 1, -3, 8, 0, 2]·23.8  (4.11)
     = [1.19, 2.38, 3.57, -1.19, 4.76, 8.33, -5.95, 3.57, 10.71, 23.8, -10.71, 1.19, -3.57, 9.52, 0, 2.38]

n = 1:

d(1) = g·u^T(1) = (0.1·2 + 0.2·3 + 0.3·(-1) + 0.4·4 + ... + 0.1·10) = 25.3  (4.12)

y(1) = h(1)·u^T(1) = (1.19·2 + 2.38·3 + 3.57·(-1) + (-1.19)·4 + ... + 2.38·10) = -11.9  (4.13)

e(1) = d(1) − y(1) = 25.3 + 11.9 = 37.2  (4.14)

h(2) = h(1) + μu(1)e(1)
     = [1.19, 2.38, ..., 2.38] + 0.05·[2, 3, -1, 4, 7, -5, 3, 9, 20, -9, 1, -3, 8, 0, 2, 10]·37.2  (4.15)

5. Obviously the system does not converge within two iterations, so the iteration
number and the step size μ must be adjusted. The recursive process described above
continues until the tap weights of the adaptive filter converge to the tap weights of
the unknown system.
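The hand computation in step 4 can be checked numerically (a Python sketch mirroring equations (4.8) through (4.14)):

```python
import numpy as np

u_full = np.array([1, 2, 3, -1, 4, 7, -5, 3, 9, 20, -9, 1, -3, 8, 0, 2, 10.0])
g = np.array([0.1, 0.2, 0.3, 0.4, 0.5, 0.6, 0.7, 0.8,
              0.8, 0.7, 0.6, 0.5, 0.4, 0.3, 0.2, 0.1])
mu = 0.05
u0 = u_full[:16]                 # u(0)
u1 = u_full[1:17]                # u(1), shifted by one sample

d0 = g @ u0                      # (4.8)  -> 23.8
h1 = np.zeros(16) + mu * u0 * d0 # (4.11), since e(0) = d(0)
d1 = g @ u1                      # (4.12) -> 25.3
y1 = h1 @ u1                     # (4.13) -> -11.9
e1 = d1 - y1                     # (4.14) -> 37.2
```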
The results of the above example, simulated with an iteration number of 500
and a fixed step size of 0.05, are shown in Figures 4-2 through 4-8. For the detailed
MATLAB code of the simulations, please see the Appendix. Other simulations (not
documented herein) show that a step size greater than 0.05 makes the system
unstable; therefore, μ = 0.05 is the largest step size that the system can use. Figure
4-2 shows that the output of the adaptive FIR filter y(n) identifies with the unknown
system output d(n). From Figure 4-2, after a transition period of about 100 iterations,
y(n) converges to d(n), and the two signals overlap with each other. Figure 4-3
shows the estimation error e(n) vs. iterations. Figures 4-4 through 4-7 show each of
the 16 coefficients of the adaptive filter converging to those of the unknown system g(n).
Figure 4-8 shows the adaptive filter's impulse response after 500 iterations, which
identifies with g(n).
Figure 4-2 The Desired Output d(n) vs. the Filter Output y(n)
(Time Domain LMS Algorithm, μ = 0.05)








Figure 4-3 The Estimated Error (Time Domain LMS Algorithm, μ = 0.05)
Figure 4-4 Adaptive Filter Coefficients h0, h1, h2, and h3
Figure 4-5 Adaptive Filter Coefficients h4, h5, h6, and h7
Figure 4-6 Adaptive Filter Coefficients h8, h9, h10, and h11
Figure 4-7 Adaptive Filter Coefficients h12, h13, h14, and h15
Figure 4-8 Adaptive Filter Impulse Response After 500 Iterations
(Time Domain LMS Algorithm, μ = 0.05)
4.2 System Identification Using the Wavelet
Domain LMS Filtering Algorithm
Figure 4-9 shows the block diagram of a system identification model using the
wavelet domain LMS filtering algorithm. Note that the only difference between
Figures 4-1 and 4-9 is the LMS algorithm portion. The wavelet domain
transform of u(n) is u_w(n), and it is used as the adaptive filter input. As
described in section 3.3.4, the adaptive filter is the wavelet transform of the time
domain adaptive filter and is denoted by w(n).
Figure 4-9 Block Diagram of a System Identification Model Using Wavelet
Domain LMS Filtering Algorithm
The procedure for system identification using the wavelet domain LMS filtering
algorithm is described below.

1. Determining the unknown system (same as section 4.1).

2. Choosing an FIR filter tap length for system identification.

In this thesis, the adaptive filter tap length is chosen to be 16. Note that the
adaptive filter is in the wavelet domain. Let w(n) be the wavelet domain adaptive
filter, where the initial condition of this filter is

w(n) = 0.  (4.16)

3. Defining an input u and assuming an iteration number.

The input u is generated the same way as described in section 4.1, equation
(4.3). Let n = 500 be the number of iteration steps of the system identification model and
let 16 be the adaptive filter tap length. The regressor u(n) is determined as shown in
equations (4.4) through (4.6).
The 2-band Haar analysis filter bank is used to transform u(n) from the time
domain to the wavelet domain. The Haar analysis filter bank has a low pass filter
Haar h0 = [1/√2, 1/√2] and a high pass filter Haar h1 = [1/√2, -1/√2]. Using the
Haar analysis filter bank for the wavelet domain transformation, u_w(n) is defined as

u_w(0) = [dec(conv(u(0), Haar h0)), dec(conv(u(0), Haar h1))]  (4.17)

...

u_w(499) = [dec(conv(u(499), Haar h0)), dec(conv(u(499), Haar h1))],  (4.18)

where dec is decimation by a factor of two and conv is time domain convolution.
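Equations (4.17) and (4.18) translate directly to code; keeping the odd-indexed samples of the full convolution (an indexing assumption on this sketch) reproduces the numbers worked out in the example below:

```python
import numpy as np

u0 = np.array([1, 2, 3, -1, 4, 7, -5, 3, 9, 20, -9, 1, -3, 8, 0, 2.0])
h0 = np.array([1.0, 1.0]) / np.sqrt(2)     # Haar low pass
h1 = np.array([1.0, -1.0]) / np.sqrt(2)    # Haar high pass

a = np.convolve(u0, h0)[1::2]              # dec(conv(u(0), Haar h0))
b = np.convolve(u0, h1)[1::2]              # dec(conv(u(0), Haar h1))
uw0 = np.concatenate([a, b])               # u_w(0), equation (4.23)
```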
4. Using the wavelet domain LMS algorithm to update the adaptive filter tap weights
and choosing a fixed step size μ for the LMS algorithm.

From equation (3.34), the update equation for the wavelet domain LMS filtering
algorithm is w(n+1) = w(n) + 2μR̂_w⁻¹e(n)u_w(n), n = 0, 1, 2, ..., where μ is the step
size of the LMS algorithm, e(n) = d(n) − y(n), and R̂_w is the estimated
autocorrelation matrix of the wavelet domain filter input u_w(n).
5. Recursively updating the adaptive filter tap weights using step 4, and adjusting
the number of iterations if necessary.
6. Using the Haar synthesis filter bank to convert the filter tap weights back to the time
domain.

Following the above six steps, an example of the wavelet domain LMS algorithm is
given, where one iteration is shown.
1. The unknown system is a 16-tap FIR filter, where the impulse response is
g(n) = [0.1, 0.2, 0.3, 0.4, 0.5, 0.6, 0.7, 0.8, 0.8, 0.7, 0.6, 0.5, 0.4, 0.3, 0.2, 0.1].

2. The adaptive FIR filter is a 16-tap FIR filter, where the initial condition of this
filter is w(n) = 0.

3. Assume an iteration number of one; the length of u has to be equal to or greater
than 1 + 16 − 1 = 16. Let u be a random signal and a vector of length 16,
u(0) = [1, 2, 3, -1, 4, 7, -5, 3, 9, 20, -9, 1, -3, 8, 0, 2]. The time domain convolution
of the low pass Haar filter and u(0) is

conv(u(0), Haar h0) = [0.71, 2.12, 3.54, 1.41, 2.12, 7.78, 1.41, -1.41,
8.49, 20.5, 7.78, -5.66, -1.41, 3.54, 5.66, 1.41, 1.41].  (4.19)

Decimation by a factor of two of equation (4.19) gives

dec(conv(u(0), Haar h0)) = [2.12, 1.41, 7.78, -1.41, 20.5, -5.66, 3.54, 1.41].  (4.20)

The time domain convolution of the high pass Haar filter and u(0) is

conv(u(0), Haar h1) = [0.71, 0.71, 0.71, -2.83, 3.54, 2.12, -8.49, 5.66,
4.24, 7.78, -20.5, 7.07, -2.83, 7.78, -5.66, 1.41, -1.41].  (4.21)

Decimation by a factor of two of equation (4.21) gives

dec(conv(u(0), Haar h1)) = [0.71, -2.83, 2.12, 5.66, 7.78, 7.07, 7.78, 1.41].  (4.22)

The wavelet domain input u_w(0) is the concatenation of equations (4.20) and (4.22),

u_w(0) = [2.12, 1.41, 7.78, -1.41, 20.5, -5.66, 3.54, 1.41, 0.71, -2.83, 2.12, 5.66, 7.78, 7.07, 7.78, 1.41].  (4.23)
4. Since the iteration number is one, n = 0. Let the fixed step size μ be 0.05.

n = 0:

d(0) = g·u^T(0) = (0.1·1 + 0.2·2 + 0.3·3 + 0.4·(-1) + ... + 0.1·2) = 23.8  (4.24)

y(0) = w(0)·u_w^T(0) = (0·2.12 + 0·1.41 + 0·7.78 + 0·(-1.41) + ... + 0·1.41) = 0  (4.25)

e(0) = d(0) − y(0) = 23.8 − 0 = 23.8  (4.26)

R̂_w(0) = u_w(0)u_w^T(0) + (c/n)I  (4.27)

w(n+1) = w(n) + 2μR̂_w⁻¹e(n)u_w(n),  n = 0, 1, 2, ....  (4.28)

Note that sometimes the estimate R̂_w is singular and the inverse cannot be
determined. To make sure the estimated R̂_w is non-singular, the diagonal
loading shown in equation (4.27) is used, where n is the iteration number and c
is a small positive constant. A value of c = 0.0001 is used for the simulations.

5. The recursive process described above continues until the tap weights of the
adaptive filter identify with the tap weights of the unknown system.
6. Convert the converged adaptive filter tap weights to the time domain using the Haar
synthesis filter bank. The low pass synthesis filter is Haar f0 = [1/√2, 1/√2] and the high
pass synthesis filter is Haar f1 = [-1/√2, 1/√2].
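The analysis/synthesis pair can be verified with a round trip: analysis plus decimation, then upsampling plus synthesis filtering, reconstructs the signal. The synthesis filters below are the time-reversed analysis filters, which matches the f0 and f1 stated above (a sketch; the indexing convention is an assumption):

```python
import numpy as np

h0 = np.array([1.0, 1.0]) / np.sqrt(2)    # analysis low pass
h1 = np.array([1.0, -1.0]) / np.sqrt(2)   # analysis high pass
x = np.array([1.0, 2.0, 3.0, -1.0, 4.0, 7.0, -5.0, 3.0])

a = np.convolve(x, h0)[1::2]              # analysis + decimate
b = np.convolve(x, h1)[1::2]

def upsample(v):
    out = np.zeros(2 * len(v))
    out[::2] = v                          # insert zeros between samples
    return out

f0, f1 = h0[::-1], h1[::-1]               # synthesis: time-reversed analysis
xr = (np.convolve(upsample(a), f0) + np.convolve(upsample(b), f1))[:len(x)]
```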
The results of the above example, simulated with an iteration number of 500
and a fixed step size of 0.05, are shown in Figures 4-10 through 4-13. Figure 4-10
shows that the output of the adaptive FIR filter y(n) converges to the desired signal
d(n). Figure 4-11 shows the estimated error of the wavelet domain LMS algorithm.
Figure 4-12 shows the wavelet domain adaptive filter's impulse response after 500
iterations. Figure 4-13 shows the time domain adaptive filter's impulse response
(converted from the wavelet domain back to the time domain) after 500 iterations,
which identifies with g(n). The fixed step size μ = 0.05 was used for comparison
with the time domain LMS algorithm results. Figures 4-14 through 4-18 show that a
maximum step size of μ = 0.3 can be used for the wavelet domain LMS algorithm
simulated in this section.
Figure 4-10 The Desired Output d(n) vs. the Filter Output y(n)
(Wavelet Domain LMS Algorithm, μ = 0.05)
Figure 4-11 The Estimated Error (Wavelet Domain LMS Algorithm, μ = 0.05)
Figure 4-12 Wavelet Domain Filter Impulse Response After 500 Iterations
(Wavelet Domain LMS Algorithm, μ = 0.05)
Figure 4-13 Time Domain Filter Impulse Response After 500 Iterations
(Wavelet Domain LMS Algorithm, μ = 0.05)
Figure 4-14 The Desired Output d(n) vs. the Filter Output y(n)
(Wavelet Domain LMS Algorithm, μ = 0.3)
Figure 4-15 The Estimated Error (Wavelet Domain LMS Algorithm, μ = 0.3)
Figure 4-16 Wavelet Domain Filter Impulse Response After 500 Iterations
(Wavelet Domain LMS Algorithm, μ = 0.3)
Figure 4-17 Time Domain Filter Impulse Response After 500 Iterations
(Wavelet Domain LMS Algorithm, μ = 0.3)
4.3 Comparisons and Discussions
The system identification examples in this thesis demonstrate that the wavelet domain LMS
filtering algorithm converges properly. Figures 4-8, 4-13, and 4-17 show
that, like the time domain LMS algorithm, the wavelet domain LMS algorithm also
successfully identifies the unknown system. Figure 4-18 shows a comparison of the
MSE of an ensemble of 100 runs for both the time domain and wavelet domain LMS
algorithms simulated with μ = 0.05. From Figure 4-18, the wavelet domain LMS
algorithm shows a slower convergence rate. Is this a true statement? Further
investigation demonstrates that a comparison between the time domain LMS
algorithm and the wavelet domain algorithm using the same fixed step size is not
valid. Different update equations are used, and the estimates of the gradient of the
cost function differ between the two algorithms. Therefore, for comparison purposes,
the best performance of each system should be used: the convergence rate
using the largest possible fixed step size before the system becomes unstable may
be compared for each algorithm. Figure 4-19 shows the comparison with the largest
step size for the time domain, μ = 0.05, and the largest step size for the wavelet
domain, μ = 0.3. Note that there is no improvement in convergence rate using the
wavelet domain LMS algorithm in this case. This is because the input used was a
random signal, where no correlation of any kind exists between the input samples.
Therefore, prewhitening the input in the wavelet domain has no effect on the
convergence rate.

Figure 4-18 Mean-Squared Error of an Ensemble of 100 Runs (μ = 0.05)


Figure 4-19 Mean-Squared Error of an Ensemble of 100 Runs
(Time Domain LMS μ = 0.05, Wavelet Domain μ = 0.3)
5. Adaptive Differential Pulse Code Modulation (ADPCM)
Adaptive differential pulse-code modulation (ADPCM) is a generic data
compression algorithm that is widely used in speech applications. The speech signal
is compressed before it is transmitted to the receiver and decompressed after it is
received. ADPCM uses an adaptive predictor and an adaptive quantizer to
account for sudden changes and the nonstationary nature of the speech signal.
ADPCM is a modified algorithm based on the pulse-code modulation (PCM)
scheme. A PCM encoder typically consists of three processes: sampling,
quantization, and coding. The sampling process converts a continuous-time signal
into a discrete-time signal. The quantization process converts a continuous-amplitude
signal into a discrete-amplitude signal. The coding provides
appropriate digital representations for the samples of the discrete signal. Figure 5-1
shows a generic PCM encoder.
[Figure panel: sampled input → uniform or nonuniform quantizer → PCM wave]
Figure 5-1 Generic PCM Encoder
In a typical telephony system, the speech signal is generally coded with the PCM
algorithm. A sample rate of 8 kHz and a nonlinear quantizer are used, and the
quantized speech signal is coded into 8-bit words. The required bit rate for this
system is 64 kbps.
Instead of transmitting the PCM waveform, ADPCM transmits the
quantized estimation error. An ADPCM encoder uses an adaptive predictor to
estimate the next audio sample from previous samples of the input
signal. The predicted sample is compared with the actual sample; the difference
between the two is the estimation error. The estimation error is quantized using an
adaptive quantizer before transmission. When the adaptive predictor is optimized,
the variance of the estimation error is smaller than the variance of the input
signal. Therefore, fewer quantization levels are required for the
estimation error than for the PCM waveform. A common implementation takes 16-bit
PCM samples and converts them into 4-bit samples using ADPCM, giving a
compression ratio of 4:1. For a radio frequency (RF) transmission system, a 4:1
compression of the bit rate yields roughly a 6 dB link margin improvement. The
improvement in the link budget leads to less restrictive hardware requirements
(required transmitter power, antenna gain, etc.) and therefore lower cost. Figures
5-2a and 5-2b show the simplified ADPCM encoder and decoder.
Figure 5-2b Generic ADPCM Decoder
There are two types of adaptive quantizers: feedforward and feedback
adaptive quantizers [9]. When a feedforward adaptive quantizer is used, the
quantization step sizes Δ(n) have to be transmitted together with the sample bits
[9]. When a feedback adaptive quantizer is used, only the sample bits have to be
transmitted. The Jayant quantizer used for the simulations in this thesis is a
feedback quantizer. Table 5-1 shows the Jayant quantizer multipliers [9].

Table 5-1 Jayant Quantizer

No. Bits    2      3      4
M(1)       0.8    0.9    0.9
M(2)       1.6    0.9    0.9
M(3)              0.25   0.9
M(4)              1.7    0.9
M(5)                     1.2
M(6)                     1.6
M(7)                     2.0
M(8)                     2.4

The multipliers M(η) are used to update the quantizer step,

Δ(n+1) = Δ(n)M(η),  (5.1)

where

η = floor(|e(n)| / Δ(n)) + 1.  (5.2)

The initial quantizer step size is obtained from the bit length b and the full
magnitude Fs of the signal,

Δ(0) = Fs / 2^(b−1).  (5.3)
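Equations (5.1) through (5.3) can be sketched for the 2-bit column of Table 5-1. Clipping the multiplier index to the top level is an assumption here, since the index formula is only partially legible in the source:

```python
import numpy as np

M_table = [0.8, 1.6]                    # M(1), M(2) for a 2-bit quantizer
Fs, b = 1.0, 2
delta0 = Fs / 2 ** (b - 1)              # initial step, equation (5.3)

def jayant_update(delta, e):
    eta = int(np.floor(abs(e) / delta)) # equation (5.2), 0-based here
    eta = min(eta, len(M_table) - 1)    # assumed clip to the highest level
    return delta * M_table[eta]         # equation (5.1)
```

A small error shrinks the step (multiplier 0.8) and a large error grows it (multiplier 1.6), which is how the quantizer tracks the error variance.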
5.1 ADPCM Using the Time Domain LMS Filtering Algorithm
Figure 5-3 shows the generic block diagram of a time domain ADPCM
encoder/decoder. The quantized estimation error is used to control the adaptive
algorithm of the system because the quantized estimation error is what is transmitted
to the receiver.

The time domain ADPCM method consists of the following procedures:

1. Defining an input signal. The ADPCM algorithm is tested on two signals for
simulation purposes. The first signal is a 100 Hz sine wave with a sampling
frequency of 8 kHz. This input signal is used to test the convergence rate of the
time domain ADPCM algorithm. The second signal is the same sine wave
(frequency 100 Hz, sampling frequency 8 kHz) but with a sudden change
after the signal has converged. This second input signal is used to test the behavior
of the time domain ADPCM algorithm in the event of a sudden change in the input
signal.
2. Picking the FIR filter tap length for h(n) and setting the initial conditions. A 16-tap
FIR filter is used for the simulations; the initial FIR filter coefficients are zeros.

3. Defining the quantizer initial step size using equation (5.3).

4. Letting the reconstructed signal s̃(n) be zeros (see Figure 5-3). The signal s̃(n)
is the input of the adaptive FIR filter; therefore, the length of s̃(n) is the same as
the FIR filter tap length. An updated s̃(n) is generated at each iteration of the
algorithm.

5. Obtaining the filter output (the predicted sample), ŝ(n) = s̃(n) * h(n).

6. Obtaining the estimation error, e(n) = s(n) − ŝ(n).

7. Updating the quantizer step using equations (5.1) and (5.2).

8. Obtaining the quantized estimation error ẽ(n).

9. Obtaining the input of the FIR filter for the next iteration, s̃(n) = ẽ(n) + ŝ(n).

10. Using the LMS algorithm to update the adaptive filter tap weights and choosing a
fixed step size μ for the LMS algorithm. From equation (3.25), the update
equation for a standard time domain LMS filtering algorithm is
h(n+1) = h(n) + μu(n)e(n), n = 0, 1, 2, ..., where μ is the step size of the LMS
algorithm and e(n) = d(n) − y(n). After many tests and iterations, the results
show that the best fixed step size μ is 0.8 for the plain sine wave and 0.5 for the
sine wave with a sudden change.

11. Repeating from step 5.

12. Calculating the signal-to-noise ratio (SNR) after the signal converges,

SNR = 10 log10(var(original signal) / var(original signal − new signal))
    = 10 log10(var(original signal) / var(noise)).  (5.4)
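Steps 4 through 10 can be sketched in Python (the thesis simulations use MATLAB). The quantizer is omitted here so the adaptive predictor is exercised in isolation, and the step size 0.1 is an illustrative choice: the thesis value μ = 0.8 applies to the full quantized loop, not to this idealized one:

```python
import numpy as np

def snr_db(original, reconstructed):
    """Equation (5.4): 10 log10(var(original) / var(original - new))."""
    return 10 * np.log10(np.var(original) / np.var(original - reconstructed))

fs, f0, N, M, mu = 8000, 100, 500, 16, 0.1
s = np.sin(2 * np.pi * f0 * np.arange(N) / fs)   # step 1: 100 Hz sine @ 8 kHz
h = np.zeros(M)                  # step 2: adaptive predictor taps
buf = np.zeros(M)                # step 4: past reconstructed samples
errs = np.zeros(N)
for n in range(N):
    s_est = h @ buf              # step 5: predicted sample
    e = s[n] - s_est             # step 6: estimation error
    errs[n] = e
    s_rec = s_est + e            # step 9 (e would be quantized here)
    h = h + mu * buf * e         # step 10: LMS update
    buf = np.concatenate(([s_rec], buf[:-1]))
```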
Figure 5-4 shows the results of the time domain ADPCM applied to a 100 Hz
sine wave with a sampling frequency of 8 kHz. The original signal, the predicted
signal, the reconstructed signal, and the estimation error are shown in Figure 5-4.
From Figure 5-4, it can be seen that the estimation error converges to zero at
approximately 300 iterations. The SNR of the reconstructed signal (44.04 dB) is
17.06 dB better than the SNR of the estimated signal (26.98 dB). The SNR is
calculated after the estimation error converges to zero, between approximately 400
and 500 iterations.
[Figure panel: ADPCM (Time Domain LMS Algorithm, Tap = 16, μ = 0.8, SNR = 26.98 dB, SNRQ = 44.04 dB)]
Figure 5-4 Time Domain ADPCM Applied to a Sine Wave
Figure 5-5 shows the results of the time domain ADPCM applied to a 100 Hz
sine wave with a sampling frequency of 8 kHz and a sudden change at approximately
500 iterations. The original signal, the estimated signal (estimated by the adaptive
predictor), the reconstructed signal, and the estimation error are shown in Figure 5-5.
From Figure 5-5, the estimation error converges to zero at approximately 300
iterations. A sudden input change occurs at about 500 iterations, and the estimation
error re-converges to zero at about 800 iterations. The SNR of the reconstructed
signal (59.46 dB) is 19.43 dB better than the SNR of the estimated signal (40.03 dB).
The SNR is calculated after the estimation error re-converges to about zero, between
approximately 900 and 1000 iterations.
Figure 5-5 Time Domain ADPCM Applied to a Sine Wave with a Sudden Change
5.2 ADPCM Using the Wavelet Domain LMS Filtering Algorithm
Figure 5-6 shows the generic block diagram of a wavelet domain ADPCM
encoder/decoder.
Figure 5-6 Generic Wavelet Domain ADPCM System
The procedure for the wavelet domain ADPCM system is similar to that of the
time domain ADPCM system. The differences are that the input of the
FIR filter is in the wavelet domain, and the update equation for the wavelet domain
FIR filter is w(n+1) = w(n) + 2μR̂_w⁻¹e(n)u_w(n), n = 0, 1, 2, ..., where μ is the step
size of the LMS algorithm, e(n) = d(n) − y(n), and R̂_w is the estimated
autocorrelation matrix of the wavelet domain filter input u_w(n).
Figure 5-7 shows the results of the wavelet domain ADPCM applied to a 100
Hz sine wave with a sampling frequency of 8 kHz. The original signal, the estimated
signal (estimated by the adaptive predictor), the reconstructed signal, and the
estimation error are shown in Figure 5-7. From Figure 5-7, the estimation error
converges to zero at approximately 100 iterations. The SNR of the reconstructed
signal (126.09 dB) is 15.41 dB better than the SNR of the estimated signal (110.68
dB). The SNR is calculated after the estimation error converges to zero, between
approximately 400 and 500 iterations.


Figure 5-7 Wavelet Domain ADPCM Applied to a Sine Wave
Figure 5-8 shows the results of the wavelet domain ADPCM applied to a 100 Hz
sine wave with a sampling frequency of 8 kHz and a sudden change at approximately
500 iterations. The original signal, the estimated signal (estimated by the adaptive
predictor), the reconstructed signal, and the estimation error are shown in Figure 5-8.
From Figure 5-8, the estimation error converges to zero at approximately 100
iterations. After the sudden input change at about 500 iterations, the estimation
error re-converges to zero at about 550 iterations. The SNR of the reconstructed
signal (165.54 dB) is 12.67 dB better than the SNR of the estimated signal (152.87
dB). The SNR is calculated after the estimation error re-converges to zero, between
approximately 900 and 1000 iterations.


Figure 5-8 Wavelet Domain ADPCM Applied to a Sine Wave with a Sudden Change
5.3 Comparisons and Discussions
Figures 5-9 and 5-10 show the MSE of an ensemble of 50 runs of the time
and wavelet domain ADPCM applied to a sine wave and to a sine wave with a sudden
change. The figures show that the wavelet domain LMS algorithm converges at
least 100 iterations faster than the time domain LMS algorithm. According to the
results shown in section 5.2, the wavelet domain algorithm also provides a better
SNR than the time domain algorithm. The drawback of the wavelet domain
algorithm is its computational complexity; reference [1] proposes an algorithm that
reduces this computational complexity.


Figure 5-9 MSE of an Ensemble of 50 Runs (Input is a Sine Wave; ADPCM, Time Domain u=0.8, Wavelet Domain u=0.6)
Figure 5-10 MSE of an Ensemble of 50 Runs (Sine Wave with a Sudden Change; ADPCM, Time Domain u=0.5, Wavelet Domain u=0.3)


6. Conclusions
The simulations in chapters 4 and 5 show that the wavelet domain LMS
filtering algorithm can be used for system identification models and for ADPCM
encoding/decoding. Lessons learned from the simulations are the
following:
1. The convergence rate depends on the nature of the input signal, the fixed step
size, and the adaptive FIR filter length.
2. The diagonal loading for the autocorrelation of the input signal has to be
relatively small (the appropriate value varies with the nature of the input);
otherwise, the ability to converge is degraded.
3. The wavelet domain LMS filtering algorithm provides a faster convergence rate
when the components of the input are correlated.
4. A faster convergence rate is desirable for the ADPCM applications. For
example, during a real time signal transmission, if a sudden input change occurs,
the adaptive filter has to be re-optimized. An algorithm with a faster
convergence rate provides better data quality and less risk of data loss.
5. No single algorithm is absolutely superior for signal processing. The
algorithms should be evaluated and selected based on the application,
the hardware/software design, and cost factors.


The advantage of the standard time domain LMS filtering algorithm is its
simplicity. But the convergence rate of the standard LMS algorithm depends heavily
on the input characteristics and therefore the designer has very little control over the
convergence rate. The advantage of the wavelet domain LMS algorithm is its faster
convergence rate. In addition, the autocorrelation matrix of the input in wavelet
domain has a sparse structure [1], which can be used to minimize computational
complexity.
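To make the eigenvalue-spread argument concrete, here is a toy pure-Python check (not taken from the thesis; the matrices are made up): a strongly correlated input yields a 2x2 autocorrelation matrix with a large eigenvalue spread, which collapses to 1 after ideal pre-whitening, and a small spread is exactly what speeds up LMS convergence.

```python
import math

def eig_spread_2x2(R):
    # Closed-form eigenvalues of a symmetric 2x2 matrix:
    # lambda = mid +/- sqrt(((a-d)/2)^2 + b^2)
    a, b, d = R[0][0], R[0][1], R[1][1]
    mid = (a + d) / 2.0
    off = math.sqrt(((a - d) / 2.0) ** 2 + b * b)
    return (mid + off) / (mid - off)   # lambda_max / lambda_min

R_corr = [[1.0, 0.9], [0.9, 1.0]]    # correlated input: spread = 19
R_white = [[1.0, 0.0], [0.0, 1.0]]   # after ideal whitening: spread = 1
print(round(eig_spread_2x2(R_corr), 6), eig_spread_2x2(R_white))
```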


APPENDIX
The Appendix contains the Matlab code used for the simulations. The listings are
arranged in the order of the plots that appear in the thesis.
System Identification with Time Domain LMS Filtering Algorithm
% Time Domain LMS Adaptive Filtering Algorithm
% Identify with a 16-tap FIR Filter
clear all;
u=0.05; % Selecting the fixed step size for the LMS algorithm
n=200; % Setting the iteration length
S=n+16;
X=randn(S); % Generating input signal
X=X(1,:);
h=[0.1 0.2 0.3 0.4 0.5 0.6 0.7 0.8 0.8 0.7 0.6 0.5 0.4 0.3 0.2 0.1];
% h is the system that the adaptive FIR filter will converge to
% Initial conditions
w(1,:)=zeros(1,16); % w is the impulse response of the adaptive FIR filter
for i=1:n; % Starting the recursive LMS algorithm
x=X(i:i+15);
x=fliplr(x);
d(i)=x*h';
y(i)=x*w(i,:)';
e(i)=d(i)-y(i);
w(i+1,:)=w(i,:)+u*e(i)*x; % Updating the FIR filter coefficients
end;
N=1:n;
figure; % Plotting the desired signal vs. output of the adaptive FIR filter
plot(N,d,'-m',N,y,'--b');
xlabel('Number of Iterations');
ylabel('d (-) vs. y (--)');
grid;
figure; % Plotting the system error
for k=1:n;
error(k)=e(k)^2;
end;
plot(error);
xlabel('Number of Iterations');
ylabel('Error');
grid;
figure; % Plotting the impulse response of the adaptive FIR filter after convergence
stem(w(n+1,:));
xlabel('Tap Number');
ylabel('Impulse Response of The FIR Adaptive Filter');
grid;
figure; % Plotting system parameters
subplot(4,1,1);
plot(w(:,1));
ylabel('h0');
grid;
subplot(4,1,2);
plot(w(:,2));
ylabel('h1');
grid;
subplot(4,1,3);
plot(w(:,3));
ylabel('h2');
grid;
subplot(4,1,4);
plot(w(:,4));
xlabel('Number of Iterations');
ylabel('h3');
grid;
figure; % Plotting system parameters
subplot(4,1,1);
plot(w(:,5));
ylabel('h4');
grid;
subplot(4,1,2);
plot(w(:,6));
ylabel('h5');
grid;
subplot(4,1,3);
plot(w(:,7));
ylabel('h6');
grid;
subplot(4,1,4);
plot(w(:,8));
xlabel('Number of Iterations');
ylabel('h7');
grid;
figure; % Plotting system parameters
subplot(4,1,1);
plot(w(:,9));
ylabel('h8');
grid;
subplot(4,1,2);
plot(w(:,10));
ylabel('h9');
grid;
subplot(4,1,3);
plot(w(:,11));
ylabel('h10');
grid;
subplot(4,1,4);
plot(w(:,12));
xlabel('Number of Iterations');
ylabel('h11');
grid;
figure; % Plotting system parameters
subplot(4,1,1);
plot(w(:,13));
ylabel('h12');
grid;
subplot(4,1,2);
plot(w(:,14));
ylabel('h13');
grid;
subplot(4,1,3);
plot(w(:,15));
ylabel('h14');
grid;
subplot(4,1,4);
plot(w(:,16));
xlabel('Number of Iterations');
ylabel('h15');
grid;


System Identification with Wavelet Domain LMS Filtering Algorithm
% Wavelet Domain LMS Adaptive Filtering Algorithm
% Identify with a 16-tap FIR Filter
clear all;
u=0.3; % Selecting the fixed step size for the LMS algorithm
n=500; % Setting the iteration length
S=n+50;
X=randn(S); % Generating input signal
X=X(1,:);
h=[0.1 0.2 0.3 0.4 0.5 0.6 0.7 0.8 0.8 0.7 0.6 0.5 0.4 0.3 0.2 0.1];
% h is the system that the adaptive FIR filter will converge to
% Initial conditions
w(1,:)=zeros(1,16);
for i=1:n; % Starting the recursive LMS algorithm
x=X(i:i+15);
x=fliplr(x);
d(i)=x*h';
[A11,D11]=dwt(x,'haar'); % Wavelet decomposition of the input signal
xdwt=[A11 D11];
y(i)=xdwt*w(i,:)';
e(i)=d(i)-y(i);
R=xdwt'*xdwt; % Computing the autocorrelation matrix of the input signal
RR=R+(0.0001/i)*eye(length(R)); % Diagonal loading
Rinv=inv(RR);
g=-2*e(i)*xdwt*Rinv;
w(i+1,:)=w(i,:)-u*g; % Updating the FIR filter coefficients
end;
N=1:n;
figure; % Plotting the desired signal vs. output of the adaptive FIR filter
plot(N,d,'-m',N,y,'--b');
xlabel('Number of Iterations');
ylabel('d (-) vs. y (--)');
grid;
figure; % Plotting the system error
for k=1:n;
error(k)=e(k)^2;
end;
plot(error);
xlabel('Number of Iterations');
ylabel('Error');
grid;
figure; % Plotting the impulse response of the wavelet domain adaptive FIR filter after convergence
stem(w(n+1,:));
xlabel('Tap Number');
ylabel('DWT Filter Impulse Response');
grid;
x=w(n+1,:);
CA=x(1:8);
CD=x(9:16);
ww=idwt(CA,CD,'haar'); % Convert the wavelet domain FIR filter coefficients to time domain
figure; % Plotting the time domain FIR impulse response
stem(ww);
xlabel('Tap Number');
ylabel('Time Domain Filter Impulse Response');
grid;


MSE of Ensemble of 100 Runs, Time/Wavelet Domain System ID
% MSE of Ensemble of 100 Runs, Time Domain System Identification
% Identify with a 16-tap FIR Filter, u=0.05
clear all;
for k=1:100
u=0.05;
n=500;
S=n+50;
X=randn(S);
X=X(1,:);
h=[0.1 0.2 0.3 0.4 0.5 0.6 0.7 0.8 0.8 0.7 0.6 0.5 0.4 0.3 0.2 0.1];
% Initial conditions
w(1,:)=zeros(1,16);
for i=1:n;
x=X(i:i+15);
x=fliplr(x);
d(i)=x*h';
y(i)=x*w(i,:)';
e(i)=d(i)-y(i);
w(i+1,:)=w(i,:)+u*e(i)*x;
end;
ee(k,:)=e;
end;
[a b]=size(ee);
for i=1:a
for j=1:b
e2(i,j)=ee(i,j)^2;
end;
end;
Esquare=mean(e2);
save timesysid;


% MSE of Ensemble of 100 Runs, Wavelet Domain System Identification
% Identify with a 16-tap FIR Filter, u=0.05
clear all;
for k=1:100
u=0.05; % step size matching the header comment (the OCR listing read 0.3)
n=500;
S=n+50;
X=randn(S);
X=X(1,:);
h=[0.1 0.2 0.3 0.4 0.5 0.6 0.7 0.8 0.8 0.7 0.6 0.5 0.4 0.3 0.2 0.1];
% Initial conditions
w(1,:)=zeros(1,16);
for i=1:n;
x=X(i:i+15);
x=fliplr(x);
d(i)=x*h';
[A1,D1]=dwt(x,'haar'); % 1st level
xdwt=[A1 D1];
y(i)=xdwt*w(i,:)';
e(i)=d(i)-y(i);
R=xdwt'*xdwt;
RR=R+(0.0001/i)*eye(length(R));
Rinv=inv(RR);
g=-2*e(i)*xdwt*Rinv;
w(i+1,:)=w(i,:)-u*g;
end;
ee(k,:)=e;
end;
[a b]=size(ee);
for i=1:a
for j=1:b
e2(i,j)=ee(i,j)^2;
end;
end;
Esquare=mean(e2);
save dwtsysid1;
% MSE of Ensemble of 100 Runs, Wavelet Domain System Identification
% Identify with a 16-tap FIR Filter, u=0.3
clear all;
for k=1:100
u=0.3;
n=500;
S=n+50;
X=randn(S);
X=X(1,:);
h=[0.1 0.2 0.3 0.4 0.5 0.6 0.7 0.8 0.8 0.7 0.6 0.5 0.4 0.3 0.2 0.1];
% Initial conditions
w(1,:)=zeros(1,16);
for i=1:n;
x=X(i:i+15);
x=fliplr(x);
d(i)=x*h';
[A1,D1]=dwt(x,'haar'); % 1st level
xdwt=[A1 D1];
y(i)=xdwt*w(i,:)';
e(i)=d(i)-y(i);
R=xdwt'*xdwt;
RR=R+(0.0001/i)*eye(length(R));
Rinv=inv(RR);
g=-2*e(i)*xdwt*Rinv;
w(i+1,:)=w(i,:)-u*g;
end;
ee(k,:)=e;
end;
[a b]=size(ee);
for i=1:a
for j=1:b
e2(i,j)=ee(i,j)^2;
end;
end;
Esquare=mean(e2);
save dwtsysid2;
% Plot MSE of Ensemble of 100 Runs, Time/Wavelet Domain System Identification
clear all;
load timesysid;
time=Esquare;
load dwtsysid1;
dwt1=Esquare;
n=1:500;
figure;
plot(n,time,'-m',n,dwt1,'--b');
xlabel('Number of Iterations');
ylabel('Time Domain LMS (-) vs. Wavelet Domain LMS (--)');
grid;
load dwtsysid2;
dwt2=Esquare;
n=1:500;
figure;
plot(n,time,'-m',n,dwt2,'--b');
xlabel('Number of Iterations');
ylabel('Time Domain LMS (-) vs. Wavelet Domain LMS (--)');
grid;


ADPCM with Time Domain LMS Filtering Algorithm
(Input is a sine wave)
% Adaptive Differential Pulse Code Modulation (ADPCM)
% Time Domain LMS Algorithm applied to a sinusoidal input
clear all;
% Generate a sinusoidal signal
Fs=8000;
f=100;
t=0:1/Fs:0.4;
s=sin(2*pi*f*t);
% Jayant Quantizer (4-bit)
b=4;
do=(2^-(b-1))*1;
M=[0.9 0.9 0.9 0.9 1.2 1.6 2.0 2.4];
delta(1)=do;
% LMS Algorithm FIR Filter
n=500;
u=0.8;
tap=16;
previous=zeros(1,tap);
wf=zeros(size(1:tap));
estimateqs(1)=previous*wf';
% 4-bit Adaptive standard Jayant quantizer is applied to s(n)
% LMS Algorithm is applied
for i=1:n
e(i)=s(i+tap)-estimateqs(i);
qe(i)=floor(e(i)/delta(i))*delta(i);
for j=2:tap
qslms(j)=previous(j-1);
end;
qslms(1)=estimateqs(i)+qe(i);
previous=qslms;
reconstruct(i)=estimateqs(i)+qe(i);
qslms=qslms-mean(qslms);
m(i)=mean(qslms);
wf(i+1,:)=wf(i,:)+u*qe(i)*qslms;
estimateqs(i+1)=qslms*wf(i+1,:)';
k=floor(abs(qe(i))/delta(i));
if k>8
k=8;
elseif k==0
k=1;
end;
delta(i+1)=delta(i)*M(k);
end;
reconstruct=reconstruct+m;
figure;
subplot(411);
plot(s(tap+1:tap+n));
ylabel('s(n)');
title('ADPCM (Time Domain LMS Algorithm, Tap=16, u=0.8, SNR=47.51 dB, SNRQ=66.48 dB)');
grid;
subplot(412);
plot(estimateqs(1:n));
ylabel('Estimated s~(n)');
grid;
subplot(413);
plot(reconstruct(1:n));
ylabel('s~(n)');
grid;
subplot(414);
plot(qe(1:n));
xlabel('Number of Samples');
ylabel('e~(n)');
grid;
SNRQ=10*log10(var(s(1+tap+400:n+tap))/var(reconstruct(1+400:n)-s(1+tap+400:n+tap)))
SNR=10*log10(var(s(1+tap+400:n+tap))/var(estimateqs(1+400:n)-s(1+tap+400:n+tap)))


ADPCM with Time Domain LMS Filtering Algorithm
(Input is a sine wave with a sudden change)
% Adaptive Differential Pulse Code Modulation (ADPCM)
% Time Domain LMS Algorithm applied to a sinusoidal input with a sudden change
clear all;
% Generate a sinusoidal signal
Fs=8000;
f=100;
t=0:1/Fs:0.4;
s=sin(2*pi*f*t);
s(516)=-0.2;
s(517)=-0.3;
s(518)=-0.5;
% Jayant Quantizer (4-bit)
b=4;
do=(2^-(b-1))*1;
M=[0.9 0.9 0.9 0.9 1.2 1.6 2.0 2.4];
delta(1)=do;
% LMS Algorithm FIR Filter
n=1000;
u=0.5;
tap=16;
previous=zeros(1,tap);
wf=zeros(size(1:tap));
estimateqs(1)=previous*wf';
% 4-bit Adaptive standard Jayant quantizer is applied to s(n)
% LMS Algorithm is applied
for i=1:n
e(i)=s(i+tap)-estimateqs(i);
qe(i)=floor(e(i)/delta(i))*delta(i);
for j=2:tap
qslms(j)=previous(j-1);
end;
qslms(1)=estimateqs(i)+qe(i);
previous=qslms;
reconstruct(i)=estimateqs(i)+qe(i);
qslms=qslms-mean(qslms);
m(i)=mean(qslms);
wf(i+1,:)=wf(i,:)+u*qe(i)*qslms;
estimateqs(i+1)=qslms*wf(i+1,:)';
k=floor(abs(qe(i))/delta(i));
if k>8
k=8;
elseif k==0
k=1;
end;
delta(i+1)=delta(i)*M(k);
end;
reconstruct=reconstruct+m;
figure;
subplot(411);
plot(s(tap+1:tap+n));
ylabel('s(n)');
title('ADPCM (Time Domain LMS Algorithm, Tap=16, u=0.5, SNR=40.03 dB, SNRQ=59.46 dB)');
grid;
subplot(412);
plot(estimateqs(1:n));
ylabel('Estimated s~(n)');
grid;
subplot(413);
plot(reconstruct(1:n));
ylabel('s~(n)');
grid;
subplot(414);
plot(qe(1:n));
xlabel('Number of Samples');
ylabel('e~(n)');
grid;
SNRQ=10*log10(var(s(1+tap+900:n+tap))/var(reconstruct(1+900:n)-s(1+tap+900:n+tap)))
SNR=10*log10(var(s(1+tap+900:n+tap))/var(estimateqs(1+900:n)-s(1+tap+900:n+tap)))


ADPCM with Wavelet Domain LMS Filtering Algorithm
(Input is a sine wave)
% Adaptive Differential Pulse Code Modulation (ADPCM)
% DWT Domain LMS Algorithm applied to a sinusoidal signal
clear all;
% Generate a sinusoidal signal
Fs=8000;
f=100;
t=0:1/Fs:0.4;
s=sin(2*pi*f*t);
% Jayant Quantizer (4-bit)
b=4;
do=(2^-(b-1))*1;
M=[0.9 0.9 0.9 0.9 1.2 1.6 2.0 2.4];
delta(1)=do;
% LMS Algorithm FIR Filter
n=500;
u=0.6;
tap=16;
previous=zeros(1,tap);
w=zeros(size(1:tap));
estimateqs(1)=previous*w';
% 4-bit Adaptive standard Jayant quantizer is applied to s(n)
% LMS Algorithm is applied
for i=1:n
e(i)=s(i+tap)-estimateqs(i);
qe(i)=floor(e(i)/delta(i))*delta(i);
for j=2:tap
qslms(j)=previous(j-1);
end;
qslms(1)=estimateqs(i)+qe(i);
reconstruct(i)=estimateqs(i)+qe(i);
previous=qslms;
qslms=qslms-mean(qslms);
m(i)=mean(qslms);
% DWT of the FIR filter input qslms
[A1,D1]=dwt(qslms,'db6'); % 1st level
A11=A1(6:length(A1)); % Low frequency component of 1st level
D11=D1(1:length(D1)-5); % High frequency component of 1st level
[A2,D2]=dwt(A11,'db3'); % 2nd level
A22=A2(3:length(A2));
D22=D2(1:length(D2)-2);
qsdwt=[A22 D22 D11]; % DWT domain LMS FIR filter input
R=qsdwt'*qsdwt;
R=R+(0.0001/i)*eye(tap);
Rinv=inv(R);
g=-2*e(i)*qsdwt*Rinv;
w(i+1,:)=w(i,:)-u*g;
estimateqs(i+1)=qsdwt*w(i+1,:)';
k=floor(abs(qe(i))/delta(i));
if k>8
k=8;
elseif k==0
k=1;
end;
delta(i+1)=delta(i)*M(k);
end;
reconstruct=reconstruct+m;
figure;
subplot(411);
plot(s(1+tap:n+tap));
ylabel('s(n)');
title('ADPCM (DWT Domain LMS Algorithm, Tap=16, u=0.6, SNRQ=110.68 dB, SNR=126.09 dB)');
grid;
subplot(412);
plot(estimateqs(1:n));
ylabel('Estimated s~(n)');
grid;
subplot(413);
plot(reconstruct(1:n));
ylabel('s~(n)');
grid;
subplot(414);
plot(qe(1:n));
xlabel('Number of Samples');
ylabel('e~(n)');
grid;
SNRQ=10*log10(var(s(1+tap+400:n+tap))/var(reconstruct(1+400:n)-s(1+tap+400:n+tap)))
SNR=10*log10(var(s(1+tap+400:n+tap))/var(estimateqs(1+400:n)-s(1+tap+400:n+tap)))


ADPCM with Wavelet Domain LMS Filtering Algorithm
(Input is a sine wave with a sudden change)
% Adaptive Differential Pulse Code Modulation (ADPCM)
% DWT Domain LMS Algorithm, input is a sine wave with a sudden change
clear all;
% Generate a sinusoidal signal
Fs=8000;
f=100;
t=0:1/Fs:0.4;
s=sin(2*pi*f*t);
s(516)=-0.2;
s(517)=-0.3;
s(518)=-0.5;
% Jayant Quantizer (4-bit)
b=4;
do=(2^-(b-1))*1;
M=[0.9 0.9 0.9 0.9 1.2 1.6 2.0 2.4];
delta(1)=do;
% LMS Algorithm FIR Filter
n=1000;
u=0.5;
tap=16;
previous=zeros(1,tap);
w=zeros(size(1:tap));
estimateqs(1)=previous*w';
% 4-bit Adaptive standard Jayant quantizer is applied to s(n)
% LMS Algorithm is applied
for i=1:n
e(i)=s(i+tap)-estimateqs(i);
qe(i)=floor(e(i)/delta(i))*delta(i);
for j=2:tap
qslms(j)=previous(j-1);
end;
qslms(1)=estimateqs(i)+qe(i);
reconstruct(i)=estimateqs(i)+qe(i);
previous=qslms;
qslms=qslms-mean(qslms);
m(i)=mean(qslms);
% DWT of the FIR filter input qslms
[A1,D1]=dwt(qslms,'db6'); % 1st level
A11=A1(6:length(A1)); % Low frequency component of 1st level
D11=D1(1:length(D1)-5); % High frequency component of 1st level
[A2,D2]=dwt(A11,'db3'); % 2nd level
A22=A2(3:length(A2));
D22=D2(1:length(D2)-2);
qsdwt=[A22 D22 D11]; % DWT domain LMS FIR filter input
R=qsdwt'*qsdwt;
R=R+(0.0001/i)*eye(tap);
Rinv=inv(R);
g=-2*e(i)*qsdwt*Rinv;
w(i+1,:)=w(i,:)-u*g;
estimateqs(i+1)=qsdwt*w(i+1,:)';
k=floor(abs(qe(i))/delta(i));
if k>8
k=8;
elseif k==0
k=1;
end;
delta(i+1)=delta(i)*M(k);
end;
reconstruct=reconstruct+m;
figure;
subplot(411);
plot(s(1+tap:n+tap));
ylabel('s(n)');
title('ADPCM (DWT Domain LMS Algorithm, Tap=16, u=0.5, SNRQ=165.54, SNR=152.87)');
grid;
subplot(412);
plot(estimateqs(1:n));
ylabel('Estimated s~(n)');
grid;
subplot(413);
plot(reconstruct(1:n));
ylabel('s~(n)');
grid;
subplot(414);
plot(qe(1:n));
xlabel('Number of Samples');
ylabel('e~(n)');
grid;
SNRQ=10*log10(var(s(1+tap+900:n+tap))/var(reconstruct(1+900:n)-s(1+tap+900:n+tap)))
SNR=10*log10(var(s(1+tap+900:n+tap))/var(estimateqs(1+900:n)-s(1+tap+900:n+tap)))


MSE of Ensemble of 50 Runs, Time/Wavelet Domain ADPCM
(Input is a sine wave)
% MSE of an Ensemble of 50 runs
% Time Domain Adaptive Differential Pulse Code Modulation (ADPCM)
% LMS Algorithm applied to a sinusoidal input
clear all;
for kk=1:50 % ensemble index (renamed from k, which the quantizer reuses below)
% Generate a sinusoidal signal
Fs=8000;
f=100;
t=0:1/Fs:0.4;
s=sin(2*pi*f*t);
% Jayant Quantizer (4-bit)
b=4;
do=(2^-(b-1))*1;
M=[0.9 0.9 0.9 0.9 1.2 1.6 2.0 2.4];
delta(1)=do;
% LMS Algorithm FIR Filter
n=500;
u=0.8;
tap=16;
previous=zeros(1,tap);
wf=zeros(size(1:tap));
estimateqs(1)=previous*wf';
% 4-bit Adaptive standard Jayant quantizer is applied to s(n)
% LMS Algorithm is applied
for i=1:n
e(i)=s(i+tap)-estimateqs(i);
qe(i)=floor(e(i)/delta(i))*delta(i);
for j=2:tap
qslms(j)=previous(j-1);
end;
qslms(1)=estimateqs(i)+qe(i);
previous=qslms;
reconstruct(i)=estimateqs(i)+qe(i);
qslms=qslms-mean(qslms);
m(i)=mean(qslms);
wf(i+1,:)=wf(i,:)+u*qe(i)*qslms;
estimateqs(i+1)=qslms*wf(i+1,:)';
k=floor(abs(qe(i))/delta(i));
if k>8
k=8;
elseif k==0
k=1;
end;
delta(i+1)=delta(i)*M(k);
end;
reconstruct=reconstruct+m;
ee(kk,:)=qe;
end;
[a b]=size(ee);
for i=1:a
for j=1:b
e2(i,j)=ee(i,j)^2;
end;
end;
Esquare=mean(e2);
save timeMSE;
% Adaptive Differential Pulse Code Modulation (ADPCM)
% DWT Domain LMS Algorithm
clear all;
for kk=1:50
% Generate a sinusoidal signal
Fs=8000;
f=100;
t=0:1/Fs:0.4;
s=sin(2*pi*f*t);
% Jayant Quantizer (4-bit)
b=4;
do=(2^-(b-1))*1;
M=[0.9 0.9 0.9 0.9 1.2 1.6 2.0 2.4];
delta(1)=do;
% LMS Algorithm FIR Filter
n=500;
u=0.6;
tap=16;
previous=zeros(1,tap);
w=zeros(size(1:tap));
estimateqs(1)=previous*w';
% 4-bit Adaptive standard Jayant quantizer is applied to s(n)
% LMS Algorithm is applied
for i=1:n
e(i)=s(i+tap)-estimateqs(i);
qe(i)=floor(e(i)/delta(i))*delta(i);
for j=2:tap
qslms(j)=previous(j-1);
end;
qslms(1)=estimateqs(i)+qe(i);
reconstruct(i)=estimateqs(i)+qe(i);
previous=qslms;
qslms=qslms-mean(qslms);
m(i)=mean(qslms);
% DWT of the FIR filter input qslms
[A1,D1]=dwt(qslms,'db6'); % 1st level
A11=A1(6:length(A1)); % Low frequency component of 1st level
D11=D1(1:length(D1)-5); % High frequency component of 1st level
[A2,D2]=dwt(A11,'db3'); % 2nd level
A22=A2(3:length(A2));
D22=D2(1:length(D2)-2);
qsdwt=[A22 D22 D11]; % DWT domain LMS FIR filter input
R=qsdwt'*qsdwt;
R=R+(0.0001/i)*eye(tap);
Rinv=inv(R);
g=-2*e(i)*qsdwt*Rinv;
w(i+1,:)=w(i,:)-u*g;
estimateqs(i+1)=qsdwt*w(i+1,:)';
k=floor(abs(qe(i))/delta(i));
if k>8
k=8;
elseif k==0
k=1;
end;
delta(i+1)=delta(i)*M(k);
end;
reconstruct=reconstruct+m;
ee(kk,:)=qe;
end;
[a b]=size(ee);
for i=1:a
for j=1:b
e2(i,j)=ee(i,j)^2;
end;
end;
Esquare=mean(e2);
save dwtMSE;
clear all;
load timeMSE;
timeE=Esquare;
load dwtMSE;
dwtE=Esquare;
N=1:length(timeE);
figure;
plot(N,timeE,'-m',N,dwtE,'--b');
xlabel('Number of Iterations');
ylabel('Time Domain ADPCM MSE (-) vs. Wavelet Domain ADPCM MSE (--)');
title('MSE of an Ensemble of 50 Runs (ADPCM, Time/Wavelet Domain LMS)');
grid;


MSE of Ensemble of 50 Runs, Time/Wavelet Domain ADPCM
(Input is a sine wave with a sudden change)
% Adaptive Differential Pulse Code Modulation (ADPCM)
% LMS Algorithm applied to a sinusoidal input with a sudden change
clear all;
for kk=1:50
% Generate a sinusoidal signal
Fs=8000;
f=100;
t=0:1/Fs:0.4;
s=sin(2*pi*f*t);
s(516)=-0.2;
s(517)=-0.3;
s(518)=-0.5;
% Jayant Quantizer (4-bit)
b=4;
do=(2^-(b-1))*1;
M=[0.9 0.9 0.9 0.9 1.2 1.6 2.0 2.4];
delta(1)=do;
% LMS Algorithm FIR Filter
n=1000;
u=0.5;
tap=16;
previous=zeros(1,tap);
wf=zeros(size(1:tap));
estimateqs(1)=previous*wf';
% 4-bit Adaptive standard Jayant quantizer is applied to s(n)
% LMS Algorithm is applied
for i=1:n
e(i)=s(i+tap)-estimateqs(i);
qe(i)=floor(e(i)/delta(i))*delta(i);
for j=2:tap
qslms(j)=previous(j-1);
end;
qslms(1)=estimateqs(i)+qe(i);
previous=qslms;
reconstruct(i)=estimateqs(i)+qe(i);
qslms=qslms-mean(qslms);
m(i)=mean(qslms);
wf(i+1,:)=wf(i,:)+u*qe(i)*qslms;
estimateqs(i+1)=qslms*wf(i+1,:)';
k=floor(abs(qe(i))/delta(i));
if k>8
k=8;
elseif k==0
k=1;
end;
delta(i+1)=delta(i)*M(k);
end;
reconstruct=reconstruct+m;
ee(kk,:)=qe;
end;
[a b]=size(ee);
for i=1:a
for j=1:b
e2(i,j)=ee(i,j)^2;
end;
end;
Esquare=mean(e2);
save timeMSE;
% Adaptive Differential Pulse Code Modulation (ADPCM)
% DWT Domain LMS Algorithm
clear all;
for kk=1:50
% Generate a sinusoidal signal
Fs=8000;
f=100;
t=0:1/Fs:0.4;
s=sin(2*pi*f*t);
s(516)=-0.2;
s(517)=-0.3;
s(518)=-0.5;
% Jayant Quantizer (4-bit)
b=4;
do=(2^-(b-1))*1;
M=[0.9 0.9 0.9 0.9 1.2 1.6 2.0 2.4];
delta(1)=do;
% LMS Algorithm FIR Filter
n=1000;
u=0.3;
tap=16;
previous=zeros(1,tap);
w=zeros(size(1:tap));
estimateqs(1)=previous*w';
% 4-bit Adaptive standard Jayant quantizer is applied to s(n)
% LMS Algorithm is applied
for i=1:n
e(i)=s(i+tap)-estimateqs(i);
qe(i)=floor(e(i)/delta(i))*delta(i);
for j=2:tap
qslms(j)=previous(j-1);
end;
qslms(1)=estimateqs(i)+qe(i);
reconstruct(i)=estimateqs(i)+qe(i);
previous=qslms;
qslms=qslms-mean(qslms);
m(i)=mean(qslms);
% DWT of the FIR filter input qslms
[A1,D1]=dwt(qslms,'db6'); % 1st level
A11=A1(6:length(A1)); % Low frequency component of 1st level
D11=D1(1:length(D1)-5); % High frequency component of 1st level
[A2,D2]=dwt(A11,'db3'); % 2nd level
A22=A2(3:length(A2));
D22=D2(1:length(D2)-2);
qsdwt=[A22 D22 D11]; % DWT domain LMS FIR filter input
R=qsdwt'*qsdwt;
R=R+(0.0001/i)*eye(tap);
Rinv=inv(R);
g=-2*e(i)*qsdwt*Rinv;
w(i+1,:)=w(i,:)-u*g;
estimateqs(i+1)=qsdwt*w(i+1,:)';
k=floor(abs(qe(i))/delta(i));
if k>8
k=8;
elseif k==0
k=1;
end;
delta(i+1)=delta(i)*M(k);
end;
reconstruct=reconstruct+m;
ee(kk,:)=qe;
end;
[a b]=size(ee);
for i=1:a
for j=1:b
e2(i,j)=ee(i,j)^2;
end;
end;
Esquare=mean(e2);
save dwtMSE;
clear all;
load timeMSE;
timeE=Esquare;
load dwtMSE;
dwtE=Esquare;
N=1:length(timeE);
figure;
plot(N,timeE,'-m',N,dwtE,'--b');
xlabel('Number of Iterations');
ylabel('Time Domain ADPCM MSE (-) vs. Wavelet Domain ADPCM MSE (--)');
title('MSE of an Ensemble of 50 Runs (ADPCM, Time/Wavelet Domain LMS)');
grid;


Full Text

PAGE 1

WAVELET DOMAIN ADAPTIVE FILTERING IN SIGNAL PROCESSING by SuWei Chang B.S.E.E., University of Colorado, 1994 A thesis submitted to the University of Colorado at Denver in partial fulfillment of the requirements for the degree of Master of Science Electrical Engineering 1999

PAGE 2

This thesis for the Master of Science degree by Su-Wei Chang has been approved by Tarnal Bose Jan Bialasiewicz Miloje Radenk.ovic 4-/S /9'1 DATE

PAGE 3

Chang, Su-Wei (M.S. Electrical Engineering) Wavelet Domain Adaptive Filtering in Signal Processing Thesis Directed by Associate Professor Tarnal Bose ABSTRACT The standard time domain least-mean-square (LMS) algorithm is a widely used and standard adaptive finite impulse response (FIR) filtering algorithm. This thesis presents and studies a recently developed wavelet domain LMS adaptive FIR filtering algorithm. The proposed algorithm uses the inverse of the Cholesky factor in the wavelet domain to pre-whiten the input. The pre-whitened input has a lower eigenvalue spread than the time domain input. Therefore, the convergence rate of the LMS algorithm is improved. This thesis shows that the wavelet domain LMS algorithm can be used to improve the convergence rate for the system identification problem and for the adaptive differential pulse code modulation (ADPCM) coding/compression method, as compared to the standard time domain LMS FIR filtering algorithm. This abstract accurately presents the content of the candidate's thesis. I recommend its publication. Signed Tarnal Bose 111

PAGE 4

CONTENTS Figures .... ..... ................................................................................ ..... ....... ...... ............. v Tables ......................................................................................................................... vii Chapter 1. Introduction ............................................................................................................ 1 2. Wavelet Transform and Filter Banks ..................................................................... 3 2.1 Wavelet Transform ................. ........................................................... ............... 3 2.2 Wavelet Filter Banks ................................................................................... ..... 6 2.2.1 Two-Band Wavelet Filter Banks ...... ..... ....... .................................. .............. 10 Perfect Reconstruction ............................................. : ........ ............. ............... 10 Haar Wavelet Filter Banks .............................................................................. 12 Daubechies Compactly Supported Wavelets Filter Banks .............................. 13 2.2.2 M-Band Regular Perfect Reconstruction Filter Banks ............... .... ............... 14 3. FIR Adaptive Filtering Using Least Mean Square (LMS) Algorithm .................. 15 3.1 Wiener Filter .................................................................................................... 15 3.2 Steepest Descent Algorithm ............................................................................ 18 3.3 LMS Algorithm ...................................... ........................................ .... ............ 21 3.3.1 Standard Time Domain LMS Algorithm ........................................................ 21 3.3.2 Self-Orthogonalizing Adaptive LMS Filtering Algorithm .............................. 
23 3.3.3 Frequency Domain LMS Algorithm ............................................................... 24 3.3.4 Wavelet Domain LMS Algorithm ................................................................... 25 4. System Identification ............................................................................................ 29 4.1 System Identification Using the Time Domain LMS Filtering Algorithm ..... 29 4.2 System Identification Using the Wavelet Domain LMS Filtering Algorithm 39 4.3 Comparisons and Discussions ......................................................................... 49 5. Adaptive Differential Pulse Code Modulation (ADPCM) ................................... 51 5.1 ADPCM Using the Time Domain LMS Filtering Algorithm ......................... 55 5.2 ADPCM Using the Wavelet Domain LMS Filtering Algorithm ..... .............. 59 5.3 Comparisons and Discussions ......................................................................... 62 6. Conclusions .................................................................................................. ....... 64 Appendix .................................................................................................. ................. 66 References .................................................................................................. ............... 94 lV

PAGE 5

FIGURES Figure 2-1 Basis Functions and Time-Frequency Resolution of the STFT .... .... .................. 5 2-2 Basis Functions and Time-Frequency Resolution of the WT ............................. 6 2-3 Generic Two-Band Wavelet Analysis Filter Bank .............................................. 7 2-4 Example of Orthonormal Wavelet Bases ............................................................ 9 2-5 Two-Band Perfect Reconstruction Wavelet Filter Banks ................................. 10 2-6 Two-Band Perfect Reconstruction Filter Coefficients ...................................... 12 2-7 M-Band Perfect Reconstruction Filter Banks ................................................... 14 3-1 Block Diagram of a Generic Statistical Filtering Problem ............................... 15 3-2 Block Diagram of a Generic Steepest Descent FIR Filtering Algorithm .......... 20 3-3 Block Diagram of a General LMS FIR Filtering Algorithm ............................. 23 3-4 Block Diagram of a Wavelet Domain LMS FIR Adaptive Filter ..................... 25 4-1 Block Diagram of a System Identification Model Using Time Domain LMS Filtering Algorithm .......................................................................................... 30 4-2 The Desired Output d(n) vs. the Filter Output y(n) (Time Domain LMS Algorithm, f.J = 0.05 ) ........................................................................................ 36 4-3 The Estimated Error (Time Domain LMS Algorithm, f.J = 0.05) ..................... 36 4-4 Adaptive Filter Coefficients h0 hi, h2 and h3 .................................................. 37 4-5 Adaptive Filter Coefficients }4, h5 h6, and h7 .................................................. 37 4-6 Adaptive Filter Coefficients hs, h9, h10, and hu ................................................ 38 4-7 Adaptive Filter Coefficients h12, hi3 hi4, and his ............................................. 
38 4-8 Adaptive Filter Impulse Response After 500 Iterations (Time Domain LMS Algorithm, f.J = 0.05 ) ............... ................. ......................................................... 39 4-9 Block Diagram of a System Identification Model Using Wavelet Domain LMS Filtering Algorithm .......................................................................................... 40 4-10 The Desired Output d(n) vs. the Filter Output y(n) (Wavelet Domain LMS Algorithm, f.J = 0.05 ) ......................................................................................... 45 4-11 The Estimated Error (Wavelet Domain LMS Algorithm, f.J = 0 .05 ) ................ 45 4-12 Wavelet Domain Filter Impulse Response After 500 Iterations (Wavelet Domain LMS Algorithm, f.J = 0.05 ) ........ .. .......................................... .. .. ......... 46 v

PAGE 6

4-13 Time Domain Filter Impulse Response After 500 Iterations (Wavelet Domain LMS Algorithm, μ = 0.05)
4-14 The Desired Output d(n) vs. the Filter Output y(n) (Wavelet Domain LMS Algorithm, μ = 0.3)
4-15 The Estimated Error (Wavelet Domain LMS Algorithm, μ = 0.3)
4-16 Wavelet Domain Filter Impulse Response After 500 Iterations (Wavelet Domain LMS Algorithm, μ = 0.3)
4-17 Time Domain Filter Impulse Response After 500 Iterations (Wavelet Domain LMS Algorithm, μ = 0.3)
4-18 Mean-Squared Error of an Ensemble of 100 Runs (μ = 0.05)
4-19 Mean-Squared Error of an Ensemble of 100 Runs (Time Domain LMS μ = 0.05, Wavelet Domain μ = 0.3)
5-1 Generic PCM Encoder
5-2a Generic ADPCM Encoder
5-2b Generic ADPCM Decoder
5-3 Generic Time Domain ADPCM System
5-4 Time Domain ADPCM Applied to a Sine Wave
5-5 Time Domain ADPCM Applied to a Sine Wave with a Sudden Change
5-6 Generic Wavelet Domain ADPCM System
5-7 Wavelet Domain ADPCM Applied to a Sine Wave
5-8 Wavelet Domain ADPCM Applied to a Sine Wave with a Sudden Change
5-9 MSE of an Ensemble of 50 Runs (Input is a Sine Wave)
5-10 MSE of an Ensemble of 50 Runs (Sine Wave with a Sudden Change)


TABLES
2-1 The h0 Coefficients of Daubechies Compactly Supported Wavelets
5-1 Jayant Quantizer


1. Introduction

Adaptive filtering algorithms may be classified into two groups: linear adaptive filtering and nonlinear adaptive filtering. Simply described, a filter is considered linear if the filter output varies linearly as a function of the filter input; otherwise, the filter is nonlinear. Some of the common linear adaptive filtering algorithms are the steepest descent algorithm, the least-mean-square (LMS) algorithm, the recursive least-squares (RLS) algorithm, and the square root (QR) adaptive algorithm. Some of the common nonlinear adaptive filtering algorithms include blind deconvolution, the back-propagation learning network, and the radial basis function network. Detailed descriptions of the adaptive filtering algorithms mentioned above can be found in [2].

A recently developed wavelet domain LMS adaptive FIR filtering algorithm [1] shows that the autocorrelation matrix of the wavelet domain FIR filter input has a special sparse structure. The correlation matrices of the "details" (see Chapter 2 for the definition of "details") of the wavelet domain FIR filter input also have special sparse structures. From [1], the autocorrelation matrix of the filter input can be estimated and updated at each recursive step using the correlation matrices of the input "details", reducing the computational complexity of the algorithm.


The purpose of this thesis is to study, simulate, and demonstrate that the wavelet domain LMS adaptive FIR filtering algorithm provides a convergence rate improvement over the time domain LMS adaptive FIR filtering algorithm. This thesis also shows that the wavelet domain algorithm can be applied to the system identification problem and to the ADPCM coding/compression method.

Chapter 2 provides an introduction to the theory of wavelet transforms and filter banks. The chapter focuses on the details of two-band perfect reconstruction filter bank theory, which is used in the simulations of subsequent chapters. The M-band perfect reconstruction filter bank theory is discussed briefly to introduce the reader to alternate techniques. Chapter 3 presents the derivation of the LMS FIR filtering algorithm from the steepest descent algorithm. Time domain, frequency domain, and wavelet domain LMS FIR filtering algorithms are included. Chapters 4 and 5 describe two applications of the LMS adaptive algorithms: system identification and ADPCM. Simulations of these applications using the time domain and wavelet domain LMS adaptive filtering algorithms are also presented. The thesis concludes with Chapter 6, which summarizes and compares the results of the simulations. Advantages and disadvantages of the time domain and wavelet domain LMS FIR filtering algorithms are also presented.


2. Wavelet Transform and Filter Banks

An overview comparing the Fourier transform and the wavelet transform is given in section 2.1. The traditional Fourier transform bases come from differential equations, which do not have a dilation equation. Wavelets include the scaling function, which is the solution of the dilation equation. The dilation equation involves two time scales; therefore, its solution φ(t) is finitely bounded. The scaling function φ(t) is nonzero in a finite interval, and therefore φ(2t) is zero outside of half of that interval. This bounded characteristic makes the wavelet bases localized. To describe the relationships between the low pass filter h0 and the scaling function, and between the high pass filter h1 and the wavelets, section 2.2 presents the dilation and wavelet equations and the derivation of their solutions.

2.1 Wavelet Transform

The Fourier transform is a traditional way of representing a signal in the frequency domain. The standard Fourier transform of a continuous time signal x(t) is

X(ω) = (1/√(2π)) ∫₋∞^{+∞} x(t) e^(−jωt) dt.   (2.1)


The Fourier transform described above works fine for stationary signals (e.g., sinusoidal waves), where the frequency content of the signal does not vary with time. However, in many applications (e.g., audio and video signal processing), the signals are nonstationary, and one is interested in obtaining localized frequency information over a specific period of time. Another difficulty with the standard Fourier transform, even with a stationary signal, arises when there is a sudden change in the signal (e.g., noise transients). The result is seen as broadband energy over the entire frequency axis because of the infinite extent of the basis functions [3]. To overcome these shortcomings of the standard Fourier transform, the windowed or short-time Fourier transform (STFT) was developed. The STFT isolates the time-localized interval of x(t) of interest and takes the Fourier transform of it as

X_window(ω, τ) = ∫₋∞^{+∞} x(t) g(t − τ) e^(−jωt) dt.   (2.2)

Another alternative is to use the wavelet transform to localize the time-frequency characteristics of a signal. The wavelet transform of x(t) is

X_wavelet(a, b) = |a|^(−1/2) ∫₋∞^{+∞} x(t) ψ((t − b)/a) dt,   (2.3)

where a is the scaling parameter. Small values of the absolute value of the scaling parameter |a| represent high frequencies, and large values of |a| represent low frequencies. The time localization is controlled by the translation parameter b. The functions ψ_ab are "wavelets" and the function ψ is the "mother wavelet".
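As a concrete illustration of equation (2.3), the sketch below approximates the wavelet transform at a single (a, b) point by a Riemann sum, using the Haar function as the mother wavelet. The step-edge test signal, grid spacing, and evaluation points are illustrative assumptions, not values taken from this thesis.

```python
def haar_mother(t):
    # Haar mother wavelet: +1 on [0, 0.5), -1 on [0.5, 1), 0 elsewhere
    if 0.0 <= t < 0.5:
        return 1.0
    if 0.5 <= t < 1.0:
        return -1.0
    return 0.0

def cwt_point(x, dt, a, b):
    # Riemann-sum approximation of equation (2.3):
    # X(a, b) = |a|^(-1/2) * integral of x(t) * psi((t - b)/a) dt
    total = 0.0
    for n, xn in enumerate(x):
        t = n * dt
        total += xn * haar_mother((t - b) / a)
    return (abs(a) ** -0.5) * total * dt

# A step edge at t = 1 produces a large coefficient when the wavelet
# straddles the edge, and a near-zero coefficient away from it.
dt = 0.01
signal = [0.0 if n * dt < 1.0 else 1.0 for n in range(300)]
at_edge = cwt_point(signal, dt, a=0.5, b=0.75)
away = cwt_point(signal, dt, a=0.5, b=2.0)
```

The magnitude of `at_edge` is large while `away` is essentially zero; this is exactly the time localization that the infinite-extent Fourier basis functions cannot provide.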


Figure 2-1 shows a comparison between the windowed Fourier transform and the wavelet transform [4]. According to Figure 2-1, the windowed Fourier transform provides only limited time domain information, since the time-axis bandwidth is constant [3]. Figure 2-2 shows that for the wavelet transform, a higher frequency corresponds to a wider bandwidth on the frequency axis but a narrower bandwidth on the time axis, while a lower frequency corresponds to a narrower bandwidth on the frequency axis but a wider bandwidth on the time axis [3]. Therefore, the wavelet transform (WT) is more capable than the STFT of presenting a multiresolution view of the time-frequency characteristics of a signal.

Figure 2-1 Basis Functions and Time-Frequency Resolution of the STFT


Figure 2-2 Basis Functions and Time-Frequency Resolution of the WT

2.2 Wavelet Filter Banks

Filter banks are at the heart of the wavelet domain algorithm simulations. The signal is decomposed, or transformed to the wavelet domain, using the analysis filter bank and reconstructed using the synthesis filter bank. Figure 2-3 shows the block diagram of a generic two-band analysis filter bank. In Figure 2-3, H0 represents the low pass filter and H1 represents the high pass filter in the analysis bank [7]. The output signal of each filtering stage is downsampled by a factor of two.


Figure 2-3 Generic Two-Band Wavelet Analysis Filter Bank

From Figure 2-3, the outputs of the filters (a3, d3, a2, d2, a1, d1, etc.) are called "details" of the input signal x(t). To describe the relations between the highpass and lowpass filters in Figure 2-3 and the scaling and wavelet functions, the dilation equation (equation 2.4) and the wavelet equation (equation 2.5) are introduced as

φ(t) = 2 Σ_k h0(k) φ(2t − k),   (2.4)

ψ(t) = 2 Σ_k h1(k) φ(2t − k).   (2.5)

The dilation equation can be solved by taking the continuous-time Fourier transform of both sides of the equation [5]. The results (equations 2.6-2.9) show that the solution of the dilation equation is a product of cascaded filters.


Φ(ω) = Φ(ω/2) H0(ω/2)   (2.6)

Φ(ω) = Φ(ω/4) H0(ω/4) H0(ω/2)   (2.7)

Φ(ω) = Φ(ω/8) H0(ω/8) H0(ω/4) H0(ω/2)   (2.8)

Φ(ω) = [ ∏_{j=1}^{∞} H0(ω/2^j) ] Φ(0)   (2.9)

To obtain φ(t), perform the inverse Fourier transform of Φ(ω) shown in equation 2.9. In general, continued filtering and downsampling of x(t), until the output of the low pass filter is out of resolution, will lead to the scaling function φ(t). The scaling function depends on the values of the low pass filter coefficients. The wavelet equation (equation 2.5) can be solved after obtaining φ(t). Figure 2-4 shows some examples of orthonormal wavelet bases [4]. A wavelet basis is orthonormal when the scaling and translation parameters are nonoverlapping. The finite-interval-bounded characteristics of φ(t) and ψ(t) are shown clearly in Figure 2-4 [4].
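The cascade implied by equation (2.9) can be imitated numerically: starting from a single impulse, repeatedly upsample by two and convolve with the low pass filter, scaled by 2 to match the factor in equation (2.4). The sketch below is a minimal version using the Haar low pass filter normalized so its coefficients sum to one; the iteration count and that normalization are assumptions of this illustration.

```python
def cascade(h0, iterations):
    # Iterate the dilation equation phi(t) = 2 * sum_k h0(k) phi(2t - k)
    # on a dyadic grid: upsample the current samples by two, then
    # convolve with h0 and scale by 2.
    phi = [1.0]
    for _ in range(iterations):
        up = []
        for v in phi:                      # upsample by two
            up.extend([v, 0.0])
        out = [0.0] * (len(up) + len(h0) - 1)
        for n, v in enumerate(up):         # convolve with 2 * h0
            for k, hk in enumerate(h0):
                out[n + k] += 2.0 * v * hk
        phi = out
    return phi

# Haar low pass filter with sum(h0) = 1; the iteration converges to the
# box function phi(t) = 1 on [0, 1), sampled on a grid of spacing 2^-6.
samples = cascade([0.5, 0.5], iterations=6)
```

Running the same cascade with a longer low pass filter (e.g., a Daubechies filter) traces out the correspondingly longer, continuous scaling function, which is why the scaling function is said to depend only on the low pass filter coefficients.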


Figure 2-4 Examples of Orthonormal Wavelet Bases


2.2.1 Two-Band Wavelet Filter Banks

Two-band wavelet filter banks are used for the simulations in this thesis because of their simplicity. The Haar filter bank is used for the system identification problem in Chapter 4 to verify the convergence rate improvement of the wavelet domain LMS FIR filtering algorithm. The Haar basis function is orthonormal, but it is not continuous. Therefore, the Haar basis is in general not adequate for signal processing [3]. The Daubechies compactly supported wavelet filter banks have continuous basis functions and are widely used. The Daubechies compactly supported wavelet filters with lengths three and six were used for the ADPCM simulations in this thesis.

2.2.1.1 Perfect Reconstruction

Figure 2-5 shows a block diagram of a generic two-band perfect reconstruction filter bank [6]. In Figure 2-5, the gap between the analysis bank and the synthesis bank indicates that the downsampled signal can be transmitted, compressed, coded, or stored with a lower computational complexity. The signal is subsequently reconstructed using the synthesis filter bank.

Figure 2-5 Two-Band Perfect Reconstruction Wavelet Filter Banks


Two crucial requirements make perfect reconstruction possible: the alias cancellation condition (equation 2.10) and the no distortion condition (equation 2.11), given by

F0(z)H0(−z) + F1(z)H1(−z) = 0,   (2.10)

F0(z)H0(z) + F1(z)H1(z) = 2z^(−l).   (2.11)

The design of the filters H0(z), H1(z), F0(z), and F1(z) proceeds as follows:

1) To satisfy equation 2.10, choose F0(z) = H1(−z) and F1(z) = −H0(−z). H0(z) and F0(z) are low pass filters.

2) Define the product filter P0(z) = F0(z)H0(z).

3) Let P1(z) = F1(z)H1(z); substituting the values of F1(z) and H1(z) from step 1) gives P1(z) = −H0(−z)F0(−z) = −P0(−z).

4) Now equation 2.11 simplifies to

P0(z) − P0(−z) = 2z^(−l).   (2.12)

5) Design a low pass filter that satisfies equation (2.12) and use step 1) to derive the other filters. Figure 2-6 shows the relations between the filter coefficients.
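The two conditions can be verified numerically with simple polynomial arithmetic in z^(−1). The sketch below checks them for the Haar filters of section 2.2.1.2, with the synthesis filters derived from step 1); the coefficient-list representation (powers of z^(−1)) is an implementation choice of this illustration, not part of the thesis.

```python
def poly_mul(a, b):
    # Multiply two polynomials in z^-1 given as coefficient lists
    out = [0.0] * (len(a) + len(b) - 1)
    for i, ai in enumerate(a):
        for j, bj in enumerate(b):
            out[i + j] += ai * bj
    return out

def at_minus_z(a):
    # Coefficients of A(-z): negate the odd powers of z^-1
    return [c * (-1) ** i for i, c in enumerate(a)]

s = 2 ** -0.5
h0 = [s, s]                          # analysis low pass
h1 = [s, -s]                         # analysis high pass
f0 = at_minus_z(h1)                  # F0(z) = H1(-z), step 1)
f1 = [-c for c in at_minus_z(h0)]    # F1(z) = -H0(-z), step 1)

# Alias term F0(z)H0(-z) + F1(z)H1(-z): must be identically zero (2.10).
alias = [x + y for x, y in zip(poly_mul(f0, at_minus_z(h0)),
                               poly_mul(f1, at_minus_z(h1)))]

# Distortion term F0(z)H0(z) + F1(z)H1(z): must be a pure delay (2.11).
dist = [x + y for x, y in zip(poly_mul(f0, h0), poly_mul(f1, h1))]
```

For the Haar filters the distortion term comes out as [0, 2, 0], i.e. 2z^(−1), so the reconstructed signal is a one-sample delay of the input.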


Figure 2-6 Two-Band Perfect Reconstruction Filter Coefficients

2.2.1.2 Haar Wavelet Filter Banks

The Haar filter banks consist of four simple filters: the low pass (equation 2.13) and high pass (equation 2.14) filters in the analysis filter bank, and the low pass (equation 2.15) and high pass (equation 2.16) filters in the synthesis bank.

h0 = [1/√2, 1/√2]   (2.13)

h1 = [1/√2, −1/√2]   (2.14)

f0 = [1/√2, 1/√2]   (2.15)

f1 = [−1/√2, 1/√2]   (2.16)
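A one-level decomposition and reconstruction with these four filters can be written in a few lines. The sketch below is an illustrative implementation (the convolution and decimation conventions are assumptions of this sketch); with the perfect reconstruction conditions satisfied, the synthesis output reproduces the input delayed by one sample.

```python
def conv(a, b):
    out = [0.0] * (len(a) + len(b) - 1)
    for i, ai in enumerate(a):
        for j, bj in enumerate(b):
            out[i + j] += ai * bj
    return out

def analyze(x, h0, h1):
    # Filter with h0/h1, then downsample by two (keep even-index samples)
    return conv(x, h0)[0::2], conv(x, h1)[0::2]

def synthesize(a, d, f0, f1):
    # Upsample by two (insert zeros), filter with f0/f1, and sum
    def up(c):
        out = []
        for v in c:
            out.extend([v, 0.0])
        return out
    lo, hi = conv(up(a), f0), conv(up(d), f1)
    return [p + q for p, q in zip(lo, hi)]

s = 2 ** -0.5
h0, h1 = [s, s], [s, -s]     # analysis pair, equations (2.13)-(2.14)
f0, f1 = [s, s], [-s, s]     # synthesis pair, equations (2.15)-(2.16)

x = [1.0, 2.0, 3.0, 4.0, 5.0, 6.0]
a, d = analyze(x, h0, h1)            # approximation and detail signals
xhat = synthesize(a, d, f0, f1)      # x delayed by one sample
```

The intermediate vectors `a` and `d` are the one-level "details" of Figure 2-3; cascading `analyze` on `a` produces the deeper levels (a2, d2, and so on).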


2.2.1.3 Daubechies Compactly Supported Wavelets Filter Banks

The h0 (low pass filter) coefficients of the Daubechies compactly supported wavelet filter banks are shown in Table 2-1 [4]. Filter coefficients for h1, f0, and f1 can be derived using Figure 2-6.

Table 2-1 The h0 (Low Pass) Coefficients of Daubechies Compactly Supported Wavelets (N = 2 through 10, reproduced from [4])


2.2.2 M-Band Regular Perfect Reconstruction Filter Banks

Section 2.1 shows that the two-band wavelet transform works well for representing the localized time-frequency characteristics of transients in a low frequency signal. However, a long-duration high frequency signal (wide bandwidth on the frequency axis and narrow bandwidth on the time axis) is not well decomposed using the two-band wavelet transform described in the previous sections. M-band wavelets provide the capability of zooming in on narrow bandwidth (in the time axis) high frequency components of a signal. Figure 2-7 shows a generic M-band wavelet filter bank [8]. The design of M-band wavelet filters for perfect reconstruction is not as straightforward as for the two-band wavelet filter bank. Please refer to [8] for details of M-band K-regular perfect reconstruction filter banks.

Figure 2-7 M-Band Perfect Reconstruction Filter Banks


3. FIR Adaptive Filtering Using the Least Mean Square (LMS) Algorithm

The LMS algorithm is a widely used linear adaptive filtering algorithm. In order to understand the derivation of the LMS algorithm, the Wiener filter and the steepest descent algorithm are introduced in sections 3.1 and 3.2. Section 3.3 describes the time domain LMS algorithm, the frequency domain LMS algorithm, and the wavelet domain LMS algorithm. Please note that only real-valued systems are considered in this thesis; therefore, the conjugate terms are eliminated from the equations. In order to distinguish between scalar, vector, and matrix variables, the vector and matrix variables are presented in boldface characters.

3.1 Wiener Filter

Figure 3-1 shows the block diagram of a general statistical filtering problem [2]. The purpose of the linear discrete-time filter is to estimate the desired response d(n), given the input samples u(0), u(1), u(2), ... and the filter coefficients w0, w1, w2, ....

Figure 3-1 Block Diagram of a Generic Statistical Filtering Problem


The filter output y(n) and the estimated error e(n) of the system shown in Figure 3-1 are defined as

y(n) = Σ_{k=0}^{∞} w_k u(n − k), and   (3.1)

e(n) = d(n) − y(n).   (3.2)

For the optimum filter design, the mean-square error of the system needs to be minimized. The cost function J is the function that must be minimized for the system to reach the optimum solution. Therefore, the cost function of the system shown in Figure 3-1 can be defined as the mean-square error of the system. Equation (3.3) shows the definition of the mean-square error, where E[·] is the expectation operator:

J = E[e²(n)].   (3.3)

The optimum solution of the problem shown in Figure 3-1 occurs when the output of the linear filter, y(n), becomes very close to the desired response d(n); in other words, when the minimum mean-squared error of the system is obtained. Let y0(n) be the output of the optimized filter w0; the minimum mean-squared error J_min is defined through

y0(n) = Σ_{k=0}^{∞} w_{0k} u(n − k),   (3.4)


e0(n) = d(n) − y0(n), and   (3.5)

J_min = E[e0²(n)],   (3.6)

where e0(n) is the optimum estimation error and y0(n) is the optimum filter output. The principle of orthogonality may be summarized as: "The necessary and sufficient condition for the cost function J to attain its minimum value is that the corresponding value of the estimated error e0(n) is orthogonal to each input sample that enters into the estimation of the desired response at time n" [2]. The principle of orthogonality provides a method to verify that the optimum filter design is reached. Equation (3.7) states the principle of orthogonality:

E[u(n − k) e0(n)] = 0.   (3.7)

Substituting equations (3.4) and (3.5) into equation (3.7), the Wiener-Hopf equations are

E[u(n − k)(d(n) − Σ_{i=0}^{∞} w_{0i} u(n − i))] = 0,  k = 0, 1, 2, ...   (3.8)

Rearranging equation (3.8) yields

Σ_{i=0}^{∞} w_{0i} E[u(n − k) u(n − i)] = E[u(n − k) d(n)],  k = 0, 1, 2, ...   (3.9)

Let R be the autocorrelation matrix of the filter input and p be the cross-correlation vector of the filter input and the desired response, where R and p are defined as


R = E[u(n) u^T(n)], and   (3.10)

p = E[u(n) d(n)].   (3.11)

Combining equations (3.9), (3.10), and (3.11), the Wiener-Hopf equations reduce to the compact matrix form

R w0 = p.   (3.12)

The solution to the Wiener-Hopf equations is

w0 = R^(−1) p.   (3.13)

Please note that the filter input of the Wiener filter is assumed to be zero mean. The solution shown in equation (3.13) is straightforward, but it is computationally expensive, especially for a large number of tap weights (the inverse of R has to be computed). Another way to obtain the minimum mean-squared error of the system is to use the steepest descent algorithm.

3.2 Steepest Descent Algorithm

The method of steepest descent is a gradient-based iterative algorithm. The step size can either be chosen to accomplish the maximum decrease of the cost function in each iterative step or be chosen as a constant (fixed step size) [10]. For the purposes of this thesis, the fixed step-size algorithm is used. This algorithm uses the negative gradient as the direction vector of the cost function to reach the


minimum point of the cost function. The algorithm starts with some assumed initial conditions, computes the gradient vector using those initial conditions, and updates the filter coefficients using the negative gradient as the direction vector. The recursive process continues until the minimum of the cost function is reached. The cost function J of the system shown in Figure 3-1 can be expressed as a function of the filter tap weights. Rearranging equation (3.3), using equations (3.1) and (3.2), yields [9]

J(w) = E[e²(n)]
     = E[e(n)e(n)]
     = E[d(n)d(n)] − 2 E[d(n) Σ_{k=0}^{M−1} w_k u(n − k)] + E[Σ_{k=0}^{M−1} Σ_{i=0}^{M−1} w_k w_i u(n − k) u(n − i)]
     = σ_d² − 2 Σ_{k=0}^{M−1} w_k E[d(n) u(n − k)] + Σ_{k=0}^{M−1} Σ_{i=0}^{M−1} w_k w_i E[u(n − k) u(n − i)]
     = σ_d² − 2 p^T w(n) + w^T(n) R w(n),   (3.14)

where σ_d² is the variance of the desired response, assuming that the desired response d(n) is zero mean. The minimum mean-squared error of the system, or the minimum point of the cost function J, can be determined by obtaining the gradient of J and setting it to zero. From equation (3.14), the gradient of J is

∇J(n) = −2p + 2R w(n).   (3.15)

Setting ∇J(n) = 0 to find the minimum point of J(n),

∇J(n) = −2p + 2R w0(n) = 0, and   (3.16)


w0 = R^(−1) p.   (3.17)

This yields the same solution found in equation (3.13), which is the solution of the Wiener-Hopf equations based on the principle of orthogonality. Figure 3-2 shows a generic block diagram of the steepest descent FIR filtering algorithm.

Figure 3-2 Block Diagram of a Generic Steepest Descent FIR Filtering Algorithm

Equation (3.19) is the update equation of the algorithm shown in Figure 3-2, where μ is the fixed step size of the algorithm. The update equation computes the filter coefficients for the subsequent recursive step:

w(n + 1) = w(n) + (1/2) μ (−∇J(n)),   (3.18)

w(n + 1) = w(n) + μ (p − R w(n)),  n = 0, 1, 2, ...   (3.19)
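The recursion (3.19) can be exercised directly when the statistics are known. In the sketch below, R and p are small assumed values (not taken from this thesis), μ is chosen inside the stability range, and the iterate converges to the Wiener solution w0 = R^(−1)p.

```python
# Steepest-descent update (3.19): w(n+1) = w(n) + mu * (p - R w(n)).
# R and p are illustrative assumed statistics; for this R the
# eigenvalues are 1.5 and 0.5, so stability requires 0 < mu < 2/1.5.
R = [[1.0, 0.5], [0.5, 1.0]]
p = [1.0, 0.5]
mu = 0.5
w = [0.0, 0.0]
for _ in range(200):
    Rw = [R[0][0] * w[0] + R[0][1] * w[1],
          R[1][0] * w[0] + R[1][1] * w[1]]
    w = [w[i] + mu * (p[i] - Rw[i]) for i in range(2)]
# The fixed point satisfies R w = p, i.e. w = R^-1 p = [1, 0],
# the same Wiener solution as equations (3.13) and (3.17).
```

Because the update uses the exact R and p, there is no gradient noise here; the practical difficulty, addressed next by the LMS algorithm, is that R and p are unknown in real applications.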


3.3 LMS Algorithm

To proceed with the steepest descent algorithm described in section 3.2, the statistical characteristics of the filter input u(n) and the desired response d(n) have to be known in order to determine R and p. In reality, a priori knowledge of the statistical characteristics of the system is generally not available. The simplest solution is to estimate R and p using the available data. The estimates of R and p are shown in equations (3.20) and (3.21):

R̂ = u(n) u^T(n),   (3.20)

p̂ = u(n) d(n).   (3.21)

3.3.1 Standard Time Domain LMS Algorithm

Substituting the estimated R and p from equations (3.20) and (3.21) into equation (3.19) results in

w(n + 1) = w(n) + μ (u(n) d(n) − u(n) u^T(n) w(n)),   (3.22)

w(n + 1) = w(n) + μ u(n)(d(n) − u^T(n) w(n)),   (3.23)

w(n + 1) = w(n) + μ u(n)(d(n) − y(n)), and   (3.24)

w(n + 1) = w(n) + μ u(n) e(n),  n = 0, 1, 2, ...   (3.25)


Equation (3.25) is the standard time domain LMS FIR filtering algorithm. The simplicity of the LMS algorithm comes from using the estimated values of R and p. For stability of the system, the fixed step size μ must lie between 0 and 2/λ_max, where λ_max is the maximum eigenvalue of the autocorrelation matrix of the filter input. The difference between the final value of the cost function obtained by the LMS algorithm, J(∞), and the minimum value J_min obtained by the Wiener filter is called the excess mean-squared error. The ratio of the excess mean-squared error to the minimum mean-squared error obtained by the Wiener filter is called the misadjustment. The misadjustment can be controlled by the value assigned to the fixed step size μ. A large value of μ leads the system to converge faster, but it also leads to a larger misadjustment. This is because the LMS algorithm uses estimated values of R and p at each recursive step to compute the estimated gradient vector, so the filter tap weights suffer from gradient noise. With a large step size μ the system progresses faster, but the gradient noise is not filtered out as much as when a small value of μ is used. A small value of μ makes the system converge more slowly, but it reduces the misadjustment. Figure 3-3 shows a generic block diagram of the LMS FIR filtering algorithm.
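As a minimal demonstration of the recursion (3.25), the sketch below adapts a two-tap filter to match an assumed two-tap "unknown" system driven by white noise; the system coefficients, step size, and iteration count are illustrative assumptions rather than values from this thesis.

```python
import random

random.seed(1)
g = [0.8, 0.3]        # assumed "unknown" system to be identified
h = [0.0, 0.0]        # adaptive filter, zero initial condition
mu = 0.05
u_prev = 0.0
for _ in range(5000):
    u_now = random.gauss(0.0, 1.0)
    u = [u_now, u_prev]                    # input vector u(n)
    d = g[0] * u[0] + g[1] * u[1]          # desired response d(n)
    y = h[0] * u[0] + h[1] * u[1]          # filter output y(n)
    e = d - y                              # estimation error e(n)
    h = [h[k] + mu * u[k] * e for k in range(2)]   # LMS update (3.25)
    u_prev = u_now
# With a noise-free desired response the tap weights converge to g.
```

Because the desired response here contains no measurement noise, the excess mean-squared error vanishes in the limit; adding noise to d(n) would leave the weights fluctuating around g with a misadjustment governed by μ, as described above.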


Figure 3-3 Block Diagram of a General LMS FIR Filtering Algorithm

Please note that, to distinguish between the time domain adaptive filter and the wavelet domain adaptive filter discussed in section 3.3.4, the time domain adaptive filter in equations (3.22) through (3.25) will be denoted by h(n).

3.3.2 Self-Orthogonalizing Adaptive LMS Filtering Algorithm

The self-orthogonalizing adaptive LMS algorithm is a modification of the LMS algorithm that improves its convergence rate. The filter input is pre-whitened, i.e., transformed into a less correlated (or uncorrelated) vector, and this transformed vector is then used as the LMS filter input. The convergence rate of the LMS algorithm is improved because the eigenvalue spread of the filter input is reduced. The update equation for this algorithm is [2]


w(n + 1) = w(n) + α R^(−1) u(n) e(n),  n = 0, 1, 2, ...   (3.26)

where α is a constant between 0 and 1. According to [4], α can be set to 1/M, where M is the filter length. A statement in [2] makes this modified LMS algorithm very powerful: "An important property of the self-orthogonalizing filtering algorithm is that, in theory, it guarantees a constant rate of convergence, irrespective of input statistics". For detailed information on this algorithm, please refer to [2].

3.3.3 Frequency Domain LMS Algorithm

The self-orthogonalizing filtering algorithm described in section 3.3.2 provides a convergence rate improvement over the LMS algorithm. The disadvantage of this algorithm is that it is computationally expensive. To reduce the computational complexity while retaining the fast convergence of the self-orthogonalizing filtering algorithm, the frequency domain LMS algorithm was developed. The frequency domain LMS algorithm uses the discrete cosine transform (DCT) to transform the input vector to the frequency domain, estimates the eigenvalues using the estimated autocorrelation of the input, and updates the filter coefficients. The outcome of this algorithm is a more efficient algorithm than the time domain


algorithm. The frequency domain LMS algorithm is outside the scope of this thesis; for a detailed description of this algorithm, please refer to [2].

3.3.4 Wavelet Domain LMS Algorithm

The main idea of the wavelet domain LMS algorithm is similar to that of the frequency domain LMS algorithm: to reduce computational complexity. A recently developed wavelet domain LMS adaptive filtering algorithm [1] uses the sparse structures of matrices in the wavelet domain to estimate and update the autocorrelation matrix of the filter input. Figure 3-4 shows the block diagram of a wavelet domain LMS FIR adaptive filter.

Figure 3-4 Block Diagram of a Wavelet Domain LMS FIR Adaptive Filter


Let y(n) be the wavelet transform of the filter input x(n), and let Q be the wavelet transformation matrix, so that y(n) = Qx(n). The matrix Q contains the high pass and low pass filters and decimators described in Chapter 2. For example, let a two-band, one-level wavelet transform be used to decompose the filter input x(n). Let a be the output of the low pass filter and decimator of the analysis filter bank, and let b be the output of the high pass filter and decimator of the analysis filter bank. The wavelet transform of the filter input x(n) is then y(n) = [a b], where a and b are vectors of half the size of x(n). According to [1], the update equations for the wavelet domain adaptive filter coefficients are

w(n + 1) = w(n) − μ g(n),  n = 0, 1, 2, ...   (3.26)

R̂_y(n) g(n) = −2 e(n) y(n),   (3.27)

where w(n) is the wavelet transform of the time domain adaptive filter h(n), that is, w(n) = Qh(n), and R̂_y(n) = y(n) y^T(n) is the estimated autocorrelation matrix of the wavelet domain filter input. The estimated error e(n) has a similar definition as for the time domain LMS algorithm,

e(n) = d(n) − z(n), and   (3.28)


z(n) = w^T(n) y(n).   (3.29)

Expanding (3.29) yields

z(n) = (Qh(n))^T Qx(n), and   (3.30)

z(n) = h^T(n) Q^T Q x(n).   (3.31)

From Figure 3-4, the wavelet transformation is a unitary transformation. Therefore,

Q^T Q = I, and   (3.32)

z(n) = h^T(n) x(n).   (3.33)

Equation (3.33) shows that the output of the wavelet domain LMS adaptive filter is actually the same as the output of the time domain LMS adaptive filter. This implies that the time domain desired response d(n) is used for the wavelet domain LMS algorithm. Combining and rearranging equations (3.26) and (3.27) yields the final update equation used for the simulations in this thesis,

w(n + 1) = w(n) + 2μ R̂_y^(−1) e(n) y(n),  n = 0, 1, 2, ...   (3.34)

A difficulty was encountered when using equation (3.34) for simulations. When the autocorrelation matrix R̂_y is either singular or badly scaled (in other words, when the eigenvalue spread of R̂_y is large), the inverse of R̂_y cannot be computed, an error occurs, and the recursive procedure stops. According to [1], when situations


like this occur, a "diagonal loading" methodology can be used to ensure that R̂_y is not ill-conditioned. Let i be the iteration step of the algorithm and c be a small positive constant; the diagonal loading can be implemented as

R̂_y ← (c/i) I + R̂_y.   (3.35)

The emphasis of this thesis is to study, simulate, and apply the wavelet domain LMS FIR adaptive algorithm to different applications. A more efficient way of running the wavelet domain LMS algorithm, which requires more complex estimation methodologies using the sparse structure of the wavelet domain matrices, is documented in [1].
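Putting equations (3.34) and (3.35) together for the smallest nontrivial case, the sketch below runs a wavelet domain LMS loop with a one-level Haar transform of a two-sample input window. The "unknown" system, step size, and loading constant c are illustrative assumptions; because an R̂_y(n) built from a single y(n) is rank one, the diagonal loading is what makes the inverse in (3.34) well defined.

```python
import random

random.seed(3)
s = 2 ** -0.5
Q = [[s, s], [s, -s]]        # one-level Haar transform for two taps
g = [0.8, 0.3]               # assumed "unknown" system
w = [0.0, 0.0]               # wavelet domain weights, w = Q h
mu = 0.05
c = 1.0                      # diagonal loading constant (assumed value)
u_prev = 0.0
for i in range(1, 3001):
    u_now = random.gauss(0.0, 1.0)
    x = [u_now, u_prev]
    y = [Q[0][0] * x[0] + Q[0][1] * x[1],      # y(n) = Q x(n)
         Q[1][0] * x[0] + Q[1][1] * x[1]]
    d = g[0] * x[0] + g[1] * x[1]
    z = w[0] * y[0] + w[1] * y[1]              # z(n) = w^T y(n)
    e = d - z
    load = c / i                                # diagonal loading (3.35)
    Ry = [[y[0] * y[0] + load, y[0] * y[1]],
          [y[1] * y[0], y[1] * y[1] + load]]
    det = Ry[0][0] * Ry[1][1] - Ry[0][1] * Ry[1][0]
    v = [(Ry[1][1] * y[0] - Ry[0][1] * y[1]) / det,    # Ry^-1 y(n)
         (Ry[0][0] * y[1] - Ry[1][0] * y[0]) / det]
    w = [w[k] + 2 * mu * e * v[k] for k in range(2)]   # update (3.34)
    u_prev = u_now
# Transform back: h = Q^T w (Q is orthonormal); h converges toward g.
h = [Q[0][0] * w[0] + Q[1][0] * w[1],
     Q[0][1] * w[0] + Q[1][1] * w[1]]
```

With the loaded rank-one estimate, R̂_y^(−1) y reduces to y / (|y|² + c/i), so the recursion behaves like a normalized LMS update in the transform domain; the efficient sparse-matrix estimation of R̂_y from [1] is not reproduced in this sketch.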


4. System Identification

System identification is used to provide a linear mathematical model of an unknown dynamic process or plant. There are many ways to obtain the mathematical model of an unknown system. The adaptive controller using pole placement design with or without zero cancellation (deterministic self-tuning regulators) [11], and adaptive filtering using the LMS or recursive least-squares (RLS) algorithms, are examples of commonly used methodologies. System identification using the adaptive LMS filtering algorithm is described in the subsequent sections.

4.1 System Identification Using the Time Domain LMS Filtering Algorithm

Figure 4-1 shows the block diagram of a system identification model using the standard time domain adaptive LMS algorithm. The unknown system and the adaptive filter share the same input u(n). The error e(n) is the difference between the output of the unknown system d(n) and the output of the adaptive filter y(n). The ideal mathematical model of the unknown system is obtained when e(n) = 0. Therefore, the error e(n) is used to control the adaptive filter. The cost function that needs to be minimized is J = E[e²(n)], where e(n) = d(n) − y(n). Please note that in reality the error e(n) will never reach zero, because the LMS algorithm uses


estimated values of R and p. R is the autocorrelation matrix of the adaptive filter input, and p is the cross-correlation of the filter input and the unknown system output.

Figure 4-1 Block Diagram of a System Identification Model Using Time Domain LMS Filtering Algorithm

The procedure for system identification using the standard time domain LMS filtering algorithm is described below.

1. Determining the unknown system.

The unknown system to be modeled by system identification has to be determined, so that the input u can be applied to the unknown system and the output of the unknown system d(n) can be obtained. The purpose of the system identification model shown in this thesis is to demonstrate that the adaptive FIR filter correctly models the "unknown" system. The "unknown" system is chosen to be a 16-tap


FIR filter (we will assume that we do not know the system). The filter coefficients are

g(n) = [0.1, 0.2, 0.3, 0.4, 0.5, 0.6, 0.7, 0.8, 0.8, 0.7, 0.6, 0.5, 0.4, 0.3, 0.2, 0.1],   (4.1)

where n is the iteration step. After an appropriate number of iterations, the adaptive FIR filter coefficients should converge to the values shown in equation (4.1).

2. Choosing an FIR filter tap length for system identification.

Generally, for real applications, the system is unknown. Therefore, the filter tap length is normally assumed to be 2^k, where k = 1, 2, 3, .... In this thesis, the adaptive filter tap length is chosen to be 16. Let h(n) be the adaptive filter impulse response; the initial condition of this filter is

h(n) = 0.   (4.2)

When the system identification works properly, h(n) will converge to g(n) as the system converges (when the error e(n) is close to zero and maintained at a constant value).

3. Defining an input u and assuming a total iteration number.


The input u used for the system identification simulation in this thesis is a random signal. A random signal is chosen because it can model a non-stationary input, so the tracking capability of the adaptive filter can be verified. The input u is a vector, and the length of u has to be equal to or greater than the total number of iteration steps plus the filter tap length minus one. For example, if a 16-tap FIR filter is used and 500 iterations are planned for the simulation, the length of the vector u must be equal to or greater than 515, i.e.,

u = [u(499), u(498), u(497), ..., u(0), u(−1), u(−2), ..., u(−15)].   (4.3)

Let n = 500 be the number of iteration steps of the system identification model and 16 be the adaptive filter tap length; u(n) is determined as

u(0) = [u(0), u(−1), u(−2), u(−3), ..., u(−15)]   (4.4)

u(1) = [u(1), u(0), u(−1), u(−2), ..., u(−14)]   (4.5)

u(2) = [u(2), u(1), u(0), u(−1), ..., u(−13)]   (4.6)

...

u(499) = [u(499), u(498), u(497), u(496), ..., u(484)]   (4.7)

4. Using the LMS algorithm to update the adaptive filter tap weights, and choosing a fixed step size μ for the LMS algorithm.

From equation (3.25), the update equation for the standard time domain LMS filtering algorithm is h(n + 1) = h(n) + μ u(n) e(n), n = 0, 1, 2, ..., where μ is the step size of the LMS algorithm and e(n) = d(n) − y(n). As discussed earlier in section


3.3.1, system stability requires that the fixed step size μ lie between 0 and 2/λmax. For the fastest LMS convergence rate, the ideal choice is a fixed step size of 2/λmax - ε, where ε is a small positive number. The problem is that a priori information about the filter input is generally not available, and the maximum eigenvalue λmax of the autocorrelation matrix of the filter input cannot be determined. In practice, the system is evaluated with various μ values, e.g., 0.01, 0.05, 0.1, 0.5, etc. If the value of μ is too large, the system will become unstable, which means the μ value used probably exceeds 2/λmax. Try smaller μ values until the system becomes stable and converges nicely.

5. Recursively updating the adaptive filter tap weights using step 4, and adjusting the number of iterations if necessary. Depending on the convergence rate of the system, adjust the iteration count. It is desirable to choose a number that extends beyond the convergence of the system to show that the convergence is stable.

Following the above five steps, an example of the time domain LMS algorithm is shown below.
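Before the worked numeric example, the five-step procedure can also be sketched as runnable code. The following Python/NumPy translation of the Matlab simulation in the Appendix is a minimal sketch; the 2000-iteration run length, the seed, and the variable names are illustrative choices, not taken from the thesis:

```python
import numpy as np

rng = np.random.default_rng(0)
g = np.array([0.1, 0.2, 0.3, 0.4, 0.5, 0.6, 0.7, 0.8,
              0.8, 0.7, 0.6, 0.5, 0.4, 0.3, 0.2, 0.1])  # "unknown" system, eq. (4.1)
taps, iters, mu = 16, 2000, 0.05
x = rng.standard_normal(iters + taps - 1)  # random input u (step 3)
h = np.zeros(taps)                         # initial condition, eq. (4.2)

for i in range(iters):
    u = x[i:i + taps][::-1]   # regressor with the newest sample first, eqs. (4.4)-(4.7)
    d = g @ u                 # desired signal from the unknown system
    e = d - h @ u             # estimation error e(n) = d(n) - y(n)
    h = h + mu * u * e        # LMS update, eq. (3.25)
```

After enough iterations the tap weights h settle onto g(n), mirroring the behavior shown in Figures 4-2 through 4-8.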


1. The unknown system is a 16-tap FIR filter, and the impulse response of this FIR filter is g(n) = [0.1, 0.2, 0.3, 0.4, 0.5, 0.6, 0.7, 0.8, 0.8, 0.7, 0.6, 0.5, 0.4, 0.3, 0.2, 0.1].

2. The adaptive FIR filter is a 16-tap FIR filter, and the initial condition of this filter is h(n) = 0.

3. Assume an iteration number of two; the length of u has to be greater than or equal to 2 + 16 - 1 = 17. Let u be a random signal and a vector of length 17, u = [1, 2, 3, -1, 4, 7, -5, 3, 9, 20, -9, 1, -3, 8, 0, 2, 10].

4. Since the iteration number is two, n = 0, 1. Let the fixed step size μ be 0.05.

n = 0:
d(0) = g u^T(0) = (0.1·1 + 0.2·2 + 0.3·3 + 0.4·(-1) + ... + 0.1·2) = 23.8 (4.8)
y(0) = h(0) u^T(0) = (0·1 + 0·2 + 0·3 + 0·(-1) + ... + 0·2) = 0 (4.9)
e(0) = d(0) - y(0) = 23.8 - 0 = 23.8 (4.10)
h(1) = h(0) + μ u(0) e(0)
     = [0, 0, 0, ..., 0] + 0.05·[1, 2, 3, -1, 4, 7, -5, 3, 9, 20, -9, 1, -3, 8, 0, 2]·23.8
     = [1.19, 2.38, 3.57, -1.19, 4.76, 8.33, -5.95, 3.57, 10.71, 23.8, -10.71, 1.19, -3.57, 9.52, 0, 2.38] (4.11)

n = 1:
d(1) = g u^T(1) = (0.1·2 + 0.2·3 + 0.3·(-1) + 0.4·4 + ... + 0.1·10) = 25.3 (4.12)
y(1) = h(1) u^T(1) = (1.19·2 + 2.38·3 + 3.57·(-1) + (-1.19)·4 + ... + 2.38·10) = -11.9 (4.13)
e(1) = d(1) - y(1) = 25.3 + 11.9 = 37.2 (4.14)


h(2) = h(1) + μ u(1) e(1)
     = [1.19, 2.38, ..., 2.38] + 0.05·[2, 3, -1, 4, 7, -5, 3, 9, 20, -9, 1, -3, 8, 0, 2, 10]·37.2
     = [4.91, 7.96, 1.71, 6.25, 17.78, -0.97, -0.37, 20.31, 47.91, 7.06, -8.85, -4.39, 11.31, 9.52, 3.72, 20.98] (4.15)

5. Obviously the system does not converge within two iterations, so the iteration number and the step size μ must be adjusted. The recursive process described above continues until the tap weights of the adaptive filter converge to the tap weights of the "unknown" system.

The results of the above example, simulated with an iteration number of 500 and a fixed step size of 0.05, are shown in Figures 4-2 through 4-8. For detailed Matlab codes of the simulations, please see the Appendix. Other simulations (not documented herein) show that a step size greater than 0.05 makes the system unstable; therefore, μ = 0.05 is the largest step size that the system can use. Figure 4-2 shows that the output of the adaptive FIR filter y(n) identifies with the "unknown" system output d(n): after a transition period of about 100 iterations, y(n) converges to d(n), and the two signals overlap with each other. Figure 4-3 shows the estimation error e(n) vs. iterations. Figures 4-4 through 4-7 show each of the 16 coefficients of the adaptive filter converging to the "unknown" system g(n). Figure 4-8 shows the adaptive filter's impulse response after 500 iterations, which identifies with g(n).
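The hand-worked numbers in equations (4.8) through (4.15) can be checked mechanically. The following Python/NumPy sketch replays the two iterations; the input vector is already ordered so that a direct dot product with g reproduces equation (4.8), and the variable names are illustrative:

```python
import numpy as np

g = np.array([0.1, 0.2, 0.3, 0.4, 0.5, 0.6, 0.7, 0.8,
              0.8, 0.7, 0.6, 0.5, 0.4, 0.3, 0.2, 0.1])
U = np.array([1, 2, 3, -1, 4, 7, -5, 3, 9, 20, -9, 1, -3, 8, 0, 2, 10], float)
mu = 0.05
h = np.zeros(16)

history = []
for n in range(2):            # two iterations, n = 0 and n = 1
    u = U[n:n + 16]           # regressor for iteration n
    d = g @ u                 # desired output, eqs. (4.8)/(4.12)
    e = d - h @ u             # error, eqs. (4.10)/(4.14)
    h = h + mu * u * e        # update, eqs. (4.11)/(4.15)
    history.append((d, e))

# history[0] -> (23.8, 23.8); history[1] -> (25.3, 37.2); h matches eq. (4.15)
```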


Figure 4-2 The Desired Output d(n) vs. the Filter Output y(n) (Time Domain LMS Algorithm, μ = 0.05)

Figure 4-3 The Estimated Error (Time Domain LMS Algorithm, μ = 0.05)


Figure 4-4 Adaptive Filter Coefficients h0, h1, h2, and h3

Figure 4-5 Adaptive Filter Coefficients h4, h5, h6, and h7


Figure 4-6 Adaptive Filter Coefficients h8, h9, h10, and h11

Figure 4-7 Adaptive Filter Coefficients h12, h13, h14, and h15


Figure 4-8 Adaptive Filter Impulse Response After 500 Iterations (Time Domain LMS Algorithm, μ = 0.05)

4.2 System Identification Using the Wavelet Domain LMS Filtering Algorithm

Figure 4-9 shows the block diagram of a system identification model using the wavelet domain LMS filtering algorithm. Note that the only difference between Figures 4-1 and 4-9 is the LMS algorithm portion. The wavelet domain transformation of u(n) is u_w(n), and it is used as the adaptive filter input. As described in section 3.3.4, the adaptive filter is a wavelet transform of the time domain adaptive filter and is denoted by w(n).


Figure 4-9 Block Diagram of a System Identification Model Using the Wavelet Domain LMS Filtering Algorithm (u(n) is transformed to u_w(n), which drives the wavelet domain LMS adaptive filtering algorithm; the unknown system produces d(n), and e(n) = d(n) - y(n))

The procedures of system identification using a wavelet domain LMS filtering algorithm are described below.

1. Determining the unknown system (same as section 4.1).

2. Choosing an FIR filter tap length for system identification. In this thesis, the adaptive filter tap length is chosen to be 16. Note that the adaptive filter is in the wavelet domain. Let w(n) be the wavelet domain adaptive filter, where the initial condition of this filter is

w(n) = 0. (4.16)

3. Defining an input u and assuming an iteration number. The input u is generated the same way as described in section 4.1, equation (4.3). Let n = 500 be the number of iteration steps of the system identification model, and let 16


be the adaptive filter tap length. The regressor u(n) is determined as shown in equations (4.4) through (4.6). The 2-band Haar analysis filter bank is used to transform u(n) from the time domain to the wavelet domain. The Haar analysis filter bank has a low pass filter Haar_h0 = [1/√2, 1/√2] and a high pass filter Haar_h1 = [1/√2, -1/√2]. Using the Haar analysis filter bank for the wavelet domain transformation, u_w(n) is defined as

u_w(0) = [dec(conv(u(0), Haar_h0)), dec(conv(u(0), Haar_h1))] (4.17)
...
u_w(499) = [dec(conv(u(499), Haar_h0)), dec(conv(u(499), Haar_h1))], (4.18)

where dec is decimation by a factor of two, and conv is the time domain convolution.

4. Using the wavelet domain LMS algorithm to update the adaptive filter tap weights and choosing a fixed step size μ for the LMS algorithm. From equation (3.26), the update equation for the wavelet domain LMS filtering algorithm is

w(n+1) = w(n) + 2μ R̂w⁻¹ u_w(n) e(n), n = 0, 1, 2, ...,

where μ is the step size of the LMS algorithm, e(n) = d(n) - y(n), and R̂w is the estimated autocorrelation matrix of the wavelet domain filter input u_w(n).

5. Recursively updating the adaptive filter tap weights using step 4 and adjusting


iteration steps if necessary.

6. Using the Haar synthesis filter bank to convert the filter tap weights back to the time domain.

Following the above six steps, an example of the wavelet domain LMS algorithm is given, where one iteration is shown.

1. The unknown system is a 16-tap FIR filter, where the impulse response is g(n) = [0.1, 0.2, 0.3, 0.4, 0.5, 0.6, 0.7, 0.8, 0.8, 0.7, 0.6, 0.5, 0.4, 0.3, 0.2, 0.1].

2. The adaptive FIR filter is a 16-tap FIR filter, where the initial condition of this filter is w(n) = 0.

3. Assume an iteration number of one; the length of u has to be greater than or equal to 1 + 16 - 1 = 16. Let u be a random signal and a vector of length 16, u(0) = [1, 2, 3, -1, 4, 7, -5, 3, 9, 20, -9, 1, -3, 8, 0, 2]. The time domain convolution of the low pass Haar filter and u(0) is

conv(u(0), Haar_h0) = [0.71, 2.12, 3.54, 1.41, 2.12, 7.78, 1.41, -1.41, 8.49, 20.51, 7.78, -5.66, -1.41, 3.54, 5.66, 1.41, 1.41]. (4.19)

The decimation by a factor of two of equation (4.19) is

dec(conv(u(0), Haar_h0)) = [2.12, 1.41, 7.78, -1.41, 20.51, -5.66, 3.54, 1.41]. (4.20)

The time domain convolution of the high pass Haar filter and u(0) is

conv(u(0), Haar_h1) = [0.71, 0.71, 0.71, -2.83, 3.54, 2.12, -8.49, 5.66, 4.24, 7.78, -20.51, 7.07, -2.83, 7.78, -5.66, 1.41, -1.41]. (4.21)


The decimation by a factor of two of equation (4.21) is

dec(conv(u(0), Haar_h1)) = [0.71, -2.83, 2.12, 5.66, 7.78, 7.07, 7.78, 1.41]. (4.22)

The wavelet domain input u_w(0) is the concatenation of equations (4.20) and (4.22),

u_w(0) = [2.12, 1.41, 7.78, -1.41, 20.51, -5.66, 3.54, 1.41, 0.71, -2.83, 2.12, 5.66, 7.78, 7.07, 7.78, 1.41]. (4.23)

4. Since the iteration number is one, n = 0. Let the fixed step size μ be 0.05.

n = 0:
d(0) = g u^T(0) = (0.1·1 + 0.2·2 + 0.3·3 + 0.4·(-1) + ... + 0.1·2) = 23.8 (4.24)
y(0) = w(0) u_w^T(0) = (0·2.12 + 0·1.41 + 0·7.78 + 0·(-1.41) + ... + 0·1.41) = 0 (4.25)
e(0) = d(0) - y(0) = 23.8 - 0 = 23.8 (4.26)
R̂w(0) = u_w^T(0) u_w(0) + (c/n) I (4.27)
w(n+1) = w(n) + 2μ R̂w⁻¹ u_w(n) e(n), n = 0, 1, 2, ... (4.28)

Note that sometimes the estimate of Rw is singular and the inverse cannot be determined. In order to make sure the estimated Rw is non-singular, the diagonal loading shown in equation (4.27) is used, where n is the iteration number and c is a small positive constant. A value of c = 0.0001 is used for the simulations.

5. The recursive process described above continues until the tap weights of the adaptive filter identify with the tap weights of the "unknown" system.
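The numbers in equations (4.19) through (4.23) can be reproduced directly. This Python/NumPy sketch performs the single-level Haar analysis step (convolve, then decimate by two, keeping the same samples kept in equation (4.20)); the variable names are illustrative:

```python
import numpy as np

u0 = np.array([1, 2, 3, -1, 4, 7, -5, 3, 9, 20, -9, 1, -3, 8, 0, 2], float)
h0 = np.array([1.0, 1.0]) / np.sqrt(2)   # Haar low pass analysis filter
h1 = np.array([1.0, -1.0]) / np.sqrt(2)  # Haar high pass analysis filter

approx = np.convolve(u0, h0)[1::2]       # dec(conv(u(0), Haar_h0)), eq. (4.20)
detail = np.convolve(u0, h1)[1::2]       # dec(conv(u(0), Haar_h1)), eq. (4.22)
uw0 = np.concatenate([approx, detail])   # wavelet domain input u_w(0), eq. (4.23)
```

Because each output pair is an orthonormal rotation of an input pair, the 16-point transform is orthonormal, which is what makes the time domain filter recoverable in step 6.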


6. Convert the converged adaptive filter tap weights to the time domain using the Haar synthesis filter bank. The low pass filter is Haar_f0 = [1/√2, 1/√2] and the high pass filter is Haar_f1 = [1/√2, -1/√2].

The results of the above example, simulated with an iteration number of 500 and a fixed step size of 0.05, are shown in Figures 4-10 through 4-13. Figure 4-10 shows that the output of the adaptive FIR filter y(n) converges to the desired signal d(n). Figure 4-11 shows the estimated error of the wavelet domain LMS algorithm. Figure 4-12 shows the wavelet domain adaptive filter's impulse response after 500 iterations. Figure 4-13 shows the time domain adaptive filter's impulse response (converted from the wavelet domain back to the time domain) after 500 iterations, which identifies with g(n). The fixed step size of μ = 0.05 was used for comparison purposes (to compare with the time domain LMS algorithm results). Figures 4-14 through 4-18 show that a maximum step size of μ = 0.3 can be used for the wavelet domain LMS algorithm simulated in this section.
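Steps 1 through 5 can be sketched end to end. The Python/NumPy sketch below mirrors the Appendix Matlab code: each regressor is Haar-transformed, the rank-one autocorrelation estimate is diagonally loaded with c/n (c = 0.0001), and the update of equation (4.28) is applied. The seed, the run length, and the variable names are illustrative:

```python
import numpy as np

rng = np.random.default_rng(1)
g = np.array([0.1, 0.2, 0.3, 0.4, 0.5, 0.6, 0.7, 0.8,
              0.8, 0.7, 0.6, 0.5, 0.4, 0.3, 0.2, 0.1])
h0 = np.array([1.0, 1.0]) / np.sqrt(2)   # Haar analysis low pass
h1 = np.array([1.0, -1.0]) / np.sqrt(2)  # Haar analysis high pass
taps, iters, mu, c = 16, 500, 0.3, 1e-4
x = rng.standard_normal(iters + taps)
w = np.zeros(taps)
err = np.zeros(iters)

for i in range(iters):
    u = x[i:i + taps][::-1]                         # time domain regressor
    uw = np.concatenate([np.convolve(u, h0)[1::2],  # wavelet domain regressor u_w(n)
                         np.convolve(u, h1)[1::2]])
    e = g @ u - w @ uw                              # e(n) = d(n) - y(n)
    R = np.outer(uw, uw) + (c / (i + 1)) * np.eye(taps)  # loaded estimate, eq. (4.27)
    w = w + 2 * mu * e * (uw @ np.linalg.inv(R))    # update, eq. (4.28)
    err[i] = e
```

Since the loaded estimate is dominated by its rank-one term, uw @ inv(R) is approximately uw/||uw||^2, so the update behaves much like a normalized LMS step; this is one way to see why the wavelet domain algorithm tolerates the larger step size μ = 0.3.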


Figure 4-10 The Desired Output d(n) vs. the Filter Output y(n) (Wavelet Domain LMS Algorithm, μ = 0.05)

Figure 4-11 The Estimated Error (Wavelet Domain LMS Algorithm, μ = 0.05)


Figure 4-12 Wavelet Domain Filter Impulse Response After 500 Iterations (Wavelet Domain LMS Algorithm, μ = 0.05)

Figure 4-13 Time Domain Filter Impulse Response After 500 Iterations (Wavelet Domain LMS Algorithm, μ = 0.05)


Figure 4-14 The Desired Output d(n) vs. the Filter Output y(n) (Wavelet Domain LMS Algorithm, μ = 0.3)

Figure 4-15 The Estimated Error (Wavelet Domain LMS Algorithm, μ = 0.3)


Figure 4-16 Wavelet Domain Filter Impulse Response After 500 Iterations (Wavelet Domain LMS Algorithm, μ = 0.3)

Figure 4-17 Time Domain Filter Impulse Response After 500 Iterations (Wavelet Domain LMS Algorithm, μ = 0.3)


4.3 Comparisons and Discussions

The system identification experiments in this thesis show that the wavelet domain LMS filtering algorithm converges properly. Figures 4-8, 4-13, and 4-17 demonstrate that, like the time domain LMS algorithm, the wavelet domain LMS algorithm also successfully identifies the "unknown" system. Figure 4-18 shows a comparison of the MSE of an ensemble of 100 runs for both the time domain and wavelet domain LMS algorithms simulated with μ = 0.05. In Figure 4-18, the wavelet domain LMS algorithm shows a slower convergence rate. Is this a true statement? Further investigation demonstrates that a comparison between the time domain LMS algorithm and the wavelet domain algorithm using the same fixed step size is not valid: the two algorithms use different update equations and different estimates of the gradient of the cost function. Therefore, for comparison purposes, the "best system performance" should be used, i.e., the convergence rate at the largest possible fixed step size before each system becomes unstable. Figure 4-19 shows this comparison, where the largest step size for the time domain is μ = 0.05 and the largest step size for the wavelet domain is μ = 0.3. Note that there is no improvement of convergence rate using the wavelet domain LMS algorithm in this case. This is because the input used was a random signal, where no correlation of


any kind exists between the input samples. Therefore, the pre-whitening of the input in the wavelet domain has no effect on the convergence rate.

Figure 4-18 Mean-Squared Error of an Ensemble of 100 Runs (μ = 0.05)

Figure 4-19 Mean-Squared Error of an Ensemble of 100 Runs (Time Domain μ = 0.05, Wavelet Domain μ = 0.3)
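The role of input correlation can be made concrete through the eigenvalue spread of the input autocorrelation matrix, since the LMS convergence rate degrades as the spread grows. The sketch below (a Python/NumPy illustration, not part of the thesis simulations) compares a white input, whose 16x16 autocorrelation matrix is the identity, with a first-order autoregressive input whose autocorrelation is R[i,j] = 0.9^|i-j|:

```python
import numpy as np

L = 16
idx = np.arange(L)
R_white = np.eye(L)                                  # white input: R = I
R_ar1 = 0.9 ** np.abs(np.subtract.outer(idx, idx))   # AR(1) input, rho = 0.9 (Toeplitz)

spread_white = np.linalg.cond(R_white)               # eigenvalue spread lambda_max/lambda_min
spread_ar1 = np.linalg.cond(R_ar1)
```

For the white input the spread is 1, so there is nothing for the wavelet domain pre-whitening to improve; for the correlated input the spread is in the hundreds, which is the situation where pre-whitening pays off.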


5. Adaptive Differential Pulse Code Modulation (ADPCM)

Adaptive Differential Pulse-Code Modulation (ADPCM) is a generic data compression algorithm that is widely used in speech applications. The speech signal is compressed before it is transmitted and decompressed after it is received by the receiver. ADPCM uses an adaptive predictor and an adaptive quantizer to account for sudden changes and the nonstationary nature of the speech signal.

ADPCM is a modified algorithm based on the pulse-code modulation (PCM) scheme. The PCM encoder typically consists of three processes: sampling, quantization, and coding. The sampling process converts a continuous-time signal into a discrete-time signal. The quantization process converts a continuous-amplitude signal into a discrete-amplitude signal. The coding provides appropriate digital representations for the samples of the discrete signal.

Figure 5-1 Generic PCM Encoder (the sampled input passes through a uniform or nonuniform quantizer to produce the PCM wave)

In a typical telephony system, the speech signal is generally coded with the PCM algorithm. A sample rate of 8 kHz and a nonlinear quantizer are used, and the


quantized speech signal is coded into 8-bit words. The required bit rate for this system is 64 kbps.

Instead of transmitting the PCM waveform, ADPCM transmits the quantized estimation error. An ADPCM encoder uses an adaptive predictor to estimate the next audio sample from previous samples of the input signal. The predicted sample is compared with the actual sample; the difference between the two is the estimation error. The estimation error is quantized using an adaptive quantizer before transmission. When the adaptive predictor is optimized, the variance of the estimation error should be smaller than the variance of the input signal. Therefore, a smaller number of quantization levels is required for the estimation error than for the PCM waveform. A common implementation takes 16-bit PCM samples and converts them into 4-bit samples using ADPCM, giving a compression rate of 4:1. For a radio frequency (RF) transmitting system, a 4:1 compression of the bit rate yields a good 6 dB link margin improvement. The improvement of the link budget leads to less restrictive hardware requirements (required transmitter power, antenna gain, etc.) and, therefore, lower cost. Figures 5-2a and 5-2b show the simplified ADPCM encoder and decoder.
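The 6 dB figure follows from the 4:1 rate reduction: at fixed transmitter power, sending one quarter as many bits per second quadruples the energy per bit, and 10·log10(4) is about 6.02 dB. A two-line check (illustrative, not from the thesis):

```python
import math

compression = 16 / 4                           # 16-bit PCM samples -> 4-bit ADPCM samples
link_margin_db = 10 * math.log10(compression)  # energy-per-bit gain at fixed power, ~6.02 dB
```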


Figure 5-2a Generic ADPCM Encoder

Figure 5-2b Generic ADPCM Decoder


There are two types of adaptive quantizers: feedforward and feedback adaptive quantizers [9]. When a feedforward adaptive quantizer is used, the quantization step sizes Δ(n) have to be transmitted together with the sample bits [9]. When a feedback adaptive quantizer is used, only the sample bits have to be transmitted. The Jayant quantizer that is used for the simulations in this thesis is a feedback quantizer. Table 5-1 shows the Jayant quantizer step-size multipliers [9].

Table 5-1 Jayant Quantizer

            2 bits   3 bits   4 bits
   M(1)      0.8      0.9      0.9
   M(2)      1.6      0.9      0.9
   M(3)               1.25     0.9
   M(4)               1.75     0.9
   M(5)                        1.2
   M(6)                        1.6
   M(7)                        2.0
   M(8)                        2.4

The multipliers M(η) are used to update the quantizer step size,

Δ(n+1) = M(η) Δ(n), (5.1)

where

η = floor(|e(n)| / Δ(n)). (5.2)

The initial quantizer step size is obtained from the bit length (b) and the full magnitude (Fs) of the signal; for a uniform b-bit quantizer spanning ±Fs this gives

Δ(0) = 2 Fs / 2^b. (5.3)
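Equations (5.1) and (5.2) amount to a table lookup per sample. The sketch below is a Python illustration of the feedback step-size adaptation; the clamping of η to the table range is an implementation assumption, since the boundary handling is not spelled out here:

```python
def jayant_step_update(delta, e, M):
    """One feedback step-size update: Delta(n) -> Delta(n+1), eqs. (5.1)-(5.2)."""
    eta = int(abs(e) // delta)      # eta = floor(|e(n)| / Delta(n)), eq. (5.2)
    eta = min(max(eta, 1), len(M))  # clamp eta to the table range (an assumption here)
    return M[eta - 1] * delta       # Delta(n+1) = M(eta) * Delta(n), eq. (5.1)

M_2bit = [0.8, 1.6]  # the 2-bit column of Table 5-1
```

A small error shrinks the step (jayant_step_update(1.0, 0.3, M_2bit) gives 0.8) and a large error grows it (jayant_step_update(1.0, 5.0, M_2bit) gives 1.6), so the step size tracks the error variance.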


5.1 ADPCM Using the Time Domain LMS Filtering Algorithm

Figure 5-3 shows the generic block diagram of a time domain ADPCM encoder/decoder. The quantized estimation error is used to drive the adaptive algorithm of the system because the quantized estimation error is what is transmitted to the receiver.

Figure 5-3 Generic Time Domain ADPCM System (the estimation error e(n) passes through the Jayant quantizer; the time domain LMS adaptive filtering algorithm produces s(n) estimated, which is subtracted from s(n))

The time domain ADPCM method consists of the following procedures:

1. Defining an input signal. The ADPCM algorithm is tested on two signals for simulation purposes. The first signal is a 100 Hz sine wave with a sampling frequency of 8 kHz. This input signal is used to test the convergence rate of the time domain ADPCM algorithm. The second signal is the same sine wave (frequency 100 Hz, sampling frequency 8 kHz) but with a sudden change after the signal has converged. This second input signal is used to test the behavior


of the time domain ADPCM algorithm in the event that a sudden input signal change occurs.

2. Picking the FIR filter tap length h(n) and setting initial conditions. A 16-tap FIR filter is used for the simulations; the initial FIR filter coefficients are zeros.

3. Defining the quantizer initial step size using equation (5.3).

4. Letting the reconstructed signal ŝ(n) be zeros (see Figure 5-3). The signal ŝ(n) is the input of the adaptive FIR filter; therefore, the length of ŝ(n) is the same as the FIR filter tap length. An updated ŝ(n) is generated at each iteration of the algorithm.

5. Obtaining the filter output, s(n) estimated = ŝ(n) h^T(n).

6. Obtaining the estimation error, e(n) = s(n) - s(n) estimated.

7. Updating the quantizer step size using equations (5.1) and (5.2).

8. Obtaining the quantized estimation error ê(n).

9. Obtaining the input of the FIR filter for the next iteration, ŝ(n) = ê(n) + s(n) estimated.

10. Using the LMS algorithm to update the adaptive filter tap weights and choosing a fixed step size μ for the LMS algorithm. From equation (3.25), the update


equation for the standard time domain LMS filtering algorithm is h(n+1) = h(n) + μ u(n) e(n), n = 0, 1, 2, ..., where μ is the step size of the LMS algorithm and e(n) = d(n) - y(n). After many tests and iterations, the results show that the best fixed step size μ is 0.8 for the normal sine wave and 0.5 for the sine wave with a sudden change.

11. Repeating from step 5.

12. Calculating the signal-to-noise ratio (SNR) after the signal converges,

SNR = 10 log10( var(original signal) / var(original signal - new signal) )
    = 10 log10( var(signal) / var(noise) ). (5.4)

Figure 5-4 shows the results of the time domain ADPCM applied to a 100 Hz sine wave with a sampling frequency of 8 kHz. The original signal, the predicted signal, the reconstructed signal, and the estimation error are shown in Figure 5-4. It can be seen that the estimation error converges to zero at approximately 300 iterations. The SNR of the reconstructed signal (44.04 dB) is 17.06 dB better than the SNR of the estimated signal (26.98 dB). The SNR is calculated after the estimation error converges to zero, between approximately 400 and 500 iterations.
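Steps 1 through 12 form a short predictive-coding loop. The following Python/NumPy sketch keeps the structure of the procedure but substitutes a plain uniform quantizer for the Jayant quantizer, so that the reconstruction error is bounded by half the step size by construction; all names and the Δ = 0.01 step are illustrative choices:

```python
import numpy as np

fs, f0, taps, mu, delta = 8000, 100, 16, 0.1, 0.01
s = np.sin(2 * np.pi * f0 * np.arange(1000) / fs)  # step 1: 100 Hz sine at 8 kHz

h = np.zeros(taps)              # step 2: predictor coefficients, initially zero
s_rec = np.zeros(taps)          # step 4: reconstructed-signal buffer (newest first)
out = np.zeros_like(s)          # decoder-side reconstruction
err = np.zeros_like(s)

for n in range(len(s)):
    s_est = h @ s_rec                        # step 5: predicted sample
    e = s[n] - s_est                         # step 6: estimation error
    e_q = delta * np.round(e / delta)        # step 8: quantized error (uniform stand-in)
    out[n] = e_q + s_est                     # step 9: reconstructed sample
    h = h + mu * s_rec * e_q                 # step 10: LMS update driven by e_q
    s_rec = np.concatenate(([out[n]], s_rec[:-1]))  # shift the buffer
    err[n] = e
```

Because out[n] - s[n] = e_q - e, the reconstruction error never exceeds Δ/2 = 0.005 here, which is the mechanism behind the high reconstruction SNR reported above.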


Figure 5-4 Time Domain ADPCM Applied to a Sine Wave (Tap = 16, μ = 0.8, SNR = 26.98 dB, SNRo = 44.04 dB)

Figure 5-5 shows the results of the time domain ADPCM applied to a 100 Hz sine wave with a sampling frequency of 8 kHz and a sudden change at approximately 500 iterations. The original signal, the estimated signal (estimated by the adaptive predictor), the reconstructed signal, and the estimation error are shown in Figure 5-5. The estimation error converges to zero at approximately 300 iterations. A sudden input change occurs at about 500 iterations, and the estimation error re-converges to zero at about 800 iterations. The SNR of the reconstructed signal (59.46 dB) is 19.43 dB better than the SNR of the estimated signal (40.03 dB). The SNR is calculated after the estimation error re-converges to about zero, between approximately 900 and 1000 iterations.
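The SNR figures quoted in this chapter come from equation (5.4). A direct Python/NumPy translation, checked on a signal pair with a known 20 dB ratio (the helper name is illustrative):

```python
import numpy as np

def snr_db(original, reconstructed):
    """SNR per equation (5.4): 10*log10(var(signal)/var(noise))."""
    noise = original - reconstructed
    return 10 * np.log10(np.var(original) / np.var(noise))

# check: a sine plus an interfering sine at one-tenth the amplitude -> 20 dB
n = np.arange(8000)
sig = np.sin(2 * np.pi * n / 80)               # 100 Hz at 8 kHz
rec = sig + 0.1 * np.sin(2 * np.pi * n / 40)   # "reconstruction" with known error
```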


Figure 5-5 Time Domain ADPCM Applied to a Sine Wave with a Sudden Change (Tap = 16, μ = 0.5, SNR = 40.03 dB, SNRo = 59.46 dB)

5.2 ADPCM Using the Wavelet Domain LMS Filtering Algorithm

The procedures for the wavelet domain ADPCM system are similar to the time domain ADPCM system procedures. The differences are that the input of the FIR filter is in the wavelet domain, and the update equation for the wavelet domain 1\ -I FIR filter is w(n+l)=w(n)+2.URw e(n)y(n), n=0,1,2, ... where .U is the step A -1 size of the LMS algorithm ande(n)=d(n)-y(n) and Rw is the estimated autocorrelation matrix of the wavelet domain filter input uw (n). Figure 5-7 shows the results of the wavelet domain ADPCM applied to a 100 Hz sine wave with a sampling frequency of 8 kHz. The original signal, the estimated signal (estimated by the adaptive predictor), the reconstructed signal, and the estimation error are shown in Figure 5-7. From Figure 5-7, the estimation error converges to zero at approximately 100 iterations. The SNR of the reconstructed signal (126.09 dB) is 15.41 dB better than the SNR of the estimated signal (110.68 dB). The SNR is calculated after the estimation error converges to zero, between approximately 400 to 500 iterations. 60

PAGE 68

Figure 5-7 Wavelet Domain ADPCM Applied to a Sine Wave (Tap = 16, μ = 0.6, SNR = 110.68 dB, SNRo = 126.09 dB)

Figure 5-8 Wavelet Domain ADPCM Applied to a Sine Wave with a Sudden Change (Tap = 16, μ = 0.5, SNR = 165.54 dB, SNRo = 152.87 dB)

5.3 Comparisons and Discussions

Figures 5-9 and 5-10 show the MSE of an ensemble of 50 runs of the time and wavelet domain ADPCM applied to a sine wave and to a sine wave with a sudden change. The figures show that the wavelet domain LMS algorithm converges at least 100 iterations faster than the time domain LMS algorithm. According to the results shown in section 5.2, the wavelet domain algorithm also provides a better SNR than the time domain algorithm. The drawback of the wavelet domain algorithm is its computational complexity; reference [1] contains a proposed algorithm to minimize the computational complexity.


Figure 5-9 MSE of an Ensemble of 50 Runs (ADPCM, Time Domain (μ = 0.8) / Wavelet Domain (μ = 0.6) LMS; Input is a Sine Wave)

Figure 5-10 MSE of an Ensemble of 50 Runs (ADPCM, Time Domain (μ = 0.5) / Wavelet Domain (μ = 0.3) LMS; Sine Wave with a Sudden Change)


6. Conclusions

The simulations in chapters 4 and 5 show that the wavelet domain LMS filtering algorithm can be used for system identification models and also for ADPCM encoding/decoding. The lessons learned from performing the simulations are the following:

1. The convergence rate depends on the nature of the input signal, the fixed step size, and the adaptive FIR filter length.

2. The diagonal loading for the autocorrelation of the input signal has to be relatively small (the value varies with the nature of the input); otherwise, the ability to converge will be affected.

3. The wavelet domain LMS filtering algorithm provides a faster convergence rate when the components of the input are correlated.

4. A faster convergence rate is desirable for ADPCM applications. For example, during a real-time signal transmission, if a sudden input change occurs, the adaptive filter has to be re-optimized. An algorithm with a faster convergence rate provides better data quality and less risk of data loss.

5. There is no absolutely superior algorithm for signal processing. The algorithms should be evaluated and selected depending on the application, the hardware/software design, and cost factors.


The advantage of the standard time domain LMS filtering algorithm is its simplicity. But the convergence rate of the standard LMS algorithm depends heavily on the input characteristics, so the designer has very little control over the convergence rate. The advantage of the wavelet domain LMS algorithm is its faster convergence rate. In addition, the autocorrelation matrix of the input in the wavelet domain has a sparse structure [1], which can be used to minimize computational complexity.


APPENDIX

The Appendix contains the Matlab codes used for the simulations. The Matlab codes are arranged based on the order of the plots that appear in the thesis.

System Identification with Time Domain LMS Filtering Algorithm

% Time Domain LMS Adaptive Filtering Algorithm
% Identify with a 16-tap FIR Filter
clear all;
u=0.05; % Selecting the fixed step size for the LMS algorithm
n=500; % Setting the iteration length
S=n+16;
X=randn(S); % Generating input signal
X=X(1,:);
h=[0.1 0.2 0.3 0.4 0.5 0.6 0.7 0.8 0.8 0.7 0.6 0.5 0.4 0.3 0.2 0.1];
% h is the system that the adaptive FIR filter will converge to
% Initial conditions
w(1,:)=zeros(1,16); % w is the impulse response of the adaptive FIR filter
for i=1:n; % Starting the recursive LMS algorithm
    x=X(i:i+15);
    x=fliplr(x);
    d(i)=x*h';
    y(i)=x*w(i,:)';
    e(i)=d(i)-y(i);
    w(i+1,:)=w(i,:)+u*e(i)*x; % Updating the FIR filter coefficients
end;
N=1:n;
figure; % Plotting the desired signal vs. output of the adaptive FIR filter
plot(N,d,'-m',N,y,'--b');
xlabel('Number of Iterations');
ylabel('d (-) vs. y (--)');
grid;


figure; % Plotting the system error
for k=1:n;
    error(k)=e(k)^2;
end;
plot(error);
xlabel('Number of Iterations');
ylabel('Error');
grid;

figure; % Plotting the impulse response of the adaptive FIR filter after convergence
stem(w(501,:));
xlabel('Number of Iterations');
ylabel('Impulse Response of The FIR Adaptive Filter');
grid;

figure; % Plotting system parameters
subplot(4,1,1); plot(w(:,1)); ylabel('h0'); grid;
subplot(4,1,2); plot(w(:,2)); ylabel('h1'); grid;
subplot(4,1,3); plot(w(:,3)); ylabel('h2'); grid;
subplot(4,1,4); plot(w(:,4)); xlabel('Number of Iterations'); ylabel('h3'); grid;


figure; % Plotting system parameters
subplot(4,1,1); plot(w(:,5)); ylabel('h4'); grid;
subplot(4,1,2); plot(w(:,6)); ylabel('h5'); grid;
subplot(4,1,3); plot(w(:,7)); ylabel('h6'); grid;
subplot(4,1,4); plot(w(:,8)); xlabel('Number of Iterations'); ylabel('h7'); grid;

figure; % Plotting system parameters
subplot(4,1,1); plot(w(:,9)); ylabel('h8'); grid;
subplot(4,1,2); plot(w(:,10)); ylabel('h9'); grid;
subplot(4,1,3); plot(w(:,11)); ylabel('h10'); grid;
subplot(4,1,4); plot(w(:,12)); xlabel('Number of Iterations'); ylabel('h11'); grid;


figure; % Plotting system parameters
subplot(4,1,1); plot(w(:,13)); ylabel('h12'); grid;
subplot(4,1,2); plot(w(:,14)); ylabel('h13'); grid;
subplot(4,1,3); plot(w(:,15)); ylabel('h14'); grid;
subplot(4,1,4); plot(w(:,16)); xlabel('Number of Iterations'); ylabel('h15'); grid;


System Identification with Wavelet Domain LMS Filtering Algorithm

% Wavelet Domain LMS Adaptive Filtering Algorithm
% Identify with a 16-tap FIR Filter
clear all;
u=0.3; % Selecting the fixed step size for the LMS algorithm
n=500; % Setting the iteration length
S=n+50;
X=randn(S); % Generating input signal
X=X(1,:);
h=[0.1 0.2 0.3 0.4 0.5 0.6 0.7 0.8 0.8 0.7 0.6 0.5 0.4 0.3 0.2 0.1];
% h is the system that the adaptive FIR filter will converge to
% Initial conditions
w(1,:)=zeros(1,16);
for i=1:n; % Starting the recursive LMS algorithm
    x=X(i:i+15);
    x=fliplr(x);
    d(i)=x*h';
    [A1,D1]=dwt(x,'haar'); % Wavelet decomposition of the input signal
    xdwt=[A1 D1];
    y(i)=xdwt*w(i,:)';
    e(i)=d(i)-y(i);
    R=xdwt'*xdwt; % Computing the autocorrelation matrix of the input signal
    RR=R+(0.0001/i)*eye(length(R)); % Diagonal loading
    Rinv=inv(RR);
    g=-2*e(i)*xdwt*Rinv;
    w(i+1,:)=w(i,:)-u*g; % Updating the FIR filter coefficients
end;
N=1:n;
figure; % Plotting the desired signal vs. output of the adaptive FIR filter
plot(N,d,'-m',N,y,'--b');
xlabel('Number of Iterations');
ylabel('d (-) vs. y (--)');
grid;


figure; % Plotting the system error
for k=1:n;
  error(k)=e(k)^2;
end;
plot(error);
xlabel('Number of Iterations');
ylabel('Error');
grid;
figure; % Plotting the impulse response of the wavelet domain adaptive FIR filter after convergence
stem(w(501,:));
xlabel('Number of Iterations');
ylabel('DWT Filter Impulse Response');
grid;
x=w(501,:);
CA=x(1:8);
CD=x(9:16);
ww=idwt(CA,CD,'haar'); % Convert the wavelet domain FIR filter coefficients to time domain
figure; % Plotting the time domain FIR impulse response
stem(ww);
xlabel('Number of Iterations');
ylabel('Time Domain Filter Impulse Response');
grid;
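The listing above inverts the diagonally loaded matrix RR = xdwt'*xdwt + (0.0001/i)*I at every iteration. Since xdwt is a single row vector, R is rank one, and the Sherman-Morrison identity gives inv(R + eps*I)*x' = x'/(eps + x*x'), so the whole whitened step collapses to a normalized-LMS style update. The following pure-Python sketch illustrates this (the names `haar_step`, `whitened_lms_update`, and `eps` are illustrative, not taken from the thesis code):

```python
import math

def haar_step(x):
    """One level of the orthonormal Haar DWT of an even-length window,
    returned as [approximation | detail], like xdwt = [A1 D1]."""
    a = [(x[2 * i] + x[2 * i + 1]) / math.sqrt(2) for i in range(len(x) // 2)]
    d = [(x[2 * i] - x[2 * i + 1]) / math.sqrt(2) for i in range(len(x) // 2)]
    return a + d

def whitened_lms_update(w, xdwt, d, u, eps):
    """One diagonally loaded wavelet-domain LMS step.

    With a single input vector, R = x'x is rank one, so by the
    Sherman-Morrison identity, -2*e*x*inv(R + eps*I) equals
    -2*e*x/(eps + x.x); w <- w - u*g is then a normalized-LMS step.
    """
    y = sum(wi * xi for wi, xi in zip(w, xdwt))  # filter output
    e = d - y                                    # prediction error
    norm = eps + sum(xi * xi for xi in xdwt)     # loading + input energy
    w_new = [wi + 2 * u * e * xi / norm for wi, xi in zip(w, xdwt)]
    return w_new, e
```

This reading also explains why the wavelet-domain algorithm tolerates a much larger step size than plain time-domain LMS: the step is scaled by the instantaneous input energy.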


MSE of Ensemble of 100 Runs, Time/Wavelet Domain System ID

% MSE of Ensemble of 100 Runs, Time Domain System Identification
% Identify with a 16-tap FIR filter, u=0.05
clear all;
for k=1:100
  u=0.05;
  n=500;
  S=n+50;
  X=randn(S);
  X=X(1,:);
  h=[0.1 0.2 0.3 0.4 0.5 0.6 0.7 0.8 0.8 0.7 0.6 0.5 0.4 0.3 0.2 0.1];
  % Initial conditions
  w(1,:)=zeros(1,16);
  for i=1:n;
    x=X(i:i+15);
    x=fliplr(x);
    d(i)=x*h';
    y(i)=x*w(i,:)';
    e(i)=d(i)-y(i);
    w(i+1,:)=w(i,:)+u*e(i)*x;
  end;
  ee(k,:)=e;
end;
[a b]=size(ee);
for i=1:a
  for j=1:b
    e2(i,j)=ee(i,j)^2;
  end;
end;
Esquare=mean(e2);
save timesysid;
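The inner loop of the time-domain script above is the classical LMS recursion w <- w + u*e(i)*x. A minimal pure-Python sketch of that identification loop (the function name and seed are illustrative, not from the thesis):

```python
import random

def lms_identify(h, n_iter, u, seed=0):
    """Time-domain LMS system identification, mirroring the listing:
    d(i) = x*h', y(i) = x*w', w <- w + u*e*x."""
    rng = random.Random(seed)
    taps = len(h)
    X = [rng.gauss(0.0, 1.0) for _ in range(n_iter + taps)]  # white input
    w = [0.0] * taps
    for i in range(n_iter):
        x = X[i:i + taps][::-1]                     # newest sample first (fliplr)
        d = sum(a * b for a, b in zip(x, h))        # desired (unknown system) output
        y = sum(a * b for a, b in zip(x, w))        # adaptive filter output
        e = d - y
        w = [wi + u * e * xi for wi, xi in zip(w, x)]
    return w
```

With a noiseless desired signal and a sufficiently small step size, the weights converge to the true impulse response h.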


% MSE of Ensemble of 100 Runs, Wavelet Domain System Identification
% Identify with a 16-tap FIR filter, u=0.05
clear all;
for k=1:100
  u=0.05;
  n=500;
  S=n+50;
  X=randn(S);
  X=X(1,:);
  h=[0.1 0.2 0.3 0.4 0.5 0.6 0.7 0.8 0.8 0.7 0.6 0.5 0.4 0.3 0.2 0.1];
  % Initial conditions
  w(1,:)=zeros(1,16);
  for i=1:n;
    x=X(i:i+15);
    x=fliplr(x);
    d(i)=x*h';
    [A1,D1]=dwt(x,'haar'); % 1st level
    xdwt=[A1 D1];
    y(i)=xdwt*w(i,:)';
    e(i)=d(i)-y(i);
    R=xdwt'*xdwt;
    RR=R+(0.0001/i)*eye(length(R));
    Rinv=inv(RR);
    g=-2*e(i)*xdwt*Rinv;
    w(i+1,:)=w(i,:)-u*g;
  end;
  ee(k,:)=e;
end;
[a b]=size(ee);
for i=1:a
  for j=1:b
    e2(i,j)=ee(i,j)^2;
  end;
end;


Esquare=mean(e2);
save dwtsysid1;

% MSE of Ensemble of 100 Runs, Wavelet Domain System Identification
% Identify with a 16-tap FIR filter, u=0.3
clear all;
for k=1:100
  u=0.3;
  n=500;
  S=n+50;
  X=randn(S);
  X=X(1,:);
  h=[0.1 0.2 0.3 0.4 0.5 0.6 0.7 0.8 0.8 0.7 0.6 0.5 0.4 0.3 0.2 0.1];
  % Initial conditions
  w(1,:)=zeros(1,16);
  for i=1:n;
    x=X(i:i+15);
    x=fliplr(x);
    d(i)=x*h';
    [A1,D1]=dwt(x,'haar'); % 1st level
    xdwt=[A1 D1];
    y(i)=xdwt*w(i,:)';
    e(i)=d(i)-y(i);
    R=xdwt'*xdwt;
    RR=R+(0.0001/i)*eye(length(R));
    Rinv=inv(RR);
    g=-2*e(i)*xdwt*Rinv;
    w(i+1,:)=w(i,:)-u*g;
  end;
  ee(k,:)=e;
end;
[a b]=size(ee);
for i=1:a


  for j=1:b
    e2(i,j)=ee(i,j)^2;
  end;
end;
Esquare=mean(e2);
save dwtsysid2;

% Plot MSE of Ensemble of 100 Runs, Time/Wavelet Domain System Identification
clear all;
load timesysid;
time=Esquare;
load dwtsysid1;
dwt1=Esquare;
n=1:500;
figure;
plot(n,time,'-m',n,dwt1,'--b');
xlabel('Number of Iterations');
ylabel('Time Domain LMS (-) vs. Wavelet Domain LMS (--)');
grid;
load dwtsysid2;
dwt2=Esquare;
n=1:500;
figure;
plot(n,time,'-m',n,dwt2,'--b');
xlabel('Number of Iterations');
ylabel('Time Domain LMS (-) vs. Wavelet Domain LMS (--)');
grid;


ADPCM with Time Domain LMS Filtering Algorithm (Input is a sine wave)

% Adaptive Differential Pulse Code Modulation (ADPCM)
% Time Domain LMS Algorithm applied to a sinusoidal input
clear all;
% Generate a sinusoidal signal
Fs=8000;
f=100;
t=0:1/Fs:0.4;
s=sin(2*pi*f*t);
% Jayant Quantizer (4-bit)
b=4;
do=(2^(-(b-1)))*1;
M=[0.9 0.9 0.9 0.9 1.2 1.6 2.0 2.4];
delta(1)=do;
% LMS Algorithm FIR Filter
n=500;
u=0.8;
tap=16;
previous=zeros(1,tap);
wf=zeros(size(1:tap));
estimateqs(1)=previous*wf';
% 4-bit adaptive standard Jayant quantizer is applied to s(n)
% LMS Algorithm is applied
for i=1:n
  e(i)=s(i+tap)-estimateqs(i);
  qe(i)=floor(e(i)/delta(i))*delta(i);
  for j=2:tap
    qslms(j)=previous(j-1);
  end;
  qslms(1)=estimateqs(i)+qe(i);
  previous=qslms;


  reconstruct(i)=estimateqs(i)+qe(i);
  qslms=qslms-mean(qslms);
  m(i)=mean(qslms);
  wf(i+1,:)=wf(i,:)+u*qe(i)*qslms;
  estimateqs(i+1)=qslms*wf(i+1,:)';
  k=floor(abs(qe(i))/delta(i));
  if k>8
    k=8;
  elseif k==0
    k=1;
  end;
  delta(i+1)=delta(i)*M(k);
end;
reconstruct=reconstruct+m;
figure;
subplot(411);
plot(s(tap+1:tap+n));
ylabel('s(n)');
title('ADPCM (Time Domain LMS Algorithm, Tap=16, u=0.8, SNR=47.51 dB, SNRQ=66.48 dB)');
grid;
subplot(412);
plot(estimateqs(1:n));
ylabel('Estimated s-(n)');
grid;
subplot(413);
plot(reconstruct(1:n));
ylabel('s-(n)');
grid;
subplot(414);
plot(qe(1:n));
xlabel('Number of Samples');
ylabel('e-(n)');
grid;
SNRQ=10*log10(var(s(1+tap+400:n+tap))/var(reconstruct(1+400:n)-s(1+tap+400:n+tap)))
SNR=10*log10(var(s(1+tap+400:n+tap))/var(estimateqs(1+400:n)-s(1+tap+400:n+tap)))
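The step-size logic embedded in the loop above is a Jayant-style adaptive quantizer: the multiplier table M shrinks delta when the quantized error is small relative to delta and expands it otherwise. A pure-Python sketch of a single step-size update (the function name `jayant_step` is illustrative, not from the thesis):

```python
def jayant_step(qe, delta, M=(0.9, 0.9, 0.9, 0.9, 1.2, 1.6, 2.0, 2.4)):
    """One step-size update of the 4-bit adaptive Jayant quantizer:
    k = floor(|qe|/delta), clamped to 1..8, then delta <- delta*M(k)."""
    k = int(abs(qe) // delta)  # floor(abs(qe)/delta)
    k = min(k, 8)              # if k > 8, k = 8
    k = max(k, 1)              # if k == 0, k = 1
    return delta * M[k - 1]    # M is 1-indexed in the MATLAB listing
```

Because the decoder sees the same quantized errors, it can track delta with the identical rule and no side information.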


ADPCM with Time Domain LMS Filtering Algorithm (Input is a sine wave with a sudden change)

% Adaptive Differential Pulse Code Modulation (ADPCM)
% Time Domain LMS Algorithm applied to a sinusoidal input with a sudden change
clear all;
% Generate a sinusoidal signal
Fs=8000;
f=100;
t=0:1/Fs:0.4;
s=sin(2*pi*f*t);
s(516)=-0.2;
s(517)=-0.3;
s(518)=-0.5;
% Jayant Quantizer (4-bit)
b=4;
do=(2^(-(b-1)))*1;
M=[0.9 0.9 0.9 0.9 1.2 1.6 2.0 2.4];
delta(1)=do;
% LMS Algorithm FIR Filter
n=1000;
u=0.5;
tap=16;
previous=zeros(1,tap);
wf=zeros(size(1:tap));
estimateqs(1)=previous*wf';
% 4-bit adaptive standard Jayant quantizer is applied to s(n)
% LMS Algorithm is applied
for i=1:n
  e(i)=s(i+tap)-estimateqs(i);
  qe(i)=floor(e(i)/delta(i))*delta(i);
  for j=2:tap
    qslms(j)=previous(j-1);
  end;
  qslms(1)=estimateqs(i)+qe(i);


  previous=qslms;
  reconstruct(i)=estimateqs(i)+qe(i);
  qslms=qslms-mean(qslms);
  m(i)=mean(qslms);
  wf(i+1,:)=wf(i,:)+u*qe(i)*qslms;
  estimateqs(i+1)=qslms*wf(i+1,:)';
  k=floor(abs(qe(i))/delta(i));
  if k>8
    k=8;
  elseif k==0
    k=1;
  end;
  delta(i+1)=delta(i)*M(k);
end;
reconstruct=reconstruct+m;
figure;
subplot(411);
plot(s(tap+1:tap+n));
ylabel('s(n)');
title('ADPCM (Time Domain LMS Algorithm, Tap=16, u=0.5, SNR=40.03 dB, SNRQ=59.46 dB)');
grid;
subplot(412);
plot(estimateqs(1:n));
ylabel('Estimated s-(n)');
grid;
subplot(413);
plot(reconstruct(1:n));
ylabel('s-(n)');
grid;
subplot(414);
plot(qe(1:n));
xlabel('Number of Samples');
ylabel('e-(n)');
grid;
SNRQ=10*log10(var(s(1+tap+900:n+tap))/var(reconstruct(1+900:n)-s(1+tap+900:n+tap)))
SNR=10*log10(var(s(1+tap+900:n+tap))/var(estimateqs(1+900:n)-s(1+tap+900:n+tap)))
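Both ADPCM listings share the same encoder skeleton: predict the next sample, quantize the prediction error with the current step size, and rebuild the sample exactly as the decoder will. A pure-Python sketch of that single step (the function name `adpcm_encode_step` is illustrative, not from the thesis):

```python
import math

def adpcm_encode_step(sample, prediction, delta):
    """Quantize the prediction error and form the local reconstruction,
    mirroring qe(i)=floor(e(i)/delta(i))*delta(i) and
    reconstruct(i)=estimateqs(i)+qe(i) in the listings."""
    e = sample - prediction
    qe = math.floor(e / delta) * delta  # uniform quantization of the error
    reconstruct = prediction + qe       # what the decoder also computes
    return qe, reconstruct
```

Only the quantized error qe needs to be transmitted; the decoder regenerates the prediction from past reconstructions and adds qe back.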


ADPCM with Wavelet Domain LMS Filtering Algorithm (Input is a sine wave)

% Adaptive Differential Pulse Code Modulation (ADPCM)
% DWT Domain LMS Algorithm applied to a sinusoidal signal
clear all;
% Generate a sinusoidal signal
Fs=8000;
f=100;
t=0:1/Fs:0.4;
s=sin(2*pi*f*t);
% Jayant Quantizer (4-bit)
b=4;
do=(2^(-(b-1)))*1;
M=[0.9 0.9 0.9 0.9 1.2 1.6 2.0 2.4];
delta(1)=do;
% LMS Algorithm FIR Filter
n=500;
u=0.6;
tap=16;
previous=zeros(1,tap);
w=zeros(size(1:tap));
estimateqs(1)=previous*w';
% 4-bit adaptive standard Jayant quantizer is applied to s(n)
% LMS Algorithm is applied
for i=1:n
  e(i)=s(i+tap)-estimateqs(i);
  qe(i)=floor(e(i)/delta(i))*delta(i);
  for j=2:tap
    qslms(j)=previous(j-1);
  end;
  qslms(1)=estimateqs(i)+qe(i);
  reconstruct(i)=estimateqs(i)+qe(i);


  previous=qslms;
  qslms=qslms-mean(qslms);
  m(i)=mean(qslms);
  % DWT of the FIR filter input qslms
  [A1,D1]=dwt(qslms,'db6'); % 1st level
  A11=A1(6:length(A1)); % Low frequency component of 1st level
  D11=D1(1:length(D1)-5); % High frequency component of 1st level
  [A2,D2]=dwt(A11,'db3'); % 2nd level
  A22=A2(3:length(A2));
  D22=D2(1:length(D2)-2);
  qsdwt=[A22 D22 D11]; % DWT domain LMS FIR filter input
  R=qsdwt'*qsdwt;
  R=R+(0.0001/i)*eye(tap);
  Rinv=inv(R);
  g=-2*e(i)*qsdwt*Rinv;
  w(i+1,:)=w(i,:)-u*g;
  estimateqs(i+1)=qsdwt*w(i+1,:)';
  k=floor(abs(qe(i))/delta(i));
  if k>8
    k=8;
  elseif k==0
    k=1;
  end;
  delta(i+1)=delta(i)*M(k);
end;
reconstruct=reconstruct+m;
figure;
subplot(411);
plot(s(1+tap:n+tap));
ylabel('s(n)');
title('ADPCM (DWT Domain LMS Algorithm, Tap=16, u=0.6, SNRQ=110.68 dB, SNR=126.09 dB)');
grid;
subplot(412);


plot(estimateqs(1:n));
ylabel('Estimated s-(n)');
grid;
subplot(413);
plot(reconstruct(1:n));
ylabel('s-(n)');
grid;
subplot(414);
plot(qe(1:n));
xlabel('Number of Samples');
ylabel('e-(n)');
grid;
SNRQ=10*log10(var(s(1+tap+400:n+tap))/var(reconstruct(1+400:n)-s(1+tap+400:n+tap)))
SNR=10*log10(var(s(1+tap+400:n+tap))/var(estimateqs(1+400:n)-s(1+tap+400:n+tap)))


ADPCM with Wavelet Domain LMS Filtering Algorithm (Input is a sine wave with a sudden change)

% Adaptive Differential Pulse Code Modulation (ADPCM)
% DWT Domain LMS Algorithm, input is a sine wave with a sudden change
clear all;
% Generate a sinusoidal signal
Fs=8000;
f=100;
t=0:1/Fs:0.4;
s=sin(2*pi*f*t);
s(516)=-0.2;
s(517)=-0.3;
s(518)=-0.5;
% Jayant Quantizer (4-bit)
b=4;
do=(2^(-(b-1)))*1;
M=[0.9 0.9 0.9 0.9 1.2 1.6 2.0 2.4];
delta(1)=do;
% LMS Algorithm FIR Filter
n=1000;
u=0.5;
tap=16;
previous=zeros(1,tap);
w=zeros(size(1:tap));
estimateqs(1)=previous*w';
% 4-bit adaptive standard Jayant quantizer is applied to s(n)
% LMS Algorithm is applied
for i=1:n
  e(i)=s(i+tap)-estimateqs(i);
  qe(i)=floor(e(i)/delta(i))*delta(i);
  for j=2:tap
    qslms(j)=previous(j-1);
  end;


  qslms(1)=estimateqs(i)+qe(i);
  reconstruct(i)=estimateqs(i)+qe(i);
  previous=qslms;
  qslms=qslms-mean(qslms);
  m(i)=mean(qslms);
  % DWT of the FIR filter input qslms
  [A1,D1]=dwt(qslms,'db6'); % 1st level
  A11=A1(6:length(A1)); % Low frequency component of 1st level
  D11=D1(1:length(D1)-5); % High frequency component of 1st level
  [A2,D2]=dwt(A11,'db3'); % 2nd level
  A22=A2(3:length(A2));
  D22=D2(1:length(D2)-2);
  qsdwt=[A22 D22 D11]; % DWT domain LMS FIR filter input
  R=qsdwt'*qsdwt;
  R=R+(0.0001/i)*eye(tap);
  Rinv=inv(R);
  g=-2*e(i)*qsdwt*Rinv;
  w(i+1,:)=w(i,:)-u*g;
  estimateqs(i+1)=qsdwt*w(i+1,:)';
  k=floor(abs(qe(i))/delta(i));
  if k>8
    k=8;
  elseif k==0
    k=1;
  end;
  delta(i+1)=delta(i)*M(k);
end;
reconstruct=reconstruct+m;
figure;
subplot(411);
plot(s(1+tap:n+tap));
ylabel('s(n)');
title('ADPCM (DWT Domain LMS Algorithm, Tap=16, u=0.5, SNRQ=165.54, SNR=152.87)');


grid;
subplot(412);
plot(estimateqs(1:n));
ylabel('Estimated s-(n)');
grid;
subplot(413);
plot(reconstruct(1:n));
ylabel('s-(n)');
grid;
subplot(414);
plot(qe(1:n));
xlabel('Number of Samples');
ylabel('e-(n)');
grid;
SNRQ=10*log10(var(s(1+tap+900:n+tap))/var(reconstruct(1+900:n)-s(1+tap+900:n+tap)))
SNR=10*log10(var(s(1+tap+900:n+tap))/var(estimateqs(1+900:n)-s(1+tap+900:n+tap)))


MSE of Ensemble of 50 Runs, Time/Wavelet Domain ADPCM (Input is a sine wave)

% MSE of an Ensemble of 50 Runs
% Time Domain Adaptive Differential Pulse Code Modulation (ADPCM)
% LMS Algorithm applied to a sinusoidal input
clear all;
for k=1:50
  % Generate a sinusoidal signal
  Fs=8000;
  f=100;
  t=0:1/Fs:0.4;
  s=sin(2*pi*f*t);
  % Jayant Quantizer (4-bit)
  b=4;
  do=(2^(-(b-1)))*1;
  M=[0.9 0.9 0.9 0.9 1.2 1.6 2.0 2.4];
  delta(1)=do;
  % LMS Algorithm FIR Filter
  n=500;
  u=0.8;
  tap=16;
  previous=zeros(1,tap);
  wf=zeros(size(1:tap));
  estimateqs(1)=previous*wf';
  % 4-bit adaptive standard Jayant quantizer is applied to s(n)
  % LMS Algorithm is applied
  for i=1:n
    e(i)=s(i+tap)-estimateqs(i);
    qe(i)=floor(e(i)/delta(i))*delta(i);
    for j=2:tap
      qslms(j)=previous(j-1);
    end;
    qslms(1)=estimateqs(i)+qe(i);


    previous=qslms;
    reconstruct(i)=estimateqs(i)+qe(i);
    qslms=qslms-mean(qslms);
    m(i)=mean(qslms);
    wf(i+1,:)=wf(i,:)+u*qe(i)*qslms;
    estimateqs(i+1)=qslms*wf(i+1,:)';
    kq=floor(abs(qe(i))/delta(i)); % kq, not k: k indexes the ensemble run
    if kq>8
      kq=8;
    elseif kq==0
      kq=1;
    end;
    delta(i+1)=delta(i)*M(kq);
  end;
  reconstruct=reconstruct+m;
  ee(k,:)=qe;
end;
[a b]=size(ee);
for i=1:a
  for j=1:b
    e2(i,j)=ee(i,j)^2;
  end;
end;
Esquare=mean(e2);
save timeMSE;

% Adaptive Differential Pulse Code Modulation (ADPCM)
% DWT Domain LMS Algorithm
clear all;
for kk=1:50
  % Generate a sinusoidal signal
  Fs=8000;
  f=100;
  t=0:1/Fs:0.4;
  s=sin(2*pi*f*t);


  % Jayant Quantizer (4-bit)
  b=4;
  do=(2^(-(b-1)))*1;
  M=[0.9 0.9 0.9 0.9 1.2 1.6 2.0 2.4];
  delta(1)=do;
  % LMS Algorithm FIR Filter
  n=500;
  u=0.6;
  tap=16;
  previous=zeros(1,tap);
  w=zeros(size(1:tap));
  estimateqs(1)=previous*w';
  % 4-bit adaptive standard Jayant quantizer is applied to s(n)
  % LMS Algorithm is applied
  for i=1:n
    e(i)=s(i+tap)-estimateqs(i);
    qe(i)=floor(e(i)/delta(i))*delta(i);
    for j=2:tap
      qslms(j)=previous(j-1);
    end;
    qslms(1)=estimateqs(i)+qe(i);
    reconstruct(i)=estimateqs(i)+qe(i);
    previous=qslms;
    qslms=qslms-mean(qslms);
    m(i)=mean(qslms);
    % DWT of the FIR filter input qslms
    [A1,D1]=dwt(qslms,'db6'); % 1st level
    A11=A1(6:length(A1)); % Low frequency component of 1st level
    D11=D1(1:length(D1)-5); % High frequency component of 1st level
    [A2,D2]=dwt(A11,'db3'); % 2nd level
    A22=A2(3:length(A2));
    D22=D2(1:length(D2)-2);
    qsdwt=[A22 D22 D11]; % DWT domain LMS FIR filter input
    R=qsdwt'*qsdwt;
    R=R+(0.0001/i)*eye(tap);


    Rinv=inv(R);
    g=-2*e(i)*qsdwt*Rinv;
    w(i+1,:)=w(i,:)-u*g;
    estimateqs(i+1)=qsdwt*w(i+1,:)';
    k=floor(abs(qe(i))/delta(i));
    if k>8
      k=8;
    elseif k==0
      k=1;
    end;
    delta(i+1)=delta(i)*M(k);
  end;
  reconstruct=reconstruct+m;
  ee(kk,:)=qe;
end;
[a b]=size(ee);
for i=1:a
  for j=1:b
    e2(i,j)=ee(i,j)^2;
  end;
end;
Esquare=mean(e2);
save dwtMSE;

clear all;
load timeMSE;
timeE=Esquare;
load dwtMSE;
dwtE=Esquare;
N=1:length(timeE);
figure;
plot(N,timeE,'-m',N,dwtE,'--b');
xlabel('Number of Iterations');
ylabel('Time Domain ADPCM MSE (-) vs. Wavelet Domain ADPCM MSE (--)');
title('MSE of an Ensemble of 50 Runs (ADPCM, Time/Wavelet Domain LMS)');
grid;


MSE of Ensemble of 50 Runs, Time/Wavelet Domain ADPCM (Input is a sine wave with a sudden change)

% Adaptive Differential Pulse Code Modulation (ADPCM)
% LMS Algorithm applied to a sinusoidal input with a sudden change
clear all;
for kk=1:50
  % Generate a sinusoidal signal
  Fs=8000;
  f=100;
  t=0:1/Fs:0.4;
  s=sin(2*pi*f*t);
  s(516)=-0.2;
  s(517)=-0.3;
  s(518)=-0.5;
  % Jayant Quantizer (4-bit)
  b=4;
  do=(2^(-(b-1)))*1;
  M=[0.9 0.9 0.9 0.9 1.2 1.6 2.0 2.4];
  delta(1)=do;
  % LMS Algorithm FIR Filter
  n=1000;
  u=0.5;
  tap=16;
  previous=zeros(1,tap);
  wf=zeros(size(1:tap));
  estimateqs(1)=previous*wf';
  % 4-bit adaptive standard Jayant quantizer is applied to s(n)
  % LMS Algorithm is applied
  for i=1:n
    e(i)=s(i+tap)-estimateqs(i);
    qe(i)=floor(e(i)/delta(i))*delta(i);
    for j=2:tap
      qslms(j)=previous(j-1);
    end;


    qslms(1)=estimateqs(i)+qe(i);
    previous=qslms;
    reconstruct(i)=estimateqs(i)+qe(i);
    qslms=qslms-mean(qslms);
    m(i)=mean(qslms);
    wf(i+1,:)=wf(i,:)+u*qe(i)*qslms;
    estimateqs(i+1)=qslms*wf(i+1,:)';
    k=floor(abs(qe(i))/delta(i));
    if k>8
      k=8;
    elseif k==0
      k=1;
    end;
    delta(i+1)=delta(i)*M(k);
  end;
  reconstruct=reconstruct+m;
  ee(kk,:)=qe;
end;
[a b]=size(ee);
for i=1:a
  for j=1:b
    e2(i,j)=ee(i,j)^2;
  end;
end;
Esquare=mean(e2);
save timeMSE;

% Adaptive Differential Pulse Code Modulation (ADPCM)
% DWT Domain LMS Algorithm
clear all;
for kk=1:50
  % Generate a sinusoidal signal
  Fs=8000;
  f=100;
  t=0:1/Fs:0.4;
  s=sin(2*pi*f*t);
  s(516)=-0.2;


  s(517)=-0.3;
  s(518)=-0.5;
  % Jayant Quantizer (4-bit)
  b=4;
  do=(2^(-(b-1)))*1;
  M=[0.9 0.9 0.9 0.9 1.2 1.6 2.0 2.4];
  delta(1)=do;
  % LMS Algorithm FIR Filter
  n=1000;
  u=0.3;
  tap=16;
  previous=zeros(1,tap);
  w=zeros(size(1:tap));
  estimateqs(1)=previous*w';
  % 4-bit adaptive standard Jayant quantizer is applied to s(n)
  % LMS Algorithm is applied
  for i=1:n
    e(i)=s(i+tap)-estimateqs(i);
    qe(i)=floor(e(i)/delta(i))*delta(i);
    for j=2:tap
      qslms(j)=previous(j-1);
    end;
    qslms(1)=estimateqs(i)+qe(i);
    reconstruct(i)=estimateqs(i)+qe(i);
    previous=qslms;
    qslms=qslms-mean(qslms);
    m(i)=mean(qslms);
    % DWT of the FIR filter input qslms
    [A1,D1]=dwt(qslms,'db6'); % 1st level
    A11=A1(6:length(A1)); % Low frequency component of 1st level
    D11=D1(1:length(D1)-5); % High frequency component of 1st level
    [A2,D2]=dwt(A11,'db3'); % 2nd level
    A22=A2(3:length(A2));
    D22=D2(1:length(D2)-2);
    qsdwt=[A22 D22 D11]; % DWT domain LMS FIR filter input


    R=qsdwt'*qsdwt;
    R=R+(0.0001/i)*eye(tap);
    Rinv=inv(R);
    g=-2*e(i)*qsdwt*Rinv;
    w(i+1,:)=w(i,:)-u*g;
    estimateqs(i+1)=qsdwt*w(i+1,:)';
    k=floor(abs(qe(i))/delta(i));
    if k>8
      k=8;
    elseif k==0
      k=1;
    end;
    delta(i+1)=delta(i)*M(k);
  end;
  reconstruct=reconstruct+m;
  ee(kk,:)=qe;
end;
[a b]=size(ee);
for i=1:a
  for j=1:b
    e2(i,j)=ee(i,j)^2;
  end;
end;
Esquare=mean(e2);
save dwtMSE;

clear all;
load timeMSE;
timeE=Esquare;
load dwtMSE;
dwtE=Esquare;
N=1:length(timeE);
figure;
plot(N,timeE,'-m',N,dwtE,'--b');
xlabel('Number of Iterations');
ylabel('Time Domain ADPCM MSE (-) vs. Wavelet Domain ADPCM MSE (--)');
title('MSE of an Ensemble of 50 Runs (ADPCM, Time/Wavelet Domain LMS)');
grid;


REFERENCES

[1] Srinath Hosur and Ahmed H. Tewfik, "Wavelet Transform Domain Adaptive FIR Filtering", IEEE Transactions on Signal Processing, Vol. 45, No. 3, March 1997, pp. 617-630.

[2] Simon Haykin, "Adaptive Filter Theory, Third Edition", Prentice Hall, Upper Saddle River, New Jersey 07458, 1996.

[3] Martin Vetterli and Cormac Herley, "Wavelets and Filter Banks: Theory and Design", IEEE Transactions on Signal Processing, Vol. 40, No. 9, September 1992, pp. 2207-2232.

[4] Ingrid Daubechies, "Ten Lectures on Wavelets, Second Printing", Capital City Press, Montpelier, Vermont, 1992.

[5] Dr. Tamal Bose, "Class Notes, Wavelets and Multi-Rate Signal Processing", University of Colorado at Denver, Spring Semester, 1997.

[6] Gilbert Strang and Truong Nguyen, "Wavelets and Filter Banks", Wellesley-Cambridge Press, Wellesley, MA 02181, 1996.

[7] Thomas Hart, "Image De-Noising Using Wavelet Shrinkage and Non-Linear Filtering", Master's Thesis, University of Colorado, Denver, Colorado, 1998.

[8] Peter Steffen, Peter N. Heller, Ramesh A. Gopinath, and C. Sidney Burrus, "Theory of Regular M-Band Wavelet Bases", IEEE Transactions on Signal Processing, Vol. 41, No. 12, December 1993, pp. 3497-3511.

[9] Dr. Tamal Bose, "Class Notes, Adaptive Signal Processing", University of Colorado at Denver, Spring Semester, 1998.

[10] Edwin K. P. Chong and Stanislaw H. Zak, "An Introduction to Optimization", John Wiley & Sons, Inc., New York, NY 10158-0012, 1996.

[11] Karl J. Astrom and Bjorn Wittenmark, "Adaptive Control, Third Edition",