The fast wavelet transform (FWT)

Material Information

The fast wavelet transform (FWT)
Boyer, Keith G
Place of Publication: Denver, CO
Publisher: University of Colorado Denver
Publication Date: 1995
Physical Description: 67 leaves : illustrations ; 29 cm


Subjects / Keywords:
Wavelets (Mathematics) (lcsh)
bibliography (marcgt)
theses (marcgt)
non-fiction (marcgt)


Includes bibliographical references (leaves 66-67).
Submitted in partial fulfillment of the requirements for the degree, Master of Science, Applied Mathematics
General Note:
Department of Mathematical and Statistical Sciences
Statement of Responsibility:
by Keith G. Boyer.

Record Information

Source Institution:
University of Colorado Denver
Holding Location:
Auraria Library
Rights Management:
All applicable rights reserved by the source institution and holding location.
Resource Identifier:
34889301 ( OCLC )
LD1190.L622 1995m .B69 ( lcc )




The Fast Wavelet Transform (FWT) by Keith G. Boyer B.S.E.E.T., DeVry Institute of Technology, 1984 A thesis submitted to the University of Colorado at Denver in partial fulfillment of the requirements for the degree of Master of Science Applied Mathematics 1995


This thesis for the Master of Science degree by Keith G. Boyer has been approved by
William L. Briggs
William E. Cherowitzo
Weldon A. Lodwick
Date


Boyer, Keith G. (M.S., Applied Mathematics)
The Fast Wavelet Transform (FWT)
Thesis directed by Professor William L. Briggs

ABSTRACT

A mathematical basis for the construction of the fast wavelet transform (FWT), based on the wavelets of Daubechies, is given. A contrast is made between the continuous wavelet transform and the discrete wavelet transform that provides the fundamental structure for the fast wavelet transform algorithm. The wavelets considered here lead to orthonormal bases. To realize the orthonormality of these bases, the Fourier transform is used to construct equivalent realizations of the wavelet recursions in the spectral domain. These spectral representations lead to a complementary pair of filter transfer functions, from which the inverse Fourier transform can be used to generate equivalent Daubechies convolution coefficients. The discrete wavelet transform generated from the convolution filter operations is incorporated into a recursive filter decimation algorithm that is the FWT. Finally, the FWT is applied to the image compression problem.

This abstract accurately represents the content of the candidate's thesis.

Signed ________________________
William L. Briggs


Contents

Chapter
1. Introduction
1.1 Wavelets and Signal Analysis
1.2 The Continuous Wavelet Transform
1.3 Discretization of the Continuous Wavelet Transform
2. Multiresolution Approximations
2.1 Wavelet and Scaling Function Coefficients
2.2 Orthonormality of Compactly Supported Wavelets
2.3 Bi-Orthogonal Decomposition
2.4 z-Transform and Filter Coefficients
3. Daubechies' Construction
3.1 Spectral Factorization of |Q(ξ)|²
3.2 Daubechies' Examples for N=2 and N=3
4. Fast Wavelet Transform (FWT)
4.1 The FWT Algorithm
4.2 The FWT and Image Compression
4.3 The FWT Image Compression Algorithm
Appendix A. Matlab Code and Compression Images
References


1. Introduction

1.1 Wavelets and Signal Analysis

A signal may exist in its original form as a continuous entity. This continuous signal may then be converted into a quantized form as a sequence of sampled values that describe the signal at discrete points in time. A signal may also be generated in its original form as a discrete sequence. Images and audio signals are examples of signals that originate in a continuous form and are stored as discrete sequences on compact disks. These signals may also be generated directly as discrete sequences. Signal analysis is born out of the need to understand the composition of both discrete and continuous signals. This analysis usually takes the form of a sub-space decomposition into a linear combination of basis functions. The classic decomposition tool is the Fourier transform, which decomposes a signal into a linear combination of complex exponential functions (equivalently, sines and cosines). The concept of signal stationarity plays a key role in determining the effectiveness of the signal decomposition obtained from the Fourier transform. The stationarity of a signal is judged by its statistical invariance over time. If the probability of a signal being a certain value is constant over time, then the signal is said to be stationary. If, however, transient statistical events occur that cannot be predicted, the signal becomes nonstationary. Due to the periodic nature of the Fourier transform, it is ideal for the decomposition and analysis of stationary signals. On the other hand, transient nonstationary signals require a basis that is more localized in both time and frequency. To see how the Fourier transform is not localized in time and frequency, consider the transformation of a square pulse in the time domain. The Fourier transform of this


pulse is a sinc (sin(x)/x) function in the spectral domain, a function for which the energy is clearly not localized. As for the time domain, the periodic basis functions of the Fourier transform are clearly not localized [5]. Wavelets solve this problem by having a prescribed smoothness, with energy that is well-localized in both time and frequency. In general, not only do wavelets require fewer basis functions than trigonometric basis functions, but the well-localized nature of the wavelet basis ameliorates such reconstruction anomalies as the Gibbs phenomenon [9]. To understand the wavelet basis, we will consider a continuous transformation in L²(ℝ).

1.2 The Continuous Wavelet Transform

The wavelet basis is a family of functions based on a well-localized oscillating function ψ(t) of the real variable t. ψ(t) is called the mother wavelet because all other wavelet functions within the family are generated from translations and dilations of ψ(t). Actually, the mother wavelet ψ(t) is the function with zero translation and a dilation of 1. Equation (1.1) shows the family of functions generated from ψ by translation and dilation. In (1.1), b is the translation variable and a is the dilation variable:

ψ_{a,b}(t) = (1/√a) ψ((t − b)/a),  a > 0, b ∈ ℝ.  (1.1)

A simple example of a wavelet might be the Mexican hat function

ψ(x) = (1 − x²) e^{−x²/2}.  (1.2)

The mother wavelet of (1.2) is the solid function in the center of figure (1.1), together with two translated dilations.


Figure 1.1: Translations and dilations of ψ(x).

A restriction on ψ(t) is that it have a zero integral. Actually, for reasons we will discover later, a further restriction on ψ(t) requires that the first m+1 moments vanish. This gives us condition (1.3), a series of integral moments equal to zero:

0 = ∫_{−∞}^{∞} t^j ψ(t) dt,  j = 0, 1, ..., m.  (1.3)

Figure (1.2) shows plots of the integrands of (1.3) as an example of the vanishing moments of ψ(x) in (1.2). In this case, if we consider the integrals of the functions in figure (1.2), moments 0 and 1 vanish, while moment 2 does not.
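The vanishing-moment behavior shown in figure (1.2) can be reproduced numerically. The following sketch (Python/NumPy rather than the thesis's Matlab; the Mexican hat is taken in the standard unnormalized form ψ(x) = (1 − x²)e^{−x²/2}) integrates t^j ψ(t) on a grid for j = 0, 1, 2:

```python
import numpy as np

# Mexican hat wavelet in the (unnormalized) form psi(x) = (1 - x^2) exp(-x^2/2)
def psi(x):
    return (1.0 - x**2) * np.exp(-x**2 / 2.0)

# wide grid: psi decays like exp(-x^2/2), so truncation error is negligible
x = np.linspace(-20.0, 20.0, 400001)
dx = x[1] - x[0]

# j-th moment: integral of x^j * psi(x) dx, approximated by a Riemann sum
moments = [float(np.sum(x**j * psi(x)) * dx) for j in range(3)]
```

Moments 0 and 1 come out numerically zero, while moment 2 equals −2√(2π) ≈ −5.01, confirming that the Mexican hat has exactly two vanishing moments.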


Figure 1.2: Moments of (1.2).

With a mother wavelet ψ meeting the restriction of (1.3), a corresponding set of functions in the form of (1.1) can be considered. The translation and dilation parameters a, b will range over a continuous set S such that the set of functions {ψ_{a,b}} has sufficient cardinality to allow any function f in L²(ℝ) to be reconstructed from the coefficients of the wavelet transform defined by the inner product of f and ψ_{a,b}. The inner product of the vector space L²(ℝ) of square-integrable, measurable functions is defined by

⟨f, g⟩ = ∫_{−∞}^{∞} f(x) g*(x) dx.

The inner product can be used to define the continuous wavelet transform F(a,b) of f ∈ L²(ℝ) as

F(a,b) = ⟨f, ψ_{a,b}⟩,  for f, ψ ∈ L²(ℝ).


This is the analysis of the function or signal that is the motivation for the construction of the wavelet family. The synthesis of f(x) from F involves a linear combination of the original wavelets using the coefficients of F(a,b). The synthesis or inversion formula is given by

f(x) = ∫₀^∞ ∫_{−∞}^{∞} F(a,b) ψ_{a,b}(x) db da / a².

These formulas, from a historical perspective, are interesting in that they show the basic analysis involved in the early development of wavelets. However, continuous analysis is usually impractical, which gave rise to discrete solutions of the wavelet analysis problem. For an extensive historical perspective, reference [5] is very complete.

1.3 Discretization of the Continuous Wavelet Transform

For the same reasons that we discretize differential equations, we will consider a discretization of the continuous wavelet transform in the (a,b) plane. This discretization will allow for numerical solutions based on a summation rather than a continuous integral. Obviously, many possible discretizations of the continuous (a,b) plane exist. However, only certain restrictions on (a,b) will give the desired basis functions. One classic choice for restricting (a,b) is a binary discretization that leads to the wavelets of Daubechies [2]. This binary discretization sets a = 2^{−j} and b = ak, where j, k ∈ ℤ. The discretization of F(a,b) then becomes F(2^{−j}, 2^{−j}k), j, k ∈ ℤ, where the corresponding discrete wavelet functions are defined by

ψ_{j,k}(x) = 2^{j/2} ψ(2^j x − k),  j, k ∈ ℤ.  (1.4)


Equation (1.4) is one of three formal definitions of a wavelet [5]. To introduce further wavelet definitions, we must first define the Fourier transform. The Fourier transform of the function f in L²(ℝ) is

f̂(ξ) = ∫_{−∞}^{∞} f(t) e^{−iξt} dt.

The inverse Fourier transform is the expression

f(t) = (1/2π) ∫_{−∞}^{∞} f̂(ξ) e^{iξt} dξ.

With the Fourier transform pair defined, we can consider the formal wavelet definitions.

(1) A wavelet is a function ψ(t) in L²(ℝ) whose Fourier transform satisfies the condition ∫₀^∞ |ψ̂(t)|²/t dt = 1 almost everywhere (continuous sense).

(2) A wavelet is a function ψ(t) in L²(ℝ) whose Fourier transform satisfies the condition Σ_{j∈ℤ} |ψ̂(2^j ξ)|² = 1 almost everywhere (discrete sense).

(3) A wavelet is a function ψ(t) in L²(ℝ) such that {2^{j/2} ψ(2^j x − k), j, k ∈ ℤ} is an orthonormal basis for L²(ℝ).

From definitions (2) and (3) above, we can see that each subspace is defined by an integer power of 2. These integer subspaces form a nested sequence of subspaces of L²(ℝ) known as a multiresolution analysis (approximation), which is the subject of the next chapter.


2. Multiresolution Approximations

The construction of the fast wavelet transform (FWT) begins by splitting L²(ℝ) into a sequence (V_j), j ∈ ℤ, of closed subspaces, each of which is spanned by an orthonormal basis of translates of a single function

φ_{j,k}(x) = 2^{j/2} φ(2^j x − k),  k ∈ ℤ,  (2.1)

such that the following properties hold [14]:

V_j ⊂ V_{j+1},  ∀ j ∈ ℤ;
∪_{j∈ℤ} V_j is dense in L²(ℝ) and ∩_{j∈ℤ} V_j = {0};
f(x) ∈ V_j ⟺ f(2x) ∈ V_{j+1};
V₀ has an orthonormal basis of translates φ(x − k),  ∀ k ∈ ℤ.

Given the properties for the basis of V_j above, we can define an approximation of a function f(x) ∈ L²(ℝ), at the resolution 2^j, as the orthogonal projection of f(x) onto V_j. This is an orthogonal projection because each basis function involved in the approximation of f(x) is orthogonal to every other. The construction of the orthogonal projection above begins by finding a unique function φ(x) ∈ L²(ℝ). A wavelet basis, related to φ(x), then evolves from the additional information of an approximation at a resolution 2^{j+1} compared with the resolution 2^j. This additional information is just the projection of f onto the orthogonal complement of V_j in V_{j+1}. That is, V_{j+1} = V_j ⊕ W_j, where W_j is the space defined by the orthogonal complement of V_j in V_{j+1}. The space W_j will be


spanned by the wavelet orthonormal basis defined by

ψ_{j,k}(x) = 2^{j/2} ψ(2^j x − k),  k ∈ ℤ.  (2.2)

We can summarize the relationship of these subspaces at every level by V_{j+1} = V_j ⊕ W_j. To understand the projection of f onto any of the particular subspaces, let's consider the simple Haar basis defined by the scaling function

φ(x) = 1 if 0 ≤ x < 1, and 0 otherwise,

with the corresponding wavelet

ψ(x) = 1 if 0 ≤ x < 1/2, −1 if 1/2 ≤ x < 1, and 0 otherwise.

Figure (2.1) shows the Haar scaling function φ(x) and corresponding wavelet ψ(x). This is one of the only wavelet functions that can be described explicitly. The wavelets that we will see throughout the rest of this work are recursively defined. As such, they cannot be viewed in the usual way. In later chapters we will see how to view these recursive functions.

Figure 2.1: The Haar scaling function φ (top) and the Haar wavelet ψ (bottom).

If we consider the projection of a function f onto the Haar basis at two resolutions, we would get figure (2.2).
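Because the Haar pair is the rare case with a closed form, its orthonormality can be checked by direct integration. A small illustrative sketch (Python/NumPy; not from the thesis, whose code is Matlab in appendix A):

```python
import numpy as np

def phi(x):
    # Haar scaling function: 1 on [0, 1), 0 otherwise
    return np.where((x >= 0) & (x < 1), 1.0, 0.0)

def psi(x):
    # Haar wavelet: +1 on [0, 1/2), -1 on [1/2, 1), 0 otherwise
    return (np.where((x >= 0) & (x < 0.5), 1.0, 0.0)
            - np.where((x >= 0.5) & (x < 1), 1.0, 0.0))

x = np.linspace(-1.0, 2.0, 300001)
dx = x[1] - x[0]
inner = float(np.sum(phi(x) * psi(x)) * dx)   # <phi, psi>, should vanish
norm_phi = float(np.sum(phi(x) ** 2) * dx)    # ||phi||^2, should be 1
norm_psi = float(np.sum(psi(x) ** 2) * dx)    # ||psi||^2, should be 1
```

The inner product ⟨φ, ψ⟩ comes out zero and both norms come out 1, as the orthonormality of the basis requires.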


Figure 2.2: Some f(x) and its projection onto V₀ and V₁, spanned by two resolutions of the Haar basis.

In figure (2.2), f is projected by the scaling function φ(x) onto V₀. But we know that V₀ ⊂ V₁; that is, V₁ is a higher resolution than V₀. This explains why the projection of f onto V₁ is more accurate than the projection onto V₀. It is also important to realize that since V₀ ⊂ V₁, the projection of f onto V₀ can exist in V₁, but the projection of f onto V₁ cannot exist in V₀. We briefly introduced the scaling function and restrictions on the coefficients {c_n} in the above example. We will now develop restrictions on the wavelet and scaling function coefficients more formally.


2.1 Wavelet and Scaling Function Coefficients

With V_j ⊂ V_{j+1} and φ(x) ∈ V_j, φ(2x) ∈ V_{j+1}, it follows that for some set of coefficients {c_n}, φ(x) can be written as a linear combination of φ(2x):

φ(x) = Σ_n c_n φ(2x − n),  x ∈ ℝ.  (2.3)

Alternatively, if we consider equation (2.3) written as a linear combination of φ(x) in the scaled form defined in equation (2.1), the recursion for φ(x) can be written in terms of a new set of coefficients {h_n} as

φ(x) = √2 Σ_n h_n φ(2x − n),  x ∈ ℝ,  (2.4)

so that h_n = c_n/√2. To normalize φ(x) we require

∫_{−∞}^{∞} φ(x) dx = 1,

which leads to the condition Σ_n h_n = √2.


Now that we have defined φ(x), ψ(x) can be expressed so that it is orthogonal to φ(x), namely

ψ(x) = √2 Σ_n g_n φ(2x − n),  x ∈ ℝ.  (2.5)

To make ψ(x) orthogonal to φ(x), we require g_n = (−1)^n h_{1−n}, so that the coefficient set {g_n} is orthogonal to {h_n}, i.e., h·g = 0. From equation (1.3) of the introduction, we required that the zero moment, or the mean, of ψ(x) equal zero (vanish). From this condition, we can derive the following requirement for the filter coefficients {g_n}:

0 = ∫_{−∞}^{∞} ψ(x) dx = √2 Σ_n g_n ∫_{−∞}^{∞} φ(2x − n) dx,

which implies Σ_n g_n = 0.

2.2 Orthonormality of Compactly Supported Wavelets

At this point, we have defined two basis functions: φ(x) and ψ(x). The problem now becomes how to pick the coefficients {h_n} and {g_n} to ensure that the functions φ(x) and ψ(x) form orthonormal bases. To answer this question, we will turn to the Fourier transform. The spectral representation of these functions greatly simplifies the analysis of orthonormality. To construct the spectral representation of these basis functions, we start with the multiresolution analysis defined by the filter coefficients {h_n}.
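These coefficient conditions can be verified numerically once a concrete filter is in hand. The sketch below (Python/NumPy, illustrative only) uses the Daubechies N=2 coefficients derived later in section (3.2); for a finite filter of length L, the relation g_n = (−1)^n h_{1−n} is reindexed here to the convenience form g_k = (−1)^k h_{L−1−k}, an assumed convention that preserves the orthogonality and zero-sum conditions:

```python
import numpy as np

# Daubechies N=2 low-pass coefficients (as listed in section 3.2)
h = np.array([0.48296291314453, 0.83651630373781,
              0.22414386804201, -0.12940952255126])

# finite-support reindexing of g_n = (-1)^n h_{1-n}:
# reverse the filter and alternate signs (assumed convention, see lead-in)
L = len(h)
g = np.array([(-1.0) ** k * h[L - 1 - k] for k in range(L)])

sum_h = float(h.sum())         # should equal sqrt(2)
sum_g = float(g.sum())         # zero mean for the wavelet filter
dot_hg = float(np.dot(h, g))   # orthogonality of {h_n} and {g_n}
```

The three printed quantities come out √2, 0, and 0 (to rounding), matching the conditions Σ h_n = √2, Σ g_n = 0, and h·g = 0 derived above.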


Using the following properties of the Fourier transform,

f(at) ⟷ (1/|a|) f̂(ξ/a)  (scaling)
f(t − t₀) ⟷ e^{−iξt₀} f̂(ξ)  (delay)

with a = 2 and t₀ = n, the Fourier transform of (2.4) is

φ̂(ξ) = (1/√2) Σ_n h_n e^{−inξ/2} φ̂(ξ/2).

As such, we can define a transfer function m₀ so that

φ̂(2ξ) = m₀(ξ) φ̂(ξ),  (2.6)

where

m₀(ξ) = (1/√2) Σ_n h_n e^{−inξ}.  (2.7)

The transfer function is 2π-periodic in L²([0,2π]). As we will see later, the construction of m₀ will yield a low pass filter corresponding to the scaling function φ(x), and a complementary high pass filter corresponding to ψ(x). This low pass/high pass relationship makes sense because ψ(x) evolved from the additional information of an approximation at a resolution 2^{j+1} compared with the resolution 2^j. The additional information can be looked at as a high pass operation, namely the change between two resolutions. As a function of time, φ(x) and ψ(x) will provide analysis through convolution operations. Because convolution in the time domain is equivalent to multiplication in the spectral domain, the convolution operations will be equivalent to multiplication by m₀ and a complementary


filter m₁, which will be developed later. Figure (2.3) shows the relationship of the complementary high pass/low pass 2π-periodic filters m₀ and m₁.

Figure 2.3: Low pass filter M0 and complementary high pass filter M1 in the frequency domain.

Finally, what are the necessary and sufficient conditions that will ensure that {φ_{j,k}} forms an orthonormal basis? To begin this construction, consider the definition of orthogonality in L²(ℝ). If the scaling function φ(x) forms an orthonormal basis, then

∫_{−∞}^{∞} φ(x) φ(x − k) dx = 0 a.e. (k ≠ 0),

or equivalently

∫_{−∞}^{∞} φ(x) φ(x − k) dx = δ_{k,0}.  (2.8)


To take the Fourier transform of (2.8), we turn to Parseval's theorem.

Theorem 2.1 (Parseval): ∫ x(t) y*(t) dt = ∫ X(f) Y*(f) df, where X(f) and Y(f) are the Fourier transforms of x(t) and y(t), respectively.

By this theorem, the integrand becomes

φ̂(ξ) (e^{−ikξ} φ̂(ξ))* = |φ̂(ξ)|² e^{ikξ}.

By the delay property of the Fourier transform, the time shift of (−k) results in the phase shift term e^{ikξ}. The Fourier transform of the right side is 1, by the property that the Fourier transform of the delta function equals 1. Consequently, taking the Fourier transform of (2.8) yields

(1/2π) ∫_{−∞}^{∞} |φ̂(ξ)|² e^{ikξ} dξ = δ_{k,0},

or

(1/2π) ∫₀^{2π} Σ_{l∈ℤ} |φ̂(ξ + 2πl)|² e^{ikξ} dξ = δ_{k,0}.

It follows that the 2π-periodic function Σ_l |φ̂(ξ + 2πl)|² has only a constant Fourier coefficient. Evaluating the integral, we see that

Σ_{l∈ℤ} |φ̂(ξ + 2πl)|² = 1.


Now, with reference to (2.6) and Σ_l |φ̂(ξ + 2πl)|² = 1, we can write

1 = Σ_{l∈ℤ} |φ̂(2ξ + 2πl)|² = Σ_{l∈ℤ} |m₀(ξ + πl)|² |φ̂(ξ + πl)|².

Since m₀ is 2π-periodic, we can split the sum into odd and even l, which gives us

1 = |m₀(ξ)|² Σ_l |φ̂(ξ + 2πl)|² + |m₀(ξ + π)|² Σ_l |φ̂(ξ + π + 2πl)|².

Again, because Σ_l |φ̂(ξ + 2πl)|² = 1, we have a necessary condition on m₀, given that {φ(x − k)} is orthonormal:

|m₀(ξ)|² + |m₀(ξ + π)|² = 1.  (2.9)

2.3 Bi-Orthogonal Decomposition

As stated earlier, the space W_j spanned by ψ_{j,k} evolved from the additional information of the approximation at the resolution 2^{j+1} compared with the resolution 2^j. In terms of the spaces spanned by φ_{j,k} and ψ_{j,k}, this becomes W_j ⊕ V_j = V_{j+1}, or V_j = V_{j−1} ⊕ W_{j−1}. This implies that any f(x) ∈ V_j can be decomposed in terms of the biorthogonal spaces at resolution (j − 1). With respect to the Fourier transform of f(x), this decomposition can be stated as follows.
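Condition (2.9) can be spot-checked numerically for a concrete orthonormal filter. A sketch (Python/NumPy, illustrative only) using the Daubechies N=2 coefficients from section (3.2):

```python
import numpy as np

h = np.array([0.48296291314453, 0.83651630373781,
              0.22414386804201, -0.12940952255126])
n = np.arange(len(h))

def m0(xi):
    # m0(xi) = (1/sqrt(2)) * sum_n h_n e^{-i n xi}, as in (2.7)
    return np.sum(h * np.exp(-1j * n * xi)) / np.sqrt(2.0)

# check |m0(xi)|^2 + |m0(xi + pi)|^2 = 1 on a grid over [0, 2*pi)
xi_grid = np.linspace(0.0, 2.0 * np.pi, 64, endpoint=False)
residual = max(abs(abs(m0(t)) ** 2 + abs(m0(t + np.pi)) ** 2 - 1.0)
               for t in xi_grid)
```

The residual is zero to rounding error at every grid point, so this filter satisfies (2.9).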


Since {φ(x − k)} forms a basis, we can express f(x) as a linear combination of φ(x − k). The Fourier transform of f(x) is

f̂(ξ) = a(ξ) φ̂(ξ),

where a(ξ) = Σ_k a_k e^{−ikξ} is a 2π-periodic function. For the purpose of this decomposition we define

H(ξ) = Σ_n h_n e^{−inξ}  and  G(ξ) = Σ_n g_n e^{−inξ}

for the coefficients {h_n} and {g_n} defined previously (we use H(ξ) and G(ξ) instead of m₀(ξ) and m₁(ξ) here for ease of discussion). If we recall that V₀ = V₋₁ ⊕ W₋₁, we can decompose f̂(ξ) into the next lower resolution, for some 2π-periodic functions a₋₁ and b₋₁, as

f̂(ξ) = a₋₁(2ξ) φ̂(2ξ) + b₋₁(2ξ) ψ̂(2ξ).

Substituting φ̂(2ξ) = (1/√2) H(ξ) φ̂(ξ) and ψ̂(2ξ) = (1/√2) G(ξ) φ̂(ξ),


we get

a(ξ) = (1/√2) [a₋₁(2ξ) H(ξ) + b₋₁(2ξ) G(ξ)].

From this decomposition, a paraunitary linear system can be constructed, through which a necessary condition for the operator of this linear system can be realized. From a(ξ) = (1/√2)[a₋₁(2ξ)H(ξ) + b₋₁(2ξ)G(ξ)], the following linear system can be constructed:

[ a(ξ)   ]           [ H(ξ)    G(ξ)   ] [ a₋₁(2ξ) ]
[ a(ξ+π) ] = (1/√2)  [ H(ξ+π)  G(ξ+π) ] [ b₋₁(2ξ) ]

Here we need to show that the matrix

U = (1/√2) [ H(ξ)    G(ξ)   ]
           [ H(ξ+π)  G(ξ+π) ]

is unitary (U*U = UU* = I), so that U⁻¹ = U*. This is a necessary condition so that if U is the transform operator for the fast wavelet transform, U⁻¹ is just U*. If U is unitary, then


UU* = (1/2) [ H(ξ)    G(ξ)   ] [ H*(ξ)  H*(ξ+π) ]
            [ H(ξ+π)  G(ξ+π) ] [ G*(ξ)  G*(ξ+π) ]

Consequently, the system is unitary if

|H(ξ)|² + |G(ξ)|² = 2

and

H(ξ) H*(ξ+π) + G(ξ) G*(ξ+π) = 0.

Actually, all the diagonal elements need to be 1 and the off-diagonal elements need to be 0 so that U is unitary. However, only the elements listed are necessary due to the 2π-periodicity and symmetry of H(ξ) and G(ξ).

2.4 z-Transform and Filter Coefficients

At this point, from equation (2.7), we can define H(ξ) in terms of m₀(ξ) as H(ξ) = √2 m₀(ξ). That is, H(ξ) is the Fourier transform of the filter coefficients {h_n}. Similarly, G(ξ) is defined as the Fourier transform of the filter coefficients {g_n}. Therefore, we can look to the definition of the filter coefficients {g_n} provided in (2.5) where, because ψ(x) is defined as orthogonal to φ(x), g_n = (−1)^n h_{1−n}. From this definition, we can use the z-transform to define a suitable solution for G(ξ). First consider that ψ(x) is, in reality, defined in terms of a finite sequence, so that equation (2.5) becomes


ψ(x) = √2 Σ_{k=0}^{N−1} g_k φ(2x − k).  (2.10)


H(ξ) = √2 m₀(ξ),  G(ξ) = √2 m₁(ξ),

where

m₀(ξ) = (1/√2) Σ_k h_k e^{−ikξ},  m₁(ξ) = (1/√2) Σ_k g_k e^{−ikξ}.

Putting this all together, if we define m₀(ξ) such that H(ξ) and G(ξ) form a paraunitary filter pair (i.e., such that the linear transformation matrix U constructed of H(ξ) and G(ξ) is unitary), then the IFFT (inverse fast Fourier transform) can be used to generate the sequence of filter coefficients {h_n} and, therefore, {g_n}. The following construction and examples will hopefully clarify this entire process.


3. Daubechies' Construction

The construction of the Daubechies orthonormal wavelet basis ([1], [2]) begins with the concept of regularity and vanishing moments. Now that we have defined ψ(x) and its relationship to the Fourier transform of the filter coefficients {g_k},

m₁(ξ) = (1/√2) Σ_k g_k e^{−ikξ},

from (2.6) the Fourier transform of (2.10) becomes

ψ̂(2ξ) = m₁(ξ) φ̂(ξ).  (3.1)

Equation (1.3) puts the restriction on ψ(t) that the first m+1 moments vanish. We must show that this restriction implies mth order differentiability of ψ̂. This differentiability will lead to a unique factorization of m₀, which will be needed in the construction of the Daubechies wavelet basis. First, consider the Fourier transform of ψ(t),

ψ̂(ξ) = ∫_{−∞}^{∞} ψ(t) e^{−iξt} dt.

The first derivative with respect to ξ yields

ψ̂′(ξ) = ∫_{−∞}^{∞} (−it) ψ(t) e^{−iξt} dt.

Continued differentiation within the integral gives


ψ̂^{(j)}(ξ) = ∫_{−∞}^{∞} (−it)^j ψ(t) e^{−iξt} dt,

so that

ψ̂^{(j)}(0) = (−i)^j ∫_{−∞}^{∞} t^j ψ(t) dt.  (3.2)

Now, from (1.3),

∫_{−∞}^{∞} t^j ψ(t) dt = 0,  j = 0, 1, ..., m.

From this condition, (3.2) becomes

ψ̂^{(j)}(0) = 0,  j = 0, 1, ..., m.  (3.3)

But we know that φ̂(0) ≠ 0. Therefore, with (3.3) showing mth order differentiability, with vanishing derivatives, at ξ = 0, we have from (3.1) that

m₁^{(j)}(0) = 0,  j = 0, 1, ..., m.

This implies that m₁ has a zero of order m+1 at ξ = 0 and, through the relation g_n = (−1)^n h_{1−n}, that m₀ has a zero of order m+1 at ξ = π. At this point, we factor m₀(ξ) into a polynomial that


is the multiple zero at ξ = π and another polynomial Q, such that m₀ = R·Q (where R is of order N = m+1 and Q is of order m). To construct R, we realize that when normalized, m₀(0) = (1)^N Q(0) and m₀(π) = (0)^N Q(π). This gives the following factorization of m₀(ξ):

m₀(ξ) = ((1 + e^{−iξ})/2)^N Q(ξ).  (3.4)

This factorization is important because it allows for the development of the polynomial Q in z = e^{−iξ}, from which the polynomial m₀ in z can be realized directly from the factorization of (3.4). We now have everything necessary to develop a trigonometric polynomial for

m₀(ξ) = (1/√2) Σ_k h_k e^{−ikξ}

such that |m₀(ξ)|² + |m₀(ξ + π)|² = 1. First, consider the factorization

|m₀(ξ)|² = |(1 + e^{−iξ})/2|^{2N} |Q(ξ)|².

From this representation, it follows that

|(1 + e^{−iξ})/2|² = cos²(ξ/2).

With the coefficients of |Q(ξ)|² being real, it can be written as a real polynomial in cos ξ, or equivalently as a polynomial P in sin²(ξ/2). Letting y = sin²(ξ/2) = (1 − cos ξ)/2, we can write |Q(ξ)|² as a polynomial P(y) such that, from (2.9), as a polynomial in y = sin²(ξ/2), P must satisfy

(1 − y)^N P(y) + y^N P(1 − y) = 1.  (3.5)

To solve for P, I. Daubechies [2, p. 169] introduces a rather obscure theorem by Bezout that succinctly states the unique existence and order of P given (3.5). The order of P is important since P will ultimately be written as a truncated Taylor expansion.

Theorem 3.1 (Bezout): If p₁, p₂ are two polynomials of degree n₁, n₂, respectively, with no common zeros, then there exist unique polynomials q₁, q₂ of degree n₂ − 1, n₁ − 1, respectively, so that

p₁(y) q₁(y) + p₂(y) q₂(y) = 1.

By this theorem, there exist unique polynomials q₁, q₂ of degree ≤ N − 1 such that

(1 − y)^N q₁(y) + y^N q₂(y) = 1.

Now substituting y for (1 − y),

y^N q₁(1 − y) + (1 − y)^N q₂(1 − y) = 1.


Then, by the uniqueness in Bezout's theorem (3.1), q₂(y) = q₁(1 − y), so that

(1 − y)^N q₁(y) + y^N q₁(1 − y) = 1.

Solving for q₁(y) we get

q₁(y) = (1 − y)^{−N} [1 − y^N q₁(1 − y)].

Since q₁ is of degree ≤ N − 1, (1 − y)^{−N} can be written as the first N terms of its Taylor expansion plus a residual as

(1 − y)^{−N} = Σ_{k=0}^{N−1} C(N+k−1, k) y^k + O(y^N).

Substituting the Taylor expansion into q₁(y) gives

q₁(y) = Σ_{k=0}^{N−1} C(N+k−1, k) y^k + O(y^N) − y^N q₁(1 − y)(1 − y)^{−N}.

Again, since q₁(y) is of degree ≤ N − 1, q₁(y) = P(y) becomes

P(y) = Σ_{k=0}^{N−1} C(N+k−1, k) y^k.

We can now return to the previous substitution, which gives us P as a polynomial in sin²(ξ/2) instead of y:
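That this truncated expansion really solves (3.5) is easy to confirm numerically. A short stdlib-Python sketch (using C(N+k−1, k) for the binomial coefficient):

```python
from math import comb

def P(y, N):
    # Daubechies' polynomial: P(y) = sum_{k=0}^{N-1} C(N+k-1, k) y^k
    return sum(comb(N + k - 1, k) * y**k for k in range(N))

def bezout_residual(y, N):
    # residual of the identity (1 - y)^N P(y) + y^N P(1 - y) = 1
    return (1.0 - y)**N * P(y, N) + y**N * P(1.0 - y, N) - 1.0

# check the identity on a grid of y values for N = 2 and N = 3
max_res = max(abs(bezout_residual(y / 10.0, N))
              for N in (2, 3) for y in range(11))
```

For N = 2 this is the identity (1−y)²(1+2y) + y²(1+2(1−y)) = 1, which expands to 1 exactly; the residual is zero to rounding for every grid point.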


P(sin²(ξ/2)) = Σ_{k=0}^{N−1} C(N+k−1, k) (sin²(ξ/2))^k.

With this representation, we can write the transfer function for |m₀(ξ)|² as

|m₀(ξ)|² = (cos²(ξ/2))^N Σ_{k=0}^{N−1} C(N+k−1, k) (sin²(ξ/2))^k.

What we are really interested in, however, is m₀(ξ), so that we have the necessary polynomial Q(ξ), and hence m₀ in z = e^{−iξ}, to obtain the filter coefficients {h_n} via the inverse Fourier transform. To extract Q(ξ) from |Q(ξ)|², we realize that |Q(ξ)|² = Q(ξ)Q*(ξ). This yields a spectral factorization for the roots of |Q(ξ)|², from which the "square root" of |Q(ξ)|² can be extracted [2, pp. 172-173].

3.1 Spectral Factorization of |Q(ξ)|²

The process of extracting the square root from |Q(ξ)|² begins by considering the spectrum of the roots of the polynomial. Realizing that this polynomial is a real product of conjugate factors, |Q(ξ)|² = Q(ξ)Q*(ξ), the resultant polynomial can be written as a real polynomial in cos ξ. Stated as a polynomial in x = cos ξ, it can be factored as

P(x) = a Π_{j=1}^{m} (x − c_j),

where the c_j appear as complex conjugate pairs or real singlets. Because these roots are a product of conjugate factors, we can extract a polynomial in z = e^{−iξ} by writing a new polynomial as a product of averaged factors in z, with |z| = 1,


|Q(ξ)|² = a Π_{j=1}^{m} ((1/2)z − c_j + (1/2)z^{−1}),  m = N − 1.

That is, for each linear term of the polynomial we will get a quadratic that averages the roots in z. The roots (r_j) of this new polynomial,

P_A(z) = a Π_{j=1}^{m} (1/2)(z² − 2c_j z + 1),

form the spectrum of P_A(z). By the quadratic formula, these roots are of the form

r_j = c_j ± √(c_j² − 1).

By spectral factorization, Q(z) can then be extracted by considering the roots of P_A(z) outside the unit circle:

Q(z) = Π_j (z − r_j) / Π_j (1 − r_j).

We now have all the necessary roots to create the transfer functions H(ξ) and G(ξ), where

H(ξ) = √2 ((1 + e^{−iξ})/2)^N Q(e^{−iξ})

and, because g_n = (−1)^n h_{1−n},

G(ξ) = −e^{−iξ} H*(ξ + π).


3.2 Daubechies' Examples for N=2 and N=3

The following examples use spectral factorization to construct the N=2 and N=3 Daubechies polynomials. From these polynomials, H(ξ) and G(ξ) can be created, from which the corresponding filter coefficients {h_n} and {g_n} can be realized via the inverse Fourier transform. We will start the examples with the Daubechies N=2 case.

N=2: Let x = cos ξ, so that sin²(ξ/2) = (1 − x)/2, and

P(sin²(ξ/2)) = Σ_{k=0}^{N−1} C(N+k−1, k) (sin²(ξ/2))^k = 1 + 2 sin²(ξ/2) = 1 + 2(1 − x)/2 = (2 − x),

so c₁ = 2. Now let m = N − 1 = 1 and place the root c₁ into P_A(z):

P_A(z) = (1/2)(z² − 2c₁z + 1) = (1/2)(z² − 4z + 1).

The roots of P_A(z) are r₁ = 3.7321 and r₂ = 0.2679. By spectral factorization we take the root outside the unit circle; in this case, that is r₁. Consequently,

Q(z) = (z − 3.7321)/(1 − 3.7321).

The (1 − r₁) factor simply normalizes Q(z). This factor can be realized simply by evaluating Q at z = 1, that is, Q(1). Now that we have Q(z), m₀(ξ) becomes


m₀(ξ) = ((1 + e^{−iξ})/2)² (e^{−iξ} − r₁)/(1 − r₁).

Finally, H(ξ) = √2 m₀(ξ) must be normalized so that Σ_k h_k = √2. This gives

H(ξ) = √2 ((1 + e^{−iξ})/2)² (e^{−iξ} − r₁)/(1 − r₁).

Now, since H(ξ) is 2π-periodic, we can take the inverse discrete Fourier transform (using an IFFT) of H(ξ) to obtain the Daubechies coefficients {h_n}. Using the Matlab (A.1) code in appendix A, the coefficients {h_n} become

h = 0.48296291314453
    0.83651630373781
    0.22414386804201
   -0.12940952255126

After the coefficients are created, Matlab (A.1) tests them to see that their sum does, in fact, equal the square root of 2. Matlab (A.1) further tests that the transfer functions H(ξ) and G(ξ) do create a unitary operator, as discussed in section (2.3).
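The appendix Matlab code is not reproduced here, but the same pipeline (sample H(ξ), take an IFFT, read off {h_n}) can be sketched in Python/NumPy. Below, r₁ = 2 + √3 is the exact value of the root 3.7321 above; everything else follows the formulas in this section:

```python
import numpy as np

r1 = 2.0 + np.sqrt(3.0)   # root of z^2 - 4z + 1 outside the unit circle

def H(xi):
    # H(xi) = sqrt(2) * ((1 + e^{-i xi})/2)^2 * (e^{-i xi} - r1)/(1 - r1)
    z = np.exp(-1j * xi)
    return np.sqrt(2.0) * ((1.0 + z) / 2.0) ** 2 * (z - r1) / (1.0 - r1)

# sample H on an 8-point grid; since H(xi) = sum_k h_k e^{-ik xi},
# these samples are exactly the DFT of the coefficient sequence
M = 8
xi = 2.0 * np.pi * np.arange(M) / M
coeffs = np.fft.ifft(H(xi))
h = coeffs[:4].real      # the four Daubechies N=2 coefficients
```

The first four IFFT outputs reproduce the coefficients listed above to machine precision, and the remaining four vanish, confirming that H(ξ) is a degree-3 polynomial in e^{−iξ}.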


As a contrast for these examples, we will show a plot of H(ξ) for N=2 together with N=3 after the next example. We now show the Daubechies example for N=3.

N=3: Let x = cos ξ and

P(sin²(ξ/2)) = Σ_{k=0}^{N−1} C(N+k−1, k) (sin²(ξ/2))^k = 1 + 3 sin²(ξ/2) + 6 sin⁴(ξ/2) = 1 + 3(1 − x)/2 + 6((1 − x)/2)² = (3/2)(x² − 3x + 8/3),

so c₁, c̄₁ = 1.5 ± 0.6455j. Now let m = N − 1 = 2 and place the roots c₁, c̄₁ into P_A(z):

P_A(z) = (1/4)(z² − 2c₁z + 1)(z² − 2c̄₁z + 1).

The roots of P_A(z) are r₁, r̄₁ = 2.7127 ± 1.4439j and r₂, r̄₂ = 0.2873 ∓ 0.1529j. Again, by spectral factorization we take the roots outside the unit circle; in this case, those are r₁ and r̄₁. Consequently,

Q(z) = (z − r₁)(z − r̄₁) / ((1 − r₁)(1 − r̄₁)).

Again, the denominator simply normalizes Q(z) at z = 1.


From this we get

H(ξ) = √2 ((1 + e^{−iξ})/2)³ (e^{−iξ} − r₁)(e^{−iξ} − r̄₁) / ((1 − r₁)(1 − r̄₁)).

Using Matlab (A.2) we again take the IFFT of the 2π-periodic function H(ξ) to obtain the six Daubechies coefficients {h_n}:

h = 0.33267055295008
    0.80689150931109
    0.45987750211849
   -0.13501102001025
   -0.08544127388203
    0.03522629188571

The rest of Matlab (A.2) checks, again, that the sum of the coefficients is the square root of 2, and that the linear operator comprised of H(ξ) and G(ξ) is unitary.

Figure 3.1: Transfer functions H₂ (solid) and H₃ (dashed), plotted over 30 points from 0 to 2π.


Figure (3.1) shows H(ξ) for N=2 and N=3 as a comparison of the higher degree of vanishing moments of H₃ versus H₂. Another way of looking at this is that H₃ has a steeper low-pass "roll-off" than H₂.
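The low-pass behavior shown in figure (3.1) can be checked at the endpoints: m₀(0) = 1 (DC passes) and m₀(π) = 0 (the Nyquist frequency is killed). A sketch (Python/NumPy, illustrative only) using the N=2 and N=3 coefficient sets above:

```python
import numpy as np

h2 = np.array([0.48296291314453, 0.83651630373781,
               0.22414386804201, -0.12940952255126])
h3 = np.array([0.33267055295008, 0.80689150931109, 0.45987750211849,
               -0.13501102001025, -0.08544127388203, 0.03522629188571])

def m0_mag(h, xi):
    # |m0(xi)| = |sum_n h_n e^{-i n xi}| / sqrt(2)
    n = np.arange(len(h))
    return float(abs(np.sum(h * np.exp(-1j * n * xi))) / np.sqrt(2.0))

dc2, ny2 = m0_mag(h2, 0.0), m0_mag(h2, np.pi)   # N=2: DC and Nyquist
dc3, ny3 = m0_mag(h3, 0.0), m0_mag(h3, np.pi)   # N=3: DC and Nyquist
```

Both filters give |m₀(0)| = 1 and |m₀(π)| = 0; the steeper roll-off of H₃ between these endpoints is what the plot displays.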


4. Fast Wavelet Transform

With the coefficient set {h_n} defined, we can return to the multiresolution analysis defined previously by (2.4) and (2.5).


Figure 4.1: Recursive operations of the FWT algorithm, where H is the low pass operator and G the high pass operator.

Figure (4.1) shows the operation of the FWT algorithm with respect to the spectral operators H and G. The bar with an 'x' in it represents a vector in the spectral domain. After each operation, a vector of the same length is created. The left half of the vector is the result of the low pass operator H operating on 'x' (Hx) and the right half is the result of the high pass operator G operating on 'x' (Gx). Consequently, the right half of the new vector is the change between Hx and the original vector 'x'. This follows the multiresolution analysis relationship V₀ = V₋₁ ⊕ W₋₁ discussed in section (2). That is, the vector x that lives in V₀ can be represented as the sum of two vectors at the next lower resolution. At each step in figure (4.1), equation (4.1) shows us that there is a decimation in frequency of 1/2. Now, since this is a wavelet basis operation, for ψ the result needs to follow the operator G at each resolution. In other words, we need to leave what has been operated on by G and continue to the next resolution, which is a vector of half the size. It also follows that, since G operates on the scaling function, or father wavelet, Hx is what is operated on at the next resolution. Remember, H relates to φ(x) and G to ψ(x). This algorithm continues until the vector at the current resolution can no longer be decimated. Since multiplication in the spectral domain is equivalent to convolution in the time domain, we can


create a discrete convolution operator that is equivalent to the decimation in frequency operator derived from the recursions (2.4) and (2.5). With a decimation in frequency of 2, every other convolution is discarded, yielding discrete convolution operators of the form

H = [ h₀ h₁ h₂ h₃                ]
    [       h₀ h₁ h₂ h₃          ]
    [             h₀ h₁ h₂ h₃ ...]

G = [ h_n -h_{n-1} h_{n-2} -h_{n-3}                                  ]
    [              h_n     -h_{n-1} h_{n-2} -h_{n-3}                 ]
    [                               h_n     -h_{n-1} h_{n-2} -h_{n-3} ...]

For the case where N=2, the linear operation Ax = b becomes


     [ h₀  h₁  h₂  h₃  0   0   0   0  ]
     [ 0   0   h₀  h₁  h₂  h₃  0   0  ]  → Hx
     [ 0   0   0   0   h₀  h₁  h₂  h₃ ]
Ax = [ h₂  h₃  0   0   0   0   h₀  h₁ ] x
     [ h₃ -h₂  h₁ -h₀  0   0   0   0  ]
     [ 0   0   h₃ -h₂  h₁ -h₀  0   0  ]  → Gx
     [ 0   0   0   0   h₃ -h₂  h₁ -h₀ ]
     [ h₁ -h₀  0   0   0   0   h₃ -h₂ ]

This is the Fast Wavelet Transform (FWT). The Inverse Fast Wavelet Transform (IFWT) operator A⁻¹ is just the Hermitian transpose, since A is unitary. Actually, since A has real entries in the Daubechies case, A⁻¹ is just the transpose of A, so that x = Aᵀ(Ax). With the FWT operator defined, an algorithm based on this operator can be developed. Also, we can take our first look at the time domain representation of the Daubechies functions. Until this point, ψ(x) has existed only as a recursive function which could not be displayed.
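The 8×8 operator above can be built and tested directly. A sketch (Python/NumPy, with the periodic wrap shown in the matrix; not the thesis's Matlab code):

```python
import numpy as np

h = np.array([0.48296291314453, 0.83651630373781,
              0.22414386804201, -0.12940952255126])
g = np.array([h[3], -h[2], h[1], -h[0]])   # the high-pass rows above

def fwt_matrix(M):
    # one FWT level as an M x M matrix; rows shift by 2 and wrap periodically
    A = np.zeros((M, M))
    for i in range(M // 2):
        for k in range(4):
            A[i, (2 * i + k) % M] = h[k]            # Hx rows
            A[M // 2 + i, (2 * i + k) % M] = g[k]   # Gx rows
    return A

A = fwt_matrix(8)
x = np.arange(8, dtype=float)
y = A @ x            # one level of the FWT
x_back = A.T @ y     # the inverse is just the transpose
```

A·Aᵀ comes out as the identity to rounding error, so Aᵀ(Ax) recovers x exactly, which is the unitarity (and, with real entries, orthogonality) claimed in the text.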


Suppose for a moment that we have the recursive FWT decomposition algorithm written. To view a basis function we look to another orthogonal basis decomposition algorithm, the FFT. Each element in the discrete spectral domain of the FFT corresponds to a discrete frequency in the time domain (a basis function). Similarly, if we take a single discrete non-zero element in the decomposition domain of the FWT and take the IFWT, a basis function should appear in the discrete time domain. For example, in the Daubechies case where N=2, the vector e(5) (which is a 1 in the fifth position, zero elsewhere) put through a 1024-point IFWT gives the basis function ψ₂ in figure (4.2).

Figure 4.2: The mother wave ψ(x) for Daubechies' N=2.

Similarly, we can view the more regular basis function ψ₃ (figure (4.3)):


Figure 4.3: The mother wave ψ(x) for Daubechies' N=3.

Notice how the higher number of coefficients for N=3 versus N=2 makes ψ₃ more regular than ψ₂. This progression continues as the number of coefficients is increased. Figure (4.4) shows this clearly in the plot of ψ₁₀.


Figure 4.4: The mother wave ψ(x) for Daubechies' N=10.

Not only can we now view the mother wave ψ(x), but we can decompose some sample signals using the FWT. In viewing these examples, keep figure (4.1) and the discussion describing it in mind. The first examples step through the FWT one resolution at a time. Figure (4.5) shows the first level operation on a square pulse. Notice how the right side of the resultant vector is the change between the two resolutions.


Figure 4.5: Top: original signal. Bottom: first level FWT operation.

Figure (4.6) shows the next level of the FWT decomposition. Again, if we follow figure (4.1), the next change between resolutions is represented by the middle spikes, and the signal that is left is the almost square pulse on the left.


Figure 4.6: Top: original signal. Bottom: second level FWT operation.

If these operations are continued to the lowest resolution, we get figure (4.7): the full FWT decomposition of a square pulse. The important thing to realize here is that most of the energy is well localized. This is contrary to what a similar decomposition using the FFT would give, which is why wavelets were invented in the first place. Additionally, since each operation of the FWT is simply a sparse real matrix multiplication of an (N x M) matrix and an (M x 1) vector (where M is the size of the vector being transformed, and N is the size of the filter coefficient set), the complexity of the FWT algorithm is O(M). This compares to the FFT, which is O(M*log(M)). This is another attractive attribute of the FWT over the FFT.
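The O(M) bound follows from the geometric halving of the levels: M + M/2 + M/4 + ... + 4 < 2M. A quick check of this sum, using the same stopping length of 4 as the algorithm in section 4.1:

```python
# Sum the per-level costs of the FWT pyramid on a length-M vector.
# Each level touches only the current (halved) vector, so the total
# work is bounded by 2*M regardless of M.
M = 1024
costs = []
nn = M
while nn >= 4:        # same stopping rule as waveletn.m
    costs.append(nn)
    nn //= 2
total = sum(costs)
print(total, total < 2 * M)   # 2044 True
```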


Figure 4.7: Top: original signal. Bottom: full FWT decomposition.

One final signal decomposition that is rather interesting is that of a sine wave (figure (4.8)). This is a signal that the Fourier transform is very good at decomposing (it is just a single frequency in the spectral domain). The FWT, however, doesn't do well at localizing the energy of this signal. Not to worry! This is NOT the type of signal being targeted by the FWT.


Figure 4.8: Top: original signal. Bottom: full FWT decomposition.

Now that we have seen what the FWT algorithm can do, and how it is derived, we can detail the actual Matlab code that performs these operations.


4.1 The FWT Algorithm

Like the FFT, the FWT is a power-of-two algorithm. Each successive decomposition of the FWT operates on a vector half the size of the previous operation. This is due to the decimation in time of the multiresolution analysis defined by

    φ(x) = √2 Σₙ hₙ φ(2x−n)   and   ψ(x) = √2 Σₙ gₙ φ(2x−n).

Consequently, this power-of-two structure and the FWT operator A defined previously define the structure of the FWT algorithm. The core of this algorithm is structured correspondingly into two Matlab functions. The first function, "waveletn.m" (Matlab A.3, appendix A), calls the core operator function "daubn.m" (Matlab A.4, appendix A) for each resolution of either the transform or the inverse transform operation, depending on whether the variable "xform" is greater than zero or not, respectively. If "xform" is true, the operation continues until the length of the vector is no longer greater than or equal to four. The algorithm stops at four because no more decimation is possible. Each time through the "while" loop the vector is divided in half, corresponding to the multiresolution analysis decimation described above. If "xform" is false, then the inverse transform core operation is called recursively from the lowest resolution (nn=4) to the highest resolution defined by the original vector space size (n).

The function "daubn.m" is relatively straightforward. The transform operation follows the row vector multiplies defined by the operator A. The main thing to realize here, and in the inverse transform operation, is that these are convolution operations (this is the same as saying that they are filter operations) and, as such,


wrap-around of the filter coefficients {hn} and {gn} must be accounted for. In this algorithm, the wrap-around is handled by checking and compensating the pointer variable "jk" when it exceeds "n". If the transform operation follows row operations, then we would expect the inverse transform operation to follow column operations, and this is the case. The column operations continue for each row element at column indices "i" and "i+nh", where "nh" is simply half the vector length. Like the transform operation, the inner loop has two multiplies. The loops differ mainly in how the two multiplies are contained in one sum for the inverse transform operation and in two separate sums for the transform operation. Probably the easiest way to understand this algorithm is to follow the row/column operations of X = Ax and x = A⁻¹X.

4.2 The FWT and Image Compression

The need to efficiently store and transmit digitized images is becoming increasingly important. The idea is to reduce the amount of data necessary to define an image, thereby reducing storage and transmission bandwidth requirements. Compression of a digitized image takes two forms: lossy and lossless. Lossless data compression dates at least as far back as the 1970s, when Lempel and Ziv, and independently Huffman, developed string matching algorithms that remove the redundancy from a string of characters. More recently, Langdon and Rissanen of IBM Corp. developed a probability-based algorithm known as arithmetic coding [20]. These lossless algorithms operate recursively on the data set, removing the redundancy, until a certain level of entropy is reached. They are all lossless because the compressed sequence can be decompressed to obtain the exact original sequence.


Lossy data compression techniques also seek to reduce the size of the data set; however, they do not attempt to exactly reconstruct the original sequence when decompressed. This is where the science of lossy compression gets interesting. How much and what type of information can be lost, such that a reasonable facsimile of the original will exist upon decompression? Lossy techniques make sense only in situations such as image and audio compression because, as long as enough of the original content remains, the brain can discern what it is seeing or hearing. For example, a very detailed image of a house can be reduced to a stick drawing, and the brain can still realize the image content. This stick drawing example really gets to the crux of the image compression problem. That is, what cannot be lost in a lossy image compression algorithm is the edge information. This is what has brought localized wavelet bases to the forefront in image compression algorithms. Because of the localized time and frequency nature of the wavelet basis, the edge information in the transform domain is also localized. This is contrary to the classic Fourier basis, where discontinuities are distributed throughout the spectral domain. This is not to say that the Fourier transform cannot be used in image compression. In fact, the classic JPEG image compression algorithm is based on the real Fourier transform (cosine transform). Very recently, however, wavelet-based image compression algorithms have begun to overtake Fourier-based algorithms in a big way, with compression ratios on the order of 100:1 without loss of image quality. This is what the HARCC (Houston Advanced Research Centre) wavelet-based algorithm, released recently, boasts. HARCC is proprietary. As such, it will be a while before we know how these incredible ratios are obtained. There are two applications of the FWT in image compression, one being lossless and the other lossy.
In a lossless application, the FWT is used to localize essential details, so that an image that would not compress due to its randomness becomes compressible. In the lossy application, the FWT is performed on the


image. Less significant coefficients of the transformed image are discarded, leaving a highly compressed image. The decompression process then involves performing the IFWT with the reduced coefficient set. Since the larger coefficients retain the essential detail, an arbitrarily close approximation to the original image can be obtained as more coefficients are allowed.

4.3 The FWT Image Compression Algorithm

The FWT image compression algorithm is different from the complete FWT multiresolution decomposition discussed in section (4.1). In this algorithm, two dimensions must be considered. Each resolution must operate on both rows and columns of the image matrix. Accordingly, the algorithm performs just one decimation on each row vector. This separates the essential detail in the horizontal dimension. Next, each column vector is decimated in the same way, thus separating the essential detail in the vertical dimension. Matlab A.5 of Appendix A shows an example that takes an original 512 x 512 gray-scale image (matrix "I", Matlab A.5) and performs the above two-dimensional decomposition twice, creating two multiresolution decompositions. Each time, the subspace divides its dimension by two. The wavelet basis used in this example is the Daubechies' N=3, six-coefficient basis detailed in section (3.2). Figures A.1, A.2, A.3, A.4, A.5, and A.6 of Appendix A follow the code in Matlab A.5. Figure A.1 is the original image of a mandrill (baboon) contained in the matrix "I" of Matlab A.5. Figure A.2 shows the first two-dimensional decomposition of the mandrill image contained in matrix "I". Actually, the original image is 480 x 500 and, as such, has been padded with zeros to a power of 2 (512 x 512). This can be seen as bars on the bottom and side of each quadrant of figure A.2 (matrix "C" of Matlab A.5).
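Since the one-dimensional step is multiplication by the orthogonal operator A, one two-dimensional level (rows, then columns) is the matrix sandwich C = A·I·Aᵀ, and the inverse is Aᵀ·C·A. A toy NumPy sketch (the helper name fwt_operator, the Daubechies N=2 taps, and the 8x8 "image" are illustrative stand-ins, not the thesis code):

```python
import numpy as np

def fwt_operator(m, h):
    """One-level FWT matrix [H; G] with two-place shifts and wrap-around."""
    n = len(h)
    g = np.array([(-1) ** k * h[n - 1 - k] for k in range(n)])
    A = np.zeros((m, m))
    for r in range(m // 2):
        for k in range(n):
            c = (2 * r + k) % m
            A[r, c] = h[k]              # low-pass half
            A[r + m // 2, c] = g[k]     # high-pass half
    return A

s3 = np.sqrt(3.0)
h = np.array([1 + s3, 3 + s3, 3 - s3, 1 - s3]) / (4 * np.sqrt(2.0))
A = fwt_operator(8, h)

img = np.arange(64, dtype=float).reshape(8, 8)   # toy "image"
C = A @ img @ A.T          # one 2-D level: transform columns, then rows
back = A.T @ C @ A         # inverse, since A is orthogonal
print(np.allclose(back, img))   # True
```

Zeroing everything outside the upper-left quadrant of C before inverting gives exactly the averaged low-pass reconstructions described below.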


At this point the original image does not compress (lossless), but the first resolution decomposition (figure A.2) compresses losslessly at approximately 3:1. This is what we expect from the wavelet transform, since it tends to localize the energy of the signal being transformed. As such, changes in the image that tend to be dispersed will become localized. The result of this will be more change in less area, leaving the remaining area with less change. The remaining area will, as a result of having less change, compress better (lossless). Now we may ask: what would the upper left quadrant of figure (A.2) (which, using the notation of figure (4.1), becomes (HH) in two dimensions), 1/4 of the information, look like if it were inverse transformed with all other quadrants zero? This image is in figure A.3 (matrix "D" of Matlab A.5). Since the quadrant (HH) is the low-pass filter region, the inverse transform will have an averaging effect. Notice how, with only 1/4 the data of the original image, a reasonable facsimile of the original image evolves from the inverse transform. Consequently, this is one possibility for lossy compression. That is, send only the low-pass quadrant, and take the inverse transform at the receiving end. After one two-dimensional operation, the dimension is now n/2 x n/2. If we perform the same decomposition on the upper-left quadrant (HH) of figure A.2, we get the next multiresolution analysis shown in figure A.4 (matrix "C" of Matlab A.5, again). Notice how the now twice-averaged (low-pass) image appears again in the upper left 1/16 of the matrix. Figure A.4 (now matrix "C" of Matlab A.5) now looks like


    HHHH  HHHG
    HHGH  HHGG    HG

      GH          GG

where each H and G follows the low-pass and high-pass operations shown in figure (4.1), respectively. The operations follow, from right to left: row, column, row, column, etc. If we now take the inverse transform of the upper left 1/16th of figure A.4 (HHHH), we will get an even more averaged image than figure A.3. This inverse-transformed image, using only 1/16 of the original, is shown in figure A.5 (matrix "D1" of Matlab A.5). Notice how, even though it is highly averaged, the edge information is intact. Another thing to notice is the content of the portions of the decomposition in figure A.4 other than the upper left 1/16 (HHHH). These segments of the decomposition contain essential detail in the vertical, horizontal, and diagonal dimensions. The previous example was intended to show the spatial separation provided by the high- and low-pass multidimensional multiresolution decomposition. We threw away entire segments of the analysis. In a real compression algorithm, since every segment contains some essential detail, we would keep some portion of each segment. The question becomes: what criterion should be used to determine what data should be retained and what should be discarded? As far as I can tell from the currently available literature, the criterion is probably the simplest one could imagine: keep the largest magnitude coefficients and discard the smallest. Matlab A.6 performs a two level transform of the original mandrill image (matrix "I" of Matlab A.6). It then


discards every coefficient less than 200 in magnitude. This leaves less than 1/20 of the coefficients. Figure A.6 shows the reconstruction of the original with only these few (1/20) coefficients. Again, notice how the edge information remains intact. Obviously, the HARCC algorithm is more involved than this example, but it will be some time before we know the details of that algorithm. All we do know is that it is based on some wavelet basis.
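The keep-the-largest criterion amounts to a single magnitude test. A toy NumPy sketch (the synthetic matrix is a stand-in for a transformed image, with a low-pass corner of large coefficients and many small detail coefficients; the 200 threshold mirrors Matlab A.6):

```python
import numpy as np

rng = np.random.default_rng(0)
# Stand-in for a two-level FWT of an image: small detail coefficients
# everywhere, plus a low-pass corner carrying most of the energy.
C = rng.normal(scale=10.0, size=(64, 64))
C[:8, :8] += 500.0

threshold = 200.0
kept = np.abs(C) >= threshold          # keep only the largest magnitudes
C_compressed = np.where(kept, C, 0.0)

print(kept.sum(), kept.mean())         # 64 0.015625  (about 1.6% retained)
```

Decompression would then apply the IFWT to C_compressed, exactly as in the figure A.6 reconstruction.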


Appendix A. Matlab Code and Compression Images

Matlab A.1

r=roots([1 -4 1])
% Note: The discretization of the 2*pi interval includes only the
% end point at 2*pi, not the end point at zero. This is important
% so that the IFFT doesn't alias.
v=(2*pi/10);
x=2*pi:-v:v;
% Normalize H so that the sum of the hn's is sqrt(2).
normalize_factor=(sqrt(2)/(1-r(1)));
Q=(exp(i*x)-r(1));
H=(((1+exp(i*x))/2).^2).*Q*normalize_factor;
h=ifft(H)
s=cumsum(H);
sqrt(2)-s(1)

output:

>> daub_qn2

r =
   3.73205080756888
   0.26794919243112

h =
   0.48296291314453 + 0.00000000000000i
   0.83651630373781 + 0.00000000000000i
   0.22414386804201 + 0.00000000000000i
  -0.12940952255126 + 0.00000000000000i

sqrt(2) - sum{hn} = 0 + 2.195904039796593e-16i

This is the error between the square root of 2 as calculated by Matlab and the sum of the hn's. We can also look at the aforementioned unitary conditions in the spectral domain. (Here the suffix c implies complement, and m implies x+pi.)

v=(2*pi/30);
x=2*pi:-v:v;
a=1/(1-r(1));
H=a*(((1+exp(i*x))/2).^2).*(exp(i*x)-r(1));
Hc=a*(((1+exp(-i*x))/2).^2).*(exp(-i*x)-r(1));
Hm=a*(((1+exp(i*(x+pi)))/2).^2).*(exp(i*(x+pi))-r(1));
Hmc=a*(((1+exp(-i*(x+pi)))/2).^2).*(exp(-i*(x+pi))-r(1));
G=-a*exp(i*3*x).*(((1+exp(-i*(x+pi)))/2).^2).*(exp(-i*(x+pi))-r(1));


Gc=-a*exp(-i*3*x).*(((1+exp(i*(x+pi)))/2).^2).*(exp(i*(x+pi))-r(1));
Gm=-a*exp(i*3*(x+pi)).*(((1+exp(-i*x))/2).^2).*(exp(-i*x)-r(1));
Gmc=-a*exp(-i*3*(x+pi)).*(((1+exp(i*x))/2).^2).*(exp(i*x)-r(1));
Z0=Hc.*H+Hmc.*Hm;  % should be 1
Z1=Gc.*H+Gmc.*Hm;  % should be 0
Z2=H.*Hc+G.*Gc;    % should be 1
Z3=Hm.*Hc+Gm.*Gc;  % should be 0

Matlab A.2

c=roots([1 -3 8/3])
r=roots([1 -2*(c(1)+c(2)) (2+4*c(1)*c(2)) -2*(c(1)+c(2)) 1])
% Note: The discretization of the 2*pi interval includes only the
% end point at 2*pi, not the end point at zero. This is important
% so that the IFFT doesn't alias.
v=(2*pi/14);
x=2*pi:-v:v;
% Normalize H so that the sum of the hn's is sqrt(2).
normalize_factor=sqrt(2)/(1-(r(1)+r(2))+r(1)*r(2));
Q=exp(2*i*x)-(r(1)+r(2))*exp(i*x)+r(1)*r(2);
H=(((1+exp(i*x))/2).^3).*Q*normalize_factor;
h=ifft(H)
s=cumsum(H);
sqrt(2)-s(1)

output:

>> daub_qn3

c =
   1.50000000000000 + 0.64549722436790i
   1.50000000000000 - 0.64549722436790i

r =
   2.71274862195598 + 1.44388678261800i
   2.71274862195598 - 1.44388678261800i
   0.28725137804402 + 0.15289233388220i
   0.28725137804402 - 0.15289233388220i

h =
   0.33267055295008 + 0.00000000000000i
   0.80689150931109 + 0.00000000000000i
   0.45987750211849 + 0.00000000000000i
  -0.13501102001025 + 0.00000000000000i
  -0.08544127388203 + 0.00000000000000i
   0.03522629188571 + 0.00000000000000i

sqrt(2) - sum{hn} = 0 + 2.831240458600424e-16i
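The Matlab A.1 construction is easy to reproduce in NumPy (a sketch with my own variable names; note the polynomial z^2 - 4z + 1, whose roots 3.7320... and 0.2679... match the printed output of Matlab A.1, and the same descending frequency grid, so the IFFT recovers the hn's directly):

```python
import numpy as np

N = 10
r = np.roots([1, -4, 1]).real          # 3.7320..., 0.2679...
r1 = r.max()                           # the root used as r(1) above
norm = np.sqrt(2) / (1 - r1)
# Grid runs from 2*pi down to 2*pi/N: it includes 2*pi but excludes 0,
# exactly as in the Matlab note, so the IFFT does not alias.
w = 2 * np.pi - np.arange(N) * (2 * np.pi / N)
H = norm * ((1 + np.exp(1j * w)) / 2) ** 2 * (np.exp(1j * w) - r1)
h = np.fft.ifft(H).real

print(np.round(h[:4], 8))              # the four Daubechies N=2 taps
print(np.isclose(h.sum(), np.sqrt(2))) # True: sum of hn's is sqrt(2)
```

The remaining six entries of h come out as numerical zeros, since H(w) is a degree-3 trigonometric polynomial.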


Matlab A.3

% This function was derived with input from articles
% from William H. Press and Gilbert Strang.
%
% Prologue:
%   'a'     defines the vector to be transformed
%   'n'     defines the length of 'a'
%   'cl'    are the low-pass filter coefficients {hn}
%   'ch'    are the high-pass filter coefficients {gn}
%   'ncof'  indicates the number of coefficients in cl or ch
%   'xform' true for transform, false for inverse-transform

function a = waveletn(a,n,cl,ch,ncof,xform)
% Fast Wavelet Xform/invXform function
if (xform > 0)
    nn=n;
    while (nn >= 4),
        a=daubn(a,nn,cl,ch,ncof,xform);
        nn=nn/2;
    end
else
    nn=4;
    while (nn <= n),
        a=daubn(a,nn,cl,ch,ncof,xform);
        nn=nn*2;
    end
end
return


Matlab A.4

% This function was derived with input from articles
% from William H. Press and Gilbert Strang.
%
% Prologue:
%   'a'     defines the vector to be transformed
%   'n'     defines the length of 'a'
%   'cl'    are the low-pass filter coefficients {hn}
%   'ch'    are the high-pass filter coefficients {gn}
%   'ncof'  indicates the number of coefficients in cl or ch
%   'xform' true for transform, false for inverse-transform

function a = daubn(a,n,cl,ch,ncof,xform)
nh=n/2;
% clear the work space
for i=1:n,
    w(i)=0;
end
if (xform > 0)
    i=1;
    for j=1:2:n,
        jk=j;
        for k=1:ncof,
            if (jk>n)
                jk=jk-n;
            end
            w(i)=w(i)+cl(k)*a(jk);
            w(i+nh)=w(i+nh)+ch(k)*a(jk);
            jk=jk+1;
        end
        i=i+1;
    end
else
    i=1;
    for j=1:2:n,
        jk=j;
        for k=1:ncof,
            if (jk>n)
                jk=jk-n;
            end
            w(jk)=w(jk)+cl(k)*a(i)+ch(k)*a(i+nh);
            jk=jk+1;
        end
        i=i+1;
    end
end
% save the work space result back into 'a'
for i=1:n,
    a(i)=w(i);
end
return
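As a cross-check of the two listings above, here is a NumPy transcription (a sketch: 0-based indexing replaces Matlab's 1-based pointers, and the modulo operation plays the role of the "jk > n" wrap-around test):

```python
import numpy as np

def daubn(a, cl, ch, forward=True):
    """One FWT level on vector a, mirroring daubn.m (circular wrap-around)."""
    n = len(a)
    nh = n // 2
    w = np.zeros(n)
    if forward:
        for i, j in enumerate(range(0, n, 2)):   # one output pair per 2 inputs
            for k in range(len(cl)):
                jk = (j + k) % n                 # wrap-around of the filter
                w[i] += cl[k] * a[jk]            # low-pass (H) row ops
                w[i + nh] += ch[k] * a[jk]       # high-pass (G) row ops
    else:
        for i, j in enumerate(range(0, n, 2)):
            for k in range(len(cl)):
                jk = (j + k) % n
                # column ops of x = A^T X: both multiplies in one sum
                w[jk] += cl[k] * a[i] + ch[k] * a[i + nh]
    return w

def waveletn(a, cl, ch, forward=True):
    """Full multiresolution FWT/IFWT, mirroring waveletn.m."""
    a = np.array(a, dtype=float)
    n = len(a)
    if forward:
        nn = n
        while nn >= 4:
            a[:nn] = daubn(a[:nn], cl, ch, True)
            nn //= 2
    else:
        nn = 4
        while nn <= n:
            a[:nn] = daubn(a[:nn], cl, ch, False)
            nn *= 2
    return a

# Round trip with the Daubechies N=2 filters: IFWT(FWT(x)) == x.
s3 = np.sqrt(3.0)
cl = np.array([1 + s3, 3 + s3, 3 - s3, 1 - s3]) / (4 * np.sqrt(2.0))
ch = np.array([cl[3], -cl[2], cl[1], -cl[0]])
x = np.arange(16, dtype=float)
X = waveletn(x, cl, ch, True)
print(np.allclose(waveletn(X, cl, ch, False), x))   # True
```

The round trip recovers x exactly (to machine precision) because every level is an orthogonal operation.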


Matlab A.5

% Image Compression example using the FWT multiresolution decomposition.
%
% Prologue:
%   'I'    the original image contained in a 512x512 matrix of
%          gray scale values.
%   'C'    the transformed gray scale matrix: I is decomposed to
%          two levels in this example.
%   'ncof' the number of Daubechies' coefficients (filter length)
%   'n'    matrix dimension

n=512;
ncof=6;

% Transform the image I
xform=1;
% First along rows
for i=1:n,
    A(i,:)=daubn(I(i,:),n,cl,ch,ncof,xform);
end
% Then along columns
for i=1:n,
    C(:,i)=daubn(A(:,i),n,cl,ch,ncof,xform);
end

% Take the inverse transform of the HH portion of C.
% The other 3/4 of the matrix are zeroed.
xform=-1;
I=zeros(n,n);
I(1:n/2,1:n/2)=C(1:n/2,1:n/2);
for i=1:n,
    A(1:n,i)=daubn(I(1:n,i),n,cl,ch,ncof,xform);
end
for i=1:n,
    D(i,1:n)=daubn(A(i,1:n),n,cl,ch,ncof,xform);
end
% D contains the uncompressed image of the first resolution

% Take the transform to the next level of resolution;
% i.e., take the transform of the HH 1/4 portion of C.
xform=1;
for i=1:n/2,
    A(1:n/2,i)=daubn(C(1:n/2,i),n/2,cl,ch,ncof,xform);
end
for i=1:n/2,
    C(i,1:n/2)=daubn(A(i,1:n/2),n/2,cl,ch,ncof,xform);
end


% C now contains the second level of resolution

% Take the inverse transform of the HHHH portion of C.
% The other 15/16 of the matrix are zeroed.
xform=-1;
I=zeros(n/2,n/2);
I(1:n/4,1:n/4)=C(1:n/4,1:n/4);
for i=1:n/2,
    A(1:n/2,i)=daubn(I(1:n/2,i),n/2,cl,ch,ncof,xform);
end
for i=1:n/2,
    I1(i,1:n/2)=daubn(A(i,1:n/2),n/2,cl,ch,ncof,xform);
end
I=zeros(n,n);
I(1:n/2,1:n/2)=I1(1:n/2,1:n/2);
for i=1:n,
    A(1:n,i)=daubn(I(1:n,i),n,cl,ch,ncof,xform);
end
for i=1:n,
    D1(i,1:n)=daubn(A(i,1:n),n,cl,ch,ncof,xform);
end
% D1 contains the uncompressed image of the second resolution

% View the uncompressed image of the second resolution.
image(D1)
colormap(gray(200))


Figure A.1: The original mandrill (baboon) image contained in matrix "I" of Matlab A.5.


Figure A.2: The first level two-dimensional decomposition of the image (n x n). This image is contained in matrix "C" of Matlab A.5.


Figure A.3: The inverse transform of the (n/2 x n/2) upper-left (HH) quadrant of figure A.2, with all other quadrants zeroed. This image is contained in matrix "D" of Matlab A.5.


Figure A.4: The second level decomposition (n x n): the first transform at dimension (n x n), and the second operation at dimension (n/2 x n/2). This image is contained in the second version of matrix "C" in Matlab A.5.


Figure A.5: The inverse transform of the (n/4 x n/4) upper-left (HHHH) portion of figure A.4, with all other coefficients zeroed. This image is contained in matrix "D1" of Matlab A.5.


Matlab A.6

% Image Compression example using the FWT multiresolution decomposition.
%
% Prologue:
%   'I'    the original image contained in a 512x512 matrix of
%          gray scale values.
%   'C'    the transformed gray scale matrix: I is decomposed to
%          two levels in this example.
%   'ncof' the number of Daubechies' coefficients (filter length)
%   'n'    matrix dimension

n=512;
ncof=6;

% Transform the image I
xform=1;
% First along rows
for i=1:n,
    A(i,:)=daubn(I(i,:),n,cl,ch,ncof,xform);
end
% Then along columns
for i=1:n,
    C(:,i)=daubn(A(:,i),n,cl,ch,ncof,xform);
end

% Take the transform to the next level of resolution;
% i.e., take the transform of the HH 1/4 portion of C.
xform=1;
for i=1:n/2,
    A(1:n/2,i)=daubn(C(1:n/2,i),n/2,cl,ch,ncof,xform);
end
for i=1:n/2,
    C(i,1:n/2)=daubn(A(i,1:n/2),n/2,cl,ch,ncof,xform);
end

% C now contains the second level of resolution

% As an example, zero all coefficients less than 200 in magnitude
for i=1:n,
    for j=1:n,
        if (abs(C(i,j)) < 200)
            C(i,j)=0;
        end
    end
end


% Take the inverse transform of the second resolution
% transformed image with only the more significant
% coefficients retained.
xform=-1;
for i=1:n/2,
    A(1:n/2,i)=daubn(C(1:n/2,i),n/2,cl,ch,ncof,xform);
end
for i=1:n/2,
    C(i,1:n/2)=daubn(A(i,1:n/2),n/2,cl,ch,ncof,xform);
end
for i=1:n,
    A(1:n,i)=daubn(C(1:n,i),n,cl,ch,ncof,xform);
end
for i=1:n,
    C(i,1:n)=daubn(A(i,1:n),n,cl,ch,ncof,xform);
end
% C contains the uncompressed image of the second resolution

% View the uncompressed image of the second resolution.
image(C)
colormap(gray(200))


Figure A.6: Reconstruction of the mandrill image from the two-level decomposition with all coefficients of magnitude less than 200 discarded. This image is contained in matrix "C" of Matlab A.6.


References

[1] I. Daubechies, "Orthonormal Bases of Compactly Supported Wavelets," Communications on Pure and Applied Mathematics, vol. XLI, pp. 909-996, 1988.

[2] I. Daubechies, "Ten Lectures on Wavelets," SIAM, Philadelphia, PA, 1992.

[3] T.H. Koornwinder, "Wavelets: An Elementary Treatment of Theory and Applications," World Scientific Publishing Co., River Edge, NJ, 1993.

[4] G.G. Walter, "Wavelets and Other Orthogonal Systems With Applications," CRC Press, Boca Raton, FL, 1994.

[5] Y. Meyer, "Wavelets: Algorithms & Applications," SIAM, Philadelphia, PA, 1993.

[6] J.M. Combes and A. Grossmann, "Wavelets, Time-Frequency Methods and Phase Space," Springer-Verlag, Berlin Heidelberg, 1989.

[7] C.K. Chui, "Wavelets: A Tutorial in Theory and Applications," Academic Press, San Diego, CA, 1992.

[8] L.L. Schumaker and G. Webb, "Recent Advances in Wavelet Analysis," Academic Press, San Diego, CA, 1994.

[9] M.B. Ruskai, "Wavelets and Their Applications," Jones and Bartlett, Boston, MA, 1992.

[10] P. Duhamel et al., "Special Issue on Wavelets and Signal Processing," IEEE Trans. on Signal Processing, vol. 41, 1993.

[11] O. Rioul, "A Remez Exchange Algorithm for Orthonormal Wavelets," IEEE Trans. on Circuits and Systems, vol. 41, pp. 550-560, Aug. 1994.

[12] S.G. Mallat, "Multiresolution Approximations and Wavelet Orthonormal Bases of L2(R)," Trans. of the American Mathematical Society, vol. 315, pp. 69-86, Sept. 1989.

[13] S.G. Mallat, "A Theory for Multiresolution Signal Decomposition: The Wavelet Representation," IEEE Trans. on Pattern Analysis and Machine Intelligence, vol. 11, pp. 674-693, Jul. 1989.

[14] G. Strang, "Wavelet Transforms Versus Fourier Transforms," American Mathematical Society, vol. 28, pp. 288-304, Apr. 1993.

[15] R.A. Gopinath, "Optimal Wavelet Representation of Signals and the Wavelet Sampling Theorem," IEEE Trans. on Circuits and Systems, vol. 41, pp. 262-277, Apr. 1994.

[16] I. Daubechies, "The Wavelet Transform, Time-Frequency Localization and Signal Analysis," IEEE Trans. on Information Theory, vol. 36, pp. 961-1004, Sept. 1990.

[17] P.J. Burt and E.H. Adelson, "The Laplacian Pyramid as a Compact Image Code," IEEE Trans. on Communications, vol. 31, pp. 532-540, 1983.

[18] J.J. Benedetto and M.W. Frazier, "Wavelets: Mathematics and Applications," CRC Press, Boca Raton, FL, 1994.

[19] M.V. Wickerhauser, "Adapted Wavelet Analysis from Theory to Software," A.K. Peters, Wellesley, MA, 1994.

[20] T.C. Bell et al., "Text Compression," Prentice Hall, Englewood Cliffs, NJ, 1990.