LAPLACE INVERSION BY OPTIMAL FILTERING
by
Thoai Huu Nguyen
B.S., University of Colorado, 1986
A thesis submitted to the
Faculty of the Graduate School of the
University of Colorado in partial fulfillment
of the requirements for the degree of
Master of Science
Department of Electrical Engineering
1989
This thesis for the Master of Science degree by
Thoai Huu Nguyen
has been approved for the
Department of
Electrical Engineering
by
Date
Joe Thomas
Acknowledgement
I would like to express my gratitude to Dr. Douglas A.
Ross for his help during the preparation of the thesis. I also
would like to thank my friends, Chuong, Trung, Jeff, and Linda,
for proofreading and typing a part of this thesis.
Finally, I would like to thank my friend, Thy, for his
design of the cover page.
Nguyen, Thoai H. (M.S., Electrical Engineering)
Laplace Inversion by Optimal Filtering
Thesis directed by Associate Professor Douglas A. Ross
Consider the Fredholm integral equation of the first kind

g(t) = \int_0^\infty G(\gamma)\,\exp(-\gamma t)\,d\gamma

where g(t) is the known autocorrelation function and the linewidth distribution function, G(γ), is the unknown. This equation is an ill-conditioned integral equation, which means that in the presence of noise, the variance of the inversion integral is infinite, indicating maximum uncertainty in the inversion. A method of optimal filtering will be introduced to reduce the effect of noise. The noise spectrum, which results from statistical noise in g(t), may be filtered to minimize its effect by using the spectral decomposition of the Laplace transform, which leads to the Wiener-Hopf integral equation defining the optimal filter.
The regularized inversion of the Laplace transform gives a convenient computational algorithm and does not require any a priori knowledge about the shape of the linewidth distribution. Regularized inversion of the Laplace transform will be introduced, which leads to the regularized solution. Even though regularization does not give an optimal filter, it does provide a filtering process which gives finite noise variance.
The results of this thesis provide the comparison between using the optimal filter and the regularization filter.
The form and content of this abstract are approved. I
recommend its publication.
Signed
Douglas A. Ross
TABLE OF CONTENTS
CHAPTER
I. INTRODUCTION................................. 1
II. OPTIMAL FILTER EQUATION...................... 5
Introduction................................ 5
Spectral Decomposition of the Laplace
Transform................................... 6
Statistical Noise in Estimating g(t)...... 12
Derivation of the Optimal Filter............ 21
III. EXAMPLE OF USING OPTIMAL FILTER EQUATION ... 26
Gamma Linewidth Distribution................ 26
Histogram Linewidth Distribution............ 35
IV. SPECTRAL PROPERTIES OF THE REGULARIZED
INVERSION OF THE LAPLACE TRANSFORM............... 44
Introduction................................ 44
The Regularized Inversion of the Laplace
Transform................................... 44
Role of the Regularization Parameter........ 49
V. DISCUSSION AND CONCLUSIONS.................... 59
BIBLIOGRAPHY....................................... 64
APPENDIX
A. PROGRAM LISTINGS............................... 65
TABLE OF FIGURES
FIGURE
3.1 Gamma linewidth distribution........................ 27
3.2 Amplitude response of the Gamma linewidth
distribution for different values of σ/γ̄............ 32
3.3 Variation with T/t_c and σ/γ̄ of the bias and
RMS error in estimating Gamma linewidth
distribution using the optimal filter.............. 34
3.4 Optimal filter amplitude response of the
Gamma linewidth distribution for
different values of σ/γ̄ ........................... 36
3.5 Histogram linewidth distribution.................... 35
3.6 Amplitude response of the histogram
linewidth distribution for various spacing.......... 38
3.7 Variation with T of the RMS error in
estimating histogram linewidth
distribution using the optimal filter............... 42
3.8 Optimal filter amplitude response of
the histogram linewidth distribution
for various spacing................................. 43
4.1 Variation with α of F(μ)........................ 48
4.2a Variation with α of the bias, noise, and
mean square difference for T = 5000 ................ 55
4.2b Variation with α of the bias, noise, and
mean square difference for T = 7000 ................ 56
4.2c Variation with α of the bias, noise, and
mean square difference for T = 10000 ............... 57
4.3 Variation with α of the RMS error
for T = 10000 ...................................... 58
CHAPTER I
INTRODUCTION
Filtering in one form or another has been with mankind
for a very long time. Sunlight filters through the trees. We
remove the more visible of the impurities in water by
filtering, etc. This thesis is devoted to a discussion of a
filtering process which plays an important role in
communication, control systems, and signal processing. The
following is the synopsis of the thesis's content and order.
Chapter II: This chapter gives a brief
introduction to the Laplace integral equation. Eigenfunctions
and eigenvalues of the Laplace transform will be introduced.
The relationship between the autocorrelation function, g(t),
and the linewidth distribution function, G(y), can be obtained
by using the spectral decomposition of the Laplace transform
rather than the Fourier decomposition of traditional time
invariant linear system theory. Other properties of the
eigenfunctions of the Laplace transform will be derived in
this chapter.
By assuming a time average estimate based on x(t), a zero
mean stationary normal process with autocovariance g(t), the
autocovariance of the noise and the variance spectrum are
obtained. Using a simple example, assuming the noise is white
noise, the variance is infinite during the inversion. So, it is
desirable to filter the spectrum of g(t) in order to reduce the
effect of the noise. If the mean
square difference between the filtered estimation and the
true linewidth distribution is minimized, an optimal filter
can be derived. A method of optimal filtering will be
introduced to reduce the effect of noise. The noise spectrum,
which results from statistical noise in g(t), may be filtered
to minimize its effect by using spectral decomposition of the
Laplace transform, which leads to the Wiener-Hopf integral
equation defining the optimal filter. The derivation of the
mean square difference using the optimal filter is also
included in this chapter.
Chapter III: In this chapter, using the results found in
chapter two, the gamma linewidth distribution, which is the
continuous case, and the histogram approach, which is the
discrete case, have been used to investigate their spectral
properties with and without applying the optimal filter.
For the gamma linewidth distribution, various values of
the standard deviation divided by the mean will be used to see
the effect of the linewidth distribution during the inversion
and variations of the RMS error with respect to the shape of
the linewidth distribution in estimating G(y) using the
optimal filter.
For the histogram linewidth distribution, different
spacings will be used to see the effect of the linewidth
distribution during the inversion. The solution of the RMS
error does not allow γ₀ to equal zero because a natural log
term appears in its expression. Therefore, various values of γ₀
will be used to see variations of the RMS error in estimating
the histogram linewidth distribution using the optimal filter.
Chapter IV: The regularized inversion of the Laplace
integral equation gives a convenient computational algorithm
and does not require any a priori knowledge about the shape
of the Iinewidth distribution. In this chapter, the regularized
inversion of the Laplace transform will be introduced which
leads to the regularized solution. Even though the
regularization process does not give an optimal filter, it does
provide a filtering process which gives finite noise variance.
The regularized solution is a low pass filter which preserves
the properties of the inversion spectrum at low frequencies
but provides a sharp cutoff at a point controlled by the
regularization parameter a. The solution of the mean square
difference is a function of a. Thus, if the mean square
difference is minimized for some value of a, then an optimal
choice of a can be obtained. The parameter a will be carefully
discussed in this chapter.
Chapter V: The optimal filter, the regularization filter,
and other results found in previous chapters will be carefully
analyzed. The comparison between using the optimal filter and
the regularization filter will also be presented in this chapter.
CHAPTER II
OPTIMAL FILTER EQUATION
Introduction
Consider the Fredholm integral equation of the first kind

g(t) = \int_0^\infty G(\gamma)\,\exp(-\gamma t)\,d\gamma    (2.1)
where g(t) is the known autocorrelation function and the
linewidth distribution function, G(y), is the unknown. This
integral equation appears in many fields of science. For
example, the laser scattering measurement of the Brownian
motion of particles suspended in a colloid can be modeled by
equation(2.1). In this case, g(t) is the autocorrelation of the
electric field of scattered light, and the linewidth
distribution, G(y), represents the particle size distribution of
the colloid, with the property G(γ) > 0 for all γ.
The integral above may be inverted by direct
integration. However, the kernel function of the inversion
integral is not integrable [2], which implies that the Laplace
transform is an ill-conditioned integral equation.
The spectral decomposition using the eigenfunction
spectrum of the Laplace transform gives a representation of
the noise and signal analogous to the Fourier decomposition
of the traditional time invariant linear system. In the
presence of noise, the variance of the inversion integral is
infinite which means maximum uncertainty in the inversion.
If the mean square difference between a filtered estimation
and the true linewidth distribution is minimized, an optimal
filter can be obtained. As a result, the variance is finite in
contrast to an infinite variance using an unfiltered
estimation of the linewidth distribution. A method of optimal
filtering will be introduced to reduce the effect of noise. The
noise spectrum, which results from statistical noise in g(t),
may be filtered to minimize its effect by using spectral
decomposition of the Laplace transform which leads to the
Wiener-Hopf integral equation defining the optimal filter.
Spectral Decomposition of the Laplace Transform

Using the spectral decomposition of the Laplace transform with the eigenfunctions

\phi(\gamma,\mu) = \frac{1}{\sqrt{2\pi}}\,\exp[-j.5\,\theta(\mu)]\,\gamma^{-.5+j\mu}, \qquad \gamma > 0    (2.2)

and the eigenvalue spectrum

\lambda(\mu) = |\Gamma(.5 + j\mu)| = \sqrt{\pi/\cosh(\pi\mu)}    (2.3)

and

\theta(\mu) = \arg[\Gamma(.5 + j\mu)] = -\mu\,\psi(.5) + \sum_{n=0}^{\infty}\left[\frac{\mu}{n+.5} - \tan^{-1}\!\left(\frac{\mu}{n+.5}\right)\right]    (2.4)

where

\psi(.5) = 1.96351002602142\ldots    (2.5)
are the solutions to

\lambda(\mu)\,\phi^*(t,\mu) = \int_0^\infty \phi(\gamma,\mu)\,\exp(-\gamma t)\,d\gamma    (2.6)
Proof:
Substituting equation (2.2) in equation (2.6), obtain

\lambda(\mu)\,\phi^*(t,\mu) = \int_0^\infty \frac{1}{\sqrt{2\pi}}\,\exp[-j.5\,\theta(\mu)]\,\gamma^{-.5+j\mu}\,\exp(-\gamma t)\,d\gamma    (2.7)

The change of variable \zeta = \gamma t allows one to write

\lambda(\mu)\,\phi^*(t,\mu) = \frac{\exp[-j.5\,\theta(\mu)]}{\sqrt{2\pi}}\, t^{-.5-j\mu} \int_0^\infty \zeta^{-.5+j\mu}\,\exp(-\zeta)\,d\zeta    (2.8)

Since the Gamma function is given by [7]

\Gamma(z) = \int_0^\infty \zeta^{z-1}\,\exp(-\zeta)\,d\zeta    (2.9)

it follows that

\lambda(\mu)\,\phi^*(t,\mu) = \frac{\exp[-j.5\,\theta(\mu)]}{\sqrt{2\pi}}\,\Gamma(.5+j\mu)\, t^{-.5-j\mu}    (2.10)

Note that \Gamma(.5+j\mu) = \lambda(\mu)\exp[j\theta(\mu)], so the right-hand side equals

\lambda(\mu)\,\frac{\exp[j.5\,\theta(\mu)]}{\sqrt{2\pi}}\, t^{-.5-j\mu} = \lambda(\mu)\,\phi^*(t,\mu)    (2.11)

which gives

\lambda(\mu) = |\Gamma(.5+j\mu)| = \sqrt{\Gamma(.5+j\mu)\,\Gamma(.5-j\mu)} = \sqrt{\pi/\cosh(\pi\mu)}    (2.12)
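As a numerical check of equations (2.3)-(2.5), the sketch below evaluates |Γ(.5 + jμ)| directly and compares it with √(π/cosh(πμ)). The complex Gamma function is computed with a standard Lanczos approximation; this code is an added illustration, not part of the thesis apparatus:

```python
import cmath
import math

# Standard Lanczos approximation coefficients (g = 7).
_LANCZOS = [0.99999999999980993, 676.5203681218851, -1259.1392167224028,
            771.32342877765313, -176.61502916214059, 12.507343278686905,
            -0.13857109526572012, 9.9843695780195716e-6, 1.5056327351493116e-7]

def cgamma(z: complex) -> complex:
    """Gamma(z) for complex z with Re(z) >= 0.5 (sufficient for z = .5 + j*mu)."""
    z -= 1
    x = _LANCZOS[0]
    for i, c in enumerate(_LANCZOS[1:], start=1):
        x += c / (z + i)
    t = z + 7.5
    return math.sqrt(2 * math.pi) * t ** (z + 0.5) * cmath.exp(-t) * x

# Equation (2.3): |Gamma(.5 + j*mu)| = sqrt(pi / cosh(pi*mu)).
for mu in [0.0, 0.5, 1.0, 2.0, 5.0]:
    lam_gamma = abs(cgamma(complex(0.5, mu)))
    lam_closed = math.sqrt(math.pi / math.cosh(math.pi * mu))
    assert abs(lam_gamma - lam_closed) < 1e-9 * lam_closed + 1e-12
    print(mu, lam_closed)
```

The printed values fall off roughly like exp(-πμ/2), which is exactly the rapid eigenvalue decay identified below as the source of the ill-conditioning.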
From equation (2.3), for large μ the eigenvalues are extremely small, which is the source of the ill-conditioning. Assuming the linewidth distribution can be written in the form

G(\gamma) = \int_{-\infty}^{\infty} H(\mu)\,\phi^*(\gamma,\mu)\,d\mu    (2.13)
the relationship between g(t) and G(γ) can be derived as follows:

\int_0^\infty G(\gamma)\,\phi(\gamma,\mu')\,d\gamma = \int_{-\infty}^{\infty} H(\mu)\,d\mu \int_0^\infty \phi^*(\gamma,\mu)\,\phi(\gamma,\mu')\,d\gamma    (2.14)

Since the orthogonality of the eigenfunctions is

\int_0^\infty \phi(t,\mu)\,\phi^*(t,\mu')\,dt = \delta(\mu-\mu')    (2.15)

substituting equation (2.15) in equation (2.14) gives

\int_{-\infty}^{\infty} H(\mu)\,\delta(\mu-\mu')\,d\mu = \int_0^\infty G(\gamma)\,\phi(\gamma,\mu')\,d\gamma    (2.16)

Because of the property of the impulse function, equation (2.16) reduces to

H(\mu') = \int_0^\infty G(\gamma)\,\phi(\gamma,\mu')\,d\gamma    (2.17)

Consider also

\int_0^\infty g(t)\,\phi^*(t,\mu)\,dt = \int_0^\infty G(\gamma)\,d\gamma \int_0^\infty \phi^*(t,\mu)\,\exp(-\gamma t)\,dt    (2.18)
From equation(2.6), equation(2.18) can be written as
\int_0^\infty g(t)\,\phi^*(t,\mu)\,dt = \lambda(\mu)\int_0^\infty G(\gamma)\,\phi(\gamma,\mu)\,d\gamma    (2.19)

Substituting equation (2.17) in equation (2.19), obtain

\lambda(\mu)\,H(\mu) = \int_0^\infty g(t)\,\phi^*(t,\mu)\,dt    (2.20)

where H(μ) is the spectrum of G(γ); if g(t) is known, then G(γ) can be determined from its spectrum. For convenience, let

\lambda(\mu)\,H(\mu) = J(\mu)    (2.21)

Parseval's theorem for two functions is

\int_0^\infty a(t)\,b(t)\,dt = \int_{-\infty}^{\infty} A(\mu)\,B^*(\mu)\,d\mu    (2.22)

where A(μ) and B(μ) are the spectra of a(t) and b(t) respectively, giving

\int_0^\infty G^2(\gamma)\,d\gamma = \int_{-\infty}^{\infty} |H(\mu)|^2\,d\mu    (2.23)
and

\int_0^\infty g^2(t)\,dt = \int_{-\infty}^{\infty} |J(\mu)|^2\,d\mu    (2.24)
Statistical Noise in Estimating g(t)

Suppose x(t) is a zero mean stationary process with normal statistics and autocovariance g(t), and let ĝ(t) be a time average estimate of g(t):

\hat{g}(t) = \frac{1}{T}\int_0^T x(t')\,x(t'-t)\,dt'    (2.25)

where T is the duration of the time average. Writing the estimate as ĝ(t) = g(t) + n(t), where n(t) has zero mean, the noise in ĝ(t) is

n(t) = \hat{g}(t) - g(t)    (2.26)

which gives

n(t) = \frac{1}{T}\int_0^T [x(t')\,x(t'-t) - g(t)]\,dt'    (2.27)
The autocovariance is

C_n(t,t') = \frac{1}{T^2}\int_0^T\!\!\int_0^T \{E[x(t_1)x(t_1-t)\,x(t_2)x(t_2-t')] - g(t)\,g(t')\}\,dt_1\,dt_2    (2.28)

For x(t) a stationary normal process [3]

E[x(t_1)x(t_1-t)x(t_2)x(t_2-t')] = g(t)g(t') + g(t_1-t_2)\,g(t_1-t_2-t+t') + g(t_1-t_2+t')\,g(t_1-t_2-t)    (2.29)

Then the autocovariance C_n(t,t') can be written as

C_n(t,t') = \frac{1}{T}\int_{-T}^{T} [1-|\tau|/T]\,\{g(\tau)\,g(\tau-t+t') + g(\tau+t')\,g(\tau-t)\}\,d\tau    (2.30)

Equation (2.30) can be simplified by taking the Fourier transform with respect to t and evaluating the integral with respect to dτ. This yields

FT\{C_n(t,t')\} = \frac{S(f)}{T}\int_{-T}^{T}[1-|\tau|/T]\,g(\tau)\,\exp(-j2\pi f[t'+\tau])\,d\tau + \frac{S(f)}{T}\int_{-T}^{T}[1-|\tau|/T]\,g(\tau+t')\,\exp(-j2\pi f\tau)\,d\tau    (2.31)
Note that

g(\tau) = \int_{-\infty}^{\infty} S(f')\,\exp(j2\pi f'\tau)\,df'    (2.32)

Then

FT\{C_n(t,t')\} = \frac{S(f)}{T}\,\exp(-j2\pi ft')\int_{-\infty}^{\infty} S(f')\,df' \int_{-T}^{T}[1-|\tau|/T]\,\exp(-j2\pi[f-f']\tau)\,d\tau
+ \frac{S(f)}{T}\,\exp(j2\pi ft')\int_{-\infty}^{\infty} S(f')\,df' \int_{-T}^{T}[1-|\tau|/T]\,\exp(-j2\pi[f-f']\tau)\,d\tau    (2.33)

Using the fact that \cos(x) = \{\exp(jx)+\exp(-jx)\}/2, equation (2.33) reduces to

FT\{C_n(t,t')\} = \frac{2}{T}\,\cos(2\pi ft')\,S(f)\int_{-\infty}^{\infty} S(f')\,df' \int_{-T}^{T}[1-|\tau|/T]\,\exp(-j2\pi[f'-f]\tau)\,d\tau    (2.34)

Since [6]

\int_{-T}^{T}[1-|\tau|/T]\,\exp(-j2\pi[f'-f]\tau)\,d\tau = T\,\{\sin[\pi(f'-f)T]/[\pi(f'-f)T]\}^2    (2.35)
finally

FT\{C_n(t,t')\} = 2\cos(2\pi ft')\,S(f)\int_{-\infty}^{\infty} S(f')\,\mathrm{sinc}^2[\pi(f'-f)T]\,df'    (2.36)

Assuming that the bandwidth of the sinc function is infinitesimal compared with the frequency variation of the spectral density, one may use the following approximation

\int_{-\infty}^{\infty} S(f')\,\mathrm{sinc}^2[\pi(f'-f)T]\,df' \approx S(f)\int_{-\infty}^{\infty} \mathrm{sinc}^2[\pi(f'-f)T]\,df'    (2.37)

Since

\int_{-\infty}^{\infty} \mathrm{sinc}^2(\mu)\,d\mu = \pi    (2.38)

and using the property of any spectral density

S(-f) = S(f)    (2.39)

this gives

FT\{C_n(t,t')\} \approx 2\cos(2\pi ft')\,S^2(f)/T    (2.40)
or

C_n(t,t') = \frac{2}{T}\int_{-\infty}^{\infty} S^2(f)\,\cos(2\pi ft')\,\cos(2\pi ft)\,df = g_2(t-t')/T + g_2(t+t')/T    (2.41)

where

g_2(t) = \int_{-\infty}^{\infty} g(\tau)\,g(t-\tau)\,d\tau = \int_{-\infty}^{\infty} S^2(f)\,\exp(j2\pi ft)\,df    (2.42)
From equation (2.26), the estimated spectrum of G(γ) can be written as

\hat{H}(\mu) = \frac{1}{\lambda(\mu)}\int_0^\infty g(t)\,\phi^*(t,\mu)\,dt + \frac{1}{\lambda(\mu)}\int_0^\infty n(t)\,\phi^*(t,\mu)\,dt

\hat{H}(\mu) = H(\mu) + N(\mu)/\lambda(\mu)    (2.43)

where

N(\mu) = \int_0^\infty n(t)\,\phi^*(t,\mu)\,dt    (2.44)
Similarly, the estimate of G(γ) can be written in the form

\hat{G}(\gamma) = \int_{-\infty}^{\infty} H(\mu)\,\phi^*(\gamma,\mu)\,d\mu + \int_{-\infty}^{\infty} \{N(\mu)\,\phi^*(\gamma,\mu)/\lambda(\mu)\}\,d\mu    (2.45)

or

\hat{G}(\gamma) = G(\gamma) + \int_{-\infty}^{\infty} \{N(\mu)\,\phi^*(\gamma,\mu)/\lambda(\mu)\}\,d\mu    (2.46)

The second term of equation (2.46) indicates that no matter how small the noise, the ratio N(μ)/λ(μ) is large for large μ.
The variance of the estimate of G(γ) is

\sigma_G^2(\gamma) = E\{[\hat{G}(\gamma) - G(\gamma)]^2\}    (2.47)

From equation (2.46), equation (2.47) can be written as

\sigma_G^2(\gamma) = E\left\{\int\!\!\int N(\mu)\,N^*(\mu')\,\frac{\phi^*(\gamma,\mu)\,\phi(\gamma,\mu')}{\lambda(\mu)\,\lambda(\mu')}\,d\mu\,d\mu'\right\}    (2.48)

The equation above can be written as

\sigma_G^2(\gamma) = \int_{-\infty}^{\infty}\!\!\int_{-\infty}^{\infty} E\{N(\mu)N^*(\mu')\}\,\frac{\phi^*(\gamma,\mu)\,\phi(\gamma,\mu')}{\lambda(\mu)\,\lambda(\mu')}\,d\mu\,d\mu'    (2.49)
Defining

C_N(\mu,\mu') \equiv E\{N(\mu)N^*(\mu')\} = \int_0^\infty\!\!\int_0^\infty E\{n(t)\,n(t')\}\,\phi^*(t,\mu)\,\phi(t',\mu')\,dt\,dt'    (2.50)

Substituting equation (2.41) in equation (2.50) gives

C_N(\mu,\mu') = \frac{1}{T}\int_0^\infty\!\!\int_0^\infty g_2(t-t')\,\phi^*(t,\mu)\,\phi(t',\mu')\,dt\,dt' + \frac{1}{T}\int_0^\infty\!\!\int_0^\infty g_2(t+t')\,\phi^*(t,\mu)\,\phi(t',\mu')\,dt\,dt'    (2.51)
By evaluating the integrals above, obtain

C_N(\mu,\mu') = \frac{1}{T}\,\frac{\lambda^2(\mu)+\lambda^2(\mu')}{2}\,\frac{\Gamma(j\mu-j\mu')}{\Gamma(.5+j\mu)\,\Gamma(.5-j\mu')}\int_0^\infty \tau\,g_2(\tau)\,\phi^*(\tau,\mu)\,\phi(\tau,\mu')\,d\tau
+ \frac{1}{T}\,\frac{\Gamma(.5-j\mu)\,\Gamma(.5+j\mu')}{\Gamma(.5-j\mu+j\mu')}\int_0^\infty \tau\,g_2(\tau)\,\phi^*(\tau,\mu)\,\phi(\tau,\mu')\,d\tau    (2.52)
The term Γ(jμ - jμ') gives a pole at μ = μ'. Thus, the residue of this pole is

\lim_{\mu\to\mu'} (\mu-\mu')\,C_N(\mu,\mu') = \frac{2}{T}\int_0^\infty g_2(t)\,dt = \frac{S^2(0)}{T}    (2.53)
Suppose the noise is white noise; then

C_n(t,t') = \frac{n_0}{2}\,\delta(t-t')    (2.54)

Substituting equation (2.54) in equation (2.50) gives

C_N(\mu,\mu') = \frac{n_0}{2}\int_0^\infty\!\!\int_0^\infty \delta(t-t')\,\phi^*(t,\mu)\,\phi(t',\mu')\,dt\,dt'    (2.55)

Because of the property of the impulse function, equation (2.55) reduces to

C_N(\mu,\mu') = \frac{n_0}{2}\int_0^\infty \phi^*(t,\mu)\,\phi(t,\mu')\,dt    (2.56)
From the orthogonality of the eigenfunctions
C_N(\mu,\mu') = \frac{n_0}{2}\,\delta(\mu-\mu')    (2.57)

Substituting equation (2.57) in equation (2.49), obtain

\sigma_G^2(\gamma) = \int_{-\infty}^{\infty}\!\!\int_{-\infty}^{\infty} \frac{n_0}{2}\,\delta(\mu-\mu')\,\frac{\phi^*(\gamma,\mu)\,\phi(\gamma,\mu')}{\lambda(\mu)\,\lambda(\mu')}\,d\mu\,d\mu'    (2.58)

Because of the property of the impulse function, equation (2.58) reduces to

\sigma_G^2(\gamma) = \int_{-\infty}^{\infty} \frac{n_0}{2}\,\frac{\phi^*(\gamma,\mu)\,\phi(\gamma,\mu)}{\lambda^2(\mu)}\,d\mu    (2.59)

Since

\phi^*(\gamma,\mu)\,\phi(\gamma,\mu) = \frac{1}{2\pi\gamma}    (2.60)
and

\lambda(\mu) = \sqrt{\pi/\cosh(\pi\mu)}    (2.61)

equation (2.59) becomes

\sigma_G^2(\gamma) = \frac{n_0}{4\pi\gamma}\int_{-\infty}^{\infty} \frac{\cosh(\pi\mu)}{\pi}\,d\mu = \infty    (2.62)

which implies that the variance is infinite in the presence of white noise. So it is desirable to filter the spectrum of g(t) in order to reduce the effect of the noise, n(t).
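The severity of this amplification can be made concrete with a short sketch (an added illustration; the numbers are not from the thesis): the power gain 1/λ²(μ) = cosh(πμ)/π applied to each noise component in equation (2.46) grows exponentially with μ, which is why a flat noise spectrum yields the divergent variance of equation (2.62):

```python
import math

# 1/lambda^2(mu) = cosh(pi*mu)/pi is the power gain applied to the noise
# spectrum during the unfiltered inversion (see equation (2.46)).
for mu in [1, 2, 5, 10]:
    gain = math.cosh(math.pi * mu) / math.pi
    print(f"mu = {mu:2d}: noise power gain = {gain:.3e}")

# The gain grows like exp(pi*mu)/(2*pi), so the variance integral (2.62)
# diverges for any noise spectrum that does not decay faster than this.
assert math.cosh(math.pi * 10) / math.pi > 1e12
```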
Derivation of the Optimal Filter

The optimal filter equation can be derived as follows by assuming

\hat{G}_F(\gamma) = \int_{-\infty}^{\infty} \hat{H}(\mu)\,F(\gamma,\mu)\,\phi^*(\gamma,\mu)\,d\mu    (2.63)

where F(γ,μ) is some filter and G_F(γ) denotes the same integral with the noise-free spectrum H(μ). Now the problem is to find a filter which minimizes the mean square difference. Consider

E\{[\hat{G}_F(\gamma) - G(\gamma)]^2\} = E\{[\hat{G}_F(\gamma) - G_F(\gamma) + G_F(\gamma) - G(\gamma)]^2\}
= E\{[\hat{G}_F(\gamma) - G_F(\gamma)]^2\} + [G_F(\gamma) - G(\gamma)]^2
= [G_F(\gamma) - G(\gamma)]^2 + \sigma_{GF}^2(\gamma)    (2.64)
where

[G_F(\gamma)-G(\gamma)]^2 = \int\!\!\int H(\mu)\,H^*(\mu')\,[F(\gamma,\mu)-1][F^*(\gamma,\mu')-1]\,\phi^*(\gamma,\mu)\,\phi(\gamma,\mu')\,d\mu\,d\mu'    (2.65)

and

\sigma_{GF}^2(\gamma) = \int_{-\infty}^{\infty}\!\!\int_{-\infty}^{\infty} \frac{C_N(\mu,\mu')}{\lambda(\mu)\,\lambda(\mu')}\,F(\gamma,\mu)\,F^*(\gamma,\mu')\,\phi^*(\gamma,\mu)\,\phi(\gamma,\mu')\,d\mu\,d\mu'    (2.66)

giving

D^2(\gamma) = E\{[\hat{G}_F(\gamma) - G(\gamma)]^2\}
= \int_{-\infty}^{\infty}\!\!\int_{-\infty}^{\infty} H(\mu)H^*(\mu')[F(\gamma,\mu)-1][F^*(\gamma,\mu')-1]\,\phi^*(\gamma,\mu)\,\phi(\gamma,\mu')\,d\mu\,d\mu'
+ \int_{-\infty}^{\infty}\!\!\int_{-\infty}^{\infty} \frac{C_N(\mu,\mu')}{\lambda(\mu)\,\lambda(\mu')}\,F(\gamma,\mu)\,F^*(\gamma,\mu')\,\phi^*(\gamma,\mu)\,\phi(\gamma,\mu')\,d\mu\,d\mu'    (2.67)
By using the variational method, the above expression is minimized by a filter satisfying

\int_{-\infty}^{\infty} \frac{C_N(\mu,\mu')}{\lambda(\mu)\,\lambda(\mu')}\,F_0(\gamma,\mu)\,\phi^*(\gamma,\mu)\,d\mu = H^*(\mu')\left[G(\gamma) - \int_{-\infty}^{\infty} H(\mu)\,F_0(\gamma,\mu)\,\phi^*(\gamma,\mu)\,d\mu\right]    (2.68)

This is an integral equation of Wiener-Hopf type which defines the optimal filter F₀(γ,μ). This integral equation can be simplified by using the pole at μ = μ'. Equation (2.68) then reduces to

\frac{1}{T}\,\frac{S^2(0)}{\lambda^2(\mu')}\,F_0(\gamma,\mu')\,\phi^*(\gamma,\mu') = H^*(\mu')\left[G(\gamma) - \int_{-\infty}^{\infty} H(\mu)\,F_0(\gamma,\mu)\,\phi^*(\gamma,\mu)\,d\mu\right]    (2.69)

The solution to the integral equation above is

F_0(\gamma,\mu)\,\phi^*(\gamma,\mu) = \frac{G(\gamma)\,\lambda^2(\mu)\,H^*(\mu)}{S^2(0)/T + g_2(0)}    (2.70)

where

S(0) = 2\int_0^\infty g(t)\,dt    (2.71)
and

g_2(0) = 2\int_0^\infty S^2(f)\,df = 2\int_0^\infty g^2(t)\,dt    (2.72)

Substituting equation (2.70) in equation (2.63), obtain

G_F(\gamma) = \frac{G(\gamma)}{S^2(0)/T + g_2(0)}\int_{-\infty}^{\infty} \lambda^2(\mu)\,H(\mu)\,H^*(\mu)\,d\mu    (2.73)

Since

g_2(0) = \int_{-\infty}^{\infty} \lambda^2(\mu)\,H(\mu)\,H^*(\mu)\,d\mu    (2.74)

this gives

G_F(\gamma) = G(\gamma)\,\frac{g_2(0)}{S^2(0)/T + g_2(0)}    (2.75)

Equation (2.64) is repeated here for convenience

D^2(\gamma) = [G_F(\gamma)-G(\gamma)]^2 + \sigma_{GF}^2(\gamma)    (2.76)
After a long algebra, the mean square difference D²(γ) is given by

D^2(\gamma) = G^2(\gamma)\,\frac{S^2(0)/T}{S^2(0)/T + g_2(0)}    (2.77)

Note that as T → ∞,

\lim_{T\to\infty} G_F(\gamma) = G(\gamma)    (2.78)

and

\lim_{T\to\infty} D^2(\gamma) = 0    (2.79)

which implies that the optimal filter estimate converges in the mean square sense.
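The scalar structure of equations (2.75) and (2.77)-(2.79) can be illustrated with a minimal sketch; the values of S(0) and g₂(0) below are hypothetical placeholders, chosen only to show how the attenuation factor approaches unity as T grows:

```python
# Attenuation factor of the optimal filter, equation (2.75):
#   G_F / G = g2(0) / (S^2(0)/T + g2(0)).
# Illustrative numbers (hypothetical, not from the thesis).
S0, g20 = 2.0, 0.5

def attenuation(T: float) -> float:
    """Ratio G_F(gamma)/G(gamma) for averaging time T."""
    return g20 / (S0 ** 2 / T + g20)

for T in [1.0, 10.0, 100.0, 1000.0]:
    print(T, attenuation(T))

# Equation (2.78): the filtered estimate converges to G as T grows,
# while for any finite T the filter biases the estimate toward zero.
assert attenuation(10.0) < attenuation(100.0) < 1.0
assert attenuation(1e9) > 0.999
```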
CHAPTER III
EXAMPLES OF USING OPTIMAL FILTER EQUATION
Gamma Linewidth Distribution
For the Gamma linewidth distribution shown in Figure 3.1,

G(\gamma) = A\,\gamma^{x}\,\exp(-t_0\gamma)    (3.1)

where x is a real constant, t₀ is a constant, and A is a constant, with the following property

\int_0^\infty G(\gamma)\,d\gamma = g(0) = 1    (3.2)

Substituting equation (3.1) in equation (3.2), then performing the integration, yields

A = \frac{t_0^{\,x+1}}{x\,\Gamma(x)}    (3.3)
Note that equation (3.1) depends on two parameters, x and t₀.

Figure 3.1 Gamma linewidth distribution.

The autocorrelation function of the Gamma linewidth distribution can be obtained by substituting equation (3.1) into equation (2.1):

g(t) = \int_0^\infty \frac{t_0^{\,x+1}}{x\,\Gamma(x)}\,\gamma^{x}\,\exp(-t_0\gamma)\,\exp(-\gamma t)\,d\gamma    (3.4)

which gives

g(t) = \left\{\frac{t_0}{t+t_0}\right\}^{x+1}    (3.5)
Similarly, the spectrum of g(t), H(μ), can be found from equation (2.20) as follows

H(\mu) = \frac{1}{\lambda(\mu)}\int_0^\infty \frac{t_0^{\,x+1}}{[t+t_0]^{x+1}}\,\frac{1}{\sqrt{2\pi}}\,\exp[j.5\,\theta(\mu)]\,t^{-.5-j\mu}\,dt    (3.6)

Equation (3.6) can be written as

H(\mu) = \frac{\exp[j.5\,\theta(\mu)]}{\sqrt{2\pi}\,\lambda(\mu)}\int_0^\infty \frac{t^{-.5-j\mu}}{[t/t_0+1]^{x+1}}\,dt    (3.7)

A change of variable \xi = t/t_0 allows one to write

H(\mu) = \frac{\exp[j.5\,\theta(\mu)]\,t_0^{\,.5-j\mu}}{\sqrt{2\pi}\,\lambda(\mu)}\int_0^\infty \frac{\xi^{-.5-j\mu}}{[\xi+1]^{x+1}}\,d\xi    (3.8)

Since the Beta function is given by [4]

B(z,w) = \frac{\Gamma(z)\,\Gamma(w)}{\Gamma(z+w)} = \int_0^\infty \frac{t^{z-1}}{[t+1]^{z+w}}\,dt    (3.9)

this gives

H(\mu) = \frac{\exp[j.5\,\theta(\mu)]\,t_0^{\,.5-j\mu}}{\sqrt{2\pi}\,\lambda(\mu)}\,\frac{\Gamma(.5-j\mu)\,\Gamma(.5+x+j\mu)}{\Gamma(x+1)}    (3.10)

The above expression may be simplified by letting x be an integer and considering

\Gamma(x+1+z) = (x+z)(x-1+z)\cdots(1+z)\,\Gamma(1+z)    (3.11)

\Gamma(x+1) = x(x-1)(x-2)\cdots(1)    (3.12)

Let z = -.5+j\mu; then

\frac{\Gamma(.5+x+j\mu)}{\Gamma(x+1)} = (1+z/x)(1+z/\{x-1\})\cdots(1+z)\,\Gamma(z+1)    (3.13)
Equation (3.13) can be written as

\frac{\Gamma(.5+x+j\mu)}{\Gamma(x+1)} = \Gamma(.5+j\mu)\prod_{k=1}^{x}\left[1+\frac{-.5+j\mu}{k}\right]    (3.14)

Finally, the expression for the spectrum of g(t) is

H(\mu) = \frac{\exp[j.5\,\theta(\mu)]\,t_0^{\,.5-j\mu}}{\sqrt{2\pi}\,\lambda(\mu)}\,\Gamma(.5-j\mu)\,\Gamma(.5+j\mu)\prod_{k=1}^{x}\left[1+\frac{-.5+j\mu}{k}\right]    (3.15)

The magnitude of H(μ) is

|H(\mu)| = \sqrt{\frac{t_0}{2\cosh(\pi\mu)}}\,\prod_{k=1}^{x}\sqrt{(1-1/2k)^2+(\mu/k)^2}    (3.16)

The mean of the Gamma linewidth distribution is

\bar{\gamma} = \int_0^\infty \gamma\,G(\gamma)\,d\gamma = \frac{x+1}{t_0}    (3.17)
The mean square value of the Gamma linewidth distribution is

\overline{\gamma^2} = \int_0^\infty \gamma^2\,G(\gamma)\,d\gamma = \frac{(x+1)(x+2)}{t_0^{\,2}}    (3.18)

The standard deviation of the Gamma linewidth distribution is

\sigma_\gamma = [\,\overline{\gamma^2} - (\bar{\gamma})^2\,]^{.5} = \frac{[x+1]^{.5}}{t_0}    (3.19)
Suppose the mean γ̄ = 1000 and σ/γ̄ = .05, .1, and .5, which gives x = 399, t₀ = 4/10; x = 99, t₀ = 1/10; and x = 3, t₀ = 4/1000, respectively. The variation of |H(μ)| with σ/γ̄ is shown in Figure 3.2. The plot shows that the variance of the spectrum H(μ) is controlled by the parameter x of the Gamma linewidth distribution. The variance spectrum is inversely proportional to the variance of the linewidth distribution. For this particular example, with g(t) = [t_0/(t_0+t)]^{x+1},

S(0) = 2\int_0^\infty g(t)\,dt = \frac{2t_0}{x}    (3.20)
Figure 3.2 Amplitude response of the Gamma linewidth distribution for different values of σ/γ̄.
and

g_2(0) = 2\int_0^\infty g^2(t)\,dt = \frac{2t_0}{2x+1}    (3.21)

From equation (2.77), the mean square difference of the Gamma linewidth distribution is

D^2(\gamma) = \frac{G^2(\gamma)}{1 + \dfrac{T\,x^2}{2(2x+1)\,t_0}}    (3.22)

The bias can be found from equation (2.75):

\frac{G(\gamma)-G_F(\gamma)}{G(\gamma)} = 1 - \frac{G_F(\gamma)}{G(\gamma)} = 1 - \frac{1}{1 + \dfrac{T\,x^2}{2(2x+1)\,t_0}}    (3.33)

The variations of the bias and RMS error with T/t_c and σ/γ̄ are plotted in Figure 3.3, where the correlation time t_c is equal to t₀/(x+1). From the result, by using the optimal filter, the bias and RMS error depend on the ratio T/t_c and the
Figure 3.3 Variation with T/t_c and σ/γ̄ of the bias and RMS error in estimating the Gamma linewidth distribution using the optimal filter.
parameter x. When T is much greater than t_c, a good estimate is obtained. The variation of the optimal filter amplitude response with the ratio σ/γ̄ is shown in Figure 3.4. A conclusion can be made that the filter's shape is completely determined by the term λ²(μ) = π/cosh(πμ).
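The numerical choices above follow directly from equations (3.17), (3.19), (3.22), and (3.33); the sketch below (an added illustration, assuming only those closed forms) reproduces the three (x, t₀) pairs and evaluates the relative RMS error as a function of T/t_c:

```python
import math

def gamma_params(mean: float, rel_sigma: float):
    """Map (mean, sigma/mean) to (x, t0) via (3.17) and (3.19):
    sigma/mean = 1/sqrt(x+1), mean = (x+1)/t0."""
    x = round(1.0 / rel_sigma ** 2) - 1
    t0 = (x + 1) / mean
    return x, t0

# The three cases used for Figure 3.2 (mean = 1000).
assert gamma_params(1000, 0.05) == (399, 0.4)
assert gamma_params(1000, 0.1) == (99, 0.1)
assert gamma_params(1000, 0.5) == (3, 0.004)

def rms_error(T: float, x: int, t0: float) -> float:
    """Relative RMS error from (3.22): D/G = 1/sqrt(1 + T*x^2/(2*(2x+1)*t0))."""
    return 1.0 / math.sqrt(1.0 + T * x ** 2 / (2.0 * (2 * x + 1) * t0))

x, t0 = gamma_params(1000, 0.05)
tc = t0 / (x + 1)  # correlation time, equation above (3.33)
for ratio in [1e3, 1e4, 1e5]:
    print(ratio, rms_error(ratio * tc, x, t0))

# Consistent with the discussion in Chapter V: at T/tc = 1e5 the error
# for this case is well below 0.025.
assert rms_error(1e5 * tc, x, t0) < 0.025
```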
Histogram Linewidth Distribution

Consider the histogram model shown in Figure 3.5, with the property

\sum_{k=1}^{10} (\text{area of bin } k) = 1    (3.34)

Figure 3.5 Histogram linewidth distribution.

The histogram model can be equally or unequally spaced. For convenience, let the histogram model be equally spaced. From
H(n) *2(n)
Figure 3.4 Optimal filter amplitude response of the Gamma
linewidth distribution for different values of a/y.
37
equation (2.17), the spectrum of the histogram may be written in the form

H(\mu) = \sum_{k=1}^{10} G_k \int_{\gamma_{k-1}}^{\gamma_k} \phi(\gamma,\mu)\,d\gamma    (3.35)

Substituting equation (2.2) in equation (3.35) yields

H(\mu) = \sum_{k=1}^{10} G_k \int_{\gamma_{k-1}}^{\gamma_k} \frac{1}{\sqrt{2\pi}}\,\exp[-j.5\,\theta(\mu)]\,\gamma^{-.5+j\mu}\,d\gamma    (3.36)

Evaluating the integral above yields

H(\mu) = \frac{\exp[-j.5\,\theta(\mu)]}{\sqrt{2\pi}\,(.5+j\mu)}\sum_{k=1}^{10} G_k\left[(\gamma_k)^{.5+j\mu} - (\gamma_{k-1})^{.5+j\mu}\right]    (3.37)

For various spacings and γ₀ = 0, the amplitude response of the histogram linewidth distribution is shown in Figure 3.6. The plot shows that the amplitude response has finite variance and that the side lobes are caused by the discontinuity of the linewidth distribution. To find the bias and mean square difference of the histogram linewidth distribution by using the optimal filter, the values g₂(0) and S(0) are needed. Equations (2.1) and (2.72) are repeated here for convenience.
Figure 3.6 Amplitude response of the histogram linewidth distribution for various spacings (Δγ = 1/25.5 shown).
g(t) = \int_0^\infty G(\gamma)\,\exp(-\gamma t)\,d\gamma    (3.38)

and

g_2(0) = 2\int_0^\infty g^2(t)\,dt    (3.39)

Substituting equation (3.38) in equation (3.39) yields

g_2(0) = 2\int_0^\infty\!\!\int_0^\infty\!\!\int_0^\infty G(\gamma)\,G(\gamma')\,\exp(-\gamma t-\gamma' t)\,d\gamma\,d\gamma'\,dt    (3.40)

This integral can be evaluated as follows. Integrating equation (3.40) with respect to dt gives

g_2(0) = 2\int_0^\infty\!\!\int_0^\infty \frac{G(\gamma)\,G(\gamma')}{\gamma+\gamma'}\,d\gamma\,d\gamma'    (3.41)

This equation can be written as

g_2(0) = 2\sum_{k=1}^{10} G_k \sum_{l=1}^{10} G_l \int_{\gamma_{k-1}}^{\gamma_k}\!\!\int_{\gamma_{l-1}}^{\gamma_l} \frac{1}{\gamma+\gamma'}\,d\gamma\,d\gamma'    (3.42)
Evaluating the integral above with respect to dγ yields

g_2(0) = 2\sum_{k=1}^{10} G_k \sum_{l=1}^{10} G_l \int_{\gamma_{l-1}}^{\gamma_l} \{\ln[\gamma_k+\gamma'] - \ln[\gamma_{k-1}+\gamma']\}\,d\gamma'    (3.43)

Similarly, integrating with respect to dγ' gives

g_2(0) = 2\sum_{k=1}^{10} G_k \sum_{l=1}^{10} G_l \{[\gamma_k+\gamma_l](\ln[\gamma_k+\gamma_l]-1) - [\gamma_{k-1}+\gamma_l](\ln[\gamma_{k-1}+\gamma_l]-1)
- [\gamma_k+\gamma_{l-1}](\ln[\gamma_k+\gamma_{l-1}]-1) + [\gamma_{k-1}+\gamma_{l-1}](\ln[\gamma_{k-1}+\gamma_{l-1}]-1)\}    (3.43)

Because g(t) is an even function of t, the Fourier transform of g(t) is

S(f) = \int_{-\infty}^{\infty} g(t)\,\exp(-j2\pi ft)\,dt = 2\int_0^\infty g(t)\,\cos(2\pi ft)\,dt    (3.44)
Substituting equation (3.38) in equation (3.44) yields

S(f) = 2\int_0^\infty\!\!\int_0^\infty G(\gamma)\,\exp(-\gamma t)\,\cos(2\pi ft)\,d\gamma\,dt    (3.45)

Evaluating the integral above with respect to dt yields

S(f) = 2\int_0^\infty \frac{\gamma\,G(\gamma)}{\gamma^2+(2\pi f)^2}\,d\gamma    (3.46)

Letting f = 0 and evaluating the integral above yields

S(0) = 2\sum_{k=1}^{10} G_k\,\{\ln[\gamma_k] - \ln[\gamma_{k-1}]\}    (3.47)

Note that γ₀ cannot be zero because the natural log of zero is undefined. The variation of the RMS error with T and γ₀ is plotted in Figure 3.7. These results indicate that if T is large and γ₀ moves away from the origin, a good estimate is obtained. The optimal filter amplitude response of the histogram linewidth distribution is shown in Figure 3.8. Again, the filter's shape is completely determined by λ²(μ) = π/cosh(πμ).
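Equations (3.43) and (3.47) are straightforward to evaluate numerically. The sketch below (an added illustration; the bin edges and flat weights are hypothetical) computes S(0) and g₂(0) for an equally spaced ten-bin histogram with γ₀ > 0, and cross-checks the telescoping of the double sum:

```python
import math

# Equally spaced histogram G_k on [gamma0, gamma10]; weights chosen so the
# total area is 1 (equation (3.34)). The numbers here are hypothetical.
gamma0, gamma10, nbins = 200.0, 1800.0, 10
edges = [gamma0 + i * (gamma10 - gamma0) / nbins for i in range(nbins + 1)]
width = edges[1] - edges[0]
G = [1.0 / (nbins * width)] * nbins  # flat histogram, total area = 1

# Equation (3.47): S(0) = 2 * sum_k G_k * (ln(gamma_k) - ln(gamma_{k-1})).
S0 = 2.0 * sum(g * (math.log(edges[k + 1]) - math.log(edges[k]))
               for k, g in enumerate(G))

# Equation (3.43): closed form for g2(0) built from h(u) = u*(ln(u) - 1).
def h(u: float) -> float:
    return u * (math.log(u) - 1.0)

g20 = 2.0 * sum(
    G[k] * G[l] * (h(edges[k + 1] + edges[l + 1]) - h(edges[k] + edges[l + 1])
                   - h(edges[k + 1] + edges[l]) + h(edges[k] + edges[l]))
    for k in range(nbins) for l in range(nbins))

print(S0, g20)

# For a flat histogram the double sum telescopes to the corner values.
c = 1.0 / (nbins * width)
assert math.isclose(S0, 2.0 * c * math.log(gamma10 / gamma0), rel_tol=1e-9)
assert math.isclose(
    g20, 2.0 * c * c * (h(2 * gamma10) - 2 * h(gamma0 + gamma10) + h(2 * gamma0)),
    rel_tol=1e-9)
```

Setting gamma0 to zero makes the ln terms blow up, which is the γ₀ > 0 restriction noted above.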
Figure 3.7 Variation with T of the RMS error in estimating the histogram linewidth distribution using the optimal filter.

Figure 3.8 Optimal filter amplitude response of the histogram linewidth distribution for various spacings.
CHAPTER IV
SPECTRAL PROPERTIES OF THE REGULARIZED INVERSION OF THE
LAPLACE TRANSFORM
Introduction
The regularized inversion of the Laplace transform
gives a convenient computational algorithm and does not
require any a priori knowledge about the shape of the
linewidth distribution. In this chapter the spectral properties
of the regularized inversion are determined by using
eigenfunction decomposition of the Laplace transform as
described in Chapter II. The regularized solution represents a
type of low pass filter which preserves the properties of the
inversion spectrum at low frequencies but provides a sharp
cutoff at a point controlled by the regularization parameter.
The Regularized Inversion of the Laplace Transform
If an estimate of g(t) contains noise, then equation (2.1) can be written as

\hat{g}(t) = \int_0^\infty G(\gamma)\,\exp(-\gamma t)\,d\gamma    (4.1)
where ĝ(t) is the known autocorrelation function and the linewidth distribution, G(γ), is the unknown. The equation above may be written in terms of operator notation

\hat{g} = KG    (4.2)

and considering the following functional [1]

F(G) = \|KG-\hat{g}\|^2 + \alpha\|CG\|^2    (4.3)

where the L₂ norm is

\|f\|^2 = \int_0^\infty f^2(t)\,dt    (4.4)

and C is the constraint operator. The functional is minimized for a regularized solution, G_R, which satisfies

[K^*K + \alpha C^*C]\,G_R = K^*\hat{g}    (4.5)

where K* and C* are the adjoint operators of K and C, and α is a regularization parameter. Using the same approach as used in Chapter II, the regularized linewidth distribution can be written in the form

G_R(\gamma) = \int_{-\infty}^{\infty} H_R(\mu)\,\phi^*(\gamma,\mu)\,d\mu    (4.6)
46
where HR is the spectrum of of Gr(y). If the constraint
operator C is the identity operator, then [2]
oo
[K*K + *(Y,n)di (47)
OO
and
oo
K*g = J J(p)X(p)<)*(Y,p)d^
(4.8)
where
oo
J(M) J g(t)D(t,i)dn (4.9)
0
J(p) is the spectrum of g(t). From the orthogonality of the
eigenfunctions, the spectrum of regularized solution is
Mu)
Hr(h)J(f) (4.10)
X2(p) + a
While the unregularized solution is
1
H(i)J(i)
X(i)
47
(4.11)
So, the regularization can be thought of as filtering by a low
pass filter
?i2(i)
F(m) (4.12)
X2(}i) + a
Defining the filter's bandwidth as the frequency where F
equals half its value at \i = 0 gives
1
iB = cosh"1 (2 + n/a) (4.13)
K
The variation of the regularization parameter, a, is shown in
Figure 4.1. From the plot, the filter's bandwidth is controlled
by the regularization parameter. Even though regularization
does not give an optimal filter, it does provide a filtering
process which gives finite noise variance and requires no a
prior knowledge of the signal being processed. The problem is
reduced to finding the appropriate value of parameter a .
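The half-power bandwidth of equation (4.13) can be verified numerically against the filter of equation (4.12); the sketch below is an added illustration using only the closed forms above:

```python
import math

def F(mu: float, alpha: float) -> float:
    """Regularization filter, equation (4.12), with lambda^2 = pi/cosh(pi*mu)."""
    lam2 = math.pi / math.cosh(math.pi * mu)
    return lam2 / (lam2 + alpha)

def bandwidth(alpha: float) -> float:
    """Half-power point of F, equation (4.13)."""
    return math.acosh(2.0 + math.pi / alpha) / math.pi

for alpha in [0.01, 0.1, 1.0, 10.0]:
    mu_b = bandwidth(alpha)
    # At the bandwidth, F equals half its value at mu = 0.
    assert abs(F(mu_b, alpha) - 0.5 * F(0.0, alpha)) < 1e-12
    print(alpha, mu_b)
```

Decreasing α widens the passband (more of the inversion spectrum survives) at the cost of admitting more noise, which is the tradeoff examined next.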
Role of the Regularization Parameter

Substituting equation (4.10) in equation (4.6), the regularized linewidth distribution is

G_R(\gamma) = \int_{-\infty}^{\infty} \frac{\lambda(\mu)}{\lambda^2(\mu)+\alpha}\,J(\mu)\,\phi^*(\gamma,\mu)\,d\mu    (4.14)

From equation (2.13), the unregularized solution is

G(\gamma) = \int_{-\infty}^{\infty} \frac{1}{\lambda(\mu)}\,J(\mu)\,\phi^*(\gamma,\mu)\,d\mu    (4.15)

The optimum value of α can be investigated by finding the mean square difference as follows

D_R^2(\gamma) = E\{[\hat{G}_R(\gamma)-G(\gamma)]^2\} = \sigma_{GR}^2(\gamma) + [G_R(\gamma)-G(\gamma)]^2    (4.16)
Subtracting equation (4.15) from equation (4.14) gives

[G_R(\gamma)-G(\gamma)]^2 = \left\{\int_{-\infty}^{\infty} \frac{\alpha}{\lambda^2(\mu)+\alpha}\,H(\mu)\,\phi^*(\gamma,\mu)\,d\mu\right\}^2    (4.17)

The noise variance \sigma_{GR}^2(\gamma) is given by

\sigma_{GR}^2(\gamma) = \int_{-\infty}^{\infty}\!\!\int_{-\infty}^{\infty} C_N(\mu,\mu')\,\frac{\lambda(\mu)\,\lambda(\mu')}{[\lambda^2(\mu)+\alpha][\lambda^2(\mu')+\alpha]}\,\phi^*(\gamma,\mu)\,\phi(\gamma,\mu')\,d\mu\,d\mu'    (4.18)

Because the autocovariance of the noise has a singularity at μ = μ', equation (4.18) can be simplified by using the pole at μ = μ', which gives

\sigma_{GR}^2(\gamma) = \frac{S^2(0)}{T}\int_{-\infty}^{\infty} \frac{\lambda^2(\mu)}{[\lambda^2(\mu)+\alpha]^2}\,|\phi(\gamma,\mu)|^2\,d\mu    (4.19)

where

\lambda^2(\mu) = \pi/\cosh(\pi\mu)    (4.20)

and

|\phi(\gamma,\mu)|^2 = \frac{1}{2\pi\gamma}    (4.21)
Equation (4.19) can be written as

\sigma_{GR}^2(\gamma) = \frac{S^2(0)}{T\gamma\alpha^2}\int_{-\infty}^{\infty} \frac{\cosh(u)}{[\cosh(u)+\pi/\alpha]^2}\,du, \qquad u = \pi\mu    (4.22)

After evaluating various integrals, the variance is given by

\sigma_{GR}^2(\gamma) = \frac{S^2(0)}{T\gamma}\,\frac{2z}{\pi\alpha}\,[\,Q_0(z) - z\,Q_1(z)\,]    (4.23)

where z = \pi/\sqrt{\pi^2-\alpha^2} and Q₀, Q₁ are the Legendre functions of the second kind [4]. For α < π,

\sigma_{GR}^2(\gamma) = \frac{S^2(0)}{T\gamma}\,\frac{2z}{\pi\alpha}\left[z - \frac{z^2-1}{2}\ln\!\left(\frac{z+1}{z-1}\right)\right]    (4.24)

For α > π,

\sigma_{GR}^2(\gamma) = \frac{S^2(0)}{T\gamma}\left[\frac{2\alpha}{(\alpha^2-\pi^2)^{3/2}}\,\tan^{-1}\!\left(\frac{\sqrt{\alpha^2-\pi^2}}{\pi}\right) - \frac{2\pi}{\alpha\,(\alpha^2-\pi^2)}\right]    (4.25)

For α = π,

\sigma_{GR}^2(\gamma) = \frac{S^2(0)}{T\gamma}\,\frac{4}{3\pi^2}    (4.26)
The bias can be evaluated as shown below. Equation (2.17) is repeated here for convenience

H(\mu) = \int_0^\infty G(\gamma)\,\phi(\gamma,\mu)\,d\gamma    (4.27)

From equation (4.14), the regularized linewidth distribution can be written as

G_R(\gamma) = \int_0^\infty G(\gamma')\,d\gamma' \int_{-\infty}^{\infty} \frac{\lambda^2(\mu)}{\lambda^2(\mu)+\alpha}\,\phi^*(\gamma,\mu)\,\phi(\gamma',\mu)\,d\mu    (4.28)

Substituting \lambda^2(\mu) = \pi/\cosh(\pi\mu) and evaluating the integral above with respect to dμ yields [5]

G_R(\gamma) = \int_0^\infty G(\gamma')\,K_R(\gamma,\gamma')\,d\gamma'    (4.29)

where

K_R(\gamma,\gamma') = \frac{1}{\alpha\sqrt{\gamma\gamma'}}\times
\begin{cases}
\dfrac{1}{\sinh(a\pi)}\,\dfrac{\sin[a\ln(\gamma/\gamma')]}{\sinh[\ln(\gamma/\gamma')]}, & \alpha \le \pi \\
\dfrac{1}{\sin(a\pi)}\,\dfrac{\sinh[a\ln(\gamma/\gamma')]}{\sinh[\ln(\gamma/\gamma')]}, & \alpha > \pi
\end{cases}

and

a = \begin{cases}
\dfrac{1}{\pi}\cosh^{-1}(\pi/\alpha), & \alpha \le \pi \\
\dfrac{1}{\pi}\tan^{-1}(\sqrt{\alpha^2-\pi^2}/\pi), & \alpha > \pi
\end{cases}
The bias is

[G_R(\gamma)-G(\gamma)]^2 = \left[\int_0^\infty G(\gamma')\,K_R(\gamma,\gamma')\,d\gamma' - G(\gamma)\right]^2    (4.30)

Since the bias increases with increasing α and the noise variance is inversely proportional to α, there must be an optimal choice of α which minimizes the mean square difference. Equation (4.30) cannot be evaluated in closed form. Using numerical integration, the variations with α of the bias, noise variance, and mean square difference are shown in Figure 4.2a, Figure 4.2b, and Figure 4.2c for various values of T. From the plots, the optimal value of α changes for different values of the time duration.
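The closed forms (4.24)-(4.26) can be checked for consistency numerically; the sketch below (an added illustration) confirms that the three branches agree at α = π and that the noise variance factor decreases monotonically with α, which is the tradeoff against the growing bias:

```python
import math

def noise_variance_factor(alpha: float) -> float:
    """sigma_GR^2 * (T*gamma) / S^2(0) from equations (4.24)-(4.26)."""
    if abs(alpha - math.pi) < 1e-9:
        return 4.0 / (3.0 * math.pi ** 2)                            # (4.26)
    if alpha < math.pi:
        z = math.pi / math.sqrt(math.pi ** 2 - alpha ** 2)
        L = math.log((z + 1.0) / (z - 1.0))
        return (2.0 * z / (math.pi * alpha)) * (z - 0.5 * (z * z - 1.0) * L)  # (4.24)
    d = math.sqrt(alpha ** 2 - math.pi ** 2)
    return (2.0 * alpha / d ** 3) * math.atan(d / math.pi) \
        - 2.0 * math.pi / (alpha * (alpha ** 2 - math.pi ** 2))      # (4.25)

# The noise term falls as the regularization parameter grows, while the
# bias (4.30) grows, so some intermediate alpha minimizes D_R^2.
v = [noise_variance_factor(a) for a in [1.0, 2.0, math.pi, 5.0, 20.0]]
assert all(v[i] > v[i + 1] for i in range(len(v) - 1))

# The two branches meet continuously at alpha = pi.
assert abs(noise_variance_factor(math.pi - 1e-4)
           - noise_variance_factor(math.pi + 1e-4)) < 1e-4
print(v)
```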
Figure 4.2a Variation with α of the bias, noise, and mean square difference for T = 5000.

Figure 4.2b Variation with α of the bias, noise, and mean square difference for T = 7000.

Figure 4.2c Variation with α of the bias, noise, and mean square difference for T = 10000.

Figure 4.3 Variation with α of the RMS error for T = 10000.
CHAPTER V
DISCUSSION AND CONCLUSIONS
The eigenfunctions and eigenvalues of the Laplace
transform have been introduced. The relationship between the
autocorrelation function, g(t), and the linewidth distribution
function, G(γ), was derived by using the spectral
decomposition of the Laplace transform rather than the
Fourier decomposition of traditional time invariant linear
system theory. Other properties of the eigenfunctions of the
Laplace transform were also derived. Based on the assumption
that x(t) is a zero mean stationary normal process, and
assuming a time average estimate, the autocovariance of the
noise and the variance spectrum were obtained. White noise
was used to investigate the spectral properties of the
statistical noise in estimating G(y) by a time average. As a
result, the variance was infinite during the inversion. Thus, it
is desirable to filter the spectrum of g(t) in order to reduce
the effect of noise. The mean square difference between the
filtered estimation and the true linewidth distribution was
obtained, and using the variational method, the mean square
difference was minimized by a filter satisfying the
WeinerHopf integral equation defining the optimal filter.
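The optimal filter that solves the Wiener-Hopf equation has the familiar signal-over-signal-plus-noise structure; the following minimal Python sketch shows that generic form (the names S2 and N2, and the flat noise level, are illustrative assumptions, not the thesis's exact expressions):

```python
import numpy as np

def wiener_filter(S2, N2):
    """Generic Wiener filter F(mu) = S2/(S2 + N2):
    S2 is the signal variance spectrum, N2 the noise variance spectrum.
    F -> 1 where the signal dominates, F -> 0 where the noise dominates."""
    return S2 / (S2 + N2)

mu = np.linspace(-5.0, 5.0, 101)
S2 = np.pi / np.cosh(np.pi * mu)   # signal spectrum shaped by lambda^2(mu)
N2 = np.full_like(mu, 1e-4)        # assumed flat (white) noise level
F = wiener_filter(S2, N2)
```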
The spectral properties of the Gamma and the
histogram linewidth distributions have been investigated.
Various values of the standard deviation divided by the
mean were used to investigate the spectral properties of the
Gamma linewidth distribution. It was found that the variance
of the linewidth distribution decreases with increasing x.
Also, as the parameter x of the linewidth distribution
increases, the variance spectrum also increases. So, the
variance spectrum is inversely proportional to the variance
of the linewidth distribution. The RMS error decreases as the
parameter x increases. Thus, the error in estimating the
Gamma linewidth distribution is proportional to the
variance of the linewidth distribution, which means that, for
smaller variance of the linewidth distribution, the error in
estimating G(γ) decreases. The conclusion is that the error
depends on the ratio T/Tc and the variance of the linewidth
distribution. If the duration time T is much greater than the
correlation time Tc, a good estimate is obtained. For example,
for T/Tc = 10⁵ the error is less than 0.025. Applying the
optimal filter to the example above, one thing to note is that
the filter's shape is completely determined by the term
λ²(μ) = π/cosh(πμ).
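For reference, the Gamma linewidth distribution used in this comparison can be written G(γ) = t^(x+1) γ^x e^(−tγ)/Γ(x+1), with mean (x+1)/t; a small Python sketch under that parameterization (the Appendix A listing for Figure 3.1 evaluates the same form in MATLAB, with x = 3 and mean 1000):

```python
import math

def gamma_linewidth(g, x, mean):
    """Gamma linewidth distribution t^(x+1) g^x exp(-t g) / Gamma(x+1),
    with t chosen so that the mean (x+1)/t equals `mean`."""
    t = (x + 1) / mean
    return t ** (x + 1) * g ** x * math.exp(-t * g) / math.gamma(x + 1)

# The variance is (x+1)/t^2 = mean^2/(x+1): larger x at fixed mean
# gives a narrower distribution, matching the trend noted above.
```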
Different spacings of the histogram linewidth
distribution were used to see the effect of the linewidth
distribution during the inversion. It was found that the
variance spectrum was finite, and that the discontinuity of the
linewidth distribution causes side lobes in its spectrum.
Because the solution for the RMS error does not allow γ₀ equal
to zero, various values of γ₀ were used to see the effect on the
error in estimating the linewidth distribution using the
optimal filter. The error in estimating the histogram
linewidth distribution decreased as γ₀ moved away from
the origin. Also, the error depends only on the duration time T
for the histogram linewidth distribution. When T is large, a
good estimate was obtained.
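The side-lobe mechanism can be illustrated with an ordinary Fourier spectrum in Python (an analogy only: the thesis works with the eigenfunction spectrum of the Laplace transform, but the slow spectral decay caused by a jump discontinuity is the same phenomenon):

```python
import numpy as np

n = 1024
x = np.linspace(0.0, 1.0, n, endpoint=False)
step = np.where((x > 0.25) & (x < 0.75), 1.0, 0.0)   # discontinuous, histogram-like shape
smooth = np.exp(-0.5 * ((x - 0.5) / 0.1) ** 2)       # smooth comparison shape

S_step = np.abs(np.fft.rfft(step))
S_smooth = np.abs(np.fft.rfft(smooth))

# The jump makes the step's spectrum decay only like 1/k, producing
# visible side lobes, while the smooth shape's spectrum dies off rapidly.
tail_step = S_step[100:200].max()
tail_smooth = S_smooth[100:200].max()
```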
The regularized solution which results from using the
spectral decomposition of the Laplace transform is a low-pass
filter with μ_B = cosh⁻¹(2 + π/α)/π. The parameter α
controls the filter's bandwidth as shown in Figure 4.1. Even
though the regularization process does not give an optimal
filter, it does provide a filtering process which gives finite
noise variance and requires no prior knowledge of the signal
being processed. The mean square difference was obtained by
using the same approach as described in chapter two. To find
the optimal value of α which gives a minimum mean square
difference, one can take a derivative with respect to α, set
the equation equal to zero, then solve for α. However, the
optimal value of the parameter α may be impossible to solve
by this method because of the complicated expression. As an
alternative, one can plot the mean square difference versus
the parameter α to see if there is an optimal choice of α that
gives a minimum mean square difference. This method turns out
to be the better approach for this particular case. The bias and
the noise variance have been calculated. Since the bias
increases with increasing α and the noise variance is
inversely proportional to α, there must be an optimal choice
of α which minimizes the mean square difference. The plots
for various values of the time duration T are shown in Figures
4.2a, 4.2b, and 4.2c. These plots indicate that the optimal
choice of α changes for different values of T.
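The plot-and-read-off procedure described above is a one-dimensional grid search; here is a toy Python sketch of it, with assumed model forms bias² ∝ α² and noise variance ∝ 1/α standing in for the thesis's actual expressions (the constants c1 and c2 are purely illustrative):

```python
import numpy as np

c1, c2 = 1.0, 1e-6            # assumed toy constants, not from the thesis
alpha = np.linspace(1e-4, 0.05, 2000)

bias2 = c1 * alpha ** 2       # bias grows with alpha
noise = c2 / alpha            # noise variance falls like 1/alpha
msd = bias2 + noise           # mean square difference

alpha_opt = alpha[np.argmin(msd)]
# For this toy model the minimum sits at (c2/(2*c1))**(1/3), so the
# grid-search answer should land within one grid step of that value.
```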
The Gamma linewidth distribution is used to compare
the performance of the optimal filter and the regularization
filter. For σ/γ̄ = 0.5 and T/Tc = 10⁵, the error in estimating
G(γ) using the optimal filter is less than 0.025, while the
error in estimating G(γ) using the regularization filter is
approximately 0.12, as shown in Figure 3.3 and Figure 4.5.
Thus, the error introduced by using the optimal filter in
estimating G(γ) is smaller than the error with the
regularization filter. However, the optimal filter requires
complete knowledge of the linewidth distribution, while the
regularization filter depends only on the level of statistical
noise in the autocorrelation. Therefore, there is a trade-off
between the prior knowledge of the linewidth distribution and
the RMS error.
BIBLIOGRAPHY

[1] H. S. Dhadwal, "Regularized Inversion of the Laplace
Integral Equation Polydispersity Analysis in Photon
Correlation Spectroscopy", Particle and Particle Syst.
Charact., to be published in 1989.

[2] D. A. Ross, "Optimal Filtering Applied to the Inversion of
the Laplace Transform", Particle and Particle Syst.
Charact., 5, 3, 109-115, 1988.

[3] A. Papoulis, Probability, Random Variables, and
Stochastic Processes, Second Edition, 215-221, 1984.

[4] M. Abramowitz and I. A. Stegun, Eds., Handbook of
Mathematical Functions, Tenth Printing, 258, 335, 1972.

[5] I. S. Gradshteyn and I. M. Ryzhik, Table of Integrals,
Series, and Products, Academic Press, 1148, 1980.

[6] H. Stark and J. W. Woods, Probability, Random Processes,
and Estimation Theory for Engineers, Prentice-Hall, 320,
1986.

[7] E. D. Rainville, Special Functions, Second Printing, 9,
1963.
APPENDIX A
PROGRAM LISTINGS

All the programs in this appendix are written in the
MATLAB language.
%This program is used to plot Figure 3.1
u=0.00000000000000000000000001:10:4000;
m=1000;
x=3;
t=(x+1)./m;
a=exp((x+1).*log(t));
y=exp(x.*log(u));
y1=exp(-t.*u);
y2=(a.*y.*y1)./(x.*gamma(x));
plot(u,y2,'g')
%This program is used to plot Figure 3.2
u=-65:1:65;
d=1;
for x=1:3
c=((1.1./(2.*x)).^2+(u./x).^2).^.5;
d=d.*c;
end
e=1;
for y=1:99
h=((1.1./(2.*y)).^2+(u./y).^2).^.5;
e=e.*h;
end
m=1;
for z=1:399
l=((1.1./(2.*z)).^2+(u./z).^2).^.5;
m=m.*l;
end
n=sqrt(pi./cosh(pi*u));
r=sqrt(1/(500*pi));
s=sqrt(1/(20*pi));
t=sqrt(1/(5*pi));
o=n.*d.*r;
p=n.*e.*s;
q=n.*m.*t;
plot(u,o,'g',u,p,'g',u,q,'g')
%This program is used to plot Figure 3.3
u=0:1:10;
ua=[1 1 1 1 1 1 1 1 1 1 1];
x=3;
a=x.^2;
b=((4.*x)+2).*(x+1);
c=exp(u.*log(10));
d=1+((a.*c)./b);
e=ua./d;
f=sqrt(e);
g=log10(c);
h=log10(e);
i=log10(f);
xa=99;
aa=xa.^2;
ba=((4.*xa)+2).*(xa+1);
ca=exp(u.*log(10));
da=1+((aa.*ca)./ba);
ea=ua./da;
fa=sqrt(ea);
ga=log10(ca);
ha=log10(ea);
ia=log10(fa);
xb=399;
ab=xb.^2;
bb=((4.*xb)+2).*(xb+1);
cb=exp(u.*log(10));
db=1+((ab.*cb)./bb);
eb=ua./db;
fb=sqrt(eb);
gb=log10(cb);
hb=log10(eb);
ib=log10(fb);
plot(g,h,g,i,g,ha,g,hb,g,ia,g,ib)
%This program is used to plot Figure 3.4
u=-2:.05:2;
d=1;
for x=1:3
c=((1.1./(2.*x)).^2+(u./x).^2).^.5;
d=d.*c;
end
e=1;
for y=1:99
h=((1.1./(2.*y)).^2+(u./y).^2).^.5;
e=e.*h;
end
m=1;
for z=1:399
l=((1.1./(2.*z)).^2+(u./z).^2).^.5;
m=m.*l;
end
n=sqrt(pi./cosh(pi*u)).*(pi./cosh(pi*u));
r=sqrt(1/(500*pi));
s=sqrt(1/(20*pi));
t=sqrt(1/(5*pi));
o=n.*d.*r;
p=n.*e.*s;
q=n.*m.*t;
plot(u,o,'g',u,p,'g',u,q,'g')
%This program is used to plot Figure 3.6
u=-50:1:50;
xa=0;ya=0;xb=0;yb=0;
fo=0.0000000000000000000000000000000000000000000001;
a=(0.5.^2)+(u.^2);b=sqrt(2.*pi.*a);
e=0.2;f=1/(25.5.*e);
za=[e 2.*e 3.*e 4.*e 5.*e 4.*e 3.*e 2.*e e .5.*e];
for m=1:10
n=m-1;
aa=u.*log(m.*f+fo);ba=u.*log(n.*f+fo);
xa=xa+za(m).*sqrt(m.*f).*cos(aa)-za(m).*sqrt(n.*f).*cos(ba);
ya=ya-za(m).*sqrt(m.*f).*sin(aa)+za(m).*sqrt(n.*f).*sin(ba);
end
z=(xa.^2)+(ya.^2);t=sqrt(z)./b;
xa1=0;ya1=0;xb1=0;yb1=0;
e1=0.1;f1=1/(25.5.*e1);
za1=[e1 2.*e1 3.*e1 4.*e1 5.*e1 4.*e1 3.*e1 2.*e1 e1 .5.*e1];
for m1=1:10
n1=m1-1;
aa1=u.*log(m1.*f1+fo);ba1=u.*log(n1.*f1+fo);
xa1=xa1+za1(m1).*sqrt(m1.*f1).*cos(aa1)-za1(m1).*sqrt(n1.*f1).*cos(ba1);
ya1=ya1-za1(m1).*sqrt(m1.*f1).*sin(aa1)+za1(m1).*sqrt(n1.*f1).*sin(ba1);
end
z1=(xa1.^2)+(ya1.^2);t1=sqrt(z1)./b;
xa2=0;ya2=0;xb2=0;yb2=0;
e2=1;f2=1/(25.5.*e2);
za2=[e2 2.*e2 3.*e2 4.*e2 5.*e2 4.*e2 3.*e2 2.*e2 e2 .5.*e2];
for m2=1:10
n2=m2-1;
aa2=u.*log(m2.*f2+fo);ba2=u.*log(n2.*f2+fo);
xa2=xa2+za2(m2).*sqrt(m2.*f2).*cos(aa2)-za2(m2).*sqrt(n2.*f2).*cos(ba2);
ya2=ya2-za2(m2).*sqrt(m2.*f2).*sin(aa2)+za2(m2).*sqrt(n2.*f2).*sin(ba2);
end
z2=(xa2.^2)+(ya2.^2);t2=sqrt(z2)./b;
plot(u,t,'g',u,t1,'g',u,t2,'g')
%This program is used to plot Figure 3.7
u=0:1:10;
xa=0;
ya=0;
fo=.0000000000000000000000000000001;
e=0.1;
f=1/(25.5*e);
za=[e 2.*e 3.*e 4.*e 5.*e 4.*e 3.*e 2.*e e .5.*e];
for m=1:10
for mm=1:10
n=m-1;
nn=mm-1;
aa=m.*f+fo;
ba=n.*f+fo;
ca=mm.*f+fo;
da=nn.*f+fo;
ga=(aa+ca).*(log(aa+ca)-1);
ha=(ba+ca).*(log(ba+ca)-1);
ia=(aa+da).*(log(aa+da)-1);
ja=(ba+da).*(log(ba+da)-1);
xa=xa+za(m).*za(mm).*(ga-ha-ia+ja);
end
ya=ya+za(mm).*(log(aa)-log(ba));
end
wa=exp(u.*log(10));
wb=(2.*ya.*ya)./(2.*ya.*ya+wa.*xa);
wd=log10(wa);
we=log10(wb);
wf=sqrt(wb);
wg=log10(wf);
xaa=0;
yaa=0;
foa=.01;
for ma=1:10
for mma=1:10
na=ma-1;
nna=mma-1;
aaa=ma.*f+foa;
baa=na.*f+foa;
caa=mma.*f+foa;
daa=nna.*f+foa;
gaa=(aaa+caa).*(log(aaa+caa)-1);
haa=(baa+caa).*(log(baa+caa)-1);
iaa=(aaa+daa).*(log(aaa+daa)-1);
jaa=(baa+daa).*(log(baa+daa)-1);
xaa=xaa+za(ma).*za(mma).*(gaa-haa-iaa+jaa);
end
yaa=yaa+za(mma).*(log(aaa)-log(baa));
end
waa=exp(u.*log(10));
wba=(2.*yaa.*yaa)./(2.*yaa.*yaa+waa.*xaa);
wda=log10(waa);
wea=log10(wba);
wfa=sqrt(wba);
wga=log10(wfa);
xab=0;
yab=0;
fob=.1;
for mb=1:10
for mmb=1:10
nb=mb-1;
nnb=mmb-1;
aab=mb.*f+fob;
bab=nb.*f+fob;
cab=mmb.*f+fob;
dab=nnb.*f+fob;
gab=(aab+cab).*(log(aab+cab)-1);
hab=(bab+cab).*(log(bab+cab)-1);
iab=(aab+dab).*(log(aab+dab)-1);
jab=(bab+dab).*(log(bab+dab)-1);
xab=xab+za(mb).*za(mmb).*(gab-hab-iab+jab);
end
yab=yab+za(mmb).*(log(aab)-log(bab));
end
wab=exp(u.*log(10));
wbb=(2.*yab.*yab)./(2.*yab.*yab+wab.*xab);
wdb=log10(wab);
web=log10(wbb);
wfb=sqrt(wbb);
wgb=log10(wfb);
xac=0;
yac=0;
foc=1;
for mc=1:10
for mmc=1:10
nc=mc-1;
nnc=mmc-1;
aac=mc.*f+foc;
bac=nc.*f+foc;
cac=mmc.*f+foc;
dac=nnc.*f+foc;
gac=(aac+cac).*(log(aac+cac)-1);
hac=(bac+cac).*(log(bac+cac)-1);
iac=(aac+dac).*(log(aac+dac)-1);
jac=(bac+dac).*(log(bac+dac)-1);
xac=xac+za(mc).*za(mmc).*(gac-hac-iac+jac);
end
yac=yac+za(mmc).*(log(aac)-log(bac));
end
wac=exp(u.*log(10));
wbc=(2.*yac.*yac)./(2.*yac.*yac+wac.*xac);
wdc=log10(wac);
wec=log10(wbc);
wfc=sqrt(wbc);
wgc=log10(wfc);
plot(wd,we,wd,wea,wd,web,wd,wec)
%This program is used to plot Figure 3.8
u=-3:.1:3;
xa=0;ya=0;xb=0;yb=0;
fo=0.000000000000000000000000000000000000000000000001;
a=(0.5.^2)+(u.^2);b=cosh(pi.*u).*sqrt(2.*pi.*a)./pi;
e=0.2;f=1/(25.5.*e);
za=[e 2.*e 3.*e 4.*e 5.*e 4.*e 3.*e 2.*e e .5.*e];
for m=1:10
n=m-1;
aa=u.*log(m.*f+fo);ba=u.*log(n.*f+fo);
xa=xa+za(m).*sqrt(m.*f).*cos(aa)-za(m).*sqrt(n.*f).*cos(ba);
ya=ya-za(m).*sqrt(m.*f).*sin(aa)+za(m).*sqrt(n.*f).*sin(ba);
end
z=(xa.^2)+(ya.^2);t=sqrt(z)./b;
xa1=0;ya1=0;xb1=0;yb1=0;
e1=0.1;f1=1/(25.5.*e1);
za1=[e1 2.*e1 3.*e1 4.*e1 5.*e1 4.*e1 3.*e1 2.*e1 e1 .5.*e1];
for m1=1:10
n1=m1-1;
aa1=u.*log(m1.*f1+fo);ba1=u.*log(n1.*f1+fo);
xa1=xa1+za1(m1).*sqrt(m1.*f1).*cos(aa1)-za1(m1).*sqrt(n1.*f1).*cos(ba1);
ya1=ya1-za1(m1).*sqrt(m1.*f1).*sin(aa1)+za1(m1).*sqrt(n1.*f1).*sin(ba1);
end
z1=(xa1.^2)+(ya1.^2);t1=sqrt(z1)./b;
xa2=0;ya2=0;xb2=0;yb2=0;
e2=1;f2=1/(25.5.*e2);
za2=[e2 2.*e2 3.*e2 4.*e2 5.*e2 4.*e2 3.*e2 2.*e2 e2 .5.*e2];
for m2=1:10
n2=m2-1;
aa2=u.*log(m2.*f2+fo);ba2=u.*log(n2.*f2+fo);
xa2=xa2+za2(m2).*sqrt(m2.*f2).*cos(aa2)-za2(m2).*sqrt(n2.*f2).*cos(ba2);
ya2=ya2-za2(m2).*sqrt(m2.*f2).*sin(aa2)+za2(m2).*sqrt(n2.*f2).*sin(ba2);
end
z2=(xa2.^2)+(ya2.^2);t2=sqrt(z2)./b;
plot(u,t,'g',u,t1,'g',u,t2,'g')
% This program is used to plot Figure 4.1
u=-7:.07:7;
a=pi./cosh(pi.*u);
x5=.0000006;
b5=a./(a+x5);
x6=.000007;
b6=a./(a+x6);
x7=.00008;
b7=a./(a+x7);
x8=.0009;
b8=a./(a+x8);
x9=.0010;
b9=a./(a+x9);
plot(u,b5,'g',u,b6,u,b7,u,b8,u,b9,'g')
% This program is used to plot Figure 4.2a
u=0.00002:.000018:.0008;
w1=pi./sqrt(pi.*pi-u.*u);
w2=w1.*w1;
w3=(w1+1)./(w1-1);
w4=log(w3);
w5=w2-1;
w6=(w5.*w4)./2;
w7=w1-w6;
w8=128./(9.*pi.*1000000000);
t=5000;
w9=w7.*w8./(t.*u);
c1=acosh(pi./u)./pi;
n=2000;c2=1/3000;x8=0;y8=0;z=25;
z1=log((3+1)./z);
z2=sin(c1.*z1);
z3=sinh(z1);
z4=sqrt(z);
z5=exp(3.*log(z));
z6=exp(-z);
z7=(z6.*z5.*z2)/(z3.*z4);
h=25.000000001/2000;
for i=1:((n/2)-1)
x=2.*h.*i;
x1=log((3+1)./x);
x2=sin(c1.*x1);
x3=sinh(x1);
x4=sqrt(x);
x5=exp(3.*log(x));
x6=exp(-x);
x7=(x6.*x5.*x2)/(x3.*x4);
x8=x8+x7;
end
x9=2.*x8;
for j=0:((n/2)-1)
y=((2.*j)+1).*h;
y1=log((3+1)./y);
y2=sin(c1.*y1);
y3=sinh(y1);
y4=sqrt(y);
y5=exp(3.*log(y));
y6=exp(-y);
y7=(y6.*y5.*y2)/(y3.*y4);
y8=y8+y7;
end
y9=4.*y8;
y10=(h.*(x9+y9+z7))./3;
y11=(c2.*y10)./(u.*sinh(pi.*c1));
c3=0.00001221*64;
z8=y11-c3;
z9=(z8.*z8)./c3;
w10=w9./c3;
w11=w10+z9;
plot(u,z9,'w',u,w10,'w',u,w11,'w');
% This program is used to plot Figure 4.2b
u=0.00002:.000018:.0008;
w1=pi./sqrt(pi.*pi-u.*u);
w2=w1.*w1;
w3=(w1+1)./(w1-1);
w4=log(w3);
w5=w2-1;
w6=(w5.*w4)./2;
w7=w1-w6;
w8=128./(9.*pi.*1000000000);
t=7000;
w9=w7.*w8./(t.*u);
c1=acosh(pi./u)./pi;
n=2000;c2=1/3000;x8=0;y8=0;z=25;
z1=log((3+1)./z);
z2=sin(c1.*z1);
z3=sinh(z1);
z4=sqrt(z);
z5=exp(3.*log(z));
z6=exp(-z);
z7=(z6.*z5.*z2)/(z3.*z4);
h=25.000000001/2000;
for i=1:((n/2)-1)
x=2.*h.*i;
x1=log((3+1)./x);
x2=sin(c1.*x1);
x3=sinh(x1);
x4=sqrt(x);
x5=exp(3.*log(x));
x6=exp(-x);
x7=(x6.*x5.*x2)/(x3.*x4);
x8=x8+x7;
end
x9=2.*x8;
for j=0:((n/2)-1)
y=((2.*j)+1).*h;
y1=log((3+1)./y);
y2=sin(c1.*y1);
y3=sinh(y1);
y4=sqrt(y);
y5=exp(3.*log(y));
y6=exp(-y);
y7=(y6.*y5.*y2)/(y3.*y4);
y8=y8+y7;
end
y9=4.*y8;
y10=(h.*(x9+y9+z7))./3;
y11=(c2.*y10)./(u.*sinh(pi.*c1));
c3=0.00001221*64;
z8=y11-c3;
z9=(z8.*z8)./c3;
w10=w9./c3;
w11=w10+z9;
plot(u,z9,'w',u,w10,'w',u,w11,'w');
% This program is used to plot Figure 4.2c
u=0.00002:.000018:.0008;
w1=pi./sqrt(pi.*pi-u.*u);
w2=w1.*w1;
w3=(w1+1)./(w1-1);
w4=log(w3);
w5=w2-1;
w6=(w5.*w4)./2;
w7=w1-w6;
w8=128./(9.*pi.*1000000000);
t=10000;
w9=w7.*w8./(t.*u);
c1=acosh(pi./u)./pi;
n=2000;c2=1/3000;x8=0;y8=0;z=25;
z1=log((3+1)./z);
z2=sin(c1.*z1);
z3=sinh(z1);
z4=sqrt(z);
z5=exp(3.*log(z));
z6=exp(-z);
z7=(z6.*z5.*z2)/(z3.*z4);
h=25.000000001/2000;
for i=1:((n/2)-1)
x=2.*h.*i;
x1=log((3+1)./x);
x2=sin(c1.*x1);
x3=sinh(x1);
x4=sqrt(x);
x5=exp(3.*log(x));
x6=exp(-x);
x7=(x6.*x5.*x2)/(x3.*x4);
x8=x8+x7;
end
x9=2.*x8;
for j=0:((n/2)-1)
y=((2.*j)+1).*h;
y1=log((3+1)./y);
y2=sin(c1.*y1);
y3=sinh(y1);
y4=sqrt(y);
y5=exp(3.*log(y));
y6=exp(-y);
y7=(y6.*y5.*y2)/(y3.*y4);
y8=y8+y7;
end
y9=4.*y8;
y10=(h.*(x9+y9+z7))./3;
y11=(c2.*y10)./(u.*sinh(pi.*c1));
c3=0.00001221*64;
z8=y11-c3;
z9=(z8.*z8)./c3;
w10=w9./c3;
w11=w10+z9;
plot(u,z9,'w',u,w10,'w',u,w11,'w');
% This program is used to plot Figure 4.3
u=0.00002:.000018:.0008;
w1=pi./sqrt(pi.*pi-u.*u);
w2=w1.*w1;
w3=(w1+1)./(w1-1);
w4=log(w3);
w5=w2-1;
w6=(w5.*w4)./2;
w7=w1-w6;
w8=128./(9.*pi.*1000000000);
t=10000;
w9=w7.*w8./(t.*u);
c1=acosh(pi./u)./pi;
n=2000;c2=1/3000;x8=0;y8=0;z=25;
z1=log((3+1)./z);
z2=sin(c1.*z1);
z3=sinh(z1);
z4=sqrt(z);
z5=exp(3.*log(z));
z6=exp(-z);
z7=(z6.*z5.*z2)/(z3.*z4);
h=25.000000001/2000;
for i=1:((n/2)-1)
x=2.*h.*i;
x1=log((3+1)./x);
x2=sin(c1.*x1);
x3=sinh(x1);
x4=sqrt(x);
x5=exp(3.*log(x));
x6=exp(-x);
x7=(x6.*x5.*x2)/(x3.*x4);
x8=x8+x7;
end
x9=2.*x8;
for j=0:((n/2)-1)
y=((2.*j)+1).*h;
y1=log((3+1)./y);
y2=sin(c1.*y1);
y3=sinh(y1);
y4=sqrt(y);
y5=exp(3.*log(y));
y6=exp(-y);
y7=(y6.*y5.*y2)/(y3.*y4);
y8=y8+y7;
end
y9=4.*y8;
y10=(h.*(x9+y9+z7))./3;
y11=(c2.*y10)./(u.*sinh(pi.*c1));
c3=0.00001221*64;
z8=y11-c3;
z9=z8.*z8;
w11=sqrt(w9+z9)./c3;
plot(u,w11,'w')
