Gender, kinship ties, and social network density

Material Information

Gender, kinship ties, and social network density
Bullers, Susan
Publication Date:
Physical Description:
v, 82 leaves : illustrations, form ; 29 cm


Subjects / Keywords:
Social networks ( lcsh )
Women -- Social networks ( lcsh )
Men -- Social networks ( lcsh )
bibliography ( marcgt )
theses ( marcgt )
non-fiction ( marcgt )


Includes bibliographical references (leaves 78-82).
General Note:
Submitted in partial fulfillment of the requirements for the degree, Master of Arts, Department of Sociology.
Statement of Responsibility:
by Susan Bullers.

Record Information

Source Institution:
University of Colorado Denver
Holding Location:
Auraria Library
Rights Management:
All applicable rights reserved by the source institution and holding location.
Resource Identifier:
23379189 ( OCLC )
LD1190.L66 1990m .B84 ( lcc )

Full Text
Stefania Alenka Brown-VanHoozer
B.S. University of Maryland, 1985
M. S., University of Colorado at Denver, 1990
A thesis submitted to the Faculty of
the Graduate School of the
University of Colorado in partial fulfillment of
the requirements for the degree of
Master of Science
Department of Electrical Engineering


© 1990 by Stefania Alenka Brown-VanHoozer
All rights reserved.

This thesis for the Master of Science has been approved for the School of Electrical Engineering

Arun K. Majumdar
Jochen Edrich
Edward T. Wall


Brown-VanHoozer, Stefania Alenka (M.S., Electrical Engineering)
Statistics of Laser Irradiance Fluctuations Through Atmospheric Turbulence Based on Higher-Order Moments
Thesis directed by Professor Arun K. Majumdar
This thesis is based on the theory of constructing an unknown probability density function (PDF) from its higher-order moments. The technique proposed to develop the theory is based on the orthogonal and asymptotic expansions derived from the normal, or Gaussian, distribution. The expansion series used in developing the technique are the Gram-Charlier and Edgeworth series.
The conclusion covers the feasibility of applying such a technique for determining an approximate PDF of laser irradiance fluctuations through atmospheric turbulence from higher-order moments.
The form and content of this abstract are approved. I recommend its publication.

Arun K. Majumdar

I wish to acknowledge the following people who made this study possible: my mentor, Dr. Arun Majumdar, of the Faculty of the Graduate School, Department of Electrical Engineering, University of Colorado at Denver, who was the source of my inspiration for this study; Dr. James Churnside of NOAA, whose experimental data made this work possible; and Michael P. Johnson and Darcee Freier, who assisted in the development of the MOMENT program for converting the experimental data for use on a personal computer. Last, but not least, I am very grateful to my family, friends, and Roger Ballenger for their patience and support.

Figures
1. INTRODUCTION
Defining Turbulence
Line-of-Sight Propagation
Variance of Irradiance Fluctuations
PDFs Considered Recently
Concepts of Higher-Order Moments
Gram-Charlier and Edgeworth Series
Gram-Charlier Series
Edgeworth Series
Experimental Arrangement
Calculation of the Moments from the Experimental Data
PDFs Proposed for the Experimental Data
Moments from the First Set of Proposed PDFs
Moments from the Second Set of Proposed PDFs

CENTRAL MOMENTS & CUMULANTS
EDGEWORTH SERIES
REFERENCES

3.1 PDF Representation of Experimental Data with Lowest Irradiance
3.2 PDF Representation of Experimental Data with Highest Irradiance
3.3 Distributions Based on the Third, Fourth and Fifth Moments of the Experimental Data
3.4 Distributions Based on the Truncated Third, Fourth and Fifth Moments of the Experimental Data
4.1 Test Function: Log-normal Distribution and Edgeworth Series
5.1 Histogram of the 100 meter Path Length (Values Constructed from MOMENT Program)
5.2 Histogram of the 2400 meter Path Length (Values Constructed from MOMENT Program)
5.3 PDF of Experimental Data, 100 meter Path Length (Experimental Data Values vs Gram-Charlier Values)
5.4 PDF of Experimental Data, 1200 meter Path Length (Experimental Data Values vs Gram-Charlier Values)
5.5 PDF of Experimental Data, 2400 meter Path Length (Experimental Data Values vs Gram-Charlier Values)

3.1 Spacing for Five Photomultiplier Detectors
4.1 Central Moments Developed from the MOMENT Program, Appendix E, for the 100, 1200 & 2400 meter Path Lengths

experimental data.
In general, the statistics of laser irradiance fluctuations are non-Gaussian in nature. Therefore, higher-order moments, up to the eighth order, can give more information concerning the contributions from the tails of the probability density function. Even- and odd-order moments give information generally about the width and lack of symmetry of the distribution, respectively [20].
This thesis is directed to solve this problem in a new way by using the technique of orthogonal and asymptotic expansions derived from the normal or Gaussian distribution. These expansion series are the Gram-Charlier and Edgeworth series and are well defined in the literature. It is assumed that we have a knowledge of the higher-order moments of laser irradiance after propagation through some atmospheric turbulent path, or at least an estimation of the moments from measurements. In this thesis, we develop a new technique of reconstructing the unknown PDF from the knowledge of higher-order moments. The higher-order moments are very sensitive to the form of the PDF. Therefore, in order to compare measured moments with the predictions of a model PDF, care must be taken to measure the higher-order moments as accurately as possible.
The experimental data used for this thesis is the data taken by James H. Churnside and Steven F. Clifford of the National Oceanic and Atmospheric Administration / Environmental Research Laboratories (NOAA/ERL). The data includes irradiance fluctuations over several path lengths and the measurements of the crosswind velocity, the refractive-index structure parameter, Cn², and the inner scale of turbulence, l0, all measured optically. The data was normalized at each time series by its mean and sorted into histograms for estimating the probability density functions. The second and third moments were calculated to obtain the parameters of the theoretical density functions proposed to describe the irradiance fluctuations in the atmosphere [11].
Computer algorithms were developed which took the irradiance fluctuation data from NOAA, calculated moments up to the nth order, and generated the central moments, regular moments, normalized regular moments and histograms of the irradiance fluctuation intensities.
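The three families of moments produced by that first set of algorithms can be illustrated with a short sketch. This is not the original MOMENT program (whose source is not reproduced in this chapter), only a minimal pure-Python illustration with hypothetical names:

```python
def moment_summary(intensity, n_max=8):
    """Regular, central, and normalized moments of an intensity record,
    in the spirit of the MOMENT program (function and variable names
    are illustrative, not from the thesis)."""
    n = len(intensity)
    mean = sum(intensity) / n
    # regular moments <I^k>
    regular = [sum(x**k for x in intensity) / n for k in range(1, n_max + 1)]
    # central moments <(I - <I>)^k>
    central = [sum((x - mean)**k for x in intensity) / n for k in range(1, n_max + 1)]
    # normalized regular moments <I^k> / <I>^k
    normalized = [m / mean**k for k, m in enumerate(regular, start=1)]
    return regular, central, normalized
```

For a record normalized by its mean, the first normalized moment is identically 1, which provides a quick sanity check on the bookkeeping.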
A second set of computer algorithms was developed which generated the mathematical model of an unknown PDF from the orthogonal and asymptotic expansions based on the Gram-Charlier and Edgeworth series. The central moments and cumulants derived from these series were used to construct the unknown PDF, which was compared with the histograms constructed by the MOMENT program from the irradiance fluctuation intensities (taken from the experimental NOAA data) and with the histograms generated by NOAA. From the comparisons it can be concluded whether the higher-order moments would be an efficient method for determining an unknown PDF model.
The chapters to follow will describe the areas of statistical theory relating to laser propagation, concepts of higher-order moments, the Gram-Charlier/Edgeworth series, the experimental arrangement of the NOAA data, and the reconstruction of an unknown PDF from the experimental data using the expansion series.
The conclusion of the thesis will confirm whether a reasonable PDF can be established using higher-order moments from the orthogonal and asymptotic expansions.

Defining Turbulence
Intensity scintillations are fluctuations of optical waves which have their energy redistributed in the turbulent atmosphere, primarily by small-scale fluctuations. The logarithm of the intensity of the distorted wave generally has a Gaussian or normal probability density function described in terms of the log-intensity variance, σ²_lnI. This variance increases with the strength of the turbulence as the 11/6 power of the path length until it reaches a value of approximately 2.4. At this point the turbulence "saturates" and the variance no longer increases [13]. The intensity scintillations are caused by turbulence with a scale near the Fresnel-zone size, √(λL), λ being the optical wavelength and L the propagation path length; the predominant frequency is calculated by dividing the wind velocity by the Fresnel-zone size. The important temporal frequencies occupy the range from 1 to 100 Hz, and the intensity scintillations caused by turbulence with a scale near the Fresnel-zone size continue until saturation occurs. The log-intensity variance of scintillations varies inversely as the 7/6 power of the optical wavelength [13].
Since the scintillation patterns are random, the receiving aperture receives a number of statistically independent portions of the pattern corresponding to the predominant scale size, and the log-intensity variance of the received signal is reduced by a corresponding factor; this is called aperture averaging. For optical wavelengths the refractivity, N = (n − 1) × 10⁶ in parts per million, n being the refractive index, is approximately

N ≈ 79 P/T,

where P is the atmospheric pressure in millibars and T is the temperature in Kelvin [13].
Turbulence usually causes random fluctuations in the density and refractive index due to the fact that the atmosphere is nonstationary and the statistics of the fluctuations vary with time and place. The usage of the word turbulence in this paper refers to the temperature fluctuations, which are the primary cause of the refractive-index fluctuations, since pressure variations are relatively small and are rapidly dispersed. The refractive-index fluctuations, proportional to the temperature fluctuations, are expressed as

δn = −79 P δT / [(γ − 1) T²],   (2.1)

where T = temperature, P = pressure, and γ = cp/cv ≈ 1.4 is the ratio of specific heats for air [13].
These temperature fluctuations originate from large-scale phenomena and obey the same spectral law as the velocity fluctuations. The fluctuations are broken and mixed by the wind until they exist at all scales. The large scale sizes, defined by L0, are called the outer scale of turbulence, corresponding to a wavenumber K0 = 2π/L0, while l0 defines the smaller scale sizes, called the inner scale of turbulence, corresponding to a wavenumber K_l = 2π/l0.
The temperature and refractive-index spectra have the same form as the velocity spectrum, defined by Tatarski [20] as follows for the refractive-index spectrum:

Φn(K) = 0.033 Cn² K^(−11/3) exp(−K²/Km²),   (2.2)

with the following three conditions:
Km l0 = 5.92,
for K < K0 the equation is a poor approximation (large scales), and
for K > Km the equation is a reasonable approximation (small scales),
where K is the spatial wavenumber, 2π/l, and l is the scale size of the irregularity [13].
The primary parameters dealt with in atmospheric turbulence are Cn², l0 and L0. Cn² is the refractive-index structure parameter, sometimes referred to as the structure "constant" for the structure function Dn(r) [20]. Cn² measures the intensity of the refractive-index fluctuations in units of m^(−2/3). The values of Cn² typically vary from 10^(−17) to 10^(−13). The parameter l0 is the inner scale of turbulence, below which viscous dissipation penetrates the spectrum. This inner scale is calculated from the kinematic viscosity of air and the rate of viscous dissipation of turbulent energy, and usually ranges from 1 to 2 mm near the ground, increasing to about 1 cm at the tropopause. The best way to take measurements of the inner scale has been suggested to be through measuring the light intensity fluctuations over a short path. L0 is the outer scale of turbulence, above which the energy is presumed to be introduced into the turbulence. The values of L0 depend on the strength of the turbulence, ranging about 100 meters or 1/5 the height above the ground, whichever is less. Between l0 and L0 lies the inertial subrange, where the Kolmogorov model predicts the spectral slope to be −11/3 [13],

Φn(K) = 0.033 Cn² K^(−11/3),   (2.3)

for K > 2π/L0 and Km = 5.92/l0, where Φn(K) is a three-dimensional spectrum [21].
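Equation (2.2) is straightforward to evaluate numerically. The sketch below (an illustration, not code from the thesis) computes the Tatarski refractive-index spectrum for a given Cn², inner scale l0, and wavenumber K:

```python
import math

def tatarski_spectrum(K, Cn2, l0):
    """Three-dimensional refractive-index spectrum of equation (2.2):
    Phi_n(K) = 0.033 * Cn^2 * K**(-11/3) * exp(-K^2 / Km^2), Km = 5.92 / l0.
    Units: K in rad/m, Cn2 in m^(-2/3), l0 in m."""
    Km = 5.92 / l0
    return 0.033 * Cn2 * K**(-11.0 / 3.0) * math.exp(-(K / Km) ** 2)

# Example: a ground-level value (Cn2 and l0 here are typical magnitudes
# from the text, not measured values): K = 100 rad/m.
phi = tatarski_spectrum(K=100.0, Cn2=1e-14, l0=5e-3)
```

In the inertial subrange (K well below Km) the exponential factor is close to 1 and the spectrum follows the pure −11/3 Kolmogorov slope of equation (2.3).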

Line-of-sight Propagation
There are several methods for analyzing wave fluctuation characteristics in a line-of-sight path; one of these is the Rytov method, chosen here because of its usefulness for a wide range of wave fluctuations [21]. The Rytov method simplifies the procedure by obtaining both amplitude and phase fluctuations and using an exponential expression to give a better representation of the propagating wave [22]. In discussing the fluctuations in a random medium it is necessary to consider the relative positions of the transmitter and receiver with respect to whether the receiver is in the near field, for beam wave cases, or in the far field, for spherical wave cases.
The line-of-sight propagation between the transmitter and the receiver has its origin at the transmitter (0,0,0) and terminates at the receiver (0,0,L). The measurement of the irradiance fluctuations between these points is based on the output voltage, V, measured at the receiver, consisting of the average <V> and the fluctuating component Vf,

V = <V> + Vf,   (2.4)

where < > denotes an average with respect to time [21].
At the receiver end there exist two fields: the field propagated directly from the transmitter to the receiver, and the field scattered from particles. The direct wave corresponds to the average output voltage, <V>, and the fluctuating voltage, Vf, relates to the scattered field. The average output voltage of the receiver, which is proportional to the coherent field <U>, is given by

<V> = A g_t g_r exp(−γ0/2),   (2.5)

where g_t and g_r are the point estimators of the field transmitting and receiving patterns and γ0 is the optical distance, appearing in the scattered-field integral

u(r) = ∫ [f(θ,z)/R_p] exp( ... ) dV.   (2.6)

Here f(θ,z) is the point estimator of the scattering amplitude of a random particle, R_p the distance from (x′,y′,z′), the position of a single particle in dV, to the observation point (x1,0,L), and ρ is the density. γ0 and γ1 are the optical distances defined as

γ0 = ∫ ρ σ_t dz   and   γ1 = ∫ ρ σ_t dR,

where ρ σ_t is the attenuation constant, equal to the product of the density ρ and the total cross section σ_t [21].

The relation of the fluctuation Vf of the output voltage to the fluctuating field Uf is given by

Vf(t) = ∫ f g_t g_r exp[−γ1/2 − γ2/2 + ik(R1 + R2)] dV.   (2.7)

Here f is the point estimator of the scattering amplitude of the random particle at location r′ at a time t − R2/c [21,22].
"From a theoretical viewpoint, there are two features that cause difficulties in the problem of line-of-sight propagation of optical waves through the atmosphere.
1) The atmospheric refractive index is a random function of space and time..." This relates to the average properties of the optical wave as described by equation (2.2).
2) "The other is that the application of the linear theory based on the perturbation method to the wave equation is inadequate." [13]
In dealing with line-of-sight propagation, it must always be noted that the transmitting and receiving characteristics have significant effects on the fluctuation characteristics: the transmitter and receiver gain functions, the transmitted power, optical distance, scattering, beamwidths, velocity fluctuations of the random particles, density and size distribution of the particle characteristics, and coherence time, which is the time required for the particle to move across the distance of the aperture size if the beamwidth is narrow, in contrast to the point receiver, where the time is on the order of the time required for a particle to move over a distance equal to its size [21].
From the measurements of the intensities taken, a model of the turbulence can be described. One of the parameters calculated is the variance of the irradiance fluctuations.
Variance of Irradiance Fluctuations
Intensity (or irradiance) is usually defined as I = A², where A is the amplitude of the electric vector of the optical wave. The variance of the log-intensity is then defined as four times the variance of the log-amplitude,

σ²_lnI = 4 σ²_χ,   (2.8)

where χ represents the log-amplitude of the optical wave, ln A, equal to ½ ln I.
Theoretical expressions for σ²_lnI exist for a plane wave, a spherical wave and a beam wave (the Gaussian beam), the three types of waves considered. For horizontal propagation paths close to the ground, the theoretical expressions for σ²_lnI may exceed 2.5 for a path length of 500 meters where high values of Cn² are observed. For vertical or slant paths there is less chance of exceeding this limit, since the value of Cn² varies along the path. For the plane and spherical waves, a number of different results have been derived, and it has been shown that the spherical wave has a smaller variance [13].
For a collimated beam and a given source diameter, the variance is close to that of a plane wave for L << kW0²/2, where W0 represents the width (or radius) of the optical beam at the transmitter and k is the optical wavenumber, 2π/λ. When L >> kW0²/2, the variance becomes very close to that of a spherical wave.
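The relation of equation (2.8) can be illustrated with a small sample-variance sketch (the estimator and its names are illustrative, not code from the thesis):

```python
import math

def log_intensity_variance(samples):
    """Sample variance of ln(I) for a record of intensity samples;
    by eq. (2.8) this equals 4 * var(chi), chi being the log-amplitude."""
    logs = [math.log(s) for s in samples]
    mean = sum(logs) / len(logs)
    return sum((v - mean) ** 2 for v in logs) / len(logs)

def log_amplitude_variance(samples):
    """sigma_chi^2 = sigma_lnI^2 / 4, since I = A^2 and chi = ln A."""
    return log_intensity_variance(samples) / 4.0
```

The factor of 4 is purely the I = A² change of variable: ln I = 2 ln A, and variance scales with the square of a constant factor.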
For near field focus the

PDFs Considered Recently
Measurements of the PDF of the intensity fluctuations of laser scintillation pose difficult technical problems, such as the characterization of fluctuations by random spikes (requiring a wide dynamic range for recording), effects of detector saturation on higher moments, and the requirement of large samples in order to observe the frequency of very large and rare events [16].
Because of these difficulties, many models have been proposed over the years for determining the PDF of laser scintillation. Such a list includes: log-normal, negative exponential, K distribution, I-K distribution, generalized K distribution, universal statistics model, log-normally modulated exponential, log-normally modulated Rician, exponential Bessel, Furutsu distribution and split-source model. PDF models of weak turbulence are usually displayed as a log-normal distribution, in comparison to strong turbulence, where the PDF usually approaches a negative exponential. The three primary PDFs noted in literature studies are the log-normal (for weak turbulence), the negative exponential, and the K distribution (for large-scale or strong turbulence). Since the higher-order moments of the data observed are very sensitive to the form of the PDF, the higher-order moments must be known in order to compare the measured moments with the predicted PDF [16].
Once the higher-order moments are known, the statistical behavior of the higher moments, as a function of sample size, is determined by a simulation of sample PDFs that duplicate the experimental procedure [16]. The moments sampled are generally normalized by the estimated mean, thereby making it very difficult to provide an exact theory. This was recently shown by a theory developed in which the mean intensity was known [16]. The work indicated that the higher moments were very difficult to estimate with the typical sample sizes currently used.
It is seen, then, that obtaining accurate experimental estimates of the moments is difficult, and that the PDF of these estimates for the moments provides the only reliable confidence intervals.
Concepts of Higher-Order Moments
The characteristic function of the irradiance fluctuations P(I), written as

Φ(t) = ∫ P(I) exp(itI) dI,   (2.9)

contains a series of moment or cumulant terms of higher orders from which information about the probability density function can be obtained. These higher-order moments illustrate the super and hyper coefficients of the skewness (third moment) and excess (fourth moment).
The problem of determining the probability density function, given all of the moments, is not entirely apparent. Higher-order moments can give more information concerning the contributions from the tails of the PDF in dealing with laser fluctuations through turbulence. However, to do this sufficiently, it is necessary to consider the fourth-order correlation. For practicality, moments to the eighth order are sufficient for statistical characterization of the stochastic process, though the moments for some statistical distributions do not define the process uniquely [23].
Even and odd higher-order moments give information providing the observer with a better understanding of the width and lack of symmetry of the distribution, respectively. For n odd, any deviation of the nth central moment from zero indicates a deviation from a Gaussian distribution.
The higher-order moments describe any departures, closeness, or occurrence of sharp jumps of the measured data from the Gaussian statistics or proposed model distributions, through the skewness, which measures the nature of the asymmetry, and the excess, which measures the weight of the distribution tail.
However, the problem in dealing with higher-order moments is the large amount of scattering associated with them. Minimization of the scattering may be accomplished through additional sampling points and by utilizing statistical signal-processing schemes.
Since the central moments and cumulants are related, the skewness and excess parameters can be expressed in terms of cumulants. The cumulants κn are usually defined by

ln Φ(t) = Σ_{n=1}^∞ κn (it)^n / n!,

where κn is the nth cumulant and Φ(t) is the characteristic function given by

Φ(t) = ∫ e^{itx} f(x) dx,

where f(x) is the density of the variable x [12].
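The low-order relations implied by this definition (κ2 = μ2, κ3 = μ3, κ4 = μ4 − 3μ2², κ5 = μ5 − 10μ3μ2) can be sketched as follows. These are the standard moment-cumulant relations, stated here as an assumption since the thesis's appendix is not reproduced in this chapter:

```python
def cumulants_from_central_moments(mu):
    """First few cumulants from the central moments mu = [mu2, mu3, mu4, mu5].
    Taken about the mean, so kappa1 = 0 and is omitted."""
    mu2, mu3, mu4, mu5 = mu
    k2 = mu2
    k3 = mu3
    k4 = mu4 - 3.0 * mu2 ** 2
    k5 = mu5 - 10.0 * mu3 * mu2
    return k2, k3, k4, k5

def skewness_and_excess(mu):
    """Skewness gamma1 = kappa3 / kappa2^(3/2); excess gamma2 = kappa4 / kappa2^2."""
    k2, k3, k4, _ = cumulants_from_central_moments(mu)
    return k3 / k2 ** 1.5, k4 / k2 ** 2
```

For a Gaussian (μ2 = 1, μ3 = 0, μ4 = 3, μ5 = 0) every cumulant above the second vanishes, so both skewness and excess are zero, which is exactly what the expansion series exploit.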
In describing the physical nature of the process, it may be necessary to measure the correlation of higher-order moments or cumulants of order greater than 5. These values of the higher-order moments of skewness and excess can provide a convenient and efficient method for determining which of the probability distributions is closest in describing the random process under consideration. From these values a realistic model may be constructed to establish the origin of turbulence by determining the physical mechanism associated with the non-Gaussian effects [12,19].
One way of determining the coefficients of the higher-order moments is through the technique of orthogonal and asymptotic expansions.
Gram-Charlier and Edgeworth Series
Gram-Charlier Series.
One method of deriving the higher-order coefficients from the central moments is to expand a density function formally in a series of derivatives of α(x), where α(x) denotes the density of the standardized normal distribution, defined as

α(x) = (1/√(2π)) exp(−½ x²).   (2.12)

We then have the expression

f(x) = Σ cj Hj(x) α(x).   (2.13)

By multiplying by Hr and integrating from −∞ to ∞, an orthogonality relation is developed:

cr = (1/r!) ∫ f(x) Hr(x) dx.   (2.14)

Substituting the value of Hr from equation (A.1), the following is defined for cr in terms of the moments about the mean in equation (B.4) [9]:

cr = (1/r!) { μ_r − [r(r−1)/(2·1!)] μ_{r−2} + [r(r−1)(r−2)(r−3)/(2²·2!)] μ_{r−4} − ... }.   (2.15)

From cr and the orthogonal polynomials, the Chebyshev-Hermite polynomials Hr(x) (see Appendix A), the expansion derived is known as the Gram-Charlier series. The standardized form is defined as [9]

f(x) = (α(z)/σ) (1 + (μ3/6) H3(z) + ((μ4 − 3)/24) H4(z) + ...).   (2.16)

This series is based on three parameters: the central moments μr of the standardized variable, the Chebyshev-Hermite polynomials, and the standardized variable itself, z = (x − <x>)/σ.

Edgeworth Series.
Another method of deriving the higher-order coefficients is through the Fourier transformation of the term Hr(x)α(x). Interchanging x and t in the transform of α(x), and hence changing the sign of t, the transform of Hr(x)α(x) is found to be

√(2π) (it)^r α(t).

From this expression we obtain the characteristic function of f(x) in terms of the cumulants,

Φ(t) = exp( Σ κr (it)^r / r! ),   (2.20)

where κr is the rth cumulant. From this,

f(x) = exp( Σ κr (−D)^r / r! ) α(x).   (2.21)

Stating that D = d/dx, the Chebyshev-Hermite polynomial Hr(x) can be identified from

(−D)^r α(x) = Hr(x) α(x).   (2.22)

From this we obtain the standardized density function, letting κ1 = 0, κ2 = 1 [9],

f(x) = (1 + κ3 H3(x)/3! + κ4 H4(x)/4! + ...) α(x).   (2.23)

An asymptotic series of this kind was derived by Edgeworth and is known as the Edgeworth series. The Edgeworth series is based on the cumulants; the coefficients of the cumulants are defined in equation (2.23), and the series is represented by the expression [9]

f(x) = α(x) (1 + κ3 H3/6 + κ4 H4/24 + κ5 H5/120 + ...).   (2.24)

Edgeworth claimed this series to be different from equation (2.18) for the reasons that only a finite number of terms in the series is used and the rest neglected, and that the terms in equation (2.18) do not tend regularly to zero from the point of view of basic errors.
From here we move into the arrangement of the experimental data observed by NOAA.
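The truncated series (2.16) and (2.24) can be sketched numerically. The following illustration is not the thesis's code; the function names are hypothetical, and the κ3²H6/72 term included in the Edgeworth form is the standard second-order term of the series, stated here as an assumption since the text truncates (2.24) at H5:

```python
import math

def hermite(n, x):
    """Chebyshev-Hermite (probabilists') polynomials He_n via the
    recurrence He_{n+1}(x) = x*He_n(x) - n*He_{n-1}(x)."""
    h0, h1 = 1.0, x
    if n == 0:
        return h0
    for k in range(1, n):
        h0, h1 = h1, x * h1 - k * h0
    return h1

def gram_charlier_pdf(z, mu3, excess):
    """Standardized Gram-Charlier series truncated after H4, as in (2.16);
    mu3 is the standardized third central moment, excess = mu4 - 3."""
    phi = math.exp(-0.5 * z * z) / math.sqrt(2.0 * math.pi)
    return phi * (1.0 + mu3 / 6.0 * hermite(3, z) + excess / 24.0 * hermite(4, z))

def edgeworth_pdf(z, k3, k4, k5):
    """Edgeworth series through kappa5, as in (2.24), plus the standard
    kappa3^2/72 * He6 second-order term (an assumption, see lead-in)."""
    phi = math.exp(-0.5 * z * z) / math.sqrt(2.0 * math.pi)
    return phi * (1.0 + k3 / 6.0 * hermite(3, z) + k4 / 24.0 * hermite(4, z)
                  + k5 / 120.0 * hermite(5, z) + k3 * k3 / 72.0 * hermite(6, z))
```

With all higher cumulants zero, both expansions collapse to the standardized normal density α(z), which is the sanity check used when testing such routines against a Gaussian record.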

Experimental Arrangement
A narration of the National Oceanic and Atmospheric Administration (NOAA) results is covered for background knowledge, to be compared with the results of the thesis study.
Measurements of irradiance were taken during periods of high atmospheric turbulence in August of 1985. A diverged laser beam was used as the source, and the beams were propagated horizontally over flat, uniform grassland at a height of 1 2 meters [11].
Two basic experimental setups were used, one for a short path length of 100 meters and one for long path lengths of 1200 and 2400 meters. The radiation source was a 4 mW He-Ne laser operating at a wavelength of 633 nm. The natural beam divergence was about a milliradian. A negative cylindrical lens was used to expand the vertical beam width by about a factor of 2 to compensate for the changes in vertical temperature gradient along the path [14].
At the receiver a time series of the irradiance was

main propagation path. Each data run consisted of 50 such records, half a million intensity samples [14].
The five detectors used in the short path experiment were set up with varying spacings for use with spatial correlation analysis [10]; see Table 3.1.
The measurements of Cn², l0, and crosswind were sampled at the same time the intensity measurements were taken. The measured Cn² values for the data sets ranged from 5.2 × 10 to 1.9 × 10 m^(−2/3). The Rytov spherical-wave, zero-inner-scale, log-intensity variance, β0², defined as 0.5 Cn² k^(7/6) L^(11/6), k = 2π/λ, ranged from 12 to 43. The regime of strong turbulence, where the spherical-wave irradiance variance decreases with increasing turbulence, corresponds to β0² values exceeding about 2.5, which was within the range of the experimental data [14].
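The quoted definition of β0² is easy to evaluate directly. A small sketch (illustrative only; the Cn² value in the example is hypothetical, not a measured NOAA value):

```python
import math

def rytov_variance(Cn2, L, wavelength=633e-9):
    """Spherical-wave, zero-inner-scale, log-intensity variance:
    beta0^2 = 0.5 * Cn^2 * k^(7/6) * L^(11/6), with k = 2*pi/lambda.
    Defaults to the 633 nm He-Ne wavelength used in the experiment."""
    k = 2.0 * math.pi / wavelength
    return 0.5 * Cn2 * k ** (7.0 / 6.0) * L ** (11.0 / 6.0)

# Example: an illustrative strong-turbulence case on the 2400 m path;
# beta0^2 > 2.5 marks the saturation regime described above.
b2 = rytov_variance(Cn2=1e-13, L=2400.0)
```

The 11/6 power of path length means doubling the path (1200 m to 2400 m) raises β0² by a factor of 2^(11/6), roughly 3.6, for the same turbulence strength.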
Calculation of the Moments
from the Experimental Data
The estimated PDF was derived by normalizing the data
at each time series by its mean value. The second and
third moments were calculated to obtain the parameters of
the theoretical density functions that were considered.
The normalized irradiance data were sorted into histogram bins: for values < 5 the bin width was 0.1; for values 5–10 the bin width was 0.3; for 10–20 the width was

Table 3.1. Spacing for Five Photomultiplier Detectors

Pairs of Channels    Spacing in mm
2,3                   0.800
3,4                   1.500
2,4                   2.270
0,1                   4.582
1,2                   5.340
1,3                   6.110
1,4                   7.610
0,2                   9.921
0,3                  10.690
0,4                  12.190

1.0; and for 20–40 the width was 3.0. The uncertainty in the measurement was calculated for each data point, with the assumption that successive samples of irradiance were independent [14].
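The variable-width binning can be sketched as follows. This is an illustration rather than the original binning code: the 5–10 and 20–40 ranges are not whole multiples of their bin widths, so the exact segment joins used here are an assumption.

```python
def histogram_edges():
    """Bin edges matching the widths quoted in the text:
    0.1 below 5, 0.3 from 5 to 10, 1.0 from 10 to 20, 3.0 from 20 to 40."""
    segments = [(0.0, 5.0, 0.1), (5.0, 10.0, 0.3),
                (10.0, 20.0, 1.0), (20.0, 40.0, 3.0)]
    edges = []
    for lo, hi, width in segments:
        n = round((hi - lo) / width)          # whole bins in the segment
        edges.extend(lo + i * width for i in range(n))
    edges.append(40.0)
    return edges

def histogram(samples, edges):
    """Counts per bin; values outside [edges[0], edges[-1]) are dropped."""
    counts = [0] * (len(edges) - 1)
    for s in samples:
        for i in range(len(counts)):
            if edges[i] <= s < edges[i + 1]:
                counts[i] += 1
                break
    return counts
```

Wider bins at large I trade resolution for occupancy: the rare large spikes would otherwise leave most fine bins empty, which is exactly the sampling problem discussed for the higher moments.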
The variance of the normalized irradiance was calculated and compared with the theoretical variance,

σI² = 2 exp(σz²) − 1,   (3.1)

which was calculated for the estimated value of σz² from the equation

p(I) = ∫ (1/z) exp(−I/z) (2π σz² z²)^(−1/2) exp[−(ln z + σz²/2)² / (2σz²)] dz,   (3.2)

which represents the density function for the normalized irradiance I, where ln z is the logarithm of the modulation [14].
PDFs Proposed for the Experimental Data
From the experimental data the following PDFs were proposed.
James Churnside and Steven Clifford, the original observers of the experimental data, proposed a log-normal Rician PDF that obeys the Rice-Nakagami statistics with log-normal modulation, applicable through strong integrated turbulence and numerically amenable [11].
Moments From the First Set of Proposed PDFs
The data was not compared using the moments in this proposal; instead, the moments were used to obtain the parameters of the theoretical density functions that were constructed [11].
Moments From the Second Set of Proposed PDFs
The moments in this set of proposed PDFs of the intensity (irradiance) data were normalized as

In = <I^n> / <I>^n.   (3.3)

The moments were observed up to the fifth-order moment as functions of the second moment, or the normalized variance, given by

σI² = I2 − 1.   (3.4)

For the log-normally modulated exponential PDF, the higher-order moments were written in terms of the second moment as

In = n! (I2 / 2)^(n(n−1)/2),   (3.5)

where I2 > 2.
For the K PDF, the higher-order moments were given by [14]

In = n! Γ(2/(I2 − 2) + n) / [ (2/(I2 − 2))^n Γ(2/(I2 − 2)) ],   (3.6)

where Γ( ) is the gamma function,

Γ(n) = ∫₀^∞ x^(n−1) e^(−x) dx for n > 0;   (3.7)

when n is a positive integer, Γ(n) = (n − 1)! [17].
The histograms of the three distributions were plotted based on the third, fourth and fifth moments as functions of the second moment; see Figure 3.3. For I2 = 2, the log-normally modulated exponential function and the K density function reduce to the negative exponential PDF, having the same values of their moments [14]. As the second moment increased, the higher-order moments of the log-normally modulated exponential function deviated from those of the K PDF.
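The theoretical normalized moments of equations (3.5) and (3.6), together with the negative-exponential limit In = n!, can be sketched as follows (illustrative code, not from the thesis; the log-gamma function is used in place of Γ to avoid overflow for large arguments):

```python
import math

def neg_exponential_moment(n):
    """Negative-exponential PDF: In = n!."""
    return math.factorial(n)

def lognormal_exponential_moment(n, I2):
    """Log-normally modulated exponential, eq. (3.5):
    In = n! * (I2/2)^(n(n-1)/2), valid for I2 > 2."""
    return math.factorial(n) * (I2 / 2.0) ** (n * (n - 1) / 2.0)

def k_distribution_moment(n, I2):
    """K distribution, eq. (3.6), with a = 2/(I2 - 2) (requires I2 > 2):
    In = n! * Gamma(a + n) / (a^n * Gamma(a)), via log-gamma."""
    a = 2.0 / (I2 - 2.0)
    return math.factorial(n) * math.exp(math.lgamma(a + n) - math.lgamma(a)) / a ** n
```

At I2 = 2 the log-normally modulated exponential moments reduce to n!, and the K moments approach n! as I2 → 2 (a → ∞), matching the negative-exponential limit described above.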
Good agreement was found between the experimental PDFs in Figures 3.1 and 3.2; however, the measured irradiance moments in Figure 3.3 did not agree. The fourth and fifth measured moments were found to be greatly underestimated and showed the need for extremely large sample sizes to reduce scattering in the measured moments. This suggested that extreme caution should be used when considering moments.
In obtaining accurate estimates of the third moment

Figure 3.1 PDFs of normalized irradiance for σI² = 2.83 (β0² = 36): K (———), log-normal (– –), and log-normally modulated exponential (- - -) functions and data (O). Linear representation for I < 2 of the data with the lowest irradiance variance of the 10 data runs executed [14].

Figure 3.2 PDFs of normalized irradiance for σI² = 4.13 (β0² = 23): K (———), log-normal (– –), and log-normally modulated exponential (- - -) functions and data (O). Linear representation for I < 2 of the data with the highest irradiance variance of the 10 data runs executed [14].

Figure 3.3 Third, fourth, and fifth normalized moments of irradiance I3, I4 and I5 as functions of the second moment I2: K (———), log-normal (– –), and log-normally modulated exponential (- - -) functions and data (O) [14].

several events near I = 100 should have been observed. The samples needed to resolve the fourth and fifth moments would be greater still.
"Because of the uncertainties in the measured moments, the theoretical and experimental PDFs were truncated at a value of I = 40 and the moments recalculated"; see Figure 3.4. It can be seen that the scatter of the truncated measured moments in Figure 3.4 is much less than the scatter of the untruncated measured moments in Figure 3.3. It was concluded "that the irradiance moments were a misleading statistic and that deviations from log-normal statistics were not so extreme as had been supposed," and comparison of theoretical PDFs with measured PDFs, rather than comparison of the moments, was strongly recommended [14].
Moments From the Third Set of Proposed PDFs.
With this proposal the second and third moments were suggested as a method for obtaining the values for the

Fourth Proposed PDF for the Moments
A fourth method was derived for calculating the moments based on the numerical PDF. Calculation of these

Figure 3.4 Third, fourth and fifth normalized moments of irradiance I3, I4 and I5 as functions of the second moment I2 for a PDF truncated at I = 40: K (———), log-normal (– –) and log-normally modulated exponential (- - -) functions and data (O) [14].

in 16-bit words and stored as a stream of bytes. The data
sequences for the long path lengths (1200 and 2400 meters)
were arranged in 70 consecutive runs at a rate of 1 kHz,
each run consisting of:
64,000 intensity samples with the LASER on.
3,200 intensity samples with the LASER off.
And a sample of the average crosswind and the
inner scale size, l₀.
The data sequences for the five channel sets were
arranged in 29 consecutive runs at a data rate of 500 Hz,
each run consisting of:
5 channels of 32,000 intensity samples with the
LASER on.
5 channels of 1,600 intensity samples
with the LASER off.
And a sample of the average crosswind, Cn², and the
inner scale size, l₀.
The data runs were tested up to the eighth-order
moment and were found to be satisfactory to the second or
third significant digit. Values greater than twenty-eight
were good to the order of magnitude only.
The accuracy of the values could be improved by aver-
aging the moments taken over data runs with similar condi-
tions of crosswind and inner scale, i.e., for the 100
meter runs, averaging moments from corresponding runs on
all channels [10].
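Assuming the run layout just described, one long-path run could be unpacked as follows. This is only a sketch: the ordering of the housekeeping words and the little-endian 16-bit encoding are assumptions, and the constant and function names are illustrative rather than taken from the NOAA format documentation [10].

```python
import struct

# Assumed layout of one long-path run, per the description above:
# 64,000 laser-on samples, 3,200 laser-off samples, then 3 extra words
# (crosswind, turbulence, inner scale), each a 16-bit unsigned word.
ON_SAMPLES, OFF_SAMPLES, EXTRA_WORDS = 64_000, 3_200, 3

def read_run(f):
    """Read one run from an open binary file; returns
    (laser_on, laser_off, extras) tuples of ints, or None at end of file."""
    n_words = ON_SAMPLES + OFF_SAMPLES + EXTRA_WORDS
    raw = f.read(2 * n_words)
    if len(raw) < 2 * n_words:
        return None
    words = struct.unpack('<%dH' % n_words, raw)
    laser_on = words[:ON_SAMPLES]
    laser_off = words[ON_SAMPLES:ON_SAMPLES + OFF_SAMPLES]
    return laser_on, laser_off, words[-EXTRA_WORDS:]
```

Calling read_run repeatedly would step through the 70 consecutive runs of a 1200 or 2400 meter file.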

The file format given by NOAA indicates a storage
area for the analog measurements of the turbulence
strength, Cn²; however, this value remained constant
throughout all the data runs. The output of the MOMENT
program is in ASCII text format.
Upon execution of the program the following is asked:
an input file, an output report file, the number of
moments to compute, and the data path length, 100, 1200 or
2400 meters. The analysis of the 100 meter path lengths
requires demultiplexing the five channels with a program
called SEPARATE2 [10], which separates each of the five
100 meter path length channels for calculation.
The output file consists of the central moments for
the periods with the laser on and the laser off, the
crosswind, the turbulence structure parameter, Cn², and
the inner scale, l₀. The turbulence strength, Cn², had
not been converted properly during the conversion of the
NOAA system to the PC system and should not be relied
upon. See Appendix E for the basic logic of the MOMENT
program.
Higher-Order Moments from the
Gram-Charlier/Edgeworth Series
The purpose of the fourth method was to convert the
experimental data for research study using a personal
computer (PC) in constructing an unknown PDF based on the
higher-order moments. The central moments calculated by
the MOMENT program were used in the computer program
EXPANSION SERIES, see Appendix F, to calculate the
standardized values from the Gram-Charlier and Edgeworth
series. Histograms were created from these values.
Histograms of the Experimental Data
Based on the three sets of proposed PDF's by NOAA,
the data seemed to favor the log-normal Rician density
function over the K function and the log-normal functions.
The K distribution, in the case of a strong
scattering process (random phase fluctuations of many
wavelengths), focused on two distinct scale sizes, which
made the K distribution appealing for propagation through
strong turbulence because of the observation of two
distinct scale sizes in the received irradiance pattern.
However, the K distribution could only be applied in cases
of strong turbulence and at times tended to underestimate
the probability of high irradiances and thus overpredict
the higher-order moments [14]. Also, for the K distri-
bution to be defined, the mean and second moment must be
known. The results of the histograms developed from the
values from the MOMENT and EXPANSION SERIES programs are
discussed in the final chapter. See Figures 5.1, 5.2, 5.3,
5.4 and 5.5.

There has been considerable interest, both theoreti-
cally and experimentally, in the statistics of laser
irradiance fluctuations in atmospheric turbulence.
Many PDF models and theories proposed for describ-
ing irradiance fluctuations for weak and strong laser
scintillation have been based on the primary variables of
the estimated mean and the lower (second and third)
moments. The basis of this thesis was to reconstruct an
unknown PDF model from the experimental data using the
orthogonal and asymptotic expansions, the Gram-Charlier
and Edgeworth series, for moments > 4. The first phase
was to determine if the theory was feasible. This was
accomplished by simulating a known PDF.
Simulation of a Known PDF
The selected known PDF was the log-normal
distribution, defined as

f(x) = (1/(xσ√(2π))) exp{-(ln x - μ)²/2σ²}.     (4.1)

From the characteristic function of the log-normal
distribution the noncentral moments were derived,

μ'r = exp(rμ + ½r²σ²).     (4.2)

The parameters μ and σ² are the mean and variance of the
log-normal function. From the noncentral moments the
central moments were derived by the expanded binomial,

μr = Σj C(r,j) (-μ'1)^j μ'(r-j),     (4.3)

where μr represents the central moment. The standardized
central moments were then calculated,

μrz = μr / σ^r.     (4.4)

From the standardized moments the higher-order
coefficients of skewness and excess were found using the
transformation of the standardized Gram-Charlier series,

f(x) = α(x)(1 + (1/6)μ3H3 + (1/24)(μ4 - 3)H4 + ...),     (4.5)

or the standardized Edgeworth series,

f(x) = α(x)(1 + (1/6)k3H3 + (1/24)k4H4 + ...),     (4.6)

where Hr(x) was based on equation (A.2) and

α(x) = (1/√(2π)) exp{-½z²},     (4.7)

where z was the standardized transformation variable,

z = (x - μ'1) / σ.     (4.8)
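The simulation procedure of this section can be sketched briefly in Python. This is not the thesis's EXPANSION SERIES program (Appendix F); the log-normal parameters are illustrative, and the Edgeworth series is truncated after the k4 term together with the k3² H6 contribution that accompanies it in the Edgeworth grouping (the 10k3² part of c6 in Appendix C):

```python
from math import comb, exp, log, pi, sqrt

# Illustrative parameters of the chosen known PDF (log-normal); these are
# NOT the thesis's experimental values. Sigma is kept small so the
# truncated series converges well.
MU, SIG = 0.0, 0.25

def noncentral(r):
    """Noncentral moments of the log-normal: exp(r*mu + r^2 sigma^2 / 2)."""
    return exp(r * MU + 0.5 * r * r * SIG * SIG)

def central(r):
    """Central moments by the expanded binomial."""
    return sum(comb(r, j) * (-noncentral(1)) ** j * noncentral(r - j)
               for j in range(r + 1))

def hermite(r, z):
    """Chebyshev-Hermite polynomials via He(r+1) = z*He(r) - r*He(r-1)."""
    h_prev, h = 1.0, z
    if r == 0:
        return h_prev
    for n in range(1, r):
        h_prev, h = h, z * h - n * h_prev
    return h

sd = sqrt(central(2))
k3 = central(3) / sd ** 3        # standardized third cumulant (skewness)
k4 = central(4) / sd ** 4 - 3.0  # standardized fourth cumulant (excess)

def f_edgeworth(z):
    """Standardized Edgeworth series through the k4 term, including the
    k3^2 H6 term that accompanies it in the Edgeworth grouping."""
    alpha = exp(-0.5 * z * z) / sqrt(2.0 * pi)
    return alpha * (1.0 + k3 * hermite(3, z) / 6.0
                    + k4 * hermite(4, z) / 24.0
                    + k3 * k3 * hermite(6, z) / 72.0)

def f_exact(z):
    """Exact log-normal density of the standardized variable z."""
    x = noncentral(1) + sd * z
    return sd * exp(-(log(x) - MU) ** 2 / (2.0 * SIG ** 2)) \
        / (x * SIG * sqrt(2.0 * pi))

for z in (0.0, 0.5, 1.0):
    print(z, f_edgeworth(z), f_exact(z))
```

For this small σ the truncated series tracks the exact standardized density closely near the mode; the agreement degrades as the variance of the distribution grows.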

Figure 4.1 Log-normal vs Edgeworth series.

v₂ = (n - 2048) / 204.8.

Here the experimental data of the three different path
lengths were converted to a voltage output signal. The
value 2048 was the referenced zero baseline of 0 volts on
the ±10 volt range, and n was the value of the
experimental data observed.
Experimental data values were then determined from
Figures 5.1 and 5.2 and normalized,

I / I₀,     (4.11)
I = v₂ - x₁,     (4.12)
I₀ = x₀ - x₁,     (4.13)

where x₀ represents the peak and x₁ represents the
zero reference.
Histograms of the normalized experimental data values
versus the Gram-Charlier values were constructed for the
100, 1200 and 2400 meter path lengths. Discussion of the
results follows.
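The count-to-voltage conversion and the normalization above amount to two small helper functions. This is a sketch only; the function names are illustrative, not from the thesis:

```python
def counts_to_volts(n):
    """Convert a 12-bit ADC count (0..4096) to volts; 2048 is the 0 V
    baseline of the -10..+10 V range."""
    return (n - 2048) / 204.8

def normalize(n, x0, x1):
    """Normalized irradiance I/I0: x0 is the peak voltage and x1 the
    zero-baseline voltage taken from the laser-off record."""
    v2 = counts_to_volts(n)
    return (v2 - x1) / (x0 - x1)

print(counts_to_volts(2048))  # baseline count -> 0.0 V
print(counts_to_volts(4096))
```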

Table 4.1 Central Moments Developed from the MOMENT
Program, see Appendix E, for the 100, 1200 and 2400 meter
path lengths. (Central moments μ2, μ3 and μ4 calculated up
to the 4th order; μ'1 = mean, √μ2 = standard deviation.)

Histograms of Path Lengths
Constructed from the MOMENT Program
Two examples are shown of histograms constructed from
the values of the standardized central moments. The
examples reference one of the short path lengths, 100
meters, and the longer path length of 2400 meters.
The two histograms show distinct descriptions of the
irradiance fluctuations at the different path lengths.
For the 100 meter path length with the laser on, the rise
and fall of the non-Gaussian curve is gradual, with a
smooth peak at a P(I) of approximately 1. The peak shown
on the laser off curve references the zero baseline to be
approximately -0.28 volts. In comparison, the long path
length of 2400 meters with the laser on shows a non-
Gaussian curve with a quick rise and fall at a peak at a
P(I) of approximately 1. The peak shown on the laser off
curve references the zero baseline to be approximately
-0.58 volts.
The number of frequent occurrences of irradiance
fluctuations for the 100 meter path length occurred within
the range of -0.18 to 4.0 volts, in comparison to the 2400
meter path length where the range of frequent intensity

An assumption can be made from the research that the
use of the orthogonal and asymptotic expansions, up to the
fourth moment as a function of the second moment, is able
to give a good approximated PDF of the observed data
without the predicted or theoretical PDF being stated
beforehand based on previous knowledge.
Observed data can now be constructed to show a PDF
from the technique mentioned.
Higher-order moments up to the eighth order were
tried, but unsuccessfully. It is suggested that to
obtain an unknown PDF with good agreement to the data,
the higher-order moments > 4 be a function of the third
and fourth moments in using the orthogonal and asymptotic
expansions.
From this technique, statistical models could be
developed to provide more detailed information on the
optical irradiance fluctuations through turbulence and to
characterize the propagation path for the design of
optical communications. The model would provide a better
understanding, especially at the higher-order moments > 4,
of the nature, origin and time events of the irradiance
fluctuations, the scattering effects on long path lengths,
the saturation of the detectors and the hardware designs
required for space and local transmissions. The benefit of
the statistical model would lie not only in the area of
optical communication, but also in the area of remote
sensing for communications and industrial processes.

Relative Frequency of Occurrence of Intensity
Figure 5.1 (Top) Histogram of the intensity fluctuations of the experimental
data when the laser is ON. The data for the histogram was generated by the
MOMENT program.

Relative Frequency of Occurrence of Intensity
Figure 5.2 (Top) Histogram of the intensity fluctuations of the experimental
data when the laser is ON; (bottom) with the laser OFF. The data for the
histogram was generated by the MOMENT program.

Figure 5.3 Unknown PDF of experimental data,
100 meter path length.

Figure 5.4 Unknown PDF of experimental data,
1200 meter path length: G-C data and experimental data.

Figure 5.5 Unknown PDF of experimental data,
2400 meter path length: G-C data and experimental data.

Calculation of the Chebyshev-Hermite polynomials is
derived from the equation [4]:

Hr(x) = (-1)^r exp(x²/2) (d^r/dx^r) exp(-x²/2).     (A.1)

The first eight terms in x of the polynomials are

H0 = 1
H1 = x
H2 = x² - 1
H3 = x³ - 3x
H4 = x⁴ - 6x² + 3     (A.2)
H5 = x⁵ - 10x³ + 15x
H6 = x⁶ - 15x⁴ + 45x² - 15
H7 = x⁷ - 21x⁵ + 105x³ - 105x
H8 = x⁸ - 28x⁶ + 210x⁴ - 420x² + 105

The first eight standardized terms of the polynomials are
determined by substituting z for x, where [9]

z = (x - μ'1) / σ     (A.3)

and σ = √μ2.
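The listed polynomials satisfy the recurrence He(r+1)(x) = x He(r)(x) - r He(r-1)(x), which follows from (A.1). A short sketch, not part of the thesis's programs, regenerates the table of coefficients:

```python
def hermite_coeffs(rmax):
    """Coefficient lists (constant term first) for the Chebyshev-Hermite
    polynomials He_0 .. He_rmax, built from the recurrence
    He_{r+1}(x) = x * He_r(x) - r * He_{r-1}(x)."""
    polys = [[1], [0, 1]]  # He_0 = 1, He_1 = x
    for r in range(1, rmax):
        prev, cur = polys[r - 1], polys[r]
        nxt = [0] + cur            # multiply He_r by x
        for i, c in enumerate(prev):
            nxt[i] -= r * c        # subtract r * He_{r-1}
        polys.append(nxt)
    return polys

for r, p in enumerate(hermite_coeffs(8)):
    print(r, p)
```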

The Gram-Charlier Series Expansion, Type A, is defined as

f(x) = Σj cj Hj(x) α(x),     (B.1)

where the coefficients are given by

cr = (1/r!) ∫ f(x) Hr(x) dx     (B.2)

or, on substituting the polynomials Hr, in terms of the
moments,

cr = (1/r!) E[Hr(x)],     (B.3)

where μ'r represents the noncentral moments [1].
The coefficients in terms of the moments about the mean,
based on the first eight cr, are

c0 = 1
c1 = 0
c2 = (μ2 - 1) / 2
c3 = μ3 / 6
c4 = (μ4 - 6μ2 + 3) / 24     (B.4)
c5 = (μ5 - 10μ3) / 120
c6 = (μ6 - 15μ4 + 45μ2 - 15) / 720
c7 = (μ7 - 21μ5 + 105μ3) / 5040
c8 = (μ8 - 28μ6 + 210μ4 - 420μ2 + 105) / 40320

The f(x) in standard measure of the Gram-Charlier
Series, Type A, becomes

f(x) = α(x)(1 + (1/6)μ3H3 + (1/24)(μ4 - 3)H4 + ...),     (B.5)

where

α(x) = (1/√(2π)) exp{-½x²}.

The Edgeworth Series Expansion, Type A, is defined as

f(x) = Σj cj Hj(x) α(x),

where the coefficients cj in (B.3), when standardized and
converted into cumulants using equation (D.4), become

c0 = 1
c1 = 0
c2 = 0
c3 = k3 / 6
c4 = k4 / 24
c5 = k5 / 120
c6 = (k6 + 10k3²) / 720
c7 = (k7 + 35k4k3) / 5040
c8 = (k8 + 56k5k3 + 35k4²) / 40320

The f(x) of the Edgeworth Series, Type A, becomes

f(x) = α(x)(1 + (1/6)k3H3 + (1/24)k4H4 + ...),

where

α(x) = (1/√(2π)) exp{-½z²},

σ represents the standard deviation, √μ2, of the central
moment μ2, and the standardized variable z is defined as
[9]

z = (x - μ'1) / σ.

The central moments defined from the expanded binomial
are

μr = Σj C(r,j) (-μ'1)^j μ'(r-j),     (D.1)

where

C(r,j) = r! / (j!(r - j)!).     (D.2)

The first eight central moments are

μ0 = 1
μ1 = 0
μ2 = μ'2 - μ'1²
μ3 = μ'3 - 3μ'1μ'2 + 2μ'1³
μ4 = μ'4 - 4μ'1μ'3 + 6μ'1²μ'2 - 3μ'1⁴     (D.3)
μ5 = μ'5 - 5μ'1μ'4 + 10μ'1²μ'3 - 10μ'1³μ'2 + 4μ'1⁵
μ6 = μ'6 - 6μ'1μ'5 + 15μ'1²μ'4 - 20μ'1³μ'3 + 15μ'1⁴μ'2 - 5μ'1⁶
μ7 = μ'7 - 7μ'1μ'6 + 21μ'1²μ'5 - 35μ'1³μ'4 + 35μ'1⁴μ'3 - 21μ'1⁵μ'2 + 6μ'1⁷
μ8 = μ'8 - 8μ'1μ'7 + 28μ'1²μ'6 - 56μ'1³μ'5 + 70μ'1⁴μ'4 - 56μ'1⁵μ'3
     + 28μ'1⁶μ'2 - 7μ'1⁸

The eight expressions of the cumulants based on the first
eight central moments about the mean, μ1 = 0, are

k2 = μ2
k3 = μ3
k4 = μ4 - 3μ2²
k5 = μ5 - 10μ3μ2     (D.4)
k6 = μ6 - 15μ4μ2 - 10μ3² + 30μ2³
k7 = μ7 - 21μ5μ2 - 35μ4μ3 + 210μ3μ2²
k8 = μ8 - 28μ6μ2 - 56μ5μ3 - 35μ4² + 420μ4μ2² + 560μ3²μ2 - 630μ2⁴

The standardized central moments and cumulants based on
the first eight central moments and cumulants are defined
as

μrz = μr / σ^r,  krz = kr / σ^r,  r = 1, 2, 3, ...     (D.5)

The standardized central moments and cumulants are

μ2z = μ2 / σ²          k2z = k2 / σ²
μ3z = μ3 / σ³          k3z = k3 / σ³
μ4z = μ4 / σ⁴          k4z = k4 / σ⁴
μ5z = μ5 / σ⁵  (D.6)   k5z = k5 / σ⁵  (D.7)
μ6z = μ6 / σ⁶          k6z = k6 / σ⁶
μ7z = μ7 / σ⁷          k7z = k7 / σ⁷
μ8z = μ8 / σ⁸          k8z = k8 / σ⁸
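The two conversions of this appendix, noncentral moments to central moments via the expanded binomial (D.1) and central moments to cumulants via (D.4), can be checked numerically. The sketch below is not part of the thesis's programs; it verifies that a unit normal shifted to mean 1 returns the known central moments and that every cumulant above the second vanishes:

```python
from math import comb

def central_from_noncentral(nc):
    """Central moments from noncentral moments by the expanded binomial
    (D.1); nc[r] holds mu'_r with nc[0] = 1."""
    m1 = nc[1]
    return [sum(comb(r, j) * (-m1) ** j * nc[r - j] for j in range(r + 1))
            for r in range(len(nc))]

def cumulants_from_central(m):
    """Cumulants k2..k8 from central moments, Eq. (D.4)."""
    return {
        2: m[2],
        3: m[3],
        4: m[4] - 3 * m[2] ** 2,
        5: m[5] - 10 * m[3] * m[2],
        6: m[6] - 15 * m[4] * m[2] - 10 * m[3] ** 2 + 30 * m[2] ** 3,
        7: m[7] - 21 * m[5] * m[2] - 35 * m[4] * m[3]
           + 210 * m[3] * m[2] ** 2,
        8: m[8] - 28 * m[6] * m[2] - 56 * m[5] * m[3] - 35 * m[4] ** 2
           + 420 * m[4] * m[2] ** 2 + 560 * m[3] ** 2 * m[2]
           - 630 * m[2] ** 4,
    }

# Check with a unit normal shifted to mean 1: its central moments are
# 1, 0, 1, 0, 3, 0, 15, 0, 105 for orders 0..8, and every cumulant above
# the second must vanish.
norm_central = [1, 0, 1, 0, 3, 0, 15, 0, 105]
nc = [sum(comb(r, j) * norm_central[j] for j in range(r + 1))
      for r in range(9)]  # noncentral moments E[(1 + g)^r]
m = central_from_noncentral(nc)
k = cumulants_from_central(m)
print(m)
print(k)
```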

{ This program calculates up to 'x' number of central,
  regular and normalized regular moments and stores the
  data into histogram bins. The moments are obtained
  from experimental data taken by NOAA (National Oceanic
  and Atmospheric Administration). The software used to
  convert the experimental data so that it could be read
  on a personal computer is Turbo Pascal, version 5.0.
  The original program was developed by Michael Johnson
  and modified by Alenka Brown-VanHoozer for experimental
  data manipulation. }
{ This segment defines the size of the buffer and the
  number of moments to be calculated. }
uses crt;
const ap = #39; { apostrophe character }
  bufsize = 8192;
  bigmoment = 8; { number of moments to be calculated }
{ Declare pointers. }
type pdftype = array[0..4096] of longint;
  ptrpdf = ^pdftype;
var f: file;
  histlonfile, histloffile, histtonfile,
  histtoffile: text; { histogram files }
  outfile: text;
  i, run, sample, maxorder, onmax, offmax,
  runmax: word;
  laseron, laseroff, totalon, totaloff: ptrpdf;
  inbuf: array[1..bufsize] of word;
  toread, actualread: integer;
  histbase, histlon, histloff,
  histton, histtoff, inname, outname,
  answer: string[255];
  okay: boolean;
  v, turbulence, inscale: extended;
  ch: char;
{ Subroutine calculates an exponent of a real base. }
function power (a, b: extended): extended;
begin
  if a <= 0 then
    writeln('ERROR: ', a, ' passed as base to function power');
  power := exp(ln(a) * b);
end;

{ Subroutine calculates the turbulence strength by the
  following formula: Cn2 = 10 ^ ((d / 204.8) - 24). }
function cn2 (data: integer): extended;
begin
  cn2 := power(10, (data / 204.8) - 24);
end;
{ Subroutine calculates the inner scale.
  Let a = b/204.8 - 10, where b is the data number to be read.
  If a < 0.10175 then the inner scale = 0.
  If a > 2 then the inner scale = 90.
  If a is between 0.10175 and 2 then the following
  polynomial is used:
  innerscale = 2.65972 + 9.92783a - 0.699722a^2 - 25.3229a^3
             + 47.2491a^4 - 19.024a^5 - 7.1593a^6 + 4.51235a^7 }
function innerscale (data: integer): extended;
var a, b, c: extended;
begin
  a := (data / 204.8) - 10;
  if a < 0.10175 then
    innerscale := 0
  else if a > 2 then
    innerscale := 90
  else
  begin
    b := a;
    c := 2.65972 + 9.92783 * a;
    b := b * a;
    c := c - 0.699722 * b;
    b := b * a;
    c := c - 25.3229 * b;
    b := b * a;
    c := c + 47.2491 * b;
    b := b * a;
    c := c - 19.024 * b;
    b := b * a;
    c := c - 7.1593 * b;
    b := b * a;
    innerscale := c + 4.51235 * b;
  end;
end;
{ Subroutine calculates the crosswind using the
  following formula: u = 20/3 * ((v / 204.8) - 10) m/sec. }
function cw (data: integer): extended;
begin
  cw := 6.66666666666666666667 * ((data / 204.8) - 10);
end;

{ Subroutine raises a base to the expt power; used for the
  normalized moments. }
function exponent (base: extended; expt: integer): extended;
var prod: extended;
  k: integer;
begin
  if (expt > 0) then
  begin
    prod := base;
    for k := 2 to expt do
      prod := prod * base;
  end
  else
    prod := 1;
  exponent := prod;
end;
{ Subroutine calculates the factorial r! used by the
  normalized moments. }
function factorial (num: integer): extended;
var n: integer;
  prod: extended;
begin
  prod := 1;
  if (num > 1) then
    for n := num downto 1 do
      prod := prod * n;
  factorial := prod;
end;

{ Subroutine calculates the binomial coefficient of the
  expanded binomial. }
function binom_coeff (r, j: integer): extended;
begin
  binom_coeff := factorial(r) / (factorial(j) * factorial(r - j));
end;
{ Declaration of variables to find the central, regular
  and normalized moments. }
procedure findmoments (pdf: ptrpdf; maxorder: word);
var i, j, index: integer;
  totalsamples: longint;
  ttlsmpls, mean, a, diff, p: extended;
  normmom, moment, m: array[0..bigmoment] of extended;
begin
  { Following segment calculates the mean. }
  totalsamples := 0;
  for i := 0 to 4096 do
    totalsamples := totalsamples + pdf^[i];
  ttlsmpls := totalsamples;
  mean := 0;
  for i := 0 to 4096 do
    mean := mean + i * (pdf^[i] / ttlsmpls);
  { Following segment calculates the first maxorder central
    moments. }
  for j := 0 to bigmoment do
    m[j] := 0;
  m[0] := 1;
  for i := 0 to 4096 do
  begin
    diff := i - mean;
    a := diff;
    p := pdf^[i] / ttlsmpls;
    for j := 2 to maxorder do
    begin
      a := a * diff;
      m[j] := m[j] + a * p;
    end;
    if i = (i and $FF80) then write(i, #13);
  end;
  { Following segment calculates the regular moments based
    on the expanded binomial. }
  for j := 2 to bigmoment do
  begin
    moment[j] := 0;
    for index := 0 to j do
      moment[j] := moment[j] + binom_coeff(j, index) *
        m[j - index] * exponent(mean, index);
  end;
  { Following segment calculates the normalized regular
    moments. }
  for j := 2 to bigmoment do
    normmom[j] := moment[j] / exponent(mean, j);
end;
{ Main Program Begins }
begin
  { Histogram array bins for intensity values. }
  new(laseron);
  new(laseroff);
  new(totalon);
  new(totaloff);
  for i := 0 to 4096 do
  begin
    laseron^[i] := 0;
    laseroff^[i] := 0;
    totalon^[i] := 0;
    totaloff^[i] := 0;
  end;
  { Process input file and write moments to output file. }
  { Process all runs. }
  for run := 1 to runmax do
  begin
    sample := onmax;
    { Process all LASER ON samples. }
    repeat
      toread := bufsize;
      if sample < toread then toread := sample;
      if eof(f) then
        writeln('Unexpected end of file ', inname);
      blockread(f, inbuf, toread, actualread);
      for i := 1 to actualread do
      begin
        inc(laseron^[inbuf[i]]);
        inc(totalon^[inbuf[i]]);
      end;
      sample := sample - actualread;
      write(sample, ', LASER ON ', #13);
    until sample = 0;
    sample := offmax;
    { Process all LASER OFF samples. }
    repeat
      toread := bufsize;
      if sample < toread then toread := sample;
      if eof(f) then
        writeln('Unexpected end of file ', inname);
      blockread(f, inbuf, toread, actualread);
      for i := 1 to actualread do
      begin
        inc(laseroff^[inbuf[i]]);
        inc(totaloff^[inbuf[i]]);
      end;
      sample := sample - actualread;
      write(sample, ', LASER OFF ', #13);
    until sample = 0;
    { Calculate the moments of the LASER ON data. }
    findmoments(laseron, maxorder);
    { Calculate the moments of the LASER OFF data. }
    findmoments(laseroff, maxorder);
    { Read in the crosswind, turbulence and inner scale data. }
    blockread(f, inbuf, 3, actualread);
    if actualread <> 3 then
      writeln('Unexpected end of input file ', inname);
    v := cw(inbuf[1]);
  end;
  { Store data into four files for histogram manipulation. }
  for i := 0 to 4096 do
  begin
    writeln(histlonfile, laseron^[i]);
    writeln(histloffile, laseroff^[i]);
    writeln(histtonfile, totalon^[i]);
    writeln(histtoffile, totaloff^[i]);
  end;
  { Close files. }
  close(outfile);
end.
{ End of Program }


u3x = ncm3 - (3 * ncm1 * ncm2) + (2 * ncm1 ^ 3)
u4x = ncm4 - (4 * ncm1 * ncm3) + (6 * ncm1 ^ 2 * ncm2) - (3 * ncm1 ^ 4)
u5x = ncm5 - (5 * ncm4 * ncm1) + (10 * ncm3 * ncm1 ^ 2)
    - (10 * ncm2 * ncm1 ^ 3) + (4 * ncm1 ^ 5)
u6x = ncm6 - (6 * ncm5 * ncm1) + (15 * ncm4 * ncm1 ^ 2)
    - (20 * ncm3 * ncm1 ^ 3) + (15 * ncm2 * ncm1 ^ 4) - (5 * ncm1 ^ 6)
u7x = ncm7 - (7 * ncm6 * ncm1) + (21 * ncm5 * ncm1 ^ 2)
    - (35 * ncm4 * ncm1 ^ 3) + (35 * ncm3 * ncm1 ^ 4)
    - (21 * ncm2 * ncm1 ^ 5) + (6 * ncm1 ^ 7)
u8x = ncm8 - (8 * ncm7 * ncm1) + (28 * ncm6 * ncm1 ^ 2)
    - (56 * ncm5 * ncm1 ^ 3) + (70 * ncm4 * ncm1 ^ 4)
    - (56 * ncm3 * ncm1 ^ 5) + (28 * ncm2 * ncm1 ^ 6) - (7 * ncm1 ^ 8)
' The following routine calculates the central moments of g(x).
' This expression determines the standard deviation based
' on the variance of the central moment, u2x.
sd = u2x ^ .5
' The standardized central moments for g(x).
u2z = u2x / sd ^ 2
u3z = u3x / sd ^ 3
u4z = u4x / sd ^ 4
u5z = u5x / sd ^ 5
u6z = u6x / sd ^ 6
u7z = u7x / sd ^ 7
u8z = u8x / sd ^ 8
' Calculates the standardized variable values and stores
' the values in an array.
FOR x = 1 TO 8 STEP 1
    z = (x - ncm1) / sd
    zarray(x) = z
NEXT

k2x = u2x
k3x = u3x
k4x = u4x - (3 * u2x ^ 2)
k5x = u5x - (10 * u3x * u2x)
k6x = u6x - (15 * u4x * u2x) - (10 * u3x ^ 2) + (30 * u2x ^ 3)
k7x = u7x - (21 * u5x * u2x) - (35 * u4x * u3x) + (210 * u3x * u2x ^ 2)
k8x = u8x - (28 * u6x * u2x) - (56 * u5x * u3x) - (35 * u4x ^ 2)
    + (420 * u4x * u2x ^ 2) + (560 * u3x ^ 2 * u2x) - (630 * u2x ^ 4)
' Calculates the standardized cumulants for g(x).
k2z = k2x / sd ^ 2
k3z = k3x / sd ^ 3
k4z = k4x / sd ^ 4
k5z = k5x / sd ^ 5
k6z = k6x / sd ^ 6
k7z = k7x / sd ^ 7
k8z = k8x / sd ^ 8
' Calculation of the Chebyshev-Hermite polynomials, Hr.
' Calculates the values for the first eight Hr polynomials.
FOR z = 0 TO 8 STEP 1
    H0 = 1
    H1 = z
    H2 = (z ^ 2) - 1
    H3 = (z ^ 3) - (3 * z)
    H4 = (z ^ 4) - (6 * z ^ 2) + 3
    H5 = (z ^ 5) - (10 * z ^ 3) + (15 * z)
    H6 = (z ^ 6) - (15 * z ^ 4) + (45 * z ^ 2) - 15
    H7 = (z ^ 7) - (21 * z ^ 5) + (105 * z ^ 3) - (105 * z)
    H8 = (z ^ 8) - (28 * z ^ 6) + (210 * z ^ 4) - (420 * z ^ 2) + 105
    ' Stores the values of Hr in an array.
    harray(z, 0) = H0
    harray(z, 1) = H1
    harray(z, 2) = H2
    harray(z, 3) = H3
    harray(z, 4) = H4
    harray(z, 5) = H5
    harray(z, 6) = H6
    harray(z, 7) = H7
    harray(z, 8) = H8
NEXT
' Calculation of the Chebyshev-Hermite polynomials for the
' transform of the standardized g(x), where the
' standardized variable is z = (x - ncm1) / sd.
' Calculates the values for the first eight polynomials Hr(z).
FOR i = 0 TO 8 STEP 1
    H0zx = 1
    H1zx = zarray(i)
    H2zx = (zarray(i) ^ 2) - 1
    H3zx = (zarray(i) ^ 3) - (3 * zarray(i))
    H4zx = (zarray(i) ^ 4) - (6 * zarray(i) ^ 2) + 3
    H5zx = (zarray(i) ^ 5) - (10 * zarray(i) ^ 3) + (15 * zarray(i))
    H6zx = (zarray(i) ^ 6) - (15 * zarray(i) ^ 4) + (45 * zarray(i) ^ 2) - 15
    H7zx = (zarray(i) ^ 7) - (21 * zarray(i) ^ 5) + (105 * zarray(i) ^ 3) - (105 * zarray(i))
    H8zx = (zarray(i) ^ 8) - (28 * zarray(i) ^ 6) + (210 * zarray(i) ^ 4) - (420 * zarray(i) ^ 2) + 105
    ' Stores the values of Hrzx in an array.
    zxharray(i, 0) = H0zx
    zxharray(i, 1) = H1zx
    zxharray(i, 2) = H2zx
    zxharray(i, 3) = H3zx
    zxharray(i, 4) = H4zx
    zxharray(i, 5) = H5zx
    zxharray(i, 6) = H6zx
    zxharray(i, 7) = H7zx
    zxharray(i, 8) = H8zx
NEXT

' The following routine calculates the values for the
' density of the normal distribution.
' Calculates the values for a(z).
FOR z = 0 TO 8 STEP 1
    alphaz = EXP(-.5 * z ^ 2) / SQR(2 * 3.14159265)
    aarray(z) = alphaz
NEXT
' The following routine calculates the density of the
' standardized normal distribution with the standardized
' variable z = (x - ncm1) / sd.
' Calculates the values for a(x) / sd.
FOR i = 1 TO 8 STEP 1
    alphazz = EXP(-.5 * zarray(i) ^ 2) / (SQR(2 * 3.14159265) * sd)
    azarray(i) = alphazz
NEXT
' The following routine calculates the values of the
' selected example distribution to be used for testing,
' simulation and comparison to the series expansion
' expressions.
' Calculates the values of the selected distribution.
' Selected distribution is log-normal.
FOR r = 1 TO 8 STEP 1
    logf = EXP(-((LOG(r) - mu) ^ 2) / (2 * sigma ^ 2)) / (r * sigma * SQR(2 * 3.14159265))
NEXT
' Gram-Charlier or Edgeworth Series, Type A.
' Selection of the formal or standardized Gram-Charlier series.
' Gram-Charlier series formal expansion.
' Calculates the values of the first eight terms of the
' Gram-Charlier formal series, a..g.
FOR i = 0 TO 8 STEP 1
    b = (u3z * harray(i, 3)) / 6
    c = ((u4z - 3) * harray(i, 4)) / 24
    d = ((u5z - (10 * u3z)) * harray(i, 5)) / 120
    e = ((u6z - (15 * u4z) + 30) * harray(i, 6)) / 720
    f = ((u7z - (21 * u5z) + (105 * u3z)) * harray(i, 7)) / 5040
    g = ((u8z - (28 * u6z) + (210 * u4z) - 315) * harray(i, 8)) / 40320
    a = 1 + b + c + d + e + f + g
    gcf = aarray(i) * a
NEXT
' Standardized Gram-Charlier series, where z = (x - ncm1) / sd and
' a(z) = EXP(-.5 * z ^ 2) / SQR(2 * 3.14159265).
' The first eight terms of the Gram-Charlier series, c1..h1.
FOR i = 0 TO 8 STEP 1
    c1 = (u3z * zxharray(i, 3)) / 6
    d1 = ((u4z - 3) * zxharray(i, 4)) / 24
    e1 = ((u5z - (10 * u3z)) * zxharray(i, 5)) / 120
    f1 = ((u6z - (15 * u4z) + 30) * zxharray(i, 6)) / 720
    g1 = ((u7z - (21 * u5z) + (105 * u3z)) * zxharray(i, 7)) / 5040
    h1 = ((u8z - (28 * u6z) + (210 * u4z) - 315) * zxharray(i, 8)) / 40320
    gcsff = aarray(i) * (1 + c1 + d1 + e1 + f1 + g1 + h1)
NEXT
' Standardized Gram-Charlier series, where z = (x - ncm1) / sd and
' a = EXP(-.5 * z ^ 2) / (SQR(2 * 3.14159265) * sd).
' Calculates the values of the first eight terms of the
' Gram-Charlier series, cc1..hh1.
FOR i = 0 TO 8 STEP 1
    cc1 = (u3z * zxharray(i, 3)) / 6
    dd1 = ((u4z - 3) * zxharray(i, 4)) / 24
    ee1 = ((u5z - (10 * u3z)) * zxharray(i, 5)) / 120
    ff1 = ((u6z - (15 * u4z) + 30) * zxharray(i, 6)) / 720
    gg1 = ((u7z - (21 * u5z) + (105 * u3z)) * zxharray(i, 7)) / 5040
    hh1 = ((u8z - (28 * u6z) + (210 * u4z) - 315) * zxharray(i, 8)) / 40320
    gcst = azarray(i) * (1 + cc1 + dd1 + ee1 + ff1 + gg1 + hh1)
NEXT
' Edgeworth Series Formal Expansion, Type A.
' Calculates the values of the first eight terms of the
' Edgeworth series, h..o.
FOR i = 0 TO 8 STEP 1
    h = (k3z * harray(i, 3)) / 6
    j = (k4z * harray(i, 4)) / 24
    l = (k5z * harray(i, 5)) / 120
    m = ((k6z + (10 * k3z ^ 2)) * harray(i, 6)) / 720
    n = ((k7z + (35 * k4z * k3z)) * harray(i, 7)) / 5040
    o = ((k8z + (56 * k5z * k3z) + (35 * k4z ^ 2)) * harray(i, 8)) / 40320
    ef = aarray(i) * (1 + h + j + l + m + n + o)
NEXT
' Standardized Edgeworth series, where z = (x - ncm1) / sd and
' a(z) = EXP(-.5 * z ^ 2) / SQR(2 * 3.14159265).
' Calculates the values of the first eight terms for the
' Edgeworth series, c1k..h1k.
FOR i = 0 TO 8 STEP 1
    c1k = (k3z * zxharray(i, 3)) / 6
    d1k = (k4z * zxharray(i, 4)) / 24
    e1k = (k5z * zxharray(i, 5)) / 120
    f1k = ((k6z + (10 * k3z ^ 2)) * zxharray(i, 6)) / 720
    g1k = ((k7z + (35 * k4z * k3z)) * zxharray(i, 7)) / 5040
    h1k = ((k8z + (56 * k5z * k3z) + (35 * k4z ^ 2)) * zxharray(i, 8)) / 40320
    ewsff = aarray(i) * (1 + c1k + d1k + e1k + f1k + g1k + h1k)
NEXT

' Edgeworth series standardized expansion, where
' z = (x - ncm1) / sd and
' a = EXP(-.5 * z ^ 2) / (SQR(2 * 3.14159265) * sd).
' Calculates the values for the first eight terms of the
' Edgeworth series, cc1k..hh1k.
FOR i = 0 TO 8 STEP 1
    cc1k = (k3z * zxharray(i, 3)) / 6
    dd1k = (k4z * zxharray(i, 4)) / 24
    ee1k = (k5z * zxharray(i, 5)) / 120
    ff1k = ((k6z + (10 * k3z ^ 2)) * zxharray(i, 6)) / 720
    gg1k = ((k7z + (35 * k4z * k3z)) * zxharray(i, 7)) / 5040
    hh1k = ((k8z + (56 * k5z * k3z) + (35 * k4z ^ 2)) * zxharray(i, 8)) / 40320
    ewst = azarray(i) * (1 + cc1k + dd1k + ee1k + ff1k + gg1k + hh1k)
NEXT
' End of program.

[1] A.K. Majumdar and H. Gamo, "Statistical measurements
of irradiance fluctuations of a multipass laser beam
propagated through laboratory-simulated atmospheric
turbulence," Applied Optics, 1982, Vol. 21, No. 12,
pp. 2229-2231.
[2] A.M. Prokhorov, F.V. Bunkin, K.S. Gochelashvili, and
V.I. Shishov, Proc. IEEE 63, 1975, p. 790.
[3] R.L. Fante, Proc. IEEE 63, 1975, p. 1669.
[4] J.W. Strohbehn, Ed., Laser Beam Propagation in the
Atmosphere, (Springer, Berlin, 1978).
[5] A. Ishimaru, Proc. IEEE 65, 1977, p. 1030.
[6] V.I. Tatarskii, A.S. Gurvich, B.S. Elepov, V.V.
Pokasov, and K.K. Sabelfeld, Opt. Acta 26, 1979, p. 531.
[7] M.S. Belenkii, V.V. Boronoev, N.Ts. Gomboev, and V.L.
Mironov, Opt. Lett. 5, 1980, p. 67.
[8] J. Carl Leader, J. Opt. Soc. Am. 71, 1981, p. 542.
[9] M. Kendall, A. Stuart, and J.K. Ord, Kendall's Advanced
Theory of Statistics, Vol. 1, Fifth Edition, 1987,
pp. 87-88, 210, 220-226.
[10] M. Johnson, "Conversion and Analysis of NOAA LASER
Atmospheric Propagation Data," MOMENT.DOC and
SEPARATE.DOC, unpublished papers, 1989.
[11] J.H. Churnside and R.G. Frehlich, "Probability
density function measurements of optical
scintillations in the atmosphere," SPIE Proc.,
Optical, Infrared, and Millimeter Wave
Propagation Engineering, 1988, p. 4.
[12] A.K. Majumdar, "Higher-order skewness and excess
coefficients of some probability distributions
applicable to optical propagation phenomena,"
J. Opt. Soc. Am., 1979, Vol. 69, No. 1, pp. 199-201.
[13] R.S. Lawrence and J.W. Strohbehn, "A Survey of Clear
Air Propagation Effects Relevant to Optical
Communications," Proc. IEEE, 1970, Vol. 58, No. 10,
pp. 1523-1533.
[14] J.H. Churnside and R.J. Hill, "Probability density of
irradiance scintillations for strong path-integrated
refractive turbulence," J. Opt. Soc. Am. A, 1987,
Vol. 4, p. 727.
[15] J.H. Churnside and S.F. Clifford, "Log-normal Rician
probability-density function of optical
scintillations in the turbulent atmosphere," J. Opt.
Soc. Am. A, 1987, Vol. 4, pp. 1923-1929.
[16] R.G. Frehlich and J.H. Churnside, "Probability
density function for estimates of the moments of
laser scintillation," SPIE Proc., Infrared, and
Millimeter Wave Propagation Engineering, 1988.
[17] W.W. Hines and D.C. Montgomery, Probability and
Statistics in Engineering and Management Science,
Second Edition, 1980, pp. 161, 189-191.
[18] R. Dashen, "The distribution of intensity in a
multiply scattering medium," Opt. Lett. 10, 1984.
[19] A.K. Majumdar, "High-order statistics of laser
irradiance fluctuations due to turbulence," J. Opt.
Soc. Am. A, 1984, Vol. 1, pp. 1067-1071.
[20] V.I. Tatarskii, The Effects of the Turbulent
Atmosphere on Wave Propagation, Israel Program for
Scientific Translations, 1971, TT-68-50464.
[21] A. Ishimaru, Wave Propagation and Scattering in
Random Media, 1978, Vol. 1, pp. 118, 126-128, 141-142.
[22] A. Ishimaru, Wave Propagation and Scattering in
Random Media, 1978, Vol. 2, pp. 349-351.
[23] A.K. Majumdar, Opt. Commun. 50, 1984, p. 1.
[24] H. Cramer, Mathematical Methods of Statistics, 1966,
pp. 132-133, 222.
[25] T.E. Ewart, "The probability distribution of
intensity for acoustic propagation in a randomly
varying ocean," J. Acoust. Soc. Am., 1984, 76(5).

Moments from the Third Set of
Proposed PDFs ......................... 32
Fourth Proposed PDF for the Moments ... 32
Moment Program Description ............ 34
Higher-Order Moments from the
Gram-Charlier/Edgeworth Series ........ 36
Histograms of the Experimental Data ... 37
Simulation of a Known PDF ............. 39
Construction of an Unknown PDF
from Experimental Data ................ 41
Histograms of Path Lengths Constructed
from the MOMENT Program ............... 45
Histograms of Path Lengths Constructed
from the Experimental Data and Gram-
Charlier Series ....................... 46
100 Meter Path Length ............. 46
1200 Meter Path Length ............ 46
2400 Meter Path Length ............ 46
GRAM-CHARLIER SERIES .................... 54
EDGEWORTH SERIES ........................ 56

In recent years, laser irradiance fluctuations have
been extensively studied [1,2-5], where laser beams were
propagated through real atmospheric turbulence as well as
laboratory-simulated turbulence [1,6-8]. However, the
exact statistics of irradiance fluctuations in turbulent
media are still not clearly understood. For an optical beam
propagating through a turbulent medium, the random spatial
and temporal fluctuations in the refractive index of the
medium give rise to fluctuations of irradiance. By
understanding the nature of irradiance fluctuations, we
can better understand the nature, origins and the time
histories of this type of important stochastic process.
It is important, from a practical point of view, to know
the form of the probability density function (PDF) and
the variance of irradiance fluctuations, for example, for
designing laser systems to operate under various
atmospheric conditions. For applications like atmospheric
optical communication systems, it is important to know the
degradation effects caused by atmospheric turbulence
between the laser transmitter and the receiver.
Therefore, it is important to know the correct
mathematical model of the PDF which also agrees with the

digitized and recorded for later processing to include the
measurements of the refractive-index structure parameter
for turbulence strength, Cn², the crosswind velocity and
the inner scale of turbulence, l₀.
The signal received from the 100 meter path length
was detected by five photomultiplier tube detectors after
passing through a 1-mm aperture and a 1-nm optical
bandpass filter. Each detector was connected to an
amplifier with a range of -10 to +10 volts. After
amplification, the voltage was converted by an analog-to-
digital converter to a number in the range of 0 to 4096,
with 2048 representing a baseline for 0 volts. After each
data run, a single record was collected with the laser
blocked (off) to permit estimation of offsets and noise in
the receiver electronics and establish a zero baseline.
The data was then fed into a data acquisition system. One
photomultiplier tube detector and a different data
recording format were used for the irradiance signals
received for the longer path lengths; all else remained
the same [2,14].
At the data acquisition end, the irradiance signal
was digitized with 12 bits of resolution at a rate of
5,000 samples per second. After collecting 10,833
samples, the system sampled the output of an incoherent-
light optical scintillometer. This instrument provided a
measure of Cn² averaged over a 250 m path parallel to the

ted by a multiplicative factor that obeys log-normal stat-
istics. On the basis of this model all the parameters
required by the density function could be calculated by
using such physical parameters as turbulence strength,
inner scale and propagation configuration [15].
A second PDF of the experimental data was proposed
by James Churnside and R.J. Hill for irradiance scin-
tillations in the case of strong scintillation (where the
irradiance variance decreases with further increases in
path-averaged refractive-index turbulence): the log-
normally modulated exponential PDF. They compared their
PDF with the experimental PDF's, the negative exponential
and log-normal statistics, and obtained an agreement which
was better than that of the K PDF. It was shown that the
receiver-aperture-averaged irradiance was log-normal for
sufficiently large apertures, and the comparison of moments
of measured irradiance with theoretical moments showed the
moment comparison method to be very misleading due to the
limitations of receivers and the sensitivity of high-order
moments to very large irradiances [14].
James Churnside and R.G. Frehlich proposed yet two
more PDF's for the experimental data, which were based on
the I-K and the log-normally modulated Rician
distributions. Each of the PDF's required two parameters
and could be used under any atmospheric conditions. Both
PDF's reduced to the negative exponential PDF in very

moments were done through a computer algorithm called
MOMENT, which is described further in the following
section. The numerical PDF was generated by counting
the number of occurrences of each value of intensity
contained in the part of the experimental data of
interest. The probability of each occurrence was then
approximated by the number of occurrences counted divided
by the number of samples counted. The higher-order
central moments are computed numerically as

μk = ∫ (x - x̄)^k p(x) dx,     (3.8)

μk ≈ Σ (x - x̄)^k n/m,     (3.9)

where n was the number of samples of value x, and m was
the total number of samples [10].
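The counting scheme of equation (3.9) is essentially what the MOMENT program implements. Below is a minimal sketch in Python, not the thesis's Turbo Pascal, of the same histogram-bin computation:

```python
def central_moments_from_bins(bins, maxorder=8):
    """Central moments of a numerical PDF stored as histogram bins,
    following Eq. (3.9): bins[x] counts the occurrences of the
    intensity value x."""
    total = sum(bins)
    mean = sum(x * n for x, n in enumerate(bins)) / total
    mom = [0.0] * (maxorder + 1)
    mom[0] = 1.0
    for x, n in enumerate(bins):
        p = n / total           # probability of value x
        d = x - mean
        a = d
        for k in range(2, maxorder + 1):
            a *= d              # accumulate (x - mean)^k
            mom[k] += a * p
    return mean, mom

# Tiny check: values 0..2 with counts 1, 2, 1 give mean 1, variance 0.5.
mean, mom = central_moments_from_bins([1, 2, 1], maxorder=4)
print(mean, mom[2], mom[3], mom[4])
```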
MOMENT Program
The MOMENT program calculated the central moments,
regular moments and normalized regular moments of the PDF
up to the nth order from the experimental data and stored
the results in histogram bins.
All of the numbers recorded were between 0 and 4096,

A histogram was constructed showing the transformation
values from the Edgeworth series and values derived
from the selected PDF, the log-normal distribution.
See Figure 4.1.
The results showed that the non-Gaussian curve
constructed from the Edgeworth series aligned
reasonably well with that of the selected PDF. From this
it was ascertained that the possibility of obtaining an
unknown PDF was feasible through an orthogonal or
asymptotic expansion.
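For reference, an Edgeworth expansion of this kind builds a non-Gaussian density from the standard normal density and the standardized cumulants. A minimal Python sketch (terms through the fourth cumulant, with the standard Edgeworth coefficients; illustrative only, not the thesis code):

```python
import math

def phi(z):
    """Standard normal density."""
    return math.exp(-z * z / 2.0) / math.sqrt(2.0 * math.pi)

def edgeworth(z, skew, ex_kurt):
    """Edgeworth approximation of a standardized density from its
    skewness (gamma1) and excess kurtosis (gamma2), using the
    Chebycheff-Hermite polynomials He3, He4, and He6."""
    he3 = z ** 3 - 3 * z
    he4 = z ** 4 - 6 * z ** 2 + 3
    he6 = z ** 6 - 15 * z ** 4 + 45 * z ** 2 - 15
    return phi(z) * (1.0 + skew / 6.0 * he3
                     + ex_kurt / 24.0 * he4
                     + skew ** 2 / 72.0 * he6)
```

When the skewness and excess kurtosis are zero the series collapses to the Gaussian density, and the correction terms integrate to zero, so the total probability mass stays at unity.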
Construction of an Unknown PDF
from the Experimental Data
From the MOMENT program the mean, μ, and the central
moments up to the fourth order were constructed
for the 100 m, 1200 m, and 2400 m path lengths. See Table
4.1. The central moments, μ_r, were then standardized,

    μ_r' = μ_r / σ^r                             (4.8)

and used to calculate the transformation values from the
standard measure of the Gram-Charlier series; see
equation (B.5). The unknown PDF, q(v2), of the output
voltage at the receiver, was then calculated; it was
defined as,

occurrences were between .55 and .47 volts.
Histograms of Path Lengths Constructed
from the Experimental Data and the
Gram-Charlier Series
100 Meter Path Length.
The unknown PDF developed from the approximation of
the Gram-Charlier series showed a non-Gaussian curve
lying very close to the normalized experimental data
points. See Figure 5.3.
1200 Meter Path Length.
The unknown PDF developed from the approximation of
the Gram-Charlier series of the 1200 meter path length
came fairly close to the normalized experimental data
values at the rise and downslope of the non-Gaussian
curve. See Figure 5.4.
2400 Meter Path Length.
The unknown PDF developed from the approximation of
the Gram-Charlier series showed good agreement with the
data at the rise of the sharp curve; however, on the
downslope of the curve the Gram-Charlier values deviated
from the normalized data fairly significantly, which may
have been due to scattering effects from the
atmospheric turbulence. See Figure 5.5.

σ represents the standard deviation (the square root of
the second central moment), and the standardized variable
z is defined as [9],

    z = (x − μ) / σ


This program was developed with Microsoft QuickBASIC,
version 4.0, by Alenka Brown-VanHoozer.
The program is written in a simple format for
readability by anyone.
The program determines:
1. non-central moments,
2. central moments,
3. cumulants,
4. standardized central moments and cumulants,
5. the density of the standardized normal distribution,
6. Chebycheff-Hermite polynomials,
7. Gram-Charlier series
(formal and standard measurement), and
8. Edgeworth series
(formal and standard measurement).
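The Chebycheff-Hermite polynomials of step 6 are typically generated from the recurrence He_{n+1}(x) = x He_n(x) − n He_{n−1}(x). A short Python sketch of that recurrence (for illustration only; the thesis program itself is written in QuickBASIC):

```python
def hermite_he(n, x):
    """Chebycheff-Hermite (probabilists') polynomial He_n(x),
    computed from the recurrence He_{n+1} = x*He_n - n*He_{n-1},
    starting from He_0 = 1 and He_1 = x."""
    h_prev, h = 1.0, x  # He_0, He_1
    if n == 0:
        return h_prev
    for k in range(1, n):
        h_prev, h = h, x * h - k * h_prev
    return h
```

For example, the recurrence reproduces He_3(x) = x^3 − 3x and He_4(x) = x^4 − 6x^2 + 3, the polynomials that appear in the Gram-Charlier and Edgeworth series of steps 7 and 8.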
The following routine declares the arrays to be used
for storing calculated data.
1. harray stores the Chebycheff-Hermite polynomial values.
2. zxharray stores the Chebycheff-Hermite polynomial
standardized values.
3. aarray stores the density normal distribution values.
4. zarray stores the density standardized normal
distribution values.
5. ncmarray stores the values for the non-central moments.
6. azarray stores the standardized values of a.
DIM harray(8, 8), zxharray(8, 8), aarray(9), zarray(9),
ncmarray(9, 9), azarray(9)
' The following routine calculates the non-central
' moments of a known distribution. In this case the
' example distribution is the log-normal distribution.
FOR x = 0 TO 8 STEP 1
    ux = EXP((theta * x) + ((x ^ 2 * var ^ 2) / 2))
    LPRINT "x, ux "; x; ux
    PRINT "x, ux "; x; ux
NEXT x
' Calculation of the central moments about the mean.
u0x = 1
u1x = 0
u2x = ncm2 - ncm1 ^ 2
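The same calculation can be written in Python for comparison (illustrative only; theta and var mirror the variables in the QuickBASIC listing above, with var taken as the log-normal standard deviation σ, and the example values theta = 0, var = 0.5 are assumptions of this sketch):

```python
import math

def lognormal_noncentral_moment(n, theta=0.0, var=0.5):
    """Non-central moment E[X^n] = exp(n*theta + n^2*var^2 / 2)
    of a log-normal distribution, matching the ux loop in the
    QuickBASIC listing (var plays the role of sigma)."""
    return math.exp(n * theta + (n ** 2 * var ** 2) / 2.0)

def lognormal_variance(theta=0.0, var=0.5):
    """Second central moment u2 = ncm2 - ncm1^2, mirroring the
    u2x line of the listing."""
    ncm1 = lognormal_noncentral_moment(1, theta, var)
    ncm2 = lognormal_noncentral_moment(2, theta, var)
    return ncm2 - ncm1 ** 2
```

The result agrees with the closed-form log-normal variance (e^{σ²} − 1) e^{2θ + σ²}, which gives a direct check on the moment loop.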