Citation

## Material Information

Title:
Performance evaluation of a square law receiver
Creator:
Meyer, Mark Mathew
Publication Date:
Language:
English
Physical Description:
118 leaves : illustrations ; 29 cm

## Subjects

Subjects / Keywords:
Digital communications ( lcsh )
Digital computer simulation ( lcsh )
Digital communications ( fast )
Digital computer simulation ( fast )
Genre:
bibliography ( marcgt )
theses ( marcgt )
non-fiction ( marcgt )

## Notes

Bibliography:
Includes bibliographical references (leaves 47-48).
General Note:
Submitted in partial fulfillment of the requirements for the degree, Master of Science, Department of Electrical Engineering.
Statement of Responsibility:
by Mark Mathew Meyer.

## Record Information

Source Institution:
Holding Location:
Auraria Library
Rights Management:
All applicable rights reserved by the source institution and holding location.
Resource Identifier:
21104538 ( OCLC )
ocm21104538
Classification:
LD1190.E54 1988m .M48 ( lcc )

Full Text
PERFORMANCE EVALUATION OF A SQUARE LAW RECEIVER
by
Mark Mathew Meyer
B.S., University of North Dakota, 1985
A thesis submitted to the
Faculty of the Graduate School of the
University of Colorado in partial fulfillment
of the requirements for the degree of
Master of Science
Department of Electrical Engineering
1988

This thesis for the Master of Science degree by
Mark Mathew Meyer
has been approved for the
Department of
Electrical Engineering
by
Date

Meyer, Mark M. (M.S., Electrical Engineering)
Performance Evaluation of a Square Law Receiver
Thesis directed by Professor John R. Clark
The probability density functions at the output of a square law receiver
(square law envelope detector) can be used to determine the probability of detection
(Pd) and probability of false alarm (Pfa) in a radar receiver or digital communication
system. The Pd and the Pfa define a measure of performance of a receiver that is of
most interest when attempting to ascertain the presence of a signal. This thesis
presents a numerical technique of determining the Pd and the Pfa for square law
detector systems with video bandwidth less than the PRE-D or IF bandwidth. A
number of examples are included which demonstrate the utility of this technique.
Results are also presented which compare the performance of a linear envelope detector to that of the
square law detector. A discussion of the utility of using the square law detector
analysis to predict linear detector performance is also included.
The form and content of this abstract are approved. I recommend its publication.
Signed
/ Dr. John R. Clark

CONTENTS
CHAPTER
I. INTRODUCTION............................................. 1
II. ANALYSIS................................................ 4
Physical Model......................................... 4
Mathematical Model................................... 5
Formulation of Output Density Functions .............. 17
Probability Density for the Noise Only Case........... 20
Probability Density for the Case of Signal Plus Noise. 22
Determination of Receiver Operating Characteristics... 22
III. RESULTS................................................. 26
Sensitivity Analysis.................................. 26
Comparison of Numerical Methods....................... 34
Simulation............................................. 39
IV. CONCLUSION.............................................. 43
GLOSSARY...................................................... 46
REFERENCES.................................................... 47
APPENDIX
A. DERIVATION OF AUTOCORRELATION OF
SQUARE LAW DETECTOR OUTPUT, COMPLEX
INPUT................................................... 49
B. DETERMINATION OF THE REAL PART OF THE SIGNAL
PLUS NOISE PROBABILITY DENSITY INTEGRAND................ 52
C. CALCULATION OF PROBABILITY OF DETECTION
FOR THE IDEAL SQUARE LAW DETECTOR................ 54

D. CALCULATION OF PROBABILITY OF DETECTION
ASSUMING GAUSSIAN OUTPUT DISTRIBUTIONS..... 56
E. DISCUSSION OF NUMERICAL TECHNIQUES
EMPLOYED IN PRODUCING NUMERICAL RESULTS.... 60
F. PROGRAM LISTINGS............................... 62

TABLES
Table
1. Video Filter Sensitivity Analysis Parameters..................... 28
2. PRE-D filter Sensitivity Analysis Parameters..................... 32

FIGURES
Figure
1. Block Diagram of Physical System............................. 4
2. Complex Lowpass Transformation...................................... 7
3. Block Diagram of Complex Lowpass Equivalent System........... 8
4. Square Law Detector Discrete Time Equivalent..................... 12
5. Graphical Relationship Between Linear and
Circular Convolution............................................. 15
6. Probability of Detection Curves for
Video Filter Sensitivity Analysis................................. 29
7. Overlay of Video and PRE-D Filter Responses Utilized
for Video Filter Sensitivity Analysis........................... 30
8. Probability of Detection Curves Generated via Gaussian
Assumption, Ideal Receiver Assumption, and Analysis
Technique of this Thesis.......................................... 31
9. Probability of Detection Curves for
PRE-D filter Sensitivity Analysis................................. 33
10. Overlay of Video and PRE-D Filter Responses Utilized
for PRE-D Filter Sensitivity Analysis............................. 34
11. Overlay of Noise Only Probability Density Functions
obtained via Separate Analysis Techniques Butterworth
PRE-D Filter...................................................... 37
12. Overlay of Noise Only Probability Density Functions
obtained via Separate Analysis Techniques Chebychev
PRE-D Filter...................................................... 38
13. Monte Carlo Simulation Block Diagram............................... 40
14. Comparison of Analytical and Simulation
Probability Density Curves........................................ 41

CHAPTER I
INTRODUCTION
In radar systems, noncoherent communication systems, and various
laboratory equipment to name but a few applications, a standard receiver system that
is used consists of a PRE-D or IF subsystem, a nonlinear device such as a linear
envelope or square law envelope detector, and a POST-D or video subsystem. In the
radar system the primary objective of the receiver is to determine the presence of and
ascertain the time of arrival of the returned signal. Similarly the communication
system attempts to reconstruct the transmitted waveform with minimal interference
from the transmission media and noise. The characteristics of the signal that the
radar and communication system utilize are known a priori, thus the IF and video
subsystems can be built to optimize the detection or reconstruction of the signal. In
[1], Marcum evaluates the performance, in terms of probability of detection, of a
very simplified receiver system that consists of an IF filter of bandwidth B, a linear
or square law envelope detector, and a brick wall video filter with bandwidth 2B.
This very simple system can be used as a good approximating model to determine the
performance of a number of "optimized" radar or communication systems.
The analysis of a receiver which has a video bandwidth that is less than the IF
bandwidth is a more difficult process. A number of authors have written papers on
the case of the square law envelope detector with video bandwidth less than IF
bandwidth: the first to do so were Kac and Siegert [2]. These authors determined
the density functions of the noise only and signal plus noise outputs of the video
filter in terms of the eigenvalues and eigenfunctions of an integral equation which

involved the IF correlation function as well as the video frequency response function.
Unfortunately, this technique involved the solution of an integral equation which,
except for very special cases, appears to be very difficult. Emerson [3] also
produced a solution to this problem via a characteristic function expansion technique.
His results were not easily generalized since his analysis was applicable for gaussian
shaped filter transfer functions. Hummels, Adams, and Harms [4] produced a
numerical method for determining the performance of a general square law receiver.
Their method utilized a series representation of noise that allows for matrix
formulation of the problem. The matrix formulation can be viewed as an
approximation to the integral equation, thus, the solution of the eigenvalues and
eigenvectors of the resultant matrix provide a general technique for determining the
performance of the square law receiver.
The focus of this thesis is to present a numerical method of performance
evaluation of a generalized square law envelope detector which will complement that
of [4]. This method is similar to [4], in that the solution is based on the eigenvalues
and eigenvectors of a matrix formulation of the problem. The key difference in the
approach is the technique in which the matrices are obtained. Rather than
approximating the noise process by a series expansion, the square law detector and
lowpass filter are approximated by their discrete equivalents. The discrete lowpass
filter structure was selected such that the problem could be solved via linear algebra
techniques. The results of this analysis are supported in two ways. The first is a
comparison of the presented method with results obtained using the analysis in [4].
Second is a Monte Carlo simulation of the square law detector which verifies the
analytical results.
At this point the question should arise: why examine another technique for
solving the square law envelope detector problem when the linear
envelope detector problem hasn't been solved? The answer to this question is simple.

The mathematics of the linear envelope problem are very difficult and to quote [5],
"From a purely mathematical point of view, the quadratic rectifier is much easier to
treat analytically, and in fact, seems at the moment to be the only response for which
more or less explicit results can be found". The technique which will be presented
was originally devised in an attempt to solve the linear envelope detector problem.
Although it was unsuccessful at the linear detector problem, it produced another
technique for solving the square law detector problem which should complement the
method of [4]. Although the linear problem may be intractable from an analytical
standpoint, a Monte Carlo simulation provides results which show the performance
of the linear envelope detector with respect to the square law envelope detector. As
with the idealized case, the performance for the square law detector is quite similar to
that of the linear detector. Therefore, the performance of a linear envelope detector
system can be predicted from a square law detector analysis.

CHAPTER II
ANALYSIS
Physical Model
As depicted in figure 1, the first component of the physical system consists of
a linear time invariant bandpass filter which has frequency and time domain response
characteristics that can be described mathematically. The second element is a
memoryless nonlinearity which provides an output at time t that is proportional to the
square of the input voltage at that instant of time. The third element in the system is a
linear time invariant lowpass or video filter which has a frequency and time domain
response that can also be described mathematically. The input to this system is the
sum of a desired signal component s(t) and a stationary bandpass Gaussian noise
process n(t) with noise density N_0/2 and bandwidth 2B centered at f_c > 2B. The
output Q(t) is the quantity that is observable, thus a mathematical description of the
properties of Q(t) will provide the necessary information to determine system
performance.
A spectrum analyzer is a typical real world device that utilizes a receiver
similar to the one described above. If the spectrum analyzer is placed in a zero scan

mode and the resolution bandwidth is set greater than the video bandwidth, the exact
configuration described above is realized. Although the nonlinearity might be a linear
envelope detector in place of the square law envelope detector, the analysis for the
square law envelope detector can be used to bound the performance of the system
that uses a linear envelope detector.
Mathematical Model
Typically in the analysis of bandpass systems, it is convenient to represent
the signals and bandpass filters by the complex lowpass equivalents. The
relationship between the complex lowpass equivalent of the signal, noise, and
bandpass filter and the actual quantity are as expressed in equations 1a-1c.
1a) s(t) = Re\{\tilde{s}(t)\exp(j\omega_c t)\}

1b) n(t) = Re\{\tilde{n}(t)\exp(j\omega_c t)\}

1c) h(t) = Re\{\tilde{h}(t)\exp(j\omega_c t)\}
In [6], a detailed treatment of the properties of n(t) can be obtained. Of
primary interest at this point in the discussion is the relationship between the original
spectral density of the noise process n(t) and its lowpass equivalent ñ(t). From [6]
a number of useful relationships between the noise process and its lowpass
equivalent are obtained. These will be useful in determining how the spectral
densities are related. The noise process n(t) can be written in the familiar
narrowband form.
2) n(t) = n_r(t)\cos(\omega_c t) - n_i(t)\sin(\omega_c t)
The autocorrelation of n(t) is
3) R_n(\tau) = R_{n_r}(\tau)\cos(\omega_c\tau) - R_{n_r n_i}(\tau)\sin(\omega_c\tau)

4) R_{n_r}(\tau) = R_{n_i}(\tau)
The defining equation for the autocorrelation of ñ(t) is written as follows.

5) R_{\tilde{n}}(\tau) = E\{\tilde{n}^*(t)\tilde{n}(t + \tau)\}

From equations 1b and 2 it is apparent that ñ(t) can be represented via the real
processes n_r(t) and n_i(t) as follows.

6) \tilde{n}(t) = n_r(t) + jn_i(t)
Substituting 6 into 5 and performing the indicated operation results in an expression
for the autocorrelation of ñ(t) in terms of the auto and cross correlations of the real
processes n_r(t) and n_i(t).
7) R_{\tilde{n}}(\tau) = R_{n_r}(\tau) + R_{n_i}(\tau) + jR_{n_r n_i}(\tau) - jR_{n_i n_r}(\tau)

Since

R_{n_r}(\tau) = R_{n_i}(\tau) \quad \text{and} \quad R_{n_i n_r}(\tau) = -R_{n_r n_i}(\tau),

equation 7 can be reduced to

8) R_{\tilde{n}}(\tau) = 2R_{n_r}(\tau) + j2R_{n_r n_i}(\tau)

The autocorrelation of n(t) can thus be rewritten as

9) R_n(\tau) = \frac{1}{2}[R_{n_r}(\tau)\cos(\omega_c\tau) + jR_{n_r}(\tau)\sin(\omega_c\tau)]
 + \frac{1}{2}[jR_{n_r n_i}(\tau)\cos(\omega_c\tau) - R_{n_r n_i}(\tau)\sin(\omega_c\tau)]
 + \frac{1}{2}[R_{n_r}(\tau)\cos(\omega_c\tau) - jR_{n_r}(\tau)\sin(\omega_c\tau)]
 + \frac{1}{2}[-jR_{n_r n_i}(\tau)\cos(\omega_c\tau) - R_{n_r n_i}(\tau)\sin(\omega_c\tau)]

An equivalent expression for 9 is

10) R_n(\tau) = \frac{1}{2}\exp(j\omega_c\tau)[R_{n_r}(\tau) + jR_{n_r n_i}(\tau)] + \frac{1}{2}\exp(-j\omega_c\tau)[R_{n_r}(\tau) - jR_{n_r n_i}(\tau)]

Noting the similarity between the terms inside the brackets and the expression for the
autocorrelation of the complex lowpass noise process, equation 10 can further be
reduced as follows.

11) R_n(\tau) = \frac{1}{4}\exp(j\omega_c\tau)R_{\tilde{n}}(\tau) + \frac{1}{4}\exp(-j\omega_c\tau)R_{\tilde{n}}^*(\tau)
The spectral density of n(t) can be found by taking the Fourier transform of 11.

12) S_n(f) = \frac{1}{4}\int_{-\infty}^{\infty} R_{\tilde{n}}(\tau)\exp[-j2\pi(f - f_c)\tau]\,d\tau + \frac{1}{4}\int_{-\infty}^{\infty} R_{\tilde{n}}^*(\tau)\exp[-j2\pi(f + f_c)\tau]\,d\tau

From these integrals, the expression relating the spectral densities can be obtained.

13) S_n(f) = \frac{1}{4}S_{\tilde{n}}(f - f_c) + \frac{1}{4}S_{\tilde{n}}(-f - f_c)
From equation 13 it is evident that the spectral density of the lowpass
equivalent is equal to the upper sideband of the narrowband noise process shifted
from f=fc to f=0 and scaled by 4 as shown in figure 2. It is of interest to note that
this transformation is not energy conserving. The energy in the lowpass equivalent
is 2 times that of the bandpass process. The relationship between the bandpass
process and its lowpass equivalent is described in equation lb. Since the bandpass
process is equivalent to only the real part of the lowpass equivalent modulated by a
complex exponential, 1/2 of the energy of the complex lowpass process existed only
as a mathematical convenience and will be discarded in the transformation.
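The power relationship above is easy to verify numerically. The following Python sketch (illustrative parameters, not from the thesis listings) builds narrowband noise in the form of equation 2, recovers the complex lowpass equivalent through the analytic signal, and checks that it carries twice the power of the bandpass process:

```python
import numpy as np

# Sketch with illustrative parameters: bandpass noise in the narrowband form of
# eq. 2, complex lowpass equivalent recovered via the analytic signal.
rng = np.random.default_rng(1)
fs, N, fc = 1000.0, 2**14, 100.0
t = np.arange(N) / fs
nr = np.convolve(rng.normal(size=N), np.ones(50)/50, mode="same")  # slow n_r(t)
ni = np.convolve(rng.normal(size=N), np.ones(50)/50, mode="same")  # slow n_i(t)
n = nr*np.cos(2*np.pi*fc*t) - ni*np.sin(2*np.pi*fc*t)              # eq. 2
# analytic signal via the FFT: zero the negative frequencies, double positive
X = np.fft.fft(n)
X[N//2+1:] = 0.0
X[1:N//2] *= 2.0
n_lp = np.fft.ifft(X) * np.exp(-2j*np.pi*fc*t)   # ñ(t): shift carrier to f = 0
# the lowpass equivalent carries twice the power of the bandpass process
assert np.isclose(np.mean(np.abs(n_lp)**2), 2*np.mean(n**2), rtol=0.05)
```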

Figure 2. Complex Lowpass Transformation.
Utilizing the transforms as outlined in equations la)-lc), the lowpass
equivalent system may be represented as depicted in figure 3. The output component
of the filter due to the signal input and noise input are v(t) and p(t) respectively.

Figure 3. Block Diagram of Complex Lowpass Equivalent System.
The next step in defining the mathematical model is to determine a
mathematical description of the time functions v(t) and p(t). To describe p(t) it is
necessary to determine only the autocorrelation of the noise process on the output of
the filter. The autocorrelation is sufficient to describe p(t). As the input to the linear
time invariant system is a zero mean stationary complex gaussian process, the
output must also be a zero mean stationary complex gaussian process. Papoulis [7]
provides the relationship between the input and output spectral densities for a linear
filter.
14) S_p(f) = S_{\tilde{n}}(f)|\tilde{H}(f)|^2
Since the spectral density of the complex lowpass process is 2N_0 for -B < f < B,
where B is defined such that the value of the energy spectrum of the filter is
insignificant at B, a valid approximation for S_p(f) is

15) S_p(f) = 2N_0|\tilde{H}(f)|^2
This relationship is useful because the autocorrelation and spectral densities
are transform pairs thus the autocorrelation of p(t) can be determined. This operation
is most conveniently accomplished via a partial fraction expansion of the energy
spectrum of the filter. An alternate expression for the energy spectrum is
16) |\tilde{H}(f)|^2 = \tilde{H}(f)\tilde{H}^*(f)
If the analysis is performed on a filter which is designed via traditional
techniques the filter function H(f) can be represented by a ratio of polynomials in f.

Thus the energy spectrum can be written in terms of a ratio of products of the roots
of those polynomials as in equation 17.

17) |\tilde{H}(f)|^2 = \frac{\prod_{i=1}^{nzero}(j2\pi f + z_i)\prod_{i=1}^{nzero}(-j2\pi f + z_i^*)}{\prod_{k=1}^{npole}(j2\pi f + p_k)\prod_{k=1}^{npole}(-j2\pi f + p_k^*)}

The standard filter designs (Chebychev, Butterworth, Elliptic, etc.) produce a
transfer function with a denominator which has distinct roots, thus a partial fraction
expansion of equation 17 is simplified considerably. The partial fraction expansion
assuming distinct roots is as follows.

18) |\tilde{H}(f)|^2 = \sum_{i=1}^{npole}\frac{Kp_i}{j2\pi f + p_i} + \sum_{i=1}^{npole}\frac{Kn_i}{-j2\pi f + p_i^*}

The constants Kp_i are determined via the following equation.

19) Kp_i = \frac{\prod_{l=1}^{nzero}(-p_i + z_l)\prod_{l=1}^{nzero}(p_i + z_l^*)}{\prod_{\substack{k=1 \\ k\ne i}}^{npole}(-p_i + p_k)\prod_{k=1}^{npole}(p_i + p_k^*)}

The constants Kni are determined in a similar fashion.
The autocorrelation of p(t) can now be determined by performing the inverse
Fourier transform on equation 15, substituting the expression for the energy
spectrum as outlined in equation 18. A further simplification of the transform can be
realized by examining the form of equation 18. The first summation represents the
autocorrelation for τ > 0. The key to this simplification is obtained from Papoulis
[7] as shown in equation 20, thus the autocorrelation is completely defined by
determining the values for τ > 0.

20) R(-\tau) = R^*(\tau)

The expression for the autocorrelation of p(t) for τ ≥ 0 is shown in equation 21. As
previously stated, the autocorrelation for negative τ is defined in equation 20. The
autocorrelation function will be complex only if the low pass equivalent energy
spectrum is not symmetric with respect to f = 0.

21) R_p(\tau) = 2N_0\sum_{i=1}^{npole} Kp_i\exp(-p_i\tau), \quad \tau \ge 0
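Equation 21 can be spot-checked for a single-pole lowpass filter, whose lone residue can be worked by hand and compared against a numerical inverse transform of equation 15. The Python sketch below uses illustrative values of p and N_0 (not from the thesis):

```python
import numpy as np

# One-pole lowpass: |H(f)|^2 = 1/((j2πf + p)(-j2πf + p)) has the single
# residue Kp1 = 1/(2p), so eq. 21 gives Rp(τ) = 2·N0·Kp1·e^{-pτ} = (N0/p)e^{-pτ}.
p, N0 = 2.0, 1.0
df = 1e-3
f = np.arange(-200.0, 200.0, df)
Sp = 2*N0 / ((2*np.pi*f)**2 + p**2)              # eq. 15 for this filter
for tau in (0.0, 0.5, 1.0):
    # inverse Fourier transform of Sp(f), evaluated as a Riemann sum
    Rp_num = np.sum(Sp * np.exp(2j*np.pi*f*tau)).real * df
    Rp_cf = (N0/p) * np.exp(-p*tau)              # closed form from eq. 21
    assert np.isclose(Rp_num, Rp_cf, rtol=1e-2)
```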
A generalized signal s(t) might require random process techniques to
represent. However, to limit the complexity of the problem, s(t) was chosen to be a
deterministic pulsed CW signal centered at f = f_c with amplitude A_0. A further
restriction, which does not limit the generality of the problem but simplifies the
analysis, is to limit s(t) to be of the form

22) s(t) = A_0(t)\cos(\omega_c t)

The resulting complex equivalent is

23) \tilde{s}(t) = A_0(t)
A reasonable approximation to v(t) can be obtained for pulse repetition
intervals (Tpri) greater than the effective impulse response length of the complex
lowpass filter. The effective impulse response length can be defined as the time
when the impulse response envelope is approximately 0. The interaction of the
pulses with each other, if separated by a time greater than the effective impulse
response length, will be negligible. The time function resulting from the convolution
of a single pulse with the filter impulse response is labeled v_p(t). Assuming no
interaction between pulses, the output v(t) can be written as a function of v_p(t) as
shown in equation 28. The most difficult step in the convolution is the determination
of the filter impulse response. The most convenient form of the impulse response
can be obtained by performing a partial fraction expansion of the filter transfer
function. The inverse Fourier transform is subsequently utilized to determine the

impulse response as a summation of exponential terms. These steps are summarized
in equations 24-26.
24) \tilde{H}(f) = \sum_{i=1}^{npole}\frac{K_i}{j2\pi f + p_i}

25) K_i = \frac{\prod_{l=1}^{nzero}(-p_i + z_l)}{\prod_{\substack{k=1 \\ k\ne i}}^{npole}(-p_i + p_k)}

26) \tilde{h}(t) = \sum_{i=1}^{npole} K_i\exp(-p_i t), \quad t \ge 0

The result of convolving the pulse with \tilde{h}(t) is

27) v_p(t) = A_0(t) * \tilde{h}(t) =
\begin{cases} A_0\sum_{i=1}^{npole}\frac{K_i}{p_i}[1 - \exp(-p_i t)] & 0 \le t \le T \\[4pt] A_0\sum_{i=1}^{npole}\frac{K_i}{p_i}\exp[-p_i(t - T)][1 - \exp(-p_i T)] & t \ge T \end{cases}

An expression for v(t) can be written as

28) v(t) =
\begin{cases} A_0\sum_{i=1}^{npole}\frac{K_i}{p_i}\{1 - \exp[-p_i(t - nT_{pri})]\} & nT_{pri} \le t \le T + nT_{pri} \\[4pt] A_0\sum_{i=1}^{npole}\frac{K_i}{p_i}\exp[-p_i(t - T - nT_{pri})][1 - \exp(-p_i T)] & T + nT_{pri} < t < (n+1)T_{pri} \end{cases}

For the simple case of A_0(t) = A_0, the expression for v(t) reduces to
v(t) = A_0\tilde{H}(0). \tilde{H}(0) is a constant term which reflects the DC gain of the filter.
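The closed-form pulse response of equation 27 can be checked against a brute-force numerical convolution for a single-pole filter. The sketch below uses illustrative values (unity DC gain, K = p), not parameters from the thesis:

```python
import numpy as np

# Single-pole filter h(t) = K·exp(-pt) with K = p (unity DC gain) driven by a
# rectangular pulse of amplitude A0 and width T; all values are illustrative.
p, K, A0, T, dt = 3.0, 3.0, 1.0, 1.0, 5e-4
t = np.arange(0.0, 2.0, dt)
h = K * np.exp(-p*t)
pulse = A0 * (t <= T)
v_num = np.convolve(pulse, h)[:t.size] * dt       # numerical convolution
i = int(0.5/dt)                                   # t = 0.5, inside the pulse
v_in = A0*(K/p)*(1 - np.exp(-p*0.5))              # eq. 27, first case
assert np.isclose(v_num[i], v_in, rtol=1e-2)
j = int(1.5/dt)                                   # t = 1.5, after the pulse
v_out = A0*(K/p)*np.exp(-p*(1.5 - T))*(1 - np.exp(-p*T))  # eq. 27, second case
assert np.isclose(v_num[j], v_out, rtol=1e-2)
```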
A mathematical model which represents the square law detector and low pass
filter in a form that simplifies the analysis task is desired. Previous authors [4]
utilized a frequency domain representation that simplified the problem by

approximating the noise process with a finite summation of sine and cosine time
functions, each modulated by an independent gaussian random variable. This
representation allowed them to write the output of the square law detector and video
filter in the form x^T A x. Imbedded in the symmetric matrix A are the properties
the IF and video filters while x is a vector of unit variance independent gaussian
random variables. The matrix A could be diagonalized, producing a weighted sum
of squares of independent gaussian variables for which the expressions for the
density functions could be computed.
Unlike [4], this thesis utilizes a time domain representation of the signal and
noise process on the output of the IF filter. It is a simple matter to describe the
output of the square law device with the input being the noise or signal+noise. The
difficulty arises when the video filter is encountered. Other authors [2,5] have
solved this problem in terms of an infinity of eigenvalues and eigenfunctions of an
integral equation. This solution does not easily produce a numerical result as the
eigenvalues and eigenfunctions are not readily determined. This thesis utilizes an
approximation of the square law detector and video filter by discretizing the signal
and noise process. The infinite impulse response video filter can be approximated
by a finite impulse response filter. The result of this approximation is a matrix
formulation in a form similar to that of [4]. The continuous time description of the
square law detector and video filter is diagramed in figure 3. The sampled time
equivalent is contained below in figure 4.
Figure 4. Square Law Detector Discrete Time Equivalent.

The relationship between a continuous time random process and the discrete
equivalent is contained in equation 29. This relationship is the stochastic version of
the sampling theorem as adapted from [7].

29) p(t + \tau) = \sum_{n=-\infty}^{\infty} p(t + nT_s)\,\frac{\sin[2\pi W(\tau - nT_s)]}{2\pi W(\tau - nT_s)}, \qquad T_s \le \frac{1}{2W}, \quad S_p(f) = 0 \ \text{for} \ |f| > W
As with deterministic sampling, this theorem states that given a sampling
frequency such that the spectral density is zero for frequencies greater than fs/2, the
continuous time process can be determined from past and future samples of the
discrete time process. Similarly the signal component v(t) can also be determined
from its past and future samples. The spectral density of the sampled process can
also be obtained from [7]. It is shown to be a function of the spectral density of the
continuous time process as outlined in equation 30.
30) S_{p_d}(f) = \frac{1}{T_s}\sum_{n=-\infty}^{\infty} S_{p_c}\!\left(f + \frac{n}{T_s}\right), \qquad T_s = \frac{1}{f_s}
If the requirement on the sampling interval is satisfied, as outlined in equation
29, the spectral density of the discrete time process will equal the spectral density of
the continuous time process in the frequency range |f| < f_s/2. The autocorrelations
of the discrete and continuous time signal will also be equal in that interval.
The square law detector modifies the spectral densities of both the continuous
and discrete time processes. Assuming v(t)=0, the modified spectral densities are
obtained most easily from the relationship between the input and output
autocorrelations which is derived in appendix A.
31) R_r(\tau) = R_p^2(0) + R_p(\tau)R_p^*(\tau)

The autocorrelation function for the discrete process is the sampled version of 31.

32) R_r(mT_s) = R_p^2(0) + R_p(mT_s)R_p^*(mT_s)
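The input-output autocorrelation relationship of equation 31 can be spot-checked by Monte Carlo at a single lag. The sketch below uses an illustrative correlation value rho and unit power R_p(0) = 1 (not values from the thesis):

```python
import numpy as np

# Monte Carlo check of eq. 31 at one lag: with r = |p|^2 and Rp(0) = 1,
# E{r(t)r(t+τ)} = Rp(0)^2 + |Rp(τ)|^2 for a circular complex gaussian process.
rng = np.random.default_rng(3)
M, rho = 200000, 0.6 + 0.3j
# correlated pairs of unit-power circular complex gaussian samples
p0 = (rng.normal(size=M) + 1j*rng.normal(size=M)) / np.sqrt(2)
e = (rng.normal(size=M) + 1j*rng.normal(size=M)) / np.sqrt(2)
p1 = rho*p0 + np.sqrt(1 - abs(rho)**2)*e         # so E{p0* p1} = rho
Rr_mc = np.mean(np.abs(p0)**2 * np.abs(p1)**2)   # sample E{r(t)r(t+τ)}
assert np.isclose(Rr_mc, 1.0 + abs(rho)**2, rtol=0.05)
```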

The Fourier transforms of equations 31 and 32 result in expressions for the
spectral densities of the discrete and continuous time processes. The spectral
densities for the continuous and discrete time processes are contained in equations 33
and 34 respectively. The symbol * denotes convolution.

33) S_{r_c}(f) = R_p^2(0)\delta(f) + S_{p_c}(f) * S_{p_c}(-f)

34) S_{r_d}(f) = R_p^2(0)\delta(f) + S_{p_d}(f) * S_{p_d}(-f)
In equation 33 the convolution of the continuous time spectral density with
itself produces an output spectral density with bandwidth twice that of the input. This
type of convolution is sometimes referred to as linear convolution. The discrete time
spectral density has infinite bandwidth as defined in equation 30. The convolution of
this process with itself is referred to as circular convolution, since an image of the
spectrum around f = 0 appears at f = nf_s. Intuitively, the spectral density of the discrete
time model should match that of the continuous time model in the range |f| < f_s/2 at
each point in the system for the model to be an accurate representation. Assuming
the bandwidth of the spectral density of p(t) is W, one can specify the sampling
frequency to be f_s = 2W, which will satisfy the conditions of equation 29. With this
definition of f_s, the spectral density of the discrete time process p(m) has images of
the continuous time spectral density located immediately adjacent to each other. The
convolution as defined in equation 34 will cause aliasing to occur when f_s = 2W.
Under this condition, the spectral densities of the discrete and continuous time
processes will not be equal in the frequency interval |f| < f_s/2. A higher sampling
frequency can be defined such that the circular convolution will act like the
linear convolution in the interval |f| < f_s/2. With f_s = 4W,
the discrete time spectral density convolution will produce an output spectral density
that is equal to that of the continuous time spectral density on the output of the square
law device in the interval |f| < f_s/2.
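The guard-band argument can be illustrated with discrete spectra. In the Python sketch below (an assumed triangular spectrum, purely illustrative), self-convolving a spectrum that fills the whole band aliases, while zero-padding, which corresponds to raising f_s from 2W to 4W, reproduces the linear convolution:

```python
import numpy as np

# A spectrum that fills the discrete band (fs = 2W) aliases under circular
# self-convolution; zero-padding to fs = 4W provides the needed guard band.
N = 64
S = 1.0 - np.abs(np.linspace(-1, 1, N))            # triangular spectrum, 2W wide
def circ_conv(a):                                  # circular convolution via DFT
    return np.real(np.fft.ifft(np.fft.fft(a)**2))
lin = np.convolve(S, S)                            # linear convolution, 2N-1 long
aliased = circ_conv(S)                             # fs = 2W: images overlap
padded = circ_conv(np.concatenate([S, np.zeros(N)]))  # fs = 4W guard band
assert np.allclose(padded[:2*N - 1], lin)          # matches linear convolution
assert not np.allclose(aliased, lin[:N])           # aliased result does not
```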

Figure 5. Graphical Relationship Between Linear and Circular Convolution
At this point in the discussion, it is of interest to show that the discrete and
continuous time processes are mean square equivalent at the output of the square law
detector if f_s > 4W.
The output of the continuous time square law detector could be sampled at
f_{s1} > 4W to form a sampled process r_1(mT_{s1}) which is representative of the
continuous time process as outlined in equation 29. The mean square error between
r(mT_s) and r_1(mT_{s1}) is equal to zero if the processes are mean square equivalent.
This expression can be written as follows.
35) E\{|r(mT_s) - r_1(mT_{s1})|^2\} = 0

Expanding this equation produces

36) E\{r(mT_s)r^*(mT_s) + r_1(mT_{s1})r_1^*(mT_{s1})\} - E\{r(mT_s)r_1^*(mT_{s1}) + r^*(mT_s)r_1(mT_{s1})\} = 0

Since the random processes r(mT) and r_1(mT) can be replaced by |p(mT)|^2, it
is apparent that the processes will be equivalent in the mean square sense if T_s = T_{s1}.
Sampling p(t) at a sampling frequency twice that specified by equation 29 will ensure
equivalence between the discrete time and continuous time processes at each point in
the system. This proof satisfied the noise only case, however one can verify the
equivalence of the spectral density for the signal+noise case as well. This implies
that the continuous signal+noise process on the output of the square law detector can
be represented as past and future samples of the discrete signal+noise process as
well.
A realizable filter will theoretically produce a nonzero spectral density for all
frequencies. However, an acceptable approximation to a bandlimited spectral density
can be determined by obtaining a value of f=fm where the magnitude of the frequency
response of the filter is small relative to the peak value. If the sampling frequency is
specified under the assumption W = f_m, a negligible amount of aliasing will occur
upon discretizing the signal. For the purposes of this thesis the frequency value
where the magnitude of the energy spectrum of the filter was 40 dB less than the
peak value was designated as W.
The FIR filter can be used to approximate continuous time filters in the
frequency range -f_s/2 < f < f_s/2. A number of techniques exist to
determine the coefficients of the desired filter. The simplest technique is to sample
the impulse response of the desired filter at a time interval T_s as defined previously.
Since the filters of interest will, for practical purposes, be bandlimited to a frequency
less than f_s/2, this technique will provide a reasonable approximation. Another
method utilizes a least mean square adaptive technique which minimizes the error
between the frequency response of the FIR filter and the continuous time filter in the
region -f_s/2 < f < f_s/2. Of the two, the impulse
response sampling technique provided a simpler and more effective method of

producing a good approximation. One problem that is experienced with
approximating a continuous time filter with a FIR filter is that a large number of
coefficients are required when approximating a filter with a bandwidth that is narrow
relative to fs/2.
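The impulse response sampling technique can be sketched for a single-pole filter. In the Python sketch below, the pole, sampling rate, and tap count are illustrative, and the half-weighting of the first tap is a standard correction for the jump in h(t) at t = 0, not something specified in the thesis:

```python
import numpy as np

# FIR taps from sampling h(t) = p·exp(-pt): taps are Ts·h(nTs), with the t = 0
# tap halved (midpoint of the jump); compare against H(f) = p/(j2πf + p).
p, fs, M = 1.0, 20.0, 400
Ts = 1.0 / fs
nTs = np.arange(M) * Ts
taps = Ts * p * np.exp(-p*nTs)
taps[0] *= 0.5                                       # midpoint value at the jump
for f in (0.0, 0.5, 1.0):
    H_fir = np.sum(taps * np.exp(-2j*np.pi*f*nTs))   # FIR frequency response
    H_ct = p / (2j*np.pi*f + p)                      # continuous time filter
    assert abs(H_fir - H_ct) < 0.05 * abs(H_ct)      # within 5% in the band
```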
The sensitivity analysis results that are presented in this thesis are concerned
less with the actual frequency response of the video filter and more with the
difference in performance for a change in the frequency response. For this study
some of the filter coefficients are determined via a windowed Fourier design.
Formulation of Output Density Functions
The output of the sampler is a sequence of jointly distributed complex
gaussian variables z_i with mean values E\{z_i\} = \bar{z}_i = v(t + iT_s). For a given length K
the joint density function for these random variables can be defined as follows [6].

37) z_i = x_i + jy_i

38) p(x, y) = p(z) = \frac{\exp\{-\frac{1}{2}[z - \bar{z}]^T R^{-1}[z - \bar{z}]^*\}}{(2\pi)^K|R|}
(27t)K|R|
In the previous equation R is the covariance matrix of the sequence of random
variables. The defining equation for R is
39) R = \frac{1}{2}E\{[z - \bar{z}]^*[z - \bar{z}]^T\}
As the mean value of the process z(t) is due to the signal only, the
autocorrelation of p(t) can be utilized to determine the covariance of the sequence zj.
From [7] it is shown that autocorrelation of the sampled process is the samples of the
continuous time autocorrelation. Thus, the elements of the covariance matrix can be
determined by sampling R_p(\tau) and assigning the elements as follows.

40) R_{ij} = \frac{1}{2}R_p[(j - i)T_s] \quad \text{for } i = 1 \text{ to } K \text{ and } j = i \text{ to } K
Utilizing equation 20, the other elements of R can be assigned

41) R_{ij} = R_{ji}^* \quad \text{for } j = 1 \text{ to } K \text{ and } i = 1 \text{ to } j
From equations 40 and 41 it is evident that R^* = R^T. It can also be shown, for
linearly independent z_i's, that R is positive definite, therefore R is a Hermitian
positive definite matrix.
A single output sample of the square law detector and low pass FIR filter can
be described by the following quadratic form. In this equation F is a diagonal KxK
matrix whose diagonal elements contain the taps of the FIR filter. Since F is a real
diagonal matrix it also is Hermitian.
42) Q = z^{T*}Fz
The task of finding the distribution of Q is simplified by finding a matrix
transformation which simultaneously diagonalizes R and F. The diagonalization
will produce a sum of independent random variables.
A result from linear algebra theory is that a KxK positive definite Hermitian
matrix has K positive eigenvalues. The eigenvectors corresponding to these
eigenvalues can form an orthonormal set. Thus, R can be written as follows.
43) R = U\Lambda U^{T*}
U is the column matrix [V_1\ V_2\ \cdots\ V_K] of orthonormal eigenvectors associated with
the eigenvalues \lambda_1, \lambda_2, \ldots, \lambda_K that are the elements of the diagonal matrix \Lambda. As R
is positive definite and U is orthonormal, the inverse of R is defined as
44) R-1 = UA-V*
To produce a transformation which simultaneously diagonalizes F and R one
must recognize that Λ can be factored into the following form.
45) Λ = ψ*ψ^T
In the previous equation ψ is a diagonal matrix whose elements are the square roots
of the eigenvalues λ_i. One can then form the transformation
46) w = ψ^{−1}U^T z

This transformation results in a sequence of gaussian random variables w that are
statistically independent with unit variances. The mean values of the random
variables are defined as
47) w̄ = ψ^{−1}U^T z̄
The joint density function of w can be written as
48) p(w) = exp{−½[w−w̄]^T[w−w̄]*} / (2π)^K
Taking the inverse of equation 46 results in the following.
49) z = U*ψw
The quadratic form of 42 becomes
50) Q = w^{T*}(ψ^{T*}U^T F U*ψ)w = w^{T*}Tw
51) T = ψ^{T*}U^T F U*ψ
From 51 it is evident that T^T = T*. Therefore, T is a Hermitian matrix as well and
can be represented in the following form.
52) T = SΦS^{T*}
In equation 52, S is the column matrix of orthonormal eigenvectors of T and Φ is the
diagonal matrix of the eigenvalues of T. A new transformation can be introduced.
53) n = S^{T*}w
54) w = Sn
The quadratic form Q is then reduced to a diagonal form.
55) Q = n^{T*}Φn = Σ_{i=1}^{K} Φ_i|n_i|²
The covariance matrix of n is still the identity matrix; however, the mean value of n is
determined by the following transformation.
56) n̄ = S^{T*}ψ^{−1}U^T z̄
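As a concrete check of equations 43 through 55, the following NumPy sketch verifies numerically that the chain of transformations reduces the quadratic form to a sum of weighted squared magnitudes. The covariance R and the filter taps here are randomly generated stand-ins, not the thesis's actual matrices.

```python
import numpy as np

rng = np.random.default_rng(0)
K = 6

# Hypothetical Hermitian positive definite covariance R (standing in for eq. 40)
A = rng.standard_normal((K, K)) + 1j * rng.standard_normal((K, K))
R = A @ A.conj().T + K * np.eye(K)

# Real diagonal F holding hypothetical FIR taps (eq. 42); some taps may be negative
F = np.diag(rng.standard_normal(K))

lam, U = np.linalg.eigh(R)                 # eq. 43: R = U diag(lam) U^(T*)
psi = np.diag(np.sqrt(lam))                # eq. 45
T = psi @ U.T @ F @ U.conj() @ psi         # eq. 51
phi, S = np.linalg.eigh(T)                 # eq. 52: T = S diag(phi) S^(T*)

# Verify eq. 55 for one random complex sample vector z
z = rng.standard_normal(K) + 1j * rng.standard_normal(K)
w = np.diag(1.0 / np.sqrt(lam)) @ U.T @ z  # eq. 46
n = S.conj().T @ w                         # eq. 53
Q_direct = (z.conj() @ F @ z).real         # eq. 42
Q_diag = float(phi @ np.abs(n) ** 2)       # eq. 55
assert np.isclose(Q_direct, Q_diag)
```

Note that T is Hermitian by construction, so its eigenvalues Φ_i are real even when F is indefinite.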
The density function of n can be written as follows.
57) n_i = α_i + jβ_i
58) p(α_i, β_i) = (1/2π) exp{−½|n_i − n̄_i|²}
59) p(n) = Π_{i=1}^{K} p(α_i, β_i)
The density function of Q can be found by solving for the characteristic
function of Q and then performing the Fourier transform on the resultant
characteristic function. The characteristic function of a sum of independent random
variables is simply the product of the characteristic functions of each random
variable. The characteristic function of one term of Q is
60) M_{Q_i}(υ) = E{exp(jυΦ_i|n_i|²)} = exp[jυΦ_i|n̄_i|² / (1−2jυΦ_i)] / (1−2jυΦ_i)
Utilizing the relationship between the characteristic functions of independent random
variables and equation 60, the characteristic function of Q can be written as follows.
61) M_Q(υ) = E{exp(jυQ)} = exp[Σ_{i=1}^{K} jυΦ_i|n̄_i|² / (1−2jυΦ_i)] / Π_{i=1}^{K} (1−2jυΦ_i)
The density function of Q can be found by taking the inverse Fourier transform of
equation 61:
p(Q) = (1/2π) ∫_{−∞}^{∞} M_Q(υ) exp(−jυQ) dυ
Probability Density for The Noise Only Case
For the noise only case, the mean value of the random process is equal to
zero. This leads to considerable simplification of the characteristic function since the
value of the exponential is now equal to 1. The characteristic function for the noise
only case is

62) M_Q(υ) = 1 / Π_{i=1}^{K} (1−2jυΦ_i)
When the eigenvalues are distinct, the characteristic function can be expanded
via a partial fraction expansion to yield
63) M_Q(υ) = Σ_{i=1}^{K} C_i / (1−2jυΦ_i)
The C_i's can be found from equation 64.
64) C_i = Π_{k=1, k≠i}^{K} Φ_i / (Φ_i − Φ_k)
The density function of Q for the noise only case is now found by taking the
inverse Fourier transform of M_Q. This transformation results in the following
equation for p(Q).
65) p(Q) = Σ_{i=1}^{K} [C_i / (2|Φ_i|)] exp(−Q/(2Φ_i)) u[sgn(Φ_i)Q]
No restrictions were placed upon the taps of the FIR filter whose values
compose the diagonal elements of F. Thus, F and the resultant matrix T are not
positive definite in the general case. A Hermitian matrix which is not positive
definite has real eigenvalues, some of which are negative. Viewing the form of the noise
only density function, it is apparent that p(Q<0) ≠ 0 in general. This result seems to
defy intuition since the output of the square law detector is always positive.
However, the square law output is modified by the FIR filter producing Q. The filter
coefficients of the FIR filter are not necessarily all positive, thus it is possible that an
output from the FIR filter is negative. If the filter coefficients are all positive the
matrix F is positive definite. If the matrix F is positive definite, the eigenvalues of T
will be positive and p(Q<0)=0 will be true.
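The noise only density of equation 65 is straightforward to evaluate numerically. The sketch below (eigenvalues chosen purely for illustration, one of them negative) computes the C_i of equation 64, checks that the density integrates to one, and confirms that p(Q) is nonzero for Q < 0 when a negative eigenvalue is present.

```python
import numpy as np

def noise_only_pdf(Q, phi):
    """Sketch of eq. 65 for distinct eigenvalues phi (some possibly negative)."""
    Q = np.atleast_1d(np.asarray(Q, dtype=float))
    phi = np.asarray(phi, dtype=float)
    p = np.zeros_like(Q)
    for i, ph in enumerate(phi):
        C = np.prod(ph / (ph - np.delete(phi, i)))   # eq. 64
        mask = np.sign(ph) * Q >= 0                  # unit step u[sgn(phi_i) Q]
        p[mask] += C / (2 * abs(ph)) * np.exp(-Q[mask] / (2 * ph))
    return p

phi = [2.0, 1.0, 0.5, -0.25]        # hypothetical distinct eigenvalues of T
Q = np.linspace(-40.0, 120.0, 400001)
p = noise_only_pdf(Q, phi)
area = np.sum((p[1:] + p[:-1]) * np.diff(Q) / 2)   # should be close to 1
```

The negative eigenvalue contributes only on the Q < 0 half line, exactly as the unit step in equation 65 dictates.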

Probability Density For the Case of Signal Plus Noise
In equation 66 the probability density function is stated explicitly as an
integral equation in υ. The solution of this equation by analytic means is limited to
the noise only case (i.e., n̄_i = 0). For the signal plus noise case, a numerical technique
for evaluating the integral will have to be utilized.
66) p(Q) = (1/2π) ∫_{−∞}^{∞} exp(−jυQ) exp[Σ_{i=1}^{K} jυΦ_i|n̄_i|² / (1−2jυΦ_i)] / Π_{i=1}^{K} (1−2jυΦ_i) dυ
The equation for p(Q) is represented by an integral equation which has a
complex integrand. As probability density functions are known to be real, the real
part of the integrand can be separated out and integrated to produce the desired
density function. Equation 67 represents the real part of equation 66. Please refer to
appendix B for details on the determination of the real part of the integrand in
equation 66.
67) p(Q) = (1/2π) ∫_{−∞}^{∞} {exp[−Σ_{i=1}^{K} 2υ²Φ_i²|n̄_i|² / (1+4υ²Φ_i²)] / Π_{i=1}^{K} (1+4υ²Φ_i²)^{1/2}}
        × cos[Qυ − Σ_{i=1}^{K} (υΦ_i|n̄_i|² / (1+4υ²Φ_i²) + tan⁻¹(2υΦ_i))] dυ
From these expressions for the density functions of the noise only and signal
plus noise cases, the receiver operating characteristics (ROC) for a particular IF-
Video filter bandwidth ratio can be determined. The determination of a ROC
involves integrating the probability density functions from some threshold voltage V_t
to an upper limit of ∞ as expressed in the following equations.
68) P_fa = ∫_{V_t}^{∞} p_n(Q) dQ
69) P_d = ∫_{V_t}^{∞} p_{s+n}(Q) dQ
The choice of the threshold voltage V_t is somewhat arbitrary. One method of
choosing V_t is to determine an acceptable probability of false alarm (P_fa) and solve
for V_t as a function of P_fa. The equation for P_fa readily integrates into the expression
contained in equation 70. This expression assumes the eigenvalues are ordered from
largest positive to largest negative, K_pos is the number of positive eigenvalues, and
V_t is a positive number.
This equation cannot be solved in a closed form for Vt, thus a numerical
method must be employed to determine a solution. The Newton Raphson root
finding technique was the numerical method used to obtain the values of Vt for a
given Pfa. Details of this and other numerical methods employed in producing the
results in this thesis can be obtained in Appendix E.
Having determined Vt and Pfa, the next step is to determine the probability of
detection for the chosen Vt. In [4] the expression for probability of detection is very
similar to that of equation 69. Adapting the technique utilized by the authors of [4] to
equation 69 results in an expression for Pd which can be solved numerically.
70) P_fa = Σ_{i=1}^{K_pos} C_i exp(−V_t/(2Φ_i))
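A minimal sketch of the Newton Raphson solution for V_t described above, assuming the positive eigenvalues and their C_i have already been computed. The function names and starting value are illustrative, not those of the thesis software described in Appendix E.

```python
import math

def pfa(vt, phi_pos, C):
    # eq. 70: for vt > 0 only the positive eigenvalues contribute
    return sum(c * math.exp(-vt / (2 * p)) for p, c in zip(phi_pos, C))

def threshold_for_pfa(target, phi_pos, C, vt0=1.0, tol=1e-12):
    """Newton Raphson iteration for pfa(vt) = target (illustrative sketch)."""
    vt = vt0
    for _ in range(200):
        f = pfa(vt, phi_pos, C) - target
        df = sum(-c / (2 * p) * math.exp(-vt / (2 * p)) for p, c in zip(phi_pos, C))
        step = f / df
        vt -= step
        if abs(step) < tol:
            break
    return vt

# Sanity check: a single unit eigenvalue with C = 1 gives Pfa = exp(-vt/2),
# so the solver should return vt = -2 ln(Pfa)
vt = threshold_for_pfa(1e-7, [1.0], [1.0])
```

With several eigenvalues of mixed-sign C_i, a starting value near the bulk of the noise distribution helps the iteration converge.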
71) let e(υ) = exp[−Σ_{i=1}^{K} 2υ²Φ_i²|n̄_i|² / (1+4υ²Φ_i²)] / Π_{i=1}^{K} (1+4υ²Φ_i²)^{1/2}
72) and g(υ) = −Σ_{i=1}^{K} [υΦ_i|n̄_i|² / (1+4υ²Φ_i²) + tan⁻¹(2υΦ_i)]
73) then p(Q) = (1/2π) ∫_{−∞}^{∞} e(υ) cos[Qυ + g(υ)] dυ
An alternate expression for the probability of detection is P_d = 1 − P(Q < V_t). The
following is an expression for P(Q < V_t).
74) P(Q < V_t) = ∫_{−∞}^{V_t} p(Q) dQ = ∫_{−∞}^{0} p(Q) dQ + ∫_{0}^{V_t} p(Q) dQ
Replacing the integrand of equation 74 with equation 73 results in the following
equation.
75) P(Q < V_t) = (1/2π) ∫_{−∞}^{0} ∫_{−∞}^{∞} e(υ) cos[Qυ + g(υ)] dυ dQ
        + (1/2π) ∫_{0}^{V_t} ∫_{−∞}^{∞} e(υ) cos[Qυ + g(υ)] dυ dQ
A simplified solution for P(Q < V_t) can be obtained by interchanging the order of
integration and performing the integration with respect to Q. The first integral in
equation 75 can then be written as
76) ∫_{−∞}^{∞} e(υ) [ ∫_{−∞}^{0} cos[Qυ + g(υ)]/(2π) dQ ] dυ
In [4] it was shown that the inner integral of equation 76 is equivalent to
77) ∫_{−∞}^{0} cos[Qυ + g(υ)]/(2π) dQ = ½δ(υ) + sin[g(υ)]/(2πυ)
Since e(0) = 1, the first integral of equation 75 evaluates to the following.
78) ∫_{−∞}^{0} p(Q) dQ = ½ + ∫_{−∞}^{∞} e(υ) sin[g(υ)]/(2πυ) dυ
Similarly, the second integral in equation 75 can be evaluated by interchanging the
integrations. This results in the following.
79) ∫_{0}^{V_t} p(Q) dQ = ∫_{−∞}^{∞} e(υ) [sin[V_tυ + g(υ)] − sin[g(υ)]]/(2πυ) dυ
Equation 75 can be rewritten as shown in equation 80.
80) P(Q < V_t) = ½ + ∫_{−∞}^{∞} e(υ) sin[V_tυ + g(υ)]/(2πυ) dυ
Recognizing that the integrand in equation 80 is even, P_d can be written as follows.
81) P_d = 1 − P(Q < V_t) = ½ − (1/π) ∫_{0}^{∞} e(υ) sin[V_tυ + g(υ)]/υ dυ
Substituting for e(υ) and g(υ) in equation 81 produces the expression for P_d as
shown in equation 82. The solution is explicitly a function of the eigenvalues of T.
The eigenvectors and eigenvalues of R and the eigenvectors of T impact the mean
value n̄, therefore the solution for P_d is a function of both the eigenvectors and
eigenvalues of the original matrix formulation.
82) P_d = ½ − (1/π) ∫_{0}^{∞} {exp[−Σ_{i=1}^{K} 2υ²Φ_i²|n̄_i|² / (1+4υ²Φ_i²)] / Π_{i=1}^{K} (1+4υ²Φ_i²)^{1/2}}
        × sin[V_tυ − Σ_{i=1}^{K} (υΦ_i|n̄_i|² / (1+4υ²Φ_i²) + tan⁻¹(2υΦ_i))] (1/υ) dυ
This equation can be numerically integrated using standard numerical integration
techniques to produce a probability of detection for a given threshold voltage. Unlike
the equation for Pfa, the equation for Pd uses all of the eigenvalues irrespective of
their sign.
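A useful self-consistency check on equations 81 and 82 is the noise only case: with n̄_i = 0 the numerically integrated expression must reproduce the closed form P_fa of equation 70. The sketch below (hypothetical eigenvalues, simple trapezoidal integration in place of the thesis's numerical methods) performs that check, using all of the eigenvalues irrespective of sign in the integral but only the positive ones in the closed form.

```python
import numpy as np

phi = np.array([2.0, 1.0, 0.5, -0.25])   # hypothetical distinct eigenvalues of T
vt = 10.0

# Closed-form Pfa of eq. 70 (positive eigenvalues only), C_i from eq. 64
C = np.array([np.prod(p / (p - np.delete(phi, i))) for i, p in enumerate(phi)])
pfa_closed = sum(c * np.exp(-vt / (2 * p)) for p, c in zip(phi, C) if p > 0)

# Eq. 81 with nbar_i = 0: Pd reduces to Pfa; integrate numerically
u = np.linspace(1e-6, 60.0, 300001)
e = 1.0 / np.prod(np.sqrt(1 + 4 * np.outer(u ** 2, phi ** 2)), axis=1)
g = -np.sum(np.arctan(2 * np.outer(u, phi)), axis=1)
integrand = e * np.sin(vt * u + g) / u
pd = 0.5 - np.sum((integrand[1:] + integrand[:-1]) * np.diff(u) / 2) / np.pi
```

The integrand decays like 1/υ^(K+1), so a finite upper limit introduces negligible truncation error for even modest K.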

CHAPTER III
RESULTS
Sensitivity Analysis
Utilizing the expressions for P_fa and P_d, results can be obtained which
describe the performance of a particular square law receiver system. When using a
theoretical model to describe a physical system a difficulty arises when the
parameters of the physical system cannot be accurately determined. Typically, this
occurs because the analysis is performed prior to the construction of the physical
hardware. The implementation of the design generally does not produce exactly what
the designer had anticipated. For this reason it is useful to determine the sensitivity
of the analysis technique with respect to the variation of system parameters.
A qualitative study of the sensitivity of the square law receiver operating
characteristics to variations in system parameters is presented. The items which are
being evaluated are as follows: 1) Fixed IF filter with a variable shape video filter of
constant noise bandwidth, 2) Fixed video filter with a variable shape IF filter of
constant noise bandwidth.
To perform the sensitivity analysis as indicated previously, it is necessary to
compute the equivalent noise bandwidth N_BW of the IF and video filters. The noise
bandwidth of a filter is defined as the width of a fictitious rectangular spectrum such
that the power in the rectangular band is equal to the power associated with the
actual spectrum over positive frequencies. This can be expressed via the following
equation [9].

83) N_BW = (1/|H(f_0)|²) ∫_{0}^{∞} |H(f)|² df
In equation 83, f_0 is the frequency of maximum filter response.
From [7] a relationship between the autocorrelation evaluated at τ = 0 and
the spectral density of the output of a linear filter can be obtained. Equation 84
assumes that a uniform spectral density of amplitude 2N_0 was input to the
filter.
84) R(0) = 2N_0 ∫_{−∞}^{∞} |H(f)|² df
For all of the results presented in this thesis, the filter transfer functions are
symmetric with respect to f=0. Utilizing the symmetry of H(f) it can be seen that the
integrand in equation 84 is even. Therefore, an expression for the noise bandwidth
in terms of the autocorrelation evaluated at zero can be written. This expression is
listed in equation 85.
85) N_BW = R(0) / (4N_0|H(f_0)|²)
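As an illustration of equation 83, the noise bandwidth of an N pole Butterworth lowpass, for which |H(f)|² = 1/(1+(f/f_c)^{2N}) and |H(0)| = 1, can be computed numerically and compared against the known closed form; the parameters below are illustrative only.

```python
import numpy as np

# Hypothetical 6-pole Butterworth lowpass, |H(f)|^2 = 1/(1 + (f/fc)^(2N))
N, fc = 6, 1.0
f = np.linspace(0.0, 50.0 * fc, 1_000_001)
H2 = 1.0 / (1.0 + (f / fc) ** (2 * N))

# Eq. 83 with f0 = 0 and |H(0)| = 1: integrate |H(f)|^2 over positive frequencies
nbw = np.sum((H2[1:] + H2[:-1]) * np.diff(f) / 2)

# Known closed form for an N-pole Butterworth: fc * (pi/2N) / sin(pi/2N)
closed = fc * (np.pi / (2 * N)) / np.sin(np.pi / (2 * N))
```

For N = 6 the noise bandwidth is only about 1.2 percent wider than the 3 dB bandwidth, reflecting how sharp the rolloff already is.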
The equivalent noise bandwidth of a sampled time filter is defined via the
following equation.
86) N_BW = ∫_{0}^{0.5} |G(e^{j2πf})|² df
An equivalent expression for equation 86 can be written as follows.
87) N_BW = ∫_{0}^{0.5} G(e^{j2πf}) G*(e^{j2πf}) df
0
The frequency response of the sampled time filter is defined as a function of the taps.
This relationship can be written as shown in equation 88.
88) G(e^{j2πf}) = Σ_{i=0}^{K−1} f_i e^{−j2πfi}

The expression for the sampled time video filter, as outlined in equation 88,
can be substituted into equation 87. If the summations and integration are
interchanged the following expression for the noise bandwidth of the video filter is
obtained.
89) N_BW = Σ_{i=0}^{K−1} Σ_{n=0}^{K−1} f_i f_n ∫_{0}^{0.5} e^{−j2πf(i−n)} df
The integral in equation 89 is equal to 0.5 for i = n. Since the taps of the FIR
filter are real, the cross terms for i ≠ n in the double summation cancel each other
out. The N_BW of the discrete time video filter can be defined as the sum of the
squares of the taps as shown in equation 90.
90) N_BW = ½ Σ_{i=0}^{K−1} f_i²
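Equation 90 is easy to verify against the defining integral of equation 86. The sketch below uses a hypothetical Hanning shaped tap set (not one of the filters in table 1) and compares the closed form against direct numerical integration of |G(e^{j2πf})|².

```python
import numpy as np

taps = np.hanning(31)            # hypothetical real video-filter taps
taps /= taps.sum()               # normalize to unity DC gain

# Eq. 90: closed form, half the sum of squared taps
nbw_closed = 0.5 * np.sum(taps ** 2)

# Eq. 86: numerical integration of |G(e^{j2pi f})|^2 over f in [0, 0.5]
f = np.linspace(0.0, 0.5, 20001)
G = np.exp(-2j * np.pi * np.outer(f, np.arange(len(taps)))) @ taps
H2 = np.abs(G) ** 2
nbw_numeric = np.sum((H2[1:] + H2[:-1]) * np.diff(f) / 2)
```

The agreement is simply Parseval's theorem: for real taps |G| is even in f, so the one-sided integral equals half the tap energy.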
The analysis of a square law receiver system for a fixed IF filter with a
variable shape video filter was performed for a number of video filters. The complex
lowpass equivalent of the PRE-D filter utilized a 6 pole lowpass Butterworth filter as
a model. Figure 6 represents an overlay of the receiver operating curves. Each curve
is associated with a particular video filter. The parameters associated with the
different cases are listed in table 1.
Table 1. Video Filter Sensitivity Analysis Parameters.
Category         Case 1       Case 2    Case 3     Case 4       Case 5
Video Filter     2 Pole       Uniform   Blackman   1 Pole       Uniform
                 Butterworth                       Butterworth
Taps             30           30        50         50           50
Noise BW ratio   6.84         6.84      6.84       6.84         6.84
Pfa              1E-7         1E-7      1E-7       1E-7         1E-7
Input Signal     CW           CW        CW         CW           CW

Figure 7 represents an overlay of the frequency response of the video and
PRE-D filters. With the different frequency responses illustrated in figure 7, it is of
interest to note that the variation in detection performance is only about 0.3 dB for all
of the different filters.

Figure 7. Overlay of Video and PRE-D Filter Responses Utilized for Video
Filter Sensitivity Analysis.
To put this variation into perspective, figure 8 is an overlay of the expected
performance of the system assuming gaussian distribution at the output of the 2 pole
Butterworth video filter, the performance of the system with no video filter, and the
analytical results as presented in figure 6. From that perspective it is apparent that an
error of 0.3 dB is not significant. Furthermore, a typical system level design would
probably allocate significantly more than 0.3 dB margin to the video subsection, thus
an error of 0.3 dB in the analysis would be absorbed in the design margin.

Figure 8. Probability of Detection Curves Generated via Gaussian Assumption,
Ideal Receiver Analysis, and Analysis Technique of this Thesis.
Figure 9 illustrates the effect of a variable PRE-D filter of constant noise
bandwidth with a fixed video filter. The conditions for this analysis are contained in
table 2. As shown, the Butterworth and the type II Chebychev filters (group 1) have
almost identical responses. The Chebychev type I filters (group 2) show
dramatically improved performance.
When examining the performance of a square law detector system, it is useful
to look at the spectral density on the output of the square law detector to provide a
qualitative technique of determining performance. Equation A10 outlines the
autocorrelation of the output process in terms of the input autocorrelation. If the
symbolic Fourier transform of equation A10 is performed, the following expression
for the spectral density is obtained.

91) S_r(f) = [A_0⁴ + 2A_0²R_p(0) + R_p²(0)]δ(f) + A_0²[S_p(f) + S_p(−f)] + S_p(f)*S_p(−f)
For all of the filters analyzed the power associated with the signal output is
A_0⁴. The difference in performance between the group 1 and group 2 filters can be
explained by examining SNR at the output of the video filter. An exact calculation of
the output SNR will not be undertaken. Rather, a heuristic approach of estimating
the magnitude of the SNR is utilized. If a constant spectral density is input to a filter,
the output noise power is the noise bandwidth of the filter multiplied by the spectral
density. Although the spectral density on the output of the square law detector is not
constant, one can define an average spectral density which is constant. For the group
1 filters, the average output spectral densities will be virtually identical. The output
SNR's will also be virtually identical. Viewing the form of the frequency
response for the group 2 filters in figure 10, it is apparent that the average noise
density for those filters is less than that of the group 1 filters. Therefore, the SNR of
the group 2 filters at the output of the video filter is greater than the group 1 filters.
From this perspective, it is not surprising that the group 2 filters had significantly
better performance than the group 1 filters.
Table 2. PRE-D Filter Sensitivity Analysis Parameters.
Category          Case 1       Case 2       Case 3       Case 4       Case 5
PRE-D Filter      Butterworth  Butterworth  Chebychev I  Chebychev I  Chebychev II
Poles             6            10           6            10           10
Passband Ripple   -            -            3.0          5.5          -
Video Filter      2 pole But   2 pole But   2 pole But   2 pole But   2 pole But
Noise BW ratio    9.77         9.77         9.77         9.77         9.77
Pfa               1E-7         1E-7         1E-7         1E-7         1E-7
Input Signal      CW           CW           CW           CW           CW

Figure 9. Probability of Detection Curves for PRE-D Sensitivity Analysis
(horizontal axis: SNR in dB).

Figure 10. Overlay of Video and PRE-D Filter Responses Utilized for PRE-D
Filter Sensitivity Analysis.
Comparison of Numerical Methods
Throughout this thesis, numerous comparisons between the square law
analysis method of [4] and the one presented in this thesis have been made. At this
point in the discussion, it is appropriate to provide a more thorough discussion of the
similarities, differences, advantages, and disadvantages of the use of each of these
methods.
The solution of the square law detector problem consists of finding an
approximation to the infinite set of eigenvalues and eigenfunctions which result from
the solution of the integral equation [2] listed in equation 92.

92) ∫_{0}^{T} R(s−t) g(t) f(t) dt = λ f(s)
R(τ) is the autocorrelation of the output of the PRE-D filter; g(t) is the video
filter impulse response; f(t) represents the eigenfunctions; and λ represents the
eigenvalues associated with each eigenfunction. The probability density of the noise
only and signal plus noise outputs can be represented as functions of these
eigenvalues and eigenfunctions. Equation 92, in general, is difficult to solve
explicitly. Therefore, it is convenient to determine an approximation which provides
an equivalent finite set of eigenvalues and eigenfunctions (eigenvectors). An
approach to this approximation might be to attack equation 92 directly. Although an
approximation based on the "frontal assault" method might be obtained, this
technique was not investigated. Both the authors of [4] and the author of this thesis
elected to formulate the problem in such a manner that a direct confrontation with
equation 92 was unnecessary.
In [4], the noise process is approximated by a finite summation of sines and
cosines, each modulated by a zero mean independent gaussian random variable. The
variance of the gaussian variables and the frequencies of the sines and cosines are
determined via a Gaussian Quadrature approximation of the Fourier transform of the
power spectral density of the input noise process. The output of the square law
detector can be described as Q(t) = x^T A x. The x_i's are zero mean unit variance
gaussian random variables. The noise and signal+noise densities are written as
functions of the eigenvalues and eigenvectors of the matrix A. The properties of the
PRE-D and video filters imbedded in A are obtained by sampling the frequency
response at discrete intervals across the frequency interval of interest. The signal
also approximated via a Fourier expansion technique. Although the output equation
for Q(t) is written as a function of time, the method of [4] utilized frequency domain

approximations to determine the time domain functions (A is implicitly a time varying
matrix).
The method of this thesis is based on a sampled time domain representation
of the noise process, the video filter, and the signal. As with [4], a matrix
formulation resulted, with the eigenvalues and eigenvectors being utilized to
determine the output density functions.
A direct comparison of the utilization of each of these analysis techniques is
warranted. The solution of the probability density function for the signal+noise case
of [4] was not undertaken, as it would have required significantly more programming
effort with nothing gained in return. However, the probability density function for
the noise only case in [4] is a simple function of the eigenvalues of the matrix A. A
program was constructed which provided a direct comparison of the noise only
density functions from each analysis technique. For the case selected (6 pole
Butterworth PRE-D and 2 pole Butterworth video filters) the probability density
functions, shown in figure 11, are virtually the same. The order of the matrix for the frequency
domain approximation was 14 while it took a 30 x 30 matrix for the sampled time
domain technique. The frequency response of both the PRE-D and video filters are
smooth, thus a small number of samples of the frequency response can adequately
describe the filter function. For PRE-D and video filters with smooth frequency
responses it is clear that the method of [4] can provide similar results utilizing an
approximation with fewer terms.

Figure 11. Overlay of Noise Only Output Density Functions Obtained via
Separate Analysis Techniques, Butterworth PRE-D filter.
Another overlay of the probability density functions that result from utilizing
both methods of analysis is shown in figure 12. In this case a 6 pole Chebychev
type I filter with 5 dB passband ripple replaces the PRE-D filter of the previous case.
As seen in figure 12, the density functions are significantly different. Increasing the
number of terms utilized by the frequency domain approximation to 30 produces a
probability density function similar to the one produced by the sampled time domain
analysis technique. Although this is somewhat of a contrived scenario, it is evident
from this example that care must be exercised in choosing the order for the frequency
domain approximation when there is ripple present in the frequency response of the
PRE-D and video filters.

Figure 12. Overlay of Noise Only Output Density Functions Obtained via
Separate Analysis Techniques, Chebychev type I PRE-D filter.
The errors that can occur when utilizing the frequency domain approach [4]
result from 2 sources. The first source of error results when there are not enough
terms utilized by the Gaussian Quadrature approximation of the Fourier transform.
The second error source is also associated with the approximation of that integral.
The Gaussian Quadrature approximation requires that the integral being evaluated
have finite limits. All realizable filters will have a nonzero frequency response for all
frequencies. However, this error can be made insignificant by choosing the limits of
the integration such that the frequency response of the PRE-D filter at the limits of the
integration are small enough to be negligible. Similarly, the error sources in the
sampled time domain approach also result from 2 sources. As the process must be
sampled, it must be bandlimited for the sampling theorems to be valid. However, as

discussed previously this is not possible for realizable filters. From a practical
standpoint, the sampling frequencies can be chosen such that this problem will create
an arbitrarily small error. The second error results when approximating infinite
impulse response video filters with a finite impulse response sampled filter. As with
the frequency domain analysis technique, the error associated with the video filter
approximation can be reduced by using more terms in the FIR approximation.
Simulation
The analytical results presented in this thesis introduce a technique for
evaluating the probability of detection and probability of false alarm for an arbitrary
square law detector system. As mentioned previously, the analysis effort focused on
the square law detector system as opposed to the linear envelope detector problem.
The analysis was limited to the square law detector case because of the difficulty in
producing an appropriate mathematical model for the linear envelope detector which
would aid in the solution of the problem. Although the linear detector problem was
not solved in this thesis, a useful comparison between the square law detector
performance and the linear envelope detector performance was obtained via a Monte
Carlo simulation. The analytical results of this thesis can also be compared with that
of the simulation. The benefit accrued from a comparison between the simulation vs
analytical results is twofold. First of all, the results of the analysis were verified via
a technique which is virtually independent of the analysis technique and secondly,
greater confidence in the simulation model will be realized if the square law detector
results are comparable.
A complex lowpass simulation was constructed as outlined in figure 13. The
construction of this simulation was accomplished very efficiently by utilizing
numerous standardized Fortran subroutines obtained from [8] and [13]. The Fortran

source code is contained in appendix F. Additional comments on the simulation
noise and filter models are contained in appendix V.
Figure 13. Monte Carlo Simulation Block Diagram.
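The thesis simulation itself was written in Fortran. The fragment below is only a drastically simplified Python sketch of the same Monte Carlo idea: there is no PRE-D filter shaping, a uniform K sample average stands in for the video filter, and all parameters (trial count, K, signal amplitude, P_fa) are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(42)
trials, K, A = 20000, 16, 1.4    # hypothetical: 16-sample average, CW amplitude A

def detector_stats(signal):
    # Complex gaussian noise, unit variance per quadrature, plus optional CW signal
    z = signal + rng.standard_normal((trials, K)) + 1j * rng.standard_normal((trials, K))
    env = np.abs(z)
    # Linear envelope statistic and square-law statistic after the K-sample average
    return env.mean(axis=1), (env ** 2).mean(axis=1)

lin_n, sq_n = detector_stats(0.0)   # noise only runs
lin_s, sq_s = detector_stats(A)     # signal plus noise runs

pfa_target = 0.01
vt_lin = np.quantile(lin_n, 1 - pfa_target)   # empirical thresholds at equal Pfa
vt_sq = np.quantile(sq_n, 1 - pfa_target)
pd_lin = np.mean(lin_s > vt_lin)
pd_sq = np.mean(sq_s > vt_sq)
```

With the thresholds set for equal false alarm rates, the two detection probabilities can be compared directly; for small K the linear envelope statistic is expected to do at least as well, consistent with Marcum [1].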
Figure 14 illustrates the comparison between the simulation and analytical
results. The parameters for the analysis and simulation were: 6 pole Butterworth
PRE-D filter, 2 pole Butterworth video filter, 3 dB bandwidth ratio of 5, and PRE-D
SNR of 3.75 dB. Two important features of this plot are: First, the simulation and
analytical results for the square law detector are virtually identical and secondly, the

linear envelope detector performance appears to be as good or better than that of the
square law detector for all threshold voltages.
Figure 14. Comparison of Analytical and Simulation Probability Density Curves
Numerous comparisons between the analytical and simulation results have
been made which are not being presented here. In all of the cases, the simulation and
analytical results for the square law detector have been virtually identical. The
simulation results, along with the comparison between the analytical methods of [4]
provide solid evidence that the sampled time domain analysis technique is indeed a
valid technique which can be utilized to evaluate the performance of a square law
detector.
The comparison of the linear envelope detector performance with respect to
the square law detector via simulation techniques was also performed for a number of

different PRE-D and video filter configurations. Figure 14 illustrated that the square
law detector performance was worse than that of the linear envelope detector. For all
of the cases that were evaluated via simulation this was true. Although the simulation
results did not cover all possible cases, it is not unreasonable to assert that the square
law detector results can be utilized to predict the linear detector performance possibly
providing a lower bound. Marcum [1] produced analysis which compared the
performance of square law and linear envelope detectors when independent output
samples were added. His result demonstrated that the linear detector performance
was superior to that of the square law detector when the number of samples summed
was less than 70. Video filtering of the output process can be approximated via a
summation as illustrated by the sampled time domain analysis. Thus Marcum's
results provide an insight to the region where the square law detector results can be
utilized to lower bound the linear detector performance. Intuitively, the number of
independent samples averaged is equivalent to the PRE-D video bandwidth ratio.
The analysis technique presented in this thesis will not produce reliable results for
bandwidth ratios that large. The authors of [4] also stated that their technique would
not produce reliable results for bandwidth ratios greater than about 40. Thus, the
square law detector analysis can probably be utilized to lower bound the performance
of the linear envelope detector for all filter configurations for which these analysis
techniques are valid.

CHAPTER IV
CONCLUSION
The primary focus of this thesis was to devise an analysis technique which
provides a capability of evaluating the performance of a square law detector system
with arbitrary input signals and arbitrary PRE-D and video bandwidths. This
analysis technique was utilized to investigate the sensitivity of the square law detector
system with respect to variations in the filter models. A simulation was constructed
to produce data on square law and linear envelope detectors which could be
compared to the analytical results. Finally, a comparison between the sampled time
domain analysis technique and the technique presented in [4] were also made.
An analysis of the sensitivity of the square law detector was performed with
respect to a fixed PRE-D filter and a variable shape video filter. Equal noise
bandwidth was an additional constraint imposed on the video filters. The results
indicated that the performance variation for different video filters was insignificant.
This sensitivity analysis demonstrated that perfect modeling of the video filter shape
is unnecessary as the performance of the square law detector system is only mildly
dependent on that factor. The input signal for this analysis was assumed to be either
a CW or a pulsed CW with a signal bandwidth less than the video filter bandwidth.
If the signal being detected had a bandwidth greater than the video filter the
performance might be more dependent on the video filter shape factor.
The sensitivity of the square law detector system was determined with respect
to a constant video filter and a variable shape PRE-D filter. As with the video filter
sensitivity analysis, the PRE-D filters were constrained to a constant noise

bandwidth. Three major results of this analysis are stated below. First, the
performance of the square law detector system is virtually identical if the PRE-D filter
responses are similar in shape in a frequency band centered around the signal with
bandwidth equal to the video filter bandwidth. Secondly, the performance of the
square law detector system varies significantly if the PRE-D filters do not have
similar frequency responses in the area adjacent to the signal. Third, the
performance is strongly dependent on the noise bandwidth of the PRE-D filter, not
just the PRE-D SNR. This analysis assumed the input signal had narrow bandwidth
with respect to the total PRE-D bandwidth. Conclusions on the variation in
performance for different signal types might be difficult from the results presented in
this thesis.
A computer model of an arbitrary νth law detector system was developed,
providing the capability of Monte Carlo simulations. The purpose of the simulations
was twofold. First, an additional verification/validation of the square law sampled
time domain analysis technique was desired. Secondly, the performance of a linear
envelope detector with respect to the square law detector was desired. The
simulation produced square law performance results that matched that of the analysis
almost exactly. For all PRE-D and video filter configurations that were simulated,
the linear detector performance was either equal to or better than the square law
detector. The obvious benefit of this observation is that one can use the square law
analysis to lower bound the performance of the linear envelope detector.
A comparison of the sampled time domain analysis technique presented in
this thesis and the analysis technique of [4] is contained in part E. Both of the
techniques produced the equations for the probability densities via matrix algebra
techniques. Generally, the method of [4] produced an adequate approximation to the
density functions with matrices that were of significantly smaller rank than the
sampled time domain technique. However, it was shown that for certain cases the

sampled time domain technique is more efficient. The method of [4], as presented,
can be used to perform the analysis if only the frequency domain response of the
filters is available. However, the PRE-D filter response was assumed to be
symmetric around the center frequency of the bandpass process. Therefore,
modification of the analysis in [4] would be required to handle a nonsymmetric PRE-
D filter. The sampled time domain technique, as presented, requires the specification
of the PRE-D filter in terms of the poles and zeros of the s plane, while the video
filter could be specified via either the impulse response or the frequency response.
The ensuing analysis made no assumptions about the symmetry of the PRE-D filter;
therefore, a square law detector system with a PRE-D filter that is not symmetric
with respect to the center frequency can be analyzed. For PRE-D filter functions
which are smooth, the analysis technique of [4] is superior to the one presented in
this thesis. However, for certain pathological cases the analysis technique presented
in this thesis is comparable with that of [4]. The intent of the author was to produce
another analysis technique utilizing different assumptions which validates previous
results and can be used to perform analysis on generalized envelope detector
systems.

GLOSSARY
PRE-D filter Pre-detection filter.
IF Intermediate frequency.
POST-D filter Post-detection filter. Same as video or lowpass filter.
Brickwall video filter Video filter with constant gain to f=W and 0 gain thereafter.
FIR Finite Impulse Response. A digital filter with no feedback is referred to as a
finite impulse response filter.
Equivalent Noise Bandwidth The width of a fictitious rectangular spectrum such
that the power in the rectangular band is equal to the power associated with the actual
spectrum over positive frequencies.
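The definition above can be checked numerically. The following Python sketch (illustrative only, not part of the thesis software) computes the equivalent noise bandwidth of a single-pole lowpass filter, whose closed form value is (π/2)fc; the cutoff value fc = 100 Hz is an assumed example.

```python
import numpy as np

# single-pole lowpass: |H(f)|^2 = 1/(1 + (f/fc)^2); analytic ENB is (pi/2)*fc
fc = 100.0                                  # 3 dB cutoff in Hz (illustrative value)
f = np.linspace(0.0, 1.0e5, 2_000_000)      # positive-frequency grid, wide enough for the tail
H2 = 1.0 / (1.0 + (f / fc) ** 2)

# trapezoidal integral of |H|^2 over positive frequencies, divided by the peak gain
area = (np.sum(H2) - 0.5 * (H2[0] + H2[-1])) * (f[1] - f[0])
enb = area / H2[0]
print(enb)                                  # close to (pi/2)*100 = 157.08
```

The small deficit relative to the closed form comes from truncating the 1/f^2 tail of the integrand at the end of the grid.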

REFERENCES
[1] Marcum, J.I.
A Statistical Theory of Target Detection by Pulsed Radar.
RAND Research Memo RM-754, Dec., 1947 (Reissued in
Transactions of IRE Professional Group on Information Theory.
IT-6 (Apr. 1960), 59-144
[2] Kac, M., and Siegert, A.J.F.
On the Theory of Noise in Radio Receivers with Square Law Detectors.
Journal of Applied Physics, 18 (1947), 383-397.
[3] Emerson, R.
First Probability Densities for Receivers with Square Law Detectors.
Journal of Applied Physics, 23 (1953), 1047-1053.
[4] Hummels, D.R., Adams, C., and Harms, B.K.
Filter Selection for Receivers Using Square Law Detection.
IEEE Transactions on Aerospace and Electronic Systems.
(Nov. 1983), 871-883.
[5] Meyer, M., and Middleton, D.
On the Distributions of Signal and Noise after Rectification and Filtering.
Journal of Applied Physics, 25 (1954), 1037-1052.
[6] Schwartz, M., Bennett, W.R., and Stein, S.
Communication Systems and Techniques.
New York: McGraw-Hill, 1966.
[7] Papoulis, A.
Probability, Random Variables, and Stochastic Processes.
New York: McGraw-Hill, 1984.
[8] Press, W., Flannery, B., Teukolsky, S., and Vetterling, W.
Numerical Recipes.
Cambridge: Cambridge University Press, 1986.

[9] Couch II, L.
Digital and Analog Communication Systems.
New York: Macmillan, 1987.
[10] Davenport, W., and Root, W.
An Introduction to the Theory of Random Signals and Noise.
New York: IEEE Press, 1987.
[11] McGee, W.
Another Recursive Method of Computing the Q function.
IEEE Transactions on Information Theory, (July 1970), 500-501.
[12] Cooper, G., and McGillem, C.
Modern Communications and Spread Spectrum.
New York: McGraw-Hill, 1986.
[13] Stearns, S., and David, R.
Signal Processing Algorithms.
New Jersey: Prentice-Hall, 1988.

APPENDIX A
DERIVATION OF AUTOCORRELATION ON
SQUARE LAW DETECTOR OUTPUT COMPLEX INPUT
The autocorrelation on the output of the square law detector in terms of the
input autocorrelation is of interest. A number of authors [7,10,12] derive the output
autocorrelation of a square law detector in terms of the input autocorrelation.
Unfortunately, they assume the input process is real. A modification to this analysis
is necessary to determine the correct autocorrelation for a complex input. The output
process r(t) is a function of the inputs v(t) and p(t) as follows.
A1) r(t) = |v(t) + p(t)|^2
To simplify the notation for the ensuing computation of the output autocorrelation let
v1 = v(t+τ)
v2 = v(t)
p1 = p(t+τ)
p2 = p(t)
The autocorrelation of r(t) can be expressed in terms of v1, v2, p1, and p2 as follows.
A2) Rr(τ) = E{(v1 + p1)(v1 + p1)*(v2 + p2)(v2 + p2)*}
This expression can be expanded to produce
A3) Rr(τ) = E{v1 v1* v2 v2* + v1 v1* v2 p2* + v1 v1* v2* p2 + v1 v1* p2 p2*}
+ E{v1 v2 v2* p1* + v1 v2 p1* p2* + v1 v2* p1* p2 + v1 p1* p2 p2*}
+ E{v1* v2 v2* p1 + v1* v2 p1 p2* + v1* v2* p1 p2 + v1* p1 p2 p2*}
+ E{v2 v2* p1 p1* + v2 p1 p1* p2* + v2* p1 p1* p2 + p1 p1* p2 p2*}
Since p1 and p2 are zero mean jointly distributed gaussian variates and the
signal and noise are independent of each other, it can be shown that the 1st and 3rd
moments of the noise are zero [7]. Equation A3 reduces to the following form.

A4) Rr(τ) = E{v1 v1* v2 v2* + v1 v1* p2 p2* + v1 v2 p1* p2* + v1 v2* p1* p2}
+ E{v1* v2 p1 p2* + v1* v2* p1 p2 + v2 v2* p1 p1* + p1 p1* p2 p2*}
If the signal is restricted to the case v(t) = A0, the equation for Rr(τ) simplifies even
further.
A5) Rr(τ) = A0^4 + A0^2 Rp(0) + A0^2 Rp*(τ) + A0^2 Rp(τ) + A0^2 Rp(0)
+ A0^2 E{p1* p2* + p1 p2} + E{p1 p1* p2 p2*}
The complex random process p(t) can be represented via a combination of
real processes as shown below.
A6) p(t) = pr(t) + j pi(t)
Since the input process that formed p(t) is defined to be strict sense
stationary, p(t) is also SSS and the autocorrelations of pi(t) and pr(t) are equal.
Utilizing equation A6 and the equivalence of the autocorrelations it can easily be
shown that the first expected value in equation A5 is equal to zero. Similarly, the
second expected value can be expanded in terms of the real and imaginary parts to
yield
A7) E{p1 p1* p2 p2*} = E{pr1^2 pr2^2 + pr1^2 pi2^2 + pi1^2 pr2^2 + pi1^2 pi2^2}
A valid expansion for each of the terms in the expected value in equation A7 is
obtained from Papoulis [7].
A8) E{x^2 y^2} = E{x^2}E{y^2} + 2E^2{xy}
This expansion is valid for jointly gaussian zero mean random variables only.
Utilizing equation A8 to perform the expansion on A7 results in
A9) E{p1 p1* p2 p2*} = 4Rpr^2(0) + 4Rpr^2(τ) + 4Rri^2(τ)
= Rp^2(0) + Rp(τ) Rp*(τ)
The final form for the output autocorrelation is shown in equation A10.

A10) Rr(τ) = A0^4 + 2A0^2 Rp(0) + Rp^2(0)
+ 2A0^2 Re{Rp(τ)} + Rp(τ) Rp*(τ)
Equation A10 represents the autocorrelation of the process r(t) which is the output of
a square law detector with the input being the complex lowpass equivalent of a
bandpass process. It is of interest to compare this result to the output autocorrelation
of the square law detector when the input is a bandpass process. In the complex
lowpass equivalent case, only baseband terms will result, thus only the baseband
terms of the bandpass result will be compared. If the input process is x(t) = v(t) + p(t),
where v(t) = A0 cos(ωc t + θ) and the noise process p(t) is similar to that described in
equation 3, an expression for the output autocorrelation of the square law detector
can be obtained from [10].
A11) y(t) = x^2(t)
A12) Ry(τ) = [A0^2/2 + Rp(0)]^2 + 2Rp^2(τ) + 2A0^2 Rp(τ) cos(ωc τ)
+ (A0^4/8) cos(2ωc τ)
If a comparison is made between the baseband terms in equations A10 and
A12, assuming that the bandpass and lowpass processes are related as in equations
1a and 1b, it is apparent that there is only a multiplicative factor of 4 difference
between those expressions. Intuitively this is a pleasing result. The lowpass
equivalent has twice the energy of the bandpass process due to the nature of the
transformation; additionally, the output of the square law device with a bandpass
input has 1/2 of the output power centered at f = 2fc. Since the autocorrelation is a
measure of energy, it is reasonable that the lowpass result have a magnitude 4 times
that of the bandpass result.
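The τ = 0 value of equation A10, Rr(0) = A0^4 + 4A0^2 Rp(0) + 2Rp^2(0), admits a quick Monte Carlo sanity check. The following Python sketch (illustrative, not the thesis simulation; the amplitude and noise power are assumed values) compares the formula with an empirical estimate of E{r^2} for a constant signal in complex gaussian noise.

```python
import numpy as np

rng = np.random.default_rng(0)
A0 = 2.0          # constant signal amplitude (assumed value for the check)
Rp0 = 0.5         # noise power E{|p|^2} (assumed value)

# complex gaussian noise with E{|p|^2} = Rp0 (half the power in each real component)
n = rng.normal(size=1_000_000) * np.sqrt(Rp0 / 2)
m = rng.normal(size=1_000_000) * np.sqrt(Rp0 / 2)
r = np.abs(A0 + n + 1j * m) ** 2           # square law detector output samples

estimate = np.mean(r ** 2)                  # Monte Carlo estimate of Rr(0) = E{r^2}
formula = A0**4 + 4 * A0**2 * Rp0 + 2 * Rp0**2
print(estimate, formula)                    # the two agree to within sampling error
```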

APPENDIX B
DETERMINATION OF THE REAL PART OF THE
SIGNAL PLUS NOISE PROBABILITY DENSITY INTEGRAND
The probability density function p(Q) was represented by an integral which
had a complex integrand. The purpose of this appendix is to solve for the real part of
that integrand.
The density has the form
B1) p(Q) = (1/2π) ∫ exp(-juQ) Π(i=1 to K) exp( ju λi mi^2 / (1 - 2juλi) ) / (1 - 2juλi) du
where the λi are the eigenvalues and the mi are the mean values of the underlying
gaussian variates.
The arguments of the exponentials can be combined as follows.
B2) -juQ + Σ(i=1 to K) ju λi mi^2 / (1 - 2juλi)
Separating B2 into the real and imaginary components
B3) -Σ(i=1 to K) 2u^2 λi^2 mi^2 / (1 + 4u^2 λi^2)
+ j[ -uQ + Σ(i=1 to K) u λi mi^2 / (1 + 4u^2 λi^2) ]
The denominator of the integrand is
B4) Π(i=1 to K) (1 - 2juλi)
Equation B4 can be rewritten as
B5) Π(i=1 to K) (1 + 4u^2 λi^2)^(1/2) exp( -j tan^-1(2uλi) )
An equivalent expression for B5 is
B6) [ Π(i=1 to K) (1 + 4u^2 λi^2) ]^(1/2) exp( -j Σ(i=1 to K) tan^-1(2uλi) )
Using equations B2 through B6, equation B1 can be rewritten as
B7) p(Q) = (1/2π) ∫ exp( -Σ(i) 2u^2 λi^2 mi^2 / (1 + 4u^2 λi^2) )
/ [ Π(i) (1 + 4u^2 λi^2) ]^(1/2)
× exp( j[ -uQ + Σ(i) ( u λi mi^2 / (1 + 4u^2 λi^2) + tan^-1(2uλi) ) ] ) du
The real part of B7 is
B8) p(Q) = (1/2π) ∫ exp( -Σ(i) 2u^2 λi^2 mi^2 / (1 + 4u^2 λi^2) )
/ [ Π(i) (1 + 4u^2 λi^2) ]^(1/2)
× cos( -uQ + Σ(i) [ u λi mi^2 / (1 + 4u^2 λi^2) + tan^-1(2uλi) ] ) du

APPENDIX C
CALCULATION OF PROBABILITY OF DETECTION
FOR THE IDEAL SQUARE LAW DETECTOR
The performance of an "ideal" square law detector system can readily be
determined utilizing the analytical results from Marcum [1] and a numerical technique
[11] for determining the probability of detection. Here "ideal" implies that the video
filter approximates the ideal lowpass characteristic with bandwidth equal to twice that
of the PRE-D filter with amplitude gain=1. The equations for probability of detection
and probability of false alarm are extracted from [1] as follows.
C1) Pd = ∫ from b to ∞ of x exp( -(x^2 + a^2)/2 ) I0(ax) dx
Where a is defined as follows.
C2) a = A0/σ and σ = √( Rp(0)/2 )
The probability of false alarm is then defined as
C3) Pfa = exp( -Vt/2 )
For the lowpass equivalent system the signal power to noise power ratio is defined as
C4) SNR = A0^2 / Rp(0)
From equation C4 it is apparent that the ratio a^2/2 is the SNR at the output of the
PRE-D filter.
The expression for the probability of detection can be solved for the
parameters a and b via a numerical technique obtained from [11]. The equation for

probability of detection of the general square law detector as outlined in equation 82
is also an equivalent expression. Unfortunately, evaluating this equation for the
probability of detection results in a numerically unstable algorithm when utilizing a
single tap FIR filter as the model for the video filter. The expression for the
probability of detection as extracted from [11] is
C5) Pd = Q(a,b) = 1 - ∫ from 0 to b of x exp( -(a^2 + x^2)/2 ) I0(ax) dx
= exp( -(a^2 + b^2)/2 ) Σ(k=0 to ∞) (a/b)^k Ik(ab) (for b > a)
To build a ROC, the threshold voltage is determined by solving equation C3
for a particular Pfa. Subsequently, the square root of the threshold voltage is
equated to b. The equation for the probability of detection can be solved for that
value of b and a number of values of a to determine the receiver operating
characteristic.
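The integral form of C5 can be sketched numerically in Python (illustrative only; the routine names, the series truncation, and the finite upper limit are assumptions, not the thesis implementation). The a = 0 special case provides a check, since Q(0,b) = exp(-b^2/2).

```python
import math

def bessel_i0(z):
    """Modified Bessel function I0(z) via its power series (adequate for moderate z)."""
    term, total = 1.0, 1.0
    for k in range(1, 60):
        term *= (z * z / 4.0) / (k * k)
        total += term
    return total

def marcum_q(a, b, n=4000, upper=12.0):
    """Q(a,b) = integral from b to infinity of x*exp(-(x^2+a^2)/2)*I0(a*x) dx.

    Trapezoidal rule on [b, upper]; the integrand is negligible beyond upper."""
    h = (upper - b) / n
    s = 0.0
    for i in range(n + 1):
        x = b + i * h
        f = x * math.exp(-(x * x + a * a) / 2.0) * bessel_i0(a * x)
        s += f if 0 < i < n else f / 2.0
    return s * h

print(marcum_q(0.0, 2.0), math.exp(-2.0))   # both near 0.1353
```

Building a ROC point then proceeds as the text describes: solve C3 for Vt at the desired Pfa, set b = sqrt(Vt), and evaluate Q(a,b) over a range of a.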

APPENDIX D
CALCULATION OF PROBABILITY OF DETECTION
ASSUMING GAUSSIAN OUTPUT DISTRIBUTIONS
A method which previously has been utilized to predict the performance of a
square law receiver system with a video bandwidth narrow relative to the IF
bandwidth is to assume that the distributions on the output of the video filter are
gaussian in nature. The means and variances of the noise only and signal plus noise
case can be computed. These parameters describe a gaussian distribution entirely,
thus a ROC can be constructed using the calculated means and variances.
In appendix A an expression for the output autocorrelation of a square law
detector with a complex input was derived. This expression assumes that the input
signal is a constant for all t. For clarity this expression is reiterated.
D1) Rr(τ) = A0^4 + 2A0^2 Rp(0) + Rp^2(0)
+ 2A0^2 Re{Rp(τ)} + Rp(τ) Rp*(τ)
Note that when the signal amplitude A0=0 the expression for the output
autocorrelation represents the output autocorrelation for the noise only input.
The mean value on the output of the square law detector can be determined as
follows
D2) Q̄ = E{[v(t) + p(t)][v(t) + p(t)]*}
Since v(t) and p(t) are independent and p(t) is a zero mean process this expression
reduces to
D3) Q̄ = A0^2 + Rp(0)

A general expression for the autocorrelation of the output of a linear filter in
terms of the input autocorrelation and the filter transfer function can be obtained.
However, determining the entire output autocorrelation is unnecessary, as the value
of the output autocorrelation at τ = 0 determines the output variance. An expression
for the variance on the output of a linear filter can be obtained from [7].
D4) Rq(0) = ∫ from -∞ to ∞ of Rr(τ) ρ(τ) dτ
The function ρ(τ) is related to the video filter via the Fourier Transform of the
energy spectrum of the video filter. ρ(τ) can be expanded into a sum of exponentials
in a manner previously applied to the autocorrelation function of the noise process.
This technique is described in equations 16-21. The results of this expansion
follow.
D5) ρ(τ) = Σ(i=1 to np) Ci exp(-pvi τ) u(τ) + Σ(i=1 to np) Ci* exp(pvi* τ) u(-τ)
where
pvi = roots of G(f) and u(τ) = 1 for τ ≥ 0, 0 otherwise
A simplification of D4 results from the form of Rr(τ) and ρ(τ). These
functions are both real and even; thus the integral can be evaluated by performing the
integration for positive τ only. This result multiplied by 2 produces the variance at
the output of the video filter. Utilizing this simplification, and further noting that the
constant terms in the autocorrelation will not be modified by equation D4, an
expression for the variance of the process Q at the output of the video filter g(t) can
be obtained as follows.

D6) Rq(0) = A0^4 + 2A0^2 Rp(0) + Rp^2(0)
+ 4A0^2 N0 ∫(0 to ∞) Σ(i=1 to npole) Σ(k=1 to np) Kpi Ck exp[ -(pi + pvk)τ ] dτ
+ 4A0^2 N0 ∫(0 to ∞) Σ(i=1 to npole) Σ(k=1 to np) Kpi* Ck exp[ -(pi* + pvk)τ ] dτ
+ 8N0^2 ∫(0 to ∞) Σ(i=1 to npole) Σ(l=1 to npole) Σ(k=1 to np) Kpi Kpl* Ck exp[ -(pi + pl* + pvk)τ ] dτ
If the integrals and summations in equation D6 are interchanged and the integrations
are performed, the expression for Rq(0) simplifies to equation D7.
D7) Rq(0) = A0^4 + 2A0^2 Rp(0) + Rp^2(0)
+ 4A0^2 N0 Σ(i=1 to npole) Σ(k=1 to np) [ Kpi Ck/(pi + pvk) + Kpi* Ck/(pi* + pvk) ]
+ 8N0^2 Σ(i=1 to npole) Σ(l=1 to npole) Σ(k=1 to np) Kpi Kpl* Ck/(pi + pl* + pvk)
The variance of Q(t) is calculated by subtracting the square of the mean value
from equation D7. This result is shown in equation D8.
D8) σQ^2 = 4A0^2 N0 Σ(i=1 to npole) Σ(k=1 to np) [ Kpi Ck/(pi + pvk) + Kpi* Ck/(pi* + pvk) ]
+ 8N0^2 Σ(i=1 to npole) Σ(l=1 to npole) Σ(k=1 to np) Kpi Kpl* Ck/(pi + pl* + pvk)
The first term of equation D8 is zero when the signal is not present. To
minimize the confusion over notation, the values of the mean and variance of the
noise only case can be assigned to different variables as follows.
D9) σn^2 = 8N0^2 Σ(i=1 to npole) Σ(l=1 to npole) Σ(k=1 to np) Kpi Kpl* Ck/(pi + pl* + pvk)
D10) Q̄n = Rp(0)
Gaussian distributions for the noise only and signal plus noise case are as
follows.

D11) ps+n(Q) = ( 1/(σQ √(2π)) ) exp( -(Q - Q̄)^2 / (2σQ^2) )
D12) pn(Q) = ( 1/(σn √(2π)) ) exp( -(Q - Q̄n)^2 / (2σn^2) )
The threshold voltage required to obtain a false alarm rate for the square law
detector, assuming gaussian output distributions, can be obtained by solving the
following integral equation for Vt.
D13) Pfa = ∫ from Vt to ∞ of ( 1/(σn √(2π)) ) exp( -(Q - Q̄n)^2 / (2σn^2) ) dQ
The expected probability of detection can be determined by the following integral.
D14) Pd = ∫ from Vt to ∞ of ( 1/(σQ √(2π)) ) exp( -(Q - Q̄)^2 / (2σQ^2) ) dQ
Equation D14 can be evaluated for various values of input Signal to Noise Ratio
where SNR is defined as
D15) SNR = A0^2 / Rp(0)
The solution of the equations D13 and D14 for Vt and Pd is expedited by
utilizing a numerical analysis routine from [8] which provides a technique for
evaluating those integrals. The listing of the program which is used to perform the
necessary calculations is contained in appendix F.
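The two-step procedure of equations D13 and D14 can be sketched numerically. The Python code below is illustrative only (the thesis used a Fortran routine from [8]); the means and variances are assumed placeholder values, and the threshold is found by bisection on the gaussian tail probability rather than by the library routine.

```python
import math

def tail(v, mean, sigma):
    """P(Q > v) for a gaussian Q ~ N(mean, sigma^2), via the complementary error function."""
    return 0.5 * math.erfc((v - mean) / (sigma * math.sqrt(2.0)))

def threshold_for_pfa(pfa, mean_n, sigma_n):
    """Solve D13 for Vt by bisection: find Vt with P(Q > Vt | noise only) = pfa."""
    lo, hi = mean_n, mean_n + 12.0 * sigma_n
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        if tail(mid, mean_n, sigma_n) > pfa:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

# illustrative noise-only and signal-plus-noise moments (placeholder values)
mean_n, sigma_n = 1.0, 0.2
mean_sn, sigma_sn = 2.0, 0.3

vt = threshold_for_pfa(1e-3, mean_n, sigma_n)   # D13
pd = tail(vt, mean_sn, sigma_sn)                # D14
print(vt, pd)
```

Repeating the last two lines over a range of signal-plus-noise moments traces out one gaussian-approximation ROC curve.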

APPENDIX E
DISCUSSION OF NUMERICAL TECHNIQUES
EMPLOYED IN PRODUCING NUMERICAL RESULTS
A number of numerical algorithms were utilized in the Fortran programs
written to produce the results in this thesis. A brief description of each numerical
algorithm and its application follows.
Jacobi Technique of Determining Eigenvalues and Eigenvectors -- This
numerical technique was utilized to determine the eigenvalues and eigenvectors of the
Hermitian covariance and quadratic matrices. A description of this technique is
beyond the scope of this presentation and may be obtained from pages 342-349 of
[8].
Newton-Raphson Method of Root Finding -- This technique was used to solve
equation 68 for the threshold voltage Vt. This method utilizes an initial guess of the
root and the following recursion equation to iterate to a root.
E1) x(i+1) = x(i) - f(x(i)) / f'(x(i))
Additional information on this technique is available from almost any numerical
analysis book. The actual algorithm utilized was obtained from [8], pages 254-259.
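The E1 recursion is easy to illustrate on a threshold equation of the form Pfa = exp(-Vt/2) (the exponential false alarm relation used for the square law detector), since the closed form Vt = -2 ln(Pfa) provides a check. The following Python sketch is illustrative only and is not the [8] routine.

```python
import math

def newton(f, fprime, x0, tol=1e-12, max_iter=100):
    """Newton-Raphson: iterate x <- x - f(x)/f'(x) until the update falls below tol."""
    x = x0
    for _ in range(max_iter):
        step = f(x) / fprime(x)
        x -= step
        if abs(step) < tol:
            break
    return x

pfa = 1e-3
f = lambda v: math.exp(-v / 2.0) - pfa      # root of f is the threshold Vt
fp = lambda v: -0.5 * math.exp(-v / 2.0)
vt = newton(f, fp, x0=1.0)
print(vt, -2.0 * math.log(pfa))             # both ~ 13.8155
```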
Integration Techniques -- Expressions for the Pd and the probability density
function for the signal+noise case required a numerical technique for evaluating the
integral. Again, an algorithm from [8] was utilized to perform the numerical
integrations. The technique utilized is referred to as Romberg integration on an open

interval. This method is discussed on pages 114-120 of [8]. The technique that is
used to handle the infinite limits is of interest to mention here. The integration from 0
to infinity is broken into 2 intervals: from 0 to b and b to infinity. The integral from
0 to b is evaluated using the function as specified. The integration from b to infinity
is evaluated by performing a change of variables as follows.
E2) ∫ from b to ∞ of f(x) dx = ∫ from 0 to 1/b of (1/t^2) f(1/t) dt
The transformed equation can then be integrated via the same technique used for the
integration from 0 to b.
Simulation Noise Model -- The Monte Carlo simulation of the square law and
linear envelope detectors required generation of a sequence of independent gaussian
variates. The samples were generated by utilizing two routines obtained from [8].
The first routine generated a sequence of uncorrelated uniformly distributed deviates.
An algorithm referred to as the Box-Muller method used the uniform deviate module
to generate a sequence of independent gaussian variates. Details of these algorithms
can be obtained on pages 192-203 of [8].
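The Box-Muller step can be sketched in a few lines of Python (this mirrors the idea of the [8] routines rather than reproducing them; the seed and sample count are arbitrary):

```python
import math
import random

def box_muller(rng):
    """Turn two independent uniform deviates into two independent gaussian deviates."""
    u1 = 1.0 - rng.random()                 # shift into (0, 1] so log(u1) is defined
    u2 = rng.random()
    r = math.sqrt(-2.0 * math.log(u1))
    return r * math.cos(2.0 * math.pi * u2), r * math.sin(2.0 * math.pi * u2)

rng = random.Random(1)
samples = [x for _ in range(50000) for x in box_muller(rng)]
mean = sum(samples) / len(samples)
var = sum(s * s for s in samples) / len(samples) - mean * mean
print(mean, var)                            # near 0 and 1, as expected for unit gaussians
```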
Simulation Filter Model -- The Monte Carlo simulation also required a set of
discrete time filters which approximated the analog filters under analysis. The digital
equivalents of the analog PRE-D and video filters were obtained via a Bilinear z
transform of the s domain descriptions of the analog filters. These algorithms were
obtained from [13]. A description of the filter design equations and the
transformation algorithms also can be obtained in [13].
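The bilinear transform step can be illustrated with a first-order example (a Python sketch; the analog prototype H(s) = a/(s + a) and the sample rate are assumed for illustration, and this is not the [13] code). Substituting s = 2·fs·(1 - z^-1)/(1 + z^-1) yields a digital filter whose DC response matches the analog filter exactly.

```python
import math

def bilinear_first_order(a, fs):
    """Map the analog lowpass H(s) = a/(s + a) to H(z) via s = 2*fs*(1 - z^-1)/(1 + z^-1).

    Returns (b, a_coeffs) for H(z) = (b0 + b1*z^-1)/(1 + a1*z^-1)."""
    T = 1.0 / fs
    k = 2.0 + a * T
    b = [a * T / k, a * T / k]
    a_coeffs = [1.0, (a * T - 2.0) / k]
    return b, a_coeffs

b, ac = bilinear_first_order(a=2.0 * math.pi * 100.0, fs=8000.0)
dc_gain = sum(b) / sum(ac)     # evaluate H(z) at z = 1
print(dc_gain)                 # 1.0: the transform preserves the DC response
```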

APPENDIX F
The Fortran source code utilized in producing the results of this thesis is
contained in this appendix. There were 5 distinct programs written to produce the
results in this thesis. The first program computed the eigenvalues, eigenvectors and
mean values of the random variables that were needed to produce the probability
densities, the probability of detection and the probability of false alarm. This
program also calculated the eigenvalues necessary to determine the probability
density function for the noise only case via the techniques of [4]. The second
program accepted the eigensystem data from the first program. The outputs of the
second program consisted of 1) PRE-D and video filter frequency responses 2)
probability density curves 3) probability of detection parameterized on probability of
false alarm and 4) probability of detection parameterized on PRE-D signal to noise
ratio. The third program calculated the mean and variance of the output of the video
filter. These data were used to construct a probability of detection curve assuming a
gaussian output distribution. The fourth program implemented Marcum's Q function
[1]. The "ideal" square law detector curve parameterized on probability of false
alarm was the output of this program. The fifth program implemented the Monte
Carlo Simulation. The outputs of this program were probability of detection curves
for both a square law detector and linear envelope detector. The probability of
detection curves were parameterized on PRE-D SNR.
These programs were implemented on a Macintosh II computer. The
configuration of the computer was 1) 1 Megabyte memory, 2) 50 Megabyte Jasmine
hard disk, 3) 68020 and 68881 only, 4) 4 bit video card with standard black and

white Apple monitor and 5) one 800 Kbyte floppy drive. The Language Systems
Fortran compiler along with the Macintosh Programmers Workbench were utilized to
produce the executable code.
PROGRAM 1
PROGRAM COMPM1M2
IMPLICIT EXTENDED (A-H)
IMPLICIT EXTENDED (O-Z)
REAL*4 TAPS(50),FCN,DZ(0:4),CP(0:4),EP,ATT,WSTOP
EXTENDED LAM(50),R(50),P(50,50),D(50,50),C(50,50),
EVAL(50),EVECT(50,50),TIME,TEMPMAT(50,50)
EXTENDED CM1(50),CM2(50),PM2(50,50),TAPSQRT(50,50),
ZCOEF2(10,0:4),PCOEF2(10,0:4)
EXTENDED ECOV(50),EVCOV(50,50),SI(50,50),SIT(50,50),
EVCOVT(50,50),T(50,50)
EXTENDED EFILT(50),EVFILT(50,50),ZCOEF(10,0:4),
PCOEF(10,0:4),MEANVECT(50),NBW,NBWFIR
COMPLEX ROOT(10),CON(10),CTEMP,ROOT1(10),CON1(10),
ROOT2(10),CON2(10)
COMMON /GLOB/ ITAPS,TAPS,SMPL
COMMON /FILT/ ZCOEF,PCOEF,IPOLES
COMMON /EIGVAL/ EFILT,MEANVECT
WRITE(6,*) 'INPUT PW '
READ(5,*) PW
WRITE(6,*) 'INPUT PRE-D FILTER TYPE 1=BUTT 2=CHEBI 3=CHEBII'
READ(5,*) ITYPE
WRITE(6,*) 'INPUT NUMBER OF POLES FOR FILTER'
READ(5,*) IPOLES
IF(ITYPE.EQ.1) THEN
DO K=1,(IPOLES+1)/2
CALL SPBWCF(IPOLES,K,2,DZ,CP,IERROR)
DO I=0,2
ZCOEF(K,I)=DZ(I)
PCOEF(K,I)=CP(I)
END DO
END DO
ELSEIF(ITYPE.EQ.2) THEN
WRITE(6,*) 'INPUT PASSBAND RIPPLE IN DB'
READ(5,*) EP
EP=SQRT(10**(EP/10.)-1)
DO K=1,(IPOLES+1)/2
CALL SPCHBI(IPOLES,K,2,EP,DZ,CP,IERROR)
DO I=0,2
ZCOEF(K,I)=DZ(I)
PCOEF(K,I)=CP(I)
END DO

END DO
ELSE
WRITE(6,*) 'INPUT STOPBAND ATTENUATION AND STOPBAND FREQUENCY (ATT AND WSTOP)'
READ(5,*) ATT,WSTOP
ATT=10**(ATT/10.)
WRITE(*,*) ATT
DO K=1,(IPOLES+1)/2
CALL SPCBII(IPOLES,K,2,WSTOP,ATT,DZ,CP,IERROR)
DO I=0,2
ZCOEF(K,I)=DZ(I)
PCOEF(K,I)=CP(I)
WRITE(*,*) 'NUM,DEN(I)',I,DZ(I),CP(I)
END DO
END DO
ENDIF
222 CALL NOISEBW(ROOT1,CON1,NBW)
WRITE(*,*) 'NOISEBW=',NBW
WRITE(*,*) 'INPUT 1 TO CHANGE NBW OF PRE-D FILTER '
READ(5,*) INBW
IF(INBW.EQ.1) THEN
WRITE(*,*) 'ENTER DESIRED NOISE BANDWIDTH'
READ(5,*) DNBW
WFREQ=DNBW/NBW
DO K=1,(IPOLES+1)/2
DO I=0,2
ZCOEF(K,I)=ZCOEF(K,I)/WFREQ**I
PCOEF(K,I)=PCOEF(K,I)/WFREQ**I
END DO
END DO
GOTO 222
ENDIF
PI=4.0*ATAN(1.0)
A2B=0.
A2=100.
11 CTEMP=H(A2)
A1=20.*LOG10(CQABS(CTEMP))
IF(A1.LT.-40.) THEN
IF(A1.LT.-41.) THEN
A2U=A2
A2=A2-(A2-A2B)/2.
GOTO 11
ENDIF
ELSE
A2B=A2
A2=A2+(A2U-A2)/2.
GOTO 11
ENDIF
SMPL=PI/2./A2
WRITE(*,*) 'SAMPLE, MAG ',SMPL,A1
WRITE(*,*) 'INPUT 1 TO CHANGE THE SAMPLE RATE '
READ(5,*) ISMPL
IF(ISMPL.EQ.1) THEN
WRITE(*,*) 'INPUT NEW SAMPLE RATE '
READ(5,*) SMPL

A2=PI/2./SMPL
CTEMP=H(A2)
A1=20.*LOG10(CQABS(CTEMP))
WRITE(*,*) 'SAMPLE,MAG ',SMPL,A1
ENDIF
CALL CDETCON(ROOT,CON)
WRITE(6,*) 'INPUT NUMBER OF GQR TERMS TO USE '
READ(5,*) K
TIME=0.
WRITE(*,*) 'INPUT THE NUMBER OF TAPS '
READ(5,*) ITAPS
KK=2*K
WRITE(*,*) 'INPUT THE FILTER TYPE 1 FOR IMPULSE MATCH 2 FOR FOURIER'
READ(5,*) IFLAG
IF(IFLAG.NE.2) THEN
WRITE(6,*) 'INPUT VIDEO FILTER TYPE 1=BUTT 2=CHEBI 3=CHEBII'
READ(5,*) ITYPE
WRITE(6,*) 'INPUT NUMBER OF POLES FOR FILTER'
READ(5,*) IPOLEVID
WRITE(*,*) 'INPUT 3DB IF-VIDEO BW RATIO'
READ(5,*) BWRAT
IF(ITYPE.EQ.1) THEN
DO J=1,(IPOLEVID+1)/2
CALL SPBWCF(IPOLEVID,J,2,DZ,CP,IERROR)
DO I=0,2
ZCOEF2(J,I)=DZ(I)
PCOEF2(J,I)=CP(I)
END DO
END DO
ELSEIF(ITYPE.EQ.2) THEN
WRITE(6,*) 'INPUT PASSBAND RIPPLE IN DB'
READ(5,*) EP
EP=SQRT(10**(EP/10.)-1)
DO J=1,(IPOLEVID+1)/2
CALL SPCHBI(IPOLEVID,J,2,EP,DZ,CP,IERROR)
DO I=0,2
ZCOEF2(J,I)=DZ(I)
PCOEF2(J,I)=CP(I)
END DO
END DO
ELSE
WRITE(6,*) 'INPUT STOPBAND ATTENUATION AND STOPBAND FREQUENCY (ATT AND WSTOP)'
READ(5,*) ATT,WSTOP
ATT=10**(ATT/10.)
WRITE(*,*) ATT
DO J=1,(IPOLEVID+1)/2
CALL SPCBII(IPOLEVID,J,2,WSTOP,ATT,DZ,CP,IERROR)
DO I=0,2
ZCOEF2(J,I)=DZ(I)
PCOEF2(J,I)=CP(I)

END DO
END DO
ENDIF
CALL GETRCON(ROOT2,CON2,ZCOEF2,PCOEF2,IPOLEVID)
WRITE(*,*)
WRITE(*,*) 'VIDEO POLES AND CONSTANTS'
DO I=1,IPOLEVID
WRITE(*,*) ROOT2(I),CON2(I)
END DO
111 SUM=0.
DO I=1,ITAPS
CTEMP=QCMPLX(0.,0.)
TIME2=(I-1)*SMPL/BWRAT
DO J=1,IPOLEVID
CTEMP=CTEMP+CON2(J)*CQEXP(ROOT2(J)*TIME2)
END DO
TAPS(I)=REAL(CTEMP)*SMPL/BWRAT
SUM=SUM+TAPS(I)
C WRITE(*,*) REAL(CTEMP),AIMAG(CTEMP),TAPS(I),SMPL/BWRAT
END DO
NBWFIR=0.
DO I=1,ITAPS
TAPS(I)=TAPS(I)/SUM
NBWFIR=NBWFIR+TAPS(I)**2
END DO
BWRATIO=NBW/NBWFIR
IWNDO=2
WRITE(*,*) 'FIR NOISEBW, SUM',NBWFIR,SUM
WRITE(*,*) 'IF NOISEBW=',NBW
WRITE(*,*) 'BWRATIO=',BWRATIO
WRITE(*,*) 'INPUT 1 FOR NEW BWRAT'
READ(5,*) IBW
IF(IBW.EQ.1) THEN
WRITE(*,*) 'INPUT 3DB IF-VIDEO BW RATIO'
READ(5,*) BWRAT
GOTO 111
ENDIF
ELSE
WRITE(*,*) 'INPUT WINDOW TYPE 1-RECT 2-TAP RECT 3-TRI 4-HANN 5-HAMM 6-BLACK'
READ(5,*) IWNDO
WRITE(*,*) 'ENTER NBW RATIO '
READ(5,*) BWRATIO
13 WRITE(*,*) 'ENTER NUMBER OF TAPS '
READ(5,*) ITAPS
FCN=NBW/BWRATIO
IF(1./FCN.GT.FLOAT(ITAPS)) THEN
WRITE(*,*) 'INSUFFICIENT NUMBER OF TAPS '
GOTO 13
ENDIF
A2U=0.5
A2B=0.0

12 CALL SPFIRL(ITAPS-1,FCN,IWNDO,TAPS,IERROR)
SUM=0.
DO I=1,ITAPS
SUM=SUM+TAPS(I)
END DO
NBWFIR=0.
DO I=1,ITAPS
TAPS(I)=TAPS(I)/SUM
NBWFIR=NBWFIR+TAPS(I)**2
END DO
DELTA=NBWFIR-NBW/BWRATIO
IF(DELTA.GT.0.001*NBWFIR) THEN
A2U=FCN
FCN=FCN-(FCN-A2B)/2.
GOTO 12
ELSEIF(DELTA.LT.-0.001*NBWFIR) THEN
A2B=FCN
FCN=FCN+(A2U-FCN)/2.
GOTO 12
END IF
WRITE(*,*) 'FIR NOISEBW,FCN',NBWFIR,FCN
WRITE(*,*) 'IF NOISEBW=',NBW
WRITE(*,*) 'BWRATIO=',BWRATIO
WRITE(*,*) 'DELTA=',DELTA
END IF
CALL GETMEAN(MEANVECT,ITAPS,ROOT1,CON1,IPOLES,PW,SMPL)
C WRITE(*,*) 'I,MEANVECT(I)'
C DO I=1,ITAPS
C WRITE(*,*) I,MEANVECT(I)
C END DO
DO I=1,ITAPS
DO J=1,ITAPS
TAPSQRT(I,J)=0.0
END DO
TAPSQRT(I,I)=TAPS(I)
END DO
Xl=-1.
X2=l.
CALL GAULEG(X 1 ,X2,LAM,R,KK)
C
C UNNORMALIZE THE RETURNED FREQUENCIES
C
DO I=1,K
LAM(I)=LAM(I+K)*1.4
R(I)=R(I+K)*1.4/PI/2.
END DO
C
C
C INITIALIZE THE D MATRIX
C
DO I=1,KK
DO J=1,KK

D(I,J)=0.
END DO
END DO
DO I=1,K
D(I,I)=DSQRT(R(I))
END DO
DO I=K+1,KK
D(I,I)=DSQRT(R(I-K))
END DO
C FORM DPD MATRIX
CALL PMATRIX(PJLAM,K,TIME)
CALL MATMULT(P,D,C,KK,KK,KK)
CALL MATMULT(D,C,P,KK,KK,KK)
C FIND AND SORT THE EIGENVALUES
CALL JACOBI(P,KK,KK,EVAL,EVECT,NROT)
CALL EIGSRT(EVAL,EVECT,KK,KK)
WRITE(*,*) 'THE EIGENVALUES ARE '
SUM=0.
DO I=1,KK
WRITE(*,*) EVAL(I)
SUM=SUM+EVAL(I)
END DO
CALL DETCON(KK,EVAL,CM1)
SUM=0.
WRITE(*,*) 'THE CONSTANTS ARE'
DO I=1,KK
WRITE(*,*) CM1(I)
SUM=SUM+CM1(I)
END DO
WRITE(*,*) 'THE SUM OF THE CONSTANTS ARE ',SUM
C DO SECOND METHOD
CALL COV(ITAPS,SMPL,PM2,ROOT,CON,IPOLES)
C WRITE(*,*) 'THE COVARIANCE MATRIX'
DO I=1,ITAPS
C WRITE(6,110) (PM2(I,J),J=1,ITAPS)
DO J=I,ITAPS
TEMPMAT(I,J)=PM2(I,J)
END DO
END DO
CALL JACOBI(TEMPMAT,ITAPS,ITAPS,ECOV,EVCOV,NROT)
CALL EIGSRT(ECOV,EVCOV,ITAPS,ITAPS)
WRITE(*,*) 'BACK FROM FIRST JACOBI'
DO I=1,ITAPS
DO J=1,ITAPS
SI(I,J)=0.0

D(I,J)=0.0
END DO
SI(I,I)=QSQRT(QMAX1(1.E-10,ECOV(I)))
D(I,I)=1.0/SI(I,I)
END DO
CALL MATMULT(TAPSQRT,EVCOV,TEMPMAT,ITAPS,ITAPS,ITAPS)
CALL MATMULT(TEMPMAT,SI,T,ITAPS,ITAPS,ITAPS)
CALL TRANS(EVCOV,EVCOVT,ITAPS)
CALL MATMULT(EVCOVT,T,TEMPMAT,ITAPS,ITAPS,ITAPS)
CALL MATMULT(SI,TEMPMAT,T,ITAPS,ITAPS,ITAPS)
CALL JACOBI(T,ITAPS,ITAPS,EFILT,EVFILT,NROT)
CALL EIGSRT(EFILT,EVFILT,ITAPS,ITAPS)
CALL MATMULT(D,EVCOVT,T,ITAPS,ITAPS,ITAPS)
CALL TRANS(EVFILT,TEMPMAT,ITAPS)
CALL MATMULT(TEMPMAT,T,EVFILT,ITAPS,ITAPS,ITAPS)
DO I=1,ITAPS
T(I,1)=MEANVECT(I)
END DO
II=1
CALL MATMULT(EVFILT,T,TEMPMAT,ITAPS,ITAPS,II)
DO I=1,ITAPS
WRITE(*,*) 'MEAN(I)= ',I,TEMPMAT(I,1)
MEANVECT(I)=TEMPMAT(I,1)
END DO
WRITE(*,*) 'THE EIGENVALUES ARE '
DO I=1,ITAPS
WRITE(*,*) EFILT(I)
END DO
CALL DETCON(ITAPS,EFILT,CM2)
WRITE(*,*)
WRITE(*,*) 'THE CONSTANTS ARE '
SUM=0.
DO I=1,ITAPS
WRITE(*,*) CM2(I)
SUM=SUM+CM2(I)
END DO
WRITE(*,*) 'SUM OF THE CONSTANTS=',SUM
OPEN(UNIT=2,FILE='EIGDATA.DAT',STATUS='NEW')
WRITE(2,*) KK
DO I=1,KK
WRITE(2,*) CM1(I),EVAL(I)
END DO
WRITE(2,*) ITAPS,SMPL
DO I=1,ITAPS
WRITE(2,*) CM2(I),EFILT(I),MEANVECT(I),TAPS(I)
END DO
WRITE(2,*) IPOLES
DO I=1,(IPOLES+1)/2
WRITE(2,*) ZCOEF(I,0),ZCOEF(I,1),ZCOEF(I,2),PCOEF(I,0),
PCOEF(I,1),PCOEF(I,2)
END DO
WRITE(2,*) NBW,BWRATIO,NBWFIR,IWNDO
CLOSE(2)
110 FORMAT(' ',50(F8.4,2X))

END
C
C LINALG.F
C INTEG.F
C HSDUM
C UTILITY.F
C
C-********************************************************************
C-* CHAPTER 8 ROUTINES *
C-********************************************************************
SUBROUTINE SPFIRL(L,FCN,IWNDO,B ,IERROR)
C-LATEST DATE: 05/17/86
C-FIR LOWPASS FILTER DESIGN USING WINDOWED FOURIER SERIES
C-FILTER LENGTH = L+1
C-FCN=NORMALIZED CUT-OFF FREQUENCY IN HERTZ-SECONDS
C-IWNDO=WINDOW USED TO TRUNCATE FOURIER SERIES
C- 1 -RECTANGULAR; 2-TAPERED RECTANGULAR; 3-TRIANGULAR
C- 4-HANNING; 5-HAMMING; 6-BLACKMAN
C-B(0:L)=DIGITAL FILTER COEFFICIENTS RETURNED
C-IERROR=0 NO ERRORS DETECTED
C- 1 INVALID FILTER LENGTH (L<=0)
C- 2 INVALID WINDOW TYPE IWNDO
C- 3 INVALID CUT-OFF FCN; <=0 OR >=0.5
DIMENSION B(0:L)
IERROR=0
IF(L.LE.O) IERROR=l
IF(IWNDO.LT. l.OR.IWNDO.GT.6) IERROR=2
IF(FCN.LE.0..OR.FCN.GE.0.5) IERROR=3
IF (IERROR.NE.O) RETURN
DO 1 I=0,L
B(I)=0.0
1 CONTINUE
PI=4.*ATAN(1.)
WCN=2.*PI*FCN
DLY=L/2.
LIM=INT(L/2)
IF(DLY.EQ.L/2) THEN
LIM=LIM-1
B(L/2)=WCN/PI
ENDIF
DO 2 I=0,LIM
B(I)=((SIN(WCN*(I-DLY)))/(PI*(I-DLY)))*SPWNDO(IWNDO,L+1,I)
B(L-I)=B(I)
2 CONTINUE
RETURN
END
C-

C-* CHAPTER 14 ROUTINES *
c-
FUNCTION SPWNDO(ITYPE,N,K)
C-LATESTDATE: 11/13/85
C-THIS FUNCTION GENERATES A SINGLE SAMPLE OF A DATA WINDOW.
C-ITYPE=1(RECTANGULAR), 2(TAPERED RECTANGULAR), 3(TRIANGULAR),
C- 4(HANNING), 5(HAMMING), OR 6(BLACKMAN).
C- (NOTE: TAPERED RECTANGULAR HAS COSINE-TAPERED 10% ENDS.)
C-N=SIZE (TOTAL NO. SAMPLES) OF WINDOW.
C-K=SAMPLE NUMBER WITHIN WINDOW, FROM 0 THROUGH N-1.
C- (IF K IS OUTSIDE THIS RANGE, SPWNDO IS SET TO 0.)
PI=4.*ATAN(1.)
SPWNDO=0.
IF(ITYPE.LT.1.OR.ITYPE.GT.6) RETURN
IF(K.LT.0.OR.K.GE.N) RETURN
SPWNDO=l.
GO TO (1,2,3,4,5,6), ITYPE
1 RETURN
2 L=(N-2)/10
IF(K.LE.L) SPWNDO=0.5*(1.0-COS(K*PI/(L+1)))
IF(K.GT.N-L-2) SPWNDO=0.5*(1.0-COS((N-K-1)*PI/(L+1)))
RETURN
3 SPWNDO= 1.0-ABS(1.0-2*K/(N-1.0))
RETURN
4 SPWNDO=0.5*(1.0-COS(2*K*PI/(N-1)))
RETURN
5 SPWNDO=0.54-0.46*COS(2*K*PI/(N-1))
RETURN
6 SPWNDO=0.42-0.5*COS(2*K*PI/(N-1))+0.08*COS(4*K*PI/(N-1))
RETURN
END
C-
SUBROUTINE SPBWCF(L,K,LN,D,C,IERROR)
C-GENERATES KTH SECTION COEFFICIENTS FOR LTH ORDER NORMALIZED
C- LOWPASS BUTTERWORTH FILTER
C-SECOND ORDER SECTIONS: K<=(L+1)/2
C-ODD ORDER L: FINAL SECTION WILL CONTAIN 1ST ORDER POLE
C-LN DEFINES COEFFICIENT ARRAY SIZE
C-ANALOG COEFFICIENTS ARE RETURNED IN D AND C
C-IERROR=0 NO ERRORS DETECTED
C- 1 INVALID FILTER ORDER L
C- 2 INVALID SECTION NUMBER K
DIMENSION D(0:4),C(0:4)
PI=4.0*ATAN(1.0)
IERROR=0
IF(L.LE.0) IERROR=1
IF(K.LE.0.OR.K.GT.INT((L+1)/2)) IERROR=2
IF(IERROR.NE.0) RETURN
D(0)=1.
C(0)=1.

DO 1 I=1,LN
D(I)=0.
C(I)=0.
1 CONTINUE
TMP=K-(L+1.)/2.
IF(TMP.EQ.O.) THEN
C(l)=l.
ELSE
C(1)=(-2.)*COS((2*K+L-1)*PI/(2*L))
C(2)=1.0
ENDIF
RETURN
END
C
SUBROUTINE SPCHBI(L,K,LN,EP,D,C,IERROR)
C-LATESTDATE: 11/13/85
C-GENERATES KTH SECTION COEFFICIENTS FOR LTH ORDER
NORMALIZED
C- LOWPASS CHEBYSHEV TYPE I ANALOG FILTER
C-SECOND ORDER SECTIONS: K<=(L+l)/2
C-ODD ORDER L: LAST SECTION WILL CONTAIN SINGLE POLE
C-LN DEFINES COEFFICIENT ARRAY SIZE
C-EP REGULATES THE PASSBAND RIPPLE
C-TRANSFER FUNCTION SCALING IS INCLUDED IN FIRST SECTION (L
EVEN)
C-ANALOG COEFFICIENTS ARE RETURNED IN D AND C
C-IERROR=0 NO ERRORS DETECTED
C- 1 INVALID FILTER ORDER L
C- 2 INVALID SECTION NUMBER K
C- 3 INVALID RIPPLE PARAMETER EP
DIMENSION D(0:LN),C(0:LN)
PI=4.*ATAN(1.)
IERROR=0
IF(L.LE.O) IERROR=l
IF(K.GT.INT((L+1)/2).OR.K.LE.O) IERROR=2
IF(EP.LE.O.) IERROR=3
IF(IERROR.NE.O) RETURN
GAM=((1.+SQRT(1.+EP**2))/EP)**(1./L)
SIGMA=.5*(1./GAM-GAM)*SIN((2*K-1)*PI/(2*L))
OMEGA=.5*(GAM+1./GAM)*COS((2*K-1)*PI/(2*L))
DO 1 LL=0,LN
D(LL)=0.
C(LL)=0.
1 CONTINUE
IF(INT(L/2).NE.INT((L+1)/2).AND.K.EQ.(L+1)/2) THEN
D(0)=-1.*SIGMA
C(0)=D(0)
C(l)=l.
RETURN
ENDIF
C(0)=SIGMA**2+OMEGA**2
C(1)=-2.*SIGMA
C(2)=l.
D(0)=C(0)

IF(INT(L/2).EQ.INT((L+1)/2).AND.K.EQ.1) D(0)=D(0)/SQRT(1.+EP**2)
RETURN
END
C-
SUBROUTINE SPCBII(L,K,LN,WS,ATT,D,C,IERROR)
C-LATESTDATE: 11/13/85
C-GENERATES KTH SECTION COEFFICIENTS FOR LTH ORDER
NORMALIZED
C- LOWPASS CHEBYSHEV TYPE H ANALOG FILTER
C-SECOND ORDER SECTIONS: K<= (L+l)/2
C-ODD ORDER L: FINAL SECTION WILL CONTAIN SINGLE POLE
C-LN DEFINES COEFFICIENT ARRAY SIZE
C-WS AND ATT REGULATE STOPBAND ATTENUATION
C- MAGNITUDE WILL BE 1/ATT AT WS RAD/SEC
C-ANALOG COEFFICIENTS ARE RETURNED IN ARRAYS D AND C
C-IERROR=0 NO ERRORS DETECTED
C- 1 INVALID FILTER ORDER L
C- 2 INVALID SECTION NUMBER K
C- 3 INVALID STOPBAND FREQUENCY WS
C- 4 INVALID ATTENUATION PARAMETER
DIMENSION D(0:LN),C(0:LN)
PI=4.*ATAN(1.)
IERROR=0
IF(L.LE.O) IERROR=l
IF(K.GT.INT((L+1)/2).OR.K.LT.1) IERROR=2
IF(WS.LE.1.) IERROR=3
IF(ATT.LE.O.) IERROR=4
IF(IERROR.NE.O) RETURN
GAM=(ATT+SQRT(ATT**2-1.))**(1./L)
ALPHA=.5*(1./GAM-GAM)*SIN((2*K-1)*PI/(2*L))
BETA=.5*(GAM+1./GAM)*COS((2*K-1)*PI/(2*L))
SIGMA=(WS*ALPHA)/(ALPHA**2+BETA**2)
OMEGA=(-1.*WS*BETA)/(ALPHA**2+BETA**2)
DO 1 LL=0,LN
D(LL)=0.
C(LL)=0.
1 CONTINUE
IF(INT(L/2).NE.INT((L+1)/2).AND.K.EQ.(L+1)/2) THEN
D(0)=-1.*SIGMA
C(0)=D(0)
C(l)=l.
RETURN
ENDIF
SCLN=SIGMA**2+OMEGA**2
SCLD=(WS/COS((2*K-1)*PI/(2*L)))**2
D(0)=SCLN*SCLD
D(2)=SCLN
C(0)=D(0)
C(1)=-2.*SIGMA*SCLD
C(2)=SCLD
RETURN
END
C-
C

c
SUBROUTINE JACOBI(A,N,NP,D,V,NROT)
C COMPUTES ALL OF THE EIGENVALUES OF A REAL
C SYMMETRIC MATRIX A WHICH IS OF SIZE N
C BY N STORED IN A PHYSICAL NP BY NP ARRAY. ON
C OUTPUT, THE ELEMENTS ABOVE THE DIAGONAL
C ARE DESTROYED. D RETURNS THE EIGENVALUES OF A IN
C ITS FIRST N ELEMENTS. V IS A MATRIX
C WITH THE SAME DIMENSIONS AS A WHOSE COLUMNS CONTAIN
C ON OUTPUT THE NORMALIZED EIGENVECTORS
C OF A. NROT RETURNS THE NUMBER OF JACOBI ROTATIONS
C WHICH WERE REQUIRED.
C
PARAMETER (NMAX=100)
IMPLICIT EXTENDED (A-H)
IMPLICIT EXTENDED (O-Z)
EXTENDED A(50,50),D(50),V(50,50),B(50),Z(50)
C
DO 12 IP=1,N
DO 11 IQ=1,N
V(IP,IQ)=0.
11 CONTINUE
V(IP,IP)=1.0
12 CONTINUE
DO 13 IP=1,N
B(IP)=A(IP,IP)
D(IP)=B(IP)
Z(IP)=0.
13 CONTINUE
NROT=0
DO 24 I=1,50
SM1=0.
DO 15 IP=1,N-1
DO 14 IQ=IP+1,N
SM1=SM1+QABS(A(IP,IQ))
14 CONTINUE
15 CONTINUE
IF(SM1.EQ.0.) RETURN
IF(I.LT.4) THEN
TRESH=0.2*SM1/N**2
ELSE
TRESH=0.
ENDIF
DO 22 IP=1,N-1
DO 21 IQ=IP+1,N
G=100.*QABS(A(IP,IQ))
IF((I.GT.4).AND.(QABS(D(IP))+G.EQ.QABS(D(IP))).AND.
(QABS(D(IQ))+G.EQ.QABS(D(IQ)))) THEN
A(IP,IQ)=0.
ELSE IF(QABS(A(IP,IQ)).GT.TRESH) THEN
H=D(IQ)-D(IP)
IF(QABS(H)+G.EQ.QABS(H)) THEN
T=A(IP,IQ)/H
ELSE

75
THETA=0.5*H/A(IP,IQ)
T=1./(QABS(THETA)+QSQRT(1.+THETA**2))
IF(THETA.LT.0.) T=-T
ENDIF
C=1./QSQRT(1.+T**2)
S=T*C
TAU=S/(1.+C)
H=T*A(IP,IQ)
Z(IP)=Z(IP)-H
Z(IQ)=Z(IQ)+H
D(IP)=D(IP)-H
D(IQ)=D(IQ)+H
A(IP,IQ)=0.
DO 16 J=1,IP-1
G=A(J,IP)
H=A(J,IQ)
A(J,IP)=G-S*(H+G*TAU)
A(J,IQ)=H+S*(G-H*TAU)
16 CONTINUE
DO 17 J=IP+1,IQ-1
G=A(IP,J)
H=A(J,IQ)
A(IP,J)=G-S*(H+G*TAU)
A(J,IQ)=H+S*(G-H*TAU)
17 CONTINUE
DO 18 J=IQ+1,N
G=A(IP,J)
H=A(IQ,J)
A(IP,J)=G-S*(H+G*TAU)
A(IQ,J)=H+S*(G-H*TAU)
18 CONTINUE
DO 19 J=1,N
G=V(J,IP)
H=V(J,IQ)
V(J,IP)=G-S*(H+G*TAU)
V(J,IQ)=H+S*(G-H*TAU)
19 CONTINUE
NROT=NROT+1
ENDIF
21 CONTINUE
22 CONTINUE
DO 23 IP=1,N
B(IP)=B(IP)+Z(IP)
D(IP)=B(IP)
Z(IP)=0.
23 CONTINUE
24 CONTINUE
PAUSE '50 ITERATIONS SHOULD NEVER HAPPEN '
RETURN
END
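Subroutine JACOBI follows the cyclic Jacobi method: repeated plane rotations, each chosen to annihilate one off-diagonal element of the symmetric matrix. A minimal Python sketch of the same rotation (jacobi_eig is an illustrative name, not part of the thesis listings) shows the idea on a small symmetric matrix:

```python
import math

def jacobi_eig(a, sweeps=50):
    """Cyclic Jacobi eigenvalue iteration for a real symmetric matrix
    given as a list of lists; mirrors the rotation used in JACOBI.
    Returns the eigenvalues in ascending order."""
    n = len(a)
    a = [row[:] for row in a]
    for _ in range(sweeps):
        off = sum(abs(a[p][q]) for p in range(n) for q in range(p + 1, n))
        if off < 1e-30:
            break
        for p in range(n - 1):
            for q in range(p + 1, n):
                if a[p][q] == 0.0:
                    continue
                # rotation angle chosen so the update zeroes a[p][q]
                theta = 0.5 * (a[q][q] - a[p][p]) / a[p][q]
                t = math.copysign(1.0, theta) / (
                    abs(theta) + math.sqrt(1.0 + theta * theta))
                c = 1.0 / math.sqrt(1.0 + t * t)
                s = t * c
                # A <- J^T A J, applied as row updates then column updates
                for j in range(n):
                    apj, aqj = a[p][j], a[q][j]
                    a[p][j] = c * apj - s * aqj
                    a[q][j] = s * apj + c * aqj
                for i in range(n):
                    aip, aiq = a[i][p], a[i][q]
                    a[i][p] = c * aip - s * aiq
                    a[i][q] = s * aip + c * aiq
    return sorted(a[i][i] for i in range(n))
```

The FORTRAN routine gains speed by updating only the affected rows and columns and accumulating the eigenvectors in V; the sketch keeps only the rotation itself.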
C
C
SUBROUTINE EIGSRT(D,V,N,NP)
C SORTS THE EIGENVALUES AND EIGENVECTORS
C FROM JACOBI INTO DESCENDING
C ORDER.
EXTENDED D(50),V(50,50),P
DO 13 I=1,N-1
K=I
P=D(I)
DO 11 J=I+1,N
IF(D(J).GE.P) THEN
K=J
P=D(J)
ENDIF
11 CONTINUE
IF(K.NE.I) THEN
D(K)=D(I)
D(I)=P
DO 12 J=1,N
P=V(J,I)
V(J,I)=V(J,K)
V(J,K)=P
12 CONTINUE
ENDIF
13 CONTINUE
RETURN
END
C
SUBROUTINE BALANC(A,N,NP)
C
C
IMPLICIT EXTENDED (A-H)
IMPLICIT EXTENDED (O-Z)
EXTENDED A(50,50)
PARAMETER (RADIX=2.,SQRDX=RADIX**2)
1 CONTINUE
LAST=1
DO 14 I=1,N
C=0.
R=0.
DO 11 J=1,N
IF(J.NE.I) THEN
C=C+QABS(A(J,I))
R=R+QABS(A(I,J))
ENDIF
11 CONTINUE
IF(C.NE.0..AND.R.NE.0.) THEN
G=R/RADIX
F=1.
S=C+R
2 IF(C.LT.G) THEN
F=F*RADIX
C=C*SQRDX
GO TO 2
ENDIF

G=R*RADIX
3 IF(C.GT.G) THEN
F=F/RADIX
C=C/SQRDX
GO TO 3
ENDIF
IF((C+R)/F.LT.0.95*S) THEN
LAST=0
G=1./F
DO 12 J=1,N
A(I,J)=A(I,J)*G
12 CONTINUE
DO 13 J=1,N
A(J,I)=A(J,I)*F
13 CONTINUE
ENDIF
ENDIF
14 CONTINUE
IF(LAST.EQ.0) GO TO 1
RETURN
END
SUBROUTINE HQR(A,N,NP,WR,WI)
IMPLICIT EXTENDED (A-H)
IMPLICIT EXTENDED (O-Z)
EXTENDED A(50,50),WR(50),WI(50)
ANORM=QABS(A(1,1))
DO I=2,N
DO J=I-1,N
ANORM=ANORM+QABS(A(I,J))
END DO
END DO
NN=N
T=0.
1 IF(NN.GE.1) THEN
ITS=0
2 DO L=NN,2,-1
S=QABS(A(L-1,L-1))+QABS(A(L,L))
IF(S.EQ.0.) S=ANORM
IF(QABS(A(L,L-1))+S.EQ.S) GO TO 3
END DO
L=1
3 X=A(NN,NN)
IF(L.EQ.NN)THEN
WR(NN)=X+T
WI(NN)=0.
NN=NN-1
ELSE
Y=A(NN-1,NN-1)
W=A(NN,NN-1)*A(NN-1,NN)
IF(L.EQ.NN-1) THEN
P=0.5*(Y-X)
Q=P**2+W
Z=QSQRT(QABS(Q))

X=X+T
IF(Q.GE.O.) THEN
Z=P+SIGN(Z,P)
WR(NN)=X+Z
WR(NN-1)=WR(NN)
IF(Z.NE.O.)WR(NN)=X-W/Z
WI(NN)=0.
WI(NN-1)=0.
ELSE
WR(NN)=X+P
WR(NN-1)=WR(NN)
WI(NN)=Z
WI(NN-1)=-Z
ENDIF
NN=NN-2
ELSE
IF(ITS.EQ.30) PAUSE 'TOO MANY ITERATIONS'
IF(ITS.EQ.10.OR.ITS.EQ.20) THEN
T=T+X
DO I=1,NN
A(I,I)=A(I,I)-X
END DO
S=QABS(A(NN,NN-l))+QABS(A(NN-l,NN-2))
X=0.75*S
Y=X
W=-0.4375*S**2
ENDIF
ITS=ITS+1
DO M=NN-2,L,-1
Z=A(M,M)
R=X-Z
S=Y-Z
P=(R*S-W)/A(M+1,M)+A(M,M+1)
Q=A(M+1,M+1)-Z-R-S
R=A(M+2,M+1)
S=QABS(P)+QABS(Q)+QABS(R)
P=P/S
Q=Q/S
R=R/S
IF(M.EQ.L) GO TO 4
U=QABS(A(M,M-1))*(QABS(Q)+QABS(R))
V=QABS(P)*(QABS(A(M-1,M-1))
+QABS(Z)+QABS(A(M+1,M+1)))
IF(U+V.EQ.V) GO TO 4
END DO
4 DO I=M+2,NN
A(I,I-2)=0.
IF(I.NE.M+2) A(I,I-3)=0.
END DO
DO K=M,NN-1
IF(K.NE.M) THEN
P=A(K,K-1)
Q=A(K+1,K-1)
R=0.

IF(K.NE.NN-1) R=A(K+2,K-1)
X=QABS(P)+QABS(Q)+QABS(R)
IF(X.NE.O.) THEN
P=P/X
Q=Q/X
R=R/X
ENDIF
ENDIF
S=SIGN(QSQRT(P**2+Q**2+R**2),P)
IF(S.NE.O.) THEN
IF(K.EQ.M) THEN
IF(L.NE.M) A(K,K-1)=-A(K,K-1)
ELSE
A(K,K-1)=-S*X
ENDIF
P=P+S
X=P/S
Y=Q/S
Z=R/S
Q=Q/P
R=R/P
DO J=K,NN
P=A(K,J)+Q*A(K+1,J)
IF(K.NE.NN-1)THEN
P=P+R*A(K+2,J)
A(K+2,J)=A(K+2,J)-P*Z
ENDIF
A(K+1,J)=A(K+1,J)-P*Y
A(K,J)=A(K,J)-P*X
END DO
DO I=L,MIN(NN,K+3)
P=X*A(I,K)+Y*A(I,K+1)
IF(K.NE.NN-1)THEN
P=P+Z*A(I,K+2)
A(I,K+2)=A(I,K+2)-P*R
ENDIF
A(I,K+1)=A(I,K+1 )-P*Q
A(I,K)=A(I,K)-P
END DO
ENDIF
END DO
GO TO 2
ENDIF
ENDIF
GOTO 1
ENDIF
RETURN
END
SUBROUTINE ELMHES(A,N,NP)
IMPLICIT EXTENDED (A-H)
IMPLICIT EXTENDED (O-Z)
EXTENDED A(50,50)
IF(N.GT.2) THEN

DO M=2,N-1
X=0.
I=M
DO J=M,N
IF(QABS(A(J,M-1)).GT.QABS(X)) THEN
X=A(J,M-1)
I=J
ENDIF
END DO
IF(I.NE.M)THEN
DO J=M-1,N
Y=A(I,J)
A(I,J)=A(M,J)
A(M,J)=Y
END DO
DO J=1,N
Y=A(J,I)
A(J,I)=A(J,M)
A(J,M)=Y
END DO
ENDIF
IF(X.NE.O.)THEN
DO I=M+1,N
Y=A(I,M-1)
IF(Y.NE.O.) THEN
Y=Y/X
A(I,M-1)=Y
DO J=M,N
A(I,J)=A(I,J)-Y*A(M,J)
END DO
DO J=1,N
A(J,M)=A(J,M)+Y*A(J,I)
END DO
ENDIF
END DO
ENDIF
END DO
ENDIF
RETURN
END
SUBROUTINE TRANS(A,B,ITAPS)
EXTENDED A(50,50),B(50,50)
DO 1=1,ITAPS
DO J=1,ITAPS
B(J,I)=A(I,J)
END DO
END DO
RETURN
END
SUBROUTINE MATMULT(A,B,C,IROW1,ICOL1,ICOL2)
EXTENDED A(50,50),B(50,50),C(50,50),SUM
DO I=1,IROW1
DO J=1,ICOL2

SUM=0.
DO K=1,ICOL1
SUM=SUM+A(I,K)*B(K,J)
END DO
C(I,J)=SUM
END DO
END DO
RETURN
END
SUBROUTINE GAULEG(X1,X2,X,W,N)
IMPLICIT EXTENDED(A-H,O-Z)
EXTENDED X1,X2,X(N),W(N)
PARAMETER (EPS=3.D-14)
M=(N+l)/2
XM=0.5D0*(X2+X1)
XL=0.5D0*(X2-X1)
DO 12 I=1,M
Z=COS(3.141592654D0*(I-.25D0)/(N+.5D0))
1 CONTINUE
P1=1.D0
P2=0.D0
DO 11 J=1,N
P3=P2
P2=P1
P1=((2.D0*J-1.D0)*Z*P2-(J-1.D0)*P3)/J
11 CONTINUE
PP=N*(Z*P1-P2)/(Z*Z-1.D0)
Z1=Z
Z=Z1-P1/PP
IF(ABS(Z-Z1).GT.EPS) GO TO 1
X(I)=XM-XL*Z
X(N+1-I)=XM+XL*Z
W(I)=2.D0*XL/((1.D0-Z*Z)*PP*PP)
W(N+1-I)=W(I)
12 CONTINUE
RETURN
END
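GAULEG locates each Gauss-Legendre abscissa by Newton iteration on the Legendre polynomial, which is built up from the three-term recurrence inside the loop. A Python sketch of the same iteration (gauleg here is an illustrative translation, not the FORTRAN routine itself):

```python
import math

def gauleg(x1, x2, n, eps=3e-14):
    """Gauss-Legendre abscissas x and weights w on [x1, x2], using the
    same Newton iteration on the Legendre recurrence as GAULEG."""
    x = [0.0] * n
    w = [0.0] * n
    m = (n + 1) // 2                    # roots come in symmetric pairs
    xm = 0.5 * (x2 + x1)
    xl = 0.5 * (x2 - x1)
    for i in range(1, m + 1):
        z = math.cos(math.pi * (i - 0.25) / (n + 0.5))   # starting guess
        while True:
            p1, p2 = 1.0, 0.0
            for j in range(1, n + 1):                    # Legendre recurrence
                p1, p2 = ((2 * j - 1) * z * p1 - (j - 1) * p2) / j, p1
            pp = n * (z * p1 - p2) / (z * z - 1.0)       # derivative P_n'(z)
            z, z1 = z - p1 / pp, z                       # Newton step
            if abs(z - z1) <= eps:
                break
        x[i - 1] = xm - xl * z
        x[n - i] = xm + xl * z
        w[i - 1] = 2.0 * xl / ((1.0 - z * z) * pp * pp)
        w[n - i] = w[i - 1]
    return x, w
```

An N-point rule built this way integrates polynomials up to degree 2N-1 exactly, which is why a small N suffices for the smooth integrands in this thesis.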
SUBROUTINE TRAPZD(FUNC,A,B,S,N)
SAVE
EXTENDED A,B,S,FUNC,DEL,TNM,SUM,X
IF(N.EQ.1) THEN
S=0.5*(B-A)*(FUNC(A)+FUNC(B))
IT=1
ELSE
TNM=IT
DEL=(B-A)/TNM
X=A+0.5*DEL
SUM=0.
DO J=1,IT
SUM=SUM+FUNC(X)
X=X+DEL

END DO
S=0.5*(S+(B-A)*SUM/TNM)
IT=IT*2
ENDIF
RETURN
END
SUBROUTINE QTRAP(FUNC,A,B,S)
SAVE
EXTENDED FUNC,A,B,S,OLDS
PARAMETER (EPS=1.E-9,JMAX=20)
OLDS=-1E-30
DO J=1,JMAX
CALL TRAPZD(FUNC,A,B,S,J)
IF(ABS(S-OLDS).LT.EPS*ABS(OLDS)) RETURN
OLDS=S
END DO
PAUSE 'TOO MANY STEPS'
RETURN
END
SUBROUTINE QMIDPNT(FUNC,A,B,S)
SAVE
EXTENDED FUNC,A,B,S,OLDS,OLDSS
PARAMETER (EPS=1.E-6,JMAX=10)
OLDS=-1E-30
DO J=1,JMAX
WRITE(*,*) 'S=',S
CALL MIDPNT(FUNC,A,B,S,J)
IF(ABS(S-OLDS).LE.EPS*ABS(OLDS)) RETURN
OLDSS=OLDS
OLDS=S
END DO
WRITE(*,*) 'TOO MANY STEPS S,OLDS=',OLDS,OLDSS
PAUSE
RETURN
END
SUBROUTINE QMIDINF(FUNC,A,B,S)
SAVE
EXTENDED FUNC,A,B,S,OLDS,OLDSS
PARAMETER (EPS=1.E-6,JMAX=10)
OLDS=-1E-30
DO J=1,JMAX
WRITE(*,*) 'S=',S
CALL MIDINF(FUNC,A,B,S,J)
IF(ABS(S-OLDS).LE.EPS*ABS(OLDS)) RETURN
OLDSS=OLDS
OLDS=S
END DO
WRITE(*,*) 'TOO MANY STEPS S,OLDS=',OLDS,OLDSS
PAUSE
RETURN
END

SUBROUTINE QROMO(FUNC,A,B,SS,CHOOSE)
IMPLICIT EXTENDED(A-H,O-Z)
PARAMETER (EPS=1.E-6,JMAX=14,JMAXP=JMAX+1,KM=4,K=KM+1)
DIMENSION S(JMAXP),H(JMAXP)
H(1)=1.
DO 11 J=1,JMAX
CALL CHOOSE(FUNC,A,B,S(J),J)
IF (J.GE.K) THEN
CALL POLINT(H(J-KM),S(J-KM),K,0.0,SS,DSS)
IF (ABS(DSS).LT.EPS*ABS(SS)) RETURN
ENDIF
S(J+1)=S(J)
H(J+l)=H(J)/9.
11 CONTINUE
PAUSE 'Too many steps.'
END
C
C
SUBROUTINE POLINT(XA,YA,N,X,Y,DY)
IMPLICIT EXTENDED(A-H,O-Z)
PARAMETER (NMAX=10)
DIMENSION XA(N),YA(N),C(NMAX),D(NMAX)
NS=1
DIF=ABS(X-XA(1))
DO 11 I=1,N
DIFT=ABS(X-XA(I))
IF(DIFT.LT.DIF) THEN
NS=I
DIF=DIFT
ENDIF
C(I)=YA(I)
D(I)=YA(I)
11 CONTINUE
Y=YA(NS)
NS=NS-1
DO 13 M=1,N-1
DO 12 I=1,N-M
HO=XA(I)-X
HP=XA(I+M)-X
W=C(I+1)-D(I)
DEN=HO-HP
IF(DEN.EQ.0.) PAUSE
DEN=W/DEN
D(I)=HP*DEN
C(I)=HO*DEN
12 CONTINUE
IF (2*NS.LT.N-M)THEN
DY=C(NS+1)
ELSE
DY=D(NS)
NS=NS-1
ENDIF
Y=Y+DY

13 CONTINUE
RETURN
END
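POLINT is Neville's algorithm: the C and D arrays hold the corrections between successive entries of the interpolation tableau, and DY on exit is the last correction, which serves as an error estimate. The same tableau in Python (polint here is an illustrative translation, not the FORTRAN routine):

```python
def polint(xa, ya, x):
    """Neville's algorithm: value of the polynomial through the points
    (xa, ya) at x, plus the last tableau correction as an error estimate."""
    n = len(xa)
    c = list(ya)
    d = list(ya)
    ns = min(range(n), key=lambda i: abs(x - xa[i]))  # closest abscissa
    y = ya[ns]
    dy = 0.0
    for m in range(1, n):
        for i in range(n - m):
            ho = xa[i] - x
            hp = xa[i + m] - x
            den = (c[i + 1] - d[i]) / (ho - hp)
            d[i] = hp * den
            c[i] = ho * den
        # walk the straighter path through the tableau
        if 2 * ns < n - m:
            dy = c[ns]
        else:
            ns -= 1
            dy = d[ns]
        y += dy
    return y, dy
```

With N points the result is exact for polynomials of degree N-1, which is how QROMO uses it: extrapolating the sequence of quadrature estimates to step size zero.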
SUBROUTINE MIDPNT(FUNC,A,B,S,N)
SAVE
EXTENDED FUNC,A,B,S,TNM,DEL,DDEL,X,SUM
IF (N.EQ.1) THEN
S=(B-A)*FUNC(0.5*(A+B))
IT=1
ELSE
TNM=IT
DEL=(B-A)/(3.*TNM)
DDEL=DEL+DEL
X=A+0.5*DEL
SUM=0.
DO J=1,IT
SUM=SUM+FUNC(X)
X=X+DDEL
SUM=SUM+FUNC(X)
X=X+DEL
END DO
S=(S+(B-A)*SUM/TNM)/3.
IT=IT*3
ENDIF
RETURN
END
SUBROUTINE MIDINF(FUNK,AA,BB,S,N)
SAVE
EXTENDED FUNK,A,B,S,TNM,DEL,DDEL,X,SUM,AA,BB
B=1./AA
A=1./BB
IF (N.EQ.1) THEN
X=0.5*(A+B)
S=(B-A)*FUNK(1.0/X)/X**2
IT=1
ELSE
TNM=IT
DEL=(B-A)/(3.*TNM)
DDEL=DEL+DEL
X=A+0.5*DEL
SUM=0.
DO J=1,IT
SUM=SUM+FUNK (1.0/X)/X**2
X=X+DDEL
SUM=SUM+FUNK( 1.0/X)/X* *2
X=X+DEL
END DO
S=(S+(B-A)*SUM/TNM)/3.
IT=IT*3
ENDIF
RETURN
END
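MIDINF handles an infinite upper limit by the change of variable x = 1/t, which maps (AA, BB) to the finite interval (1/BB, 1/AA) and multiplies the integrand by 1/t**2. A Python sketch of that substitution combined with the same triple-refinement midpoint rule (midinf and n_levels are illustrative; the FORTRAN version instead refines one level per call through SAVE variables):

```python
import math

def midinf(func, aa, bb, n_levels=10):
    """Open midpoint rule after the substitution x -> 1/t, as in MIDINF,
    so a very large upper limit bb maps to a finite interval near zero."""
    a, b = 1.0 / bb, 1.0 / aa            # transformed (finite) limits
    g = lambda t: func(1.0 / t) / t**2   # transformed integrand
    it = 1
    s = (b - a) * g(0.5 * (a + b))       # coarsest (one-point) estimate
    for _ in range(n_levels):
        tnm = it
        delta = (b - a) / (3.0 * tnm)    # triple the point count each pass
        ddel = delta + delta
        x = a + 0.5 * delta
        total = 0.0
        for _ in range(it):
            total += g(x); x += ddel
            total += g(x); x += delta
        s = (s + (b - a) * total / tnm) / 3.0
        it *= 3
    return s
```

Because the rule is open (no endpoint evaluations), the integrand is never sampled at t = 0, which is what makes the 1/t mapping safe.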

C
C
C
EXTENDED FUNCTION PROB(V)
IMPLICIT EXTENDED(A-H,O-Z)
EXTENDED EFILT(50),MEANVECT(50),M2
COMMON /EIGVAL/ EFILT,MEANVECT,Y
COMMON /GLOB/ ITAPS,TAPS,SMPL
C
C
PI=4.*QATAN(1.)
EXPSUM=0.
COSSUM=-V*Y
DPROD=1.
DO I=1,ITAPS
M2=MEANVECT(I)**2
DENOM=1.+4.*V**2*EFILT(I)**2
EXPSUM=EXPSUM+EFILT(I)**2*M2/DENOM
COSSUM=COSSUM+V*EFILT(I)*M2/DENOM+QATAN(2.*EFILT(I)*V)
DPROD=DPROD*DENOM
END DO
EXPSUM=-2.*EXPSUM*V*V
DPROD=QSQRT(DPROD)
PROB=QEXP(EXPSUM)*QCOS(COSSUM)/DPROD/2./PI
RETURN
END
C
C
C
EXTENDED FUNCTION PD(V)
IMPLICIT EXTENDED(A-H,O-Z)
EXTENDED EFILT(50),MEANVECT(50),M2
COMMON /EIGVAL/ EFILT,MEANVECT,Y
COMMON /GLOB/ ITAPS,TAPS,SMPL
C
C
PI=4.*QATAN(1.)
EXPSUM=0.
SINSUM=V*Y
DPROD=1.
DO I=1,ITAPS
M2=MEANVECT(I)**2
DENOM=1.+4.*V**2*EFILT(I)**2
EXPSUM=EXPSUM+EFILT(I)**2*M2/DENOM
SINSUM=SINSUM-V*EFILT(I)*M2/DENOM-QATAN(2.*EFILT(I)*V)
DPROD=DPROD*DENOM
END DO
EXPSUM=-2.*EXPSUM*V*V
DPROD=QSQRT(DPROD)*V
PD=QEXP(EXPSUM)*QSIN(SINSUM)/DPROD/PI
RETURN
END
SUBROUTINE COV(ITAPS,SMPL,PM2,ROOT,CON,IPOLES)
EXTENDED SMPL,PM2(50,50),ZCOEF(10,0:4),PCOEF(10,0:4)
XCOMPLEX ROOT(10),CON(10),TEMP

LL=0
DO J=1,ITAPS
T=QFLOAT(J-1)*SMPL
TEMP=QCMPLX(0.,0.)
DO K=1,IPOLES
TEMP=TEMP+CON(K)*CQEXP(T*ROOT(K))
END DO
PM2(1,J)=QREAL(TEMP)
PM2(J,1)=PM2(1,J)
END DO
DO I=2,ITAPS
DO J=I,ITAPS
PM2(I,J)=PM2(I-1,J-1)
PM2(J,I)=PM2(I,J)
END DO
END DO
RETURN
END
SUBROUTINE DETCON(KK,EVAL,CON)
EXTENDED EVAL(50),CON(50),PROD
DO K=1,KK
PROD=1.0
DO I=1,KK
IF(K.NE.I) THEN
PROD=PROD/(1.0-EVAL(I)/EVAL(K))
ENDIF
END DO
CON(K)=PROD
END DO
RETURN
END
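DETCON evaluates the partial-fraction constants for distinct eigenvalues: CON(K) is the product over I not equal to K of 1/(1-EVAL(I)/EVAL(K)). Interpreting the eigenvalues as the scale factors of independent exponential variates, two identities follow and serve as checks: the constants sum to one, and weighting them by the eigenvalues reproduces the sum of the eigenvalues (the mean). A Python sketch (detcon is an illustrative name):

```python
def detcon(evals):
    """Partial-fraction constants for distinct eigenvalues, as in DETCON:
    con[k] = product over i != k of 1 / (1 - evals[i] / evals[k])."""
    con = []
    for k, lk in enumerate(evals):
        prod = 1.0
        for i, li in enumerate(evals):
            if i != k:
                prod /= (1.0 - li / lk)
        con.append(prod)
    return con

print(detcon([3.0, 2.0, 1.0]))  # constants sum to 1
```

For example, with eigenvalues 3, 2, 1 the constants are 4.5, -4.0, 0.5: they sum to one, and 4.5*3 - 4.0*2 + 0.5*1 = 6, the eigenvalue sum.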
XCOMPLEX FUNCTION H(LAM)
EXTENDED LAM,ZCOEF(10,0:4),PCOEF(10,0:4)
XCOMPLEX W,PROD
COMMON /FILT/ ZCOEF,PCOEF,IPOLES
W=QCMPLX(0.0,LAM)
PROD=QCMPLX(1.0,0.0)
DO K=1,(IPOLES+1)/2
A=ZCOEF(K,2)
B=ZCOEF(K,1)
C=ZCOEF(K,0)
PROD=PROD*((A*W+B)*W+C)
A=PCOEF(K,2)
B=PCOEF(K,1)
C=PCOEF(K,0)
PROD=PROD/((A*W+B)*W+C)
END DO
H=PROD

RETURN
END
XCOMPLEX FUNCTION G(LAM)
EXTENDED LAM,TEMP,FREQ,SMPL
XCOMPLEX Z1,BSUM
COMMON /GLOB/ ITAPS,TAPS(50),SMPL
FREQ=SMPL*LAM
Z1=CQEXP(QCMPLX(0.0,FREQ))
BSUM=0.
DO I=ITAPS,2,-1
BSUM=(BSUM+TAPS(I))*Z1
END DO
BSUM=BSUM+TAPS(1)
G=BSUM
RETURN
END
SUBROUTINE PMATRIX(P,LAM,KK,T)
EXTENDED P(50,50),LAM(6),LSUM,LDIF,LK,LL,T
XCOMPLEX TEMP,LSUMT,LDIFT,G,H
DO K=1,KK
DO L=1,KK
LSUM=LAM(K)+LAM(L)
LDIF=LAM(K)-LAM(L)
LK=LAM(K)
LL=LAM(L)
LSUMT=QCMPLX(0.0,LSUM*T)
LDIFT=QCMPLX(0.0,LDIF*T)
TEMP=H(LK)*H(LL)*G(LSUM)*CQEXP(LSUMT)
TEMP=TEMP+H(LK)*DCONJG(H(LL))*G(LDIF)*CQEXP(LDIFT)
P(K,L)=0.5*DREAL(TEMP)
END DO
END DO
DO K=1,KK
DO L=KK+1,2*KK
L_K=L-KK
LSUM=LAM(K)+LAM(L_K)
LDIF=LAM(K)-LAM(L_K)
LK=LAM(K)
LL=LAM(L_K)
LSUMT=QCMPLX(0.0,LSUM*T)
LDIFT=QCMPLX(0.0,LDIF*T)
TEMP=H(LK)*H(LL)*G(LSUM)*CQEXP(LSUMT)
TEMP=TEMP-
H(LK) *DCONJG(H(LL)) *G(LDIF) *CQEXP(LDIFT)
P(K,L)=0.5*DIMAG(TEMP)
P(L,K)=P(K,L)
END DO
END DO
DO K=KK+1,2*KK

DO L=KK+1,2*KK
L_K=L-KK
K_K=K-KK
LSUM=LAM(K_K)+LAM(L_K)
LDIF=LAM(K_K)-LAM(L_K)
LK=LAM(K_K)
LL=LAM(L_K)
LSUMT=QCMPLX(0.0,LSUM*T)
LDIFT=QCMPLX(0.0,LDIF*T)
TEMP=H(LK)*H(LL)*G(LSUM)*CQEXP(LSUMT)
TEMP=TEMP-
H(LK)*DCONJG(H(LL))*G(LDIF)*CQEXP(LDIFT)
P(K,L)=-0.5*DREAL(TEMP)
END DO
END DO
RETURN
END
SUBROUTINE CDETCON(ROOT,CON)
XCOMPLEX ROOT(10),CON(10),POLE,PROD
EXTENDED ZCOEF(10,0:4),PCOEF(10,0:4),A,B,C,TEMP
COMMON /FILT/ ZCOEF,PCOEF,IPOLES
TEMP=1.0
DO K=1,(IPOLES+1)/2
A=PCOEF(K,2)
B=PCOEF(K,1)
C=PCOEF(K,0)
D=B**2-4.*A*C
IF(D.LE.0.) THEN
D=QSQRT(-D)
ROOT(2*K-1)=QCMPLX(-B,D)/2./A
ROOT(2*K)=QCMPLX(-B,-D)/2./A
TEMP=TEMP/A/A
ELSE
ROOT(2*K-1)=-C/B
TEMP=TEMP/B/B
ENDIF
END DO
DO I=1,IPOLES
POLE=ROOT(I)
PROD=1.0
DO K=1,(IPOLES+1)/2
A=ZCOEF(K,2)
B=ZCOEF(K,1)
C=ZCOEF(K,0)
PROD=PROD*((A*POLE+B)*POLE+C)
PROD=PROD*((A*POLE-B)*POLE+C)
END DO
DO K=1,IPOLES
IF(K.NE.I) THEN
PROD=PROD/(POLE-ROOT(K))

ENDIF
PROD=PROD/(-POLE-ROOT(K))
END DO
CON(I)=PROD*TEMP/2.
C DIVIDE BY TWO BECAUSE THE AUTOCORRELATION OF THE REAL
C AND IMAGINARY PARTS IS HALF OF THE COMPLEX PORTION
END DO
RETURN
END
C
SUBROUTINE NOISEBW(ROOT,CON,NBW)
XCOMPLEX ROOT(10),CON(10),POLE,PROD
EXTENDED ZCOEF(10,0:4),PCOEF(10,0:4),A,B,C,TEMP,DUM,NBW
COMMON /FILT/ ZCOEF,PCOEF,IPOLES
C
TEMP=1.0
DO K=1,(IPOLES+1)/2
A=PCOEF(K,2)
B=PCOEF(K,1)
C=PCOEF(K,0)
D=B**2-4.*A*C
IF(D.LE.0.) THEN
D=QSQRT(-D)
ROOT(2*K-1)=QCMPLX(-B,D)/2./A
ROOT(2*K)=QCMPLX(-B,-D)/2./A
TEMP=TEMP/A
ELSE
ROOT(2*K-1)=-C/B
TEMP=TEMP/B
ENDIF
END DO
DO I=1,IPOLES
POLE=ROOT(I)
PROD=QCMPLX(1.0,0.0)
DO K=1,(IPOLES+1)/2
A=ZCOEF(K,2)
B=ZCOEF(K,1)
C=ZCOEF(K,0)
PROD=PROD*((A*POLE+B)*POLE+C)
END DO
DO K=1,IPOLES
IF(K.NE.I) THEN
PROD=PROD/(POLE-ROOT(K))
ENDIF
END DO
CON(I)=PROD*TEMP
END DO
PROD=QCMPLX(0.0,0.0)
DO I=1,IPOLES
DO J=I,IPOLES
IF(I.EQ.J) THEN
PROD=PROD-CON(I)**2/2./ROOT(I)

ELSE
PROD=PROD-2.0*CON(I)*CON(J)/(ROOT(I)+ROOT(J))
ENDIF
END DO
END DO
NBW=QREAL(PROD)
RETURN
END
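The double sum at the end of NOISEBW is the closed-form integral of a squared impulse response h(t) = sum of CON(I)*exp(ROOT(I)*t) over t >= 0: diagonal terms contribute -CON(I)**2/(2*ROOT(I)) and cross terms -2*CON(I)*CON(J)/(ROOT(I)+ROOT(J)). A Python sketch of just that final sum (noisebw_from_residues is an illustrative name, and the residues are supplied directly rather than derived from filter coefficients):

```python
def noisebw_from_residues(roots, cons):
    """Integral of h(t)**2 over t >= 0 for h(t) = sum c_i * exp(r_i * t)
    with Re(r_i) < 0, using the same residue double sum as NOISEBW."""
    total = 0.0
    n = len(roots)
    for i in range(n):
        for j in range(i, n):
            if i == j:
                total -= cons[i] ** 2 / (2.0 * roots[i])
            else:
                total -= 2.0 * cons[i] * cons[j] / (roots[i] + roots[j])
    return total

# h(t) = exp(-t): integral of h**2 is 1/2
print(noisebw_from_residues([-1.0], [1.0]))
```

Each term is simply the antiderivative of exp((r_i+r_j)*t) evaluated between 0 and infinity, which is why stable (left-half-plane) roots are required.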
SUBROUTINE GETRCON(ROOT,CON,ZCOEF,PCOEF,IPOLES)
XCOMPLEX ROOT(10),CON(10),POLE,PROD
EXTENDED ZCOEF(10,0:4),PCOEF(10,0:4),A,B,C,TEMP,DUM,NBW
C
TEMP=1.0
DO K=1,(IPOLES+1)/2
A=PCOEF(K,2)
B=PCOEF(K,1)
C=PCOEF(K,0)
D=B**2-4.*A*C
IF(D.LE.0.) THEN
D=QSQRT(-D)
ROOT(2*K-1)=QCMPLX(-B,D)/2./A
ROOT(2*K)=QCMPLX(-B,-D)/2./A
TEMP=TEMP/A
ELSE
ROOT(2*K-1)=-C/B
TEMP=TEMP/B
ENDIF
END DO
DO I=1,IPOLES
POLE=ROOT(I)
PROD=QCMPLX(1.0,0.0)
DO K=1,(IPOLES+1)/2
A=ZCOEF(K,2)
B=ZCOEF(K,1)
C=ZCOEF(K,0)
PROD=PROD*((A*POLE+B)*POLE+C)
END DO
DO K=1,IPOLES
IF(K.NE.I) THEN
PROD=PROD/(POLE-ROOT(K))
ENDIF
END DO
CON(I)=PROD*TEMP
END DO
RETURN
END
C
SUBROUTINE GETMEAN(MEANVECT,ITAPS,ROOT,CON,IPOLES,PW,SMPL)
XCOMPLEX ROOT(10),CON(10),POLE,PROD,TEMP
EXTENDED MEANVECT(50),PW,SMPL,TS,TB,TU,BIG,TIME
IF(PW.EQ.0.) THEN

DO I=1,ITAPS
MEANVECT(I)= 1.0
END DO
ELSE
TS=0.
BIG=0.
TIME=0.
DO I=1,100
IF(TS.LE.PW) THEN
TEMP=0.
DO J=1,IPOLES
TEMP=TEMP+CON(J)/ROOT(J)*(CQEXP(ROOT(J)*TS)-1.)
END DO
ELSE
TEMP=0.
DO J=1,IPOLES
TEMP=TEMP+CON(J)/ROOT(J)*CQEXP(ROOT(J)*TS)*
(1.-CQEXP(-ROOT(J)*PW))
END DO
ENDIF
IF(QREAL(TEMP).GT.BIG) THEN
BIG=QREAL(TEMP)
TIME=TS
ENDIF
TS=TS+SMPL/8.
END DO
TS=TIME
MID=(ITAPS+1)/2
IF(MID.EQ.ITAPS/2) THEN
TB=TS-SMPL/2.
DO I=MID,1,-1
TEMP=0.
DO J=1,IPOLES
TEMP=TEMP+CON(J)/ROOT(J)*(CQEXP(ROOT(J)*TB)-1.)
END DO
MEANVECT(I)=QREAL(TEMP)
TB=TB-SMPL
IF(TB.LE.0.) TB=0.
END DO
TU=TS+SMPL/2.
DO I=MID+1,ITAPS
TEMP=0.
DO J=1,IPOLES
TEMP=TEMP+CON(J)/ROOT(J)*CQEXP(ROOT(J)*TU)*
(1.-CQEXP(-ROOT(J)*PW))
END DO
MEANVECT(I)=QREAL(TEMP)
TU=TU+SMPL
END DO
ELSE
TB=TS
DO I=MID,1,-1
TEMP=0.
IF(TB.GT.PW) THEN

DO J=1,IPOLES
TEMP=TEMP+CON(J)/ROOT(J)*CQEXP(ROOT(J)*TB)*
(1.-CQEXP(-ROOT(J)*PW))
END DO
ELSE
DO J=1,IPOLES
TEMP=TEMP+CON(J)/ROOT(J)*(CQEXP(ROOT(J)*TB)-1.)
END DO
ENDIF
MEANVECT(I)=QREAL(TEMP)
TB=TB-SMPL
IF(TB.LE.0.) TB=0.
END DO
TU=TS+SMPL
DO I=MID+1,ITAPS
TEMP=0.
IF(TU.GT.PW) THEN
DO J=1,IPOLES
TEMP=TEMP+CON(J)/ROOT(J)*CQEXP(ROOT(J)*TU)*
(1.-CQEXP(-ROOT(J)*PW))
END DO
ELSE
DO J=1,IPOLES
TEMP=TEMP+CON(J)/ROOT(J)*(CQEXP(ROOT(J)*TU)-1.)
END DO
ENDIF
MEANVECT(I)=QREAL(TEMP)
TU=TU+SMPL
END DO
ENDIF
ENDIF
RETURN
END
PROGRAM 2
PROGRAM TEST
IMPLICIT EXTENDED(A-H,O-Z)
EXTENDED A,B,C,D,E,F,X,S1,S2,S,CM1(50),CM2(50),EVAL(50)
EXTENDED EFILT(50),MEANVECT(50),M2,NBW,NBWRATIO,
NBWFIR,MEANTEMP(50)
EXTENDED LAM,ZCOEF(10,0:4),PCOEF(10,0:4)
REAL*4 TAPS(50)
XCOMPLEX FILTIF,FILTVID
COMMON /EIGVAL/ EFILT,MEANVECT,Y
COMMON /GLOB/ ITAPS,TAPS,SMPL
COMMON /FILT/ ZCOEF,PCOEF,IPOLES
A=0.
B=100.
C=1.D100
OPEN(UNIT=2,FILE='EIGDATA.DAT')

READ(2,*) KK
DO I=1,KK
READ(2,*) CM1(I),EVAL(I)
WRITE(6,*) CM1(I),EVAL(I)
IF(EVAL(I).GT.0.) KKPOS=I
END DO
READ(2,*) ITAPS,SMPL
WRITE(6,*) ITAPS,SMPL
DO I=1,ITAPS
READ(2,*) CM2(I),EFILT(I),MEANVECT(I),TAPS(I)
WRITE(6,*) CM2(I),EFILT(I),MEANVECT(I),TAPS(I)
IF(EFILT(I).GT.0.) ITAPSPOS=I
END DO
READ(2,*) IPOLES
WRITE(6,*) IPOLES
DO I=1,(IPOLES+1)/2
READ(2,*) ZCOEF(I,0),ZCOEF(I,1),ZCOEF(I,2),PCOEF(I,0),
PCOEF(I,1),PCOEF(I,2)
WRITE(6,*) ZCOEF(I,0),ZCOEF(I,1),ZCOEF(I,2),PCOEF(I,0),
PCOEF(I,1),PCOEF(I,2)
END DO
CLOSE(2)
OPEN(UNIT=2,FILE='FILTRES.DAT',STATUS='NEW')
DO 1=1,100
W=QFLOAT(I)/50
FILTVID=G(W)
FILTIF=H(W)
TEMP=20*LOG10(CQABS(FILTIF))
TEMP1=20*LOG10(CQABS(FILTVID))
IF(TEMP1.LT.-30.) TEMP1=-30.
IF(TEMP.LT.-30.) TEMP=-30.
WRITE(2,100) W,CHAR(9),TEMP,CHAR(9),TEMP1
WRITE(6,100) W,CHAR(9),TEMP,CHAR(9),TEMP1
END DO
CLOSE(2)
WRITE(*,*) 'INPUT 1 FOR DENSITY 2 FOR ROC 3 FOR PD-VS-PFA'
READ(*,*) IDEC
DO I=1,ITAPS
MEANTEMP(I)=MEANVECT(I)
END DO
IF(IDEC.EQ.1) THEN
OPEN(UNIT=2,FILE='DENS.DAT',STATUS='NEW')
WRITE(*,*) 'INPUT 1 TO COMPUTE PD '