Quantization factors influencing the reconstruction of autocorrelation functions

Material Information

Title:
Quantization factors influencing the reconstruction of autocorrelation functions
Creator:
Dimas, Nicholas
Publication Date:
Language:
English
Physical Description:
vi, 74 leaves : illustrations ; 29 cm

Subjects

Subjects / Keywords:
Linear systems -- Evaluation ( lcsh )
Adaptive signal processing ( lcsh )
Signal processing -- Digital techniques ( lcsh )
Analog electronic systems ( lcsh )
Genre:
bibliography ( marcgt )
theses ( marcgt )
non-fiction ( marcgt )

Notes

Bibliography:
Includes bibliographical references.
General Note:
Submitted in partial fulfillment of the requirements for the degree, Master of Science, Department of Electrical Engineering.
Statement of Responsibility:
by Nicholas Dimas.

Record Information

Source Institution:
University of Colorado Denver
Holding Location:
Auraria Library
Rights Management:
All applicable rights reserved by the source institution and holding location.
Resource Identifier:
26082217 ( OCLC )
ocm26082217
Classification:
LD1190.E54 1991m .D55 ( lcc )



Full Text


QUANTIZATION FACTORS INFLUENCING THE RECONSTRUCTION OF AUTOCORRELATION FUNCTIONS

by
Nicholas Dimas
B.S., New York Institute of Technology, 1987
M.S., University of Colorado at Denver, 1991

A thesis submitted to the Faculty of the Graduate School of the University of Colorado in partial fulfillment of the requirements for the degree of Master of Science

Department of Electrical Engineering
1991


This thesis for the Master of Science degree by Nicholas Dimas has been approved for the Department of Electrical Engineering by

Douglas Ross
Joe Thomas
Marvin Anderson

Date


Contents

1 Introduction
  1.1 Problem and Approach
2 Correlation Functions
  2.1 Analog to Digital Conversion
  2.2 Digital Correlation
  2.3 Comparison of Analog and Digital Correlation Functions
3 Numerical Calculations
  3.1 Voltage Probabilities
  3.2 Joint Probabilities
  3.3 Calculation Errors
4 Quantized Data and Correlation Functions
  4.1 Calculation of Scaling Factor
  4.2 RMS Error


5 Results of Calculations
  5.1 Error Vs. Bit Level
  5.2 Error Vs. Signal Variance
  5.3 Error Vs. Sample Rate
  5.4 Comparison of Autocorrelation Functions
6 Conclusion
Bibliography
Appendix
  A Table of Gauss Quadrature Weights and Zeros
  B Computer Programs


List of Figures

2.1 Two bit parallel analog to digital converter
2.2 Scaled and unscaled versions of digital autocorrelation
5.1 RMS difference vs. bits, for f_s = 3162.28, 10⁴, 31622.8 Hz
5.2 RMS difference vs. bits, for f_s = 3162.28, 10⁴, 31622.8 Hz
5.3 RMS difference vs. bits, for f_s = 3162.28, 10⁴, 31622.8 Hz
5.4 RMS difference vs. bits, for f_s = 3162.28, 10⁴, 31622.8 Hz
5.5 RMS difference vs. σ_x, for bits 2 through 8
5.6 RMS difference vs. σ_x, for bits 2 through 8
5.7 RMS difference vs. σ_x, bits = 4, f_s = 31622.8 Hz
5.8 RMS difference vs. sample rate f_s, for bits 2 through 8
5.9 RMS difference vs. sample rate f_s, for bits 2 through 8
5.10 Continuous and digital autocorrelation functions
5.11 Continuous and digital autocorrelation functions
5.12 Continuous and digital autocorrelation functions
5.13 Continuous and digital autocorrelation functions


5.14 Continuous and digital autocorrelation functions
5.15 Continuous and digital autocorrelation functions


Chapter 1

Introduction

Correlation functions play an important role in the evaluation of linear systems with random inputs. They are used, for example, to find the power spectral density via the Wiener-Khinchin theorem [1]. Correlation functions also play an important part in adaptive signal processing. Specifically, when designing a Wiener filter, the autocorrelation of the tap inputs to the transversal FIR filter is needed, as well as the crosscorrelation between the desired response and the input [2]. Similar quantities are needed when using a forward or backward linear predictor. When using the Yule-Walker method for power spectrum estimation, the coefficients of the all-pole model are directly related to the autocorrelation of the input data [3]. In many cases, the input to the digital system must first be digitized by an analog to digital converter. This type of conversion introduces quantization effects. How well these correlation functions stay true after going


through the A/D converter is the topic of this thesis. In other words, under what circumstances is the digital correlation function an accurate representation of the analog correlation function?

1.1 Problem and Approach

Consider a parallel analog to digital converter with sample rate f_s and quantization level L bits. Given an analog input zero mean random process x_c(t) with autocorrelation R_xx(t), the problem is to investigate the autocorrelation R_x̂x̂[n] of the quantized output x̂[n]. That is, what is the distortion, caused by the analog to digital converter, between the continuous autocorrelation function and the autocorrelation function of the quantized output signal, as a function of the sample rate f_s and the bit quantization level L? The amplitude of the analog input x_c(t) is assumed to be normal with variance σ_x² and autocorrelation R_xx(t). The density and joint density functions will be used to calculate the mean, mean square, and autocorrelation of the quantized output. Let the quantized output of the analog to digital converter be in the range 0 ≤ x̂[n] ≤ 2^L − 1. Because of the way a parallel A/D converter works, for a given bit level L there will be 2^L − 1 reference voltages at the input of the A/D. In evaluating the various output level probabilities, the probability that x̂[n] = k is the same as the probability that the input voltage to the A/D is between V_k < x_c(t) ≤ V_{k+1}, where V_k is the kth reference voltage. Therefore the mean of the quantized output is


E{x̂} = Σ_k k · P{V_k < x_c(t) ≤ V_{k+1}}

The same is true when evaluating the joint probabilities for the expression E{x̂[n] x̂[0]}. In this case, the probability that x̂[n] = k and that x̂[0] = l is the same as the probability that V_k < x_c(nT_s) ≤ V_{k+1} and V_l < x_c(0) ≤ V_{l+1}. Since the joint density cannot be written as a product of the marginal densities, the random variables x and y are not independent. In order to evaluate the joint probabilities, the integration must be done numerically. The method used is Gauss quadrature [4]. When evaluating the joint probabilities, the two integrals must be separated. This is done using the method of completing the square. The second integral is computed by changing it into Q-function form and then using a method found in [4]. After transforming the joint probability into one integral and a Q-function, the integration is ready to be performed. By using a variable transformation on the limits of integration, the integral can be evaluated using a ninety-six point Legendre polynomial whose zeros and weights can be found in [4]. Once the integrations have been done and the autocorrelation of the quantized output is found, a special scale factor is needed to compare the continuous version with the digital one. The comparison between the two correlation functions is done using the method of least squares. This involves taking the sum of the squares of the differences between the digital and the analog autocorrelation. The error that will be used to compare the two sets of data will be root mean


square (rms). This error will be a function of the quantizer's sample rate, bit level, and quantization range. An explanation of the error graphs is given in chapter five, along with a comparison of the digital and analog autocorrelation curves. Finally, a summary is made of the factors contributing to the improvement of the digital autocorrelation. Appendix A contains a list of the weights and zeros that the Gauss quadrature method requires. Appendix B contains the computer programs that were used to calculate the various probabilities.


Chapter 2

Correlation Functions

A correlation function, as shown below, is the mathematical expectation of a signal at two different time instants [5]:

R_xx(t) = E{x_c(t₁) x_c(t₀)} = ∫∫ x₁ x₀ f_xx(x₁, x₀) dx₁ dx₀    (2.1)

An autocorrelation, R_xx(t), is a measure of the similarity of a signal with itself, shifted in time. The argument t in R_xx(t) is called the time lag. The autocorrelation will have its maximum value at t = 0. In other words, the signal x_c(t) is most similar to itself when there is no time lag. One of the properties of correlation functions is that they contain information about the signal. For example, the correlation function evaluated at t = 0 is termed the average power of the signal, or mean square value. Also, if the process x_c(t) is not zero mean, then the autocorrelation function contains a


constant term whose square root is equal to the mean or DC value of the signal. The question arises: how well can the continuous correlation function, R_xx(t), be reconstructed based on the sampled data? This will depend on the sampling frequency f_s and the bit level L of the quantizer.

2.1 Analog to Digital Conversion

This thesis considers the parallel, or flash, type of analog to digital converter shown in figure 2.1. Flash converters are used in a variety of applications such as video displays, radar signal processing, and high speed instrumentation. The operation of the parallel converter is as follows. The resistor reference network can be thought of as a ruler. The amplitude of the input voltage x_c(t) is compared with the three reference voltages. If the input voltage is greater than a reference voltage, then the output of that particular operational amplifier goes high. By having V_rb = −V_rt, bipolar conversion is possible. Using superposition, the different reference voltage levels V_1 to V_{2^L−1} can be calculated. From the figure it is clear that

V_i = V_rb + (2i − 1) / (2(2^L − 1)) · (V_rt − V_rb),    i = 1, ..., 2^L − 1

In this particular case, with L = 2 bits, V_rt = +1, and V_rb = −1, the reference voltages are V_1 = −2/3, V_2 = 0, and V_3 = +2/3. Therefore, if the input voltage is 0.5 volts, then operational amplifiers 1 and 2 will be high, and 3 will be low.
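The reference-voltage formula above can be checked with a short sketch (Python is assumed here; the function names and the thermometer-style comparison count are illustrative, not taken from the thesis programs):

```python
def reference_voltages(L, v_rt=1.0, v_rb=-1.0):
    """Ladder voltages V_i = V_rb + (2i - 1)/(2(2^L - 1)) * (V_rt - V_rb),
    for i = 1, ..., 2^L - 1, as in the formula above."""
    n = 2 ** L - 1
    return [v_rb + (2 * i - 1) / (2 * n) * (v_rt - v_rb) for i in range(1, n + 1)]


def quantize(x, refs):
    """Flash-converter output code: the count of reference levels the input exceeds."""
    return sum(x > v for v in refs)
```

For L = 2, V_rt = +1, and V_rb = −1 this reproduces the levels −2/3, 0, +2/3 quoted above, and an input of 0.5 V makes comparators 1 and 2 high and 3 low, giving output code 2.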


Figure 2.1: Two bit parallel analog to digital converter


The digital encoding network then translates its input into a 2 bit output ranging from 0 to 2^L − 1. Other output ranges are also possible. For example, a two's complement encoding can be used to create a digital output that is bipolar. Such an encoding provides an output process that is already zero mean.

2.2 Digital Correlation

As was mentioned at the beginning of this chapter, an autocorrelation is a measure of the similarity of a signal, x(t), at two different times. For a digital signal, the above definition can be put another way. In this case an autocorrelation can be thought of as the influence that one sample of a quantized signal, x̂[n], at time n₁ has on another sample at time n₂. Whereas in the former case the amplitude of x(t) is continuous, the amplitude of x̂[n] is discrete. The same is true for the analog and digital density functions, f_xx(x₁, x₀) and f_xy(x_k, y_l), respectively. Since the random variable x is discrete, its density function can be written as a sum of delta functions as in 2.2 [1]:

f_x(x) = Σ_k P(X = x_k) δ(x − x_k)    (2.2)


Here P is the probability that the random variable X takes on the value x_k, and the delta function is as described below:

δ(x − x_k) = 1 if x = x_k, 0 otherwise

When calculating the autocorrelation, the joint density function is required. Therefore, for the discrete case, equation 2.2 becomes 2.3,

f_xy(x, y) = Σ_k Σ_l P(X = x_k, Y = y_l) δ(x − x_k, y − y_l)    (2.3)

where

δ(x − x_k, y − y_l) = 1 if x = x_k and y = y_l, 0 otherwise

In this case, P is the probability that the random variable X takes on a value equal to x_k and that the random variable Y takes on a value equal to y_l. The next step in transforming equation 2.1 into something that can be used for the digital autocorrelation is to change the double integral. Since the allowed values that the random variables can take on are discrete, the double integral signs in equation 2.1 can be changed into summations. The output values that the A/D converter takes on, 0 to 2^L − 1, are in increments of one. Therefore the incremental values dx₁ and dx₀ are equal to one. The result so far is shown below


in equation 2.4.

R_x̂x̂[n] = Σ_{k=0}^{2^L−1} Σ_{l=0}^{2^L−1} x_k y_l f_xy(x_k, y_l)    (2.4)

It was stated in chapter one that, because of the way the output of the A/D is set up, the probabilities in 2.3 evaluate to

P(X = x_k, Y = y_l) = P(V_k < x_c(nT_s) ≤ V_{k+1}, V_l < x_c(0) ≤ V_{l+1})

Using the above equation and changing the variables in 2.4 from x_k and y_l to k and l respectively, the equation for evaluating the digital autocorrelation is 2.5,

R_x̂x̂[n] = Σ_{k=0}^{2^L−1} Σ_{l=0}^{2^L−1} k · l · P(x̂[n] = k, x̂[0] = l)    (2.5)

where the random variables X and Y are evaluated at x̂[n] and x̂[0] respectively.
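Equation 2.5 is simply a double sum over a joint-probability table. A minimal sketch (Python assumed, with the joint probabilities for one lag supplied as a matrix P[k][l]; the function name is illustrative):

```python
def digital_autocorrelation(P):
    """R[n] = sum_k sum_l k * l * P(xhat[n] = k, xhat[0] = l), as in eq. 2.5,
    where P is a (2^L) x (2^L) joint-probability matrix for lag n."""
    m = len(P)
    return sum(k * l * P[k][l] for k in range(m) for l in range(m))
```

At lags long enough for the two samples to be independent, P[k][l] factors into the marginals and the sum reduces to E{x̂}²; at zero lag, with all mass on the diagonal, it reduces to the mean square E{x̂²}.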


2.3 Comparison of Analog and Digital Correlation Functions

Consider the digital autocorrelation given by 2.5. Let m = 2^L − 1, and let the joint probability P_mm = 1 with all other joint probabilities zero. Then the maximum value that 2.5 will have at n = 0 is m². For example, if L = 2 then m² = 9, if L = 3 then m² = 49, and so on. The maximum value of the input autocorrelation, which occurs at t = 0, is σ_x². If only the shape of the digital autocorrelation were needed, the comparison could be done at this point. But since an autocorrelation function contains information about the original random signal, such as its mean and mean square, it is important to scale down the digital version in order to extract this information. Only then can the quantization factors that influence the digital autocorrelation be compared. Figure 2.2 shows a comparison between the scaled and unscaled versions of the digital autocorrelation, for a specific case.


Figure 2.2: Scaled and unscaled versions of digital autocorrelation (bits = 2, f_s = 5 kHz)


Chapter 3

Numerical Calculations

Before numerically calculating the individual probabilities and the joint probabilities, the two equations must be put in a format that can be used in a program. The integrations are done using the method of Gauss quadrature with an n = 96 point Legendre polynomial. The zeros and weights were obtained from [4]. Let a and b denote the lower and upper limits, respectively, when calculating the probability P_k that the analog input voltage is between reference levels V_k and V_{k+1}. If a and b are finite, then a Gauss transformation is used. When the upper limit is +∞ or the lower limit is −∞, the integral is broken into two parts. For example, if b = ∞ then ∫_a^∞ = ∫_0^∞ − ∫_0^a. Since the density function f_x(x) has a Gaussian form, the first part of the integral evaluates to one half. Likewise for the case when a = −∞ and b is finite. The same is true for the joint probabilities P_kl. The integration is written


as ∫_a^b ∫_c^d f dy dx. When evaluating this integral numerically, the same considerations must be taken, but here there are six valid combinations of the limits a, b, c, and d. The inner integral is put into Q-function form and is evaluated by methods found in [4]. As far as the outer integral is concerned, a different method is used to evaluate the special cases. Since the number of bits used will always be greater than or equal to 2, there will never be a situation where the limits of integration run from negative infinity to zero or from zero to infinity. Therefore, in the special case when one of the limits is infinite, an inverse transformation such as x = 1/γ is done first. The result is limits that are finite and on which the Gauss transformation can be used.

3.1 Voltage Probabilities

Given that the input signal has the Gaussian density shown in equation 3.1 below,

f_x(x) = 1/(√(2π) σ_x) · exp{ −x²/(2σ_x²) }    (3.1)

the probability that the signal will fall between levels V_k and V_{k+1} in figure 2.1 can be found using equation 3.2 [1]:

P_k = P(V_k < x ≤ V_{k+1}) = ∫_{V_k}^{V_{k+1}} f_x(x) dx    (3.2)

Let:

α = x / σ_x    (3.3)


dα = dx / σ_x    (3.4)

a₁ = V_k / σ_x    (3.5)

b₁ = V_{k+1} / σ_x    (3.6)

P_k = 1/√(2π) ∫_{a₁}^{b₁} exp{ −α²/2 } dα    (3.7)

Using the method of Gauss quadrature, equation 3.7 can be solved numerically by first performing the following transformation on it.

Let:

α = (b₁ − a₁)/2 · β + (b₁ + a₁)/2    (3.8)

dα = (b₁ − a₁)/2 · dβ    (3.9)

Substituting 3.8 and 3.9 into 3.7, the following equation is obtained:

P_k = (b₁ − a₁)/(2√(2π)) ∫_{−1}^{+1} exp{ −α(β)²/2 } dβ    (3.10)

Equation 3.10 can now be solved numerically by using the zeros x_i and the weights w_i required by Gauss quadrature. This turns 3.10 into the following form:

P_k ≈ (b₁ − a₁)/(2√(2π)) Σ_{i=1}^{n} w_i exp{ −α(x_i)²/2 }    (3.11)
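Equation 3.11 can be sketched as follows (Python assumed; numpy's `leggauss` stands in for the thesis's tabulated 96-point zeros and weights, and the closed-form check against the error function is an addition, not part of the original programs):

```python
import math

import numpy as np


def voltage_probability(vk, vk1, sigma, n=96):
    """P_k of eq. 3.11: map [a1, b1] = [Vk/sigma, Vk+1/sigma] onto [-1, 1]
    and apply n-point Gauss-Legendre quadrature to the standard Gaussian."""
    a1, b1 = vk / sigma, vk1 / sigma
    x, w = np.polynomial.legendre.leggauss(n)      # zeros x_i and weights w_i
    alpha = 0.5 * (b1 - a1) * x + 0.5 * (b1 + a1)  # the substitution of eq. 3.8
    return (b1 - a1) / (2 * math.sqrt(2 * math.pi)) * float(np.sum(w * np.exp(-alpha ** 2 / 2)))
```

Checking against the Gaussian CDF difference, 0.5·(erf(b₁/√2) − erf(a₁/√2)), the 96-point rule agrees to roughly machine precision for cell widths of interest here.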


The mean and mean square value of the quantized output can now be found using the above value of P_k, as shown below in 3.12 and 3.13:

E{x̂} = Σ_{k=0}^{2^L−1} k P_k    (3.12)

E{x̂²} = Σ_{k=0}^{2^L−1} k² P_k    (3.13)

3.2 Joint Probabilities

When calculating the joint probabilities, it is necessary to use equation 3.14:

f_xy(x, y) = 1/(2π √(R²_xx(0) − R²_xx(t))) · exp{ −[R_xx(0)x² − 2R_xx(t)xy + R_xx(0)y²] / (2[R²_xx(0) − R²_xx(t)]) }    (3.14)

The joint density function 3.14 can be simplified by making substitutions A through H, shown below. The result is shown in equation 3.16.

A = R²_xx(0) − R²_xx(t),  B = 2π√A,  D = R_xx(0),  E = R_xx(t)


Using the method of completing the square in the exponential term of equation 3.16 allows equation 3.15 to be written as shown in 3.17. Integral #2 in 3.17 will be evaluated as a Q-function. A Q-function [1] is defined on the area under a Gaussian density with zero mean and variance one, as shown below in 3.18:

Q(z) = 1/√(2π) ∫_z^∞ exp{ −u²/2 } du    (3.18)

where Q(−z) = 1 − Q(z), Q(∞) = 0, and Q(−∞) = 1. Making the following substitutions into 3.17 and breaking up the integral as was mentioned earlier, equation 3.19 is obtained:

u = √(2F) (y − H x),  V_l' = √(2F) (V_l − H x)
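In practice the Q-function need not be tabulated: it can be expressed through the complementary error function. A small sketch (Python assumed; the thesis instead uses the approximation from [4]):

```python
import math


def qfunc(z):
    """Q(z) = P(standard normal > z), computed as 0.5 * erfc(z / sqrt(2))."""
    return 0.5 * math.erfc(z / math.sqrt(2))
```

The identities quoted above, Q(−z) = 1 − Q(z), Q(∞) = 0, and Q(−∞) = 1, follow directly from the symmetry of erfc.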


Before the above equation can be put into a form on which the Gauss quadrature method can be used, a transformation similar to 3.8 must be done first. For the case where the limits V_k and V_{k+1} are finite, the transformation looks like 3.20 and 3.21:

x = (V_{k+1} − V_k)/2 · β + (V_{k+1} + V_k)/2    (3.20)

dx = (V_{k+1} − V_k)/2 · dβ    (3.21)

Substituting the above change of variables into equation 3.19 changes the limits of integration to −1 and +1. The integration then becomes the summation shown in 3.22. Note that V_l' and V_{l+1}' are also functions of x. Using the method of Gauss quadrature, there are both positive and negative zeros. Therefore the upper limit n in the summations in 3.11 and 3.22 goes to n = 96; the first i = 1 to 48 are for positive x_i, and the last i = 49 to 96 are for negative x_i.


3.3 Calculation Errors

There will always be errors when evaluating integrals numerically. In the programs that are used, the pertinent variables are typecast as double precision, so that roughly sixteen significant digits are carried instead of the usual eight. When using the method of Gauss quadrature to evaluate the integrals P_k and P_kl, a Legendre polynomial with 96 zeros was used. In evaluating the inner integral of P_kl by transforming the integrand into Q-function form, the error according to [4] is less than 7.5 × 10⁻⁸. The table below shows a comparison of the calculations when evaluating the integrals by Gauss quadrature and when using the Q-function method, for f_s = 3162.28 Hz and L = 4 bits. The calculation errors were tested by summing the voltage probabilities to make sure they summed to one. Another method used to test the calculations was based on the density functions f_x(x) and f_xy(x, y). If the value of R_xx(t) in the joint density function is allowed to approach zero, then the joint density can be written as a product of the marginal densities. Therefore, if a probability value, say P_1, is squared, the result should approach the joint probability value P_11.


 k    P_k, Gauss quadrature    P_k, Q-function
 0    0.0025549232             0.0025551950
 1    0.0056424093             0.0056423421
 2    0.0145526108             0.0145525455
 3    0.0320491790             0.0320492485
 4    0.0602704040             0.0602704777
 5    0.0967857747             0.0967856570
 6    0.1327229071             0.1327229692
 7    0.1554217918             0.1554215650
 8    0.1554217918             0.1554215650
 9    0.1327229071             0.1327229692
10    0.0967857747             0.0967856570
11    0.0602704040             0.0602704777
12    0.0320491790             0.0320492485
13    0.0145526108             0.0145525455
14    0.0056424093             0.0056423421
15    0.0025549232             0.0025551950
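The sum-to-one check described above can be reproduced in closed form with the Gaussian CDF (Python assumed; σ_x = 1/3 and a ±1 quantization range are assumptions made here, chosen because they reproduce the end cells of the table, which does not state these parameters):

```python
import math


def phi(x, sigma):
    """Zero-mean Gaussian CDF with standard deviation sigma."""
    return 0.5 * (1 + math.erf(x / (sigma * math.sqrt(2))))


def level_probabilities(L, sigma, v_rb=-1.0, v_rt=1.0):
    """P_k for every output code k of an L-bit flash converter; the
    outermost cells absorb the Gaussian tails beyond the reference ladder."""
    n = 2 ** L - 1
    refs = [v_rb + (2 * i - 1) / (2 * n) * (v_rt - v_rb) for i in range(1, n + 1)]
    edges = [-math.inf] + refs + [math.inf]
    return [phi(edges[k + 1], sigma) - phi(edges[k], sigma) for k in range(2 ** L)]
```

With these assumed parameters the probabilities sum to one, are symmetric about mid-scale, and the end cell P_0 lands near 0.002555, close to the table's end-cell values.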


Chapter 4

Quantized Data and Correlation Functions

In order to compare the continuous correlation function with the digital correlation function, the method of least squares will be used. In the usual least squares approximation, the objective is to minimize an error of the form ψ = Σ_i (y_i,true − y_i,obs)² [2]. Given that the output range of the quantizer goes from 0 ≤ x̂[n] ≤ 2^L − 1, the output scale for different bit levels will change. For example, the output scale of a 6 bit quantizer is larger than that of a 2 bit quantizer. In this case, since only a comparison is being made, the value of ψ will be minimized if y_true and y_obs are on the same scale. For this reason, a scale factor is required so that the different quantizer levels can be compared with the continuous correlation function. Therefore the scale factor that is calculated in


the next section is really a compression factor.

4.1 Calculation of Scaling Factor

Let the quantized output signal be designated by x̂[n]. Assuming an L bit quantizer, the output range will go from 0 ≤ x̂[n] ≤ 2^L − 1. Since the original signal is zero mean, it is best to consider an output process v[n] that is also zero mean, with scale factor Γ.

Let:

v[n] = Γ (x̂[n] − E{x̂[n]})

Taking the expected value of v[n] at times n = n and n = 0, the autocorrelation R_v[n] is computed. After some manipulation, R_v[n] turns out to be 4.1,

R_v[n] = Γ² (R_x̂x̂[n] − E²{x̂[n]})    (4.1)

where

R_x̂x̂[n] = E{x̂[n] x̂[0]} = Σ_{k=0}^{2^L−1} Σ_{l=0}^{2^L−1} k l P_kl    (4.2)

The error between the sampled analog autocorrelation function, R_xx(nT_s), and the digital one, R_v[n], using the sum of the squares of the differences as a criterion,


as shown below in equation 4.3, is

ψ = Σ_{n=0}^{N} { R_xx(nT_s) − R_v[n] }²    (4.3)

where

R_xx(nT_s) = σ_x² exp( −|nT_s| / t_c )

The upper limit N in summation 4.3 is chosen so that, for a specific sample rate f_s, the total number of sample points always goes out to five time constants, as shown below:

N = 5 t_c f_s

The error between the analog and digital correlation functions is minimized at the value of Γ where the first derivative is zero. Note that only R_v[n] is a function of Γ. Expanding equation 4.3 as shown below in equation 4.4 and taking the partial derivative, equation 4.5 is obtained:

ψ = Σ_{n=0}^{N} { R²_xx(nT_s) − 2 R_xx(nT_s) R_v[n] + R_v²[n] }    (4.4)

∂ψ/∂Γ = 2 Σ_{n=0}^{N} R_v[n] ∂R_v[n]/∂Γ − 2 Σ_{n=0}^{N} R_xx(nT_s) ∂R_v[n]/∂Γ = 0    (4.5)

∂R_v[n]/∂Γ = ∂/∂Γ ( Γ² (R_x̂x̂[n] − E²{x̂[n]}) ) = 2 Γ (R_x̂x̂[n] − E²{x̂[n]})    (4.6)


Substituting 4.6 into 4.5 and solving for Γ², equation 4.7 is obtained:

Γ² = [ Σ_{n=0}^{N} R_xx(nT_s) (R_x̂x̂[n] − E²{x̂[n]}) ] / [ Σ_{n=0}^{N} (R_x̂x̂[n] − E²{x̂[n]})² ]    (4.7)

Putting 4.7 back into 4.1 and using 4.3, the mean square error between the continuous and the digital correlation functions is calculated.

4.2 RMS Error

The error that will be used to compare the two correlation functions is the root mean square (rms) error. The error is a function of bit level L, sample rate f_s, quantization range q_r, and the variance of the input signal, σ_x². One pattern that will be noticed in the graphs is the following: in order for the sample rate and bit level to improve the digital correlation function, the quantization range of the A/D converter must be chosen appropriately. That is, if the quantization range does not cover the amplitudes likely to occur given the input signal's variance, then neither sample rate nor bit level can help to improve the digital correlation function. Another interesting result seen in the graphs is that, of the two factors that can improve the digital correlation function under the conditions mentioned above, an increased number of bits provides the more noticeable improvement.
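The scale factor of eq. 4.7 and the resulting error can be sketched directly (Python assumed; the rms form used here, dividing the sum of squared residuals by the number of lags before taking the square root, is one reasonable reading of the thesis's rms criterion):

```python
import math


def scale_factor_sq(Rxx, Rhat, mean_hat):
    """Gamma^2 of eq. 4.7: the least-squares scale matching
    Gamma^2 * (Rhat[n] - E{xhat}^2) to the sampled analog autocorrelation."""
    c = [r - mean_hat ** 2 for r in Rhat]
    return sum(a * b for a, b in zip(Rxx, c)) / sum(b * b for b in c)


def rms_difference(Rxx, Rhat, mean_hat):
    """Root mean square of the residuals of eq. 4.3 at the optimal scale."""
    g2 = scale_factor_sq(Rxx, Rhat, mean_hat)
    rv = [g2 * (r - mean_hat ** 2) for r in Rhat]
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(Rxx, rv)) / len(Rxx))
```

If the quantizer preserved the autocorrelation shape exactly, up to a scale and a mean-square offset, Γ² would recover the inverse scale and the rms difference would collapse to zero.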


Chapter 5

Results of Calculations

The graphs below all show the rms error between the analog and digital correlation functions. Two quantizer ranges are used: −1 ≤ q_r ≤ +1, denoted q_r1, and −3 ≤ q_r ≤ +3, denoted q_r3. Bit levels range from 2 to 8 bits. The rms value of the input signal took on the values 0.1, 0.3, and 1.0. The sample rates considered were 3162.28 Hz, 10.0 kHz, and 31622.8 Hz. In all cases the ordinate of the graph is a log scale of the rms difference.


5.1 Error Vs. Bit Level

Figures 5.1 through 5.4 show the rms difference between the two autocorrelation functions with respect to bit level. Each graph has three plots representing the three different sample rates. The top plot is for f_s = 3162.28 Hz, the second for f_s = 10.0 kHz, and the third for f_s = 31622.8 Hz. It is obvious from figure 5.1 that an increase in bit quantization level provides a smaller rms error. But for the error to be reasonable, such as one in ten thousand, a high bit level is required. Although sample rate does help, it does not contribute much to improving the quality of the digital correlation function. Figure 5.2 shows the same plot but with an input rms value of 1.0. It was noted earlier in chapter two that, given a particular quantization range of the A/D converter and a certain bit level, the voltages at the resistive network are divided accordingly, without regard to the input signal's amplitude. With this in mind, an explanation of figure 5.2 is possible. Because the input signal can now have higher amplitudes, the quantizer is unable to handle them. Therefore the bit level no longer contributes to improving the digital autocorrelation function. Once the quantization range increases to q_r3, as shown in figure 5.3, bit level again provides significant improvement. Figure 5.4 shows the results for q_r3 with a small rms value. Comparing figure 5.1 with 5.4, a smaller quantization range in the A/D is better if the input signal's amplitude range is also small. In other words, the resolution of the quantizer is best when its quantization range is matched to the expected amplitude range of the input signal.


Figure 5.1: RMS difference vs. bits, for f_s = 3162.28, 10⁴, 31622.8 Hz (σ_x = 0.1, q-range = −1, 1)


Figure 5.2: RMS difference vs. bits, for f_s = 3162.28, 10⁴, 31622.8 Hz (σ_x = 1.0, q-range = −1, 1)


Figure 5.3: RMS difference vs. bits, for f_s = 3162.28, 10⁴, 31622.8 Hz (σ_x = 1.0, q-range = −3, 3)


Figure 5.4: RMS difference vs. bits, for f_s = 3162.28, 10⁴, 31622.8 Hz (σ_x = 0.1, q-range = −3, 3)


5.2 Error Vs. Signal Variance

With respect to the input signal's rms value, figures 5.5 and 5.6 show an optimum. The rms value of x_c(t) is a measure of how far from the mean the amplitude of the input signal varies. Here the graphs show the rms difference for bit quantization levels 2 through 8 with respect to the rms value of the input signal. The top curve is for 2 bits and the bottom curve is for 8 bits. Figure 5.5 shows that as the input signal's rms value increases, an optimum value is found for the given quantization range q_r1. In other words, the maximum rms value that the quantizer can handle with q_r1 is 0.3; any amplitude that falls outside this range the quantizer will not be able to use. Figure 5.6 increases the quantization range to q_r3. Note that once this occurs, the rms difference stays low between σ_x = 0.3 and σ_x = 1.0. Since only three cases were taken for the input rms value, figures 5.5 and 5.6 do not indicate the true "optimum" rms value for the given quantization range. When the values for the input rms are taken incrementally from σ_x = 0.1 to σ_x = 2.0, the true "optimum" value is shown. In figure 5.7 two plots are shown: one for quantization range q_r1 and the other for q_r3. Both plots show the rms difference for f_s = 31622.8 Hz and L = 4 bits. For q_r1 the best match under these conditions occurs at an input rms value of σ_x = 0.4, while for q_r3 the best match occurs at σ_x = 1.2.


Figure 5.5: RMS difference vs. σ_x, for bits 2 through 8 (f_s = 31622.8 Hz, q-range = −1, 1)


Figure 5.6: RMS difference vs. σ_x, for bits 2 through 8 (f_s = 31622.8 Hz, q-range = −3, 3)


Figure 5.7: RMS difference vs. σ_x, bits = 4, f_s = 31622.8 Hz (q-range = −1, 1 and −3, 3)


5.3 Error Vs. Sample Rate

Figures 5.8 and 5.9 show the rms difference for quantization bits 2 through 8 with respect to sample rate, fs. As shown in figure 5.8, although sample rate does contribute to decreasing the rms difference, the contribution is not significant; in fact, it remains fairly constant. Only three sample rates were used, and they are represented on the graphs as + symbols. The connections were drawn to emphasize the general slope of the curves. Figure 5.9 is the most interesting. Here the significance of the match between the rms value of the input signal and a proper quantization range is more pronounced. The top curve is for 2 bits, the second for 3 bits, and the third curve represents the rms difference for 4 to 8 bits. This again is the result of the mismatch between the input signal's rms value and the quantization range of the quantizer.

Figure 5.8: RMS difference vs. sample rate fs, for bits 2 through 8 (σx = 0.1, q-range -1,1)

Figure 5.9: RMS difference vs. sample rate fs, for bits 2 through 8 (σx = 1.0, q-range -1,1)


5.4 Comparison of Autocorrelation Functions

The last set of figures, 5.10 through 5.15, compare the continuous and digital autocorrelation functions for different values of bit level, rms input, and sample rate. The time is in seconds, and all the curves go out to five time constants, 5 ms. The figures show the results for all three sample rates, two different bit levels, 2 and 7, and one rms value, σx = 0.1. As the sample rate increases, there are more points that make up the shape of the digital autocorrelation, but their accuracy does not improve with sample rate. This is seen in figures 5.10 through 5.12. By increasing the quantization bit level to 7, a significant improvement is evident, as shown in figures 5.13 through 5.15. Although sample rate does not contribute to the accuracy of the digital autocorrelation, it is still needed. Once an appropriate value for the bit quantization level is chosen, sample rate helps in filling in the rest of the curve.

Figure 5.10: Continuous and digital autocorrelation functions (fs = 3162.28 Hz, σx = 0.1, bits = 2, q-range -1,1)

Figure 5.11: Continuous and digital autocorrelation functions (fs = 1e4 Hz, σx = 0.1, bits = 2, q-range -1,1)

Figure 5.12: Continuous and digital autocorrelation functions (fs = 31622.8 Hz, σx = 0.1, bits = 2, q-range -1,1)

Figure 5.13: Continuous and digital autocorrelation functions (fs = 3162.28 Hz, σx = 0.1, bits = 7, q-range -1,1)

Figure 5.14: Continuous and digital autocorrelation functions (fs = 1e4 Hz, σx = 0.1, bits = 7, q-range -1,1)

Figure 5.15: Continuous and digital autocorrelation functions (fs = 31622.8 Hz, σx = 0.1, bits = 7, q-range -1,1)

Chapter 6

Conclusion

The aim of the thesis was to investigate the quantization effects on correlation functions as they go through a flash type analog to digital converter. The thesis considered the impact that bit quantization level, quantization range, and sample rate had on the autocorrelation of the quantized output signal. Specifically, the problem of how the continuous autocorrelation function can be reconstructed after it goes through an analog to digital converter is addressed. The problem involved a number of computer programs that calculated the mean, the mean square, and eventually the autocorrelation of the quantized output signal. Given that a proper quantization range is chosen, commensurate with the input signal's amplitude range, the results show that of all the factors considered, the choice of bit quantization level has the most impact. As seen from the graphs, although sample rate contributes very little to improving the


quantized output autocorrelation, it does provide a sense of the shape of the output correlation function. In either case, if the quantization range of the analog to digital converter is not chosen properly, neither sample rate nor bit quantization level contributes any improvement to the output autocorrelation function. The other interesting result was that of "optimization". As was mentioned in the previous chapter, when choosing the quantization range of the analog to digital converter it is important that the range and the input variance match. In other words, the quantization range must be wide enough to accept all the possible inputs, but not so wide that it loses resolution.


Bibliography

[1] Athanasios Papoulis. Probability, Random Variables, and Stochastic Processes. McGraw-Hill, 1984.

[2] Simon Haykin. Adaptive Filter Theory. Prentice Hall, 1990.

[3] Sophocles J. Orfanidis. Optimum Signal Processing. Macmillan Publishing Company, 1988.

[4] Milton Abramowitz and Irene A. Stegun. Handbook of Mathematical Functions with Formulas, Graphs, and Mathematical Tables. National Bureau of Standards Applied Mathematics Series 55, 1972.

[5] A. M. Yaglom. Correlation Theory of Stationary and Related Random Functions. Springer-Verlag, 1987.

[6] Brian Kernighan and Dennis Ritchie. The C Programming Language. Prentice Hall, 1978.


Appendix A

Table of Gauss Quadrature Weights and Zeros

The following table lists the weights and zeros which are used to perform the integration in the computer programs. Note that there are both positive and negative zeros; therefore, the calculation actually goes up to 96 points. The values were taken from Abramowitz and Stegun [4]. They are used in equations 3.11 and 3.22 to figure out the individual probabilities and the joint probabilities, respectively.


i     weights, wi                 zeros, xi
1     0.032550614492363166242     0.016276744849602969579
2     0.032516118713868835987     0.048812985136049731112
3     0.032447163714064269364     0.081297495464425558994
4     0.032343822568575928429     0.113695850110665920911
5     0.032206204794030250669     0.145973714654896941989
6     0.032034456231992663218     0.178096882367618602759
7     0.031828758894411006535     0.210031310460567203603
8     0.031589330770727168558     0.241743156163840012328
9     0.031316425596861355813     0.273198812591049141487
10    0.031010332586313837423     0.304364944354496353024
11    0.030671376123669149014     0.335208522892625422616
12    0.030299915420827593794     0.365696861472313635031
13    0.029896344136328385984     0.395797649828908603285
14    0.029461089958167905970     0.425478988407300545365
15    0.028994614150555236543     0.454709422167743008636
16    0.028497411065085385646     0.483457973920596359768


Weights and Zeros continued

i     weights, wi                 zeros, xi
17    0.027970007616848334440     0.511694177154667673586
18    0.027412962726029242823     0.539388108324357436227
19    0.026826866725591762198     0.566510418561397168404
20    0.026212340735672413913     0.593032364777572080684
21    0.025570036005349361499     0.618925840125468570386
22    0.024900633222483610288     0.644163403784967106798
23    0.024204841792364691282     0.668718310043916153953
24    0.023483399085926219842     0.692564536642171561344
25    0.022737069658329374001     0.715676812348967626225
26    0.021966644438744349195     0.738030643744400132851
27    0.021172939892191298988     0.759602341176647498703
28    0.020356797154333324595     0.780369043867433217604
29    0.019519081140145022410     0.800308744139140817229
30    0.018660679627411467385     0.819400310737931675539
31    0.017782502316045260838     0.837623511228187121494
32    0.016885479864245172450     0.854959033434601455463
33    0.015970562902562291381     0.871388505909296502874
34    0.015038721026994938006     0.886894517402420416057


Weights and Zeros continued

i     weights, wi                 zeros, xi
35    0.014090941772314860916     0.901460635315852341319
36    0.013128229566961572637     0.915071423120898074206
37    0.012151604671088319635     0.927712456722308690965
38    0.011162102099838498591     0.939370339752755216932
39    0.010160770535008415758     0.950032717784437635756
40    0.009148671230783386633     0.959688291448742539300
41    0.008126876925698759217     0.968326828463264212174
42    0.007096470791153865269     0.975939174585136466453
43    0.006058545504235961683     0.982517263563014677447
44    0.005014202742927517693     0.988054126329623799481
45    0.003964554338444686674     0.992543900323762624572
46    0.002910731817934946408     0.995981842987209290650
47    0.001853960788946921732     0.998364375863181677724
48    0.000796792065552012429     0.999689503883230766828


Appendix B

Computer Programs

The following are the computer programs that are used to simulate and calculate the probabilities. They were written in the C language [6] under UNIX, on a Digital DECsystem 5400 running Ultrix V4.1. Comments in C are shown between the symbol pair /* and */. The first program is called "define.h". It is a header file that contains constants and other definitions, in this case R(t). It also declares two arrays of size 48 that will hold the weights and zeros.

/* This is the header file ''define.h''. */

#include <stdio.h>
#include <math.h>

#define Vrb -1.0    /* A/D converter lower voltage range. */


#define Vrt 1.0     /* A/D converter upper voltage range. */

#define PI 3.14159

#define Tc 1e-3     /* Time constant, see below. */

/* R(t) is the definition of the input autocorrelation function. */
#define R(t) ( sigma * sigma * exp( (double) ( t ) / (double) Tc ) )

extern double weights[48];
extern double zeros[48];

===========


This program is called "begin.c". It controls the calling of the other programs and also performs final calculations, such as finding the rms value of the final results and calculating the scale factor, A2. The program contains three nested "for" loops called fs, sigma, and bits. Inside the "bits" for loop there are two more loops called "n" and "i". The "n" loop is used to calculate the numerator and denominator values for the scale factor A2. After finding the scale factor, loop "i" is used to actually find the scaled values of the discrete autocorrelation. Volt() calculates the voltage taps of the resistive network of the analog to digital converter, pk() calculates the individual probabilities, and pkl() calculates the joint probabilities. Rxn_ray is an array that holds the calculated values of the discrete autocorrelation. Finally, "temp_rms" holds the rms value, and the next print statement normalizes that value to the variance.

===========

/* This is the first program ''begin.c''. */

#include <stdio.h>
#include <math.h>
#include "define.h"

extern void volt();
extern void pk();
extern double pkl();

double Rxn_ray[5001];


main()
{
    int i, bits, n, pwr, sigma_i, n_limit;
    float sigma;
    double A2, numerator, denominator, fs, Rxn, Rv_n, temp_rms, normal;
    double diff1, diff;
    extern float mean;

    printf("%f, %f\n\n", Vrt, Vrb);
    for( fs = 5e3; fs <= 5e5; fs *= 10.0 )
    {
        n_limit= (int) (5.0 * Tc * fs);
        for( sigma= 0.1; sigma <= 1.1; sigma += 0.1 )
        {
            printf("\n****** This is the start of something big! ******\n");
            printf("============= n_limit = %d\n", n_limit);
            for( bits = 2; bits <= 8; ++bits )
            {
                printf("\nThe number of bits is %d\n", bits);
                printf("sigma_x is %f\n", sigma);
                pwr= (int) ( pow( (double) 2.0, (double) bits ) - 1.0 );
                volt( sigma, pwr );


                pk( sigma, pwr );
                diff1= diff= numerator= denominator= temp_rms= 0.0;
                printf("fs is %8.2f Hz\n", fs);
                for( n = 0; n <= n_limit; ++n )
                {
                    Rxn_ray[n]= Rxn = pkl( sigma, pwr, fs, n );
                    numerator += R(-n/fs) * ( Rxn - mean * mean );
                    denominator += pow((double) (Rxn - mean * mean), (double) 2.0);
                }
                printf( "num=%g, den=%g\n", numerator, denominator );
                A2= numerator / denominator;
                printf("A2= %g\n\n", A2);
                for( i = 0; i <= n_limit; ++i )
                {
                    Rv_n= A2 * ( Rxn_ray[i] - mean * mean );
                    printf("R(%dTs) = %g Rv_%d = %g\n", i, R(-i/fs), i, Rv_n);
                    temp_rms += pow((double) ( R(-i/fs) - Rv_n ), (double) 2.0);
                    diff= fabs( R(-i/fs) - Rv_n );
                    if ( !i )
                        diff1= diff;
                    if ( i )


                    {
                        if( diff1 < diff )
                            diff1= diff;
                    }
                }
                printf("max is %g\n", diff1 / ( sigma * sigma ) );
                temp_rms= pow( (double) ( temp_rms / ( (double) ( n_limit + 1 ) ) ),
                               (double) 0.5 );
                printf("rms is %g\n", temp_rms / ( sigma * sigma ) );
            }
        }
    }
    printf("\n\n################# I am finished #################\n");
}


This program is called volt(). It calculates the voltage values at the resistor nodes. Here also is where the values for the weights and zeros are put into the arrays. Three files are created in this program. All three files contain the same information, but are used by different programs. The first file, volts_t.in, is used by program pk() to find the individual probabilities. The other two files, voltsab_t.in and voltscd_t.in, are used by program pkl() to find the joint probabilities. At the end of each file the values 1000 and 0 are added. The 1000 is there as a flag to represent the value of infinity when doing the integrations; it is not used as a value in any of the calculations.

===========

/* This is program ''volt.c''. */

#include "define.h"

double zeros[ 48 ]= {
    0.016276744849602969579, 0.048812985136049731112,
    0.081297495464425558994, 0.113695850110665920911,
    0.145973714654896941989, 0.178096882367618602759,
    0.210031310460567203603, 0.241743156163840012328,
    0.273198812591049141487, 0.304364944354496353024,
    0.335208522892625422616, 0.365696861472313635031,
    0.395797649828908603285, 0.425478988407300545365,
    0.454709422167743008636, 0.483457973920596359768,
    0.511694177154667673586, 0.539388108324357436227,


    0.566510418561397168404, 0.593032364777572080684,
    0.618925840125468570386, 0.644163403784967106798,
    0.668718310043916153953, 0.692564536642171561344,
    0.715676812348967626225, 0.738030643744400132851,
    0.759602341176647498703, 0.780369043867433217604,
    0.800308744139140817229, 0.819400310737931675539,
    0.837623511228187121494, 0.854959033434601455463,
    0.871388505909296502874, 0.886894517402420416057,
    0.901460635315852341319, 0.915071423120898074206,
    0.927712456722308690965, 0.939370339752755216932,
    0.950032717784437635756, 0.959688291448742539300,
    0.968326828463264212174, 0.975939174585136466453,
    0.982517263563014677447, 0.988054126329623799481,
    0.992543900323762624572, 0.995981842987209290650,
    0.998364375863181677724, 0.999689503883230766828 };

double weights[48]= {
    0.032550614492363166242, 0.032516118713868835987,
    0.032447163714064269364, 0.032343822568575928429,
    0.032206204794030250669, 0.032034456231992663218,
    0.031828758894411006535, 0.031589330770727168558,
    0.031316425596861355813, 0.031010332586313837423,
    0.030671376123669149014, 0.030299915420827593794,


    0.029896344136328385984, 0.029461089958167905970,
    0.028994614150555236543, 0.028497411065085385646,
    0.027970007616848334440, 0.027412962726029242823,
    0.026826866725591762198, 0.026212340735672413913,
    0.025570036005349361499, 0.024900633222483610288,
    0.024204841792364691282, 0.023483399085926219842,
    0.022737069658329374001, 0.021966644438744349195,
    0.021172939892191298988, 0.020356797154333324595,
    0.019519081140145022410, 0.018660679627411467385,
    0.017782502316045260838, 0.016885479864245172450,
    0.015970562902562291381, 0.015038721026994938006,
    0.014090941772314860916, 0.013128229566961572637,
    0.012151604671088319635, 0.011162102099838498591,
    0.010160770535008415758, 0.009148671230783386633,
    0.008126876925698759217, 0.007096470791153865269,
    0.006058545504235961683, 0.005014202742927517693,
    0.003964554338444686674, 0.002910731817934946408,
    0.001853960788946921732, 0.000796792065552012429 };

float v2;

void volt( sigma, pwr )
double sigma;


int pwr;
{
    FILE *volts, *voltsab, *voltscd;
    int i;
    float delta, v;

    volts= fopen( "volts_t.in", "w" );
    voltsab= fopen( "voltsab_t.in", "w" );
    voltscd= fopen( "voltscd_t.in", "w" );
    fprintf( volts, "%d %f\n", 0, -1e3 );
    delta= ( Vrt - Vrb ) / ( 2.0 * pwr );
    for( i= 1; i <= pwr; ++i )
    {
        v= Vrb + ( 2.0 * i - 1.0 ) * delta;
        fprintf( volts,   "%d %f\n", i, v );
        fprintf( voltsab, "%d %f\n", i, v );
        fprintf( voltscd, "%d %f\n", i, v );
    }
    fprintf( volts, "%d %f\n", pwr + 1, 1e3 );
    fprintf( volts, "%d %f\n", pwr + 2, 0.0 );
    fprintf( voltsab, "%d %f\n", pwr + 1, 1e3 );
    fprintf( voltsab, "%d %f\n", pwr + 2, 0.0 );


    fprintf( voltscd, "%d %f\n", pwr + 1, 1e3 );
    fprintf( voltscd, "%d %f\n", pwr + 2, 0.0 );
    fclose( volts );
    fclose( voltsab );
    fclose( voltscd );
}

===========


This program, pk.c, calculates the input probabilities. It finds the value of P(Vk <= xc(t) < Vk+1). With this information it then finds the mean and the mean square value, mean2, of the output. The function called "gauss_k()" uses the weights and zeros to calculate the probabilities.

===========

/* This is program pk.c */

#include "define.h"

double gauss_k();
float mean, mean2;

void pk( sigma, pwr )
double sigma;
int pwr;
{
    FILE *volts, *out;
    int i, v;
    double main1, total= 0.0;
    float a, b;

    mean= 0.0;
    mean2= 0.0;
    out= fopen( "pk_t.out", "w" );
    volts= fopen( "volts_t.in", "r" );


    fscanf( volts, "%d %f", &v, &a );
    for( i= 0; i < pwr + 1; ++i )
    {
        fscanf( volts, "%d %f", &v, &b );
        main1= gauss_k(a, b, sigma);
        a= b;
        mean += ( v - 1 ) * (float) main1;
        mean2 += ( v - 1 ) * ( v - 1 ) * (float) main1;
        total += main1;
    }
    printf("mean= %f, mean square= %f\n\n", mean, mean2);
    fclose( volts );
    fclose( out );
}

double gauss_k( a, b, sigma )
float a, b;
double sigma;
{
    int i, k, scale, flag_a, flag_b;
    double temp= 0.0, bb, c1, wi, zi;
    float a1, k2;


    flag_a= ( ( (int) a ) == -1000 ) ? 2 : 0;
    flag_b= ( ( (int) b ) == 1000 ) ? 1 : 0;
    bb= 1.0 / ( 2.0 * pow( (double) 2.0 * PI, (double) 0.5 ) * sigma );
    c1= 1.0 / ( 8.0 * sigma * sigma );
    for( i= 0; i < 2; ++i )
    {
        scale= ( i ) ? -1 : 1;
        for( k= 0; k < 48; ++k )
        {
            wi= weights[ k ];
            zi= scale * zeros[ k ];
            switch( flag_a + flag_b )
            {
                case 0:
                    a1= b - a;
                    k2= ( b - a ) * zi + b + a;
                    break;
                case 1:
                    a1= a;
                    k2= a * ( zi + 1.0 );
                    break;


                case 2:
                    a1= -b;
                    k2= b * ( 1.0 - zi );
                    break;
            }
            temp += wi * a1 * bb * exp( (double) -c1 * k2 * k2 );
        }
    }
    return( ( ( flag_a + flag_b ) == 0 ) ? temp : ( 0.5 - temp ) );
}

===========


The final program is called pkl.c. It calculates the joint probabilities, P(Vk <= xc(t) < Vk+1, Vl <= xc(0) < Vl+1). This program contains four functions. The first, "g_f_kl()", sets up some constants to simplify the actual integration. The second, "main_kl()", uses the weights and zeros and calls "gauss_kl()" to do the integration. Once inside gauss_kl(), the final function, "Q()", is called. This last function evaluates the second integral as a Q-function.

===========

/* This is program pkl.c */

#include "define.h"

static float a, b, c, d, t, gg1, gg2, gg3, gg4, bb;

void g_f_kl();
double main_kl();
double gauss_kl();
double Q();

double pkl( sigma, pwr, fs, n )
double sigma, fs;
int pwr, n;
{
    FILE *voltsab, *voltscd, *out;
    int i, j, k, m, v;
    double temp_kl, R_n= 0.0, rms= 0.0;


    extern float mean, mean2;

    out= fopen( "pkl_t.out", "w" );
    voltsab= fopen( "voltsab_t.in", "r" );
    voltscd= fopen( "voltscd_t.in", "r" );
    if( n )
        g_f_kl( sigma, fs, n );
    fscanf( voltsab, "%d %f\n", &v, &a );
    for( i= 1; i <= pwr; ++i )
    {
        if( !n )                    /* When evaluating the output      */
        {                           /* at n=0, the value of the        */
            R_n= mean2;             /* autocorrelation is equal to the */
            break;                  /* second moment.                  */
        }
        fscanf( voltsab, "%d %f\n", &v, &b );
        rewind( voltscd );
        for( k= 0; k < i; ++k )
            fscanf( voltscd, "%d %f\n", &m, &c );
        for( j= i; j <= pwr; ++j )
        {
            fscanf( voltscd, "%d %f\n", &m, &d );


            temp_kl= main_kl();
            if( ( v - 1 ) == ( m - 1 ) )
            {
                R_n += ( v - 1 ) * ( m - 1 ) * temp_kl;
            }
            else
            {
                R_n += 2.0 * ( v - 1 ) * ( m - 1 ) * temp_kl;
            }
            c= d;
        }
        a= b;
    }
    fclose( voltsab );
    fclose( voltscd );
    fclose( out );
    return( R_n );
}

void g_f_kl( sigma, fs, n )
double sigma, fs;
int n;


{
    double Ts= ( n * 1.0 ) / fs;

    t= ( Ts <= 0 ) ? Ts : -Ts;
    gg1= R( 0 ) / ( 2.0 * ( R( 0 ) * R( 0 ) - R( t ) * R( t ) ) );
    gg2= R( t ) / R( 0 );
    gg3= gg1 * ( 1.0 - gg2 * gg2 );
    gg4= pow( 2.0 * gg1, 0.5 );
    bb= 1.0 / pow( (double) 2.0 * PI * R( 0 ), (double) 0.5 );
}

double main_kl()
{
    int i, k, scale;
    double integ= 0.0, wi, zi;

    for( i= 0; i < 2; ++i )
    {
        scale= ( i ) ? -1 : 1;
        for( k= 0; k < 48; ++k )
        {
            wi= weights[ k ];
            zi= scale * zeros[ k ];
            integ += gauss_kl( wi, zi );


        }
    }
    return( integ );
}

double gauss_kl( wi, zi )
double wi, zi;
{
    int flag_a, flag_b, flag_c, flag_d;
    double alpha, temp;
    float a1, a2, c1, k1, k2, qc, qd;

    alpha= zi;
    flag_a= ( a == -1000 ) ? 2 : 0;
    flag_b= ( b == 1000 ) ? 1 : 0;
    flag_c= ( c == -1000 ) ? 2 : 0;
    flag_d= ( d == 1000 ) ? 1 : 0;
    switch( flag_a + flag_b )
    {
        case 0:
            a1= ( b - a ) / 2.0;
            a2= c1= k1= 1.0;
            k2= ( ( b - a ) * alpha + b + a ) / 2.0;


            break;
        case 1:
            a1= a2= 2.0 * a;
            c1= 4.0 * a * a;
            k1= k2= 1.0 / ( alpha + 1.0 );
            break;
        case 2:
            a1= 2.0 * b;
            a2= 2.0 * b;
            c1= 4.0 * b * b;
            k1= k2= 1.0 / ( alpha + 1.0 );
            break;
    }
    switch( flag_c + flag_d )
    {
        case 0:
            qc= Q( c, a2, k2 );
            qd= Q( d, a2, k2 );
            break;
        case 1:
            qc= Q( c, a2, k2 );


            qd= 0.0;
            break;
        case 2:
            qc= 1.0;
            qd= Q( d, a2, k2 );
            break;
    }
    temp= wi * a1 * bb * k1 * k1 * exp( (double) -c1 * gg3 * k2 * k2 )
          * ( qc - qd );
    return( temp );
}

double Q( g_cd, aa2, k )
float g_cd, aa2, k;
{
    int flag= 0;
    float r;
    double x, tempp, co;
    double p= 0.2316419, b1= 0.319381530, b2= -0.356563782, b3= 1.781477937;
    double b4= -1.821255978, b5= 1.330274429;

    x= gg4 * ( g_cd - gg2 * aa2 * k );
    if( x < 0 )


    {
        x= -x;
        flag= 1;
    }
    co= 1.0 / pow( (double) 2.0 * PI, (double) 0.5 );
    r= 1.0 / ( 1.0 + p * x );
    tempp= co * exp( (double) -x * x / 2.0 ) * ( b1 * r + b2 * r * r
           + b3 * r * r * r + b4 * r * r * r * r + b5 * r * r * r * r * r );
    return( ( flag ) ? ( 1.0 - tempp ) : ( tempp ) );
}