THE UNIFIED ADAPTIVE EQUALIZATION ALGORITHM

by

Hai Thanh Ho

B.S.E.E., University of Colorado, 1988

A thesis submitted to the

Faculty of the Graduate School of the

University of Colorado in partial fulfilment

of the requirements for the degree of

Master of Science

Department of Electrical Engineering

1989

This thesis for the Master of Science degree by

Hai Thanh Ho

has been approved for the

Department of

Electrical Engineering

Marvin Anderson

Date

Ho, Hai Thanh (M.S., Electrical Engineering)

The Unified Adaptive Equalization Algorithm

Thesis directed by Associate Professor Douglas A. Ross

This thesis introduces the Unified Adaptive Equalization

Algorithm, which can be used to implement nine types of equalization

schemes. Also, a performance study is conducted with three

adaptation algorithms: Recursive Least Square, Recursive Minimum

Variance, and Recursive Stochastic Gradient.

The form and content of this abstract are approved. I recommend

its publication.

Signed

Douglas A. Ross

ACKNOWLEDGMENTS

The author wishes to express sincere appreciation to Dr.

Douglas A. Ross for his guidance and support during the course of this

thesis. Also, he would like to thank the committee members

Professors Bill Wolfe and Marvin Anderson.

Special thanks to the author's family for their never-ending

support.

CONTENTS

PART I INTRODUCTION & THEORY DEVELOPMENT

1. Introduction..........................................1

1.1 Mathematical Development......................6

2. Estimation Algorithm Development......................9

2.1 Recursive Least Square (RLS)..................10

2.2 Recursive Minimum Variance (RMV)..............17

2.3 Recursive Stochastic Gradient (RSG)...........21

2.4 Generalized Recursive Algorithm (GRA) ........24

PART II ADAPTIVE EQUALIZATION & APPLICATIONS

3. Adaptive Equalization.................................26

3.1 Derivation of Equalization algorithm ..........28

3.2 Generalized & Unified Adaptive................32

Equalization Algorithms (GAEA) & (UAEA)

3.3 Simulation and Performance Study..............39

3.3.1 IIR Band-pass channel.....................44

3.3.2 Coaxial transmission line..............59

4. Conclusion ...........................................66

BIBLIOGRAPHY.............................................69

APPENDIX Simulation software & programs..................71

Figures and Tables

Figures: PAGE

1.1. Adaptive equalization scheme. 3

1.2. Block diagram of system. 6

2.1. Estimation scheme. 9

2.2. Generalized Recursive Algorithm block diagram. 24

3.1. Communication network. 27

3.2. Adaptive equalization scheme. 27

3.3. System identification scheme. 28

3.4. Estimation of inverse channel scheme. 30

3.5. IIR equalizer structure. 32

3.6. FIR equalizer structure. 33

3.7. DFE Equalizer structure. 34

3.8. Block diagram of UAEA. 37

3.9. Communication network with adaptive equalization. 43

3.10a. Magnitude response of channel. 45

3.10b. Phase response of channel. 46

3.11a. Transmitted signal u(k). 46

3.11b. Received signal y(k). 47

3.12. Equalized output. 47

3.13a. Equalized channel magnitude response. 48

3.13b. Equalized channel phase response. 48

3.14a. Eye diagram of transmitted signal u(k). 49

3.14b. Eye diagram of received signal y(k). 50

3.14c. Eye diagram of equalized signal û(k). 50

3.15. Equalizer filter coefficients. 51

3.16a. Random transmitted signal u(k). 52

3.16b. Received signal y(k). 52

3.16c. Equalized signal û(k). 53

3.17. MSE responses with σv = .01. 54

3.18. MSE responses with σv = .03. 54

3.19a. RMV MSE responses. 56

3.19b. RLS MSE responses. 56


3.19c. RSG MSE responses. 57

3.20. Coaxial channel circuit model. 59

3.21a. Coaxial channel magnitude response. 61

3.21b. Coaxial channel phase response. 62

3.21c. Coaxial impulse response. 62

3.22. Transmitted signal u(k) and received signal y(k). 63

3.23. Equalized signal û(k). 64

3.24. MSE response of IIR RLS. 65

Table:

3.1. Algorithm performance summary. 58

Chapter 1

INTRODUCTION

One of the most popular and important subjects in the areas of digital signal processing and system engineering is Adaptive Estimation Algorithms. In myriads of applications, scientists need to faithfully estimate a certain set of parameters belonging to a stochastic signal or system. Some of the applications are system identification, adaptive equalization, adaptive control, noise cancellation, and adaptive array beamforming. The invention of estimation algorithms dates back to the early 1960's, and more and more algorithms of different structures and performances have been developed to satisfy the needs of today's technology. Two of the widely used mathematical models for representing a signal or a system are the State Space model and the Input/Output (or ARMAX) model. In many applications the ARMAX model is used more because it is simpler to implement and requires less computation.

Data transmission over telephone and radio lines has been made bandwidth-efficient, with minimal error in recovery of the signal, by the use of equalization to compensate for the distortion caused by the channel. Many types of equalizers have been developed over the years, and each has its advantages and disadvantages. The process of selecting the type of equalizer involves examining the particular application and factors such as computation power, performance, cost, complexity and type of channel. One of the popular equalizers is the FIR (Finite Impulse Response) transversal filter [2]


which is popular because of its unconditionally stable characteristic and simple implementation. The Wiener IIR (Infinite Impulse Response) equalizer was shown [2] to have better performance than the FIR but requires the channel characteristics to be known in advance. In the case where the channel is unknown, the adaptive Kalman equalizer [3] can be considered, but it has a nonlinear form, and the linearization causes the algorithm not to always converge [4]. The last two types of equalizers, which are neither as simple as the FIR nor as complex as the Kalman, are the adaptive IIR [2,5] and the IIR Decision Feedback Equalizer (DFE) [2]. The performance of these equalizers [2] beats the FIR because they can model the transfer function poles more readily and accurately. However, the Kalman equalizer can outperform these when it converges [3]. The IIR equalizer has received little attention in recent years due to the development of the DFE. However, although the DFE has better MSE performance compared to the IIR, it has the potential of propagating decision errors [6], which can degrade the performance, but fortunately not catastrophically. In summary, in many equalization problems where the channel characteristics are not known in advance, the use of an adaptive equalizer is necessary. The popular types of adaptive equalizer (AE) are the FIR, IIR, Kalman, and the DFE, which have been studied and implemented by many researchers and engineers.

The three types of adaptive equalizers, FIR, IIR and DFE, all need an adaptive algorithm to search for the optimal coefficients of the filter. Although, in the literature of AE, the structures and design of AE were discussed along with the popular algorithms such as the Recursive Least Square and Gradient, many of the details, such as comparison of the performances of various algorithms and the generality of FIR, IIR and DFE with respect to the algorithm structure, have not yet been investigated.

The motivation of this thesis was to develop a unified adaptive equalization algorithm that can be used to implement nine different equalization schemes. Each scheme involves an equalizer filter and an adaptive coefficient update algorithm as shown in figure 1.1. The three types of filter are Finite Impulse Response (FIR), Infinite Impulse Response (IIR), and Decision Feedback Equalizer (DFE), and the three types of coefficient update algorithm are Recursive Least Square (RLS), Recursive Minimum Variance (RMV), and Recursive Stochastic Gradient (RSG).

Figure 1.1. Adaptive equalization scheme.


Together, nine adaptive equalizers listed below can be formed

1) IIR RLS

2) IIR RMV

3) IIR RSG

4) FIR RLS

5) FIR RMV

6) FIR RSG

7) DFE RLS

8) DFE RMV

9) DFE RSG

The Unified Adaptive Equalization Algorithm (UAEA) is to be developed by integrating the Generalized Recursive Algorithm (GRA) and the Generalized Adaptive Equalization Algorithm (GAEA), both of which are also to be developed in this thesis. In addition, an extensive evaluation and comparison of the performance of the estimation algorithms in adaptive equalization is conducted. The performance factors to be examined are: stability, rate of convergence, steady state error, noise immunity and computation complexity. Part of the results of the study will be simulation results.

In chapter two, the concept of estimation and the estimation algorithms will be discussed along with the development of the three algorithms RLS, RMV and RSG. It will also be shown that these three algorithms share a common structure, so that a Generalized Recursive Algorithm (GRA) can be formed. In chapter three, the estimation algorithms are modified to become equalization algorithms. Similar to the GRA, the GAEA will be introduced and then combined with the GRA to form the UAEA. Also in this chapter, a performance study will be conducted using a hypothetical IIR band-pass channel and a coaxial transmission line channel. Chapter four ends the thesis with conclusions and final remarks.


1.1. Mathematical Development.

In this section, we will establish the mathematical

notations to be used later for developing the estimation

algorithms.

A given system or process can generally be represented by a mathematical model based on physical laws of nature and/or the past history of its behavior. The model can take on various forms. Here are some of the common models being used in real applications:

i) Ordinary, Partial Difference Equations

ii) Algebraic Equations

iii) State Space Equations

iv) Impulse Response

v) Transfer Function

We will be working with Regression models, which can be of types i and v.

Consider the block diagram of a system shown in figure 1.2

below

Figure 1.2. Block diagram of system.


where

u: Input signal

y: Output of system

v: Output white noise with statistics

E{v} = 0;  E{v²} = σv²

θ: Parameter vector of system

The input output relationship of the system is described by the difference equation

y(k) = -a1 y(k-1) - a2 y(k-2) - ... - an y(k-n)
       + b0 u(k) + b1 u(k-1) + b2 u(k-2) + ... + bm u(k-m) + v(k)   (1.1.1)

Defining the following vectors

θ = [-a1 -a2 ... -an b0 b1 ... bm]'   (1.1.2a)

and

φ'(k) = [y(k-1) y(k-2) ... y(k-n) u(k) u(k-1) ... u(k-m)]   (1.1.2b)

where ' denotes the transpose of a matrix. Now we combine (1.1.1) and (1.1.2) to write y(k) in vector form as

y(k) = φ'(k)θ + v(k)   (1.1.3)

Next, if we take the Z-transform of the difference equation (1.1.1) we will get

Y(z)[1 + a1 z^-1 + ... + an z^-n] = U(z)[b0 + b1 z^-1 + ... + bm z^-m]   (1.1.4)

where z is the complex frequency variable. Thus, the transfer function which represents the input output relationship in the frequency domain is

        Y(z)   z^(n-m) (b0 z^m + b1 z^(m-1) + ... + bm)
H(z) = ----- = ---------------------------------------   (1.1.5)
        U(z)        z^n + a1 z^(n-1) + ... + an

We will use the notations established in the previous equations throughout the next chapters.
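As a concrete illustration, the difference equation (1.1.1) can be simulated directly. The sketch below generates y(k) for a hypothetical first-order ARMAX model; the function name, coefficient values and seed are illustrative choices, not taken from the thesis.

```python
import numpy as np

def simulate_armax(a, b, u, sigma_v=0.0, rng=None):
    """Generate y(k) from the difference equation (1.1.1):
    y(k) = -a1 y(k-1) - ... - an y(k-n) + b0 u(k) + ... + bm u(k-m) + v(k)."""
    rng = rng or np.random.default_rng(0)
    n, m = len(a), len(b) - 1
    y = np.zeros(len(u))
    for k in range(len(u)):
        ar = sum(-a[i] * y[k - 1 - i] for i in range(n) if k - 1 - i >= 0)
        ma = sum(b[j] * u[k - j] for j in range(m + 1) if k - j >= 0)
        y[k] = ar + ma + sigma_v * rng.standard_normal()
    return y

# Hypothetical first-order example: y(k) = 0.5 y(k-1) + u(k), noise-free
y = simulate_armax(a=[-0.5], b=[1.0], u=np.ones(5))
print(y)  # step response: 1, 1.5, 1.75, 1.875, 1.9375
```

With sigma_v > 0 the same routine produces the noisy output y(k) = φ'(k)θ + v(k) used by the estimation algorithms in chapter 2.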

Chapter 2

ESTIMATION ALGORITHM DEVELOPMENT

An estimation algorithm is the result of the optimization of a performance index that is a function of the signals and parameters we wish to estimate. Consider the estimation scheme shown in figure 2.1.

Figure 2.1. Estimation scheme.

where

u(k) : Input signal

y(k) : Output signal

v(k) : Gaussian output noise

0 : System parameter vector


θ̂ : Estimated parameter vector

ŷ(k) : Estimated output

e(k) : Output estimation error

The task of the estimation algorithm is to adjust the estimated vector θ̂ so that the error signal e(k) converges to a minimum. The assumption is that u(k) and y(k) are available. Also, it should be mentioned that the output y(k) can be thought of as being generated by an input u(k) through a real or fictitious process, with θ representing the parameters of its transfer function. This concept can be used to develop and distinguish the system identification and adaptive equalization algorithms.

Optimization theory is applied to a wide range of models; the two types of models that are often used are the State Space model and the Input/Output model discussed in section 1.1. Our algorithms will be based on the Input/Output model. We will develop three different algorithms: RLS, RMV, and RSG. These three algorithms will be shown to have a similar structure, and therefore a Generalized Recursive Algorithm (GRA) will be created.

2.1. Recursive Least Square (RLS) Algorithm

The RLS algorithm is a recursive version of the offline Least Square algorithm. We therefore start with the Least Square development.


Consider the system

y(k) = -a1 y(k-1) - a2 y(k-2) - ... - an y(k-n) +   (2.1.1)
       b0 u(k) + b1 u(k-1) + b2 u(k-2) + ... + bm u(k-m) + v(k)

or

y(k) = φ'(k)θ + v(k)   (2.1.2)

where θ, φ and v are defined in section 1.1. Suppose we have collected a set of data points {y(N), y(N+1), ..., y(k)} and we would like to estimate the unknown parameter vector θ based on these data points. Let us first define the augmented output vector

y = [y(N) y(N+1) ... y(k)]'   (2.1.3)

From the relationship in (2.1.2) we see that y can be written as

  | y(N)   |   | y(N-1) y(N-2) ... y(N-n)   u(N)   ... u(N-m)   | | -a1 |   | v(N)   |
  | y(N+1) | = | y(N)   y(N-1) ... y(N+1-n) u(N+1) ... u(N+1-m) | | -a2 | + | v(N+1) |
  |  ...   |   |  ...                                           | | ... |   |  ...   |
  | y(k)   |   | y(k-1) y(k-2) ... y(k-n)   u(k)   ... u(k-m)   | |  bm |   | v(k)   |
                                                                              (2.1.4)

We see that equation (2.1.4) can also be written as

y = [φ(N) φ(N+1) ... φ(k)]'θ + v   (2.1.5)

where the noise vector v is

v = [v(N) v(N+1) ... v(k)]'   (2.1.6)

We next define

Φ = [φ(N) φ(N+1) ... φ(k)]'   (2.1.7)

Thus, the output vector may now be written as

y = Φθ + v   (2.1.8)
We now derive the Least Square algorithm. As mentioned earlier, optimization involves minimizing a cost function J(θ̂), usually a function of the error. Consider the error squared cost function

J(θ̂) = (y - ŷ)'(y - ŷ)   (2.1.9)
      = e'e
      = (y - Φθ̂)'(y - Φθ̂)

The optimization can be done by finding the minimum of the quadratic function J(θ̂), which means taking the derivative of J(θ̂) and setting it to zero to solve for θ̂. The derivative of (2.1.9) is

∂J(θ̂)/∂θ̂ = -2Φ'(y - Φθ̂)   (2.1.10)

Setting it equal to zero we get

Φ'Φθ̂ = Φ'y   (2.1.11)

or

θ̂ = [Φ'Φ]^-1 Φ'y   (2.1.12)

Thus, (2.1.12) is the minimal least square solution for θ based on the (k-N) data points. Let us put in time subscripts to specify that the latest data point is at t_k:

θ̂(k) = [Φ'_k Φ_k]^-1 Φ'_k y_k   (2.1.13)

In many cases where the LS solution is to be applied to real time processes, it needs to update its solution to include the next incoming data point. Suppose we receive the next data point at time t_(k+1); then the LS solution in equation (2.1.13) with the updated vectors becomes

θ̂(k+1) = [Φ'_(k+1) Φ_(k+1)]^-1 Φ'_(k+1) y_(k+1)   (2.1.14)

Notice that this form of the solution requires us to perform a matrix inversion, which is very computationally expensive. A way to get around this and facilitate the computation is to develop a recursive form of (2.1.14). The idea of recursion is letting the current estimate θ̂(k+1) be a function of the previous estimate θ̂(k). We would expect that the computation will lessen because we use the results from the previous iterations.


The complete derivation of the RLS algorithm can be found in many adaptive filter textbooks [7]. Therefore, we will state the key ideas and important steps here. First, the vectors that include the next data point at t_(k+1) can be written in the augmented form

y_(k+1) = [y_k' y(k+1)]'   (2.1.15)

v_(k+1) = [v_k' v(k+1)]'   (2.1.16)

Φ_(k+1) = [Φ_k' φ(k+1)]'   (2.1.17)

Also, defining the matrix P as

P(k+1) = [Φ'_(k+1) Φ_(k+1)]^-1
       = [Φ'_k Φ_k + φ(k+1)φ'(k+1)]^-1   (2.1.18)

and substituting these equations into the solution equation (2.1.14) we get

θ̂(k+1) = P(k+1) Φ'_(k+1) y_(k+1)   (2.1.19)

Let us next state the Matrix Inversion Lemma.

Lemma: Let A, C and A + BCD be nonsingular square matrices. Then the following matrix identity holds

(A + BCD)^-1 = A^-1 - A^-1 B(C^-1 + D A^-1 B)^-1 D A^-1   (L1)


Applying this to the equation for P(k+1) in (2.1.18), where we let

A = Φ'_k Φ_k,  B = φ(k+1),  C = I,  D = φ'(k+1)

we get the recursive form of P as

P(k+1) = P(k) - P(k)φ(k+1)[1 + φ'(k+1)P(k)φ(k+1)]^-1 φ'(k+1)P(k)   (2.1.20)

Thus this form of P involves computing the inverse of a scalar instead of a matrix. We can now substitute (2.1.20) into the solution (2.1.19) and get

θ̂(k+1) = [P(k) - P(k)φ(k+1)[1 + φ'(k+1)P(k)φ(k+1)]^-1 φ'(k+1)P(k)][Φ'_k y_k + φ(k+1)y(k+1)]   (2.1.21)

It can be shown, after some algebraic manipulation, that θ̂ can be put in recursive form as

θ̂(k+1) = θ̂(k) + P(k)φ(k+1)[1 + φ'(k+1)P(k)φ(k+1)]^-1 [y(k+1) - φ'(k+1)θ̂(k)]   (2.1.22)

For compactness, we define

L(k+1) = P(k)φ(k+1)[1 + φ'(k+1)P(k)φ(k+1)]^-1   (2.1.23)

and substituting (2.1.23) into (2.1.22) and (2.1.20) yields

θ̂(k+1) = θ̂(k) + L(k+1)[y(k+1) - φ'(k+1)θ̂(k)]   (2.1.24)

and

P(k+1) = P(k) - L(k+1)φ'(k+1)P(k)   (2.1.25)

Finally, using (2.1.23), (2.1.24) and (2.1.25) the RLS algorithm can be summarized as:

Process:

y(k) = φ'(k)θ + v(k)   (2.1.26)

φ'(k) = [y(k-1) y(k-2) ... y(k-n) u(k) ... u(k-m)]

θ = [-a1 -a2 ... -an b0 ... bm]'

v(k) = white Gaussian noise N(0, σv²)

Algorithm:

L(k+1) = P(k)φ(k+1)[1 + φ'(k+1)P(k)φ(k+1)]^-1

P(k+1) = P(k) - L(k+1)φ'(k+1)P(k)   (2.1.27)

θ̂(k+1) = θ̂(k) + L(k+1)[y(k+1) - φ'(k+1)θ̂(k)]
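The recursion (2.1.27) is only a few lines of code. The following sketch identifies a hypothetical first-order system with true θ = [-a1, b0]' = [0.5, 1.0]' in the noise-free case; the function name, test system and initial P(0) are illustrative assumptions, not from the thesis.

```python
import numpy as np

def rls_update(theta, P, phi, y_new):
    """One RLS step per (2.1.23)-(2.1.25):
    L = P phi / (1 + phi' P phi); theta += L (y - phi' theta); P -= L phi' P."""
    Pphi = P @ phi
    L = Pphi / (1.0 + phi @ Pphi)              # gain vector L(k+1)
    theta = theta + L * (y_new - phi @ theta)  # parameter update (2.1.24)
    P = P - np.outer(L, phi) @ P               # covariance update (2.1.25)
    return theta, P

# Hypothetical system y(k) = 0.5 y(k-1) + u(k), so theta = [0.5, 1.0]
rng = np.random.default_rng(1)
u = rng.standard_normal(200)
y = np.zeros(200)
for k in range(1, 200):
    y[k] = 0.5 * y[k-1] + u[k]

theta, P = np.zeros(2), 1e3 * np.eye(2)  # large P(0): low-confidence prior
for k in range(1, 200):
    phi = np.array([y[k-1], u[k]])       # phi'(k) = [y(k-1)  u(k)]
    theta, P = rls_update(theta, P, phi, y[k])
print(theta)  # converges near [0.5, 1.0] in the noise-free case
```

Note that only a scalar is inverted per step, exactly as equation (2.1.20) promises.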


2.2 The Recursive Minimum Variance (RMV) Algorithm

The RMV algorithm results from the optimization of a somewhat different cost function from that of the RLS. The RLS minimizes the error between the actual output y(k) and the estimated output ŷ(k). The RMV, in contrast, is based on the minimization of the error between the actual parameter vector θ and the estimated parameter vector θ̂. Let us state the properties of the RMV:

i) Linear estimator

θ̂(k+1) = f(θ̂(k)),  f is a linear functional

ii) Unbiased estimator

E{θ̂(k)} = E{θ(k)}

iii) Minimum variance

J_min(θ̂) = E_min{[θ(k) - θ̂(k)]'[θ(k) - θ̂(k)]}

We will use these three properties to develop our RMV algorithm. Recall the system model

y(k+1) = φ'(k+1)θ(k+1) + v(k+1)   (2.2.1)

where v(k) is a zero mean white noise sequence

E{v(k)} = 0,  E{v²(k)} = σv²   (2.2.2)


Suppose that the parameters are slowly varying with respect to our sample period, so that we can assume

θ(k+1) = θ(k)   (2.2.3)

To satisfy the first property i) we choose the model of the estimator as

θ̂(k+1) = A(k+1)θ̂(k) + L(k+1)y(k+1)   (2.2.4)

where the matrix A and the vector L are to be determined so that the three stated properties are satisfied. Substituting (2.2.1) into (2.2.4) for y(k+1) we get

θ̂(k+1) = A(k+1)θ̂(k) + L(k+1)[φ'(k+1)θ(k+1) + v(k+1)]   (2.2.5)

Taking the expectation on both sides of (2.2.5) we get

E{θ̂(k+1)} = A(k+1)E{θ̂(k)} + L(k+1)φ'(k+1)E{θ(k+1)} + L(k+1)E{v(k+1)}
           = A(k+1)E{θ̂(k)} + L(k+1)φ'(k+1)E{θ(k+1)}   (2.2.6)

Combining the unbiased property ii) and equation (2.2.6) we see that

I = A(k+1) + L(k+1)φ'(k+1)

or

A(k+1) = I - L(k+1)φ'(k+1)   (2.2.7)

where I is the identity matrix with appropriate dimension.


Substituting (2.2.7) back into (2.2.4) we get

θ̂(k+1) = [I - L(k+1)φ'(k+1)]θ̂(k) + L(k+1)y(k+1)   (2.2.8)
        = θ̂(k) + L(k+1)[y(k+1) - φ'(k+1)θ̂(k)]

It is now left to determine the vector L(k+1) such that the optimal estimate can be obtained. The remaining property to be satisfied is property iii), which states that the estimate θ̂ should minimize the MSE cost function

J(θ̂) = E{[θ(k+1) - θ̂(k+1)]'[θ(k+1) - θ̂(k+1)]}   (2.2.9)
      = E{θ̃'(k+1)θ̃(k+1)}

where θ̃ = θ - θ̂ is the parameter error vector. Defining P, the covariance matrix of the error,

P = E{θ̃θ̃'}   (2.2.10)

the minimization of the quadratic cost function (2.2.9) amounts to choosing L(k+1) such that Trace[P(k+1)] is minimized.

But let us first find a useful form for P. Notice that by using equations (2.2.3) and (2.2.8) the error vector θ̃ can be written in recursive form as

θ̃(k+1) = θ(k+1) - θ̂(k+1)   (2.2.11)
        = θ(k) - {θ̂(k) + L(k+1)[y(k+1) - φ'(k+1)θ̂(k)]}
        = θ̃(k) - L(k+1)[y(k+1) - φ'(k+1)θ̂(k)]

Using equation (2.2.11), it can be shown by algebraic manipulation that the covariance matrix P can be written in recursive form


P(k+1) = P(k) - P(k)φ(k+1)L'(k+1) - L(k+1)φ'(k+1)P(k)
         + L(k+1)[φ'(k+1)P(k)φ(k+1) + σv²]L'(k+1)   (2.2.12)

Now, performing the differentiation,

∂ Tr P(k+1)/∂L(k+1) = -2P(k)φ(k+1) + 2L(k+1)[φ'(k+1)P(k)φ(k+1) + σv²]   (2.2.13)

and solving for L(k+1) we get

L(k+1) = P(k)φ(k+1)[φ'(k+1)P(k)φ(k+1) + σv²]^-1   (2.2.14)

Substituting (2.2.14) into (2.2.12) we get the compact recursive form

P(k+1) = P(k) - L(k+1)φ'(k+1)P(k)   (2.2.15)

Finally, the RMV algorithm can be summarized by (2.2.8), (2.2.14) and (2.2.15):

L(k+1) = P(k)φ(k+1)[φ'(k+1)P(k)φ(k+1) + σv²]^-1

P(k+1) = P(k) - L(k+1)φ'(k+1)P(k)   (2.2.16)

θ̂(k+1) = θ̂(k) + L(k+1)[y(k+1) - φ'(k+1)θ̂(k)]


Comparing this algorithm with the RLS algorithm, we recognize that they have a very similar structure. More will be said about this in section 2.4.

2.3 Recursive Stochastic Gradient Algorithm (RSG)

The idea of the RSG algorithm is that one optimizes an approximation of the cost function; that is, one uses the Taylor series expansion to approximate the cost function at t_(k+1) about θ(k):

J(θ(k+1)) ≈ J(θ(k)) + [∂J(θ(k))/∂θ(k)]'[θ(k+1) - θ(k)]   (2.3.1)

The optimization is performed by maintaining the condition

J(θ(k+1)) < J(θ(k))   (2.3.2)

throughout the iterative search, and J will converge to its minimum. The condition (2.3.2) can be satisfied if at each iteration of equation (2.3.1) the following is true:

θ(k+1) = θ(k) - μ(k) ∂J(θ(k))/∂θ(k)   (2.3.3)

where μ(k) > 0 is the search step size. Substituting (2.3.3) into (2.3.1) gives

J(θ(k+1)) = J(θ(k)) - μ(k)[∂J(θ(k))/∂θ(k)]'[∂J(θ(k))/∂θ(k)]   (2.3.4)


We can see that the second term on the right hand side of (2.3.4) is always positive, and with the negative sign in front it makes

J(θ(k+1)) < J(θ(k))

which satisfies our optimization criterion. Therefore, equation (2.3.3) motivates the Stochastic Gradient Algorithm

θ̂(k+1) = θ̂(k) - μ ∂J(θ)/∂θ | evaluated at θ = θ̂(k)   (2.3.5)

We next proceed to find the gradient of J. Recall the system model

y(k+1) = φ'(k+1)θ + v(k+1)   (2.3.6)

and the estimated output is

ŷ(k+1) = φ'(k+1)θ̂(k)   (2.3.7)

Defining the error squared cost function as

J(θ̂) = (1/2)[y(k+1) - ŷ(k+1)]²   (2.3.8)
      = (1/2)e²(k+1)

and substituting (2.3.7) into (2.3.8) we get

J(θ̂) = (1/2)[y(k+1) - φ'(k+1)θ̂(k)]²   (2.3.9)

Next we solve for the gradient of J:

∂J(θ̂(k))/∂θ̂(k) = -[y(k+1) - φ'(k+1)θ̂(k)]φ(k+1)   (2.3.10)
                = -φ(k+1)[y(k+1) - φ'(k+1)θ̂(k)]

Substituting (2.3.10) into (2.3.5) we get

θ̂(k+1) = θ̂(k) + μφ(k+1)[y(k+1) - φ'(k+1)θ̂(k)]   (2.3.11)

Next we define L as

L(k+1) = μφ(k+1)   (2.3.12)

Finally, using (2.3.11) and (2.3.12) we summarize the Recursive Stochastic Gradient Algorithm:

L(k+1) = μφ(k+1)   (2.3.13)

θ̂(k+1) = θ̂(k) + L(k+1)[y(k+1) - φ'(k+1)θ̂(k)]

with 0 < μ < 1.
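Equation (2.3.13) is the familiar gradient (LMS-style) update, and it needs no covariance matrix at all. A minimal sketch, reusing the same hypothetical first-order system as before (step size, seed and names are illustrative assumptions):

```python
import numpy as np

def rsg_update(theta, phi, y_new, mu=0.05):
    """One RSG step per (2.3.13): L = mu*phi, then
    theta(k+1) = theta(k) + L*(y(k+1) - phi'(k+1) theta(k))."""
    return theta + mu * phi * (y_new - phi @ theta)

# Hypothetical system y(k) = 0.5 y(k-1) + u(k), true theta = [0.5, 1.0]
rng = np.random.default_rng(2)
u = rng.standard_normal(2000)
y = np.zeros(2000)
for k in range(1, 2000):
    y[k] = 0.5 * y[k-1] + u[k]

theta = np.zeros(2)
for k in range(1, 2000):
    theta = rsg_update(theta, np.array([y[k-1], u[k]]), y[k])
print(theta)  # approaches [0.5, 1.0], more slowly than RLS
```

The trade-off previewed here, and studied in section 3.3, is that the RSG needs far less computation per step but converges more slowly and is sensitive to the choice of μ.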


2.4. Generalized Recursive Algorithm (GRA)

It is recognized that the three algorithms RLS, RMV, annd

RSG share a same structure for the estimator 0 which has the

form

A A A

ek+1 = ek -Lk+l(yk+1 ^k-t-1 ek) (2.4.1)

where each algorithm is different from the other in the

computation of the error weight vector L. This is shown in

figure 2.2.

RLS

Figure 2.2. Generalized Recursive Algorithm block

diagram.


In hardware implementation, switching from one algorithm to another can be conveniently done by changing only the module that computes L. Thus, the Generalized Recursive Algorithm is implemented as shown in fig. 2.2 and can be stated by (2.1.27), (2.2.16) and (2.3.13) as

θ̂(k+1) = θ̂(k) + L(k+1)[y(k+1) - φ'(k+1)θ̂(k)]   (2.4.2)

RLS: L(k+1) = P(k)φ(k+1)[1 + φ'(k+1)P(k)φ(k+1)]^-1   (2.4.3)
     P(k+1) = P(k) - L(k+1)φ'(k+1)P(k)

RMV: L(k+1) = P(k)φ(k+1)[φ'(k+1)P(k)φ(k+1) + σv²]^-1   (2.4.4)
     P(k+1) = P(k) - L(k+1)φ'(k+1)P(k)

RSG: L(k+1) = μφ(k+1),  0 < μ < 1   (2.4.5)
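The GRA structure above maps naturally onto a single routine with an interchangeable gain module. The sketch below is one possible software rendering of fig. 2.2 (function name, test system, σv² and μ values are illustrative assumptions):

```python
import numpy as np

def gra_step(theta, P, phi, y_new, method="RLS", sigma_v2=0.01, mu=0.05):
    """One step of the Generalized Recursive Algorithm (2.4.2)-(2.4.5).
    The parameter update is common; only the gain module for L differs."""
    if method == "RLS":
        L = P @ phi / (1.0 + phi @ P @ phi)        # (2.4.3)
        P = P - np.outer(L, phi) @ P
    elif method == "RMV":
        L = P @ phi / (phi @ P @ phi + sigma_v2)   # (2.4.4)
        P = P - np.outer(L, phi) @ P
    elif method == "RSG":
        L = mu * phi                               # (2.4.5)
    else:
        raise ValueError(method)
    theta = theta + L * (y_new - phi @ theta)      # common update (2.4.2)
    return theta, P

# Identify a hypothetical system y(k) = 0.5 y(k-1) + u(k), theta = [0.5, 1.0],
# with each of the three gain modules in turn.
rng = np.random.default_rng(3)
u = rng.standard_normal(500)
y = np.zeros(500)
for k in range(1, 500):
    y[k] = 0.5 * y[k-1] + u[k]

results = {}
for method in ("RLS", "RMV", "RSG"):
    theta, P = np.zeros(2), 1e3 * np.eye(2)
    for k in range(1, 500):
        theta, P = gra_step(theta, P, np.array([y[k-1], u[k]]), y[k], method)
    results[method] = theta
print(results)
```

Switching algorithms is a one-argument change, the software analog of swapping the L-computing module in hardware.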


Chapter 3

ADAPTIVE EQUALIZATION

Data transmission over telephone and radio lines has been made bandwidth-efficient, with minimal error in recovery of the signal, by the use of adaptive equalization to compensate for the distortion caused by the channel. A communication network is shown in figure 3.1. In this figure, the "channel" includes the effects of the transmitter, the modulator, the channel distortion, and the demodulator. The equalizer tries to reproduce the signal at the input of the channel. Equalization has been researched and investigated for the last two decades; some of the early work done in this area was by Lucky [8] and Price [9], and recent surveys of the literature and complete bibliographies can be found in the publications of Cowan [2], Qureshi [10], Proakis [5] and Gordano [11].

In the application of the estimation algorithms RLS, RSG, and RMV developed in chapter two, the FIR and IIR equalizers and the DFE all require an adaptive algorithm to compute the filter coefficients. The adaptive scheme is shown in figure 3.2. It is the goal of this chapter to develop the Unified Adaptive Equalization Algorithm (UAEA), which encompasses all nine of the following algorithms: FIR RLS, FIR RMV, FIR RSG, IIR RLS, IIR RMV, IIR RSG, DFE RLS, DFE RMV, and DFE RSG. Also, we will show the applicability of the algorithms and evaluate the performance of the three IIR algorithms. We will study two types of channels: 1) an IIR band-pass channel and 2) a coaxial transmission line channel.


Figure 3.1. Communication network.

Figure 3.2. Adaptive equalization scheme.


3.1 Derivation of the Equalization Algorithm

The algorithms developed in chapter 2 were implicitly

based on the idea of system identification. In other words, the

desired signal y(k) can be assumed to have been generated from

an input u(k) through the system with vector 0 containing the

coefficients of the transfer function.

Process

Figure 3.3 System identification scheme.


where

u(k) : Given input

y(k) : Given output

θ : Parameter vector of the process transfer function

ŷ(k) : Estimated output

θ̂ : Estimated transfer function parameter vector

v(k) : White Gaussian noise N(0, σv²)

and

y(k) = φ'(k)θ + v(k)   (3.1.1)

φ'(k) = [y(k-1) y(k-2) ... y(k-n) u(k) u(k-1) ... u(k-m)]

θ = [-a1 -a2 ... -an b0 b1 ... bm]'

and the algorithm that optimizes the error e(k) to produce ŷ(k) has the form

θ̂(k+1) = θ̂(k) + L(k+1)[y(k+1) - ŷ(k+1)]   (3.1.2)
        = θ̂(k) + L(k+1)[y(k+1) - φ'(k+1)θ̂(k)]

Now, consider the scheme with the process being the inverse model of the channel.


Figure 3.4. Estimation of inverse channel scheme.

Here, the process represents the inverse of the channel, which is implicit. In this case, the algorithm tries to produce the estimate of u(k), and its input is y(k), the reverse of the case of system identification. If there exists an inverse model, then it can be described by the process

u(k) = φ'(k)θ + v(k)   (3.1.3)

φ'(k) = [u(k-1) u(k-2) ... u(k-n) y(k) y(k-1) ... y(k-n)]

θ = [-a1 -a2 ... -an b0 b1 ... bn]'

v(k) = white Gaussian noise N(0, σv²)

So one can see that the difference is only in the vector φ, where the input and output samples are interchanged. Also, the orders of the numerator and denominator are to be selected large

enough to model the inverse of the channel. Similarly, in modifying the algorithm so that it estimates u(k) instead of y(k), we perform the similar modification of interchanging the input and output samples. Hence, we get the form

θ̂(k+1) = θ̂(k) + L(k+1)[u(k+1) - û(k+1)]   (3.1.4)
        = θ̂(k) + L(k+1)[u(k+1) - φ'(k+1)θ̂(k)]

Notice that the algorithm has a general form; the computation of the error weight vector L(k+1) depends on which algorithm we choose: the RLS, RMV or the RSG. Other methods used in the literature for deriving the FIR RLS [11] and the IIR gradient [5] adaptive equalization algorithms were based on different approaches but yielded equivalent results.

In summary, the AE algorithm can be summarized by (3.1.3) and (3.1.4) as follows:

φ'(k) = [u(k-1) u(k-2) ... u(k-n) y(k) y(k-1) ... y(k-n)]   (3.1.5)

θ̂ = [-a1 -a2 ... -an b0 b1 ... bn]'

û(k) = φ'(k)θ̂(k)

θ̂(k+1) = θ̂(k) + L(k+1)[u(k+1) - φ'(k+1)θ̂(k)]

L(k+1) = P(k)φ(k+1)[1 + φ'(k+1)P(k)φ(k+1)]^-1

P(k+1) = P(k) - L(k+1)φ'(k+1)P(k)
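The summary (3.1.5) can be sketched directly in code. The fragment below trains an IIR RLS equalizer on a hypothetical two-tap channel y(k) = u(k) + 0.5 u(k-1), whose exact inverse is the IIR model u(k) = -0.5 u(k-1) + y(k), i.e. θ = [-0.5, 1.0, 0.0]'. It assumes, as in the training mode described in section 3.3, that the true transmitted symbols u(k) are available; all names and values are illustrative.

```python
import numpy as np

def train_iir_equalizer(u, y, n=1):
    """RLS training of the IIR equalizer (3.1.5):
    phi'(k) = [u(k-1)..u(k-n)  y(k)..y(k-n)], u_hat(k) = phi'(k) theta_hat(k)."""
    d = 2 * n + 1
    theta, P = np.zeros(d), 1e3 * np.eye(d)
    for k in range(n, len(u)):
        phi = np.concatenate([u[k-n:k][::-1],     # u(k-1) ... u(k-n)
                              y[k-n:k+1][::-1]])  # y(k) ... y(k-n)
        L = P @ phi / (1.0 + phi @ P @ phi)
        theta = theta + L * (u[k] - phi @ theta)  # error on the known symbol u(k)
        P = P - np.outer(L, phi) @ P
    return theta

# Hypothetical channel y(k) = u(k) + 0.5 u(k-1); its inverse is IIR:
# u(k) = -0.5 u(k-1) + 1.0 y(k) + 0.0 y(k-1)
rng = np.random.default_rng(4)
u = rng.choice([-1.0, 1.0], size=300)            # probe (training) sequence
y = u + 0.5 * np.concatenate([[0.0], u[:-1]])
theta = train_iir_equalizer(u, y, n=1)
print(theta)  # close to [-0.5, 1.0, 0.0]
```

Note the interchange of roles relative to system identification: the received signal y(k) drives the regressor while the known transmitted symbol u(k) plays the part of the desired output.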

Having developed the equalization algorithm, we next investigate how it can be applied to various equalizer structures.


3.2 Generalized & Unified Adaptive Equalization

Algorithms (GAEA) & (UAEA)

In the previous section, the AE algorithm took the structure of an IIR equalizer, shown in figure 3.5. This can be extended to accommodate other equalizer structures such as the FIR and DFE. Although papers have been written on the performance of each equalizer and their advantages and disadvantages [2,6], no general algorithm has been introduced to cover all three kinds of equalizer structures. In this section, we will point out the similarities of the three equalizers and introduce the Generalized Adaptive Equalization Algorithm (GAEA). In figures 3.5, 3.6 and 3.7 the structures of the equalizers are shown.

Figure 3.5. IIR equalizer structure.


Figure 3.6. FIR equalizer structure.


Figure 3.7. DFE equalizer structure.

Recall the general IIR AE algorithm in equation (3.1.5):

θ̂(k+1) = θ̂(k) + L(k+1)[u(k+1) - φ'(k+1)θ̂(k)]   (3.2.1)

φ'(k) = [u(k-1) u(k-2) ... u(k-n) y(k) y(k-1) ... y(k-n)]

θ̂ = [-a1 -a2 ... -an b0 b1 ... bn]'

Examining the FIR equalizer in figure 3.6, we see that there is no feedback structure; therefore, the AE algorithm can be modified appropriately through the vector φ as

φ(k) = [0 ... 0 y(k) ... y(k-n)]'   (3.2.2)

and by replacing the vector φ in (3.2.1) with (3.2.2) we get the algorithm for the FIR equalizer.

Recall that in digital communication, the message sent u(k) belongs to a set of possible messages {u(k)}. The idea of the DFE is that it matches the output of the estimator û(k) to the closest possible element in the set {u(k)}. There are various criteria on how a decision is made [ ] to minimize the error, but that is not the issue of this section. Once a decision has been made, giving the output of the decision device u*(k), we can see clearly that the AE algorithm can be modified by changing the vector φ as

φ'(k) = [u*(k-1) u*(k-2) ... u*(k-n) y(k) ... y(k-n)]   (3.2.3)

Again, replacing the vector φ in (3.2.1) with (3.2.3) we get the algorithm for the DFE equalizer. Thus, we see that the AE algorithms for all three types of equalizers, IIR, FIR and DFE, share a common form and differ only in the vector φ. Therefore the Generalized Adaptive Equalization Algorithm (GAEA) can be formed using (3.2.1), (3.2.2) and (3.2.3):

θ̂(k+1) = θ̂(k) + L(k+1)[u(k+1) - φ'(k+1)θ̂(k)]   (3.2.4)

θ̂(k) = [-a1(k) -a2(k) ... -an(k) b0(k) ... bn(k)]'


φ'(k) = [u(k-1) u(k-2) ... u(k-n) y(k) y(k-1) ... y(k-n)]   IIR

φ'(k) = [0 ... 0 y(k) ... y(k-n)]   FIR

φ'(k) = [u*(k-1) u*(k-2) ... u*(k-n) y(k) ... y(k-n)]   DFE

where the vector L is determined by which recursive algorithm (RLS, RMV, RSG) is selected. If we combine the GRA and the GAEA we can form a Unified Adaptive Equalization Algorithm (UAEA) in which any of the following equalizers can be implemented:

1) IIR RLS

2) IIR RMV

3) IIR RSG

4) FIR RLS

5) FIR RMV

6) FIR RSG

7) DFE RLS

8) DFE RMV

9) DFE RSG

The block diagram of the UAEA in fig. 3.8 shows that any one of the nine equalizers can be implemented by switching to the appropriate module.


Figure 3.8. Block diagram of UAEA.

And the UAEA can be summarized as

θ̂(k+1) = θ̂(k) + L(k+1)[u(k+1) - φ'(k+1)θ̂(k)]   (3.2.5)

θ̂(k) = [-a1(k) -a2(k) ... -an(k) b0(k) ... bn(k)]'

IIR: φ'(k) = [u(k-1) u(k-2) ... u(k-n) y(k) y(k-1) ... y(k-n)]   (3.2.6)

FIR: φ'(k) = [0 ... 0 y(k) ... y(k-n)]   (3.2.7)

DFE: φ'(k) = [u*(k-1) u*(k-2) ... u*(k-n) y(k) ... y(k-n)]   (3.2.8)

RLS: L(k+1) = P(k)φ(k+1)[1 + φ'(k+1)P(k)φ(k+1)]^-1   (3.2.9)
     P(k+1) = P(k) - L(k+1)φ'(k+1)P(k)

RMV: L(k+1) = P(k)φ(k+1)[φ'(k+1)P(k)φ(k+1) + σv²]^-1   (3.2.10)
     P(k+1) = P(k) - L(k+1)φ'(k+1)P(k)

RSG: L(k+1) = μφ(k+1),  0 < μ < 1   (3.2.11)
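Since the three equalizer structures (3.2.6)-(3.2.8) differ only in how the regressor φ(k) is assembled from the same data streams, the structure switch of the UAEA reduces to a small dispatch routine. A minimal sketch (the function name and window-argument conventions are hypothetical, not from the thesis):

```python
import numpy as np

def build_phi(structure, u_past, y_window, u_dec_past):
    """Regressor phi(k) for the UAEA, per (3.2.6)-(3.2.8).
    u_past     : [u(k-1), ..., u(k-n)]   training symbols (IIR mode)
    y_window   : [y(k), ..., y(k-n)]     received samples
    u_dec_past : [u*(k-1), ..., u*(k-n)] decision-device outputs (DFE mode)"""
    if structure == "IIR":
        return np.concatenate([u_past, y_window])              # (3.2.6)
    if structure == "FIR":
        return np.concatenate([np.zeros(len(u_past)), y_window])  # (3.2.7)
    if structure == "DFE":
        return np.concatenate([u_dec_past, y_window])          # (3.2.8)
    raise ValueError(structure)

u_past = np.array([1.0, -1.0])
y_win = np.array([0.8, -0.9, 1.1])
u_dec = np.array([1.0, -1.0])
print(build_phi("FIR", u_past, y_win, u_dec))  # [0. 0. 0.8 -0.9 1.1]
```

Pairing this φ dispatch with a gain module chosen from (3.2.9)-(3.2.11) yields any of the nine equalizers in the list above.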

In the next sections, we will apply the IIR equalizers to two channels: an IIR band-pass channel and a coaxial line.


3.3 Simulation and Performance Study

In this section, we will apply the IIR RLS, IIR RMV and IIR RSG equalization algorithms to two types of channels: band-pass

and coaxial line. The performance study will involve examining

the following factors and outputs:

- Convergence rate

- Mean Square Error (MSE)

- Stability

- Computation

- Noise immunity

- Time domain signals

- Frequency responses of the channel, the equalizer and the composite

- Eye diagrams of the transmitted, received and equalized signals

- Noisy and noise-free environments.

Before we present the simulations, let us first describe the routines and assumptions involved in the equalization process. Consider the system model shown in Figure 3.9.


Figure 3.9. Communication network with adaptive equalization.

The cascade of the channel and the AE is to represent a reliable communication channel. To do this, during the initial training period the transmitted data sequence u(k) is generated by probe signal generators at the transmitter side and the receiver side in synchronism. During this period, the AE searches for the optimum filter coefficients to minimize the error e(k). Once a converged steady state response of the algorithm has been reached, the training period is completed and the adaptation stops. Next, the steady state filter coefficients θ̂ are used to perform the actual equalization of a random input sequence. At this time, the input to the channel comes from the data source, and the output of the equalizer is open-loop detected. The assumption is that the channel characteristics have not changed since the training. Often, a time-variant channel has time variations that are much slower than the signaling interval, and therefore it

44

can esentially be considered as time-invariant channel. In the

next two sections, we will apply our three algorithms to the

band-pass channel and the coaxial channel. The simulation

results will be uses to study the performance of the

algorithms.
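The train-then-freeze procedure just described can be outlined in code. This is an illustrative sketch only, not the thesis simulation: it uses an FIR RLS equalizer, and the function names and the memoryless test channel are assumptions.

```python
import numpy as np

def train_then_equalize(channel, probe, data, order=3):
    """Phase 1: adapt on the known probe sequence (training period).
    Phase 2: freeze the coefficients and equalize random data open loop.
    `channel` is any callable mapping an input sequence to received samples."""
    def rls_train(u, y):
        theta = np.zeros(order)
        P = 1e4 * np.eye(order)
        phi = np.zeros(order)
        for k in range(len(u)):
            # FIR regressor of present and past received samples
            phi = np.concatenate(([y[k]], phi[:-1]))
            L = P @ phi / (1.0 + phi @ P @ phi)
            P = P - np.outer(L, phi @ P)
            theta = theta + L * (u[k] - phi @ theta)
        return theta

    y_train = channel(probe)
    theta = rls_train(probe, y_train)        # training period: adaptation on
    y_data = channel(data)                   # afterwards: adaptation stopped
    uhat = np.zeros(len(data))
    phi = np.zeros(order)
    for k in range(len(y_data)):
        phi = np.concatenate(([y_data[k]], phi[:-1]))
        uhat[k] = phi @ theta                # frozen steady state coefficients
    return uhat
```

The key assumption stated above is visible in the code: the same `channel` is applied in both phases, i.e. the channel is effectively time-invariant between training and use.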

3.3.1 IIR Bandpass Channel

A communication channel usually has a bandpass

characteristic, and if it is not equalized, the bandwidth of the

transmitted signal will be greatly limited. Therefore, an

effective equalizer cascaded with the channel should produce a

low pass frequency response. Consider the simulation scheme

shown in Figure 3.9 with the channel represented by H(z) as

        .7z² - .4z + .1
H(z) = -----------------                              (3.3.1.1)
         z² - .1z + .8

and the output noise v(k) with statistics

E{v(k)} = 0                                           (3.3.1.2)
E{v²(k)} = σv

In order to model the inverse of the channel poles and zeros, the order

of the equalizer has to be at least equal to the number of poles of the

channel. In this example, the order of the channel is two; hence, the


equalizer will have the form:

           a1 + a2z + a3z²
H⁻¹(z) = ------------------                           (3.3.1.3)
           z² + b1z + b2

With the normalized sampling period T = 1, the frequency response

of the channel is given in figure 3.10.
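The curves of figures 3.10 and 3.13 can be reproduced numerically. The sketch below (not from the thesis) evaluates H(z) of (3.3.1.1) and its exact inverse of the form (3.3.1.3) on the unit circle; the helper name `freqz` is an assumption, and the ideal inverse is what the adaptive algorithms only approximate.

```python
import numpy as np

# Channel H(z) of (3.3.1.1) and its exact inverse evaluated on the unit
# circle, z = exp(jw), with normalized sampling period T = 1.
num = np.array([0.7, -0.4, 0.1])     # .7z^2 - .4z + .1
den = np.array([1.0, -0.1, 0.8])     # z^2 - .1z + .8

def freqz(b, a, w):
    """Frequency response of b(z)/a(z) at angular frequencies w."""
    z = np.exp(1j * w)
    return np.polyval(b, z) / np.polyval(a, z)

w = np.linspace(1e-3, np.pi, 256)
H = freqz(num, den, w)               # channel response (figure 3.10)
Heq = freqz(den, num, w)             # ideal inverse equalizer
cascade = H * Heq                    # should be flat with unity gain
```

Since the zeros of H(z) lie inside the unit circle, the inverse is stable, and the cascade magnitude and phase are flat, which is the behavior figure 3.13 shows the adaptive equalizer approaching.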

Figure 3.10a. Magnitude response of channel.

Figure 3.10b. Phase response of channel.

This channel has a nonlinear amplitude suppression and distortion

at frequencies below the bandpass frequency. Also, notice the

phase delay associated with the low frequency band. Suppose we

would like to design a reliable communication network that

operates in the bandpass region and also in the low frequency

band. Shown below in figure 3.11 are the transmitted signal u(k)

and the distorted received signal y(k).

Figure 3.11a. Transmitted signal u(k).

Figure 3.11b. Received signal y(k).

The received signal y(k) exhibits the effects of channel distortion

and environment noise. Applying IIR RMV adaptive equalization to

this communication network, we get the equalized signal û(k)

shown below in figure 3.12.

Figure 3.12. Equalized output.


Furthermore, it is expected that the frequency response of the

cascaded channel and equalizer should have improved. Indeed it has;

figure 3.13 shows the cascade frequency response of the equalized

channel.

Figure 3.13a. Equalized channel magnitude response.

Figure 3.13b. Equalized channel phase response.


Figure 3.14b. Eye diagram of received signal y(k).

Figure 3.14c. Eye diagram of equalized signal û(k).


Again, one can see the effect of the channel and noise, which

resulted in a very distorted eye diagram of the received signal.

In practice, such a condition is absolutely unacceptable. In figure

3.14c, the equalized signal shows much improvement in its eye

diagram. Figure 3.15 shows the evolution and convergence of

the equalizer filter coefficients during the training period.

Figure 3.15. Equalizer filter coefficients.

After the training period is over, the equalizer's steady state

filter coefficients will be used to equalize random input data.

We next study how the equalizer performs open loop. The

signals recorded through simulation are shown below.


Figure 3.16a. Random transmitted signal u(k).

Figure 3.16b. Received signal y(k).


Figure 3.16c. Equalized signal û(k).

As shown by the figures, the equalizer indeed retains its

ability to equalize after the training period.

The above equalization was performed by the IIR RMV

algorithm. We next apply the RLS and RSG to the same problem and

compare the performances. The time and frequency domain

signals do not exhibit significant noticeable differences.

Therefore, we will examine the performances in terms of

convergence rate, stability, mean square error (MSE), noise

immunity, and computation complexity. The MSE responses

of the three algorithms applied to the band-pass

channel are shown in figure 3.17. Figure 3.18 shows how

the MSE responses of the three algorithms degrade when

significant noise is added.


Figure 3.17. MSE responses with σv = .01.

Figure 3.18. MSE responses with σv = .03.

Based on the theoretical and simulation results we

summarize the performance analysis of the algorithms next.


Stability: In simulation, all three algorithms were found to be

stable and converged in every simulation run under reasonable

noise environments. In theory, very few results are

available with which to predict the MSE convergence of the

algorithms.

Convergence Rate: As expected, due to its simplicity, the

RSG algorithm converged the slowest. The RMV converges

faster than the RLS because the input to the equalizer is

contaminated with the noise v(k), which the RMV compensates

for by weighting the performance index with σv⁻¹.

Mean Square Error: In a noisy environment, fig. 3.18 shows that the

steady state MSE of all three algorithms is almost the

same. However, in a low noise environment, the RLS and RMV have

lower MSE than the RSG. This indicates that the RLS and RMV

degrade more than the RSG when noise is

present. See noise immunity.

Computation: The RSG requires the least amount of

computation. The RLS and RMV require about the same number of

operations; however, the RMV requires that the variance σv of the

output noise v(k) be known.

Noise Immunity: Shown below are the MSE responses of each

algorithm with and without noise.


Figure 3.19a. RMV MSE responses (with noise, σv = .04, and without noise).

Figure 3.19b. RLS MSE responses.


Figure 3.19c. RSG MSE responses.

From fig. 3.19, it appears that the RMV has the most degradation

when the noise is present; hence, it is least immune to noise.

The RLS shows a little more immunity to noise than the RMV. The

most immune algorithm, as indicated by fig. 3.19c, is the RSG.

The performance factors of the algorithms are categorized in

Table 3.1:


      Convergence rate   MSE        Multiplies   Additions   Stability   Noise immunity
RMV   Fastest            smallest   10n² + 4n    8n² + 8n    Stable      Least
RLS   fast               smaller    10n² + 4n    8n² + 8n    Stable      Average
RSG   slow               small      6n           4n          Stable      Most

Table 3.1. Algorithms performance summary.


3.3.2. Coaxial Transmission Line.

In the previous section we showed the the performance of

the adaptive equalizer when applied to a hyperthetical

band-pass channel. The channel was represented by a

polynomial transfer function. In real life application, the

communication channels are not usually representable by a

polynomial function, therefore making the estimation of the

inverse model more difficult. Some of such channels are the

Coaxial transmission line [12], Telephone channel [11],

Multipath Radio channel [11], and the Troposcatter channel [11].

In this section, we investigate how the RLS RMV equalizer

performs with the Coaxial line channel.

The coaxial cable can be modeled based on the equivalent

circuit parameters: resistance, inductance, capacitance, and

conductance. These parameters contribute to the frequency

characteristics of the cable. A complete derivation of the cable

transfer function can be found in Ross [12]; we will summarize

the results here.

A short length Δx of the coaxial cable can be modeled by the

equivalent circuit shown in fig. 3.20.

Figure 3.20. Equivalent circuit of coaxial line.


where the series resistance and inductance, and parallel

capacitance and conductance are modeled by the lumped

parameters R,L,G, and C. In practice, these parameters may vary

with frequency. From this circuit, it can be shown that the

transfer function has the form

H(ω) = Vout(ω)/Vin(ω)                                 (3.3.2.1)
     = exp(-αx) exp(-jβx)
     = exp(-γx)

where x is the length of the cable and

γ = [(R + jωL)(G + jωC)]^(1/2) = α + jβ               (3.3.2.2)

is the propagation constant. To get the explicit form of the

transfer function, we substitute (3.3.2.2) into (3.3.2.1) and get

H(ω) = exp{-x[(R + jωL)(G + jωC)]^(1/2)}              (3.3.2.3)
With this transfer function, once the cable parameters R, L, C, G

and the length x are determined, we can sample the function

H(ω) at a rate fs and truncate at the end frequency fmax. The

effect is that we have approximated this analog channel H(ω) by a

discrete channel H(z). Then, taking the inverse Fourier

transform of the sampled channel will give us the discrete

impulse response h(k) of the channel. After the impulse

response is obtained, we can convolve it with the input signal

u(k) to get the output y(k). This is part of the simulation task

when generating the output of the channel. To proceed with the

simulation, we use the following set of values of the

parameters for the copper conductor coaxial line [13]
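The sample-IFFT-convolve procedure just described can be sketched as follows. The R, L, G, C values and grid below are taken from the appendix simulation listing, not from the parameter table of [13], and the variable names are illustrative.

```python
import numpy as np

# Sample H(w) = exp(-x*gamma(w)) of (3.3.2.3) on a frequency grid,
# inverse-FFT to get a discrete impulse response h(k), then convolve
# with an input sequence to get the channel output.
x = 30.0                                   # line length (km, from the appendix)
R, Lc, G, C = 51.8, 2.357e-7, 0.93e-6, 9.428e-11
df, fmax = 5e3, 1e6                        # frequency step and truncation
f = np.arange(df, fmax + df, df)
w = 2 * np.pi * f
gamma = np.sqrt((R + 1j * w * Lc) * (G + 1j * w * C))   # (3.3.2.2)
H = np.exp(-x * gamma)                     # sampled channel H(w), (3.3.2.3)
h = np.real(np.fft.ifft(H))                # discrete impulse response h(k)

def channel_output(u):
    """y(k) = u(k) convolved with h(k), truncated to the input length."""
    return np.convolve(u, h)[: len(u)]
```

Keeping only the real part of the IFFT mirrors the appendix routine; a more careful simulation would build a conjugate-symmetric spectrum before inverting.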


Figure 3.21b. Coaxial channel phase response.

And the corresponding time impulse response is obtained and

shown in figure 3.21c.

Figure 3.21c. Coaxial channel impulse response.


As one can see, with this channel characteristic, there is not

as much phase and amplitude distortion as compared to the

band-pass channel used in section 3.3.1. Hence, using the binary

transmitted signal for training will not be effective and,

therefore, we select a more phase and amplitude sensitive

signal. Such a signal has the following form

u(k) = Σn cos(2πnΔf·t + θn)                           (3.3.2.5)

where

n = 0..10
Δf = sampling frequency = 10,000 Hz
-π < θn < π, a random phase

This signal, shown in figure 3.22, represents a random low pass

signal that covers the frequency spectrum from f = 0 to f = 10Δf.

Figure 3.22. Transmitted signal u(k) and received signal y(k).


At the receiving end of the channel, the distorted received

signal exhibits both phase and amplitude distortion. This signal

is shown together with u(k) for easier comparison in figure

3.22.
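A probe of the form (3.3.2.5) can be generated as below. This is a sketch only; the sample rate and duration are illustrative assumptions, not the grid used in the thesis simulation.

```python
import numpy as np

# Sum of 11 harmonics of delta_f with random phases in (-pi, pi),
# covering f = 0 to 10*delta_f as in (3.3.2.5).
rng = np.random.default_rng(0)
delta_f = 10_000.0                       # Hz, harmonic spacing
fs = 20 * 10 * delta_f                   # sample well above the 10th harmonic
t = np.arange(2000) / fs                 # 1 ms of signal (illustrative length)
theta = rng.uniform(-np.pi, np.pi, 11)   # random phases theta_n
u = sum(np.cos(2 * np.pi * n * delta_f * t + theta[n]) for n in range(11))
```

Because every harmonic carries a distinct random phase, any phase or amplitude error introduced by the channel shows up directly in the received waveform, which is why this probe is more sensitive for training than a binary sequence.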

Next, we simulate the IIR RLS adaptive equalizer to see how

well it can equalize the distortion. Since the channel has a

somewhat complex model, we select the order of the equalizer

to be five. In figure 3.23, the output of the equalizer is shown.

Figure 3.23. Equalized signal û(k).

Again, the equalized signal is shown with the transmitted

signal for comparison. The plot in fig. 3.23 shows the IIR

RLS's ability to perform satisfactorily with a real

communication channel. Further justification can be seen in

the MSE response of the algorithm shown in figure 3.24.


Figure 3.24. MSE of RLS algorithm.

In this chapter, we have shown how an adaptive

equalization algorithm is derived based on the general

estimation algorithm established in chapter 2. We also

developed the Generalized Adaptive Equalization Algorithm

(GAEA) and the Unified Adaptive Equalization Algorithm

(UAEA). A performance study of the three IIR algorithms was

conducted using the computer software MATLAB. Lastly, the

IIR RLS AE was applied to a real communication channel and

was found to have maintained its ability to equalize.

Chapter 4

Conclusion

The accomplishment of this thesis was developing the

Unified Adaptive Equalization Algorithm (UAEA), which is stated

in (3.2.5)-(3.2.11) and whose block diagram is shown in figure 4.1.

The UAEA can implement any one of the three filter structures,

IIR, FIR, and DFE, using any one of the three adaptation

algorithms: RLS, RMV, and RSG. It is the composition of the

Generalized Recursive Algorithm (GRA) (2.4.2)-(2.4.5) and the

Generalized Adaptive Equalization Algorithm (GAEA) (3.2.4),

which were also developed in this thesis.

Figure 4.1. Block diagram of UAEA.


In real life implementations of adaptive equalizers, the

initial task is to select the type of equalizer filter and the

coefficient update algorithm. Each type of filter and

algorithm offers different performance, and it is often

difficult to determine which is most suitable. The

UAEA offers the versatility of interchanging the various AE

schemes with minimal modification in hardware and software.

Therefore, it allows the user to build and test nine schemes of

adaptive equalization conveniently.

A performance study of the three IIR equalization

algorithms was conducted by computer simulation. The

results are summarized in the table below.

      Convergence rate   MSE        Multiplies   Additions   Stability   Noise immunity
RMV   Fastest            smallest   10n² + 4n    8n² + 8n    Stable      Least
RLS   fast               smaller    10n² + 4n    8n² + 8n    Stable      Average
RSG   slow               small      6n           4n          Stable      Most

Table 3.1. Algorithms performance summary.


The last part of the thesis was to apply the IIR RLS

adaptive equalizer to a real communication channel. The

channel used was the coaxial transmission line, which has the

frequency response shown in figure 3.21. As shown by the

results in figures 3.23 and 3.24, the equalizer

maintained its ability to equalize the distortion.


BIBLIOGRAPHY

1. L. Ljung and T. Soderstrom, Theory and Practice of Recursive

Identification. The MIT Press, Cambridge, (1983).

2. B. Mulgrew and C. F. N. Cowan, Adaptive Filters and Equalisers.

Kluwer Academic Publishers, Mass., (1988).

3. R. E. Lawrence and H. Kaufman, "The Kalman Filter for the

Equalisation of Digital Communications Channel," IEEE Trans.

Communications Technology. Vol. 19, (6), pp. 1137-1141, (Dec.

1971).

4. L. Ljung, "Asymptotic Behavior of the Extended Kalman Filter as a

Parameter Estimator for Linear Systems," IEEE Trans. Automatic

Control. Vol. AC-24, (1), pp. 36-58, (Feb. 1979).

5. J. G. Proakis, Digital Communications. McGraw Hill, New York,

(1983).

6. J. J. O'Reilly and A. M. de Oliveira Duarte, "Error Propagation in

Decision Feedback Receivers," IEEE Proceedings part F, Vol. 132,

(7), pp. 561-566, (Dec. 1985).

7. B. Widrow and S. D. Stearns, Adaptive Signal Processing. Prentice

Hall Inc., New Jersey, (1985).


8. R. W. Lucky and H. R. Rudin, "A Survey of the Communication Theory

Literature: 1968-1973," IEEE Trans. Inform. Theory. Vol. IT-19, pp.

725-739, (Nov. 1973).

9. R. Price, "Nonlinear Feedback Equalized PAM vs. Capacity for Noisy

Filter Channels," Proceedings 1972 IEEE Int. Conf.

Communications, pp. 22-12 to 22-17, (June 1972).

10. S. U. H. Qureshi, "Adaptive Equalisation," Proceedings IEEE Vol. 73,

(9), pp. 1349-1387, (Sept. 1985).

11. A. A. Giordano and F. M. Hsu, Least Square Estimation with

Applications to Digital Signal Processing. John Wiley & Sons, New

York, (1985).

12. D. A. Ross, "Distributed communications Architecture," University

of Colorado class report. (1983).

13. S. Ramo and J. Whinnery, Fields and Waves in Modern Radio. John

Wiley & Sons, New York, (1962).

APPENDIX: SIMULATION SOFTWARE & PROGRAMS

The software used for simulation in this thesis is MATLAB. Given

here are the subroutines and programs written to perform the

adaptive equalization simulation. The MATLAB version used was the

PC version; a two-hundred-iteration simulation took about four

minutes on a 386 20 MHz system.

%=============== ADAPTIVE EQUALIZATION ===============
%clear
%io
%clc
clear
load data
clg
disp('>')
M = size(Input);               % number of iterations
N = 3;                         % Select algorithm: 1 RSG, 2 RMV, 3 RLS
disp(M)
INIT                           % Initialize variables & parameters
for k = 1:M(2)
    INPUTS                     % Generate input sequence
    OUTPUT                     % Generate plant output sequence
    PARA                       % Update regressor vectors
    if k > 1
        if N == 1, RSG, end    % Recursive Stochastic Gradient
        if N == 2, RMV, end    % Recursive Minimum Variance
        if N == 3, RLS, end    % Recursive Least Square
    end
    ERROR                      % Compute estimation errors
    SAVEDATA                   % Save variables
    %DISPLAY                   % Display data
end
%PLOTTER

%=========== INIT: Initialize Variables & Parameters ===========
%=== ARMA model:
% y(k) = -a1*y(k-1)-...-an*y(k-n) + b0*u(k)+...+bm*u(k-m) + v(k)
% THETA = [-a1 -a2 ... -an b0 b1 ... bm]';
% PHI = [y(k-1) y(k-2) ... y(k-n) u(k) u(k-1) ... u(k-m)]';
%=== I/O model: Y(Z)[Z^n+a1*Z^(n-1)+...+an] = U(Z)[b0*Z^m+b1*Z^(m-1)+...+bm]
n = 5;                            % Order of denominator
m = 5;                            % Order of numerator
y1 = 0;
y2 = 0;
u1 = 0;
u2 = 0;
THETA = [.1 -.8 .35 -.2 .05]';    % enter plant parameters here
PHI1 = zeros(n+m+1,1);
PHI2 = zeros(n+n+1,1);
PHIB1 = zeros(n+n+1,1);
PHIB2 = zeros(n+n+1,1);
%===== Generalized Algorithm:
% dim(THETAEST) = (n+n+1)x1
% dim(L) = (n+n+1)x1
% dim(P) = (n+n+1)x(n+n+1)
% dim(mu) = 1x1
THETAEST = zeros(n+n+1,1);
L = zeros(n+n+1,1);
P = 1*eye(n+n+1,n+n+1);
%===== Error variables
% dim(uest) = 1x1, dim(es) = 1x1, dim(mse) = 1x1
uest1 = 0;
uest2 = 0;
es1 = 0;
es2 = 0;
mse1 = 0;
mse2 = 0;


%==================== IO: Input/Output ===============
% This program computes the output of the coaxial
% transmission line by means of an IFFT and convolution.
clear
% Initializing the parameters of the line
x = 30;                  % Km
l = 2.357e-7;
G = .93e-6;
c = 9.428e-11;
r = 51.8;
j = sqrt(-1);
format short e
df = 5000;               % sampling frequency: small => T = small
fend = 1000000;          % maximum frequency
Nf = fend/df;
% Loop to sample the channel frequency function exp(-x*gamma)
for f = df:df:fend
    w = 2*pi*f;
    indext = f/df;
    gama = sqrt((r+j*w*l)*(G+j*w*c));
    Gama(indext) = gama;
    temp(indext) = exp(-x*gama);
    mag(indext) = abs(temp(indext));
    phase(indext) = angle(temp(indext))*180/pi;
    % freq(indext) = f;
    disp('f r gama temp mag phase')
    disp([w r gama temp(indext) mag(indext) phase(indext)]')
    pause
end
freq = df*[1:Nf];
TIMESIG

%==================== TIMESIG ====================
% Compute the impulse response of the channel.
% Generate two inputs to the channel.
% Compute channel output by means of convolution.
y = real(ifft(temp));        % Channel impulse response
dim = size(y);
Nt = dim(2);
fsquare1 = 3000;             % frequency of first input signal
fsquare2 = 100000;           % frequency of second input signal
dt = 1/(df*Nt);              % The computed sampling period
T1 = 1/(fsquare1*dt);
T2 = 1/(fsquare2*dt);
% Loop to generate input signals
rand('normal')
harmo = 10;
for i = 1:harmo
    theta(i) = rand(1)*pi;
end
for k = 1:dim(2)
    time(k) = k*dt;
    Input(k) = 0;
    for i = 1:harmo
        Input(k) = Input(k) + cos(2*pi*i*df*time(k) + theta(i));
    end
    T = T2;
    square2;
    Input2(k) = u2;
end
% Truncate the impulse response
% for i = round(dim(2)*3/4):dim(2)
%     y(i) = 0;
% end
% Computing output signals by convolution
Output = conv(Input,y);
Output2 = conv(Input2,y);
IOPLOT                       % Plot results
save data freq time Input Output Input2 Output2

%==================== IOPLOT ====================
% Plot the time and frequency signals of the channel
clg
subplot(211)
%plot(f,R)
%grid
%title('Channel resistance: R(f)')
%pause
plot(freq,mag);
grid
title('Coaxial Channel freq. response')
xlabel('Frequency')
plot(freq,phase)
grid
title('Coaxial Channel phase response')
xlabel('Frequency')
delete cfl.met
meta cfl
pause
plot(time,y)
grid
title('Coaxial Channel impulse response')
xlabel('Time k')
delete ct3.met
meta ct3

%============ INPUTS: Generate Input Sequence ==============
% Input u(k) = white noise with moment n(meanu,sigu)
udelay1 = uest1;
udelay2 = uest2;
u1 = Input(k);               % test signal
u2 = Input2(k);              % information signal


%=============== OUTPUT: Generate Output of Plant ==================
%
% v(k) = white noise sequence with moment n(meanv,sigv)
sigv = .005;
meanv = 0;
rand('normal');                       % computing v(k+1)
v = rand(1,1)*sqrt(sigv) + meanv;
y1 = Output(k) + v;                   % computing y1(k+1)
y2 = Output2(k) + v;                  % computing y2(k+1)

%=============== PARA: Parameterization ==================
%
for i = (n+n):-1:1                    % updating PHI(k+1)
    PHIB1(i+1) = PHIB1(i);
    PHIB2(i+1) = PHIB2(i);
end
PHIB1(1) = udelay1;
PHIB1(n+1) = y1;
PHIB2(1) = udelay2;
PHIB2(n+1) = y2;


%=========== RSG (Recursive Stochastic Gradient) ==================
%
% J(THETA) = [u1(k)-uest1(k)]^2
% mu(k+1) = alpha/(k+1), alpha > 0
% L(k+1) = mu(k+1)*PHIB1(k+1)
% THETAEST(k+1) = THETAEST(k) + L(k+1)*[u1(k+1)-PHIB1'(k+1)*THETAEST(k)]
mu = .1;                              % updating mu(k+1)
L = mu*PHIB1;                         % computing L(k+1)
THETAEST = THETAEST + L*(u1 - (PHIB1'*THETAEST));

%============ RMV (Recursive Minimum Variance) ==================
%
% J(THETA) = E{(THETA-THETAEST)*(THETA-THETAEST)'}, MSE
% L(k+1) = P(k)*PHIB1(k+1)*[PHIB1'(k+1)*P(k)*PHIB1(k+1) + sigv]^-1
% P(k+1) = P(k) - L(k+1)*PHIB1'(k+1)*P(k)
% THETAEST(k+1) = THETAEST(k) + L(k+1)*[u1(k+1)-PHIB1'(k+1)*THETAEST(k)]
L = P*PHIB1*inv(PHIB1'*P*PHIB1 + sigv);
P = P - L*PHIB1'*P;
THETAEST = THETAEST + L*(u1 - (PHIB1'*THETAEST));

%=================== RLS (Recursive Least Square) ==================
%
% J(THETA) = [u1(k)-uest1(k)]^2
% L(k+1) = P(k)*PHIB1(k+1)*[1 + PHIB1'(k+1)*P(k)*PHIB1(k+1)]^-1
% P(k+1) = P(k) - L(k+1)*PHIB1'(k+1)*P(k)
% THETAEST(k+1) = THETAEST(k) + L(k+1)*[u1(k+1) - PHIB1'(k+1)*THETAEST(k)]
L = P*PHIB1/(1+(PHIB1'*P*PHIB1));
P = P - (L*PHIB1'*P);
THETAEST = THETAEST + L*(u1 - (PHIB1'*THETAEST));



%================ ERROR: Error Simulation ================
% uest(k) = PHIB'(k)*THETAEST(k)      estimated input
% es(k) = [u(k)-uest(k)]^2            square error
% mse(k) = E{es}                      mean square error
uest1 = PHIB1'*THETAEST;
uest2 = PHIB2'*THETAEST;
es1 = (u1-uest1)^2;
mse1 = mse1 + (es1-mse1)/k;
es2 = (u2-uest2)^2;
mse2 = mse2 + (es2-mse2)/k;


%=================== DISPLAY: Display desired variables ===================
disp('   k   u1(k)   uest1(k)   v(k)   y1(k)   es1(k)')
disp([k u1 uest1 v y1 es1]);
disp('   k   u2(k)   uest2(k)   v(k)   y2(k)   es2(k)')
disp([k u2 uest2 v y2 es2]);
disp('PHI1'), disp(PHI1')
disp('PHIB1'), disp(PHIB1')
disp('PHI2'), disp(PHI2')
disp('PHIB2'), disp(PHIB2')
pause
clc;


%================ SAVEDATA: Save Data Sequence =====================
%
% Data are saved in data matrices DM1, DM2, ...
%
% DM1(k,:) = [u1(k) uest1(k) y1(k)]
% DM3(k,:) = THETAEST'
DM1(k,1) = u1;
DM1(k,2) = uest1;
DM1(k,3) = y1;
V(k) = v;
DM2(k,1) = u2;
DM2(k,2) = uest2;
DM2(k,3) = y2;
for i = 1:n+n+1
    DM3(k,i) = THETAEST(i);
end
DM4(k,1) = mse1;
DM4(k,2) = mse2;


%=============== PLOTTER: Plot Data ===================
%
plot(DM1)                    % Plot u(k) and uest(k)
title('u1(k) & uest1(k)');
grid
pause
plot(DM2)
title('u2(k) & uest2(k)');
grid
pause
plot(DM3)
title('THETAEST');
grid
pause
plot(DM4)
title('mse1(k) & mse2(k)');
grid

