Citation
Kalman filtering as applied in tracking

Material Information

Title:
Kalman filtering as applied in tracking
Creator:
Tsoulkas, Vasilis N
Publication Date:
1988
Language:
English
Physical Description:
77 leaves : illustrations ; 28 cm

Subjects

Subjects / Keywords:
Kalman filtering ( lcsh )
Tracking radar ( lcsh )
Kalman filtering ( fast )
Tracking radar ( fast )
Genre:
bibliography ( marcgt )
theses ( marcgt )
non-fiction ( marcgt )

Notes

Bibliography:
Includes bibliographical references (leaves 69-70).
General Note:
Submitted in partial fulfillment of the requirements for the degree, Master of Science, Department of Electrical Engineering.
Statement of Responsibility:
by Vasilis N. Tsoulkas.

Record Information

Source Institution:
University of Colorado Denver
Holding Location:
Auraria Library
Rights Management:
All applicable rights reserved by the source institution and holding location.
Resource Identifier:
19817802 ( OCLC )
ocm19817802
Classification:
LD1190.E54 1988m .T76 ( lcc )

Full Text
KALMAN FILTERING AS APPLIED IN TRACKING
by
Vasilis N. Tsoulkas
B.S., University of Colorado, 1986
A thesis submitted to the
Faculty of the Graduate School of the
University of Colorado in partial fulfillment
of the requirements for the degree of
Master of Science
Department of Electrical Engineering
1988


This thesis for the Master of Science degree by
Vasilis N. Tsoulkas
has been approved for the
Department of
Electrical Engineering
by
Edward T. Wall
Date


Tsoulkas, Vasilis Nicholas (M.S. Electrical Engineering)
Kalman Filtering as Applied in Tracking
Thesis directed by Professor Edward T. Wall
The applicability of the Kalman filter to target tracking is analyzed. The
mathematical treatment of the overall dynamic process is developed using
the generalized state space approach. The tracking behavior of the filter is
examined within a range of signal-to-noise ratios for two different types of
target trajectories. The tracking matrix of the state estimator is
mathematically formulated, and its use, as applied to the tracking problem,
is demonstrated. The effectiveness of the so-called Kalman tracking filter is
presented and compared for a parabolic and a sinusoidal target trajectory.
Additional concepts of tracking matrices are discussed for later use in
stochastic control theory.
The form and content of this abstract are approved. I recommend its
publication.
Signed
Faculty member in charge of thesis


ACKNOWLEDGEMENTS
I wish to express my gratitude to Dr. Alfred Fermellia from Hughes
Aircraft Co. for his continuing support and direction during the course of
this thesis work. Gratitude is extended to Dr. Edward T. Wall for many useful
discussions and suggestions, and to Dr. Arum Majumdar for serving on the
graduate committee for the defense of this thesis.


CONTENTS
TABLES...................................................vii
FIGURES..................................................viii
CHAPTER
1. INTRODUCTION...........................................1
1.1 General System Description......................1
2. THE STATE SPACE.......................................3
2.1 Methodology of the State Space..................3
2.2 Problem Description (Application)..............10
3. THE KALMAN FILTER ALGORITHM...........................12
3.1 Kalman Filter Equations........................12
3.2 Mathematical Justifications....................17
4. THE TRACKING MATRIX...................................30
4.1 Mathematics of the Tracking Matrix.............30
4.2 Harmonic Oscillations..........................35
5. STATE ESTIMATION-APPLICATION VIA THE
KALMAN TRACKING FILTER................................44
5.1 Parabolic-Trajectory......................... 44
5.2 Sinusoidal-Trajectory..........................54
5.3 Comparisons....................................65
6. CONCLUSION............................................67


BIBLIOGRAPHY.....................................69
APPENDIX A.......................................71
APPENDIX B.......................................76
APPENDIX C.......................................77


LIST OF TABLES
Table 5.1 Summary of Noise Deviations and SNR's.........45
Table 5.2 Summary of Noise Deviations and SNR's.........55


LIST OF FIGURES
Figure 2.1 Configuration of an I/O System in the State Space...............5
Figure 2.2 Classification Set..............................................8
Figure 2.3 Classification Set of Continuous Equations......................9
Figure 2.4 Classification Set of Discrete Equations........................9
Figure 3.1 Kalman Filter Diagram..........................................29
Figure 4.1 Harmonic Oscillations in One Plane.............................35
Figure 5.1 Target Trajectory with SDP = 0.001, SDM = 0.01, T = 0.5........46
Figure 5.2 Target Trajectory with SDP = 0.001, SDM = 0.09, T = 0.5........47
Figure 5.3 Parabolic Trajectory with Noise SDP = 0.001, SDM = 0.2,
T = 0.5, SNR = 250........................................................48
Figure 5.4 Innovation Process with SDP = 0.001, SDM = 0.2, T = 0.5,
Parabolic Case............................................................49
Figure 5.5 Target Trajectory with SDP = 0.001, SDM = 1.5, T = 0.5.........50
Figure 5.6 Target Trajectory (dotted) with SDP = 0.01, SDM = 2,
T = 0.1, SNR = 5..........................................................51
Figure 5.7 Target Trajectory with SDP = 2, SDM = 0.2, T = 0.5.............52
Figure 5.8 Target Trajectory (dotted) with SDP = 5, SDM = 0.01,
T = 0.5...................................................................53
Figure 5.9 Target Trajectory with SDP = 0.01, SDM = 0.001, T = 0.5........56
Figure 5.10 Target Trajectory with SDP = 0.01, SDM = 0.1, T = 0.5.........57
Figure 5.11 Tracking Trajectory (dotted) with SDP = 0.01, SDM = 1.5,
T = 0.7, SNR = 3.5........................................................58
Figure 5.12 Innovation Process SDP = 0.01, SDM = 1.5, T = 0.5,
SNR = 3.5.................................................................59
Figure 5.13 Target Trajectory with SDP = 0.1, SDM = 0.01, T = 0.5.........60
Figure 5.14 Innovation Process with SDP = 0.1, SDM = 0.01, T = 0.5,
Sinusoidal Case...........................................................61
Figure 5.15 Target Trajectory with SDP = 1.0, SDM = 0.01, T = 0.5.........62
Figure 5.16 Target Trajectory (dotted) with SDP = 2, SDM = 0.1, T = 0.5,
SNR = 2...................................................................63
Figure 5.17 Noisy Signal z(k+1) and Estimated State SDP = 0.1,
SDM = 0.3, T = 0.5........................................................64


CHAPTER 1
INTRODUCTION
1.1 General System Description
System description is the most important step leading to the
solution of a general control problem. To complete it successfully, a
basic understanding of the physical laws which govern the system is
required; these laws can then be described through a wide variety of
mathematical equations that govern the overall process. Their
solution will reveal the basic properties of the system.
It is essential that the nature of these equations be identified
properly, since failure to do so will result in incorrect modelling and
erroneous results.
In general, control systems may be described using different types
of equations: for example, difference equations, ordinary differential
equations, partial differential equations, and delay differential
equations. Moreover, the above formulations may exhibit linear or non-
linear behavior; consequently, other factors may need to be considered,
such as system stability, "operation in the small," "operation in the large,"
and other related concepts. All of the above questions, including the
order of the system, can be determined when the governing mathematics
and physical laws have been carefully analyzed and understood.
Another important issue in control theory is whether the
representation of the system should be formulated deterministically or
stochastically; failure to realize the correct nature of the input-output
control signals will produce incomplete results which will lead to
inadequate design and synthesis.
To successfully include these considerations as well as numerous
design specifications in the system description, a generalized higher-
level mathematical framework known as the state-space-approach will
be employed.
It is a purpose of this research to expose the reader to some of the
power of the mathematics of state-space theory and to demonstrate its
use and applicability to a specific stochastic control problem.


CHAPTER 2
THE STATE SPACE
2.1 Methodology of the State Space Formulation
In recent years, a major effort has been made to investigate and
apply state variable techniques in many different areas of science and
engineering. Work done by Bellman, Pontryagin, Liapunov, Kalman and
many others has shown that the state-space formulation offers many
advantages over the classical methods.
The main reason for the wide interest in state space modeling, in
contrast to the classical approaches (root locus methods, frequency
response, etc.) is that the latter, which are basically transformational
methods, present certain difficulties when applied to the analysis of non-
linear, time-varying, and multivariable systems. In other words, classical
control theory is generally applicable only to single-input-single-output
(SISO), linear, time-invariant systems. Since classical control is
essentially a complex frequency domain approach which relies heavily
on trial and error techniques, it is usually not possible to optimize the
design of a system when given specified performance measures. On the
other hand, optimization is possible with the state variable approach. In


fact, the state equations are best suited for this type of problem since the
quantities required in control system optimization are represented by
state variables, control variables and the system parameters. Another
advantage of modern control theory is that the equations are formulated
in the time domain, allowing the designer to work with a whole class of
input functions, including the initial conditions of the problem. Hence,
state equation formulation is a powerful tool with which the designer is
able to completely describe complex multiple-input-multiple-output
(MIMO) systems.
Consider a high level formulation which represents the nonlinear
and time varying dynamics of a plant in the general discrete form:

$x(t_{k+1}) = f(x(t_k), u(t_k), t_k)$

$y(t_k) = g(x(t_k), u(t_k), t_k)$

where $x(t_k)$ is the vector of state variables, $u(t_k)$ is the input (control) vector, $y(t_k)$ is the output vector, and $f$, $g$ describe the non-linear functional relationships of the system dynamics.
It is possible to linearize the above model description and obtain
the following discrete form:

$x(t_{k+1}) = F\,x(t_k) + B\,u(t_k)$   (2.1.1)

$y(t_k) = C\,x(t_k) + D\,u(t_k)$   (2.1.2)
where F, B, C, D are the corresponding dynamic coefficient matrices of
the variables of the system. It is important to point out that the linearized
equations (2.1.1,2.1.2) show how the state-space-approach relates not
only to the input-output (I/O) behavior of the system but also to its internal
state behavior.
[Figure 2.1 depicts the inputs $u_1(t_k), \ldots, u_n(t_k)$, states $x_1(t_k), \ldots, x_n(t_k)$, and outputs $y_1(t_k), \ldots, y_m(t_k)$ of the system.]
FIGURE 2.1 CONFIGURATION OF AN I/O SYSTEM IN THE STATE
SPACE
For dynamic systems in general, there are two basic equations
that typically represent such processes; the state equation and the
measurement equation. The latter describes, in a linear fashion, which
states are measured (Equation 2.1.2) and the former relates to the effects
of forcing functions and to the systems state transition matrix from one
time frame to another with a possible existing forcing function (Equation
2.1.1).


Another important aspect of system representation is the
identification of its type; as has been already mentioned, the question to
be resolved is whether the system is deterministic or stochastic. A
system is of a deterministic nature if there are no uncertainties present or
if they are negligible with respect to the overall process; consequently, the
properties of the system propagate in a predictable manner. On the other
hand, when there is a sufficient amount of disturbances, inherent to the
system or introduced during the observations, the nature of the process
becomes random, and realization of its performance can only be
completed if the noise statistics are well understood.
It should be emphasized at this point that the state space
approach to system description is particularly useful in providing a
statistical description of the system behavior. The dynamics of linear
lumped-parameter systems with random coefficients can be modeled by
the generalized first order continuous time vector matrix differential
equation:
$\dot{x}(t) = F(t)x(t) + B(t)u(t) + G(t)w(t)$   (2.1.3)

or

$\dot{x}(t) = F(t)x(t) + G(t)w(t)$   (2.1.4)

where $u(t)$ is some deterministic forcing function, $x(t)$ is the system's state vector, $w(t)$ is a random forcing function, and $F(t)$, $B(t)$, $G(t)$ are time-varying, appropriately dimensioned matrices.
The measurement equation in the continuous time domain is of
the form:

$z(t) = H(t)x(t) + v(t)$   (2.1.5)

where $x(t)$ is the state vector that is measured, $H(t)$ is the measurement dynamics coefficient matrix, and $v(t)$ is a random noise vector introduced during measurements.
The previous models are non-unique, which means that there are
many different sets of the coefficient matrices {F(t), G(t), H(t)} which will
yield the same overall input-output behavior.
Equation 2.1.3 is the state vector equation of a dynamic system
which is composed of a minimum set of variables (state variables) such
that knowledge of these variables at time $t = t_0$, together with the input for
$t > t_0$, is sufficient to completely describe the unforced motion of the system
for every $t > t_0$. For linear time-invariant systems, $t_0$ is usually zero ($t_0 = 0$).
In order to gain a better understanding of the dynamic
representation of a system, a classification set may be defined in such a


way that it combines the independent and dependent variables, the
system coefficients and the sensor output. These four mathematical
quantities are necessary and sufficient to model a given process.
Additionally, realization of the dependent and independent
variables leads to the solution of the estimation problem, and realization
of the system coefficients results in a solution of the identification part.
[Figure 2.2 diagram: the classification set C comprises the independent variables, dependent variables, coefficients of equations, and sensor output, grouped into the estimation, identification, and control subsets.]
FIGURE 2.2 CLASSIFICATION SET
Solution of both subsets (estimation, identification), together with
the sensor output (Figure 2.2), yields a complete solution to the control
problem. Hence, proper estimation and identification, together with the
estimated terms, will result in the correct modeling of the system.
The classification set (Figure 2.2) can be modified to a more
explicit continuous form as follows:


[Figure 2.3 diagram: C = {IND VAR: U; DEP VAR: X; COEFFICIENTS: F, B, H, W, V; OUTPUT: Z}.]
FIGURE 2.3 CLASSIFICATION SET OF CONTINUOUS EQUATIONS
or, in the discrete form:
[Figure 2.4 diagram: the discrete classification set with IND VAR: U; DEP VAR: X; COEFFICIENTS; OUTPUT: Z, partitioned into the estimation, identification, and control subsets.]
FIGURE 2.4 CLASSIFICATION SET OF DISCRETE
EQUATIONS
From Figure 2.4 the corresponding discrete state equations are:
$X(t_{k+1}) = \phi(t_{k+1},t_k)\,X(t_k) + \theta(t_{k+1},t_k)\,U(t_k) + W(t_k)$

$Z(t_{k+1}) = H(t_{k+1})\,X(t_{k+1}) + V(t_{k+1})$

From now on, the discrete state equations, for notational
convenience, will be written as:

$x_{k+1} = \phi_{k+1,k}\,x_k + \theta_{k+1,k}\,u_k + w_k$

$z_{k+1} = H_{k+1}\,x_{k+1} + v_{k+1}$

If the control input (independent variables) is absent from the state
space formulation, the following state equations result:

$x_{k+1} = \phi_{k+1,k}\,x_k + w_k$   (2.1.6)

$z_{k+1} = H_{k+1}\,x_{k+1} + v_{k+1}$   (2.1.7)
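To make the roles of $\phi$, $H$, $w_k$, and $v_{k+1}$ concrete, the short Python sketch below simulates Equations 2.1.6 and 2.1.7; the two-state model, sampling interval, and noise levels are illustrative assumptions, not values from this thesis.

```python
import numpy as np

rng = np.random.default_rng(0)

T = 0.5                                  # sampling interval (assumed)
Phi = np.array([[1.0, T],                # state transition matrix phi_{k+1,k}
                [0.0, 1.0]])
H = np.array([[1.0, 0.0]])               # measure the first state only
Q = 0.001 * np.eye(2)                    # process noise covariance (assumed)
R = np.array([[0.01]])                   # measurement noise covariance (assumed)

x = np.array([0.0, 1.0])                 # initial state
for k in range(5):
    w = rng.multivariate_normal(np.zeros(2), Q)   # w_k
    x = Phi @ x + w                               # Equation 2.1.6
    v = rng.normal(0.0, np.sqrt(R[0, 0]))         # v_{k+1}
    z = H @ x + v                                 # Equation 2.1.7
    print(k + 1, x.round(3), round(z[0], 3))
```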
2.2 Problem Description (Application)
The problem application to illustrate the power of this approach
consists of two cases. In the first case, the Kalman tracking filter will be
used in a system consisting of a moving vehicle (such as an aircraft)
following a parabolic trajectory. In the second case, which is perhaps
more vivid, the same filter algorithm will be applied to a moving vehicle
describing a sinusoidal path with time varying acceleration.


In both cases the Kalman tracking filter is attempting to
simultaneously estimate and track the position of the target point from the
simulated incoming noisy data. In both situations, the target is perturbed
by zero mean white Gaussian noise (which takes into account the
maneuvering of the target and the possible effects of other random plant
parameters, such as wind gusts, etc.). In both cases, the Kalman
tracking filter is exercised within a practical range of noise variances, for
both the measurement noise and the plant noise; this illustrates
the power of the overall process. Finally, the two simulations are
compared and limitations are discussed.


CHAPTER 3
THE KALMAN FILTER ALGORITHM
In order to proceed to the Kalman filter formulations, it is appropriate
to mention at this point the Gauss-Markov state random process, since
the Kalman filter is a special case of a Gauss-Markov sequence.
3.1 Kalman Filter Equations
Definition:
"A stochastic process is Gauss-Markov if it assumes a Gaussian
distribution and if it is statistically dependent only on its previous
past value."
The non-unique formulation of the Kalman filter is centered about the
following.
Given the discrete model of a system and the discrete
measurements in the following form:
$x_{k+1} = \phi_{k+1,k}\,x_k + w_k$   (3.1.1)

$z_{k+1} = H_{k+1}\,x_{k+1} + v_{k+1}$   (3.1.2)


Determine the optimal estimate of the state vector $x_{k+1}$ (at $t = t_{k+1}$, the system state), where

$w_k$: random process noise (white sequence)
$v_{k+1}$: measurement noise (white sequence)

with the following statistics:

$E\{w_k\} = 0$, $E\{v_{k+1}\} = 0$

$E\{w_k w_j^T\} = Q_k\,\delta_{kj}$, $E\{v_k v_j^T\} = R_k\,\delta_{kj}$, $E\{w_k v_j^T\} = 0$

where $E\{\cdot\}$ is the expectation operator, $\delta_{kj}$ is the Kronecker delta, and $Q_k$, $R_{k+1}$ are constant covariance matrices; and

$x_{k+1}$: n x 1 state vector
$\phi_{k+1,k}$: n x n state transition matrix
$w_k$: n x 1 state noise vector
$z_{k+1}$: l x 1 measurement vector
$H_{k+1}$: l x n measurement dynamics matrix
$v_{k+1}$: l x 1 measurement noise vector


Since $w_k$ is a white noise sequence (uncorrelated from one step to the
next) and since the system is driven by a noise process, the state vector
$x_k$ ($x_{k+1}$) becomes itself a noise process, and the statistical properties of
the states can be described through the mean of $x_k$ and the associated
covariance matrix (to be analyzed later), so that determination of the
optimal estimator is feasible. It is also assumed that the random
variables have zero ensemble average values, i.e.,

$E\{x_k\} = 0 \Rightarrow E\{x_{k+1}\} = 0$ (unbiased)
The estimation of the state vector, which is created from discrete
measurement data corrupted by noise, can be optimized by
minimizing the estimation error between the true incoming data and
the one-step predicted estimated data. In other words, the Kalman filter
can be derived as an optimal recursive state estimator in the
generalized sense that, given a prior estimate at time $t_k$, an updated
estimate at time $t_{k+1}$ is created based on the use of the noisy measurement
$z_{k+1}$ at $t_{k+1}$, along with the associated dynamics matrix (STM) of the filter.
Moreover, this estimator is a linear, least-squares, minimum variance,
unbiased estimator. The unbiased estimate has an expected value
which is the same as that of the quantity being estimated, that is,
$E\{\hat{x}\} = x$.
The minimum variance implies that the estimation error variance is
less than or equal to the error variance of any other unbiased estimate.
Since the observation process combines, linearly, the measurement data


with the states of the system, along with some additional white Gaussian
noise, the updated estimation error can be defined as:
$\tilde{x}_{k+1/k+1} \triangleq x_{k+1} - \hat{x}_{k+1/k+1}$   (3.1.3)

where the index $k+1/k+1$ implies an estimate at $t_{k+1}$ given the observations
at $t_{k+1}$.
Equation 3.1.3 implies that, during the state estimation of a
dynamic system, an error vector exists, which is the deviation of the
estimated state from the actual value.
In summary, the filtering process involves the determination of the
estimator which minimizes Equation 3.1.3 in the least-squares sense.
The previously defined estimation error is, ideally, equal to zero. To
accomplish the minimization, a scalar cost functional needs to
be constructed, which is the inner product of $\tilde{x}_{k+1/k+1}$:

$J_{k+1} \triangleq E\{\tilde{x}_{k+1/k+1}^T\,\tilde{x}_{k+1/k+1}\}$   (3.1.4)

The value of $\hat{x}_{k+1/k+1}$ that minimizes Equation 3.1.4 is optimum for the
actual state $x_{k+1}$. In the following section, the well-known Kalman gain
will be derived using Equation 3.1.4.
The Kalman filter, which is a generalization of the Wiener filter in
the state space domain, was first developed in the 1960's by Kalman and
Bucy. Since then, due to its consistent results, especially in aerospace
applications, it has gained significant popularity in control systems
engineering and other scientific research.
The complete state space representation of the process has the
following form:
$\hat{x}_{k+1/k+1} = \hat{x}_{k+1/k} + G_{k+1}[z_{k+1} - \hat{z}_{k+1/k}]$   (3.1.5)

As can be seen, Equation 3.1.5 is an a posteriori estimate of the state
vector, calculated from the prior state value using the state transition
matrix $\phi_{k+1,k}$ (first term of the R.H.S.) and its optimally weighted residual (second
term of the R.H.S.).
It should be noted that the second term of the R.H.S. of Equation
3.1.5 needs some special attention. At first, the quantity $G_{k+1}$ (Kalman
gain) is a weighting factor on the bracketed expression, often termed the
residual or innovation process. Since this quantity is the error difference
between the new incoming measurement signal $z_{k+1}$ and the one-step
predicted estimated signal $\hat{z}_{k+1/k}$, the residual, weighted
by the optimum gain, corrects at every iteration the updated estimate of
the state (Equation 3.1.5).
It is very important to emphasize that the Kalman gain forces the
residual $\{z_{k+1} - \hat{z}_{k+1/k}\}$ to seek a minimum level. The derivation, as


mentioned in an earlier section, is accomplished from the defined loss-
function or performance measure, Equation 3.1.4.
Note also that $\hat{x}_{k+1/k} = \phi_{k+1,k}\hat{x}_{k/k}$ and $\hat{z}_{k+1/k} = H_{k+1}\hat{x}_{k+1/k} = H_{k+1}\phi_{k+1,k}\hat{x}_{k/k}$.
Then Equation 3.1.5 can be modified to a more explicit form as:

$\hat{x}_{k+1/k+1} = \phi_{k+1,k}\hat{x}_{k/k} + G_{k+1}[z_{k+1} - H_{k+1}\phi_{k+1,k}\hat{x}_{k/k}]$   (3.1.6)
3.2 Mathematical Justifications
In order to gain a better understanding of the Kalman filter
algorithm, it is desirable to go through the mathematical derivations of
certain important expressions.
At first, note that the covariance matrix of an n-state vector is an
(n x n) symmetric matrix. The diagonal elements are the mean square
errors of the state variables. If the off-diagonal elements are non-zero,
they represent the cross-correlation terms of the elements of $\tilde{x}_{k+1/k+1}$.
Stochastically, the covariance matrix expresses the uncertainty involved
in the estimation process of the state of a dynamic system, and is
defined as:

$P \triangleq E\{\tilde{x}\tilde{x}^T\}$


Now, define the a posteriori error covariance matrix as:

$P_{k+1/k+1} \triangleq E\{\tilde{x}_{k+1/k+1}\,\tilde{x}_{k+1/k+1}^T\}$   (3.2.1)

where $\tilde{x}_{k+1/k+1} = x_{k+1} - \hat{x}_{k+1/k+1}$   (3.2.2)

Similarly, going one step backwards,

$P_{k/k} \triangleq E\{\tilde{x}_{k/k}\,\tilde{x}_{k/k}^T\}$   (3.2.3)

where $\tilde{x}_{k/k} = x_k - \hat{x}_{k/k}$   (3.2.4)

Also, define the one-step prediction error covariance as:

$P_{k+1/k} \triangleq E\{\tilde{x}_{k+1/k}\,\tilde{x}_{k+1/k}^T\}$   (3.2.5)

where $\tilde{x}_{k+1/k} = x_{k+1} - \hat{x}_{k+1/k}$   (3.2.6)

Moreover, let the state model be described as:

$x_{k+1} = \phi_{k+1,k}\,x_k + w_k$   (3.2.7)

and the one-step prediction estimate as:

$\hat{x}_{k+1/k} = \phi_{k+1,k}\,\hat{x}_{k/k}$   (3.2.8)


From Equations 3.2.3 and 3.2.4:

$P_{k/k} = E\{\tilde{x}_{k/k}\tilde{x}_{k/k}^T\} = E\{(x_k - \hat{x}_{k/k})(x_k - \hat{x}_{k/k})^T\}$

and from Equations 3.2.5 and 3.2.6:

$P_{k+1/k} = E\{\tilde{x}_{k+1/k}\tilde{x}_{k+1/k}^T\} = E\{(x_{k+1} - \hat{x}_{k+1/k})(x_{k+1} - \hat{x}_{k+1/k})^T\}$

Since

$x_{k+1} - \hat{x}_{k+1/k} = \phi_{k+1,k}x_k + w_k - \phi_{k+1,k}\hat{x}_{k/k} = \phi_{k+1,k}(x_k - \hat{x}_{k/k}) + w_k$

it follows that

$\tilde{x}_{k+1/k} = \phi_{k+1,k}\tilde{x}_{k/k} + w_k$   (3.2.9)

Combining Equations 3.2.5 and 3.2.9, the covariance outer product is:

$P_{k+1/k} = E\{(\phi_{k+1,k}\tilde{x}_{k/k} + w_k)(\phi_{k+1,k}\tilde{x}_{k/k} + w_k)^T\}$

$= E\{\phi_{k+1,k}\tilde{x}_{k/k}\tilde{x}_{k/k}^T\phi_{k+1,k}^T + \phi_{k+1,k}\tilde{x}_{k/k}w_k^T + w_k\tilde{x}_{k/k}^T\phi_{k+1,k}^T + w_k w_k^T\}$

$= \phi_{k+1,k}E\{\tilde{x}_{k/k}\tilde{x}_{k/k}^T\}\phi_{k+1,k}^T + \phi_{k+1,k}E\{\tilde{x}_{k/k}w_k^T\} + E\{w_k\tilde{x}_{k/k}^T\}\phi_{k+1,k}^T + E\{w_k w_k^T\}$

$\Rightarrow P_{k+1/k} = \phi_{k+1,k}\,P_{k/k}\,\phi_{k+1,k}^T + Q_k$   (3.2.10)

since $E\{\tilde{x}_{k/k}w_k^T\} = E\{w_k\tilde{x}_{k/k}^T\} = 0$, where $Q_k$ is an n x n matrix. This
follows from the uncorrelatedness and orthogonality between $\tilde{x}_{k/k}$ and $w_k$
(error estimate and process noise), and since at least one of these
quantities has zero mean value, $E\{w_k\} = 0$ (white sequence).

Note that if $x_{k+1} = \phi_{k+1,k}x_k + \Gamma_k w_k$, the derivations are identical, and

$P_{k+1/k} = \phi_{k+1,k}\,P_{k/k}\,\phi_{k+1,k}^T + \Gamma_k Q_k \Gamma_k^T$
Equation 3.2.10 is the a priori error covariance matrix of the one-
step prediction estimation error of the state vector. In addition, from
Equation 3.1.5, which is repeated here for convenience:


$\hat{x}_{k+1/k+1} = \hat{x}_{k+1/k} + G_{k+1}[z_{k+1} - H_{k+1}\hat{x}_{k+1/k}]$

and Equation 3.2.2, $\tilde{x}_{k+1/k+1} = x_{k+1} - \hat{x}_{k+1/k+1}$, notice that:

$\tilde{x}_{k+1/k+1} = x_{k+1} - \hat{x}_{k+1/k+1} = x_{k+1} - \{\hat{x}_{k+1/k} + G_{k+1}[z_{k+1} - H_{k+1}\hat{x}_{k+1/k}]\}$

$= x_{k+1} - \hat{x}_{k+1/k} - G_{k+1}(H_{k+1}x_{k+1} + v_{k+1}) + G_{k+1}H_{k+1}\hat{x}_{k+1/k}$

$= x_{k+1} - \hat{x}_{k+1/k} - G_{k+1}H_{k+1}(x_{k+1} - \hat{x}_{k+1/k}) - G_{k+1}v_{k+1}$

$= \tilde{x}_{k+1/k} - G_{k+1}H_{k+1}\tilde{x}_{k+1/k} - G_{k+1}v_{k+1} \Rightarrow$

$\tilde{x}_{k+1/k+1} = [I - G_{k+1}H_{k+1}]\,\tilde{x}_{k+1/k} - G_{k+1}v_{k+1}$   (3.2.11)


Now the updated error covariance can be derived in the following
manner. Since $P_{k+1/k+1} = E\{\tilde{x}_{k+1/k+1}\tilde{x}_{k+1/k+1}^T\}$, from Equation 3.2.11:

$P_{k+1/k+1} = E\{[(I - G_{k+1}H_{k+1})\tilde{x}_{k+1/k} - G_{k+1}v_{k+1}][(I - G_{k+1}H_{k+1})\tilde{x}_{k+1/k} - G_{k+1}v_{k+1}]^T\}$

$= E\{[(I - G_{k+1}H_{k+1})\tilde{x}_{k+1/k} - G_{k+1}v_{k+1}][\tilde{x}_{k+1/k}^T(I - G_{k+1}H_{k+1})^T - v_{k+1}^T G_{k+1}^T]\}$

$= (I - G_{k+1}H_{k+1})E\{\tilde{x}_{k+1/k}\tilde{x}_{k+1/k}^T\}(I - G_{k+1}H_{k+1})^T - (I - G_{k+1}H_{k+1})E\{\tilde{x}_{k+1/k}v_{k+1}^T\}G_{k+1}^T - G_{k+1}E\{v_{k+1}\tilde{x}_{k+1/k}^T\}(I - G_{k+1}H_{k+1})^T + G_{k+1}E\{v_{k+1}v_{k+1}^T\}G_{k+1}^T$

$\Rightarrow P_{k+1/k+1} = (I - G_{k+1}H_{k+1})\,P_{k+1/k}\,(I - G_{k+1}H_{k+1})^T + G_{k+1}R_{k+1}G_{k+1}^T$   (3.2.12)

where $E\{\tilde{x}_{k+1/k}\tilde{x}_{k+1/k}^T\} = P_{k+1/k}$, $E\{\tilde{x}_{k+1/k}v_{k+1}^T\} = E\{v_{k+1}\tilde{x}_{k+1/k}^T\} = 0$, and $E\{v_{k+1}v_{k+1}^T\} = R_{k+1}$.
Recall Equation 3.1.4. From stochastic geometric theory, the
diagonal elements of the cost functional represent the geometric length
(in a Euclidean space) of the stochastic error, which is also the trace of
the updated error covariance matrix, namely:

$J_{k+1} = E\{\tilde{x}_{k+1/k+1}^T\,\tilde{x}_{k+1/k+1}\} = \mathrm{Trace}[P_{k+1/k+1}]$   (3.2.13)

Realizing from Equation 3.2.13 that

$J_{k+1} = E\{\tilde{x}_{k+1/k+1}^T\,I\,\tilde{x}_{k+1/k+1}\}$

where I is the identity matrix, and using Equation 3.2.12, proceed in the
following way to derive the optimum Kalman gain:
$P_{k+1/k+1} = (I - G_{k+1}H_{k+1})P_{k+1/k}(I - G_{k+1}H_{k+1})^T + G_{k+1}R_{k+1}G_{k+1}^T$

$= (I - G_{k+1}H_{k+1})(P_{k+1/k} - P_{k+1/k}H_{k+1}^T G_{k+1}^T) + G_{k+1}R_{k+1}G_{k+1}^T$

$= P_{k+1/k} - P_{k+1/k}H_{k+1}^T G_{k+1}^T - G_{k+1}H_{k+1}P_{k+1/k} + G_{k+1}H_{k+1}P_{k+1/k}H_{k+1}^T G_{k+1}^T + G_{k+1}R_{k+1}G_{k+1}^T$
Taking the partial derivative of $J_{k+1} = \mathrm{Trace}[P_{k+1/k+1}]$ with respect to $G_{k+1}$ and
setting it equal to zero:

$\frac{\partial J_{k+1}}{\partial G_{k+1}} = \frac{\partial}{\partial G_{k+1}}\mathrm{Tr}(P_{k+1/k}) - \frac{\partial}{\partial G_{k+1}}\mathrm{Tr}(P_{k+1/k}H_{k+1}^T G_{k+1}^T) - \frac{\partial}{\partial G_{k+1}}\mathrm{Tr}(G_{k+1}H_{k+1}P_{k+1/k}) + \frac{\partial}{\partial G_{k+1}}\mathrm{Tr}(G_{k+1}H_{k+1}P_{k+1/k}H_{k+1}^T G_{k+1}^T) + \frac{\partial}{\partial G_{k+1}}\mathrm{Tr}(G_{k+1}R_{k+1}G_{k+1}^T)$

$\Rightarrow \frac{\partial J_{k+1}}{\partial G_{k+1}} = -2(I - G_{k+1}H_{k+1})P_{k+1/k}H_{k+1}^T + 2G_{k+1}R_{k+1}$   (3.2.14)

Recalling from linear algebra that:

a. $\frac{\partial}{\partial G_{k+1}}\mathrm{Tr}(P_{k+1/k}) = 0$

b. $\frac{\partial}{\partial G_{k+1}}\mathrm{Tr}(P_{k+1/k}H_{k+1}^T G_{k+1}^T) = P_{k+1/k}H_{k+1}^T$

c. $\frac{\partial}{\partial G_{k+1}}\mathrm{Tr}(G_{k+1}H_{k+1}P_{k+1/k}) = (H_{k+1}P_{k+1/k})^T = P_{k+1/k}H_{k+1}^T$

Recall that $P_{k+1/k}$ is a symmetric matrix, meaning that $(P_{k+1/k})^T = P_{k+1/k}$, which is used in taking the transpose in c.

d. From linear algebra, since $\frac{\partial}{\partial X}\mathrm{Tr}(XAX^T) = 2XA$ for symmetric A:

$\frac{\partial}{\partial G_{k+1}}\mathrm{Tr}(G_{k+1}H_{k+1}P_{k+1/k}H_{k+1}^T G_{k+1}^T) = 2G_{k+1}H_{k+1}P_{k+1/k}H_{k+1}^T$

$\frac{\partial}{\partial G_{k+1}}\mathrm{Tr}(G_{k+1}R_{k+1}G_{k+1}^T) = 2G_{k+1}R_{k+1}$
From Equation 3.2.14 and

$\frac{\partial J_{k+1}}{\partial G_{k+1}} = 0$

there results:

$-2(I - G_{k+1}H_{k+1})P_{k+1/k}H_{k+1}^T + 2G_{k+1}R_{k+1} = 0$

$\Rightarrow P_{k+1/k}H_{k+1}^T - G_{k+1}H_{k+1}P_{k+1/k}H_{k+1}^T - G_{k+1}R_{k+1} = 0$

$\Rightarrow G_{k+1}(H_{k+1}P_{k+1/k}H_{k+1}^T + R_{k+1}) = P_{k+1/k}H_{k+1}^T$

$\Rightarrow G_{k+1} = P_{k+1/k}H_{k+1}^T(H_{k+1}P_{k+1/k}H_{k+1}^T + R_{k+1})^{-1}$   (3.2.15)

Equation 3.2.15 is the expression for the optimum Kalman gain.
Substituting Equation 3.2.15 into 3.2.12 gives the updated error
covariance in its final form:

$P_{k+1/k+1} = (I - G_{k+1}H_{k+1})\,P_{k+1/k}$

which is optimum in some sense.
Summarizing, the complete computational algorithm of the
Kalman filter consists of the following ordered operations:


I.   $P_{k+1/k} = \phi_{k+1,k}\,P_{k/k}\,\phi_{k+1,k}^T + Q_k$

II.  $G_{k+1} = P_{k+1/k}H_{k+1}^T(H_{k+1}P_{k+1/k}H_{k+1}^T + R_{k+1})^{-1}$

III. $P_{k+1/k+1} = (I - G_{k+1}H_{k+1})\,P_{k+1/k}$

IV.  $\hat{x}_{k+1/k+1} = \phi_{k+1,k}\hat{x}_{k/k} + G_{k+1}(z_{k+1} - H_{k+1}\phi_{k+1,k}\hat{x}_{k/k})$
The initial conditions of the Kalman filter are as follows.
$P_{k/k}$ is an arbitrarily large diagonal matrix of the form (first iteration):

$P(0/0) = \begin{bmatrix} \sigma_{x_1}^2(0) & \cdots & 0 \\ \vdots & \ddots & \vdots \\ 0 & \cdots & \sigma_{x_n}^2(0) \end{bmatrix}$, e.g., $\begin{bmatrix} 200 & 0 \\ 0 & 200 \end{bmatrix}$

For the initial state vector:

$\hat{x}(0/0) = a$

For the general process noise covariance matrix:

$Q_k = \begin{bmatrix} \sigma_{w_1}^2(0) & \cdots & 0 \\ \vdots & \ddots & \vdots \\ 0 & \cdots & \sigma_{w_n}^2(0) \end{bmatrix}$   (3.2.18)

For the general measurement noise covariance matrix:

$R_{k+1} = \begin{bmatrix} \sigma_{v_1}^2(0) & \cdots & 0 \\ \vdots & \ddots & \vdots \\ 0 & \cdots & \sigma_{v_n}^2(0) \end{bmatrix}$   (3.2.19)
Note that the matrices P, Q, R must always be positive
semidefinite.
It is appropriate at this point to emphasize that

$G_{k+1} = f(Q, \phi, H, R)$   (3.2.20)

The above relationship shows the strong coupling between the Kalman
filter gain and the system's coefficient matrices; consequently, the optimal
conditions of the updated estimator $\hat{x}_{k+1/k+1}$ are also strongly affected
by the same quantities.
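To tie steps I-IV and the initializations together, here is a compact Python sketch; the two-state constant-velocity model, noise covariances, and initial values below are illustrative assumptions rather than the thesis's tracking model.

```python
import numpy as np

def kalman_step(x_hat, P, z, Phi, H, Q, R):
    """One iteration of steps I-IV: predict, gain, covariance and state update."""
    # I. A priori (one-step prediction) error covariance
    P_pred = Phi @ P @ Phi.T + Q
    # II. Optimum Kalman gain (Equation 3.2.15)
    S = H @ P_pred @ H.T + R                      # innovation covariance
    G = P_pred @ H.T @ np.linalg.inv(S)
    # III. Updated (a posteriori) error covariance
    P_new = (np.eye(len(x_hat)) - G @ H) @ P_pred
    # IV. Updated state estimate (Equation 3.1.6)
    x_pred = Phi @ x_hat
    x_new = x_pred + G @ (z - H @ x_pred)         # residual (innovation) correction
    return x_new, P_new

# Illustrative constant-velocity model (assumed values)
T = 0.5
Phi = np.array([[1.0, T], [0.0, 1.0]])
H = np.array([[1.0, 0.0]])
Q = 0.001 * np.eye(2)
R = np.array([[0.01]])

x_hat = np.zeros(2)               # assumed initial state estimate
P = 200.0 * np.eye(2)             # arbitrarily large diagonal P(0/0)

rng = np.random.default_rng(1)
x = np.array([0.0, 1.0])          # true state
for k in range(50):
    x = Phi @ x + rng.multivariate_normal(np.zeros(2), Q)
    z = H @ x + rng.normal(0.0, np.sqrt(R[0, 0]), size=1)
    x_hat, P = kalman_step(x_hat, P, z, Phi, H, Q, R)
```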
From what has been said so far, it is evident that the Kalman filter
is a very sophisticated and useful algorithm for state estimation
applications, due to its intrinsic ability to reflect real world
situations without the introduction of errors (under certain assumptions).
Figure 3.1 (below) shows the corresponding Kalman filter
diagram.
[Figure 3.1 block diagram: the measurement $z_{k+1} = H_{k+1}x_{k+1} + v_{k+1}$ enters the filter; the predicted measurement $\hat{z}_{k+1/k} = H_{k+1}\phi_{k+1,k}\hat{x}_{k/k}$ is subtracted to form the residual, which is weighted by $G_{k+1}$.]
FIGURE 3.1 KALMAN FILTER DIAGRAM


CHAPTER 4
THE TRACKING MATRIX
4.1 Mathematical Formulation of the Tracking Matrix
The tracking matrix is a method of discretizing and tracking (in the
polynomial sense) a smooth function via the Taylor series expansion.
This methodology, unfortunately, as applied to tracking and guidance
systems, has not been communicated very well in the existing literature,
for various reasons. In the next sections, this matrix will be used as the
state transition matrix (dynamics matrix) of the Kalman filter formulation,
thus solving, in many respects, the identification problem of the assumed
model.
It is important that the identification part (recall from Chapter 2) is
properly solved, or at least, if there are certain drawbacks, these must be
realized a priori.
The tracking matrix has the following generalized form:
$\phi(k+1,k) = \begin{bmatrix} 1 & T & \frac{T^2}{2} \\ 0 & 1 & T \\ 0 & 0 & 1 \end{bmatrix}$   (4.1.1)


The elements of this matrix are the generated sampling
coefficients of the well-known Taylor series expansion. Consider the
general Taylor series expansion in three variables; in this case a vector
directional gradient can be formulated in the following way:

$f(x,y,z) \cong f(x_0,y_0,z_0) + \frac{\partial f}{\partial x}\Big|_{x_0,y_0,z_0}(x-x_0) + \frac{\partial f}{\partial y}\Big|_{x_0,y_0,z_0}(y-y_0) + \frac{\partial f}{\partial z}\Big|_{x_0,y_0,z_0}(z-z_0) + \frac{\partial^2 f}{\partial x^2}\Big|_{x_0,y_0,z_0}\frac{(x-x_0)^2}{2!} + \frac{\partial^2 f}{\partial y^2}\Big|_{x_0,y_0,z_0}\frac{(y-y_0)^2}{2!} + \frac{\partial^2 f}{\partial z^2}\Big|_{x_0,y_0,z_0}\frac{(z-z_0)^2}{2!} + \cdots + \frac{\partial^n f}{\partial x^n}\Big|_{x_0,y_0,z_0}\frac{(x-x_0)^n}{n!} + \frac{\partial^n f}{\partial y^n}\Big|_{x_0,y_0,z_0}\frac{(y-y_0)^n}{n!} + \frac{\partial^n f}{\partial z^n}\Big|_{x_0,y_0,z_0}\frac{(z-z_0)^n}{n!}$   (4.1.2)


The formidable expression of Equation 4.1.2 is actually not as
complicated as it may seem, especially when the expansion is applied to
a function of one variable. In the time domain, about a center point
$t_0$, the Taylor series assumes the following form:

$f(t) \triangleq f(t_0) + \dot{f}(t_0)(t-t_0) + \ddot{f}(t_0)\frac{(t-t_0)^2}{2!} + \cdots + f^{(n)}(t_0)\frac{(t-t_0)^n}{n!}$

or, in closed form,

$f(t) = \sum_{n=0}^{\infty} f^{(n)}(t_0)\,\frac{(t-t_0)^n}{n!}$   (4.1.3)

So, for a general function x(t):

$x(t) = x(t_0) + \dot{x}(t_0)(t-t_0) + \ddot{x}(t_0)\frac{(t-t_0)^2}{2!} + \text{higher order terms}$

For the discretization process, let:

$t \to t_{k+1}$, $t_0 \to t_k$, $t - t_0 = t_{k+1} - t_k = T$

where T is the sampling interval of the process. Then, as in Equation
4.1.3,

$x(t_{k+1}) = \sum_{n=0}^{\infty}\frac{x^{(n)}(t_k)}{n!}(t_{k+1}-t_k)^n = \sum_{n=0}^{\infty}\frac{x^{(n)}(t_k)}{n!}T^n$   (4.1.4)


Consequently:

$x(t_{k+1}) = x(t_k) + \dot{x}(t_k)(t_{k+1}-t_k) + \ddot{x}(t_k)\frac{(t_{k+1}-t_k)^2}{2!} + \text{remainder}$

or

$x_{k+1} = x_k + \dot{x}_k T + \ddot{x}_k\frac{T^2}{2} + \text{remainder}$   (4.1.5)

Taking the derivative of $x(t_{k+1})$ and expanding in the same way,
we obtain:

$\dot{x}(t_{k+1}) = \dot{x}(t_k) + \ddot{x}(t_k)(t_{k+1}-t_k) + \text{remainder}$

or $\dot{x}_{k+1} = \dot{x}_k + \ddot{x}_k T + \text{remainder}$   (4.1.6)

Proceeding with the second derivative of $x(t_{k+1})$, there results:

$\ddot{x}(t_{k+1}) = \ddot{x}(t_k) + \dddot{x}(t_k)(t_{k+1}-t_k) + \text{remainder}$

or $\ddot{x}_{k+1} = \ddot{x}_k + \text{remainder}$   (4.1.7)

From Equations 4.1.5-4.1.7, the elements of the tracking matrix
are easily recognized. In the physical sense:

$x(t_{k+1})$ denotes position
$\dot{x}(t_{k+1})$ denotes velocity
$\ddot{x}(t_{k+1})$ denotes acceleration

These quantities are the states of the assumed model.
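As a minimal illustrative sketch (the helper function below is hypothetical, not part of the thesis), the tracking matrix of Equation 4.1.1 can be built for an arbitrary sampling interval as follows.

```python
import numpy as np

def tracking_matrix(T: float) -> np.ndarray:
    """Taylor-series tracking (state transition) matrix for the
    position/velocity/acceleration state of Equation 4.1.1."""
    return np.array([
        [1.0, T, T**2 / 2.0],   # x_{k+1}   = x_k + x'_k T + x''_k T^2/2
        [0.0, 1.0, T],          # x'_{k+1}  = x'_k + x''_k T
        [0.0, 0.0, 1.0],        # x''_{k+1} = x''_k (constant acceleration)
    ])

print(tracking_matrix(0.5))
```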


The tracking matrix is able to "follow," with very good accuracy,
targets moving with constant acceleration, implying that the series
expansion will be able to fit the polynomial trajectory of the target closely,
expanding in the neighborhood of the center point $t_k$ ($t_0$).
Considering the first three terms of the Taylor series, the tracking
will be exact if the function can be regenerated after a few terms. For the
case of a parabolic trajectory, the third term (second derivative)
regenerates the parabolic function. For $x(t) = t^2$ about $t_0 = 0$:

$x(t) = t^2\big|_{t_0=0} + 2t\big|_{t_0=0}(t - t_0) + 2\big|_{t_0=0}\frac{(t-t_0)^2}{2}$

$\Rightarrow x(t) = t^2$   (4.1.8)

or

$\ddot{x}(t) = 2 \Rightarrow x(T) = T^2$   (4.1.9)

where T is the sampling interval.


In the extreme situation where the path of the target is a sinusoidal
function, the tracking process becomes somewhat pathological. Even
though additional derivative terms in the expansion generally give
better results, this is not true for the sinusoidal case, since the expansion
is about $t_0 = 0$, the sinusoid moves away from zero, and
this type of function is infinitely differentiable. Both cases, parabolic
and sinusoidal, will be examined in the next chapter, since it is of interest
to demonstrate the behavior of the Kalman tracking filter while it tracks a
noisy trajectory, especially in the sinusoidal case.
4.2 Harmonic Oscillations
Consider the vibration of a weight attached to a spring which
hangs downward from a support.
[Figure 4.1 sketch: a block suspended from a spring, oscillating about the equilibrium point 0.]
FIGURE 4.1 HARMONIC OSCILLATIONS IN ONE PLANE
If the block is moved away from its equilibrium point 0, the motion
of the block is described by a differential equation with its associated
initial conditions. This mechanical problem involves simple harmonic
motion.
Assuming that the motion of block A takes place entirely in a
vertical plane, the velocity and acceleration are given by the first and
second derivatives of x(t) with respect to time t (seconds). Additionally,
there is a restoring force F which is proportional to the distance from the
equilibrium point, and which is directed towards point 0 (Figure 4.1).
In addition to the above, there will be, in general, a retarding force,
which is created by the medium in which the motion occurs. Also a
possible impressed force may be present, such as the motion of the
support or the existence of some magnetic field. If the above parameters
are to be considered in the model, the describing differential equation
would assume the following form:
$m\ddot{x}(t) + B\dot{x}(t) + Kx(t) = F(t)$   (4.2.1)
If the retarding force is proportional to the cube of the velocity, then
Equation 4.2.1 becomes highly non-linear.
For present purposes, both the retarding and the impressed forces
are assumed to be zero. In such a case, Equation 4.2.1 becomes a
second order linear homogeneous differential equation with constant
coefficients of the form:


$m\ddot{x}(t) + Kx(t) = 0$

$\Rightarrow \ddot{x}(t) + \frac{K}{m}x(t) = 0$

$\Rightarrow \ddot{x}(t) + \omega^2 x(t) = 0$   (4.2.2)

or $\ddot{x} = -\omega^2 x$   (4.2.3)

where $\omega = \sqrt{K/m}$, and with initial conditions

$x(0) = x_0$
$\dot{x}(0) = v_0$

Solving Equation 4.2.3, there results:

$x(t) = A\sin(\omega t + \phi)$

or $x(t) = A\cos\omega t + B\sin\omega t$   (4.2.4)

Using the given initial conditions with Equation 4.2.4, the
coefficients are obtained as follows:


$x(t) = A\cos\omega t + B\sin\omega t$

For $t_0 = 0$, $x(t_0) = A$, and

$\dot{x}(t) = -A\omega\sin\omega t + \omega B\cos\omega t$   (4.2.5)

$\dot{x}(t_0) = \omega B \Rightarrow B = \frac{\dot{x}(t_0)}{\omega}$

Now consider:

$x(t) = x(t_0)\cos\omega(t-t_0) + \frac{\dot{x}(t_0)}{\omega}\sin\omega(t-t_0)$   (4.2.6)

Let $t \to t_{k+1}$, $t_0 \to t_k$, $t - t_0 = t_{k+1} - t_k = T$; then

$x(t_{k+1}) = x(t_k)\cos\omega T + \frac{\dot{x}(t_k)}{\omega}\sin\omega T$

or

$x_{k+1} = x_k\cos\omega T + \frac{\dot{x}_k}{\omega}\sin\omega T$   (4.2.7)
Equation 4.2.7 can be described and formulated in the following
manner:

$\begin{bmatrix} x_{k+1} \\ \dot{x}_{k+1} \\ \ddot{x}_{k+1} \end{bmatrix} = \begin{bmatrix} 1 & T & \frac{T^2}{2} \\ 0 & 1 & T \\ 0 & 0 & 1 \end{bmatrix}\begin{bmatrix} x_k \\ \dot{x}_k \\ \ddot{x}_k \end{bmatrix}$   (4.2.8)

Recall matrix Equation 4.1.1, realizing that:

$\cos(x) = \sum_{n=0}^{\infty}\frac{(-1)^n x^{2n}}{(2n)!}$   (4.2.9)

$\cos(\omega T) = \sum_{n=0}^{\infty}\frac{(-1)^n(\omega T)^{2n}}{(2n)!}$   (4.2.10)

$\sin(x) = \sum_{n=0}^{\infty}\frac{(-1)^n x^{2n+1}}{(2n+1)!}$   (4.2.11)

$\sin(\omega T) = \sum_{n=0}^{\infty}\frac{(-1)^n(\omega T)^{2n+1}}{(2n+1)!}$   (4.2.12)

Expanding,

$\cos(\omega T) \approx 1 - \frac{\omega^2T^2}{2!} + \frac{\omega^4T^4}{4!} - \frac{\omega^6T^6}{6!} + \cdots$

$\sin(\omega T) \approx \omega T - \frac{\omega^3T^3}{3!} + \frac{\omega^5T^5}{5!} - \frac{\omega^7T^7}{7!} + \cdots$

Then, truncating,

$\cos(\omega T) \cong 1 - \frac{\omega^2T^2}{2!}$   (4.2.13)


and

$\sin(\omega T) \cong \omega T$   (4.2.14)

Substituting Equations 4.2.13 and 4.2.14 into Equation 4.2.7,
there results:

$x(t_{k+1}) = x(t_k)\left(1 - \frac{\omega^2T^2}{2}\right) + \frac{\dot{x}(t_k)}{\omega}\,\omega T = x(t_k) - \omega^2 x(t_k)\frac{T^2}{2} + \dot{x}(t_k)T$

or

$x_{k+1} = x_k - \omega^2 x_k\frac{T^2}{2} + \dot{x}_k T$

but $\ddot{x}_k = -\omega^2 x_k$; then

$x_{k+1} = x_k + \dot{x}_k T + \ddot{x}_k\frac{T^2}{2}$   (4.2.15)

Differentiating Equation 4.2.7 and proceeding in the same manner:

$\dot{x}(t_{k+1}) = -\omega x(t_k)\sin\omega T + \dot{x}(t_k)\cos\omega T = -\omega x(t_k)[\omega T] + \dot{x}(t_k)\left(1 - \frac{\omega^2T^2}{2}\right)$

$= -\omega^2 x(t_k)T + \dot{x}(t_k) - \dot{x}(t_k)\frac{\omega^2T^2}{2} = \dot{x}(t_k) + \ddot{x}(t_k)T - \dot{x}(t_k)\frac{\omega^2T^2}{2}$

Taking the first two terms and recalling that $\ddot{x}(t_k) = -\omega^2 x(t_k)$:

$\dot{x}_{k+1} = \dot{x}_k + \ddot{x}_k T$   (4.2.16)

Taking the second derivative of Equation 4.2.7:

$\ddot{x}(t_{k+1}) = -\omega^2 x(t_k)\cos\omega T - \omega\dot{x}(t_k)\sin\omega T = -\omega^2 x(t_k)\left(1 - \frac{\omega^2T^2}{2}\right) - \omega\dot{x}(t_k)\,\omega T$

$= -\omega^2 x(t_k) + \omega^4 x(t_k)\frac{T^2}{2} - \omega^2\dot{x}(t_k)T$

Choosing the first term of the previous expression:

$\ddot{x}(t_{k+1}) = -\omega^2 x(t_k) \Rightarrow \ddot{x}(t_{k+1}) = \ddot{x}(t_k)$

or

$\ddot{x}_{k+1} = \ddot{x}_k$   (4.2.17)


Then the structure of Equation 4.2.8 can be easily recognized
from Equations 4.2.15-4.2.17.
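The effect of this truncation can be checked numerically. The sketch below (an illustrative aside, not from the thesis) propagates the exact discretization of Equation 4.2.7 and its truncated Taylor counterpart (Equations 4.2.15 and 4.2.16, with $\ddot{x}_k = -\omega^2 x_k$) side by side; the truncation error accumulates with each step.

```python
import numpy as np

omega, T = np.pi / 5.0, 0.5        # illustrative values
# Exact discretization of the harmonic oscillator (Equations 4.2.6-4.2.7)
exact = np.array([
    [np.cos(omega * T), np.sin(omega * T) / omega],
    [-omega * np.sin(omega * T), np.cos(omega * T)],
])
# Truncated Taylor form acting on [x, x'], using x'' = -omega^2 x
taylor = np.array([
    [1 - (omega * T) ** 2 / 2, T],
    [-omega ** 2 * T, 1.0],
])
x0 = np.array([1.0, 0.0])          # initial position 1, velocity 0
xe, xt = x0.copy(), x0.copy()
for k in range(20):
    xe, xt = exact @ xe, taylor @ xt
print("exact:", xe, "taylor:", xt)  # divergence grows with the step count
```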
In the next chapter, the applicability of the Kalman tracking filter
equation is demonstrated for a target, such as an aircraft, moving at first
with constant acceleration (parabolic case) and then with time varying
acceleration (sinusoidal case).
The motion of the vehicle is perturbed by zero mean white
Gaussian plant noise, which accounts for maneuvers or other random
factors such as wind gusts. Moreover, the measurement process is also
corrupted by zero mean white Gaussian noise, which accounts for the
observation noise.
Then the vehicle dynamics are assumed to be described by the
following state vector equations:

$x_{k+1} = \phi_{k+1,k}\,x_k + \Gamma_k w_k$   (4.2.18)

where

$x_k^T = [x_k \ \ \dot{x}_k \ \ \ddot{x}_k]$   (4.2.19)

$w_k^T = [w^{(1)} \ \ w^{(2)} \ \ w^{(3)}]$   (4.2.20)

$\Gamma_k^T = [0 \ \ 0 \ \ 1]$   (4.2.21)

The measurement equation is:

$z_{k+1} = H_{k+1}\,x_{k+1} + v_{k+1}$   (4.2.23)

where $H_{k+1} = [1 \ \ 0 \ \ 0]$   (4.2.24)

The matrix equation 4.2.22 is used in the Kalman filter equation so
that proper estimation and tracking are accomplished.


CHAPTER 5
STATE ESTIMATION APPLICATION VIA THE KALMAN
TRACKING FILTER
5.1 Parabolic Trajectory
For the parabolic trajectory, the general phase variable
formulation is developed in the following way.

Letting $y = t^2$:   (5.1.1)

$x_1 = t^2, \quad \dot{x}_1 = x_2$   (5.1.2)

$x_2 = 2t, \quad \dot{x}_2 = 2$   (5.1.3)

Since $\dot{x} = Ax + Bu$:

$\begin{bmatrix} \dot{x}_1 \\ \dot{x}_2 \end{bmatrix} = \begin{bmatrix} 0 & 1 \\ 0 & 0 \end{bmatrix}\begin{bmatrix} x_1 \\ x_2 \end{bmatrix} + \begin{bmatrix} 0 \\ 1 \end{bmatrix}(2 + w)$   (5.1.4)

For the discretization process:

$\phi = e^{AT}$   (5.1.5)

$\Gamma = \int_0^T e^{A\lambda}\,d\lambda\; B$   (5.1.6)

where

$B = \begin{bmatrix} 0 \\ 1 \end{bmatrix}$

The states are generated via formulation 5.1.4. Keeping in mind
Equations 5.1.4-5.1.6, then:

$x_{k+1} = \phi_{k+1,k}\,x_k + \Gamma_k w_k$   (5.1.7)

$z_{k+1} = H_{k+1}\,x_{k+1} + v_{k+1}$   (5.1.8)

where $H_{k+1} = [1 \ \ 0]$
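Equations 5.1.5 and 5.1.6 can also be evaluated numerically. The sketch below is illustrative and assumes NumPy/SciPy are available; for this particular A the results reduce to the closed forms $\phi = \begin{bmatrix}1 & T\\0 & 1\end{bmatrix}$ and $\Gamma = [T^2/2 \ \ T]^T$.

```python
import numpy as np
from scipy.linalg import expm
from scipy.integrate import quad_vec

T = 0.5
A = np.array([[0.0, 1.0],
              [0.0, 0.0]])
B = np.array([[0.0],
              [1.0]])

Phi = expm(A * T)                                            # Equation 5.1.5
Gamma, _ = quad_vec(lambda lam: expm(A * lam) @ B, 0.0, T)   # Equation 5.1.6
print(Phi)     # [[1.0, 0.5], [0.0, 1.0]]
print(Gamma)   # [[0.125], [0.5]]
```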
The Kalman tracking filter is applied within a certain range of process and
measurement noise deviations.
SDP = 0.001   SDM = 0.01   SNR = 198E3   T = 0.5
SDP = 0.001   SDM = 0.09   SNR = 2.5E3   T = 0.5
SDP = 0.001   SDM = 0.2    SNR = 500     T = 0.5
SDP = 0.001   SDM = 1.5    SNR = 8.8     T = 0.5
SDP = 0.01    SDM = 2      SNR = 1.8     T = 0.1
SDP = 2       SDM = 0.2    SNR = 4.9     T = 0.5
SDP = 5       SDM = 0.01   SNR = 0.8     T = 0.5
TABLE 5.1 SUMMARY OF NOISE DEVIATIONS AND SNR'S.


TRACKING TRAJECTORY
FIGURE 5.1 TARGET TRAJECTORY WITH SDP = 0.001, SDM = 0.01, T = 0.5


TRACKING TRAJECTORY
FIGURE 5.2 TARGET TRAJECTORY WITH SDP = 0.001, SDM = 0.09, T = 0.5


FIGURE 5.3 PARABOLIC TRAJECTORY WITH NOISE SDP = 0.001, SDM = 0.2, T = 0.5, SNR = 250


FIGURE 5.4 INNOVATION PROCESS WITH SDP = 0.001, SDM = 0.2, T = 0.5, PARABOLIC CASE


TRACKING TRAJECTORY
FIGURE 5.5 TARGET TRAJECTORY WITH SDP = 0.001, SDM = 1.5, T = 0.5


TRACKING TRAJECTORY
FIGURE 5.6 TARGET TRAJECTORY (DOTTED) WITH SDP = 0.01, SDM = 2, T = 0.1, SNR = 5


TRACKING TRAJECTORY
FIGURE 5.7 TARGET TRAJECTORY WITH SDP = 2, SDM = 0.2, T = 0.5


TRACKING TRAJECTORY
FIGURE 5.8 TARGET TRAJECTORY (DOTTED) WITH SDP = 5, SDM = 0.01, T = 0.5


5.2 Sinusoidal Trajectory
For the sinusoidal case, which is of special interest, the phase
variable representation develops in the following way. Consider, at first, the
second order differential equation, which is repeated here for convenience:

$\ddot{x}(t) + \omega^2 x(t) = 0$   (5.2.1)

The sinusoidal trajectory is generated via the phase variable state
model of 5.2.1. Let:

$x_1 = x, \quad \dot{x}_1 = x_2$   (5.2.2)

$x_2 = \dot{x}, \quad \dot{x}_2 = -\omega^2 x_1$   (5.2.3)

Since $\dot{x} = Ax + Bu$:

$\begin{bmatrix} \dot{x}_1 \\ \dot{x}_2 \end{bmatrix} = \begin{bmatrix} 0 & 1 \\ -\omega^2 & 0 \end{bmatrix}\begin{bmatrix} x_1 \\ x_2 \end{bmatrix} + \begin{bmatrix} 0 \\ 1 \end{bmatrix}u$   (5.2.4)

Similarly, for the discretization process:

$\phi \triangleq e^{AT}$   (5.2.5)

$\Gamma = \int_0^T e^{A\lambda}\,d\lambda\; B$   (5.2.6)

Then 5.2.4 becomes:

$x_{k+1} = \phi_{k+1,k}\,x_k + \Gamma_k w_k$   (5.2.8)


and

$z_{k+1} = H_{k+1}\,x_{k+1} + v_{k+1}$   (5.2.9)

where $H_{k+1} = [1 \ \ 0]$   (5.2.10)

Note that

$z_{k+1} = [1 \ \ 0]\begin{bmatrix} x_1 \\ x_2 \end{bmatrix}_{k+1} + v_{k+1} \;\Rightarrow\; z_{k+1} = x_{k+1} + v_{k+1}$   (5.2.11)

Also, the $(-\omega^2)$ quantity of 5.2.4 is obtained from

$\omega = \frac{2\pi}{T_{sig}} = \frac{2\pi}{20\,T_{sam}} = \frac{\pi}{10\,T_{sam}}$

and

$-\omega^2 = -\frac{\pi^2}{100\,T^2}$
The Kalman tracking filter equation is applied to the sinusoidal case
for the following range of process and measurement noise deviations.
SDP = 0.01   SDM = 0.001   SNR = 39.6E3   T = 0.5
SDP = 0.01   SDM = 0.1     SNR = 396      T = 0.5
SDP = 0.01   SDM = 1.5     SNR = 1.7      T = 0.7
SDP = 0.01   SDM = 0.01    SNR = 396      T = 0.5
SDP = 1.0    SDM = 0.01    SNR = 4        T = 0.5
SDP = 0.1    SDM = 0.1     SNR = 1        T = 0.5
SDP = 0.1    SDM = 0.3     SNR = 40       T = 0.5
TABLE 5.2 SUMMARY OF NOISE DEVIATIONS AND SNR'S


TRACKING TRAJECTORY
FIGURE 5.9 TARGET TRAJECTORY WITH SDP = 0.01, SDM = 0.001, T = 0.5


TRACKING TRAJECTORY
FIGURE 5.10 TARGET TRAJECTORY WITH SDP = 0.01, SDM = 0.1, T = 0.5


TARGET TRAJECTORY
FIGURE 5.11 TRACKING TRAJECTORY (DOTTED) WITH SDP = 0.01, SDM = 1.5, T = 0.7, SNR = 3.5


FIGURE 5.12 INNOVATION PROCESS SDP = 0.01, SDM = 1.5, T = 0.5, SNR = 3.5


FIGURE 5.13 TARGET TRAJECTORY WITH SDP = 0.1, SDM = 0.01, T = 0.5

FIGURE 5.14 INNOVATION PROCESS WITH SDP = 0.1, SDM = 0.01, T = 0.5, SINUSOIDAL CASE

FIGURE 5.15 TARGET TRAJECTORY WITH SDP = 1.0, SDM = 0.01, T = 0.5


FIGURE 5.16 TARGET TRAJECTORY (DOTTED) WITH SDP = 2, SDM = 0.1, T = 0.5, SNR = 2

FIGURE 5.17 NOISY SIGNAL z(k+1) AND ESTIMATED STATE SDP = 0.1, SDM = 0.3, T = 0.5

5.3 Discussion
From the simulations of 5.1 and 5.2, the applicability and
usefulness of the Kalman tracking filter can be easily recognized.
Comparing the two cases, it is possible to draw many different kinds of
conclusions and explanations.
Probably the most interesting ones are related to the consistency
of the filter performance for the parabolic case and the expected error
that is introduced in the sinusoidal case. The results strongly corroborate
the theory presented in Chapter 2 relating to the importance of the
classification set.
Recall that correct definition and realization of the classification
set are mandatory and unavoidable. The parabolic situation verifies the
above notion, since the overall tracking process behaves according to the
laws of this set.
The same observation can be made also for the sinusoidal case,
where an error is introduced due to the approximating nature of the
system dynamics matrix or, in other words, the tracking matrix.
In addition to the above, the quantity that couples the results with
the theoretical aspects of Kalman filtering is the innovation process. It is
very important to realize that for the parabolic case the error is zero mean
and unbiased, which is in agreement with the Kalman filter theory. On
the other hand, in the sinusoidal case this weighted quantity, or residual,


possesses some periodicity (refer to Figures 5.4 and 5.12 or 5.14). This
periodicity can be explained via the classification set: since the error is
no longer a purely random, convergent, zero mean, unbiased quantity in the
sinusoidal case, it can be said that there are some trends in the
identification part, due to the state transition matrix of the Kalman filter
trying to approximate an infinitely differentiable function via the Taylor
series expansion.


CHAPTER 6
CONCLUSION
The Kalman tracking filter was proved to be a good systematic
state space approach for tracking moving targets which are perturbed by
plant noise. Additionally, in this work, the capability of estimating the
state (position) of a moving vehicle from noisy measurement data was
demonstrated.
Both applications presented some of the limitations of the filter for
high measurement noise. Moreover, the extreme situation of a parabolic
trajectory provided a verification for the importance of correct
mathematical modeling.
The most stimulating part is probably the ability of the Kalman filter
to act as a two-fold algorithm; namely as a state estimator and as a
tracker. The latter concept is of greater interest, from a controls point of
view. It is necessary to emphasize that the algorithm, loosely speaking,
doesn't know anything about the target dynamics; nevertheless, the
tracking matrix is able to approximate the unknown system coefficients of
the vehicle via the Taylor series expansion terms.


An area of concern is the sensitivity of the tracking filter for low
signal-to-noise ratios. Recalling the sinusoidal case, it seems that the
introduction of measurement noise into the process creates problems for
the overall tracking behavior.
A probable area of further research would be to identify the matrix
parameters more correctly, especially when the problem of time varying
acceleration occurs. Since time varying acceleration implies certain
nonlinearities, a better matrix may be developed that would depend on
different types of functional approximations; numerical mathematics
offers a wide variety of functional relationships that could be used for
data fitting, such as the Bessel expansion coefficients, the Hermite and
Lagrange polynomials, the Laguerre and Legendre functions, and
others. Perhaps a tracking matrix, with coefficients derived from the
above set of functions, could provide the Kalman tracking filter with
higher robustness in the presence of noise.
Another important issue is the minimization of the real-time
execution of the Kalman tracking filter equations, especially when the
filter is extended to three-dimensional coordinate systems (coupling
between Cartesian and polar coordinates).
Finally, the great importance and the mechanics of state space
mathematics as applied to estimation and tracking became apparent
through this work.


BIBLIOGRAPHY
1. Ogata, K., Modern Control Engineering. Prentice-Hall, Inc., Englewood Cliffs, NJ, 1970.
2. Gear, William C., Numerical Initial Value Problems in Ordinary Differential Equations. Prentice-Hall, Inc., Englewood Cliffs, NJ, 1971.
3. Burden, Richard L., J. Douglas Faires, Albert C. Reynolds, Numerical Analysis (Second Edition). Prindle, Weber & Schmidt, Boston, MA, 1981.
4. Tolstov, Georgi P., Fourier Series. Dover Publications, Inc., NY, 1962.
5. Gelb, A., Applied Optimal Estimation. The MIT Press, Cambridge, MA, 1974.
6. Roden, M.S., Introduction to Communication Theory. Pergamon Press, Inc., 1972.
7. Gagliardi, R., Introduction to Communications Engineering. John Wiley & Sons, 1978.
8. Priestley, M.B., Spectral Analysis and Time Series. Academic Press, Harcourt Brace Jovanovich, 1981.
9. Papoulis, A., Probability, Random Variables and Stochastic Processes. McGraw-Hill Book Company, 1984.
10. Beyer, W.H., CRC Standard Mathematical Tables (27th Edition). CRC Press, Inc., 1984.
11. Houpis, C.H., J.J. D'Azzo, Linear Control System Analysis and Design (Conventional and Modern). McGraw-Hill Book Company, 1981.
12. Ramachandra, K.V., State Estimation of Maneuvering Targets from Noisy Radar Measurements, IEE Proceedings F, Communications, Radar and Signal Processing, Volume 135, Number 1, pp. 82-84, February 1988.
13. Ramachandra, K.V., Position, Velocity and Acceleration Estimates from the Noisy Radar Measurements, IEE Proceedings F, Communications, Radar and Signal Processing, Volume 131, Number 2, pp. 167-168, 1984.


APPENDIX A
Formal Derivation via Autocorrelation
For the power of $X(t) = A\sin(\omega t + \phi)$ we need to consider the
following:

$P_{avg} = \frac{1}{2\pi}\int_{-\infty}^{\infty} S_{xx}(\omega)\,d\omega$   (A.1)

$S_{xx}(\omega) = \int_{-\infty}^{\infty} R_{xx}(\tau)\,e^{-j\omega\tau}\,d\tau$   (A.2)

$R_{xx}(\tau) = \frac{1}{2\pi}\int_{-\infty}^{\infty} S_{xx}(\omega)\,e^{j\omega\tau}\,d\omega$   (A.3)

$S_{xx}(\omega)$: even spectrum.

$R_{xx}(0) = \frac{1}{2\pi}\int_{-\infty}^{\infty} S_{xx}(\omega)\,d\omega \;\Rightarrow\; R_{xx}(0) = \frac{1}{\pi}\int_{0}^{\infty} S_{xx}(\omega)\,d\omega$   (A.4)

$R_{xx}(\tau) = E\{x(t)\,x(t+\tau)\} = \lim_{T\to\infty}\frac{1}{2T}\int_{-T}^{T} x(t)\,x(t+\tau)\,dt$   (A.5)


From $X(t) = A\sin(\omega t + \phi)$ and Equation A.5:

$R_{xx}(\tau) = \lim_{T\to\infty}\frac{A^2}{2T}\int_{-T}^{T}\sin(\omega t + \phi)\,\sin(\omega t + \omega\tau + \phi)\,dt$   (A.6)

From trigonometry tables:

$\sin(\omega t + \phi)\sin(\omega t + \omega\tau + \phi) = \frac{1}{2}\cos(\omega\tau) - \frac{1}{2}\cos(2\omega t + \omega\tau + 2\phi)$   (A.7)

From Equations A.6 and A.7:

$R_{xx}(\tau) = \lim_{T\to\infty}\frac{A^2}{4T}\int_{-T}^{T}\left[\cos(\omega\tau) - \cos(2\omega t + \omega\tau + 2\phi)\right]dt$

Letting $u = 2\omega t + \omega\tau + 2\phi$ and $du = 2\omega\,dt$:

$R_{xx}(\tau) = \lim_{T\to\infty}\frac{A^2}{4T}\left[2T\cos(\omega\tau) - \frac{1}{2\omega}\sin(2\omega T + \omega\tau + 2\phi) + \frac{1}{2\omega}\sin(-2\omega T + \omega\tau + 2\phi)\right]$

Since $\sin(\cdot)$ is bounded by 1, the sine terms vanish in the limit; then

$R_{xx}(\tau) = \lim_{T\to\infty}\frac{2A^2T}{4T}\cos\omega\tau \;\Rightarrow\; R_{xx}(\tau) = \frac{A^2}{2}\cos\omega\tau$

Evaluating $R_{xx}(\tau)\big|_{\tau=0} \Rightarrow R_{xx}(0) = \frac{A^2}{2}$   (A.8)
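A quick numerical check of A.8 (an illustrative aside, not part of the thesis): the time-averaged power of a sinusoid should approach $A^2/2$.

```python
import numpy as np

A, omega, phi = 2.0, 2 * np.pi * 0.5, 0.3   # illustrative values
t = np.linspace(0.0, 1000.0, 2_000_001)
x = A * np.sin(omega * t + phi)
print(np.mean(x * x), A**2 / 2)             # both approach A^2/2 = 2.0
```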
Parabolic Case

$y = kt^2$, where k is a normalizing factor (k = 0.001 here).

The signal power over one period T is:

$S = \frac{1}{T}\int_0^T (kt^2)^2\,dt = \frac{k^2T^4}{5}$   (A.9)

so that

$\frac{S}{N} = \frac{k^2T^4/5}{\sigma_w^2 + \sigma_v^2}$   (A.10)

where T = 100 sec (period), meaning that for 200 points, (0.5)(200) = 100,
i.e., $T_s = 0.5$ sec; similarly, (0.1)(200 points) gives T = 20 sec.

Sinusoidal Case

$\frac{S}{N} = \frac{A^2/2}{\sigma_w^2 + \sigma_v^2}$   (A.11)

$A = \sqrt{x^2 + y^2} = \sqrt{4 + 4} = \sqrt{8}$, so $\frac{A^2}{2} = 4$   (A.12)

Realizing that, for independent noise processes, and since for this case
$H_{k+1} = [1 \ \ 0 \ \ 0]$:

$\sigma_z^2 = \sigma_w^2 + \sigma_v^2$


APPENDIX B
The sampling interval, according to Shannon's Theorem, is

$T_{sam} \le \frac{T_{sig}}{2}$   (B.1)

since $f_{sam} \ge 2 f_{sig}$, where $f_{sam} = \frac{1}{T_{sam}}$ and $f_{sig} = \frac{1}{T_{sig}}$; then $T_{sig} \ge 2T_{sam} \Rightarrow T_{sam} \le \frac{T_{sig}}{2}$.

For practical control systems, the sampling range is:

$\frac{T_{sig}}{20} \le T_{sam} \le \frac{T_{sig}}{10}$   (B.2)

and

$\omega = \frac{2\pi}{T_{sig}} = \frac{2\pi}{20\,T_{sam}} = \frac{\pi}{10\,T_{sam}}$   (B.3)

$X(t) = A\cos\omega t + B\sin\omega t = A\cos\left(\frac{\pi}{10\,T_{sam}}\right)t + B\sin\left(\frac{\pi}{10\,T_{sam}}\right)t$

For $T = 0.5 \Rightarrow f = 2$ Hz, $\omega = 2\pi f \approx 12.6$ rad/sec. For $T = 0.7 \Rightarrow f \approx 0.5$ Hz, $\omega \approx 1.6$ rad/sec.
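As a small illustrative sketch (not from the thesis), the angular frequency implied by the rule of Equation B.3 can be computed as:

```python
import math

def omega_from_sampling(T_sam: float) -> float:
    """Angular frequency implied by T_sig = 20 * T_sam (Equation B.3)."""
    return math.pi / (10.0 * T_sam)

for T_sam in (0.5, 0.7):
    print(T_sam, omega_from_sampling(T_sam))
```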


APPENDIX C
For the discretization process (numerical evaluation),

$\dot{x}(t) = Ax(t) + Bw(t)$   (C.1)

$\phi \triangleq e^{AT} = I + AT + \frac{A^2T^2}{2!} + \frac{A^3T^3}{3!} + \cdots$   (C.2)

$\Gamma = \int_0^T e^{A\lambda}\,d\lambda\; B = \int_0^T\left(I + A\lambda + \frac{A^2\lambda^2}{2!} + \frac{A^3\lambda^3}{3!} + \cdots\right)d\lambda\; B$

$\Rightarrow \Gamma = \left(IT + \frac{AT^2}{2} + \frac{A^2T^3}{6} + \frac{A^3T^4}{24} + \cdots\right)B$   (C.3)

Then C.1 becomes:

$x_{k+1} = \phi_{k+1,k}\,x_k + \Gamma_k w_k$   (C.4)
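A minimal numerical sketch of C.2 and C.3 (illustrative, assuming NumPy), truncating the series after a few terms:

```python
import numpy as np
from math import factorial

def discretize(A: np.ndarray, B: np.ndarray, T: float, terms: int = 5):
    """Truncated series for phi = e^{AT} (C.2) and Gamma = (IT + AT^2/2 + ...)B (C.3)."""
    phi = sum(np.linalg.matrix_power(A, k) * T**k / factorial(k)
              for k in range(terms))
    gamma = sum(np.linalg.matrix_power(A, k) * T**(k + 1) / factorial(k + 1)
                for k in range(terms)) @ B
    return phi, gamma

# Double integrator example: A = [[0,1],[0,0]], B = [0,1]^T, T = 0.5
phi, gamma = discretize(np.array([[0.0, 1.0], [0.0, 0.0]]),
                        np.array([[0.0], [1.0]]), 0.5)
print(phi)    # [[1, 0.5], [0, 1]]
print(gamma)  # [[0.125], [0.5]]
```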