The red lines in Figure 9 correspond to “ground truth” in our example.
The green points in Figure 9b show
the noisy measurements of velocity
at different time steps, assuming the
noise is modeled by a Gaussian with
variance 8. The blue lines show the
a posteriori estimates of the velocity
and position. It can be seen that the a
posteriori estimates track the ground
truth quite well even when the ideal
system model (the gray lines) is inaccurate and the measurements are
noisy. The cyan bars in the right figure
show the variance of the velocity at different time steps. Although the initial
variance is quite large, application of
Kalman filtering is able to reduce it
rapidly within a few time steps.
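As a rough illustration of how quickly this variance shrinks, the following Python sketch iterates the scalar predict/update recursion for the velocity variance. Only the measurement-noise variance of 8 comes from the text; the initial variance and the process-noise variance q are assumptions made for the sketch.

# Minimal sketch: how the a posteriori variance of the velocity shrinks
# under repeated predict/update steps. Only R = 8 comes from the text;
# the initial variance and q below are illustrative assumptions.
R = 8.0      # variance of the velocity measurement noise (from Figure 9b)
q = 1.0      # assumed process-noise variance added at each prediction
var = 100.0  # assumed large initial variance of the velocity estimate

for step in range(1, 11):
    var = var + q              # predict: model uncertainty grows
    var = var * R / (var + R)  # update: fuse with a measurement of variance R
    print(f"step {step}: a posteriori velocity variance = {var:.3f}")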
Discussion. We have shown that
Kalman filtering for state estimation
in linear systems can be derived from
two elementary ideas: optimal linear
estimators for fusing uncorrelated
estimates and best linear unbiased
estimators for correlated variables.
This is a different approach to the subject from the standard presentations in the literature. One standard approach is to use Bayesian inference. The other approach is to assume that the a posteriori state estimator is a linear combination of the form $A_t\,\hat{x}_{t|t-1} + B_t\,z_t$, and then find the values of $A_t$ and $B_t$ that produce an unbiased estimator with minimum MSE. We
believe that the advantage of the presentation given here is that it exposes
the concepts and assumptions that
underlie Kalman filtering.
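To make the comparison concrete, here is a sketch of that second derivation in standard notation; the symbols are the usual ones and are not taken verbatim from any particular presentation. Writing the assumed linear form as

$$\hat{x}_{t|t} = A_t\,\hat{x}_{t|t-1} + B_t\,z_t, \qquad z_t = H_t x_t + v_t,$$

and assuming the a priori estimate $\hat{x}_{t|t-1}$ is itself unbiased, unbiasedness of $\hat{x}_{t|t}$ for every possible state forces $A_t + B_t H_t = I$, that is, $A_t = I - B_t H_t$. Substituting this back gives

$$\hat{x}_{t|t} = \hat{x}_{t|t-1} + B_t\,\bigl(z_t - H_t\,\hat{x}_{t|t-1}\bigr),$$

and choosing $B_t$ to minimize the MSE of this estimator yields the Kalman gain.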
Most presentations in the literature
also begin by assuming that the noise
terms wt in the state evolution equation
and vt in the measurement equation are Gaussian.
Although some presentations1, 10 use
properties of Gaussians to derive the
results in Figure 3, these results do not depend on the distributions being Gaussian. Gaussians, however, enter the picture in a deeper way if one considers
nonlinear estimators. It can be shown
that if the noise terms are not
Gaussian, there may be nonlinear
estimators whose MSE is lower than
that of the linear estimator presented
in Figure 6d. However, if the noise is
Gaussian, this linear estimator is as
good as any unbiased nonlinear estimator (that is, the linear estimator is a
minimum variance unbiased estimator).
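A small simulation can make this concrete; the setup below (uniform measurement noise, and the midrange as the competing nonlinear estimator) is our own illustrative choice, not an example from the article.

# Illustration (not from the article): with non-Gaussian noise, a nonlinear
# estimator can have lower MSE than the best linear unbiased one. Here the
# unknown x is estimated from n measurements corrupted by uniform noise;
# the sample mean is the linear estimator, the midrange is nonlinear.
import numpy as np

rng = np.random.default_rng(0)
x_true, n, trials = 5.0, 20, 20000

z = x_true + rng.uniform(-1.0, 1.0, size=(trials, n))   # uniform, not Gaussian
mean_est = z.mean(axis=1)                                # linear estimator
midrange_est = 0.5 * (z.min(axis=1) + z.max(axis=1))     # nonlinear estimator

print("MSE of sample mean:", np.mean((mean_est - x_true) ** 2))
print("MSE of midrange   :", np.mean((midrange_est - x_true) ** 2))
# The midrange's MSE shrinks like 1/n^2 while the mean's shrinks like 1/n.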
Note that the Kalman gain is not a dimensionless value here. If Ht = I, the computations in Figure 6d reduce to those of Figure 6c, as expected.
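For reference, the general Kalman filter update has the standard form below; we write it in our own notation ($\Sigma$ for the state-estimate covariance, $R_t$ for the measurement-noise covariance), which may differ from that of the article's figures:

$$K_t = \Sigma_{t|t-1} H_t^{\mathsf T}\bigl(H_t \Sigma_{t|t-1} H_t^{\mathsf T} + R_t\bigr)^{-1}, \qquad \hat{x}_{t|t} = \hat{x}_{t|t-1} + K_t\bigl(z_t - H_t\hat{x}_{t|t-1}\bigr), \qquad \Sigma_{t|t} = (I - K_t H_t)\,\Sigma_{t|t-1}.$$

With $H_t = I$, the gain becomes $K_t = \Sigma_{t|t-1}(\Sigma_{t|t-1} + R_t)^{-1}$, the matrix analogue of precision-weighted scalar fusion, which is the reduction mentioned above.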
Equation 39 shows that the a posteriori state estimate is a linear combination of the a priori state estimate ($\hat{x}_{t|t-1}$) and the measurement ($z_t$). The
optimality of this linear unbiased
estimator is shown in the Appendix. It
was shown earlier that incremental
fusion of scalar estimates is optimal.
The dataflow of Figures 6(c,d) computes the a posteriori state estimate at
time t by incrementally fusing measurements from the previous time
steps, and this incremental fusion
can be shown to be optimal using a
similar argument.
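To see concretely what the optimality of incremental fusion means for scalar estimates, the short sketch below (our own example, with made-up values and variances) fuses three uncorrelated estimates either one at a time or all at once with precision weighting, and obtains the same result either way.

# Illustration (our own, not from the article): precision-weighted fusion of
# scalar estimates is associative, so fusing them incrementally, one at a
# time, gives the same result as fusing all of them in a single step.
estimates = [(10.0, 4.0), (12.0, 2.0), (11.0, 8.0)]   # (value, variance), made up

def fuse(a, b):
    """Optimal linear fusion of two uncorrelated unbiased scalar estimates."""
    (xa, va), (xb, vb) = a, b
    w = vb / (va + vb)                         # weight on the first estimate
    return (w * xa + (1.0 - w) * xb, va * vb / (va + vb))

# Incremental fusion: fold the estimates in one by one.
x_inc = estimates[0]
for e in estimates[1:]:
    x_inc = fuse(x_inc, e)

# Batch fusion: precision-weighted average of all estimates at once.
prec = sum(1.0 / v for _, v in estimates)
x_batch = (sum(x / v for x, v in estimates) / prec, 1.0 / prec)

print("incremental:", x_inc)
print("batch      :", x_batch)    # identical up to floating-point rounding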
Example: falling body. To demonstrate the effectiveness of the Kalman
filter, we consider an example in
which an object falls from the origin
at time t = 0 with an initial speed of 0 m/s and an expected constant acceleration of 9.8 m/s² due to gravity. Note that in reality the acceleration may not be constant, due to factors such as wind and air friction.
The state vector of the object contains two components, one for the distance from the origin s(t) and one for the velocity v(t). We assume that only the velocity state can be measured at each time step. If time is discretized in steps of 0.25 seconds, the difference equation for the dynamics of the system is easily shown to be the following (it follows from $s_{t+1} = s_t + \Delta t\, v_t + \tfrac{1}{2}\Delta t^2 a$ and $v_{t+1} = v_t + \Delta t\, a$ with $\Delta t = 0.25$ s):

$$\begin{pmatrix} s_{t+1} \\ v_{t+1} \end{pmatrix} = \begin{pmatrix} 1 & 0.25 \\ 0 & 1 \end{pmatrix} \begin{pmatrix} s_t \\ v_t \end{pmatrix} + \begin{pmatrix} 0.03125 \\ 0.25 \end{pmatrix} a + w_t,$$

where $a = 9.8$ m/s² is the expected acceleration due to gravity (taking the direction of fall as positive), $w_t$ is the process-noise term of Equation 32, and, because only the velocity is measured, the measurement matrix is $H_t = (0 \;\; 1)$.
The gray lines in Figure 9 show
the evolution of velocity and distance
with time according to this model.
Because of uncertainty in modeling the system dynamics, the actual evolution of the velocity and position will be different in practice. The red lines in Figure 9 show one trajectory for this evolution, corresponding to the Gaussian process-noise term $w_t$ in Equation 32 (because this noise term is random, there are many possible trajectories for the evolution, and we are showing just one of them).
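The sketch below puts the pieces of this example together in Python. It is a minimal illustration, not the code used to produce Figure 9: the process-noise covariance Q, the initial estimate, and the initial covariance are assumed values chosen for the sketch; the dynamics follow the difference equation above with a 0.25 s time step, only the velocity is measured, and the measurement-noise variance is 8 as in Figure 9b.

# Minimal Kalman-filter sketch for the falling-body example. Illustrative
# only: Q, the initial estimate, and the initial covariance are assumptions,
# not the values used to produce Figure 9.
import numpy as np

rng = np.random.default_rng(1)
dt, a = 0.25, 9.8                        # time step (s), gravity (m/s^2), fall direction positive
F = np.array([[1.0, dt], [0.0, 1.0]])    # state transition for [distance, velocity]
B = np.array([0.5 * dt**2, dt])          # effect of the constant acceleration
H = np.array([[0.0, 1.0]])               # only the velocity is measured
Q = 0.1 * np.eye(2)                      # assumed process-noise covariance
R = np.array([[8.0]])                    # measurement-noise variance (Figure 9b)

x_true = np.zeros(2)                     # starts at the origin, at rest
x_est, P = np.zeros(2), 100.0 * np.eye(2)  # assumed initial estimate and covariance

for t in range(40):                      # ten seconds of simulated fall
    # "Ground truth": the ideal model perturbed by random process noise.
    x_true = F @ x_true + B * a + rng.multivariate_normal(np.zeros(2), Q)
    z = H @ x_true + rng.normal(0.0, np.sqrt(R[0, 0]), size=1)   # noisy velocity

    # Predict (a priori estimate), then update (a posteriori estimate).
    x_pred = F @ x_est + B * a
    P_pred = F @ P @ F.T + Q
    K = P_pred @ H.T @ np.linalg.inv(H @ P_pred @ H.T + R)       # Kalman gain
    x_est = x_pred + (K @ (z - H @ x_pred)).ravel()
    P = (np.eye(2) - K @ H) @ P_pred

print("true  [distance, velocity]:", x_true)
print("estimate                  :", x_est)
print("a posteriori velocity variance:", P[1, 1])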