Vector estimates. In some applications,
estimates are vectors. For example, the
state of a mobile robot might be represented by a vector containing its position and velocity. Similarly, the vital
signs of a person might be represented
by a vector containing their temperature, pulse rate, and blood pressure.
Here, we denote a vector by a boldfaced
lowercase letter, and a matrix by a boldfaced uppercase letter.
The covariance matrix Σxx of a random variable x is the matrix E[(x − µx)(x − µx)^T], where µx is the mean of x.
Intuitively, entry (i, j) of this matrix
is the covariance between the ith and
jth components of vector x; in particular, entry (i, i) is the variance of the
ith component of x. A random variable x with a pdf p whose mean is
µx and covariance matrix is Σxx is
written as x ∼ p(µx, Σxx). The inverse
of the covariance matrix is called
the precision or information matrix.
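As a concrete illustration, both matrices can be estimated from samples; this is a minimal NumPy sketch using made-up numbers, not values from the article:

```python
import numpy as np

rng = np.random.default_rng(0)

# Draw many samples of a 2-component random vector with a known
# covariance structure (illustrative values).
true_cov = np.array([[2.0, 0.6],
                     [0.6, 1.0]])
L = np.linalg.cholesky(true_cov)
x = (L @ rng.standard_normal((2, 100_000))).T  # rows are samples

mu = x.mean(axis=0)
centered = x - mu
# Sample estimate of E[(x - mu)(x - mu)^T]
cov = centered.T @ centered / len(x)

# The precision (information) matrix is the inverse of the covariance.
precision = np.linalg.inv(cov)

print(np.round(cov, 2))              # close to true_cov
print(np.round(cov @ precision, 2))  # close to the identity matrix
```

The diagonal entries of `cov` estimate the per-component variances; the off-diagonal entry estimates the covariance between the two components.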
Uncorrelated random variables. The
cross-covariance matrix Σvw of two random variables v and w is the matrix
E[(v − µv)(w − µw)^T]. Intuitively, entry
(i, j) of this matrix is the covariance
between components v(i) and w(j). If the
random variables are uncorrelated, all
entries in this matrix are zero, which
is equivalent to saying that every component of v is uncorrelated with every
component of w. Lemma 2 generalizes
Lemma 1 to the vector case.
Lemma 2. Let x1 ∼ p1(µ1, Σ1), …,
xn ∼ pn(µn, Σn) be a set of pairwise uncorrelated random variables of length m,
and let y = A1*x1 + … + An*xn for constant matrices Ai.
(i) The mean and covariance matrix of
y are the following: µy = A1*µ1 + … + An*µn
and Σyy = A1Σ1A1^T + … + AnΣnAn^T.
(ii) If a random variable xn+1 is pairwise
uncorrelated with x1, …, xn, it is
uncorrelated with y.
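These mean and covariance rules can be checked by Monte Carlo; the sketch below assumes y is the combination A1*x1 + A2*x2 with constant matrices Ai, and all numeric values are made up:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 200_000

# Two uncorrelated (here: independent) 2-vectors with known
# means and covariances (illustrative values).
mu1, mu2 = np.array([1.0, 2.0]), np.array([-1.0, 0.5])
S1 = np.array([[1.0, 0.3], [0.3, 2.0]])
S2 = np.array([[0.5, 0.0], [0.0, 1.5]])
x1 = rng.multivariate_normal(mu1, S1, size=n)
x2 = rng.multivariate_normal(mu2, S2, size=n)

# Arbitrary constant weight matrices.
A1 = np.array([[1.0, 0.5], [0.0, 1.0]])
A2 = np.array([[2.0, 0.0], [1.0, 1.0]])
y = x1 @ A1.T + x2 @ A2.T  # y = A1*x1 + A2*x2, one sample per row

# Predicted mean and covariance of y.
mu_y = A1 @ mu1 + A2 @ mu2
S_yy = A1 @ S1 @ A1.T + A2 @ S2 @ A2.T

print(np.round(y.mean(axis=0) - mu_y, 2))  # near zero
print(np.round(np.cov(y.T) - S_yy, 1))     # near zero
```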
The MSE of an unbiased estimator y
is E[(y − µy)^T (y − µy)], which is the sum of
the variances of the components of y; if
y has length 1, this reduces to the variance,
as expected. The MSE is also the sum
of the diagonal elements of Σyy (this is
called the trace of Σyy).
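In code, the identity MSE = trace(Σyy) looks like this (a sketch with arbitrary numbers):

```python
import numpy as np

rng = np.random.default_rng(2)

# Samples of an unbiased estimator y with known mean and
# covariance (illustrative values).
mu_y = np.array([3.0, -1.0, 0.5])
S_yy = np.diag([1.0, 0.25, 2.0])
y = rng.multivariate_normal(mu_y, S_yy, size=100_000)

d = y - mu_y
mse = np.mean(np.sum(d * d, axis=1))  # E[(y - mu_y)^T (y - mu_y)]

# The same quantity as the trace (sum of diagonal entries) of S_yy.
print(round(mse, 2), np.trace(S_yy))  # both close to 3.25
```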
Fusing Scalar Estimates
We now consider the problem of choosing the optimal values of the parameters α and β in the linear estimator
β*x1 + α*x2 for fusing two estimates x1
and x2 from uncorrelated scalar-valued random variables.
The first reasonable requirement is
that if the two estimates x1 and x2 are
equal, fusing them should produce
the same value. This implies that α + β
= 1. Therefore, the linear estimators of
interest are of the form

yα(x1, x2) = (1−α)*x1 + α*x2.   (5)
If x1 and x2 in Equation 5 are considered to be unbiased estimators of some
quantity of interest, then yα is an unbiased estimator for any value of α. How
should optimality of such an estimator
be defined? One reasonable definition
is that the optimal value of α minimizes
the variance of yα as this will produce the
highest-confidence fused estimates.
Theorem 1. Let x1 ∼ p1(µ1, σ1²) and
x2 ∼ p2(µ2, σ2²) be uncorrelated random
variables. Consider the linear estimator
yα(x1, x2) = (1−α)*x1 + α*x2. The variance of
the estimator is minimized for α = σ1² / (σ1² + σ2²).
The proof is straightforward and is
given in the online appendix. The variance (MSE) of yα can be determined from Lemma 1:

σα² = (1−α)²*σ1² + α²*σ2².   (6)

Setting the derivative of σα² with
respect to α to zero and solving the resulting equation yields the required result.
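The minimization can be checked numerically; this sketch uses arbitrary variances, not values from the article:

```python
import numpy as np

# Variance of y_a = (1 - a)*x1 + a*x2 for uncorrelated x1, x2
# with illustrative variances.
s1_sq, s2_sq = 4.0, 1.0

def var_y(a):
    return (1 - a)**2 * s1_sq + a**2 * s2_sq

# Closed-form minimizer from Theorem 1 (the Kalman gain K).
K = s1_sq / (s1_sq + s2_sq)

# A fine grid search over [0, 1] agrees with the closed form.
grid = np.linspace(0, 1, 100_001)
a_best = grid[np.argmin(var_y(grid))]

print(K, round(a_best, 4))  # 0.8 0.8
```

Note that the noisier estimate (here x1, with variance 4) gets the smaller weight 1 − K.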
In the literature, the optimal
value of α is called the Kalman gain K.
Substituting K into the linear fusion
model, we get the optimal linear estimator y(x1, x2):

y(x1, x2) = (1−K)*x1 + K*x2 = (σ2² / (σ1² + σ2²))*x1 + (σ1² / (σ1² + σ2²))*x2.
As a step toward fusion of n > 2 estimates, it is useful to rewrite this as follows:

y(x1, x2) = (1 / (1/σ1² + 1/σ2²)) * (x1/σ1² + x2/σ2²).

Substituting the optimal value of α
into Equation 6, we get

σyy² = σ1²*σ2² / (σ1² + σ2²), or equivalently 1/σyy² = 1/σ1² + 1/σ2².
An unbiased estimator is one whose mean
is equal to the quantity being estimated, and it is preferable
to a biased estimator with
the same variance.
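Putting the pieces together, here is a small sketch of scalar fusion with illustrative variances (not values from the article):

```python
import numpy as np

rng = np.random.default_rng(3)

truth = 10.0             # unknown quantity being estimated
s1_sq, s2_sq = 4.0, 1.0  # variances of the two unbiased estimators

# Many independent pairs of unbiased, uncorrelated estimates.
n = 100_000
x1 = truth + rng.normal(0, np.sqrt(s1_sq), n)
x2 = truth + rng.normal(0, np.sqrt(s2_sq), n)

K = s1_sq / (s1_sq + s2_sq)  # Kalman gain
y = (1 - K) * x1 + K * x2    # optimal fused estimate

# Predicted fused variance s1^2 * s2^2 / (s1^2 + s2^2),
# i.e. precisions add: 1/s_yy^2 = 1/s1^2 + 1/s2^2.
pred_var = s1_sq * s2_sq / (s1_sq + s2_sq)

print(round(y.mean(), 1))           # near 10.0 (fusion stays unbiased)
print(round(y.var(), 2), pred_var)  # near 0.8, below both input variances
```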