Equation 13, showing that incremental fusion is optimal.
Summary. The results in this section
can be summarized informally as follows. When using a linear estimator to fuse
uncertain scalar estimates, the weight given
to each estimate should be inversely proportional to the variance of the random variable from which that estimate is obtained.
Furthermore, when fusing n > 2 estimates,
estimates can be fused incrementally without any loss in the quality of the final result.
These results are often expressed formally
in terms of the Kalman gain K, as shown
in Figure 3; the equations can be applied
recursively to fuse multiple estimates.
Note that if ν1 ≫ ν2, K ≈ 0 and y(x1, x2) ≈ x1;
conversely, if ν1 ≪ ν2, K ≈ 1 and y(x1, x2) ≈ x2.
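The scalar results can be sketched in a few lines of Python. This is an illustrative sketch, not code from the article; the three numeric estimates are made up. It fuses pairs with the Kalman-gain form of Figure 3 and checks that incremental fusion matches a direct precision-weighted average:

```python
# Sketch (not from the article): fusing uncertain scalar estimates with
# the Kalman-gain form, and checking that incremental pairwise fusion
# matches direct precision-weighted fusion.

def fuse2(x1, var1, x2, var2):
    """Fuse two scalar estimates; weights are inversely proportional to variance."""
    K = var1 / (var1 + var2)          # Kalman gain
    y = x1 + K * (x2 - x1)            # fused estimate
    var_y = (1 - K) * var1            # fused variance: 1/var_y = 1/var1 + 1/var2
    return y, var_y

# Three (hypothetical) estimates of the same unknown, with different variances.
estimates = [(1.0, 0.5), (1.2, 0.25), (0.9, 1.0)]

# Incremental fusion: y2(y2(x1, x2), x3).
y, v = estimates[0]
for x, var in estimates[1:]:
    y, v = fuse2(y, v, x, var)

# Direct fusion: precision-weighted average (precision nu = 1/variance).
nus = [1.0 / var for _, var in estimates]
y_direct = sum(nu * x for (x, _), nu in zip(estimates, nus)) / sum(nus)

print(y, v, y_direct)   # the two fused estimates agree
```

Note that the loop body never revisits earlier inputs: each step needs only the running estimate and its variance, which is exactly why incremental fusion loses nothing.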
Fusing Vector Estimates
The results for fusing scalar estimates
can be extended to vectors by replacing
variances with covariance matrices.
For vectors, the linear estimator
is y(n,A)(x1, …, xn) = A1x1 + … + Anxn, where
A1 + … + An = I. Here A stands for the matrix
parameters (A1, …, An). All the vectors (xi)
are assumed to be of the same length.
To simplify notation, we omit the subscript n in y(n,A) in the discussion here,
as it is obvious from the context.
Optimality. The parameters A1, …,
An in the linear data fusion model are
chosen to minimize MSE(yA), which is
E[(yA − µyA)ᵀ(yA − µyA)].
Theorem 3 generalizes Theorem 2 to
the vector case. The proof of this theorem
is given in the appendix. Comparing
Theorems 2 and 3, we see that the
expressions are similar, the main difference being that the role of variance
in the scalar case is played by the covariance matrix in the vector case.
Theorem 3. Let xi ∼ pi(µi, Σi) for (1 ≤ i ≤ n)
be a set of pairwise uncorrelated
random variables. Consider the linear
estimator yA(x1, …, xn) = A1x1 + … + Anxn, where
A1 + … + An = I. The value of MSE(yA) is minimized for

Ai = (Σ1⁻¹ + … + Σn⁻¹)⁻¹ Σi⁻¹.  (23)

Therefore the optimal estimator is

y(x1, …, xn) = (Σ1⁻¹ + … + Σn⁻¹)⁻¹ (Σ1⁻¹x1 + … + Σn⁻¹xn).  (24)
The covariance matrix of y can be
computed by using Lemma 2:

Σy = (Σ1⁻¹ + … + Σn⁻¹)⁻¹.  (25)
In the vector case, precision is the
inverse of a covariance matrix, denoted
by N. Equations 26–27 use precision to
express the optimal estimator and its
covariance, and generalize Equations 13–14
to the vector case:

y(x1, …, xn) = (N1 + … + Nn)⁻¹ (N1x1 + … + Nnxn)  (26)
Ny = N1 + … + Nn  (27)
As in the scalar case, fusion of n > 2 vector estimates can be done incrementally
without loss of precision. The proof is
similar to the scalar case and is omitted.
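The vector rules can be sketched with numpy. This is a sketch, not the article's code, and the numeric estimates are made up; it implements the precision form of Equations 26–27 and checks that incremental pairwise fusion reproduces direct fusion of all three estimates:

```python
import numpy as np

# Sketch (not from the article): fusing vector estimates using precision
# matrices (Eqs. 26-27), and checking that incremental pairwise fusion
# matches direct fusion of all the estimates.

def fuse(estimates):
    """Fuse (x_i, Sigma_i) pairs: y = (sum N_i)^-1 (sum N_i x_i), N_i = Sigma_i^-1."""
    Ns = [np.linalg.inv(S) for _, S in estimates]            # precisions N_i
    N_y = sum(Ns)                                            # Eq. 27
    y = np.linalg.solve(N_y, sum(N @ x for (x, _), N in zip(estimates, Ns)))  # Eq. 26
    return y, np.linalg.inv(N_y)                             # fused estimate, covariance

# Three made-up 2-vector estimates with their covariance matrices.
x1, S1 = np.array([1.0, 2.0]), np.diag([0.5, 1.0])
x2, S2 = np.array([1.5, 1.8]), np.diag([0.25, 2.0])
x3, S3 = np.array([0.8, 2.2]), np.diag([1.0, 0.5])

# Direct fusion of all three vs. incremental pairwise fusion.
y_all, S_all = fuse([(x1, S1), (x2, S2), (x3, S3)])
y12, S12 = fuse([(x1, S1), (x2, S2)])
y_inc, S_inc = fuse([(y12, S12), (x3, S3)])

assert np.allclose(y_all, y_inc) and np.allclose(S_all, S_inc)
print(y_all, S_all)
```

Working in precision form has a practical payoff: fusing another estimate is just an addition of precision matrices (Equation 27), with a single solve at the end.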
Figure 2. Dataflow graph for incremental fusion. [The figure shows inputs x1, …, xn with precisions ν1, …, νn fused pairwise: first y2(x1, x2) with precision ν1 + ν2, then y2(y2(x1, x2), x3) with precision ν1 + ν2 + ν3, and so on up to y2(y2(···), xn) with precision ν1 + ··· + νn.]
Figure 3. Optimal fusion of scalar estimates.

x1 ∼ p1(µ1, σ1²), x2 ∼ p2(µ2, σ2²)

K = σ1²/(σ1² + σ2²) = ν2/(ν1 + ν2)  (17)
y(x1, x2) = x1 + K(x2 − x1)  (18)
σy² = (1 − K)σ1², or νy = ν1 + ν2  (19)
Figure 4. Optimal fusion of vector estimates.
Figure 5. BLUE line corresponding to Equation (31): (y − µy) = Σyx Σxx⁻¹ (x − µx), a line through (µx, µy).