Nonparametric Belief Propagation
By Erik B. Sudderth, Alexander T. Ihler, Michael Isard, William T. Freeman, and Alan S. Willsky
Continuous quantities are ubiquitous in models of real-world phenomena, but are surprisingly difficult to reason
about automatically. Probabilistic graphical models such
as Bayesian networks and Markov random fields, and algorithms for approximate inference such as belief propagation (BP), have proven to be powerful tools in a wide range of
applications in statistics and artificial intelligence. However,
applying these methods to models with continuous variables
remains a challenging task. In this work we describe an extension of BP to continuous variable models, generalizing particle filtering and Gaussian mixture filtering techniques for
time series to more complex models. We illustrate the power
of the resulting nonparametric BP algorithm via two applications: kinematic tracking of visual motion and distributed
localization in sensor networks.
Graphical models provide a powerful, general framework for
developing statistical models in such diverse areas as bioinformatics, communications, natural language processing,
and computer vision.28 However, graphical formulations
are only useful when combined with efficient algorithms
for inference and learning. Such algorithms exist for many
discrete models, such as those underlying modern error correcting codes and machine translation systems.
For most problems involving high-dimensional continuous variables, comparably efficient and accurate algorithms
are unavailable. Alas, these are exactly the sorts of problems
that arise frequently in areas like computer vision. Difficulties
begin with the continuous surfaces and illuminants that digital cameras record in grids of pixels, and that geometric reconstruction algorithms seek to recover. At a higher level, the
articulated models used in many tracking applications have
dozens of degrees of freedom to be estimated at each time
step.41,45 Realistic graphical models for these problems must
represent outliers, bimodalities, and other non-Gaussian statistical features. The corresponding optimal inference procedures for these models typically involve integral equations
for which no closed form solution exists. It is thus necessary
to develop families of approximate representations, and algorithms for fitting those approximations.
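To make this concrete, a representative case (written here in notation standard in the belief propagation literature) is the BP message update reviewed in Section 2: a fixed-point integral equation in which the message from node t to node s is computed by integrating over the continuous state of node t,

```latex
m_{ts}(x_s) \;\propto\; \int \psi_{ts}(x_t, x_s)\, \psi_t(x_t)
    \prod_{u \in \Gamma(t) \setminus s} m_{ut}(x_t)\; dx_t ,
```

where \psi_{ts} and \psi_t are pairwise and local potentials, and \Gamma(t) denotes the neighbors of node t. For non-Gaussian potentials this integral generally has no closed form, which is precisely the difficulty the approximations below address.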
In this work we describe the nonparametric belief propagation (NBP) algorithm. NBP combines ideas from Monte
Carlo3 and particle filtering6,11 approaches for representing complex uncertainty in time series, with the popular
belief propagation (BP) algorithm37 for approximate inference in complex graphical models. Unlike discretized
approximations to continuous variables, NBP is not limited
to low-dimensional domains. Unlike classical Gaussian
approximations, NBP’s particle-based messages can represent, and consistently reason about, the multimodal
distributions induced by many real-world datasets. And
unlike particle filters, NBP can exploit the rich nonsequential structure of more complex graphical models, like those
in Figure 1.

Figure 1. Particle filters assume variables are related by a hidden Markov model (top). The NBP algorithm extends particle filtering techniques to arbitrarily structured graphical models, such as those for arrays of image pixels (bottom left) or articulated human motion.
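The contrast between a single Gaussian fit and a particle-based representation can be seen in a toy example. The sketch below (illustrative only; the bandwidth choice and variable names are our own assumptions, not part of NBP) fits a moment-matched Gaussian and a particle-style kernel density estimate to samples from a bimodal distribution:

```python
import math
import random

random.seed(0)
# Bimodal "posterior": equal mixture of N(-3, 0.5^2) and N(+3, 0.5^2).
samples = [random.gauss(-3, 0.5) for _ in range(500)] + \
          [random.gauss(+3, 0.5) for _ in range(500)]

def gauss_pdf(x, mu, sigma):
    return math.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * math.sqrt(2 * math.pi))

# A single moment-matched Gaussian collapses the two modes into one broad bump.
mu = sum(samples) / len(samples)
sigma = math.sqrt(sum((x - mu) ** 2 for x in samples) / len(samples))

# A particle-based (kernel density) estimate keeps both modes.
bw = 0.3  # kernel bandwidth; an arbitrary illustrative choice

def kde(x):
    return sum(gauss_pdf(x, s, bw) for s in samples) / len(samples)

# The Gaussian fit peaks between the modes, where almost no mass lies;
# the kernel estimate correctly concentrates mass near -3 and +3.
print(gauss_pdf(0.0, mu, sigma) > gauss_pdf(3.0, mu, sigma))  # True
print(kde(3.0) > kde(0.0))                                    # True
```

The same failure mode arises when Gaussian approximations are propagated through a graphical model, which is why NBP represents its messages with sample-based mixtures instead.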
We begin in Section 2 by reviewing graphical models,
BP, Monte Carlo methods, and particle filters. Section 3
then develops the two stages of the NBP algorithm: a belief
fusion step which combines information from multiple particle sets, and a message propagation step which accounts
for dependencies among random variables. We review a
pair of previous real-world applications of NBP in Section
4: kinematic tracking of visual motion (Figures 6 and 7)
and distributed localization in sensor networks (Figure 8).
Finally, we conclude in Section 5 by surveying algorithmic
and theoretical developments since the original introduction of NBP.
2. Inference in Graphical Models
Probabilistic graphical models decompose multivariate
distributions into a set of local interactions among small
subsets of variables. These local relationships produce
conditional independencies which lead to efficient learning and inference algorithms. Moreover, their modular
The original versions of this paper were entitled “Non-
parametric Belief Propagation,” by E. Sudderth, A. Ihler,
W. Freeman, and A. Willsky, and “PAMPAS: Real-Valued
Graphical Models for Computer Vision,” by M. Isard.
Both appeared in the IEEE Conference on Computer Vision
and Pattern Recognition, June 2003.