A classical example in this regard concerns an oil wildcatter who needs to decide whether or not to drill for oil at a specific site, with an additional decision on whether to request seismic soundings that may help determine the geological structure of the site. Each of these decisions has an associated cost. Moreover, their potential outcomes have associated utilities and probabilities. The need to integrate these probabilistic beliefs, utilities, and decisions has led to the development of influence diagrams, which are extensions of Bayesian networks that include three types of nodes: chance, utility, and decision.18 Influence diagrams, also called decision networks, come with a toolbox that allows one to compute optimal strategies: ones that are guaranteed to produce the highest expected utility.20,22
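To make the notion of an optimal strategy concrete, the following minimal Python sketch evaluates the drill decision by expected utility. The prior beliefs and payoffs are purely hypothetical stand-ins for the quantities a real influence diagram would encode; an influence-diagram solver performs the same computation over the full diagram, maximizing over decision nodes and averaging over chance nodes.

    # Expected-utility sketch for the wildcatter's drill decision.
    # All probabilities and payoffs below are hypothetical illustrative numbers.

    prior = {"dry": 0.5, "wet": 0.3, "soaking": 0.2}            # beliefs about the site
    payoff_if_drill = {"dry": -70, "wet": 50, "soaking": 200}   # $1,000s, net of drilling cost

    def expected_utility(decision, beliefs):
        """Expected utility of drilling or not drilling under the given beliefs."""
        if decision == "drill":
            return sum(beliefs[state] * payoff_if_drill[state] for state in beliefs)
        return 0.0  # not drilling neither costs nor earns anything

    decisions = ["drill", "do not drill"]
    best = max(decisions, key=lambda d: expected_utility(d, prior))
    print(best, expected_utility(best, prior))   # -> drill 20.0

Requesting seismic soundings is handled in the same spirit: the beliefs are first revised by the sounding result (and the test cost accounted for), and the solver then compares the expected utilities of the resulting strategies.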
Bayesian networks have also been extended in ways that are meant to facilitate their construction. In many domains, such networks tend to exhibit regular and repetitive structures, with the regularities manifesting in both CPTs and network structure. In these situations, one can synthesize large Bayesian networks automatically from compact high-level specifications. A number of concrete specifications have been proposed for this purpose. For example, template-based approaches require two components for specifying a Bayesian network: a set of network templates whose instantiation leads to network segments, and a specification of which segments to generate and how to connect them together.22,23 Other approaches include languages based on first-order logic, allowing one to reason about situations with varying sets of objects (for example, Milch et al.26).
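As a rough illustration of the template-based idea, the sketch below instantiates a hypothetical template over a small set of objects to generate the structure of a network; the students, courses, and node names are invented for illustration, and a real system would also instantiate the CPTs from the template.

    # Hypothetical template: for every (student, course) pair, generate a
    # grade node whose parents are the student's aptitude and the course's
    # difficulty; segments are connected through those shared parent nodes.

    students = ["s1", "s2"]
    courses = ["c1", "c2"]

    nodes = set()
    edges = []  # (parent, child) pairs of the synthesized network

    for s in students:
        for c in courses:
            grade = f"grade({s},{c})"
            aptitude, difficulty = f"aptitude({s})", f"difficulty({c})"
            nodes.update([grade, aptitude, difficulty])
            edges.append((aptitude, grade))
            edges.append((difficulty, grade))

    print(len(nodes), "nodes,", len(edges), "edges")   # -> 8 nodes, 8 edges

Adding a student or a course changes only the object lists, not the template, which is what makes such specifications compact; first-order languages such as BLOG26 go further by allowing the set of objects itself to be uncertain.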
The Challenges Ahead
Bayesian networks have been established as a ubiquitous tool for modeling and reasoning under uncertainty. The reach of Bayesian networks, however, is tied to their effectiveness in representing the phenomena of interest, and the scalability of their inference algorithms. To further improve the scope and ubiquity of Bayesian networks, one therefore needs sustained progress on both fronts. The main challenges on the first front lie in increasing the expressive power of Bayesian network representations, while maintaining the key features that have proven necessary for their success: modularity of representation, transparent graphical nature, and efficiency of inference. On the algorithmic side, there is a need to better understand the theoretical and practical limits of exact inference algorithms based on the two dimensions that characterize Bayesian networks: their topology and parametric structure.
References
1. Bayes, T. An essay towards solving a problem in the doctrine of chances. Phil. Trans. 3 (1963), 370–418. Reproduced in W.E. Deming.
2. Blei, D.M., Ng, A.Y. and Jordan, M.I. Latent Dirichlet allocation. Journal of Machine Learning Research 3 (2003), 993–1022.
3. Boutilier, C., Friedman, N., Goldszmidt, M. and Koller, D. Context-specific independence in Bayesian networks. In Proceedings of the 12th Conference on Uncertainty in Artificial Intelligence (1996), 115–123.
4. Chavira, M., Darwiche, A. and Jaeger, M. Compiling relational Bayesian networks for exact inference. International Journal of Approximate Reasoning 42, 1-2 (May 2006), 4–20.
5. Cowell, R., Dawid, A., Lauritzen, S. and Spiegelhalter, D. Probabilistic Networks and Expert Systems. Springer, 1999.
6. Darwiche, A. Recursive conditioning. Artificial Intelligence 126, 1-2 (2001), 5–41.
7. Darwiche, A. A differential approach to inference in Bayesian networks. Journal of the ACM 50, 3 (2003).
8. Darwiche, A. Modeling and Reasoning with Bayesian Networks. Cambridge University Press, 2009.
9. Dean, T. and Kanazawa, K. A model for reasoning about persistence and causation. Computational Intelligence 5, 3 (1989), 142–150.
10. Dechter, R. Bucket elimination: A unifying framework for probabilistic inference. In Proceedings of the 12th Conference on Uncertainty in Artificial Intelligence (1996).
11. Edwards, D. Introduction to Graphical Modeling. Springer, 2nd edition, 2000.
12. Fishelson, M. and Geiger, D. Exact genetic linkage computations for general pedigrees. Bioinformatics 18, 1 (2002), 189–198.
13. Frey, B., Ed. Graphical Models for Machine Learning and Digital Communication. MIT Press, Cambridge, MA, 1998.
14. Friedman, N., Geiger, D. and Goldszmidt, M. Bayesian network classifiers. Machine Learning 29, 2-3 (1997), 131–163.
15. Gilks, W., Richardson, S. and Spiegelhalter, D. Markov Chain Monte Carlo in Practice: Interdisciplinary Statistics. Chapman & Hall/CRC, 1995.
16. Glymour, C. and Cooper, G., Eds. Computation, Causation, and Discovery. MIT Press, Cambridge, MA, 1999.
17. Heckerman, D. A tutorial on learning with Bayesian networks. Learning in Graphical Models. Kluwer, 1998.
18. Howard, R.A. and Matheson, J.E. Influence diagrams. Principles and Applications of Decision Analysis, Vol. 2. Strategic Decision Group, Menlo Park, CA, 1984.
19. Jaakkola, T. Tutorial on variational approximation methods. Advanced Mean Field Methods. D. Saad and M. Opper, Eds. MIT Press, Cambridge, MA, 2001.
20. Jensen, F.V. and Nielsen, T.D. Bayesian Networks and Decision Graphs. Springer, 2007.
21. Jordan, M., Ghahramani, Z., Jaakkola, T. and Saul, L. An introduction to variational methods for graphical models. Machine Learning 37, 2 (1999), 183–233.
22. Koller, D. and Friedman, N. Probabilistic Graphical Models: Principles and Techniques. MIT Press, Cambridge, MA, 2009.
23. Koller, D. and Pfeffer, A. Object-oriented Bayesian networks. In Proceedings of the 13th Conference on Uncertainty in Artificial Intelligence (1997), 302–313.
24. Lauritzen, S.L. and Spiegelhalter, D.J. Local computations with probabilities on graphical structures and their application to expert systems. Journal of the Royal Statistical Society, Series B 50, 2 (1988), 157–224.
25. Mengshoel, O., Darwiche, A., Cascio, K., Chavira, M., Poll, S. and Uckun, S. Diagnosing faults in electrical power systems of spacecraft and aircraft. In Proceedings of the 20th Innovative Applications of Artificial Intelligence Conference (2008), 1699–1705.
26. Milch, B., Marthi, B., Russell, S., Sontag, D., Ong, D. and Kolobov, A. BLOG: Probabilistic models with unknown objects. In Proceedings of the International Joint Conference on Artificial Intelligence (2005).
27. Neapolitan, R. Learning Bayesian Networks. Prentice Hall, Englewood, NJ, 2004.
28. Pearl, J. Bayesian networks: A model of self-activated memory for evidential reasoning. In Proceedings of the Cognitive Science Society (1985).
29. Pearl, J. Probabilistic Reasoning in Intelligent Systems: Networks of Plausible Inference. Morgan Kaufmann, 1988.
30. Pearl, J. Causality: Models, Reasoning, and Inference. Cambridge University Press, 2000.
31. Durbin, R., Eddy, S., Krogh, A. and Mitchison, G. Biological Sequence Analysis: Probabilistic Models of Proteins and Nucleic Acids. Cambridge University Press, 1998.
32. Smyth, P., Heckerman, D. and Jordan, M. Probabilistic independence networks for hidden Markov probability models. Neural Computation 9, 2 (1997), 227–269.
33. Steyvers, M. and Griffiths, T. Probabilistic topic models. Handbook of Latent Semantic Analysis. T.K. Landauer, D.S. McNamara, S. Dennis, and W. Kintsch, Eds., 2007, 427–448.
34. Szeliski, R., Zabih, R., Scharstein, D., Veksler, O., Kolmogorov, V., Agarwala, A., Tappen, M.F. and Rother, C. A comparative study of energy minimization methods for Markov random fields with smoothness-based priors. IEEE Trans. Pattern Anal. Mach. Intell. 30, 6 (2008), 1068–1080.
35. Yedidia, J., Freeman, W. and Weiss, Y. Constructing free-energy approximations and generalized belief propagation algorithms. IEEE Transactions on Information Theory 51, 7 (2005), 2282–2312.
36. Zhang, N.L. and Poole, D. A simple approach to Bayesian network computations. In Proceedings of the 10th Conference on Uncertainty in Artificial Intelligence (1994), 171–178.
Adnan Darwiche (email@example.com) is a professor and former chair of the computer science department at the University of California, Los Angeles, where he also directs the Automated Reasoning Group.