Ontology reasoning therefore plays a central
role in both the development of high-quality ontologies and the deployment
of ontologies in applications.
In spite of the complexity of reasoning with OWL ontologies, highly optimized DL reasoning systems (such as FaCT++, owl.man.ac.uk/factplusplus/; Racer, www.racer-systems.com/; and Pellet, pellet.owldl.com/) have proved effective in practice; the availability of such systems was one of the key motivations for the W3C to base OWL on a DL.
State-of-the-art ontology-development tools (such as SWOOP, code.google.com/p/swoop/; Protégé 4; and TopBraid Composer, www.topbraidcomposer.com) use DL reasoners to give feedback to developers about the logical implications of their designs. This feedback typically includes warnings about inconsistencies and synonyms.
An inconsistent (sometimes called “unsatisfiable”) class is one whose description is “overconstrained,” with the result that it can never have
instances. This inconsistency is typically an unintended consequence of
the design (why introduce a name for
a class that can never have instances?)
and may be due to subtle interactions
among axioms. It is therefore useful to
be able to detect such classes and bring
them to the attention of the ontology
engineer. For example, during the recent development of an OWL ontology
at NASA’s Jet Propulsion Laboratory,
the class “OceanCrustLayer” was found
to be inconsistent. Engineers discovered (with the help of debugging tools) that this was because the class had been defined as both a region and a layer, where a layer is a 2D object and a region is a 3D object. The inconsistency
thus highlighted a fundamental error
in the ontology’s design.
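The clash can be reproduced with a handful of OWL axioms; the following is a simplified sketch with illustrative names, not the actual axioms of the JPL ontology:

SubClassOf( :OceanCrustLayer :Layer )
SubClassOf( :OceanCrustLayer :Region )
SubClassOf( :Layer :TwoDObject )
SubClassOf( :Region :ThreeDObject )
DisjointClasses( :TwoDObject :ThreeDObject )

Given these axioms, any OWL reasoner infers that :OceanCrustLayer is subsumed by owl:Nothing; that is, it can never have instances.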
It is also possible that the descriptions in an ontology mean that two
classes necessarily have exactly the
same set of instances; that is, they are
alternative names for the same class.
Having multiple names for the same
class may be desirable in some situations (such as to capture the fact that
“myocardial infarction” and “heart
attack” are the same thing). However,
multiple names could also be the inadvertent result of interactions among
descriptions or of basic errors by the
ontology designer; it is therefore useful to be able to alert developers to the
presence of such synonyms.
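Such synonyms typically arise when two classes are (perhaps unintentionally) given logically equivalent definitions. The following sketch uses illustrative names and a deliberately simplified definition, not axioms from any real medical ontology:

EquivalentClasses( :MyocardialInfarction
    ObjectIntersectionOf( :TissueDeath
        ObjectSomeValuesFrom( :hasLocation :HeartMuscle ) ) )
EquivalentClasses( :HeartAttack
    ObjectIntersectionOf( :TissueDeath
        ObjectSomeValuesFrom( :hasLocation :HeartMuscle ) ) )

Because both classes are equivalent to the same class expression, a reasoner infers EquivalentClasses( :MyocardialInfarction :HeartAttack ); that is, the two names denote the same class.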
In addition to checking for inconsistencies and synonyms, ontology-development tools usually check for implicit
subsumption relationships, updating
the class hierarchy accordingly. This
automated updating is also a useful design aid, allowing ontology developers
to focus on class descriptions, leaving
the computation of the class hierarchy
to the reasoner; it can also be used by
developers to check if the hierarchy induced by the class descriptions is consistent with their expert intuition. The
two may not be consistent when, for example, errors in the ontology result in
unexpected subsumption inferences or
“underconstrained” class descriptions
result in expected inferences not being
found. Not finding expected inferences
is common, as it is easy to inadvertently omit axioms that express “obvious”
information. For example, an ontology
engineer may expect the class of patients with a fracture of both the tibia
and the fibula to be a subClassOf “
patient with multiple fractures”; however,
this relationship may not hold if the ontology doesn’t include (explicitly or implicitly) the information that the tibia
and fibula are different bones. Failure
to find this subsumption relationship
should prompt the engineer to add the
missing DisjointClasses axiom.
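One way to see why the disjointness matters is to write out the relevant definitions. The axioms below are an illustrative sketch, not taken from any real medical ontology; they also assume, via the functional-property axiom, that each fracture is located in a single bone:

SubClassOf( :Tibia :Bone )
SubClassOf( :Fibula :Bone )
FunctionalObjectProperty( :fractureSite )
EquivalentClasses( :PatientWithMultipleFractures
    ObjectIntersectionOf( :Patient
        ObjectMinCardinality( 2 :hasFracture :Fracture ) ) )
EquivalentClasses( :PatientWithTibFibFracture
    ObjectIntersectionOf( :Patient
        ObjectSomeValuesFrom( :hasFracture
            ObjectIntersectionOf( :Fracture
                ObjectSomeValuesFrom( :fractureSite :Tibia ) ) )
        ObjectSomeValuesFrom( :hasFracture
            ObjectIntersectionOf( :Fracture
                ObjectSomeValuesFrom( :fractureSite :Fibula ) ) ) ) )

Without further information, the two existentially implied fractures could be one and the same individual, so the expected subsumption is not entailed. Adding

DisjointClasses( :Tibia :Fibula )

forces them to be distinct, and a reasoner then infers SubClassOf( :PatientWithTibFibFracture :PatientWithMultipleFractures ).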
Reasoning is also important when
ontologies are deployed in applications,
where it is needed both to answer standard data-retrieval queries and to answer
conceptual queries about the structure
of the domain. For example, biologists
use ontologies (such as the Gene Ontology, or GO, and the Biological Pathways Exchange ontology, or BioPAX) to
annotate (Web-accessible) data from
gene-sequencing experiments, making
it possible to answer complex queries
(such as “What DNA-binding products
interact with insulin receptors?”). Answering requires a reasoner to not only
identify individuals that are (perhaps
only implicitly) instances of DNA-binding products and of insulin receptors
but also identify which pairs of individuals are related (perhaps only implicitly) via the interactsWith property.
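In DL terms, such a query corresponds to a class expression whose instances the reasoner is asked to retrieve; the sketch below introduces a query class using illustrative names rather than actual GO or BioPAX identifiers:

EquivalentClasses( :QueryDNABindingInsulinInteractor
    ObjectIntersectionOf( :DNABindingProduct
        ObjectSomeValuesFrom( :interactsWith :InsulinReceptor ) ) )

Asking the reasoner for the instances of this query class then answers the query, including individuals whose membership follows only implicitly from the ontology and the annotations.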
Finally, in order to maximize the
benefit of reasoning services, tools
should be able to explain inferences;
without explanations, developers may