this leads to huge improvements in
scalability and consistency of process—by allowing humans to be taken
out of the process.
From Demanding Compliance to Offering Capability
What is the future of this test-driven
approach to management? To understand the challenges, you need to be
aware of a second culture that pervades
computer science: the assumption of
management by obligation. Obligations are modal statements: for example, X must comply with Y, A should do
B, C is allowed to do D, and so on. The
assumption is that you can force a system to bow down to a decision made
externally. This viewpoint has been the
backbone of policy-based systems for
years,12 and it suffers from a number of
fundamental flaws.
The first flaw is that one cannot generally exert a mandatory influence on
another part of a software or hardware
system without its willing consent.
Lack of authority, lack of proximity,
lack of knowledge, and straightforward
impossibility are all reasons why this is
impractical. For example, if a computer is switched off, you cannot force it to
install a new version of software. Thus,
a model of maintenance based on obligation is, at best, optimistic and, at
worst, futile. The second flaw is that
obligations lead to contradictions in
networks that cannot be resolved. Two
different parties can insist that a third
will obey quite different rules, without
even being aware of one another.1
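A minimal sketch in Python (the policy names and settings here are purely illustrative, not drawn from any real policy system) shows how two independently issued obligations can demand incompatible things of the same node, leaving it with no consistent rule to follow:

# Two authorities, unaware of each other, obligate the same host to hold
# different values for the same setting. All names are hypothetical.
policies = [
    {"issuer": "security-team", "target": "web01", "setting": "ssh_port", "must_be": 22},
    {"issuer": "network-team",  "target": "web01", "setting": "ssh_port", "must_be": 2222},
]

def find_conflicts(policies):
    """Group obligations by (target, setting) and flag contradictory demands."""
    demands = {}
    for p in policies:
        demands.setdefault((p["target"], p["setting"]), set()).add(p["must_be"])
    return [(key, values) for key, values in demands.items() if len(values) > 1]

print(find_conflicts(policies))
# [(('web01', 'ssh_port'), {22, 2222})] -- the node cannot obey both.

Neither issuer is wrong on its own terms; the contradiction exists only at the receiving end, which is exactly where obligation-based models have no machinery to resolve it.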
Realizing these weaknesses has led
to a rethink of obligations, turning
them around completely into an atomic theory of “voluntary cooperation,” or
promise theory.1 After all, if an obligation requires willing consent to implement it, then voluntary cooperation
is the more fundamental point of view.
It turns out that a model of promises
provides exactly the kind of umbrella
under which all of the aspects of system administration can be modeled.
The result is an agent-based approach:
each system part should keep its own
promises as far as possible without external help, expecting as little as possible of its unpredictable environment.
Independence of parts is represented by agents that keep their own promises; the convergence to a standard is
represented by the extent to which a
promise is kept; and the insensitivity to
initial conditions is taken care of by the
fact that promises describe outcomes,
not initial states.
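As a concrete sketch of this idea, the following Python fragment (the file path and content are hypothetical; real tools such as Cfengine express this in their own declarative language) shows an agent that describes a desired outcome, assesses whether its promise is currently kept, and repairs the state if not, regardless of what it finds initially:

# A promise describes the desired outcome, not a procedure from a known
# initial state. Path and content are illustrative only.
PROMISE = {"path": "/tmp/motd", "content": "managed by local agent\n"}

def promise_kept(p):
    """Assess the promise: does the file exist with the promised content?"""
    try:
        with open(p["path"]) as f:
            return f.read() == p["content"]
    except OSError:
        return False

def repair(p):
    """Converge toward the promised outcome, whatever the starting state."""
    with open(p["path"], "w") as f:
        f.write(p["content"])

def agent_run(p):
    # Idempotent: running this any number of times, from any initial
    # condition, ends in the same promised state.
    if not promise_kept(p):
        repair(p)
    return promise_kept(p)

print(agent_run(PROMISE))  # True once the promise is kept

Because the agent only compares the present state against the promised outcome, it needs no record of how the system got there, which is what makes the approach insensitive to initial conditions.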
Promise theory turns out to be a
rather wide-ranging description of cooperative model building that thinks
bottom-up instead of top-down. It can
be applied to humans and machines in
equal measure and can also describe
human workflows—a simple recipe for
federated management. It has not yet
gained widespread acceptance, but its
principal findings are now being used
to restructure some of the largest organizations in banking and manufacturing, allowing them to model complexity in terms of robust intended states.
Today, only Cfengine is intentionally
based on promise theory principles,
but some aspects of Chef’s decentralization10 are compatible with it.
The Limits of Knowledge
There are subtler issues lurking in
system measurement that we’ve only
glossed over so far. These will likely
challenge both researchers and practitioners in the years ahead. To verify
a model, you need to measure a system and check its compliance with the
model. Your assessment of the state of
the system (does it keep its promises?)
requires trusting the measurement process itself in order to form a conclusion.
That one dependence is inescapable.
What happens when you test a system’s compliance with a model? It
turns out that every intermediate part
in a chain of measurement potentially
distorts the information you want to
observe, leading to less and less certainty. Uncertainty lies at the very heart
of observability. If you want to govern
systems by pretending to know them
absolutely, you will be disappointed.
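To make the point concrete, here is a small Python simulation (the noise figures are invented for illustration, not taken from any real monitoring stack) in which a reading is relayed through a chain of intermediaries, each adding a little distortion; the longer the chain, the wider the spread of what the observer finally sees:

import random

def measure_through_chain(true_value, hops, noise_per_hop=0.02):
    """Relay a reading through `hops` intermediaries, each distorting it slightly."""
    reading = true_value
    for _ in range(hops):
        reading += random.gauss(0, noise_per_hop * abs(true_value))
    return reading

true_load = 0.50  # the property the system actually has
for hops in (1, 3, 10):
    samples = [measure_through_chain(true_load, hops) for _ in range(1000)]
    print(f"{hops:2d} hops: observed spread ~ {max(samples) - min(samples):.3f}")
# The spread grows with the number of intermediaries: certainty degrades
# with every part of the measurement chain.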
Consider this: environmental influences on systems and measurers can
lead directly to illogical behavior, such
as undecidable propositions. Suppose
you have an assertion (for example,
a promise that a system property is true).
In logic this assertion must either be
true or false, but consider these cases:
˲ You do not observe the system (so you don't know);
˲ Observation of the system requires interacting with it, which changes its state;