and analysis services dynamically
from among the services provided by
multiple third parties. Although functionally equivalent, these services are
typically characterized by different
levels of reliability, performance, and
cost. A quantitative verification tool
invoked automatically at runtime supports such dynamic service selection
by establishing which combinations
of alarm and analysis services, or
specifications S, satisfy the requirements R at each time instant (see the
sidebar “Self-Adaptive Service-Based
Systems”).
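The selection mechanism just described can be sketched in a few lines. This is a minimal illustration, not the actual tool: the service records, the cost fields, and the requirement threshold are invented, and `verify_failure_probability` is a hand-written stand-in for a real quantitative-verification query (its formula mirrors the P1 expression for requirement R1 discussed later in this article).

```python
from itertools import product

def verify_failure_probability(alarm, analysis):
    # Stand-in for a quantitative-verification call (e.g., a model-checker
    # query): probability that an invocation of the analysis service is
    # followed by an alarm failure. The 0.12 constant and the formula mirror
    # the symbolic expression for requirement R1 given in the article.
    return (1 - analysis["fail_rate"]) * 0.12 * alarm["fail_rate"]

def select_services(alarm_services, analysis_services, max_fail_prob):
    # Enumerate every alarm/analysis combination (a candidate specification
    # S), keep those whose verified failure probability satisfies the
    # reliability requirement R, and return the cheapest such combination,
    # or None when no combination satisfies the requirement.
    feasible = []
    for alarm, analysis in product(alarm_services, analysis_services):
        if verify_failure_probability(alarm, analysis) <= max_fail_prob:
            feasible.append((alarm["cost"] + analysis["cost"], alarm, analysis))
    if not feasible:
        return None
    _, alarm, analysis = min(feasible, key=lambda entry: entry[0])
    return alarm, analysis
```

Re-running `select_services` whenever monitoring reports new failure rates yields the dynamic, requirement-driven service selection described above.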
We used the probabilistic model checker PRISM [18] to validate the quantitative-verification-at-runtime approach described here in domains ranging from dynamic power management [7] and data-center resource allocation [8] to quality-of-service optimization in service-based systems [5, 12].
Success in these projects suggests that
employing quantitative verification in
runtime scenarios can augment software systems with self-adaptation capabilities in predictable ways.
Using Markovian models at a carefully chosen level of abstraction enabled these adaptive systems to operate with acceptable overheads at small and medium scale. Scaling to larger systems requires faster
runtime-verification algorithms; our
recent research into devising such algorithms, exploiting the fact that the
system model and verified requirements typically undergo only small
changes from one adaptation step to
the next, shows great promise [13, 22].
Filieri et al. [13] showed it is possible to pre-compute the probabilities associated with reliability-related requirements of a software system as symbolic expressions parameterized by domain assumptions; for example, the "probability that an invocation of the analysis service is followed by an alarm failure" associated with requirement R1 for the system in Figure 4 can be pre-computed as P1 = (1 − y) × 0.12 × x, where the parameters x and y represent the failure rates of the alarm service and the analysis service, respectively. This "once-only" pre-computation step is complemented by a runtime-verification step in which the symbolic expressions are evaluated for the actual values of the system parameters. In the medical-assistance example, the runtime-verification step consists of calculating the new value of P1 each time the domain assumptions about the parameters x or y change as a result of runtime monitoring. The overheads associated with the pre-computation step are comparable to those of standard quantitative verification, but the overhead of evaluating a set of symbolic expressions in the runtime-verification step is negligible irrespective of system size.
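The two-step scheme above can be made concrete with the P1 expression for requirement R1. In this sketch the symbolic expression is simply written out by hand (Filieri et al. derive such expressions automatically from a parametric Markov model), and the requirement bound is an invented placeholder, not a value from the article:

```python
# "Once-only" pre-computation: requirement R1 as a closed-form function of
# the domain parameters x (alarm-service failure rate) and y
# (analysis-service failure rate).
def p1(x, y):
    return (1 - y) * 0.12 * x

R1_BOUND = 0.005  # illustrative threshold for R1, chosen for this sketch

# Runtime-verification step: whenever monitoring updates x or y, checking
# the requirement reduces to one cheap arithmetic evaluation, with cost
# independent of the size of the underlying system model.
def r1_satisfied(x, y):
    return p1(x, y) <= R1_BOUND
```

Calling `r1_satisfied` on each monitoring update replaces a full model-checking run, which is why the runtime overhead stays negligible irrespective of system size.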
Related Work
For the past decade, several research communities have contributed toward integrating formal verification techniques into the runtime software-adaptation process, with their results complementing our own work on quantitative verification at runtime; for example, Rushby's work on runtime certification [29] argued that any software reconfiguration at runtime must be accompanied by certification of the dependability of the new configuration. Building on Crow and Rushby's previous research concerning a theory of fault detection, identification, and reconfiguration [11], Rushby proposed formal verification for achieving runtime certification, describing an enabling framework that includes runtime use of "methods related to model checking" [29]. The range of correctness properties (such as safety and reachability) supported by this framework complements the reliability- and performance-related properties that can be managed through our quantitative verification at runtime.