In this article we consider the question: How
should autonomous systems be analyzed? In
particular, we describe how the confluence of
developments in two areas—autonomous systems
architectures and formal verification for rational
agents—can provide the basis for the formal
verification of autonomous systems behaviors.
We discuss an approach to this question that involves:
1. Modeling the behavior and describing the
interface (input/output) to an agent in charge of
making decisions within the system;
2. Model checking the agent within an unrestricted
environment representing the “real world” and those
parts of the systems external to the agent, in order to
establish some property, φ;
3. Utilizing theorems or analysis of the
environment, in the form of logical statements (where
necessary), to derive properties of the larger system;
4. If the agent is refined, modify (1), but if
environmental properties are clarified, modify (3).
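The heart of step 2 is exhaustive exploration: every behavior the unrestricted environment can exhibit is enumerated, and the agent's decision in each situation is checked against the property φ. A minimal sketch of this idea, bounded to a fixed number of steps, is shown below. The agent, the environment's perceptions, the property, and all function names (`agent_decide`, `phi`, `model_check`) are hypothetical toy illustrations, not the output of an actual agent model checker.

```python
# Sketch of step 2: bounded, exhaustive model checking of a decision-making
# agent against an unrestricted (nondeterministic) environment.
# All names and behaviors here are illustrative assumptions.

from itertools import product

def agent_decide(perception):
    """Toy agent: pick an action based on the current perception."""
    # If an obstacle is perceived, the agent brakes; otherwise it moves.
    return "brake" if perception == "obstacle" else "move"

def phi(perception, action):
    """Toy safety property: the agent never moves into a perceived obstacle."""
    return not (perception == "obstacle" and action == "move")

def model_check(perceptions, steps=3):
    """Enumerate every environment run (sequence of perceptions) up to a
    bounded length, checking phi on each decision the agent makes.
    Returns (True, None) if phi always holds, else (False, counterexample)."""
    for run in product(perceptions, repeat=steps):  # unrestricted environment
        for perception in run:
            action = agent_decide(perception)
            if not phi(perception, action):
                return False, (perception, action)  # counterexample found
    return True, None

ok, counterexample = model_check(["clear", "obstacle", "sensor_fault"])
print(ok)  # True: phi holds on every explored environment run
```

Real agent verification tools explore the agent's internal reasoning (beliefs, goals, plans) rather than a simple input-output function, but the exhaustive search over all environment behaviors follows the same pattern.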
Autonomous systems are now being
deployed in safety-, mission-, or business-critical scenarios, which means
a thorough analysis of the choices the
core software might make becomes
crucial. But should the analysis and
verification of autonomous software be
treated any differently than traditional
software used in critical situations? Or
is there something new going on here?
Autonomous systems are systems
that decide for themselves what to do
and when to do it. Such systems might
seem futuristic, but they are closer
than we might think. Modern household, business, and industrial systems
increasingly incorporate autonomy.
There are many examples, all varying
in the degree of autonomy used, from
almost pure human control to fully autonomous activities with minimal human interaction. Application areas are
broad, ranging from healthcare monitoring to autonomous vehicles.
But what are the reasons for this increase in autonomy? Typically, autonomy is used in systems that:
1. must be deployed in remote environments where direct human control is infeasible;
2. must be deployed in hostile environments where it is dangerous for humans to be nearby, and so difficult for
humans to assess the possibilities;
3. involve activity that is too lengthy or repetitive to be carried out effectively by humans.
Exploring autonomous systems
and the agents that control them.
BY MICHAEL FISHER, LOUISE DENNIS, AND MATT WEBSTER
Autonomous systems are systems that
decide for themselves what to do.
They are currently making large impacts
in a variety of applications, including
driverless cars, unmanned aircraft,
robotics, and remote monitoring.
A key issue for autonomous systems
is determining their safety and
trustworthiness: how can we be sure
the autonomous systems will be safe
and reliable? Methodologies to enable
certification of such systems are needed.
The choices made by agent-based
autonomous systems can be formally
verified to provide evidence for
certification. Sample applications include
search and rescue robots, satellite
systems, and unmanned aircraft.