the same stand-alone application and
try to achieve the same control flow by
delivering it as a network application.
To network the example application,
the three components—caller, main,
and print—operate as three independent agents (Figure 4). Connecting
these agents allows them to deliver the
same print function in a networked setting. The caller agent kicks off the computation by sending data on the main
action and waiting on the rtnMain action.
The main agent consists of two independent agents, one listening on the
main action and the other listening on
rtnPrint. The first component agent
that receives the main action turns on
print. Meanwhile, the print agent
listens on print and, after printing to the
screen, turns on rtnPrint. The other
main agent responds to rtnPrint and
turns on rtnMain. Notice that each
agent operates independently but is
coordinated by the different actions
or sync points. As rtnMain is triggered
only after the print is completed, the
functionality is the same as in the first
C program. The difference is how the
functionality is achieved through coordination of the autonomous agents:
CallerAgent, MainAgent, and
PrintAgent. These agents could also
work across multiple system spaces.
Moving the application control
from computation to communication
makes applications work coherently
across multiple system spaces.
Latency. As mentioned earlier, the
simple C program that was a stand-alone application is now expressed
as a network application by moving it
from a premise of computation to one
of communication. The sync points
such as main, rtnMain, print, and
rtnPrint coordinate these agents to
create a coherent whole. These coordination elements could sit either in a
single system space or across multiple
system spaces. If these sync points sit
across address spaces, then this introduces a new constraint: the latency of
the network.
Network latency now determines the
speed of the whole application. In a typical
network application, latency is reduced when the application avoids
the network; introducing caching reduces
network usage, thereby increasing the speed of the
overall application.
interacts with the outside world by running functions to do the input/output
(Figure 3). During the I/O statements,
the program is blocked. We are moving into a multi-system-space world in
which the expectation is that program state
can be observed by another program.
The program and its languages should
have notation and concepts to share
data dynamically at runtime, with no
additional engineering.
Control in a stand-alone application
has two elements: the forward and
return movement of control, and the
transfer of data during these move-
ments. In the current programming
model, because of the limitation of the
state machine of the processor (with its
current program counter), the forward
and the return movements of control
and data are synchronous (that is, the
caller halts until the call control is
handed back to the caller).
Now fast forward to the futuristic
patient-doctor encounter mentioned
at the beginning of this article. In this
instance, the different devices, also
known as agents, talk to each other
to move the process forward. It is not
a single-host system but a collection
of distributed devices talking to each
other, forming a coherent whole.
In this new world, there are no state-
ments or assignments—only a series of
interactions, or sync points. How do you
express this new world with no single
point of control? How do you represent
this new world with no obvious program
control in the traditional sense?
To make this possible, let’s explore
Figure 4. “Hello, World” application as communication.