model and allow servers written in this
style to be scaled over multiple cores
(running multiple threads) and multiple servers. We were not trying to take
existing code and allow it to run within
our system. This would have made the
task much more difficult and would
not have corresponded to the realities
of the game and virtual-world culture.
Game and world servers are written
from scratch for each game or world,
perhaps reusing some libraries but
rarely, once running, being rehosted
into a different environment. Efforts
to bring different platforms into the
game are restricted to the client side,
where new consoles bringing in new
players may be worth the effort.
Darkstar provides a container in
which the server runs. The container
provides interfaces to a set of services
that allow the game server to keep persistent state, establish connections
with clients, and construct publish/
subscribe channels with sets of clients.
Multiple copies of the game server
code can run in multiple instances of
the Darkstar container. Each copy can
be written as if it were the only one active (and, in fact, it may be the only one
active for small-scale games or worlds).
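The container services described above can be sketched as a set of small interfaces. The names and shapes below are illustrative stand-ins, not the actual Darkstar SDK API, with trivial in-memory implementations so the sketch is self-contained:

```java
// Sketch of the three container services the text describes: persistent
// state, client connections, and publish/subscribe channels.
// Names are illustrative, not the real Darkstar SDK interfaces.
import java.util.*;

interface DataService {
    void put(String id, Object state);   // persist a named object
    Object get(String id);               // read it back by its identifier
}

interface Channel {
    void join(String clientId);          // subscribe a client to the channel
    List<String> members();
}

interface ChannelService {
    Channel createChannel(String name);
}

// Minimal in-memory stand-ins; a production container would back these
// with its persistence and networking layers.
class SimpleDataService implements DataService {
    private final Map<String, Object> store = new HashMap<>();
    public void put(String id, Object state) { store.put(id, state); }
    public Object get(String id) { return store.get(id); }
}

class SimpleChannel implements Channel {
    private final List<String> members = new ArrayList<>();
    public void join(String clientId) { members.add(clientId); }
    public List<String> members() { return members; }
}

class SimpleChannelService implements ChannelService {
    public Channel createChannel(String name) { return new SimpleChannel(); }
}
```

The point of the interfaces is that game code written against them need not know how many container instances are running behind them.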
Each server is structured as an event loop: the main loop listens on a session with a client, established when the client logs in. When a
message is delivered, the event loop is
called. The loop can then decode the
message and determine the game or
world action that is the appropriate response. It then dispatches a task within
the container.
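The decode-and-dispatch pattern just described can be sketched as follows. This is a simplified illustration, not the Darkstar SDK API; the message format and task queue are assumptions made for the example:

```java
// Sketch of the event-loop structure described above: a client message
// arrives on a session, the loop decodes it, determines the appropriate
// action, and dispatches a task inside the container rather than doing
// the work inline. Names and message format are illustrative.
import java.util.*;

class EventLoopSketch {
    // A "task" is a unit of work scheduled within the container.
    interface Task { void run(); }

    static final Deque<Task> taskQueue = new ArrayDeque<>();

    // Called when a message is delivered on a client session.
    static void receivedMessage(String message, List<String> log) {
        // Decode the message and determine the responsive game action...
        String[] parts = message.split(" ", 2);
        String action = parts[0];
        String arg = parts.length > 1 ? parts[1] : "";
        // ...then dispatch a task; the event loop itself stays short.
        taskQueue.add(() -> log.add("action=" + action + " arg=" + arg));
    }

    // The container drains and runs the queued tasks.
    static void drain() {
        while (!taskQueue.isEmpty()) taskQueue.poll().run();
    }
}
```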
Syncretia in Second Life, by Alpha Auer, aka Elif Ayiter.
Each of these tasks can read or
change data in the world through the
Darkstar data service, communicate
with the client, or send messages to
groups of other game or world participants via a channel. Under the covers,
the task is wrapped in a transaction.
The transaction is used to ensure that
no conflicting concurrent access to the
world data will occur. If a task tries to
change data that is being changed by
some other concurrent task, the data
service will detect that conflict. In that
case, one of the conflicting tasks will be aborted and rescheduled, while the other runs to completion. By the time the aborted task is retried, the conflict should have disappeared, and the retried task should also run to completion.
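The abort-and-retry behavior can be illustrated with a version-checked commit. This is a hedged sketch of the general optimistic-concurrency technique, not the internals of the Darkstar data service:

```java
// Sketch of optimistic conflict detection with abort and retry: a task
// records the version of the data it read; if another task committed in
// the meantime, the commit fails (the task is "aborted") and the task is
// rerun. Illustrative only — not the Darkstar data service internals.
class RetrySketch {
    static class VersionedInt {
        int value;
        int version;
    }

    // Commit succeeds only if no other task committed since we read.
    static boolean tryCommit(VersionedInt data, int readVersion, int newValue) {
        synchronized (data) {
            if (data.version != readVersion) return false; // conflict: abort
            data.value = newValue;
            data.version++;
            return true;
        }
    }

    // Run an increment task, retrying until it commits without conflict.
    // Returns the number of attempts taken.
    static int runWithRetry(VersionedInt data, int delta) {
        int attempts = 0;
        while (true) {
            attempts++;
            int readVersion, readValue;
            synchronized (data) {
                readVersion = data.version;
                readValue = data.value;
            }
            if (tryCommit(data, readVersion, readValue + delta)) return attempts;
        }
    }
}
```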
This mechanism for concurrency
control does require that all tasks access all of their data through the Darkstar data service. This is a departure
from the usual way of programming
game or world servers, where data is
kept in memory to decrease latency.
By applying results from the past 20 years of database research, we believe we can keep the penalty for accessing data through the data service small by caching data in intelligent ways. We also believe that by exploiting the inherent parallelism in these games, we can increase
the overall performance of the game as
the number of players increases, even
if there is a small penalty for individual
data access. Our data store is not based
on a standard SQL database since we
don’t need the full functionality such
databases provide. What we need is
something that gives us fast access to
persistently stored objects that can be
identified in simple ways. Our current
implementation uses the Berkeley Database for this, although we have abstracted our access to it to provide the
opportunity to use other persistence
layers if required.
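That abstraction over the persistence layer can be sketched as a small interface. The interface and the in-memory implementation below are illustrative assumptions, not Darkstar's actual code; the point is that a Berkeley DB-backed implementation and any other store can sit behind the same contract:

```java
// Sketch of abstracting the persistence layer: the data store needs only
// fast access to persistently stored objects identified in simple ways,
// so it is hidden behind a minimal interface. The in-memory version here
// is a stand-in; a Berkeley DB-backed class would implement the same
// interface, leaving the rest of the stack untouched by the swap.
import java.util.*;

interface PersistenceLayer {
    void store(String id, byte[] object);  // persist a serialized object
    byte[] load(String id);                // fetch by simple identifier
}

class InMemoryPersistence implements PersistenceLayer {
    private final Map<String, byte[]> objects = new HashMap<>();
    public void store(String id, byte[] object) {
        objects.put(id, object.clone());   // defensive copy on write
    }
    public byte[] load(String id) {
        byte[] b = objects.get(id);
        return b == null ? null : b.clone();
    }
}
```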
Concurrency control is not the only reason to require that all data be accessed through the data store. By backing the data in a persistent fashion rather than keeping it in main memory, we gain some inherent reliability that has not been exhibited by games or worlds in the past. Storing all of the data in memory means that a server crash can cause the loss of any change in the game or world since the last time the system was checkpointed. This can sometimes be hours of play, which can cause considerable consternation among the customers and expensive calls to the service lines. By keeping all data persistently, we believe we can ensure that no more than a few seconds of game or world interaction will be lost in the case of a server crash. In the best case, such a crash won't even be noticed by the players, as the tasks that were on the server will be transferred to another server in a fashion that is transparent to the player.
The biggest payoff for requiring that all data be kept in the data store is that it helps to make the tasks generated in response to events in the game portable. Since the data store can be accessed by any of a cluster of machines running the Darkstar stack and the game logic, there is no data that cannot be moved from machine to machine. We do the same with the communication mechanisms, ensuring that a session or channel connecting the game and some set of clients is abstracted through the Darkstar stack. This allows us to move the task using the session or channel to another machine without affecting the semantics of the task talking over the session or channel.
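Why shared state makes tasks portable can be shown with a short sketch. The node and store types below are hypothetical names invented for the example; the real cluster mechanics are of course far richer:

```java
// Sketch of task portability via a shared data store: a task carries only
// an object identifier, never node-local state, so any machine in the
// cluster can run it by fetching what it needs from the store.
// Names are illustrative, not Darkstar's actual classes.
import java.util.*;

class PortabilitySketch {
    // Data store reachable from every node in the cluster.
    static final Map<String, Integer> sharedStore = new HashMap<>();

    static class Node {
        final String name;
        Node(String name) { this.name = name; }

        // Running a task requires no local state beyond the object id,
        // so the same task can be dispatched to any node.
        int runTask(String objectId, int delta) {
            int updated = sharedStore.getOrDefault(objectId, 0) + delta;
            sharedStore.put(objectId, updated);
            return updated;
        }
    }
}
```

Because the second node sees exactly the state the first node left in the store, moving a task between them does not change its semantics.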
This task portability means we can
dynamically balance the load on a set of
machines running the game or virtual