interval [0, 1] as the limit of fractional
The term η simply prevents division by zero, and the term ξ is a statistical shrinkage term, used as a model parameter, that helps to distort the global information available to agents when they reselect a strategy.
We describe the probability that agent i ∈ S switches over to the strategy that agent j ∈ S previously implemented as follows:
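The exact switching rule is not reproduced in this excerpt, but the roles of η and ξ can be illustrated with a payoff-proportional imitation rule. The function name, the proportional form, and the default values below are our assumptions, offered only as a plausible sketch:

```python
def switch_probability(payoffs, j, eta=1e-6, xi=0.1):
    """Probability that an agent adopts agent j's strategy,
    proportional to j's shrunken payoff share.

    eta guards against division by zero when all payoffs vanish;
    xi shrinks payoff differences, blurring the global payoff
    information agents see when reselecting a strategy.
    (Illustrative form only; the paper's exact rule may differ.)
    """
    shifted = [max(0.0, u) + xi for u in payoffs]
    return shifted[j] / (eta + sum(shifted))
```

With equal payoffs every agent is equally likely to be imitated; larger ξ pushes the rule toward that uniform, information-poor regime.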
Under the signaling game-theoretic model, we evaluate equilibrium concepts and their stability under evolutionary dynamics, including mutant Sybil identities. We further specify the WANET case and its parameters to perform computer simulations yielding empirical measures of its behavior. Here, we focus on how validated and shared security information can ballast the desired equilibrium, in which agents honestly reveal their identities.
Models and simulations. To demonstrate simulation scalability, we used a laptop (with a 2-GHz Intel Core i7 processor and 8GB of RAM) to compute a simulation history (with 800 nodes and 1,000 generations). In eight minutes of user time over 16M rounds of play, 160K strategic mutations were explored; 125K of those mutations were found to be unique DFA strategy structures, and 36K employed deceptive identities. It was possible to discover a stable equilibrium in which all agents reveal their identity honestly and act with the common knowledge that others reveal their identities honestly. Since mutating into a Sybil behavior is detectable by others and credibly punishable, the equilibrium is stable. Note also that the nature of cyber-social systems makes these systems amenable to empirical evolutionary studies, in that model checking or other formal approaches would require “an intelligent designer” who could specify various global properties of the system. However, we do not rule out a role for statistical model checking in this and other similar mechanism design studies.
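The stability claim can be checked in miniature: if a Sybil deviation is detected and credibly punished, honesty is a strict best response to an all-honest population. The payoff numbers below are illustrative assumptions, not the simulation's actual parameters:

```python
# Toy symmetric game: strategies are "honest" and "sybil".
# A sybil deviant is detected and punished by honest peers,
# so deviating from all-honest play cannot pay. (Assumed values.)
payoff = {
    ("honest", "honest"): 3.0,   # value of mutual honest cooperation
    ("honest", "sybil"):  0.0,   # honest agent withholds cooperation
    ("sybil",  "honest"): -1.0,  # deception detected, credibly punished
    ("sybil",  "sybil"):  -2.0,  # sybils gain nothing from each other
}

def is_strict_nash(s, payoff, strategies=("honest", "sybil")):
    """All-s play is a strict symmetric Nash equilibrium iff every
    unilateral deviation strictly lowers the deviator's payoff."""
    return all(payoff[(s, s)] > payoff[(d, s)]
               for d in strategies if d != s)
```

Because the all-honest profile is a strict equilibrium, it is also evolutionarily stable: a rare mutant Sybil earns strictly less than the honest residents it meets.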
Experiments and empirical analysis.
Our experiments consider a simple
To avoid undesirable outcomes arising from deception, we call upon a theory of information-asymmetric signaling games to unify many of the adversarial use cases under a single framework, in particular when adversarial actions may be viewed mathematically as those of rational agents (that is, utility-optimizing agents possessing common knowledge of rationality).
The simplest model of signaling games involves two players. They are asymmetric in information and are called S, the sender (informed), and R, the receiver (uninformed). A key notion in this game is that of type, a random variable whose support is given by T (known to the sender S). Also, we use π_T(·) to denote the probability distribution over T serving as the prior belief of R about the sender's type. A round of game play proceeds as follows: player S learns t ∈ T; S sends to R a signal s ∈ M; and R takes an action a ∈ A. Their payoff/utility functions u_S(t, s, a) and u_R(t, s, a) are known and depend on the type, signal, and action.
In this structure, the players' behavior strategies can be described by the following two sets of probability distributions: (1) μ(·|t), t ∈ T, on M, and (2) α(·|s), s ∈ M, on A. For S, the sender strategy μ is a probability distribution on signals given types; namely, μ(s|t) describes the probability that S with type t sends signal s. For R, the receiver strategy α is a probability distribution on actions given signals; namely, α(a|s) describes the probability that R takes action a following signal s. A pair of strategies μ* and α* is in Nash equilibrium if (and only if) they are mutually best responses (that is, if each maximizes the expected utility given the other):

E[u_S(μ*, α*)] ≥ E[u_S(μ, α*)]  and  E[u_R(μ*, α*)] ≥ E[u_R(μ*, α)]

for any μ, α. It is straightforward to show that such a strategy profile (μ*, α*) exists.
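The mutual-best-response condition can be verified mechanically in a toy instance. The sketch below checks a separating equilibrium of a two-type, common-interest game by enumerating all pure-strategy deviations; the specific utilities and the restriction to pure strategies are our assumptions for illustration:

```python
import itertools

# Two types, two signals, two actions; uniform prior on types.
T, M, A = [0, 1], [0, 1], [0, 1]
prior = {0: 0.5, 1: 0.5}

def u(t, s, a):
    """Common-interest utility (assumed): both players earn 1
    when the receiver's action matches the sender's type."""
    return 1.0 if a == t else 0.0

def expected_utility(mu, alpha):
    """E[u] under pure strategies mu: T -> M and alpha: M -> A."""
    return sum(prior[t] * u(t, mu[t], alpha[mu[t]]) for t in T)

# Candidate separating equilibrium: the signal reveals the type,
# and the receiver trusts the signal.
mu_star = {0: 0, 1: 1}
alpha_star = {0: 0, 1: 1}

# Check mutual best response over all pure deviations.
sender_ok = all(expected_utility(mu_star, alpha_star) >=
                expected_utility(dict(zip(T, mu)), alpha_star)
                for mu in itertools.product(M, repeat=len(T)))
receiver_ok = all(expected_utility(mu_star, alpha_star) >=
                  expected_utility(mu_star, dict(zip(M, al)))
                  for al in itertools.product(A, repeat=len(M)))
```

Because interests are aligned here, full revelation is an equilibrium; the adversarial cases of interest arise precisely when u_S and u_R diverge.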
We conjecture that the natural models for sender-receiver utility functions could be based on functions that combine information rates with distortion, as in rate distortion theory (RDT). For instance, assume there are certain natural connections between the types and actions, as modeled by the functions f_S and f_R for the sender and receiver, respectively. Then the utility function for each consists of two weighted-additive terms, one measuring the mutual information with respect to the signals and the other measuring the undesirable distortion, where the weights are suitably chosen Lagrange constants:

where I denotes mutual information and d_S and d_R denote measures of distortion.
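Such a weighted-additive utility can be sketched numerically. The sign convention, the Lagrange weight `lam`, and the shape of the distortion term below are our assumptions, not the paper's definitions:

```python
from math import log

def mutual_information(pi_T, mu):
    """I(T; M) in bits for sender strategy mu[t][s] and prior pi_T[t]."""
    pi_M = {s: sum(pi_T[t] * mu[t][s] for t in pi_T)
            for s in next(iter(mu.values()))}
    return sum(pi_T[t] * mu[t][s] * log(mu[t][s] / pi_M[s], 2)
               for t in pi_T for s in pi_M if mu[t][s] > 0)

def utility_S(pi_T, mu, distortion, lam):
    """Weighted-additive sender utility: information rate minus
    lam times expected distortion (illustrative convention)."""
    e_d = sum(pi_T[t] * mu[t][s] * distortion(t, s)
              for t in pi_T for s in mu[t])
    return mutual_information(pi_T, mu) - lam * e_d
```

For a uniform prior over two types, a fully revealing strategy attains the maximum of one bit; any distortion penalty then trades off against that information term.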
This definition also captures the notion of deception, as follows. The distribution of signals received by R is given by the probability distribution π_M, where

π_M(s) = Σ_{t ∈ T} π_T(t) μ(s|t),

and the distribution of actions produced by R is given by the probability distribution π_A, where

π_A(a) = Σ_{s ∈ M} π_M(s) α(a|s).

Clearly, π_M and π_A are probability distributions on M and A, respectively.
If π̂_T is the probability distribution on T induced by π_A under the function f_R, then

π̂_T(t) = Σ_{a ∈ A : f_R(a) = t} π_A(a).

A natural choice of measure for deception is given by the relative entropy between the probability distributions π_T and π̂_T:

D(π_T ‖ π̂_T) = Σ_{t ∈ T} π_T(t) log (π_T(t) / π̂_T(t)).
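The pipeline from the strategies to the deception measure is a short computation. The sketch below assumes finite sets represented as dictionaries and a receiver inference map `f_R` from actions back to types; these representation choices are ours:

```python
from math import log

def induced_distributions(pi_T, mu, alpha, f_R):
    """pi_M from the sender strategy, pi_A from the receiver
    strategy, and the type distribution pi_hat_T that pi_A
    induces under f_R: A -> T."""
    pi_M = {s: sum(pi_T[t] * mu[t][s] for t in pi_T) for s in alpha}
    pi_A = {a: sum(pi_M[s] * alpha[s][a] for s in pi_M)
            for a in next(iter(alpha.values()))}
    pi_hat_T = {t: 0.0 for t in pi_T}
    for a, p in pi_A.items():
        pi_hat_T[f_R(a)] += p
    return pi_M, pi_A, pi_hat_T

def deception(pi_T, pi_hat_T):
    """Relative entropy D(pi_T || pi_hat_T) in bits."""
    return sum(p * log(p / pi_hat_T[t], 2)
               for t, p in pi_T.items() if p > 0)
```

When signaling is truthful and the receiver decodes it exactly, π̂_T = π_T and the measure is zero; any mismatch between the prior and the induced type distribution registers as positive deception.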
This definition describes deception from the point of view of the receiver. To get the
notion of deception from the point of view of the sender, one needs to play the game