games are naturally phrased as payoff-maximization games, where each player has a payoff function Pi(s) that it strives to maximize. We use P to denote the objective function of a payoff-maximization game. We call such a game (λ, μ)-smooth if

∑i Pi(s*i, s−i) ≥ λ · P(s*) − μ · P(s)
automatically to the more general equilibrium concepts in
Figure 1 and to the corresponding outcome sequences in
games played over time. Several less direct consequences of smoothness arguments are discussed in the full version.22 We work with cost-minimization games, though similar results hold for smooth payoff-maximization games (cf. Example 2.5).
for every pair s, s* of outcomes. A derivation similar to (3)–(5) shows that, in a (λ, μ)-smooth payoff-maximization game, the objective function value of every pure Nash equilibrium is at least a λ/(1 + μ) fraction of the maximum possible. We define the robust POA of a payoff-maximization game as the supremum of λ/(1 + μ) over all legitimate smoothness parameters (λ, μ).
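The derivation referred to above can be sketched explicitly; it assumes that the objective bounds the sum of payoffs, P(s) ≥ ∑i Pi(s) (as condition (ii) of the valid utility games below guarantees). For a pure Nash equilibrium s and an optimal outcome s*:

```latex
\begin{align*}
P(s) &\ge \sum_{i=1}^{k} P_i(s)
        && \text{(objective bounds the sum of payoffs)} \\
     &\ge \sum_{i=1}^{k} P_i(s^*_i, s_{-i})
        && \text{($s$ is a pure Nash equilibrium)} \\
     &\ge \lambda \cdot P(s^*) - \mu \cdot P(s)
        && \text{(smoothness)},
\end{align*}
```

and rearranging gives P(s) ≥ (λ/(1 + μ)) · P(s*).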
A valid utility game is defined by a ground set E, a nonnegative submodular function V defined on subsets of E, and a strategy set Si ⊆ 2E and a payoff function Pi for each player i = 1, 2, . . . , k.b For example, the set E could denote a set of locations where facilities can be built, and a strategy si ⊆ E could denote the locations at which player i chooses to build facilities. For an outcome s, let U(s) ⊆ E denote the union of players’ strategies in s. The objective function value of an outcome s is defined as P(s) = V(U(s)). Furthermore, the definition requires that two conditions hold: (i) for each player i, Pi(s) ≥ V(U(s)) − V(U(∅, s−i)) for every outcome s; and (ii) ∑i Pi(s) ≤ P(s) for every outcome s. One concrete example of such a game is competitive facility location with price-taking markets and profit-maximizing firms.
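As an illustration, the two conditions and the smoothness inequality discussed below can be checked mechanically. The following sketch uses a hypothetical toy instance (not from the text): V(S) = |S| is a nonnegative, nondecreasing, submodular function, and each player's payoff is its marginal contribution, which satisfies condition (i) with equality.

```python
from itertools import product

# Toy valid utility game (hypothetical instance): V(S) = |S| is
# nonnegative, nondecreasing, and submodular.
def V(S):
    return len(S)

def union(strats):
    out = set()
    for si in strats:
        out |= si
    return out

def payoff(i, s):
    # P_i(s) = V(U(s)) - V(U(empty, s_-i)): player i's marginal contribution.
    return V(union(s)) - V(union(sj for j, sj in enumerate(s) if j != i))

# Two players, each choosing one of two facility locations.
strategy_sets = [({"a"}, {"b"}), ({"a"}, {"c"})]
outcomes = list(product(*strategy_sets))

for s in outcomes:
    # Condition (ii): payoffs never sum to more than the objective.
    assert sum(payoff(i, s) for i in range(2)) <= V(union(s))

# (1, 1)-smoothness: sum_i P_i(s*_i, s_-i) >= P(s*) - P(s) for every pair.
for s, s_star in product(outcomes, outcomes):
    lhs = sum(payoff(i, tuple(s_star[j] if j == i else s[j] for j in range(2)))
              for i in range(2))
    assert lhs >= V(union(s_star)) - V(union(s))
```

Both loops pass silently, matching the claim proved below that every such game is (1, 1)-smooth.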
We claim that every valid utility game with a nondecreasing objective function V is (1, 1)-smooth. The proof is essentially a few key inequalities from Theorem 3.2 of Vetta25, as follows. Let s, s* denote arbitrary outcomes of a valid utility game with a nondecreasing objective function. Let Ui ⊆ E denote the union of all of the players’ strategies in s, together with the strategies employed by players 1, 2, . . . , i in s*. Applying condition (i), the submodularity of V, and the nondecreasing property of V yields

∑i Pi(s*i, s−i) ≥ ∑i [V(U(s*i, s−i)) − V(U(∅, s−i))] ≥ ∑i [V(Ui) − V(Ui−1)] = V(Uk) − V(U0) ≥ P(s*) − P(s),
3.1. Static equilibrium concepts
We begin with implications of Definition 2.1 for randomized equilibrium concepts in one-shot games; the next section treats outcome sequences generated by repeated play.
A set (σ1, . . . , σk) of independent probability distributions over strategy sets—one per player of a cost-minimization game—is a mixed Nash equilibrium of the game if no player can decrease its expected cost under the product distribution σ = σ1 × . . . × σk via a unilateral deviation:

Es∼σ[Ci(s)] ≤ Es−i∼σ−i[Ci(s′i, s−i)]

for every i and s′i ∈ Si, where σ−i is the product distribution of all σj’s other than σi. (By linearity, it suffices to consider
only pure-strategy unilateral deviations.) Obviously, every
pure Nash equilibrium is a mixed Nash equilibrium but not
conversely; indeed, many games have no pure Nash equilib-
rium, but every finite game has a mixed Nash equilibrium.
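The textbook example of a finite game with a mixed but no pure Nash equilibrium is matching pennies. A minimal sketch, recast here as a cost-minimization game (the strategy names and cost values are illustrative), verifies the mixed equilibrium condition by checking all pure-strategy deviations:

```python
from itertools import product

# Matching pennies as a cost-minimization game: player 0 pays 1 on a
# mismatch, player 1 pays 1 on a match. It has no pure Nash equilibrium;
# the uniform product distribution is its mixed Nash equilibrium.
S = ["H", "T"]

def cost(i, s):
    match = s[0] == s[1]
    return (0 if match else 1) if i == 0 else (1 if match else 0)

sigma = [{"H": 0.5, "T": 0.5}, {"H": 0.5, "T": 0.5}]  # product distribution

def expected_cost(i, deviation=None):
    # Expected cost of player i under sigma, optionally with i
    # unilaterally switching to the fixed pure strategy `deviation`.
    total = 0.0
    for s in product(S, S):
        prob = sigma[0][s[0]] * sigma[1][s[1]]
        played = list(s)
        if deviation is not None:
            played[i] = deviation
        total += prob * cost(i, tuple(played))
    return total

# Equilibrium condition: no pure-strategy deviation lowers expected cost.
for i in range(2):
    for dev in S:
        assert expected_cost(i, dev) >= expected_cost(i) - 1e-9
```

Every deviation leaves the expected cost unchanged at 1/2, which is exactly why the uniform distribution is an equilibrium here.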
A correlated equilibrium of a cost-minimization game G is a (joint) probability distribution σ over the outcomes of G with the property that

Es∼σ[Ci(s) | si] ≤ Es∼σ[Ci(s′i, s−i) | si]   (8)
as desired. This smoothness argument implies a lower
bound of 1/2 on the POA of pure Nash equilibria in every
valid utility game with a nondecreasing objective function—
a result first proved in Vetta,25 along with a matching upper bound. Our extension theorem shows that this lower bound
applies more generally to all of the equilibria depicted in
Figure 1, a fact first established in Blum et al.
for every i and si, s′i ∈ Si. A classical interpretation of a correlated equilibrium is in terms of a mediator, who draws an outcome s from the publicly known distribution σ and privately “recommends” strategy si to each player i. The equilibrium condition requires that following a recommended
strategy always minimizes the expected cost of a player, conditioned on the recommendation. Mixed Nash equilibria
are precisely the correlated equilibria that are also product
distributions. Correlated equilibria have been widely studied as strategies for a benevolent mediator, and also because
of their relative tractability. Because the set of correlated
equilibria is explicitly described by a small set of linear
inequalities, computing (and even optimizing over) correlated equilibria can be done in time polynomial in the size of
the game (see, e.g., Chapter 2 of Nisan et al.17). They are also
relatively “easy to learn,” as discussed in the next section.
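The mediator interpretation can be made concrete by checking the conditional no-deviation inequalities directly. The sketch below uses a hypothetical two-player "traffic intersection" cost game (the action names and cost values are invented for illustration), where a fair-coin mediator recommending that exactly one player go is a correlated equilibrium that is not a product distribution:

```python
from itertools import product

# Hypothetical cost game: both going causes a crash (cost 5 each);
# stopping costs 1; going while the other stops costs 0.
A = ["go", "stop"]
COST = {  # (s0, s1) -> (C0, C1)
    ("go", "go"): (5, 5),
    ("go", "stop"): (0, 1),
    ("stop", "go"): (1, 0),
    ("stop", "stop"): (1, 1),
}

def is_correlated_eq(sigma):
    # Check: for each player i, recommendation rec, and deviation dev,
    # following rec is no worse in conditional expected cost.
    for i in range(2):
        for rec in A:
            for dev in A:
                regret = 0.0
                for s in product(A, A):
                    if s[i] != rec:
                        continue  # condition on the recommendation
                    alt = list(s)
                    alt[i] = dev
                    p = sigma.get(s, 0.0)
                    regret += p * (COST[s][i] - COST[tuple(alt)][i])
                if regret > 1e-9:
                    return False
    return True

# Mediator (traffic light): a fair coin picks which player goes.
sigma = {("go", "stop"): 0.5, ("stop", "go"): 0.5}
assert is_correlated_eq(sigma)
```

Note that this σ puts zero mass on ("go", "go") and ("stop", "stop"), so it is not a product of marginals, i.e., not a mixed Nash equilibrium.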
Finally, a coarse correlated equilibrium of a cost-minimization game is a probability distribution σ over outcomes
3. An Extension Theorem
This section states and proves the extension theorem discussed in Section 1.2: every POA bound on pure Nash
equilibria derived from a smoothness argument extends
b A set function V : 2E → R is submodular if V(X ∩ Y) + V(X ∪ Y) ≤ V(X) + V(Y) for every X, Y ⊆ E.
Es∼σ[Ci(s)] ≤ Es∼σ[Ci(s′i, s−i)]   (9)

for every i and s′i ∈ Si. While a correlated equilibrium (8) protects against deviations by players aware of their recommended strategy, a coarse correlated equilibrium (9) is only constrained by player deviations that are independent of the sampled outcome. Since every correlated equilibrium is also