stations other than s for γ and the sum of the same stations' values for a packing that is optimal subject to the constraint that s does not win. We identified these optimal packings using the MIP encoding from Section 5 with 2 changes. First, we added the objective of maximizing the aggregate values of the participating stations: maximize $\sum_{s \in S} \sum_{c \in C_s} v_s(\mathrm{band}(c)) \, x_{s,c}$, where band(c) is a function that returns the corresponding band for a given channel. Second, we allowed the option of not assigning a channel to a bidding station.
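To make the modified encoding concrete, here is a minimal sketch of such a value-maximizing packing MIP, written with PuLP for illustration (the paper's experiments used CPLEX and Gurobi; the solver here is PuLP's bundled CBC). The names stations, domains, values, band, interference, and bidding are hypothetical inputs, not the paper's, and the sketch assumes the Section 5 encoding takes the standard form: one binary variable per (station, channel) pair plus pairwise channel-specific interference constraints.

from pulp import LpProblem, LpMaximize, LpVariable, lpSum, LpBinary

def solve_packing(stations, domains, values, band, interference, bidding):
    # ASSUMPTION: a sketch of the modified Section 5 encoding, not the
    # paper's exact model. domains[s] = allowed channels for station s;
    # values[s][b] = s's value for broadcasting in band b; band(c) = the
    # band of channel c; interference = channel-specific conflict pairs;
    # bidding = stations whose bids were accepted (may go unassigned).
    prob = LpProblem("value_maximizing_packing", LpMaximize)
    x = {(s, c): LpVariable(f"x_{s}_{c}", cat=LpBinary)
         for s in stations for c in domains[s]}
    # Objective: total value of every station that stays on the air,
    # valued at the band of the channel it receives.
    prob += lpSum(values[s][band(c)] * x[s, c] for (s, c) in x)
    for s in stations:
        assigned = lpSum(x[s, c] for c in domains[s])
        if s in bidding:
            prob += assigned <= 1  # bidding stations may be left unassigned
        else:
            prob += assigned == 1  # all other stations must be packed
    # Channel-specific interference: conflicting assignments cannot coexist.
    for (s1, c1), (s2, c2) in interference:
        if (s1, c1) in x and (s2, c2) in x:
            prob += x[s1, c1] + x[s2, c2] <= 1
    prob.solve()
    return {s: c for (s, c), var in x.items() if var.value() and var.value() > 0.5}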
6.1. Greater New York City simulations
Unfortunately, it was impossible to solve these optimization problems at a national scale, even given several days of computing time. We therefore constructed tractable problems by restricting ourselves to stations in the vicinity of New York City, which we chose because it corresponds to one of the most densely connected regions in the interference graph. More specifically, we dropped all Canadian stations and restricted ourselves to the UHF band using the smallest possible clearing target (maximum allowable channel of 29). Using the interference graph induced by these restrictions, we then dropped every station whose shortest path length to a station in New York City exceeded 2. The result was a setting with 218 stations, including those in Boston, Philadelphia, and Washington, DC. The setting contained 78,499 channel-specific interference constraints, yielding a MIP encoding with 2,465 variables and 78,717 constraints.
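As a sketch of the final restriction step, a multi-source breadth-first search over the interference graph suffices. The names adj (an adjacency map over stations) and nyc_stations (the seed set of New York City stations) are hypothetical; the real station data is not reproduced here.

from collections import deque

def stations_within_hops(adj, sources, max_hops=2):
    # Multi-source BFS: keep every station whose shortest path to some
    # source station is at most max_hops edges. (Illustrative sketch;
    # adj is assumed to map every station to its neighbor stations.)
    dist = {s: 0 for s in sources}
    queue = deque(sources)
    while queue:
        u = queue.popleft()
        if dist[u] == max_hops:
            continue  # expanding u's neighbors would exceed the hop budget
        for v in adj[u]:
            if v not in dist:
                dist[v] = dist[u] + 1
                queue.append(v)
    return set(dist)

# e.g., kept = stations_within_hops(adj, nyc_stations, max_hops=2)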
We randomly generated 20 different valuation profiles, using the methodology described in Section 4 but limiting ourselves to the stations in the restricted interference graph. For each valuation profile, we conducted 5 simulations. The first was of a VCG auction; we computed allocations and payments using CPLEX, solving all MIPs to optimality within an absolute MIP gap tolerance of $10^{-6}$. We also ran 4 reverse auction simulations for each valuation profile, varying the feasibility checker to consider 3 alternatives to SATFC. The first is the greedy feasibility checker, which represents the simplest reasonable feasibility checker and thus serves as a baseline.
The second is the default configuration of PicoSAT. To our knowledge, alongside MIP approaches this is the only other solver that has been used in publications on the Incentive Auction,4,21 probably because we showed it to be the best among a set of alternatives in an early talk on the subject.24 The third is the default configuration of Gurobi, the best-performing MIP solver on our test set, which we include as a proxy for the MIP approach initially investigated by the FCC.
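This excerpt does not spell out the greedy checker's algorithm, so the following is only one natural reading, offered as an assumption rather than the paper's definition: hold the current feasible assignment fixed and scan the new station's channel domain for a channel that violates no interference constraint. All names are hypothetical.

def greedy_feasible(station, domain, assignment, conflicts):
    # assignment: channel currently held by each packed station.
    # conflicts[(s, c)]: set of (station, channel) pairs that cannot
    # coexist with station s broadcasting on channel c.
    # ASSUMPTION: one plausible greedy check, not necessarily the exact
    # algorithm used in the paper; it never moves already-packed stations.
    assigned = set(assignment.items())
    for c in domain:
        if not conflicts.get((station, c), set()) & assigned:
            return c  # a feasible channel exists without repacking anyone
    return None  # greedy gives up; a full SAT/MIP solver might still succeed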
In total, our 100 simulations consumed over five years of CPU time (dominated by the VCG simulations). Figure 3 (left) illustrates the results. Each point shows the value loss (x-axis) and cost (y-axis) of a single simulation; in both cases, these quantities are normalized by the corresponding quantity achieved by VCG for the same valuation profile. The SATFC simulations had a mean value loss ratio of 1.048 and a mean cost ratio of 0.760, indicating that the reverse auction achieved nearly optimal efficiency at much lower cost than VCG. The PicoSAT results were nearly identical, differing in only 2 value profiles, and only slightly. The Gurobi results were again extremely similar but marginally worse, with a mean value loss ratio of 1.053 and a mean cost ratio of 0.762. All of the SATFC, PicoSAT, and Gurobi runs dominated the greedy runs according to both metrics; on average, reverse auctions based on greedy cost 1.742 times as much and lost 1.366 times as much broadcaster value as those based on SATFC. Despite these differences, all of the solvers were able to solve a very large fraction of the feasibility-checking problems encountered in their respective simulations (which took different trajectories once 2 solvers differed in their ability to solve a given problem): 99.978%, 99.945%, 99.948%, and 99.118% for SATFC, PicoSAT, Gurobi, and greedy, respectively (including trivial problems).
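For concreteness, the normalization just described amounts to the following computation, sketched over hypothetical per-profile records; the figures reported above come from the simulations themselves, not from this code.

def mean_ratios(profiles):
    # profiles: one record per valuation profile, holding the reverse
    # auction's value loss and cost alongside VCG's for the same profile.
    n = len(profiles)
    value_loss_ratio = sum(p["value_loss"] / p["vcg_value_loss"] for p in profiles) / n
    cost_ratio = sum(p["cost"] / p["vcg_cost"] for p in profiles) / n
    return value_loss_ratio, cost_ratio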
Figure 3. Comparing value loss and cost of the greedy feasibility checker, Gurobi, PicoSAT, and SATFC 2.3.1 for 20 different value profiles. (Left) Fraction of VCG cost versus fraction of VCG value loss for Greater New York City simulations. All VCG points lie at (1, 1). (Right) Value loss and cost for national simulations.
[Figure 3 plots appear here. Left panel axes: value loss / VCG value loss (x) and cost / VCG cost (y). Right panel axes: value loss in billions (x) and cost in billions (y).]
6.2. National simulations
We were more interested in economic outcomes at the national scale, even though we could not simulate VCG in such a large setting. We generated 20 valuation profiles for our full set of stations and ran reverse auction simulations using our 4 feasibility checkers. In total, these experiments