ticularly compelling, as the growth of
cloud computing and mobile devices
contributes to the desire to outsource
computing from a client device to an
online service. In these applications,
how can users be assured the secrecy of
their data is protected? Equally important, how can users verify the results returned are correct, without redoing the
computations?
While various forms of homomorphic encryption provide only data secrecy,7 the results of arbitrary tasks
(abstracted as function evaluations) on
a computational service (such as in the
cloud) can be efficiently verified without trusting any hardware or software
on the system.6, 21 This result contrasts
with previous approaches that were
inefficient or that could verify only the
results of restricted function families.
To formalize secure computational
outsourcing we introduce the notion
of verifiable computing.6, 21 Abstractly,
a client wishes to evaluate a function
F (such as sign a document or manipulate a photograph) over various dynamically selected inputs x1,...,xk on one or
more untrusted computers, then verify
that the values returned are indeed the
result of applying F to the given inputs.
The critical requirement is that the client’s effort to generate and verify work
instances must be substantially less
than the work required to perform the
computation.
The definition of verifiable computation here is non-interactive: The client sends a single message to the worker and vice versa. The definition also
uses an amortized notion of complexity for the client, which may perform expensive preprocessing but, after this stage, must run efficiently on every work instance.
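The definition above decomposes naturally into four algorithms: a one-time, expensive key generation; a cheap per-input encoding; the worker's computation; and a cheap verification step. A minimal sketch of that shape follows; the names, types, and the toy instantiation are illustrative assumptions, not part of the protocol described here (the toy `verify` accepts everything and is deliberately insecure, serving only to exercise the message flow):

```python
from typing import Any, Callable, NamedTuple

class VCScheme(NamedTuple):
    # one-time, expensive setup: F -> (evaluation key EK, verification key VK)
    keygen: Callable[[Callable], tuple]
    # per input, cheap: (VK, x) -> (encoded input sigma_x, decoding secret tau_x)
    probgen: Callable[[Any, Any], tuple]
    # worker's job: (EK, sigma_x) -> encoded output sigma_y
    compute: Callable[[Any, Any], Any]
    # per output, cheap: (VK, tau_x, sigma_y) -> y = F(x), or None to reject
    verify: Callable[[Any, Any, Any], Any]

def outsource(scheme: VCScheme, F: Callable, inputs: list) -> list:
    # amortized usage: run keygen once, then only the cheap per-instance steps
    ek, vk = scheme.keygen(F)
    results = []
    for x in inputs:
        sigma_x, tau_x = scheme.probgen(vk, x)
        sigma_y = scheme.compute(ek, sigma_x)   # done by the untrusted worker
        y = scheme.verify(vk, tau_x, sigma_y)
        if y is None:
            raise RuntimeError("worker response rejected")
        results.append(y)
    return results

# toy, INSECURE instantiation: no encoding, no checking -- flow only
toy = VCScheme(
    keygen=lambda F: (F, None),
    probgen=lambda vk, x: (x, x),
    compute=lambda ek, sx: ek(sx),
    verify=lambda vk, tau, sy: sy,
)
```

A real scheme differs from the toy in exactly the places the definition demands: `probgen` and `verify` must run in time much smaller than evaluating F, and `verify` must reject any wrong answer except with negligible probability.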
Drawing on techniques from secure multiparty computation, the first
protocol for verifiable computing, presented here, provably provides computational integrity for work done by
an untrusted party; it also provides
provable secrecy for the computation’s
inputs and outputs.6, 21 This feature is
bundled into the protocol at no additional computational cost. Such privacy is critical for many real-world outsourcing scenarios in which a function
is computed over highly sensitive data
(such as medical records and trade secrets).
Moreover, the protocol provides
asymptotically optimal performance
(amortized over multiple inputs). Specifically, it requires a one-time preprocessing stage that takes O(|C|) time,
where C is the smallest known Boolean
circuit computing F. For each work instance, the client performs O(|m|) work
to prepare an m-bit input, the worker
performs O(|C|) work to compute the
results, and the client performs O(|n|)
work to verify the n-bit result.
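The amortization arithmetic behind these bounds is easy to check. With illustrative unit costs (my assumption, not figures from the text), the client's one-time O(|C|) preprocessing is repaid after only a couple of work instances, since each instance then costs the client only O(|m| + |n|) instead of the O(|C|) it would pay to evaluate the circuit locally:

```python
def client_cost_outsourced(C, m, n, k):
    # one-time garbling ~ |C|, then ~ |m| + |n| per instance, over k instances
    return C + k * (m + n)

def client_cost_local(C, m, n, k):
    # client evaluates the circuit itself every time
    return k * C

# e.g., a million-gate circuit with 1,000-bit inputs and outputs:
C, m, n = 10**6, 10**3, 10**3
# outsourcing already wins at k = 2 instances
assert client_cost_outsourced(C, m, n, 2) < client_cost_local(C, m, n, 2)
```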
This result shows arbitrary computations can be outsourced to untrusted workers while preserving the secrecy of the data and letting the client efficiently verify the computations were done correctly. Verifiable
computing could thus be used to, say,
outsource work to a cloud-based service or within a distributed computing
project like Folding@home,19 which
outsources protein-folding simulations to millions of Internet users
worldwide. To prevent workers from
cheating, these projects often assign
the same work unit to multiple clients,
then compare results; verifiable computing would eliminate these redundant computations and provide strong
cryptographic protections against colluding workers, though some subtleties are involved when workers are
caught cheating.6, 21 The protocol is
based on the crucial (and somewhat
surprising) observation that Yao’s Garbled Circuit protocol,31 in addition to
providing secure two-party computation, provides a “one-time” verifiable
computation. That is, Yao’s Construction can be adapted to allow a client to
outsource the computation of a function on a single input.
In the preprocessing stage of the
outsourcing protocol, the client garbles the circuit C according to Yao’s
Construction. Then, in the “input
preparation” stage, the client reveals
the random labels associated with the
input bits of x in the garbling. These
labels allow the worker to compute
the random labels associated with the
output bits, and from them the client reconstructs F(x). If the bit labels
are sufficiently long and random, it is
improbable that the worker can guess
those for an incorrect output; therefore
the client is assured the response, F(x),
is correct.
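A single garbled gate makes this label-checking argument concrete. The sketch below uses a common textbook simplification (hashing the two input labels to mask each truth-table row, with a fixed all-zero tag so the evaluator recognizes the valid row); it is an illustration of the idea, not necessarily the exact construction used in the protocol:

```python
import hashlib
import secrets

LABEL_LEN = 16
TAG = b"\x00" * 8   # redundancy so the evaluator recognizes the valid row

def row_pad(la: bytes, lb: bytes) -> bytes:
    # one-time pad for a table row, derived by hashing the two input labels
    return hashlib.sha256(la + lb).digest()[:LABEL_LEN + len(TAG)]

def xor(x: bytes, y: bytes) -> bytes:
    return bytes(p ^ q for p, q in zip(x, y))

def garble_and_gate():
    # two random labels per wire; labels[w][b] encodes "wire w carries bit b"
    labels = {w: (secrets.token_bytes(LABEL_LEN), secrets.token_bytes(LABEL_LEN))
              for w in ("a", "b", "out")}
    # one masked row per input combination, hiding labels["out"][a AND b]
    table = [xor(row_pad(labels["a"][ba], labels["b"][bb]),
                 labels["out"][ba & bb] + TAG)
             for ba in (0, 1) for bb in (0, 1)]
    secrets.SystemRandom().shuffle(table)   # hide which row is which
    return labels, table

def evaluate(table, la: bytes, lb: bytes) -> bytes:
    # worker side: trial-decrypt each row; the TAG marks the one that opens
    pad = row_pad(la, lb)
    for row in table:
        plain = xor(row, pad)
        if plain.endswith(TAG):
            return plain[:LABEL_LEN]
    raise ValueError("no valid row")

# client garbles once, reveals the input labels for x = (1, 1);
# the worker returns an output label the client checks against its own copy
labels, table = garble_and_gate()
out_label = evaluate(table, labels["a"][1], labels["b"][1])
assert out_label == labels["out"][1]   # the label encoding AND(1, 1) = 1
```

Because the worker only ever learns the output label for the correct result, returning a wrong-but-accepted answer would require guessing a fresh 16-byte random string, which is exactly the "one-time" verifiability property of the garbling.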
Unfortunately, reusing the circuit
for a second input x′ is insecure, since
once the output labels of F(x) are re-