Technical Perspective
Native Client: A Clever Alternative
By Dan Wallach
DOI: 10.1145/1629175.1629202
Google’s Native Client (typically abbreviated “NaCl” and pronounced NAH-cull) is an intriguing new system that
allows untrusted x86 binaries to run
safely on bare metal. Untrusted code
is already essential to the Web, whether shipping JavaScript source code,
Java byte code, Flash applications,
or ActiveX controls. Java, JavaScript,
and Flash all use an intermediate representation that is quite abstracted
from the hardware, using increasingly
sophisticated analysis and compilation techniques to achieve good performance on modern computers.
ActiveX (or Netscape/Firefox plugins), on the other hand, allows the
direct transmission of Windows x86
binary objects, digitally signed and
manually approved by the user to run
natively.
ActiveX has never been particularly
desirable. It is not portable to non-Windows platforms, and every user is
one mistaken click away from installing malware. Meanwhile, Flash has
become a standard install, largely due
to its powerful graphics and video libraries. (When you watch a YouTube
video in your browser, you’re looking
at the Flash plugin.) Indeed, Flash has
sufficient access to the local system
that it has, itself, been the target of a
variety of security attacks. Sure, you
can uninstall Flash on your system
(and mobile phones don’t support it
at all), but far too many Web sites assume you’ve got Flash installed, and
will be unusable without it.
Into this gap, NaCl offers a clever
alternative. A plugin like Flash, compiled and optimized in native x86 code,
could be downloaded, installed, and
run by any Web page without bothering the user for permission. If the plugin turned out to have security flaws,
those would be contained by the walls
that NaCl builds around the code.
Plugins could just run as distinct,
unprivileged users in the system,
leveraging the multi-user isolation
mechanisms already present in any
modern operating system, but this
ignores several unpleasant realities.
First, a substantial portion of the
world’s computers are running old
Windows variants with unacceptable
security holes. We must build stronger walls than those platforms’ native
mechanisms can support. Second, we
have to worry about CPU bugs. While
possibly the most famous CPU implementation error was Intel’s Pentium
floating-point division flaw (where
arithmetic could yield errors in the
low-order bits of the mantissa), other bugs have appeared from time to
time with more serious security ramifications. We need the ability to filter out instructions that might
tickle CPU bugs or otherwise have undesirable behavior.
If all the world were running classic
RISC machines, where every instruction was 32 bits long, this process
would be simple. Variable-length x86
instructions, however, allow any given
array of bytes to correspond to multiple different instruction streams,
depending on the exact byte offset to
which you jump. Consequently, NaCl
introduces a simple static verifier to
ensure that all jump instructions can
only target instructions on 32-byte-
aligned boundaries, and to ensure
that code blocks, starting at those offsets, have no known unsafe instructions.
The NaCl system hides the native
system call interface and uses its own
inter-process communication mechanism, while also building an “outer
sandbox” using more traditional operating system process privilege limits.
In principle, NaCl could be built into
a browser and ActiveX and Flash could
be kicked out. Adobe could recompile
Flash to pass NaCl’s verifier, and end
users would have one less source of
security holes to worry about. Also,
if Web designers wanted to use different video codecs, they would no
longer be limited to whatever Flash
supports. Better still, because NaCl doesn’t
necessarily expose the native operating system’s system call interface, we
can imagine NaCl apps running
portably across Linux, OS X, and Windows (NaCl is also being extended to
support ARM and x86-64).
How secure is the open-source NaCl
implementation? I was one of the
judges for a contest that Google held
earlier this year to find out. In the end,
only five teams had entries, together
identifying what the Google development team considered to be 24 valid
security issues. These can be roughly
categorized into bugs in NaCl’s support infrastructure (unhandled exceptions, buffer overflow vulnerabilities,
and a few “type confusion” attacks
that exploit the ability to pass one type
where another was expected), and obscure instruction sequences that the
static verifier missed (for example,
the verifier missed a class of “prefix”
bits on jump instructions that change
their behavior). One vulnerability relied on NaCl’s support for memory-mapping to unmap and remap a code
segment, allowing unverified code to
be executed. Clever attacks, but all
straightforward to remediate.
In summary, the NaCl design as detailed in the following paper is pragmatic and attractive, with its known
implementation flaws no worse than
what we might see in any fledgling operating system’s security boundaries.
The NaCl codebase is small and simple enough that these sorts of bugs
can and will be fixed if and when NaCl
leaves the lab and gains market share
in the field.
Dan Wallach (dwallach@cs.rice.edu) is an associate
professor in the Department of Computer Science at
Rice University in Houston, TX.