For too long, our approach to information-protection policy has been to seek ways to
prevent information from “escaping”
beyond appropriate boundaries, then
wring our hands when it inevitably does.
This hide-it-or-lose-it perspective dominates technical and public-policy approaches to fundamental social questions of online privacy,
copyright, and surveillance. Yet it is increasingly inadequate for a connected world where information is easily copied and aggregated, and where automated correlations and inferences across multiple databases uncover information even when it is not revealed explicitly. As
an alternative, accountability must become a primary
means through which society addresses appropriate
use. Information accountability means that the use of information should be transparent, so that it is possible to determine whether a particular use is appropriate under a given set of rules, and that the system enables individuals and institutions to be held accountable for misuse.
Transparency and accountability make bad acts visible to all concerned. However, visibility alone does
not guarantee compliance. Then again, the vast
majority of legal and social rules that form the fabric
of our societies are not enforced perfectly or automatically, yet somehow most of us still manage to follow
most of them most of the time. We do so because
social systems built up over thousands of years
encourage us, often making compliance easier than
violation. For those rare cases where rules are broken,
we are all aware that we may be held accountable
through a process that looks back through the records
of our actions and assesses them against the rules.
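This kind of retrospective, rule-based checking is simple to make concrete. The following Python sketch (all actors, data categories, and rules are hypothetical, invented for illustration) logs uses of information as they occur and later audits the log against a set of appropriateness rules:

from dataclasses import dataclass

@dataclass
class UsageEvent:
    actor: str    # who used the information
    data: str     # what information was used
    purpose: str  # the claimed purpose of the use

# Each rule maps a logged event to True (appropriate use) or False (misuse).
RULES = {
    "health records only for treatment":
        lambda e: e.data != "health-record" or e.purpose == "treatment",
}

def audit(log):
    # Look back through the record of actions and assess each against the rules.
    for event in log:
        for name, is_appropriate in RULES.items():
            if not is_appropriate(event):
                yield event, name

log = [
    UsageEvent("clinic", "health-record", "treatment"),
    UsageEvent("insurer", "health-record", "marketing"),
]
for event, rule in audit(log):
    print(f"violation of '{rule}': {event}")

Nothing in the sketch blocks the insurer's use up front; the leverage comes entirely from the reliable record and the after-the-fact assessment.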
Personal privacy, copyright protection, and government surveillance are among the more intractable
policy challenges in our information society. In each
of these policy areas, excessive reliance on secrecy and
up-front control over information has yielded policies
that fail to meet social needs, as well as technologies
that stifle information flow without actually resolving
the problems for which they were designed.
Information privacy rights aim to safeguard individual autonomy against the power that institutions
or individuals gain over others through the use of personal information.1 Sensitive, and possibly inaccurate,
information may be used against people in financial,
political, employment, and health-care settings. In
democratic societies, citizens’ behavior is unduly
restrained if they fear they are being watched at every
turn. They may deliberately avoid reading controversial material or feel inhibited from associating with certain communities and ideas for fear of adverse consequences.

1 There are numerous definitions of privacy. Our chief interest here is understanding privacy rights as they relate to the collection and use of personal information, as opposed to other privacy protections that seek to preserve control over, say, one’s physical integrity.
Protecting privacy is more challenging than ever
due to the proliferation of personal information on
the Web and the increasing analytical power available
to large institutions (and to everyone else) through
Web search engines and other facilities.2 Access control and collection limits over a single instance of personal data are insufficient to guarantee the protection of privacy when the same information is publicly available elsewhere on the Web, or when private details can be inferred with high accuracy from other information that is itself public [8, 10]. Worse, many privacy protections (such as lengthy online privacy-policy statements in health care and financial services) are mere fig leaves over the increasing exposure of our social and commercial interactions. In the case of publicly available personal information, people often make the data available intentionally, not by accident [9]. They may not intend for it to be used for every conceivable purpose but are willing for it to be public nonetheless.
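To see how such inference works, consider this toy Python sketch (both datasets and all records are fabricated for illustration). Neither dataset alone ties a name to a diagnosis, yet a join on shared quasi-identifiers recovers the link:

public_directory = [  # e.g., a voter roll: names, but no health data
    {"name": "A. Smith", "zip": "02139", "dob": "1954-07-31", "sex": "F"},
    {"name": "B. Jones", "zip": "02139", "dob": "1962-01-02", "sex": "M"},
]

deidentified_records = [  # e.g., "anonymized" medical data: no names
    {"zip": "02139", "dob": "1954-07-31", "sex": "F", "diagnosis": "asthma"},
]

QUASI_IDENTIFIERS = ("zip", "dob", "sex")

def link(directory, records):
    # Join the two individually "harmless" datasets on their shared fields.
    for person in directory:
        for record in records:
            if all(person[k] == record[k] for k in QUASI_IDENTIFIERS):
                yield person["name"], record["diagnosis"]

for name, diagnosis in link(public_directory, deidentified_records):
    print(f"{name} can be inferred to have: {diagnosis}")

Limiting collection of the medical record alone would not have prevented this; the private detail emerges from the combination of two individually innocuous sources.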
Even technological tools that help individuals
make informed choices about data-collection practices are no longer sufficient to protect privacy in the
age of the Web. As a case in point, the growth of e-commerce over the second half of the 1990s sparked
concern among Web users worldwide about their personal privacy and led businesses to emphasize Website privacy policies and infrastructure (such as the
World Wide Web Consortium’s Platform for Privacy
Preferences, or P3P, www.w3.org/P3P/). A fully
implemented P3P environment gives Web users the
ability to make privacy choices about every single
request by business organizations and government
agencies to collect information about them. However,
the number, frequency, and specificity of these
choices would be overwhelming, especially if they
were to cover all possible future uses by the data collector and by third parties. Individuals should not
have to agree in advance to complex policies with
unpredictable outcomes. Moreover, they should be
confident that there will be redress if they are harmed
by the improper use of the information they provide.
Otherwise, individuals have little reason to attend to privacy choices at all.
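The scale of that burden is easy to see in miniature. The following Python sketch is schematic only (it uses invented category and purpose names, not P3P's actual vocabulary or rule syntax), but it shows the kind of matching a fully implemented P3P environment implies for every collection request a user encounters:

from itertools import product

# Hypothetical user preferences: for each data category, the set of
# purposes the user considers acceptable.
preferences = {
    "contact-info": {"order-fulfillment"},
    "browsing-history": set(),  # never acceptable, for any purpose
    "purchase-history": {"order-fulfillment", "recommendations"},
}

def acceptable(request):
    # A request is acceptable only if every (category, purpose) pair
    # it implies is permitted by the user's preferences.
    return all(
        purpose in preferences.get(category, set())
        for category, purpose in product(request["categories"],
                                         request["purposes"])
    )

# One site's collection request, out of the thousands a user faces.
request = {
    "categories": ["contact-info", "browsing-history"],
    "purposes": ["order-fulfillment", "marketing"],
}
print(acceptable(request))  # False: browsing-history and marketing both fail

Even this tiny example forces a decision for every category-purpose pair; covering all possible future uses and third-party transfers would multiply the choices beyond what any individual can reasonably manage.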
Consider the complexities of protecting privacy in
this scenario: Alice is the mother of a three-year-old
child with a severe chronic illness. She learns all she
can about it, buying books online, searching the Web,
2 See the authors’ technical report; dspace.mit.edu/bitstream/1721.1/37600/2MIT-CSAIL-TR-2007-034.pdf.