2. PRIOR WORK
Collaboration has been well studied in contexts that are not
directly related to information visualization. The study of
how computer systems can enable collaboration is referred
to as computer-supported cooperative work, or CSCW. Because
collaboration occurs in a variety of situations, CSCW scholars often use a “time-space” matrix21 to outline the conceptual landscape. The time dimension represents whether or
not participants interact at the same time (synchronously
or asynchronously)—for example, instant messaging is a
largely synchronous communication medium, while e-mail
is asynchronous. The space dimension describes whether
users are collocated or geographically distributed.
Most work on collaborative visualization has been done
in the context of synchronous scenarios: users interacting at the same time to analyze scientific results or discuss
the state of a battlefield. Collocated collaboration usually
involves shared displays, including wall-sized, table-top, or
virtual reality displays (e.g., Dietz,14 General Dynamics16).
Systems supporting remote collaboration have primarily focused on synchronous interaction,1,4 such as shared virtual workspaces8 and augmented reality systems that enable multiple users to interact concurrently with visualized data.3,9 In addition, the availability of public displays
has prompted researchers to experiment with asynchronous, collocated visualization (same place, different time),
for example, in the form of ambient displays that share
activity information about collocated users.
In this article, we focus on remote asynchronous
collaboration—the kind of collaboration that is most common over the Web. One reason for our interest is that partitioning work across both time and space holds the potential
of greater scalability in group-oriented analysis. For example, one decision-making study found that asynchronous
collaboration resulted in higher-quality outcomes—broader
discussions, more complete reports, and longer solutions—
than face-to-face collaboration.2 However, as noted by Viégas and Wattenberg,25 little research attention has been
dedicated to asynchronous collaboration around interactive visualization. Instead, users often rely on static imagery when communicating about these interactive systems.
Images of the visualization are transferred as printouts or
screenshots, or included in word-processing or presentation documents.
A few commercial visualization systems introduced prior
to our work provide asynchronous collaboration features.
Online mapping systems (e.g., Google Maps) provide bookmarks (URLs) that users can send to others to share views.
The visualization company Spotfire provides DecisionSite
Posters, a Web-based system that allows a user to post an
interactive visualization view that other users can explore
and comment on. The Posters apply only to a subset of
Spotfire’s full functionality and do not allow graphical annotations, limiting their adoption.
One common feature of these systems is application
bookmarks: URLs or URL-like objects that point back into a
particular state of the application, for example, a location
and zoom level in the case of Google Maps. This pattern is
not surprising; for users to collaborate, they must be able to
share what they are seeing to establish a common ground.

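The application-bookmark pattern described above can be sketched as a round trip between a view state and a URL. This is an illustrative sketch only: the parameter names (`lat`, `lng`, `zoom`) and the base URL are assumptions for the example, not the actual URL scheme of Google Maps or any system discussed here.

```python
from urllib.parse import urlencode, urlparse, parse_qs

def make_bookmark(base_url, lat, lng, zoom):
    """Encode a (hypothetical) map view state as a bookmark URL."""
    query = urlencode({"lat": lat, "lng": lng, "zoom": zoom})
    return f"{base_url}?{query}"

def restore_view(bookmark_url):
    """Decode the view state back out of a bookmark URL."""
    qs = parse_qs(urlparse(bookmark_url).query)
    return {"lat": float(qs["lat"][0]),
            "lng": float(qs["lng"][0]),
            "zoom": int(qs["zoom"][0])}

# Sharing a view amounts to sending the URL; opening it restores the view.
url = make_bookmark("https://maps.example.com/view", 37.77, -122.42, 12)
state = restore_view(url)
```

Because the entire view state lives in the URL, the bookmark can travel through any channel that carries text, which is what makes the pattern so pervasive.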
One of the primary uses of bookmarks is in discussion
forums surrounding a visualization. Some systems use
what we term independent discussion, where conversations
are decoupled from the visualization. For example, Google
Earth provides threaded discussion forums with messages
that include bookmarks into the visualized globe. In such
systems there are unidirectional links from the discussion
to the visualization, but no way to discover related comments while navigating the visualization itself.
Another stream of related work comes from wholly
or partly visual annotation systems, such as the regional
annotations in sites such as Flickr.com and Wikimapia.org
and in Churchill et al.'s anchored conversations. These systems enable embedded discussion that places conversational markers directly within a visualization or document.
Discussion of a specific item may be accessed through a
linked annotation shown within the visualization. These
systems may be seen as the converse of independent discussions, allowing unidirectional links from an artifact to a discussion.

In this article, we extend the past work with a comprehensive design for asynchronous collaboration around interactive data visualizations, addressing issues of view sharing,
discussion, graphical annotation, and social navigation.
3. THE DESIGN OF SENSE.US
To explore the possibilities for asynchronous collaborative visualization, we designed and implemented sense.us,
a prototype Web application for social visual data analysis.
The site provides a suite of visualizations of United States
census data over the last 150 years (see Figures 1 and 2)
and was designed for use by a general audience. We built
sense.us to put our design hypotheses into a concrete form which we could then deploy and use to study collaborative analysis.

The primary interface for sense.us is shown in Figure
1. In the left panel is a Java applet containing a visualization. The right panel provides a discussion area, displaying commentary associated with the current visualization
view, and a graphical bookmark trail, providing access
to views bookmarked by the user. With a straightforward
bookmarking mechanism, sense.us supports collaboration with features described in detail below: doubly linked
discussions, graphical annotations, saved bookmark
trails, and social navigation via comment listings and user profiles.
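The doubly linked discussions mentioned above can be sketched as a comment store indexed by view bookmark: each comment records the view it was made in (comment-to-view link), and an inverted index lets the visualization surface the comments attached to whatever view is currently on screen (view-to-comment link). The class and method names here are illustrative assumptions, not sense.us's actual API.

```python
from collections import defaultdict

class DiscussionStore:
    """Minimal sketch of a doubly linked discussion index."""

    def __init__(self):
        # view bookmark -> list of comments made in that view
        self._by_view = defaultdict(list)

    def add_comment(self, view_bookmark, author, text):
        # The comment keeps a link back to its view...
        comment = {"view": view_bookmark, "author": author, "text": text}
        # ...and the index links the view forward to the comment.
        self._by_view[view_bookmark].append(comment)
        return comment

    def comments_for_view(self, view_bookmark):
        """Look up commentary while navigating the visualization itself."""
        return list(self._by_view[view_bookmark])

store = DiscussionStore()
c = store.add_comment("?chart=jobs&filter=dancer", "alice", "Note the 1930s dip.")
found = store.comments_for_view("?chart=jobs&filter=dancer")
```

The bidirectional linkage is what distinguishes this from the independent-discussion systems described earlier, where a reader navigating the visualization has no way to discover related comments.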
3.1. View sharing
When collaborating around visualizations, participants
must be able to see the same visual environment in order to
ground12 each other's actions and comments. To this end,
the sense.us site provides a mechanism for bookmarking views. The system makes application bookmarking
transparent by tying it to conventional Web bookmarking. The browser’s location bar always displays a URL that
links to the current state of the visualization, defined by
the settings of filtering, navigation, and visual encoding