Participants are given a limited budget to
buy and sell stocks, each representing a different idea, and get a payoff
when ideas they “bought” are successfully implemented by the customer.
This gives users incentives to evaluate
ideas carefully from the customer’s
perspective. Deliberation maps can
also help here, by allowing crowds to
check and build upon each other’s
reasoning by creating chains of supporting and rebutting arguments.
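The incentive scheme behind such an idea market can be sketched in a few lines. This is an illustrative toy, not any particular platform's mechanism; the class name `IdeaMarket` and the fixed prices and payouts are invented for the example.

```python
# Toy sketch of an "idea stock market": participants spend a fixed
# budget on shares in ideas, and only shares in ideas the customer
# later implements pay off. Prices and payouts are illustrative.

class IdeaMarket:
    def __init__(self, budget=100):
        self.budget = budget
        self.holdings = {}  # idea -> shares held

    def buy(self, idea, shares, price_per_share=1):
        cost = shares * price_per_share
        if cost > self.budget:
            raise ValueError("not enough budget left")
        self.budget -= cost
        self.holdings[idea] = self.holdings.get(idea, 0) + shares

    def payoff(self, implemented_ideas, payout_per_share=3):
        # Shares in unimplemented ideas are worthless, so spending
        # the budget well requires judging ideas from the customer's
        # perspective, not one's own enthusiasm.
        return sum(shares * payout_per_share
                   for idea, shares in self.holdings.items()
                   if idea in implemented_ideas)
```

A participant with a budget of 10 who puts 4 shares on idea "A" and 6 on "B" earns 12 if only "A" is implemented; putting the whole budget on the winner would have earned 30, which is what rewards careful evaluation.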
˲ Creativity enhancement techniques (which have been developed, to date, almost exclusively for face-to-face team settings) can be used offline to feed open innovation engagements, as well as adapted to crowd-scale online settings.
˲ Interleaving ideation and evaluation across multiple open innovation
rounds. The crowd can be asked to create new ideas built upon those that survived the previous round of selection,
so idea generation is more likely to focus on what the customer wants.
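The round structure just described can be sketched as a simple loop. The function `run_rounds` and the toy generator and evaluator in the usage note are hypothetical stand-ins for what would, in practice, be crowd activity.

```python
# Sketch of interleaving ideation and evaluation: each round keeps
# the ideas the evaluator scores highest, then builds new ideas on
# those survivors, so generation converges on what the customer
# (here, the evaluate function) wants.

def run_rounds(seed_ideas, generate, evaluate, rounds=3, keep=2):
    survivors = list(seed_ideas)
    for _ in range(rounds):
        # Evaluation: select the best ideas from this round.
        survivors = sorted(survivors, key=evaluate, reverse=True)[:keep]
        # Ideation: the crowd builds a new variant on each survivor.
        survivors += [generate(idea) for idea in survivors]
    return sorted(survivors, key=evaluate, reverse=True)[:keep]
```

For instance, with a generator that extends an idea (`lambda s: s + "+"`) and an evaluator that prefers longer ones (`len`), each round's new ideas descend from the previous round's winners.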
Deeper computer support. Crowds
(of people) and clouds (computers)
have synergistic capabilities. Crowds
are able to create, understand and
evaluate ideas in ways that computers cannot match, but are best suited for performing relatively small
and quick tasks that require little
context. Computers, by contrast, excel at rapid analysis of large swaths
of data to get “the big picture” of
what is (and is not) happening in a
crowd. Combining these strengths
will require bridging the semantic
gap between the natural language
that crowds use, and the formal
languages that computers require.
Someday, this will be achieved by
advanced algorithms that allow
computers to deeply understand
natural language. But that achievement still seems far off. In the meantime, our goal, we believe,
must be to find ways that crowds
can do the minimum formalization
needed to enable significant computer support, for example by:
˲ Semantic tagging: Crowds can annotate natural language idea corpuses
with semantic cues (for example, idea
and argument boundaries, topic keywords). This can be a mixed initiative
process, wherein computers propose
possible tags that are corrected by
crowd members, and where machine
learning can be used to improve the
computer algorithms over time based
on this human feedback.
˲ Design tools can allow users to
express their ideas as semi-formal
models built on domain-specific
primitives, rather than just as natural language text. Design tools aimed
at the masses (for example, Google SketchUp) are already becoming ubiquitous, but have yet to be incorporated
into open innovation platforms.
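The mixed-initiative tagging loop described above can be sketched concretely. The class `MixedInitiativeTagger`, its keyword lexicon, the confidence threshold, and the weight-update rule are all assumptions of the example, not an existing system.

```python
# Sketch of mixed-initiative semantic tagging: the machine proposes
# topic tags from weighted keywords, crowd members accept or reject
# the proposals, and that feedback nudges the weights over time.
# Lexicon, threshold, and update rule are illustrative only.

class MixedInitiativeTagger:
    def __init__(self, lexicon):
        # lexicon: keyword -> (topic tag, confidence weight in [0, 1])
        self.lexicon = dict(lexicon)

    def propose(self, text, threshold=0.5):
        # Propose each tag whose keyword appears with enough confidence.
        words = text.lower().split()
        return {tag for w in words
                for tag, weight in [self.lexicon.get(w, (None, 0.0))]
                if tag is not None and weight >= threshold}

    def feedback(self, keyword, accepted, step=0.2):
        # A crowd correction moves the keyword's confidence up or down,
        # so the tagger improves from human feedback over time.
        tag, weight = self.lexicon[keyword]
        weight += step if accepted else -step
        self.lexicon[keyword] = (tag, min(1.0, max(0.0, weight)))
```

A real system would replace the hand-built lexicon with a learned model, but the loop is the same: the computer proposes, the crowd corrects, and the corrections update the computer.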
As the semantic gap between crowds
and clouds narrows, we can create
powerful new forms of computer support for open innovation, such as:
˲ Analysis tools that take advantage
of semi-formal idea representations to
help evaluate the strengths and weaknesses of contributed ideas;
˲ Semantic compression algorithms that remove duplicates and cluster related ideas, in order to compress idea corpuses to a manageable size;
˲ Visualization tools that summarize what the crowd has done so far,
so customers can determine where the gaps and promising areas are and use this
information to guide future crowd
contributions, for example via focused incentives.
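A semantic compression pass of the kind listed above might look like the following sketch, which merges near-duplicate ideas by word overlap. Jaccard similarity is a deliberately crude stand-in for a real semantic similarity measure, and the 0.5 threshold is arbitrary.

```python
# Sketch of semantic compression: ideas with high word overlap are
# merged into clusters, shrinking the corpus a customer must review.

def jaccard(a, b):
    # Word-overlap similarity between two idea texts, in [0, 1].
    wa, wb = set(a.lower().split()), set(b.lower().split())
    return len(wa & wb) / len(wa | wb)

def compress(ideas, threshold=0.5):
    clusters = []  # each cluster is a list of near-duplicate ideas
    for idea in ideas:
        for cluster in clusters:
            # Compare against the cluster's first idea as its exemplar.
            if jaccard(idea, cluster[0]) >= threshold:
                cluster.append(idea)
                break
        else:
            clusters.append([idea])
    return clusters
```

Three ideas, two of them near-duplicates, compress to two clusters; the customer then reviews one exemplar per cluster instead of every contribution.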
A Call to Arms
Open innovation systems, as we have
seen, have the potential to harness the
collective intelligence of the crowd
for problem solving in areas ranging
from business to government, from
science to education. This potential
is far from fully realized, however,
largely because of our inability to deal
effectively with the massive levels of
user contributions that these systems
can elicit. Advances in this area will
require contributions from many disciplines, including computer science,
cognitive science, social psychology,
computational linguistics, and economics. Will you join us in addressing
these important challenges?
Further Reading

Bailey, B.P. and Horvitz, E. What's your idea? A case study of a grassroots innovation pipeline within a large software company. In Proceedings of CHI 2010. ACM Press, NY, 2010.

Bason, C. Leading Public Sector Innovation: Co-creating for a Better Society. Policy Press, 2010.

Bjelland, O.M. and Chapman Wood, R. An inside view of IBM's innovation jam. MIT Sloan Management Review 50, 1 (2008).

Chesbrough, H., Vanhaverbeke, W., and West, J., Eds. Open Innovation: Researching a New Paradigm. Oxford University Press, Oxford, 2006.

Gulley, N. Patterns of innovation: A web-based MATLAB programming contest. Human Factors in Computing Systems (2001), 337–338.

Jouret, G. Inside Cisco's Search for the Next Big Idea. Harvard Business Review 87, 9 (2009), 43–45.

Lakhani, K.R. and Jeppesen, L.B. Getting unusual suspects to solve R&D puzzles. Harvard Business Review 85, 5 (2007).

von Hippel, E. Democratizing Innovation. MIT Press, 2005.

References

1. Klein, M. and Iandoli, L. Supporting collaborative deliberation using a large-scale argumentation system: The MIT Collaboratorium. Directions and Implications of Advanced Computing; Conference on Online Deliberation (DIAC-2008/OD2008). University of California, Berkeley, 2008.

2. Thompson, V. IDC MarketScape: Worldwide Innovation Management Solutions 2013 Vendor Analysis (2013); http://idcdocserv.com/240823_spigit.
Mark Klein (firstname.lastname@example.org) is a principal research
scientist at the MIT Center for Collective Intelligence,
an affiliate at the MIT Computer Science and AI Lab,
the New England Complex Systems Institute, and the
Dynamic and Distributed Information Systems Group at
the University of Zurich in Switzerland.
Gregorio Convertino (email@example.com) is
a senior user researcher at Informatica Corporation in
Redwood City, CA.
Copyright held by authors.