Nosey people still exist, but these days their options for snooping surreptitiously are so much greater. Curtain flickers need not even approach the window, so there are few cues as to who is monitoring your actions.
nicate amongst themselves and will be able to auto-organize depending on the context. Some pundits of what has been called “ambient intelligence” are very excited about this version of the IoT world. I am largely in agreement; this all sounds really exciting. My favorite, desired scenario for all this auto-organization calls for the development of sentient socks that can find each other. Yes indeed, I want a sock drawer that resembles Noah’s ark, with neatly assembled socks stacked two by two. Right now what I have is a lot of singletons wondering where their other half went.
I have been spinning this kind of simple, everyday fantasy for a while. Years ago, Les Nelson, Tomas Sokoler, and I designed a suite of objects called “Tools That Tell Tales.” One such tale-telling tool would be the loaned wheely bag that reports back to you to say it is having a nice time on vacation with your friend. Perhaps that wheely bag is a spime, but when we elaborated this design space of chattering tools, the term had not yet been coined. One thing to note in our scenario, however, was that the tools told you their tales only when you asked for them. We never tackled how on earth they would know when and whether or not to share their experiences spontaneously with us humans or with each other, should the situation so demand.
I realize there are fundamental concerns about the autonomy, politeness, and social decision-making of these semi-sentient, communicating things. I am not really sure I trust my socks to self-organize without disrupting the other inhabitants of the clothing drawer. And what if my confused and lonely socks get so distraught in their unsatisfactory search that they get into a fight with each other and with my other objects and collectively crash the operating system? As I think about whether I would or would not trust my semi-sentient socks, I realize that, for me, the cloud on the horizon of this dream world of sentient (or at least semi-sentient) objects is trust in all its forms.
Trust is a slippery concept. Judd Antin of the iSchool at UC Berkeley and I checked out the stats: The word has appeared in the titles of papers indexed by the ACM Digital Library more times between 2005 and 2007 (149 times) than in the previous seven years combined (1998–2004, 131 times). Research into trust is all about uncertainty and risk. Most of the reported research addresses trust in enterprises, especially in the context of e-commerce, trust as developed in mediated human-human communication contexts, or systems perspectives on trusted/untrusted networks and network security. In interface and interaction design, trust unpacks to the familiar concepts of reliability, predictability, credibility, and visibility/transparency.
I see at least three dimensions of uncertainty and risk for IoTs to address if they are to be deemed trustworthy by experiencers (these are not necessarily users, after all; we may just be experiencing these IoTs unknowingly, since the word “use” implies awareness).
First there’s system reliability, consistency, credibility, and transparency. As system designers, we know that people will not continue to use technologies that they cannot trust to do the job they are supposed to do on a regular and predictable basis. The problem is, once there has been a breach we could not have foreseen, distrust sets in. And distrust is much harder than trust to navigate. Distrust is about fear and self-protection; it is about not believing in the product, the tool. Once someone distrusts a system, it is very difficult to regain their confidence. Lack of reliability and consistency are