The button in touch-sensing input provides three states (nontouch, touch, and press). The additional state allows more precise control of the system. For example, the toolbox of a GUI application automatically pops up more tools when a user moves the cursor to a toolbar region while the finger merely touches (rather than presses) the button.
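To make the three-state model concrete, the following Python sketch (with hypothetical helper inputs, not an actual PreSense API) maps the two underlying signals, finger contact and switch closure, onto the three states:

    from enum import Enum

    class ButtonState(Enum):
        NONTOUCH = 0  # no finger on the button
        TOUCH = 1     # finger resting on the button, switch still open
        PRESS = 2     # switch mechanically closed

    def classify(finger_on_button: bool, switch_closed: bool) -> ButtonState:
        # A press implies touch, so test the switch first.
        if switch_closed:
            return ButtonState.PRESS
        if finger_on_button:
            return ButtonState.TOUCH
        return ButtonState.NONTOUCH

The intermediate TOUCH state is what enables preview behavior such as the toolbox example above: the application can react to TOUCH without committing to the action bound to PRESS.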
Pressure is another useful input parameter for organic interaction. We intuitively use and control pressure in natural communication (such as when shaking hands). With a simple pressure sensor (such as a force-sensitive resistor) embedded in a regular mouse or touchpad, the device easily senses finger pressure by measuring the sensor's resistance values.

Figure 3. PreSense 2D input device enhanced with pressure sensors. Users add pressure to control analog parameters (such as scaling) and specify "positive" and "negative" pressures by changing the size of the finger contact area on the touchpad surface. PreSense can be combined with tactile feedback to emulate a discrete button press with a "click" sensation.

PreSense [8] is a touch- and pressure-sensing input device that uses finger pressure, as well as finger position (see Figure 3). It consists of a capacitive touchpad, a force-sensitive resistor for pressure sensing, and an actuator for tactile feedback. It recognizes finger contact by measuring the capacitive change on the touchpad surface. By combining pressure sensing and tactile feedback, it emulates a variety of buttons (such as one-level and two-level) by setting thresholds on the pressure parameter. For example, a user can "soft press" a target to select it and "hard press" it to display a pop-up menu.
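A minimal sketch of this thresholding scheme follows; the threshold values, units, and event names are illustrative assumptions, not PreSense's actual parameters:

    # Illustrative two-level button emulation. Thresholds are in raw
    # sensor units and would be calibrated per device and per user.
    SOFT_THRESHOLD = 150
    HARD_THRESHOLD = 400

    def pressure_event(pressure: int) -> str:
        # Quantize analog pressure into discrete button levels.
        if pressure >= HARD_THRESHOLD:
            return "hard_press"   # e.g., display a pop-up menu
        if pressure >= SOFT_THRESHOLD:
            return "soft_press"   # e.g., select the target
        return "no_press"

In practice, a small hysteresis band around each threshold would prevent flickering between levels, and the tactile actuator can fire on each level crossing to produce the "click" sensation mentioned in the Figure 3 caption.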
Analog pressure sensing enables users to control continuous parameters (such as the scale of the displayed image). The finger contact area is used to distinguish between scaling directions (scale up and scale down): by slightly changing the position of the finger, one can control both zooming in and zooming out with a single finger (see Figure 3b).
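One plausible way to combine the two channels, sketched here under assumed sensor ranges (the split point and gain are made-up constants): the contact area selects the scaling direction, while analog pressure sets the scaling rate.

    def zoom_factor(contact_area: float, pressure: float,
                    area_split: float = 80.0, gain: float = 0.002) -> float:
        # A large contact area (flattened finger) is read as "positive"
        # pressure (zoom in); a small fingertip contact as "negative"
        # (zoom out), per the Figure 3 caption.
        direction = 1.0 if contact_area >= area_split else -1.0
        # Analog pressure controls how fast the image scales per update.
        return 1.0 + direction * gain * pressure

    # Per input frame: scale *= zoom_factor(area, pressure)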
Pressure is useful for explicit parameter control (such as scaling) while also offering the possibility of sensing the user's implicit or emotional state. When a user is, say, frustrated with the system, his or her mouse-button pressure might deviate from its normal level, and the system would be able to react to that frustration.
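The article only hints at how such implicit sensing might work; one naive approach, offered here purely as an assumption, is to track a per-user pressure baseline and flag presses that deviate strongly from it:

    class PressureBaseline:
        # Tracks a running pressure baseline and flags unusual presses.
        # A naive sketch; real affect detection would need far more
        # careful modeling and validation.
        def __init__(self, alpha: float = 0.01, tolerance: float = 2.0):
            self.alpha = alpha          # smoothing factor
            self.tolerance = tolerance  # allowed deviation (std-dev units)
            self.mean = None
            self.var = 0.0

        def update(self, pressure: float) -> bool:
            if self.mean is None:
                self.mean = pressure
                return False
            deviation = pressure - self.mean
            # Exponentially weighted estimates of mean and variance.
            self.mean += self.alpha * deviation
            self.var = (1 - self.alpha) * (self.var + self.alpha * deviation ** 2)
            std = self.var ** 0.5
            return std > 0 and abs(deviation) > self.tolerance * std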
Finger input with pressure, combined with tactile feedback, is the most
common form of natural
interaction. Like Shiatsu
(Japanese finger-pressure
therapy), users of PreSense
feel and control the performance of computer systems directly.
RESEARCH ISSUES
Because organic user interfaces represent such a new
and emerging research
field, many related
research challenges and
issues require further
study. In what follows, I
outline four of them:
Interaction techniques
for OUIs. GUIs have a
long history and incorporate a large number of interaction techniques. When the mouse was invented by
Douglas Engelbart at Stanford Research Institute in
1963, it was used only to point at on-screen objects.
Development of mouse-based interaction techniques
(such as pop-up menus and scrollbars) followed. The
current level of development of organic user interfaces
is the equivalent of where the mouse was when first
invented. For multi-touch interaction, only a simple
set of techniques (such as zooming) has been introduced, though many more should be possible; the
interaction techniques explored in [4] may be candidates.
Stone (tool) vs. skin. It is also interesting and worthwhile to consider the similarities and differences
between tangible UIs and organic UIs. Although these
two types of UIs overlap in many ways, the conceptual differences are clear. Tangible UI systems often
use multiple physical objects as tools for manipulation; each object is graspable so users are able to use
physical manipulation. Because these objects often
have a concrete meaning (called physical icons, or
“phicons”) in the application, many tangible systems
are domain-specific (tuned for a particular application). For organic UI systems, users directly interact
with possibly curved interactive surfaces (such as
walls, tables, and electronic paper) with no intermediate objects. Interactions are more generic and less
application-oriented. This situation may be compared
to real-world interaction. In the real world, we use
physical instruments (tools) to manipulate something
but prefer direct contact for human-to-human com-