capacitively couples to the electrodes and drains the wave signal. As a result, the received signal amplitude weakens; measuring this effect makes it possible to detect the proximity of a conductive object (such as a human hand).
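To make this concrete, here is a minimal sketch of how the amplitude drop at each grid intersection could be turned into a proximity value. It assumes a baseline reading captured with no hand present; the function name and scaling are illustrative and not part of the actual SmartSkin implementation.

import numpy as np

# Convert raw received-signal amplitudes at each grid intersection into
# proximity values. The closer the hand, the more signal it drains, so
# proximity is taken as the normalized amplitude drop (illustrative only).
def proximity(raw_amplitude, baseline):
    """Return values in [0, 1]: 0 = no hand nearby, 1 = signal fully drained."""
    drop = baseline - raw_amplitude
    return np.clip(drop / baseline, 0.0, 1.0)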
Since the hand detection is done through
capacitive sensing, all the necessary sensing
elements can be completely embedded in
the surface. Unlike camera-based systems, the SmartSkin sensor is unaffected by changes in environmental lighting. Nor is the surface limited to being flat: the surface of any object, including furniture and robots, can potentially provide such interactivity, functioning like the skin of a living creature.
The system detects the change in capacitance when the user’s hand is 5cm–10cm from the table. To accurately determine the hand’s position (the peak of the potential field), SmartSkin applies bicubic interpolation to the sensed data and finds the peak of the interpolated values. The precision of the calculated position is much finer than the size of a grid cell (10cm); the current implementation achieves an accuracy of 1cm.
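As a rough illustration of this step, the sketch below interpolates a coarse grid of proximity values and locates the strongest peak. It assumes SciPy’s RectBivariateSpline as a stand-in for whatever bicubic routine the real implementation uses, a 10cm grid pitch, and a NumPy array of readings; none of these details come from the SmartSkin code itself.

import numpy as np
from scipy.interpolate import RectBivariateSpline

GRID_PITCH_CM = 10.0  # assumed spacing between grid intersections

def locate_peak(readings, upsample=20):
    """Return (x_cm, y_cm) of the strongest proximity peak on the surface."""
    rows, cols = readings.shape
    y = np.arange(rows) * GRID_PITCH_CM
    x = np.arange(cols) * GRID_PITCH_CM
    # kx=ky=3 gives bicubic interpolation across the coarse grid.
    spline = RectBivariateSpline(y, x, readings, kx=3, ky=3)
    yf = np.linspace(y[0], y[-1], rows * upsample)
    xf = np.linspace(x[0], x[-1], cols * upsample)
    field = spline(yf, xf)
    iy, ix = np.unravel_index(np.argmax(field), field.shape)
    return xf[ix], yf[iy]

Because the spline is evaluated on a grid much finer than the electrodes, the reported peak can land well inside a cell, which is how sub-cell accuracy from a 10cm grid becomes plausible.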
SmartSkin’s sensor configuration also enables
shape-based manipulation that does not explicitly use
the hand’s 2D position. A potential field created by
sensor inputs is instead used to move objects. As the
hand approaches the surface of the table, each intersection of the sensor grid measures the capacitance
between itself and the hand. This field helps define
various rules of object manipulation. For example, an object that slides toward lower-potential areas is thereby repelled from the hand, and the direction and speed of its motion can be controlled by changing the hand’s position around the object.
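Here is a hedged sketch of one such rule: treat the interpolated proximity field as a potential that is highest near the hand, and let each object take a small step downhill. The step size, finite-difference spacing, and the idea of wrapping the spline from the previous sketch are assumptions made purely for illustration.

def step_object(pos, field_fn, eps=0.5, gain=2.0):
    """Move an object one step toward lower potential (away from the hand).

    pos      -- (x_cm, y_cm) current object position
    field_fn -- callable (x, y) -> potential, e.g. the interpolated field
    """
    x, y = pos
    # Central differences approximate the local gradient of the field.
    gx = (field_fn(x + eps, y) - field_fn(x - eps, y)) / (2 * eps)
    gy = (field_fn(x, y + eps) - field_fn(x, y - eps)) / (2 * eps)
    # Descending the potential repels the object from the hand; steeper
    # slopes (hand closer or more to one side) move it faster.
    return (x - gain * gx, y - gain * gy)

# One way to adapt the spline from the previous sketch:
# field_fn = lambda x, y: float(spline.ev(y, x))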
In my lab’s tests, many SmartSkin users were able
to quickly learn to use the interface even though they
did not fully understand its underlying dynamics.
Many users used both hands or even their arms. For example, one can sweep the table surface with an arm to move a group of objects, or use two arms to trap and move a set of objects (see Figure 2b).
Using the same sensing principle with a denser grid antenna layout, SmartSkin determines the shape of a human hand (see Figure 2c and Figure 2d). The same peak-detection algorithm applies here as well; rather than tracking a single hand position, it tracks the positions of multiple fingertips.
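With a finer grid, fingertips appear as several distinct local maxima rather than one broad hand-sized peak, so the single-peak search above generalizes naturally. The following sketch uses scipy.ndimage’s maximum_filter; the neighborhood size and proximity threshold are arbitrary illustrative choices.

import numpy as np
from scipy.ndimage import maximum_filter

def find_fingertips(field, size=5, threshold=0.3):
    """Return (row, col) indices of local maxima above a proximity threshold."""
    # A point is a local maximum if it equals the maximum of its neighborhood.
    is_peak = field == maximum_filter(field, size=size)
    return [tuple(p) for p in np.argwhere(is_peak & (field > threshold))]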
An algorithm known as As-Rigid-As-Possible Shape Manipulation deforms objects with multiple control points [4]; Figure 2e shows its implementation in SmartSkin, where users manipulate graphical objects directly, using their fingertips as control points.
DIAMONDTOUCH
DiamondTouch [1], developed at Mitsubishi Electric Research Laboratories, is another interactive table system based on capacitive sensing. Its unique feature is
the ability to distinguish among multiple users. The
grid-shaped antenna embedded in the DiamondTouch table transmits a time-modulated signal. Users
sit in a special chair with a built-in signal-receiving
electrode. When a user’s finger touches the surface, a
capacitive connection from the grid antenna to the
signal-receiving chair is established through the user’s
body. The connection information is then used to
determine the user’s finger position on the surface, as well as the identity of the user manipulating it. Since the DiamondTouch table transmits a
modulated signal, multiple users are able to operate
the same surface simultaneously without the system
losing track of the identity of any user. DiamondTouch also supports semi-multi-touch operation; “semi” here means that it can detect multiple touch points, though with some ambiguity. For instance, when a user touches two points, (100, 200) and (300, 400), the system cannot distinguish them from another pair, (100, 400) and (300, 200). However, for simple multi-touch interactions (such as pinching, or controlling scale with the distance between two fingers), this ambiguity is not a problem.
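The reason for this ambiguity is easy to see if one assumes the table effectively reports only which antenna rows and columns a touching user couples to, rather than full 2D coordinates; this framing is a simplification for illustration, not a description of the actual DiamondTouch readout.

def projections(touches):
    """Collapse a set of (x, y) touch points into row/column activation sets."""
    xs = {x for x, _ in touches}
    ys = {y for _, y in touches}
    return xs, ys

# The two diagonal pairs from the text produce identical readings:
print(projections({(100, 200), (300, 400)}) ==
      projections({(100, 400), (300, 200)}))  # True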
PRESENSE: TOUCH- AND PRESSURE-SENSING
INTERACTION
Touch-sensing input [3] extends the mouse’s usability by adding a touch sensor. While the buttons of a normal mouse have only two states (not pressed and pressed),