display surface, thereby supporting spatially multiplexed
bidirectional communications with multiple local devices
and reception of data from remote gesturing devices. Of
course, it is also possible to time multiplex communications
between different devices if a suitable addressing scheme is
used. We have not yet prototyped either of these multiple-device communications schemes.
4.4. Interacting with ThinSight
As shown earlier in this section, it is straightforward to sense
and locate multiple fingertips using ThinSight. In order to
do this we threshold the processed data to produce a binary
image. The connected components within this are isolated,
and the center of mass of each component is calculated
to generate representative X, Y coordinates of each finger.
A very simple homography can then be applied to map these
fingertip positions (which are relative to the sensor image)
to onscreen coordinates. Major and minor axis analysis or
more detailed shape analysis can be performed to determine orientation information. Robust fingertip tracking
algorithms or optical flow techniques [28] can be employed to
add stronger heuristics for recognizing gestures.
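The detection pipeline described above (threshold, find connected components, take each component's center of mass) can be sketched in plain Python. This is an illustrative implementation under our own assumptions, not the authors' code; names such as `find_fingertips` are hypothetical, and the homography step is omitted for brevity.

```python
# Sketch: locate fingertips in a low-resolution IR sensor image by
# thresholding, labeling connected components, and computing centroids.
# find_fingertips and the threshold value are hypothetical, for illustration.

def find_fingertips(image, threshold):
    """image: 2D list of processed sensor values; returns (x, y) centroids."""
    rows, cols = len(image), len(image[0])
    binary = [[image[r][c] > threshold for c in range(cols)] for r in range(rows)]
    seen = [[False] * cols for _ in range(rows)]
    centroids = []
    for r in range(rows):
        for c in range(cols):
            if binary[r][c] and not seen[r][c]:
                # Flood-fill one connected component (4-connectivity).
                stack, pixels = [(r, c)], []
                seen[r][c] = True
                while stack:
                    y, x = stack.pop()
                    pixels.append((x, y))
                    for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                        ny, nx = y + dy, x + dx
                        if (0 <= ny < rows and 0 <= nx < cols
                                and binary[ny][nx] and not seen[ny][nx]):
                            seen[ny][nx] = True
                            stack.append((ny, nx))
                # Center of mass gives a representative fingertip position
                # in sensor-image coordinates.
                cx = sum(p[0] for p in pixels) / len(pixels)
                cy = sum(p[1] for p in pixels) / len(pixels)
                centroids.append((cx, cy))
    return centroids
```

The resulting sensor-space coordinates would then be mapped to onscreen coordinates via the homography mentioned above.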
Using these established techniques, fingertips are
sensed to within a few millimeters, currently at 23 frames/s.
Both hover and touch can be detected, and could be disambiguated by defining appropriate thresholds. A user therefore need not apply any force to interact with the display.
However, it is also possible to estimate fingertip pressure by
calculating the increase in the area and intensity of the fingertip “blob” once touch has been detected.
Figure 1 shows two simple applications developed using
ThinSight. A simple photo application allows multiple
images to be translated, rotated, and scaled using established multifinger manipulation gestures. We use distance
and angle between touch points to compute scale factor
and rotation deltas. To demonstrate some of the capabilities of ThinSight beyond just multitouch, we have built an
example paint application that allows users to paint directly
on the surface using both fingertips and real paint brushes.
The latter works because ThinSight can detect the brushes’
white bristles which reflect IR. The paint application also
supports a more sophisticated scenario where an artist’s
palette is placed on the display surface. Although this is visually transparent, it has an IR reflective marker on the underside which allows it to be detected by ThinSight, whereupon
a range of paint colors are rendered underneath it. The user
can change color by “dipping” either a fingertip or a brush
into the appropriate well in the palette. We identify the presence of this object using a simple ellipse matching algorithm which distinguishes the larger palette from smaller
touch point “blobs” in the sensor image. Despite the limited resolution of ThinSight, it is possible to differentiate a
number of different objects using simple silhouette shape information.
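The two-finger manipulation math mentioned above (scale factor and rotation delta from the distance and angle between touch points) can be written in a few lines. This is an assumed formulation for illustration, not the authors' code.

```python
# Sketch: compute scale and rotation deltas for a two-finger gesture from
# the previous and current positions of the two tracked touch points.
import math

def pinch_deltas(p0_prev, p1_prev, p0_cur, p1_cur):
    """Each argument is an (x, y) touch point; returns (scale, rotation)."""
    d_prev = math.hypot(p1_prev[0] - p0_prev[0], p1_prev[1] - p0_prev[1])
    d_cur = math.hypot(p1_cur[0] - p0_cur[0], p1_cur[1] - p0_cur[1])
    a_prev = math.atan2(p1_prev[1] - p0_prev[1], p1_prev[0] - p0_prev[0])
    a_cur = math.atan2(p1_cur[1] - p0_cur[1], p1_cur[0] - p0_cur[0])
    scale = d_cur / d_prev                 # ratio of finger separations
    rotation = a_cur - a_prev              # radians; wrap if accumulating
    return scale, rotation
```

Applying the returned deltas about the midpoint of the two fingers gives the familiar photo-manipulation behavior.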
5. DISCUSSION AND FUTURE WORK
We believe that the prototype presented in this article is an
interesting proof-of-concept of a new approach to multi-touch and tangible sensing for thin displays. We have already
described some of its potential; here we discuss a number of
additional observations and ideas which came to light during the work.
5.1. Fidelity of sensing
The original aim of this project was simply to detect fingertips to enable multi-touch-based direct manipulation.
However, despite the low resolution of the raw sensor data,
we still detect quite sophisticated object images. Very small
objects do currently “disappear” on occasion when they are
midway between optosensors. However, we have a number of ideas for improving the fidelity further, both to support smaller objects and to make object and visual marker
identification more practical. An obvious solution is to
increase the density of the optosensors, or at least the density of IR detectors. Another idea is to measure the amount
of reflected light under different lighting conditions—for
example, simultaneously emitting light from neighboring
sensors is likely to cause enough reflection to detect smaller objects.
5.2. Frame rate
In informal trials of ThinSight for a direct manipulation
task, we found that the current frame rate was reasonably
acceptable to users. However, a higher frame rate would not only produce a more responsive UI, which will be important for some applications, but would also make temporal filtering more practical, thereby reducing noise and improving sub-pixel accuracy. It would also be possible to sample each
detector under a number of different illumination conditions as described above, which we believe would increase
fidelity of operation.
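One simple form of the temporal filtering mentioned above is a per-pixel exponential moving average; the paper does not specify a filter, so this is purely an assumed example. At a higher frame rate, more smoothing can be applied for the same real-time latency.

```python
# Sketch: exponential moving average over successive sensor frames to
# reduce per-pixel noise before blob detection. ema_filter is hypothetical.

def ema_filter(frames, alpha=0.3):
    """frames: iterable of 2D lists of sensor values; yields smoothed frames.
    Smaller alpha = heavier smoothing (and more lag at a given frame rate)."""
    state = None
    for frame in frames:
        if state is None:
            state = [row[:] for row in frame]  # initialize with first frame
        else:
            for r, row in enumerate(frame):
                for c, v in enumerate(row):
                    state[r][c] = alpha * v + (1 - alpha) * state[r][c]
        yield [row[:] for row in state]
```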
5.3. Robustness to lighting conditions
The retro-reflective nature of ThinSight's operation, combined with the use of background subtraction, seems to give reliable operation in a variety of lighting conditions, including an office environment with some ambient sunlight.
One common approach to mitigating any negative effects
of ambient light, which we could explore if necessary, is to
emit modulated IR and to ignore any nonmodulated offset
in the detected signal.
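The modulation idea can be illustrated with a simple synchronous-detection sketch: drive the IR emitters with a known on/off pattern and correlate the detector samples against it, so that any constant ambient-light offset averages to zero. This is an assumption about how the scheme might be realized, not a description of the prototype.

```python
# Sketch: correlate detector samples against a +1/-1 emitter reference
# pattern. A constant (nonmodulated) ambient offset cancels out.

def demodulate(samples, reference):
    """samples: detector readings; reference: +1/-1 emitter drive pattern."""
    assert len(samples) == len(reference)
    return sum(s * r for s, r in zip(samples, reference)) / len(samples)
```

For example, a reflected signal of amplitude 3 riding on an ambient offset of 50 recovers cleanly, while the offset alone demodulates to zero.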
5.4. Power consumption
The biggest contributor to power consumption in ThinSight
is emission of IR light; because the signal is attenuated in
both directions as it passes through the layers of the LCD
panel, a high intensity emission is required. For mobile
devices, where power consumption is an issue, we have
ideas for improvements. We believe it is possible to enhance
the IR transmission properties of an LCD panel by optimizing the materials used in its construction for this purpose—
something which is not currently done. In addition, it may
be possible to keep track of object and fingertip positions,
and limit the most frequent IR emissions to those areas. The
rest of the display would be scanned less frequently (e.g. at
2–3 frames/s) to detect new touch points.
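A scan scheduler implementing this idea might look as follows. The function name, window radius, and full-scan period are hypothetical choices for illustration, not details from the prototype.

```python
# Sketch: emit IR at full rate only in windows around tracked fingertips,
# and sweep the whole display periodically (e.g. every 10th frame) to
# detect new touch points. regions_to_scan is a hypothetical helper.

def regions_to_scan(frame_index, tracked_points, full_scan_every=10, radius=2):
    """Return "full" for a full-display scan, or a list of (x0, y0, x1, y1)
    windows around known fingertip positions otherwise."""
    if frame_index % full_scan_every == 0:
        return "full"  # periodic low-rate full scan catches new touches
    return [(x - radius, y - radius, x + radius, y + radius)
            for (x, y) in tracked_points]
```

At 30 frames/s with `full_scan_every=10`, the whole display would still be swept at 3 frames/s, matching the 2–3 frames/s figure suggested above.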
One of the main ways we feel we can improve on power
consumption and fidelity of sensing is to use a more