Figure 5. Our laptop prototype. Top: three PCBs are tiled together
and mounted on an acrylic plate, to give a total of 105 sensing
pixels. Holes are also cut in the white reflector shown on the
far left. Bottom left: an aperture is cut in the laptop lid to allow
the PCBs to be mounted behind the LCD. This provides sensing
across the center of the laptop screen. Bottom right: side views
of the prototype; note the display has been reversed on its
hinges in the style of a tablet PC.
Figure 6. The ThinSight tabletop hardware as viewed from
the side and behind. Thirty PCBs (in a 5 × 6 grid) are tiled, with
columns interconnected with ribbon cable and attached to
a hub board for aggregating data and inter-tile communication.
This provides a total of 1050 discrete sensing pixels across
the entire surface.
to use what is referred to as a “cold mirror.” Unfortunately,
these are made using a glass substrate, which makes them
expensive, rigid, and fragile, and we were unable to source a
cold mirror large enough to cover the entire tabletop display. We experimented with many alternative materials,
including tracing paper, acetate sheets coated in emulsion
paint, spray-on frosting, thin sheets of white polythene
and mylar. Most of these are unsuitable either because of
a lack of IR transparency or because the optosensors can
be seen through them to some extent. The solution we settled on was Radiant Light Film from 3M (part number CM500), which largely passes IR light while
reflecting visible light, avoiding the disadvantages of a true
cold mirror. This was combined with a grade “0”
neutral density filter, a visually opaque but IR-transparent
diffuser, to even out the distribution of rear illumination and
at the same time prevent the “floating” effect. Applying the
Radiant Light Film carefully is critical, since minor imperfections (e.g., wrinkles or bubbles) are highly visible to the
user; thus we laminated it onto a thin PET carrier. One
final modification to the LCD construction was to deploy
these films behind the light guide to further improve the
optical properties. The resulting LCD layer stack-up is
depicted in Figure 4 right.
Most LCD panels are not constructed to resist physical
pressure, and any distortion resulting from touch interactions typically causes internal IR reflection that appears as
“flare.” Placing the Radiant Light Film and neutral density
filter behind the light guide improves this situation, and
we also reinforced the ThinSight unit using several lengths
of extruded aluminum section running directly behind the panel.
4. ThinSight in Operation
4.1. Processing the raw sensor data
Each value read from an individual IR detector is defined
as an integer representing the intensity of incident light.
These sensor values are streamed to the PC via USB, where
the raw data undergoes several simple processing and filtering steps to generate an IR image that can be
used to detect objects near the surface. Once this image is
generated, established image processing techniques can be
applied to determine the coordinates of fingers, recognize hand gestures, and identify object shapes.
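As a concrete illustration of this last step, fingertip coordinates can be recovered from the normalized IR image by thresholding and connected-component analysis. The following Python sketch uses NumPy and SciPy; the function name, threshold, and minimum-area values are illustrative assumptions rather than parameters of the system described here.

```python
import numpy as np
from scipy import ndimage

def find_fingertips(ir_image, threshold=0.5, min_area=4):
    """Threshold a normalized IR image and return the centroid
    (x, y) of each connected bright region large enough to be a
    candidate fingertip. Parameter values are illustrative only."""
    mask = ir_image > threshold
    labels, num_regions = ndimage.label(mask)
    centroids = []
    for region in range(1, num_regions + 1):
        ys, xs = np.nonzero(labels == region)
        if xs.size >= min_area:  # discard speckle noise
            centroids.append((xs.mean(), ys.mean()))
    return centroids
```

In practice such a detector would run on each interpolated frame, with the resulting centroids fed to gesture recognition or tracked across frames.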
Variations between optosensors due to manufacturing
and assembly tolerances result in a range of different values
across the display, even when no objects are present on its
surface. To make the sensor image uniform and
the presence of additional incident light (reflected from
nearby objects) more apparent, we subtract a “background”
frame captured when no objects are present, and normalize
relative to the image generated when the display is covered
with a sheet of white reflective paper.
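This two-frame calibration can be sketched as follows; the function name and the epsilon guard against dead pixels are assumptions, but the arithmetic mirrors the background subtraction and white-reference normalization just described.

```python
import numpy as np

def normalize_frame(raw, background, white):
    """Per-pixel calibration of a raw sensor frame: subtract the
    'background' frame captured with no objects present, then scale
    by the response measured under white reflective paper. A small
    epsilon avoids division by zero on unresponsive pixels."""
    span = np.maximum(white - background, 1e-6)
    return np.clip((raw - background) / span, 0.0, 1.0)
```

The result is a frame in [0, 1] in which bright values indicate additional incident IR light, independent of per-sensor gain and offset.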
We use standard bicubic interpolation to scale up the
sensor image by a predefined factor (10 in our current implementation). For the larger tabletop implementation this
results in a 350 × 300 pixel image. Optionally, a Gaussian
filter can be applied for further smoothing, resulting in a
grayscale “depth” image as shown in Figure 7.
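A sketch of this upscaling step, using SciPy's cubic spline zoom as a stand-in for bicubic interpolation (the function name and default sigma are illustrative assumptions):

```python
import numpy as np
from scipy import ndimage

def upscale_sensor_image(sensor, factor=10, sigma=2.0):
    """Scale up a raw sensor image by 'factor' using cubic (order-3)
    spline interpolation, then optionally smooth with a Gaussian
    filter. For a 35 x 30 tabletop sensor array and factor 10 this
    yields a 350 x 300 grayscale image."""
    image = ndimage.zoom(sensor.astype(float), factor, order=3)
    if sigma > 0:
        image = ndimage.gaussian_filter(image, sigma)
    return image
```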
4.2. Seeing through the ThinSight display
The images we obtain from the prototype are quite rich, particularly given the density of the sensor array. Fingers and
hands in proximity to the screen are clearly identifiable.
Examples of images captured through the display are shown
in Figures 1, 7 and 8.