mobile robots. For a robotic arm, human users may use the "put it there" command while pointing to the object and then the place. Hand actions can be used to trigger manipulation operations (such as grasp and release), since a human hand can mimic the form of the robot gripper. All these aspects of robot interaction help satisfy the "intuitiveness" requirement. Third, people with physical handicaps are able to control robots through gestures when other channels of interaction are limited or impossible without special keyboards, teach pendants, or other robot controls, satisfying the "come as you are" requirement. Fourth, such an interface brings operability to beginners who find it difficult to command robots through sophisticated controls.
Hand-gesture control of robots faces several constraints specific to this category of interfaces, including "fast," "intuitive," "accuracy," "interaction space," and "reconfigurability." While most systems succeed to some extent in meeting the technical requirements ("accuracy"), the interaction aspects of these systems still pose many unsolved challenges.
Using stereo vision, Kawarazaki22 developed a cooperative work system in which a robotic manipulator and a human user collaborate through hand-gesture instructions; the system recognizes four static gestures, and when a user points at an object on a table with a forefinger, the robot must be able to detect the indicated object. Chen and Tseng10 described human-robot interaction for game playing in which a computer-vision algorithm recognizes three static gestures at multiple angles and scales with 95% accuracy, satisfying the "accuracy" requirement.
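The pointing interaction described for Kawarazaki's system (a user indicating a tabletop object with a forefinger) can be illustrated geometrically. Neither source publishes its detection algorithm; the following is only a minimal sketch, assuming the stereo-vision front end already supplies 3D positions for the fingertip and hand centroid, and that the table is a known horizontal plane. The function name, the tolerance, and the object dictionary are all illustrative, not from the cited systems.

```python
import numpy as np

def pointed_target(fingertip, hand_centroid, objects, table_z=0.0, tol=0.08):
    """Cast a ray from the hand centroid through the fingertip, intersect it
    with the table plane z = table_z, and return the name of the nearest
    known object within `tol` meters of the hit point (or None)."""
    fingertip = np.asarray(fingertip, dtype=float)
    centroid = np.asarray(hand_centroid, dtype=float)
    direction = fingertip - centroid          # pointing direction of the forefinger
    if abs(direction[2]) < 1e-9:              # ray parallel to the table plane
        return None
    t = (table_z - fingertip[2]) / direction[2]
    if t < 0:                                 # pointing away from the table
        return None
    hit = fingertip + t * direction           # intersection point on the table
    best, best_dist = None, tol
    for name, pos in objects.items():
        d = np.linalg.norm(hit[:2] - np.asarray(pos, dtype=float)[:2])
        if d < best_dist:
            best, best_dist = name, d
    return best
```

For example, a fingertip at (0.1, 0, 0.5) with the hand centroid at (0, 0, 0.6) defines a ray that meets the table at (0.6, 0, 0), so an object placed there would be selected.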
Using Sony's AIBO entertainment robot, Hasanuzzaman19 achieved interaction by combining eight hand gestures with face detection to identify two nodding gestures and which hand (left or right) is being used, allowing a larger lexicon than hand gestures alone.
Rogalla et al.41 developed a robotic-assistant interaction system using both gesture recognition and voice; it first tracks gestures, then combines voice and gesture recognition to evoke a command. Once the hand is segmented, six gestures are trained using the hand contour as the main feature of each gesture. Since the user and robot interact with objects on a table, the interaction space is large enough to include both user and objects. Rogalla et al.41 reported 95.9% recognition accuracy.
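Rogalla et al. do not detail their exact contour feature here, but the general idea of classifying a segmented hand by its contour can be sketched as follows: summarize the contour as a fixed-length, scale-normalized distance-to-centroid profile, then match new gestures to trained templates by nearest neighbor. This is an assumed, illustrative formulation, not the published method; the function names and the 32-bin length are arbitrary choices.

```python
import numpy as np

def contour_signature(contour, bins=32):
    """Represent a hand contour (N x 2 points) as a fixed-length,
    scale-normalized profile of distances from the contour centroid."""
    contour = np.asarray(contour, dtype=float)
    centroid = contour.mean(axis=0)
    dists = np.linalg.norm(contour - centroid, axis=1)
    # resample to a fixed length, then divide out scale
    idx = np.linspace(0, len(dists) - 1, bins).astype(int)
    sig = dists[idx]
    return sig / (sig.max() + 1e-9)

def classify(contour, templates):
    """Nearest-neighbor match against trained gesture templates,
    given as {gesture_name: signature}."""
    sig = contour_signature(contour)
    return min(templates, key=lambda g: np.linalg.norm(sig - templates[g]))
```

Because the profile is divided by its maximum, the same gesture seen nearer or farther from the camera yields the same signature, which is one simple way to meet the "accuracy" requirement across scales (rotation invariance would need an additional alignment step).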
Conclusion
Hand-gesture implementation involves significant usability challenges, including fast response time, high recognition accuracy, ease of learning, and user satisfaction, helping explain why few vision-based gesture systems have matured beyond prototypes or reached the commercial market for human-computer interfaces. Nevertheless, multi-touchscreens and other non-joystick, non-keyboard interaction methods have found a home in the game-console market, a commercial appeal suggesting that hand-gesture-based interactive applications could yet become important players in next-generation interface systems due to their ease of access and naturalness of control.