In this paper we present an alternative view of hand modeling, namely a point-based hand model, and investigate 3D static hand gestures in detail. On this basis, we develop a device-independent, general-purpose logical hand device that supports comprehensive 3D gestural input in virtual environments. With our logical hand device, not only is ``point, reach, and grab'' interaction easy to implement, but American-Sign-Language-like static gestures can also be specified with little effort.
KEYWORDS: virtual environments, hand models, 3D gestures, logical hand device, American Sign Language