Hand Gesture Recognition
41 papers with code • 18 benchmarks • 14 datasets
Hand gesture recognition (HGR) is a subarea of Computer Vision focused on classifying hand gestures: dynamic gestures from video, or static gestures from a single image. Static gestures are also commonly called poses. HGR can also be performed on point cloud or hand-joint (skeleton) data.
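The static, skeleton-based case can be illustrated with a minimal sketch: given 21 hand-joint coordinates (the layout used by common hand-tracking models), normalize them for translation and scale, then match against stored pose templates. The nearest-template rule and the synthetic templates here are illustrative assumptions, not any particular paper's method.

```python
import numpy as np

def normalize(joints):
    """Translate to the wrist (joint 0) and scale to unit size,
    so poses are comparable across hand positions and sizes."""
    j = joints - joints[0]
    scale = np.linalg.norm(j, axis=1).max()
    return j / scale if scale > 0 else j

def classify_pose(joints, templates):
    """Nearest-template classification of a static hand pose (21x3 joints)."""
    x = normalize(joints)
    dists = {name: np.linalg.norm(x - normalize(t))
             for name, t in templates.items()}
    return min(dists, key=dists.get)

# Toy usage with synthetic templates (real templates would come from data).
rng = np.random.default_rng(0)
templates = {"open": rng.normal(size=(21, 3)),
             "fist": rng.normal(size=(21, 3))}
query = templates["open"] + 0.01 * rng.normal(size=(21, 3))
label = classify_pose(query, templates)
```

In practice the template matcher is replaced by a learned classifier, but the normalization step is a standard preprocessing choice for skeleton input.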
Most implemented papers
Real-time Hand Gesture Detection and Classification Using Convolutional Neural Networks
We evaluate our architecture on two publicly available datasets - EgoGesture and NVIDIA Dynamic Hand Gesture Datasets - which require temporal detection and classification of the performed hand gestures.
Make Skeleton-based Action Recognition Model Smaller, Faster and Better
Although skeleton-based action recognition has achieved great success in recent years, many existing methods suffer from large model sizes and slow execution speed.
HGR-Net: A Fusion Network for Hand Gesture Segmentation and Recognition
We propose a two-stage convolutional neural network (CNN) architecture for robust recognition of hand gestures, called HGR-Net, where the first stage performs accurate semantic segmentation to determine hand regions, and the second stage identifies the gesture.
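The two-stage idea (segment the hand region first, then classify only that region) can be sketched with toy stand-ins for both CNN stages. The thresholding segmenter and the pixel-fraction "classifier" below are hypothetical placeholders, assumed here only to show how the mask from stage one gates the input of stage two.

```python
import numpy as np

def segment_hand(image):
    """Stage 1 stand-in: produce a binary hand mask.
    (Simple intensity thresholding replaces the segmentation CNN.)"""
    return (image > 0.5).astype(np.float32)

def classify_gesture(image, mask, n_classes=3):
    """Stage 2 stand-in: classify using only pixels inside the hand mask.
    A real system would run a CNN on the masked image or crop."""
    region = image * mask
    frac = region.sum() / max(mask.sum(), 1.0)  # toy feature
    return int(frac * n_classes) % n_classes

img = np.zeros((8, 8), dtype=np.float32)
img[2:6, 2:6] = 0.9            # bright square standing in for a hand
mask = segment_hand(img)
label = classify_gesture(img, mask)
```

The design point is the interface between the stages: the classifier never sees background pixels, which is what makes the recognition robust to cluttered scenes.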
Word-level Deep Sign Language Recognition from Video: A New Large-scale Dataset and Methods Comparison
Based on this new large-scale dataset, we are able to experiment with several deep learning methods for word-level sign recognition and evaluate their performances in large scale scenarios.
Human Computer Interaction Using Marker Based Hand Gesture Recognition
Human-Computer Interaction (HCI) has been redefined in this era.
First-Person Hand Action Benchmark with RGB-D Videos and 3D Hand Pose Annotations
Our dataset and experiments can be of interest to communities of 3D hand pose estimation, 6D object pose, and robotics as well as action recognition.
Deep Fisher Discriminant Learning for Mobile Hand Gesture Recognition
Gesture recognition is a challenging problem in the field of biometrics.
A Study of Convolutional Architectures for Handshape Recognition applied to Sign Language
Using the LSA16 and RWTH-PHOENIX-Weather handshape datasets, we performed experiments with the LeNet, VGG16, ResNet-34 and All Convolutional architectures, as well as Inception with normal training and via transfer learning, and compared them to the state of the art in these datasets.
Motion Fused Frames: Data Level Fusion Strategy for Hand Gesture Recognition
Acquiring spatio-temporal states of an action is the most crucial step for action classification.
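Data-level fusion of motion into the input can be sketched as appending motion channels to each frame before the network sees it. The paper fuses optical-flow frames; the sketch below substitutes simple frame differences as the motion signal, which is an assumption made only to keep the example self-contained.

```python
import numpy as np

def motion_fused_frames(frames, n_motion=1):
    """Append n_motion frame-difference channels to each RGB frame
    (data-level fusion; real Motion Fused Frames use optical flow)."""
    T, H, W, C = frames.shape
    fused = []
    for t in range(T):
        diffs = [np.abs(frames[t] - frames[max(t - k, 0)])
                   .mean(axis=-1, keepdims=True)   # 1-channel motion map
                 for k in range(1, n_motion + 1)]
        fused.append(np.concatenate([frames[t]] + diffs, axis=-1))
    return np.stack(fused)

clip = np.random.default_rng(1).random((4, 8, 8, 3)).astype(np.float32)
fused = motion_fused_frames(clip)   # shape (4, 8, 8, 4)
```

Because the fusion happens in the data, any off-the-shelf 2D CNN can consume the extra channels without architectural changes, which is the appeal of this strategy.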
Deep Learning for Hand Gesture Recognition on Skeletal Data
In this paper, we introduce a new 3D hand gesture recognition approach based on a deep learning model.