Gesture Recognition
118 papers with code • 13 benchmarks • 14 datasets
Gesture Recognition is an active field of research with applications such as automatic sign language recognition, human–robot interaction, and new ways of controlling video games.
Source: Gesture Recognition in RGB Videos Using Human Body Keypoints and Dynamic Time Warping
Most implemented papers
Deep Learning for Electromyographic Hand Gesture Signal Classification Using Transfer Learning
Consequently, this paper proposes applying transfer learning on aggregated data from multiple users, while leveraging the capacity of deep learning algorithms to learn discriminant features from large datasets.
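The core idea, pretraining on aggregated multi-user data and then adapting to a new user, can be sketched in miniature. The snippet below is a hypothetical illustration (synthetic features, a plain logistic-regression "network"), not the paper's architecture: transfer amounts to seeding the target-user training with weights learned on the aggregated data.

```python
import numpy as np

def train_logreg(X, y, w=None, lr=0.1, epochs=200):
    """Binary logistic regression via gradient descent. Passing a pretrained
    weight vector `w` seeds the optimization -- the simplest form of transfer."""
    if w is None:
        w = np.zeros(X.shape[1])
    for _ in range(epochs):
        p = 1.0 / (1.0 + np.exp(-X @ w))    # predicted probabilities
        w -= lr * X.T @ (p - y) / len(y)    # gradient step on the log-loss
    return w

rng = np.random.default_rng(0)
# "Aggregated" source data from many users (hypothetical synthetic features).
Xs = rng.normal(size=(500, 8))
ys = (Xs[:, 0] + Xs[:, 1] > 0).astype(float)
# A small recording from a new target user governed by the same rule.
Xt = rng.normal(size=(20, 8))
yt = (Xt[:, 0] + Xt[:, 1] > 0).astype(float)

w_src = train_logreg(Xs, ys)                  # pretrain on aggregated data
w_ft = train_logreg(Xt, yt, w=w_src.copy())   # fine-tune on the new user
acc = float(np.mean(((Xt @ w_ft) > 0) == yt.astype(bool)))
```

Starting from `w_src` lets the small target set refine, rather than relearn, the decision boundary.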
Recognizing Surgical Activities with Recurrent Neural Networks
In contrast, we work on recognizing both gestures and longer, higher-level activities, or maneuvers, and we model the mapping from kinematics to gestures/maneuvers with recurrent neural networks.
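Mapping a kinematics sequence to a gesture label with a recurrent network can be illustrated with a minimal Elman-style forward pass. This is a hand-rolled sketch with hypothetical weight matrices, not the authors' model: the recurrent state summarizes the sequence, and the final state is classified.

```python
import numpy as np

def rnn_classify(seq, Wxh, Whh, Who):
    """Elman-style RNN forward pass: consume a kinematics sequence frame by
    frame, then classify the final hidden state. All weights are hypothetical."""
    h = np.zeros(Whh.shape[0])
    for x in seq:
        h = np.tanh(Wxh @ x + Whh @ h)  # recurrent state update
    logits = Who @ h                    # one score per gesture class
    return int(np.argmax(logits))

rng = np.random.default_rng(1)
Wxh = rng.normal(size=(4, 3))   # input (3 kinematic channels) -> hidden
Whh = rng.normal(size=(4, 4))   # hidden -> hidden
Who = rng.normal(size=(5, 4))   # hidden -> 5 gesture classes
seq = rng.normal(size=(10, 3))  # a 10-frame kinematics sequence
label = rnn_classify(seq, Wxh, Whh, Who)
```

In practice the weights are trained end-to-end; the point here is only the shape of the computation.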
Cloud Dictionary: Sparse Coding and Modeling for Point Clouds
With the development of range sensors such as LIDAR and time-of-flight cameras, 3D point cloud scans have become ubiquitous in computer vision applications, the most prominent ones being gesture recognition and autonomous driving.
Using Deep Convolutional Networks for Gesture Recognition in American Sign Language
In the realm of multimodal communication, sign language is, and continues to be, one of the most understudied areas.
A Study of Vision based Human Motion Recognition and Analysis
Vision based human motion recognition has fascinated many researchers due to its critical challenges and a variety of applications.
Times series averaging and denoising from a probabilistic perspective on time-elastic kernels
In the light of regularized dynamic time warping kernels, this paper re-considers the concept of time elastic centroid for a set of time series.
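Since dynamic time warping underlies both this paper and the keypoint-based recognition approach cited above, a bare-bones DTW distance is worth sketching. This is the textbook dynamic-programming formulation, without the windowing, normalization, or kernel regularization that production libraries and the paper itself add.

```python
def dtw_distance(a, b):
    """Dynamic time warping distance between two 1-D sequences:
    fill the full alignment-cost matrix and read off the corner."""
    n, m = len(a), len(b)
    INF = float("inf")
    # D[i][j] = cost of the best alignment of a[:i] with b[:j]
    D = [[INF] * (m + 1) for _ in range(n + 1)]
    D[0][0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(a[i - 1] - b[j - 1])
            D[i][j] = cost + min(D[i - 1][j],      # step in a only
                                 D[i][j - 1],      # step in b only
                                 D[i - 1][j - 1])  # step in both
    return D[n][m]
```

Unlike Euclidean distance, DTW tolerates local time shifts, e.g. `dtw_distance([0, 0, 1], [0, 1])` is 0 because the repeated `0` can be absorbed by the warping path.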
Intel RealSense Stereoscopic Depth Cameras
We present a comprehensive overview of the stereoscopic Intel RealSense RGBD imaging systems.
HGR-Net: A Fusion Network for Hand Gesture Segmentation and Recognition
We propose a two-stage convolutional neural network (CNN) architecture for robust recognition of hand gestures, called HGR-Net, where the first stage performs accurate semantic segmentation to determine hand regions, and the second stage identifies the gesture.
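The two-stage structure (segment the hand region, then classify the gesture) can be shown schematically. The functions below are toy stand-ins for HGR-Net's two CNN stages, with made-up thresholds and labels, purely to show how the stages compose.

```python
import numpy as np

def segment_hand(frame):
    """Stage 1 stand-in for the segmentation CNN: return a binary hand mask.
    Here a simple intensity threshold; HGR-Net uses semantic segmentation."""
    return frame > 0.5

def classify_gesture(frame, mask):
    """Stage 2 stand-in for the recognition CNN: classify the masked region.
    The labels and mean-intensity rule are hypothetical."""
    region = frame[mask]
    if region.size == 0:
        return "no_hand"
    return "open_palm" if region.mean() > 0.75 else "fist"

frame = np.zeros((8, 8))
frame[2:6, 2:6] = 0.9           # a bright 4x4 "hand" patch
mask = segment_hand(frame)      # stage 1: where is the hand?
gesture = classify_gesture(frame, mask)  # stage 2: which gesture?
```

Restricting stage 2 to the segmented region is what makes the pipeline robust to cluttered backgrounds.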
Word-level Deep Sign Language Recognition from Video: A New Large-scale Dataset and Methods Comparison
Based on this new large-scale dataset, we are able to experiment with several deep learning methods for word-level sign recognition and evaluate their performances in large scale scenarios.
Recognizing Families In the Wild: White Paper for the 4th Edition Data Challenge
Recognizing Families In the Wild (RFIW): an annual large-scale, multi-track automatic kinship recognition evaluation that supports various visual kin-based problems on scales much higher than ever before.