Hand Pose Estimation
87 papers with code • 10 benchmarks • 22 datasets
Hand pose estimation is the task of finding the joints of the hand from an image or set of video frames.
(Image credit: Pose-REN)
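The benchmarks listed above typically score models by the average Euclidean distance between predicted and ground-truth joint positions (mean per-joint position error, MPJPE). A minimal sketch of that metric, assuming the common 21-joint hand layout:

```python
import numpy as np

# 21-joint hand skeleton (a common convention; exact layout varies by dataset).
NUM_JOINTS = 21

def mpjpe(pred, gt):
    """Mean per-joint position error: average Euclidean distance
    between predicted and ground-truth 3D joints (same units as input)."""
    pred = np.asarray(pred, dtype=float)
    gt = np.asarray(gt, dtype=float)
    assert pred.shape == gt.shape == (NUM_JOINTS, 3)
    return float(np.linalg.norm(pred - gt, axis=1).mean())

# Toy example: a prediction offset by 2 mm along x at every joint.
gt = np.zeros((NUM_JOINTS, 3))
pred = gt.copy()
pred[:, 0] += 2.0
print(mpjpe(pred, gt))  # 2.0
```

Depth-based benchmarks usually report this in millimeters; RGB-based ones often report it after aligning the root joint or hand scale.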
Most implemented papers
Learning from Simulated and Unsupervised Images through Adversarial Training
With recent progress in graphics, it has become more tractable to train models on synthetic images, potentially avoiding the need for expensive annotations.
Learning to Estimate 3D Hand Pose from Single RGB Images
Low-cost consumer depth cameras and deep learning have enabled reasonable 3D hand pose estimation from single depth images.
V2V-PoseNet: Voxel-to-Voxel Prediction Network for Accurate 3D Hand and Human Pose Estimation from a Single Depth Map
To overcome these weaknesses, we first cast the 3D hand and human pose estimation problem from a single depth map into a voxel-to-voxel prediction that uses a 3D voxelized grid and estimates the per-voxel likelihood for each keypoint.
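The voxel-to-voxel formulation starts by quantizing the depth map's 3D points into an occupancy grid. A minimal sketch of that voxelization step (grid size and cube side length are illustrative assumptions, not V2V-PoseNet's exact settings):

```python
import numpy as np

def voxelize(points, grid_size=32, cube_len=300.0, center=None):
    """Quantize 3D points (N, 3) around `center` into a binary occupancy
    grid of shape (grid_size,)*3 covering a cube of side `cube_len` (e.g. mm)."""
    points = np.asarray(points, dtype=float)
    if center is None:
        center = points.mean(axis=0)
    # Map each coordinate from [center - cube_len/2, center + cube_len/2)
    # to an integer voxel index in [0, grid_size).
    idx = np.floor((points - center + cube_len / 2) / cube_len * grid_size).astype(int)
    grid = np.zeros((grid_size,) * 3, dtype=np.float32)
    inside = np.all((idx >= 0) & (idx < grid_size), axis=1)  # drop points outside the cube
    ix, iy, iz = idx[inside].T
    grid[ix, iy, iz] = 1.0
    return grid

# Toy example: two points fall into two distinct voxels.
pts = np.array([[0.0, 0.0, 0.0], [10.0, -5.0, 2.0]])
grid = voxelize(pts, grid_size=32, cube_len=300.0, center=np.zeros(3))
print(grid.sum())  # 2.0
```

A 3D CNN then consumes this grid and outputs one per-voxel likelihood volume per keypoint, avoiding the perspective distortion of treating the depth map as a 2D image.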
DeepPrior++: Improving Fast and Accurate 3D Hand Pose Estimation
DeepPrior is a simple approach based on Deep Learning that predicts the joint 3D locations of a hand given a depth map.
HOnnotate: A method for 3D Annotation of Hand and Object Poses
This dataset is currently made of 77,558 frames, 68 sequences, 10 persons, and 10 objects.
3DGazeNet: Generalizing Gaze Estimation with Weak-Supervision from Synthetic Views
To close the gap between image domains, we create a large-scale dataset of diverse faces with gaze pseudo-annotations, which we extract based on the 3D geometry of the scene, and design a multi-view supervision framework to balance their effect during training.
Learning Pose Specific Representations by Predicting Different Views
To exploit this observation, we train a model that -- given input from one view -- estimates a latent representation, which is trained to be predictive for the appearance of the object when captured from another viewpoint.
End-to-end Hand Mesh Recovery from a Monocular RGB Image
In this paper, we present a HAnd Mesh Recovery (HAMR) framework to tackle the problem of reconstructing the full 3D mesh of a human hand from a single RGB image.
3D Hand Shape and Pose Estimation from a Single RGB Image
This work addresses a novel and challenging problem of estimating the full 3D hand shape and pose from a single RGB image.
A2J: Anchor-to-Joint Regression Network for 3D Articulated Pose Estimation from a Single Depth Image
For the task of 3D hand and body pose estimation from a single depth image, a novel anchor-based approach termed Anchor-to-Joint regression network (A2J), with end-to-end learning ability, is proposed.
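In the anchor-based formulation, each of many anchor points densely placed on the image predicts an offset toward each joint plus a response score, and the joint estimate is the score-weighted average of those votes. A minimal sketch of that aggregation for one joint in 2D (names and shapes are illustrative, not A2J's exact interface):

```python
import numpy as np

def softmax(x, axis=0):
    # Numerically stable softmax over the given axis.
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def aggregate_joint(anchors, offsets, responses):
    """Estimate one joint as the response-weighted average of
    (anchor position + predicted offset) over all anchors.
    anchors: (A, 2) positions, offsets: (A, 2), responses: (A,) raw scores."""
    w = softmax(responses)                         # normalize anchor weights
    return (w[:, None] * (anchors + offsets)).sum(axis=0)

# Toy example: two anchors with equal scores vote on one joint.
anchors = np.array([[0.0, 0.0], [4.0, 0.0]])
offsets = np.array([[1.0, 1.0], [-3.0, 1.0]])
responses = np.array([0.0, 0.0])                   # equal weights after softmax
print(aggregate_joint(anchors, offsets, responses))  # [1. 1.]
```

Aggregating many weighted votes makes the estimate robust to individual anchor errors, which is the main appeal of anchor-based regression over a single direct prediction.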