3D Human Pose Tracking
4 papers with code • 1 benchmark • 4 datasets
Most implemented papers
Iterative Greedy Matching for 3D Human Pose Tracking from Multiple Views
In this work we propose an approach for estimating 3D human poses of multiple people from a set of calibrated cameras.
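The core idea can be sketched as greedily associating 2D detections across calibrated views by a geometric affinity. Below is a minimal two-view sketch where the affinity is the mean reprojection error of a DLT triangulation; all function names, the affinity choice, and the `max_cost` threshold are illustrative assumptions, not the paper's exact formulation.

```python
# Minimal sketch of greedy cross-view matching for multi-person 3D pose
# estimation. The affinity (mean reprojection error of a DLT triangulation)
# and all names/thresholds are illustrative assumptions.
import numpy as np

def triangulate_point(P1, P2, x1, x2):
    """Linear (DLT) triangulation of one joint from two calibrated views."""
    A = np.stack([
        x1[0] * P1[2] - P1[0],
        x1[1] * P1[2] - P1[1],
        x2[0] * P2[2] - P2[0],
        x2[1] * P2[2] - P2[1],
    ])
    _, _, Vt = np.linalg.svd(A)
    X = Vt[-1]
    return X[:3] / X[3]

def reprojection_error(P, X, x):
    proj = P @ np.append(X, 1.0)
    return np.linalg.norm(proj[:2] / proj[2] - x)

def pose_affinity(P1, P2, pose1, pose2):
    """Mean reprojection error over joints; lower means a better match."""
    errs = []
    for x1, x2 in zip(pose1, pose2):
        X = triangulate_point(P1, P2, x1, x2)
        errs.append(reprojection_error(P1, X, x1) + reprojection_error(P2, X, x2))
    return np.mean(errs)

def greedy_match(P1, P2, poses1, poses2, max_cost=20.0):
    """Greedily pair detections from two views by ascending affinity cost."""
    costs = [(pose_affinity(P1, P2, p1, p2), i, j)
             for i, p1 in enumerate(poses1)
             for j, p2 in enumerate(poses2)]
    used1, used2, matches = set(), set(), []
    for c, i, j in sorted(costs):
        if c < max_cost and i not in used1 and j not in used2:
            matches.append((i, j))
            used1.add(i)
            used2.add(j)
    return matches
```

The same greedy association generalizes to more views and to matching current detections against tracked 3D poses over time, which is where the "iterative" part of the method comes in.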
Part-Aware Measurement for Robust Multi-View Multi-Human 3D Pose Estimation and Tracking
This paper introduces an approach for multi-human 3D pose estimation and tracking based on calibrated multi-view input.
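One way to read "part-aware measurement" is that each body part's detection confidence modulates how strongly its measurement corrects the tracked state. The sketch below shows that idea with a per-joint scalar Kalman-style update; the part grouping, noise model, and all names are assumptions for illustration, not the paper's actual measurement design.

```python
# Illustrative sketch of a part-aware measurement update: low-confidence
# parts get a larger measurement variance and therefore a smaller gain.
# Part grouping and the noise model are assumptions, not the paper's method.
import numpy as np

PARTS = {"torso": [0, 1, 2], "left_arm": [3, 4], "right_arm": [5, 6]}

def part_aware_update(pred_joints, pred_var, meas_joints, joint_conf,
                      base_noise=0.05):
    """Fuse predicted and measured 3D joints, down-weighting unreliable parts."""
    fused = pred_joints.copy()
    for idxs in PARTS.values():
        for j in idxs:
            # Measurement variance grows as detection confidence drops.
            meas_var = base_noise / max(joint_conf[j], 1e-3)
            gain = pred_var / (pred_var + meas_var)  # scalar Kalman gain
            fused[j] = pred_joints[j] + gain * (meas_joints[j] - pred_joints[j])
    return fused
```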
Event-based Human Pose Tracking by Spiking Spatiotemporal Transformer
Motivated by the above-mentioned issues, we present in this paper a dedicated end-to-end sparse deep learning approach for event-based pose tracking: 1) to our knowledge, this is the first time that 3D human pose tracking is obtained from events only, eliminating the need to access any frame-based images as input; 2) our approach is built entirely upon the framework of Spiking Neural Networks (SNNs), consisting of Spike-Element-Wise (SEW) ResNet and a novel Spiking Spatiotemporal Transformer; 3) a large-scale synthetic dataset, SynEventHPD, is constructed that features a broad and diverse set of annotated 3D human motions as well as longer hours of event stream data.
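The basic building block of such SNN stacks is a spiking neuron trained with a surrogate gradient. Below is a minimal PyTorch sketch of a leaky integrate-and-fire (LIF) neuron; the time constant, threshold, and rectangular surrogate are illustrative assumptions, and the paper's Spiking Spatiotemporal Transformer adds attention on top of units like this rather than being reducible to them.

```python
# Minimal sketch of a leaky integrate-and-fire (LIF) neuron with a
# surrogate gradient, the kind of unit SEW ResNet-style SNNs are built
# from. Hyperparameters and the surrogate shape are assumptions.
import torch
import torch.nn as nn

class SpikeFn(torch.autograd.Function):
    @staticmethod
    def forward(ctx, v):
        ctx.save_for_backward(v)
        return (v >= 1.0).float()          # fire when membrane crosses threshold

    @staticmethod
    def backward(ctx, grad_out):
        v, = ctx.saved_tensors
        # Rectangular surrogate gradient around the threshold.
        return grad_out * ((v - 1.0).abs() < 0.5).float()

class LIF(nn.Module):
    def __init__(self, tau=2.0):
        super().__init__()
        self.tau = tau

    def forward(self, x_seq):              # x_seq: [T, B, C] event-driven input
        v, spikes = torch.zeros_like(x_seq[0]), []
        for x in x_seq:                    # iterate over time steps
            v = v + (x - v) / self.tau     # leaky integration
            s = SpikeFn.apply(v)
            v = v * (1.0 - s)              # hard reset after a spike
            spikes.append(s)
        return torch.stack(spikes)
```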
Uncertainty-aware State Space Transformer for Egocentric 3D Hand Trajectory Forecasting
In this paper, we set up an egocentric 3D hand trajectory forecasting task that aims to predict hand trajectories in a 3D space from early observed RGB videos in a first-person view.
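A minimal version of this task is to encode the observed hand positions and regress a Gaussian (mean plus variance) over each future 3D position, trained with a negative log-likelihood so the model expresses its own uncertainty. The sketch below uses a plain Transformer encoder as a stand-in; it is a simplification under assumed dimensions, not the paper's state space transformer.

```python
# Sketch of uncertainty-aware 3D hand trajectory forecasting: a plain
# Transformer encoder (a simplification of the paper's model) predicts a
# mean and log-variance per future step, trained with a Gaussian NLL.
import torch
import torch.nn as nn

class TrajectoryForecaster(nn.Module):
    def __init__(self, d_model=64, horizon=15):
        super().__init__()
        self.embed = nn.Linear(3, d_model)
        enc_layer = nn.TransformerEncoderLayer(d_model, nhead=4,
                                               batch_first=True)
        self.encoder = nn.TransformerEncoder(enc_layer, num_layers=2)
        self.head = nn.Linear(d_model, horizon * 6)  # mean + log-var per step
        self.horizon = horizon

    def forward(self, obs):                 # obs: [B, T_obs, 3] positions
        h = self.encoder(self.embed(obs))[:, -1]      # summary of the past
        out = self.head(h).view(-1, self.horizon, 6)
        mean, log_var = out[..., :3], out[..., 3:]
        return mean, log_var

model = TrajectoryForecaster()
obs = torch.randn(8, 20, 3)                 # 20 observed 3D hand positions
target = torch.randn(8, 15, 3)              # 15 future positions (dummy)
mean, log_var = model(obs)
loss = nn.GaussianNLLLoss()(mean, target, log_var.exp())
loss.backward()
```

Predicting a variance alongside each position lets downstream consumers discount forecasts the model itself flags as uncertain, which matters in first-person video where hands frequently leave the field of view.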