Pose Tracking
60 papers with code • 3 benchmarks • 9 datasets
Pose Tracking is the task of estimating multi-person human poses in videos and assigning a unique instance ID to each keypoint across frames. Accurate estimation of human keypoint trajectories is useful for human action recognition, human interaction understanding, motion capture, and animation.
Source: LightTrack: A Generic Framework for Online Top-Down Human Pose Tracking
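To make the task output concrete, here is a minimal sketch of a per-frame pose-tracking result: each frame holds several person instances, and a track ID ties the same person's keypoints together across frames. The type and field names (PersonPose, FramePoses, track_id, and so on) are hypothetical, not the schema of any particular benchmark.

```python
# Minimal sketch of a pose-tracking result format (hypothetical field names,
# not any specific benchmark's schema).
from dataclasses import dataclass
from typing import List, Tuple

@dataclass
class PersonPose:
    track_id: int                          # unique instance ID, stable across frames
    keypoints: List[Tuple[float, float]]   # (x, y) per joint, e.g. 17 COCO joints
    scores: List[float]                    # per-keypoint confidence

@dataclass
class FramePoses:
    frame_index: int
    people: List[PersonPose]

def keypoint_trajectory(frames: List[FramePoses], track_id: int, joint: int):
    """Collect one joint's (x, y) positions for a single tracked person."""
    return [
        person.keypoints[joint]
        for frame in frames
        for person in frame.people
        if person.track_id == track_id
    ]
```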
Most implemented papers
Deep High-Resolution Representation Learning for Human Pose Estimation
We start from a high-resolution subnetwork as the first stage, gradually add high-to-low resolution subnetworks one by one to form more stages, and connect the multi-resolution subnetworks in parallel.
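For readers unfamiliar with the parallel multi-resolution design, the following is a highly simplified two-branch sketch of the idea: a high-resolution branch is kept throughout while a lower-resolution branch runs alongside it, and the two exchange features. The module and parameter names (TwoBranchStage, channels_high, channels_low) are assumptions made for illustration; this is not the official HRNet implementation.

```python
# Simplified illustration of parallel multi-resolution branches with feature
# exchange, loosely inspired by HRNet's design (not the official code).
import torch
import torch.nn as nn
import torch.nn.functional as F

class TwoBranchStage(nn.Module):
    def __init__(self, channels_high=32, channels_low=64):
        super().__init__()
        self.high = nn.Conv2d(channels_high, channels_high, 3, padding=1)
        self.low = nn.Conv2d(channels_low, channels_low, 3, padding=1)
        # Cross-resolution exchange: downsample high->low, upsample low->high.
        self.high_to_low = nn.Conv2d(channels_high, channels_low, 3, stride=2, padding=1)
        self.low_to_high = nn.Conv2d(channels_low, channels_high, 1)

    def forward(self, x_high, x_low):
        h = F.relu(self.high(x_high))
        l = F.relu(self.low(x_low))
        # Fuse the parallel branches so each keeps information from the other.
        l_up = F.interpolate(self.low_to_high(l), size=h.shape[-2:],
                             mode="bilinear", align_corners=False)
        h_down = self.high_to_low(h)
        return h + l_up, l + h_down

# Usage: a full-resolution feature map and a half-resolution feature map.
x_high = torch.randn(1, 32, 64, 48)
x_low = torch.randn(1, 64, 32, 24)
y_high, y_low = TwoBranchStage()(x_high, x_low)
```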
Simple Baselines for Human Pose Estimation and Tracking
There has been significant progress on pose estimation and increasing interest in pose tracking in recent years.
BlazePose: On-device Real-time Body Pose tracking
We present BlazePose, a lightweight convolutional neural network architecture for human pose estimation that is tailored for real-time inference on mobile devices.
Event-based Camera Pose Tracking using a Generative Event Model
Event-based vision sensors mimic the operation of the biological retina and represent a major paradigm shift from traditional cameras.
PoseTrack: Joint Multi-Person Pose Estimation and Tracking
In this work, we introduce the challenging problem of joint multi-person pose estimation and tracking of an unknown number of persons in unconstrained videos.
Capturing Hand Motion with an RGB-D Sensor, Fusing a Generative Model with Salient Points
In this work, we propose a framework for hand tracking that can capture the motion of two interacting hands using only a single, inexpensive RGB-D camera.
PoseTrack: A Benchmark for Human Pose Estimation and Tracking
In this work, we aim to further advance the state of the art by establishing "PoseTrack", a new large-scale benchmark for video-based human pose estimation and articulated tracking, and bringing together the community of researchers working on visual human analysis.
Multigrid Predictive Filter Flow for Unsupervised Learning on Videos
We introduce multigrid Predictive Filter Flow (mgPFF), a framework for unsupervised learning on videos.
LightTrack: A Generic Framework for Online Top-Down Human Pose Tracking
To the best of our knowledge, this is the first paper to propose an online human pose tracking framework in a top-down fashion.
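To make the top-down setting concrete, below is a generic sketch of an online top-down tracking loop: poses are estimated frame by frame and each new pose is greedily matched to the previous frame's tracks by mean keypoint distance. The greedy distance matching and the names (track_online, match_threshold) are illustrative stand-ins; LightTrack itself associates poses with a learned pose-matching module.

```python
# Generic sketch of an online top-down pose-tracking loop. The greedy
# keypoint-distance matching below is a simple stand-in for the learned
# pose matching used in LightTrack; only the overall frame-by-frame,
# top-down structure is illustrated here.
import itertools
import math

def mean_keypoint_distance(pose_a, pose_b):
    """Average Euclidean distance between corresponding keypoints."""
    return sum(math.dist(p, q) for p, q in zip(pose_a, pose_b)) / len(pose_a)

def track_online(frames_of_poses, match_threshold=50.0):
    """frames_of_poses: per frame, a list of poses, each a list of (x, y)."""
    next_id = itertools.count()
    prev_tracks = {}          # track_id -> pose in the previous frame
    results = []
    for poses in frames_of_poses:
        assignments = {}
        used = set()
        for pose in poses:
            # Assign to the nearest unused track, or start a new track.
            best_id, best_dist = None, match_threshold
            for tid, prev_pose in prev_tracks.items():
                if tid in used:
                    continue
                d = mean_keypoint_distance(pose, prev_pose)
                if d < best_dist:
                    best_id, best_dist = tid, d
            tid = best_id if best_id is not None else next(next_id)
            used.add(tid)
            assignments[tid] = pose
        prev_tracks = assignments
        results.append(assignments)
    return results
```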
6-PACK: Category-level 6D Pose Tracker with Anchor-Based Keypoints
We present 6-PACK, a deep learning approach to category-level 6D object pose tracking on RGB-D data.