Multi-Person Pose Forecasting
7 papers with code • 2 benchmarks • 1 dataset
Most implemented papers
Learning Trajectory Dependencies for Human Motion Prediction
In this paper, we propose a simple feed-forward deep network for motion prediction, which takes into account both temporal smoothness and spatial dependencies among human body joints.
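A minimal sketch of the general idea described here, assuming a learnable adjacency over body joints; module and tensor names are illustrative and not the paper's code:

```python
# Sketch (not the paper's implementation): a feed-forward layer that mixes
# information across body joints via a learnable adjacency matrix, while each
# joint carries a feature vector summarising its past trajectory.
import torch
import torch.nn as nn

class JointGraphLayer(nn.Module):
    def __init__(self, num_joints: int, feat_dim: int):
        super().__init__()
        # Learnable spatial dependencies among joints.
        self.adjacency = nn.Parameter(torch.eye(num_joints))
        self.linear = nn.Linear(feat_dim, feat_dim)
        self.act = nn.Tanh()

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, num_joints, feat_dim); feat_dim encodes each joint's past trajectory.
        x = torch.einsum("ij,bjf->bif", self.adjacency, x)  # mix across joints
        return self.act(self.linear(x))                      # mix across features

if __name__ == "__main__":
    layer = JointGraphLayer(num_joints=22, feat_dim=64)
    past = torch.randn(8, 22, 64)   # 8 sequences, 22 joints, 64-d trajectory features
    print(layer(past).shape)        # torch.Size([8, 22, 64])
```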
Trajectory-Aware Body Interaction Transformer for Multi-Person Pose Forecasting
Specifically, we construct a Temporal Body Partition Module that transforms all the pose sequences into a Multi-Person Body-Part sequence to retain spatial and temporal information based on body semantics.
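A hedged illustration of the body-partition step the excerpt describes: joints are grouped into semantic parts and the parts of all persons and frames are laid out as one token sequence. The joint groupings and shapes below are assumptions for illustration, not the paper's definitions:

```python
# Illustrative body-partition step: group joints into semantic parts and flatten
# all (person, frame, part) combinations into one token sequence.
import torch

BODY_PARTS = {  # hypothetical joint groupings, 3 joints per part
    "torso": [0, 1, 2],
    "left_arm": [3, 4, 5],
    "right_arm": [6, 7, 8],
    "left_leg": [9, 10, 11],
    "right_leg": [12, 13, 14],
}

def to_body_part_sequence(poses: torch.Tensor) -> torch.Tensor:
    """poses: (persons, frames, joints, 3) -> tokens: (persons*frames*parts, 9)."""
    P, T, J, C = poses.shape
    tokens = []
    for p in range(P):
        for t in range(T):
            for joints in BODY_PARTS.values():
                # One token per (person, frame, body part), keeping joint coordinates.
                tokens.append(poses[p, t, joints].reshape(-1))
    return torch.stack(tokens)

if __name__ == "__main__":
    poses = torch.randn(2, 15, 15, 3)   # 2 persons, 15 past frames, 15 joints
    print(to_body_part_sequence(poses).shape)  # torch.Size([150, 9])
```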
Multi-Person Extreme Motion Prediction
In this paper, we explore this problem for humans performing collaborative tasks: we seek to predict the future motion of two interacting persons given two sequences of their past skeletons.
Multi-Person 3D Motion Prediction with Multi-Range Transformers
Thus, instead of predicting each human pose trajectory in isolation, we introduce a Multi-Range Transformers model, which consists of a local-range encoder for individual motion and a global-range encoder for social interactions.
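A rough sketch of the two-encoder idea, under assumed shapes and layer sizes (not the authors' code): a local encoder attends within each person's own tokens, and a global encoder attends across the pooled tokens of all persons.

```python
# Assumed structure for illustration: local-range attention per person, then
# global-range attention over all persons' tokens to capture interactions.
import torch
import torch.nn as nn

class TwoRangeEncoder(nn.Module):
    def __init__(self, dim: int = 128, heads: int = 4):
        super().__init__()
        make_layer = lambda: nn.TransformerEncoderLayer(dim, heads, batch_first=True)
        self.local_encoder = nn.TransformerEncoder(make_layer(), num_layers=2)
        self.global_encoder = nn.TransformerEncoder(make_layer(), num_layers=2)

    def forward(self, tokens: torch.Tensor) -> torch.Tensor:
        # tokens: (batch, persons, frames, dim)
        B, P, T, D = tokens.shape
        # Local range: each person's frames are encoded independently.
        local = self.local_encoder(tokens.reshape(B * P, T, D)).reshape(B, P, T, D)
        # Global range: all persons' tokens form one sequence, so attention can
        # mix information across people (a crude stand-in for social interaction).
        mixed = self.global_encoder(local.reshape(B, P * T, D))
        return mixed.reshape(B, P, T, D)

if __name__ == "__main__":
    enc = TwoRangeEncoder()
    x = torch.randn(2, 3, 15, 128)   # 2 batches, 3 persons, 15 frames
    print(enc(x).shape)              # torch.Size([2, 3, 15, 128])
```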
Back to MLP: A Simple Baseline for Human Motion Prediction
This paper tackles the problem of human motion prediction, which consists of forecasting future body poses from historically observed sequences.
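An illustrative MLP baseline in the spirit of this description, not the paper's exact architecture; layer widths and frame counts are assumptions:

```python
# Simple MLP baseline sketch: flatten the observed pose frames, pass them through
# plain linear layers, and reshape the output into the future frames.
import torch
import torch.nn as nn

class PoseMLP(nn.Module):
    def __init__(self, joints: int = 22, past: int = 50, future: int = 25, hidden: int = 1024):
        super().__init__()
        self.future, self.joints = future, joints
        self.net = nn.Sequential(
            nn.Linear(past * joints * 3, hidden),
            nn.ReLU(),
            nn.Linear(hidden, hidden),
            nn.ReLU(),
            nn.Linear(hidden, future * joints * 3),
        )

    def forward(self, past_poses: torch.Tensor) -> torch.Tensor:
        # past_poses: (batch, past, joints, 3) -> (batch, future, joints, 3)
        B = past_poses.shape[0]
        out = self.net(past_poses.reshape(B, -1))
        return out.reshape(B, self.future, self.joints, 3)

if __name__ == "__main__":
    model = PoseMLP()
    past = torch.randn(4, 50, 22, 3)
    print(model(past).shape)   # torch.Size([4, 25, 22, 3])
```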
SoMoFormer: Multi-Person Pose Forecasting with Transformers
Although several previous works target the problem of multi-person dynamic pose forecasting, they often model the entire pose sequence as a time series (ignoring the underlying relationships between joints) or only output the future pose sequence of one person at a time.
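A hedged sketch of the alternative token layout this excerpt hints at: instead of collapsing each frame's full pose into one time-series sample, keep one token per (person, joint) trajectory so a single model can reason over every joint of every person at once. The function and shapes are assumptions for illustration, not SoMoFormer's implementation:

```python
# Illustration only: build one token per (person, joint) trajectory rather than
# one flat pose vector per frame.
import torch

def per_joint_tokens(poses: torch.Tensor) -> torch.Tensor:
    """poses: (persons, frames, joints, 3) -> tokens: (persons*joints, frames*3)."""
    P, T, J, C = poses.shape
    # Each token is one joint's whole observed trajectory for one person.
    return poses.permute(0, 2, 1, 3).reshape(P * J, T * C)

if __name__ == "__main__":
    poses = torch.randn(3, 30, 13, 3)        # 3 persons, 30 frames, 13 joints
    print(per_joint_tokens(poses).shape)     # torch.Size([39, 90])
```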
Best Practices for 2-Body Pose Forecasting
The task of collaborative human pose forecasting is to predict the future poses of multiple interacting people, given their poses in previous frames.