Human Dynamics
18 papers with code • 0 benchmarks • 1 dataset
Benchmarks
These leaderboards are used to track progress in Human Dynamics.
Most implemented papers
Crowd-Robot Interaction: Crowd-aware Robot Navigation with Attention-based Deep Reinforcement Learning
We propose to (i) rethink pairwise interactions with a self-attention mechanism, and (ii) jointly model Human-Robot as well as Human-Human interactions in the deep reinforcement learning framework.
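The attention-based pooling idea can be sketched as follows: score each robot-human interaction feature, softmax the scores, and take a weighted sum so the crowd representation has a fixed size regardless of crowd size. This is a minimal numpy sketch; `attention_pool` and the upstream pairwise encoder it assumes are illustrative, not the paper's implementation.

```python
import numpy as np

def attention_pool(pairwise_feats: np.ndarray) -> np.ndarray:
    """Pool per-human interaction features into one crowd feature.

    pairwise_feats: (n_humans, d) robot-human interaction embeddings
    (produced by a hypothetical upstream encoder).
    """
    # Score each human's feature against the mean crowd feature.
    mean_feat = pairwise_feats.mean(axis=0)
    scores = pairwise_feats @ mean_feat              # (n_humans,)
    weights = np.exp(scores - scores.max())
    weights /= weights.sum()                         # softmax attention weights
    # Attention-weighted sum: output size is independent of crowd size.
    return weights @ pairwise_feats                  # (d,)

feats = np.random.default_rng(0).normal(size=(5, 8))
crowd = attention_pool(feats)
```

Because the pooled feature has a fixed dimension, it can feed a reinforcement-learning policy regardless of how many humans are present.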
PoseFormerV2: Exploring Frequency Domain for Efficient and Robust 3D Human Pose Estimation
However, in real scenarios, the performance of PoseFormer and its follow-ups is limited by two factors: (a) The length of the input joint sequence; (b) The quality of 2D joint detection.
Convolutional Sequence to Sequence Model for Human Dynamics
Human motion modeling is a classic problem in computer vision and graphics.
MT-VAE: Learning Motion Transformations to Generate Multimodal Human Dynamics
Our model jointly learns a feature embedding for motion modes (that the motion sequence can be reconstructed from) and a feature transformation that represents the transition of one motion mode to the next motion mode.
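The two jointly learned components can be caricatured with linear maps: an embedding of the current motion mode, plus an additive transformation conditioned on a latent code that moves the embedding to the next mode. All matrices below are random placeholders standing in for trained weights; this is a toy sketch, not the MT-VAE architecture.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy linear stand-ins (random placeholders, not trained weights):
d_motion, d_embed, d_latent = 16, 8, 4
W_enc = rng.normal(size=(d_embed, d_motion))    # feature embedding
W_dec = rng.normal(size=(d_motion, d_embed))    # reconstruction decoder
W_trans = rng.normal(size=(d_embed, d_latent))  # latent-conditioned transformation

def next_mode_embedding(motion: np.ndarray, z: np.ndarray) -> np.ndarray:
    """Embed the current motion mode, then apply an additive
    transformation conditioned on latent z to reach the next mode."""
    e = W_enc @ motion
    return e + W_trans @ z  # transition between motion modes

motion = rng.normal(size=d_motion)
z = rng.normal(size=d_latent)            # sampling different z values
e_next = next_mode_embedding(motion, z)  # yields multimodal futures
recon_next = W_dec @ e_next              # decode back to motion space
```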
Action-Agnostic Human Pose Forecasting
In this paper, we propose a new action-agnostic method for short- and long-term human pose forecasting.
Learning 3D Human Dynamics from Video
We present a framework that can similarly learn a representation of 3D dynamics of humans from video via a simple but effective temporal encoding of image features.
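A "simple but effective temporal encoding of image features" can be illustrated by mixing each frame's feature with its neighbors. The sketch below uses a sliding-window average as a stand-in for the learned temporal encoder; `temporal_encode` and its `window` parameter are illustrative names, not the paper's API.

```python
import numpy as np

def temporal_encode(frame_feats: np.ndarray, window: int = 3) -> np.ndarray:
    """Temporally contextualize per-frame features with a sliding average
    (a stand-in for a learned temporal convolution).

    frame_feats: (T, d) image features, one row per video frame.
    Returns a (T, d) array of temporally smoothed features.
    """
    T, _ = frame_feats.shape
    pad = window // 2
    # Edge-pad in time so every frame has a full window of neighbors.
    padded = np.pad(frame_feats, ((pad, pad), (0, 0)), mode="edge")
    return np.stack([padded[t:t + window].mean(axis=0) for t in range(T)])
```

Feeding such temporally mixed features to a per-frame pose regressor is what lets the model exploit 3D dynamics rather than single-frame evidence.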
Predicting 3D Human Dynamics from Video
In this work, we present perhaps the first approach for predicting a future 3D mesh model sequence of a person from past video input.
Contact and Human Dynamics from Monocular Video
Existing deep models predict 2D and 3D kinematic poses from video that are approximately accurate, but contain visible errors that violate physical constraints, such as feet penetrating the ground and bodies leaning at extreme angles.
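One of the physical violations mentioned, feet penetrating the ground, is easy to measure from a predicted pose sequence. The sketch below is an illustrative check only, not the paper's physics-based optimization; `foot_penetration` is a hypothetical helper name.

```python
import numpy as np

def foot_penetration(foot_heights: np.ndarray, ground: float = 0.0) -> np.ndarray:
    """Per-frame depth by which a predicted foot joint dips below the
    ground plane (illustrative constraint check).

    foot_heights: (T,) vertical coordinate of a foot joint over T frames.
    Returns (T,) non-negative penetration depths; zero means no violation.
    """
    return np.maximum(ground - foot_heights, 0.0)

heights = np.array([0.05, 0.0, -0.02, -0.01, 0.03])
depths = foot_penetration(heights)  # frames 2 and 3 violate the constraint
```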
Behavior-Driven Synthesis of Human Dynamics
Using this representation, we are able to change the behavior of a person depicted in an arbitrary posture, or to even directly transfer behavior observed in a given video sequence.
Towards Tokenized Human Dynamics Representation
For human action understanding, a popular research direction is to analyze short video clips with unambiguous semantic content, such as jumping and drinking.