One-Shot 3D Action Recognition
5 papers with code • 1 benchmark • 1 dataset
Most implemented papers
NTU RGB+D 120: A Large-Scale Benchmark for 3D Human Activity Understanding
Research on depth-based human activity analysis has achieved outstanding performance and demonstrated the effectiveness of 3D representations for action recognition.
SL-DML: Signal Level Deep Metric Learning for Multimodal One-Shot Action Recognition
Further, we show that our approach generalizes well in experiments on the UTD-MHAD dataset for inertial, skeleton, and fused data, and on the Simitate dataset for motion capture data.
Skeleton-DML: Deep Metric Learning for Skeleton-Based One-Shot Action Recognition
One-shot action recognition allows the recognition of human-performed actions with only a single training example.
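The papers above cast one-shot recognition as a metric-learning problem: a trained encoder maps each action sequence to an embedding, and a query is assigned the label of the nearest single reference example. A minimal sketch of that matching step, assuming embeddings are already computed (the toy vectors and `one_shot_classify` helper below are illustrative, not from any of the listed papers):

```python
import numpy as np

def one_shot_classify(support, query):
    """Nearest-neighbor one-shot classification in an embedding space.

    support: dict mapping class label -> a single reference embedding
    query: embedding of the action sequence to classify
    Returns the label whose reference embedding has the highest
    cosine similarity to the query.
    """
    def cosine_sim(a, b):
        return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))
    return max(support, key=lambda label: cosine_sim(support[label], query))

# Toy embeddings; in practice these come from a trained metric-learning
# encoder (e.g. over skeleton or inertial sequences).
support = {
    "wave": np.array([1.0, 0.1, 0.0]),
    "kick": np.array([0.0, 1.0, 0.2]),
}
query = np.array([0.9, 0.2, 0.0])
print(one_shot_classify(support, query))  # → wave
```

Because only one labeled example per class is available, all of the learning happens in the encoder; the classifier itself is just this parameter-free nearest-neighbor lookup.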
One-shot action recognition in challenging therapy scenarios
We also develop a set of complementary steps that boost the action recognition performance in the most challenging scenarios.
MotionBERT: A Unified Perspective on Learning Human Motion Representations
We present a unified perspective on tackling various human-centric video tasks by learning human motion representations from large-scale and heterogeneous data resources.