Spatio-Temporal Action Localization
13 papers with code • 1 benchmark • 6 datasets
Most implemented papers
Actor-Context-Actor Relation Network for Spatio-Temporal Action Localization
We propose to explicitly model the Actor-Context-Actor Relation, which is the relation between two actors based on their interactions with the context.
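A rough illustration of that idea is sketched below in PyTorch: each actor's ROI feature is fused with every location of a spatial context map (first-order actor-context relation), and actors then attend to one another's context-conditioned features. This is not the authors' code; the module name, feature shapes, and attention head count are assumptions.

```python
# Minimal sketch of the actor-context-actor idea (not the ACAR authors' code).
import torch
import torch.nn as nn

class ActorContextActorSketch(nn.Module):
    def __init__(self, channels, num_heads=4):
        super().__init__()
        # First-order relation: fuse each actor feature with every context location.
        self.first_order = nn.Conv2d(2 * channels, channels, kernel_size=1)
        # Higher-order reasoning: actors attend to each other's context-conditioned features.
        # channels is assumed to be divisible by num_heads.
        self.attn = nn.MultiheadAttention(channels, num_heads, batch_first=True)

    def forward(self, actor_feats, context):
        # actor_feats: (N, C) ROI-pooled actor features; context: (C, H, W) clip feature map.
        n, c = actor_feats.shape
        _, h, w = context.shape
        ctx = context.unsqueeze(0).expand(n, c, h, w)            # (N, C, H, W)
        act = actor_feats.view(n, c, 1, 1).expand(n, c, h, w)    # broadcast each actor over space
        rel = self.first_order(torch.cat([act, ctx], dim=1))     # actor-context relation maps
        rel = rel.mean(dim=(2, 3)).unsqueeze(0)                  # (1, N, C): one vector per actor
        out, _ = self.attn(rel, rel, rel)                        # actor-context-actor reasoning
        return out.squeeze(0)                                    # (N, C) relation-aware actor features
```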
Action Tubelet Detector for Spatio-Temporal Action Localization
We propose the ACtion Tubelet detector (ACT-detector) that takes as input a sequence of frames and outputs tubelets, i.e., sequences of bounding boxes with associated scores.
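The tubelet output format can be pictured with a small data structure; the `Tubelet` class and `link_to_tube` helper below are hypothetical names used only for illustration, and real tubelet linking is done greedily by IoU overlap and score rather than simple concatenation.

```python
# Illustrative tubelet container (hypothetical names, not the ACT-detector API).
from dataclasses import dataclass
from typing import List, Tuple

@dataclass
class Tubelet:
    label: int                                            # action class index
    score: float                                          # confidence for the whole tubelet
    boxes: List[Tuple[int, float, float, float, float]]   # (frame_index, x1, y1, x2, y2) per frame

def link_to_tube(tubelets: List[Tubelet]) -> List[Tuple[int, float, float, float, float]]:
    """Concatenate per-clip tubelets into one video-level tube.
    Naive sketch: real methods link temporally overlapping tubelets by IoU and score."""
    tube = []
    for t in sorted(tubelets, key=lambda t: t.boxes[0][0]):
        tube.extend(t.boxes)
    return tube
```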
1st place solution for AVA-Kinetics Crossover in ActivityNet Challenge 2020
This technical report introduces our winning solution to the spatio-temporal action localization track, AVA-Kinetics Crossover, in ActivityNet Challenge 2020.
Chained Multi-stream Networks Exploiting Pose, Motion, and Appearance for Action Classification and Detection
In this paper, we propose a network architecture that computes and integrates the most important visual cues for action recognition: pose, motion, and the raw images.
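A minimal sketch of such a chained fusion follows, with placeholder linear layers standing in for the per-cue CNNs; the residual hand-off between stages and the layer names are assumptions made for brevity, not the paper's exact architecture.

```python
# Rough sketch of chaining three cue-specific streams (pose, motion, appearance),
# where each stage's features build on the previous stage's output.
import torch
import torch.nn as nn

class ChainedStreams(nn.Module):
    def __init__(self, feat_dim, num_classes):
        super().__init__()
        self.pose_net = nn.LazyLinear(feat_dim)   # stands in for a pose-stream CNN
        self.flow_net = nn.LazyLinear(feat_dim)   # stands in for an optical-flow CNN
        self.rgb_net = nn.LazyLinear(feat_dim)    # stands in for an appearance CNN
        self.heads = nn.ModuleList([nn.Linear(feat_dim, num_classes) for _ in range(3)])

    def forward(self, pose, flow, rgb):
        # Each stage consumes its own cue plus the previous stage's features.
        f1 = torch.relu(self.pose_net(pose))
        f2 = torch.relu(self.flow_net(flow)) + f1
        f3 = torch.relu(self.rgb_net(rgb)) + f2
        # One prediction per stage; the last (most informed) stage is typically used.
        return [head(f) for head, f in zip(self.heads, (f1, f2, f3))]
```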
Actor-Centric Relation Network
A visualization of the learned relation features confirms that our approach is able to attend to the relevant relations for each action.
Video action detection by learning graph-based spatio-temporal interactions
Action Detection is a complex task that aims to detect and classify human actions in video clips.
ST-HOI: A Spatial-Temporal Baseline for Human-Object Interaction Detection in Videos
Detecting human-object interactions (HOI) is an important step toward comprehensive visual understanding by machines.
KORSAL: Key-point Detection based Online Real-Time Spatio-Temporal Action Localization
Despite the simplicity of our approach, our lightweight end-to-end architecture achieves state-of-the-art frame-mAP of 74.7% on the challenging UCF101-24 dataset, demonstrating a performance gain of 6.4% over the previous best online methods.
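For context, frame-mAP matches per-frame detections to ground-truth boxes of the same class at an IoU threshold and averages the per-class average precision. The sketch below is a simplified re-implementation of that metric for one class, not the official UCF101-24 evaluation code, so numbers it produces are illustrative only.

```python
# Simplified per-class frame-AP; frame-mAP is the mean of this over all classes.
import numpy as np

def iou(a, b):
    """IoU of two boxes given as (x1, y1, x2, y2)."""
    x1, y1 = max(a[0], b[0]), max(a[1], b[1])
    x2, y2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, x2 - x1) * max(0.0, y2 - y1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter + 1e-9)

def frame_ap(dets, gts, thresh=0.5):
    """dets: list of (frame_id, score, box); gts: dict frame_id -> list of GT boxes."""
    dets = sorted(dets, key=lambda d: -d[1])                 # highest score first
    n_gt = sum(len(v) for v in gts.values())
    matched = {f: [False] * len(v) for f, v in gts.items()}
    tp, fp = np.zeros(len(dets)), np.zeros(len(dets))
    for i, (f, _, box) in enumerate(dets):
        best, best_iou = -1, thresh
        for j, g in enumerate(gts.get(f, [])):
            ov = iou(box, g)
            if ov >= best_iou and not matched[f][j]:
                best, best_iou = j, ov
        if best >= 0:
            matched[f][best] = True                          # greedy one-to-one matching
            tp[i] = 1
        else:
            fp[i] = 1
    rec = np.cumsum(tp) / max(n_gt, 1)
    prec = np.cumsum(tp) / np.maximum(np.cumsum(tp) + np.cumsum(fp), 1e-9)
    # Area under the precision-recall curve.
    ap, prev_rec = 0.0, 0.0
    for r, p in zip(rec, prec):
        ap += (r - prev_rec) * p
        prev_rec = r
    return ap
```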
Contextualized Spatio-Temporal Contrastive Learning with Self-Supervision
Modern self-supervised learning algorithms typically enforce persistency of instance representations across views.
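A generic cross-view InfoNCE loss illustrates the persistency objective being referred to: two augmented views of the same clip are pulled together while other clips in the batch act as negatives. This is a standard contrastive sketch, not the paper's contextualized formulation.

```python
# Generic cross-view InfoNCE loss (illustrative, not the paper's contextualized variant).
import torch
import torch.nn.functional as F

def cross_view_infonce(z1, z2, temperature=0.1):
    """z1, z2: (B, D) embeddings of two augmented views of the same B clips."""
    z1 = F.normalize(z1, dim=1)
    z2 = F.normalize(z2, dim=1)
    logits = z1 @ z2.t() / temperature                       # (B, B) cosine similarities
    targets = torch.arange(z1.size(0), device=z1.device)
    # Matching views (the diagonal) are positives; all other pairs are negatives.
    return F.cross_entropy(logits, targets)
```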
E^2TAD: An Energy-Efficient Tracking-based Action Detector
Video action detection (spatio-temporal action localization) is typically the starting point for human-centric intelligent analysis of videos.