Multi-Object Tracking and Segmentation
16 papers with code • 2 benchmarks • 3 datasets
Multiple object tracking and segmentation requires detecting, tracking, and segmenting objects belonging to a set of given classes.
(Definition credit: Prototypical Cross-Attention Networks for Multiple Object Tracking and Segmentation, NeurIPS 2021, Spotlight)
Most implemented papers
BDD100K: A Diverse Driving Dataset for Heterogeneous Multitask Learning
Datasets drive vision progress, yet existing driving datasets are impoverished in terms of visual content and supported tasks to study multitask learning for autonomous driving.
EagerMOT: 3D Multi-Object Tracking via Sensor Fusion
Multi-object tracking (MOT) enables mobile robots to perform well-informed motion planning and navigation by localizing surrounding objects in 3D space and time.
D2Conv3D: Dynamic Dilated Convolutions for Object Segmentation in Videos
We further show that D2Conv3D outperforms trivial extensions of existing dilated and deformable convolutions to 3D.
Segment as Points for Efficient Online Multi-Object Tracking and Segmentation
The resulting online MOTS framework, named PointTrack, surpasses all state-of-the-art methods, including 3D tracking methods, by large margins (5.4% higher MOTSA and 18 times faster than MOTSFusion) at near real-time speed (22 FPS).
PointTrack++ for Effective Online Multi-Object Tracking and Segmentation
In this work, we present PointTrack++, an effective online framework for MOTS that substantially extends our recently proposed PointTrack framework.
Online Multi-Object Tracking and Segmentation with GMPHD Filter and Mask-based Affinity Fusion
One affinity, for position and motion, is computed using the GMPHD filter; the other, for appearance, is computed using the responses of a single-object tracker such as a kernelized correlation filter.
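The two-affinity scheme can be sketched as blending a motion/position affinity matrix with an appearance affinity matrix and then solving the track-to-detection assignment. The sketch below is illustrative only (the function names, the linear fusion weight, and the Hungarian matcher are assumptions, not the paper's actual GMPHD implementation):

```python
# Hypothetical sketch of mask-based affinity fusion for online MOTS.
# Assumed names: fuse_affinities, associate, alpha; not from the paper.
import numpy as np
from scipy.optimize import linear_sum_assignment

def fuse_affinities(motion_aff, appearance_aff, alpha=0.5):
    """Blend a motion/position affinity (e.g., from a GMPHD filter) with an
    appearance affinity (e.g., correlation-filter responses)."""
    return alpha * motion_aff + (1.0 - alpha) * appearance_aff

def associate(motion_aff, appearance_aff, min_affinity=0.3, alpha=0.5):
    """Return (track_idx, detection_idx) pairs maximizing fused affinity."""
    fused = fuse_affinities(motion_aff, appearance_aff, alpha)
    rows, cols = linear_sum_assignment(-fused)  # Hungarian on negated scores
    return [(r, c) for r, c in zip(rows, cols) if fused[r, c] >= min_affinity]

# Two existing tracks scored against three new detections.
motion = np.array([[0.9, 0.1, 0.0],
                   [0.2, 0.8, 0.1]])
appear = np.array([[0.8, 0.2, 0.1],
                   [0.1, 0.9, 0.2]])
matches = associate(motion, appear)
# Track 0 matches detection 0, track 1 matches detection 1;
# detection 2 stays unmatched and would start a new track.
```

Unmatched detections (those below `min_affinity` or with no assigned track) would typically spawn new tracks, while unmatched tracks are aged out.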
Continuous Copy-Paste for One-Stage Multi-Object Tracking and Segmentation
Current one-step multi-object tracking and segmentation (MOTS) methods lag behind recent two-step methods.
Assignment-Space-Based Multi-Object Tracking and Segmentation
In contrast, we formulate a global method for MOTS over the space of assignments rather than detections: first, we find the top-k assignments of objects detected and segmented between any two consecutive frames; then, we develop a structured prediction formulation to score assignment sequences across any number of consecutive frames.
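The assignment-space idea can be sketched as enumerating top-k candidate assignments per consecutive frame pair and then selecting one assignment per pair to maximize a sequence score. This is a minimal, assumed sketch (brute-force permutation scoring, no pairwise temporal terms), not the paper's actual structured-prediction model:

```python
# Hypothetical sketch of tracking over an "assignment space" (illustrative;
# the paper's scoring and structured prediction are richer than this).
from itertools import permutations

def topk_assignments(affinity, k=3):
    """Enumerate permutations of detections for one frame pair, scored by
    summed affinity, and keep the k best. Assumes equal object counts."""
    n = len(affinity)
    scored = [(sum(affinity[i][p[i]] for i in range(n)), p)
              for p in permutations(range(n))]
    scored.sort(reverse=True)
    return scored[:k]

def best_assignment_sequence(affinities, k=3):
    """Pick the best candidate assignment for each consecutive frame pair.
    With only per-pair scores the optimum decomposes per pair; a real system
    would add pairwise temporal-consistency terms and a DP/Viterbi pass."""
    return [max(topk_assignments(a, k))[1] for a in affinities]

# Affinities for two consecutive frame pairs, 2 objects each.
pair1 = [[0.9, 0.1], [0.2, 0.8]]   # identity mapping scores highest
pair2 = [[0.1, 0.9], [0.7, 0.3]]   # the two objects swap positions
seq = best_assignment_sequence([pair1, pair2], k=2)
# seq is [(0, 1), (1, 0)]: keep identities, then swap.
```

Keeping the k best assignments per frame pair, rather than a single greedy match, is what lets a global scorer recover from a locally wrong association later in the sequence.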
Prototypical Cross-Attention Networks for Multiple Object Tracking and Segmentation
We propose Prototypical Cross-Attention Network (PCAN), capable of leveraging rich spatio-temporal information for online multiple object tracking and segmentation.
Do Different Tracking Tasks Require Different Appearance Models?
We show how most tracking tasks can be solved within this framework, and that the same appearance model can be successfully used to obtain results that are competitive against specialised methods for most of the tasks considered.