3D Multi-Object Tracking
31 papers with code • 6 benchmarks • 7 datasets
Figure: Weng et al.
Most implemented papers
Center-based 3D Object Detection and Tracking
Three-dimensional objects are commonly represented as 3D boxes in a point cloud.
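Center-based trackers associate objects across frames by comparing box centers rather than full boxes. Below is a minimal sketch of greedy, velocity-compensated center matching in that spirit; the function name, data layout, and the 0.5 m gate are illustrative assumptions, not the paper's code.

```python
# Sketch of greedy center-distance association for center-based tracking.
# The gating threshold and array layout are assumptions for illustration.
import numpy as np

def greedy_center_match(track_centers, track_velocities, det_centers, dt=0.1, max_dist=0.5):
    """track_centers, track_velocities: (T, 3); det_centers: (N, 3).
    Returns a list of (track_idx, det_idx) matches."""
    if len(track_centers) == 0 or len(det_centers) == 0:
        return []
    predicted = track_centers + dt * track_velocities              # propagate tracks forward
    dists = np.linalg.norm(predicted[:, None, :] - det_centers[None, :, :], axis=-1)
    matches, used_t, used_d = [], set(), set()
    for t, d in sorted(np.ndindex(dists.shape), key=lambda td: dists[td]):
        if t in used_t or d in used_d or dists[t, d] > max_dist:
            continue                                               # already matched or too far
        matches.append((t, d))
        used_t.add(t); used_d.add(d)
    return matches
```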
Probabilistic 3D Multi-Object Tracking for Autonomous Driving
Our method estimates object states with a Kalman filter.
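Kalman-filter tracking reduces to a predict/update cycle over each track's state. A minimal sketch for a constant-velocity 3D-center state follows; the time step and noise matrices are assumed values, not the paper's tuned parameters.

```python
# Minimal constant-velocity Kalman filter for one track; the state is
# [x, y, z, vx, vy, vz] and the measurement is a detected 3D center.
import numpy as np

dt = 0.1                                         # assumed frame interval (s)
F = np.eye(6)                                    # transition: position += velocity * dt
F[:3, 3:] = dt * np.eye(3)
H = np.hstack([np.eye(3), np.zeros((3, 3))])     # we only observe the 3D center
Q = 0.01 * np.eye(6)                             # process noise (assumed)
R = 0.1 * np.eye(3)                              # measurement noise (assumed)

x, P = np.zeros(6), np.eye(6)                    # initial state and covariance

def predict(x, P):
    return F @ x, F @ P @ F.T + Q

def update(x, P, z):
    y = z - H @ x                                # innovation
    S = H @ P @ H.T + R                          # innovation covariance
    K = P @ H.T @ np.linalg.inv(S)               # Kalman gain
    return x + K @ y, (np.eye(6) - K @ H) @ P

x, P = predict(x, P)
x, P = update(x, P, np.array([1.0, 2.0, 0.5]))   # fuse in a detected center
```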
EagerMOT: 3D Multi-Object Tracking via Sensor Fusion
Multi-object tracking (MOT) enables mobile robots to perform well-informed motion planning and navigation by localizing surrounding objects in 3D space and time.
Exploring Simple 3D Multi-Object Tracking for Autonomous Driving
3D multi-object tracking in LiDAR point clouds is a key ingredient for self-driving vehicles.
SRT3D: A Sparse Region-Based 3D Object Tracking Approach for the Real World
Finally, we use a pre-rendered sparse viewpoint model to create a joint posterior probability for the object pose.
SimpleTrack: Understanding and Rethinking 3D Multi-object Tracking
3D multi-object tracking (MOT) has witnessed numerous novel benchmarks and approaches in recent years, especially those under the "tracking-by-detection" paradigm.
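For reference, the tracking-by-detection loop shared by these methods is: predict existing tracks, associate them with the current detections, update the matched tracks, and manage births and deaths. A schematic sketch follows; the Track class, the associate callback, and the life-cycle threshold are placeholders, not any specific paper's implementation.

```python
# Illustrative tracking-by-detection loop; all names and thresholds are assumptions.
class Track:
    def __init__(self, det, track_id):
        self.state, self.id, self.missed = det, track_id, 0
    def predict(self):
        pass                                     # e.g. Kalman prediction of the 3D box
    def update(self, det):
        self.state, self.missed = det, 0

def track_sequence(detections_per_frame, associate, max_missed=3):
    tracks, next_id = [], 0
    for dets in detections_per_frame:
        for trk in tracks:
            trk.predict()
        matches, unmatched_trks, unmatched_dets = associate(tracks, dets)
        for t_idx, d_idx in matches:
            tracks[t_idx].update(dets[d_idx])    # correct state with the matched detection
        for t_idx in unmatched_trks:
            tracks[t_idx].missed += 1            # age unmatched tracks
        for d_idx in unmatched_dets:
            tracks.append(Track(dets[d_idx], next_id)); next_id += 1
        tracks = [t for t in tracks if t.missed <= max_missed]   # drop stale tracks
    return tracks
```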
SRCN3D: Sparse R-CNN 3D for Compact Convolutional Multi-View 3D Object Detection and Tracking
Our novel sparse feature sampling module only utilizes local 2D region of interest (RoI) features calculated by the projection of 3D query boxes for further box refinement, leading to a fully-convolutional and deployment-friendly pipeline.
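The projection step this relies on can be illustrated in a few lines: project the corners of a 3D query box through the camera intrinsics and take the bounding rectangle as the 2D RoI. The sketch below assumes a pinhole camera and an axis-aligned box in camera coordinates, which is a simplification of the actual module.

```python
# Sketch: project an axis-aligned 3D box (camera frame) to a 2D RoI.
import numpy as np

def box3d_to_roi(center, size, K):
    """center, size: (3,) in the camera frame; K: (3, 3) intrinsics.
    Returns (x_min, y_min, x_max, y_max) in pixels."""
    half = np.asarray(size) / 2.0
    signs = np.array([[sx, sy, sz] for sx in (-1, 1) for sy in (-1, 1) for sz in (-1, 1)])
    corners = np.asarray(center) + signs * half          # (8, 3) box corners
    proj = (K @ corners.T).T                             # pinhole projection
    uv = proj[:, :2] / proj[:, 2:3]                      # divide by depth
    return uv[:, 0].min(), uv[:, 1].min(), uv[:, 0].max(), uv[:, 1].max()

K = np.array([[1000., 0., 640.], [0., 1000., 360.], [0., 0., 1.]])  # assumed intrinsics
print(box3d_to_roi(center=[2.0, 0.0, 10.0], size=[1.8, 1.6, 4.2], K=K))
```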
Probabilistic 3D Multi-Object Cooperative Tracking for Autonomous Driving via Differentiable Multi-Sensor Kalman Filter
However, their proposed methods mainly use cooperative detection results as input to a standard single-sensor Kalman Filter-based tracking algorithm.
FANTrack: 3D Multi-Object Tracking with Feature Association Network
Instead, we exploit the power of deep learning to formulate the data association problem as inference in a CNN.
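However the affinities are learned, turning a similarity matrix into one-to-one track/detection assignments is typically solved as a linear assignment problem. A minimal sketch assuming per-object appearance embeddings are already available; the similarity gate is an arbitrary placeholder.

```python
# Sketch: cosine-similarity affinity between track and detection embeddings,
# solved as a linear assignment; the 0.3 similarity gate is an assumption.
import numpy as np
from scipy.optimize import linear_sum_assignment

def associate_by_affinity(track_feats, det_feats, min_sim=0.3):
    """track_feats: (T, D); det_feats: (N, D). Returns a list of (track_idx, det_idx)."""
    t = track_feats / np.linalg.norm(track_feats, axis=1, keepdims=True)
    d = det_feats / np.linalg.norm(det_feats, axis=1, keepdims=True)
    affinity = t @ d.T                                    # cosine similarity in [-1, 1]
    rows, cols = linear_sum_assignment(-affinity)         # maximize total similarity
    return [(r, c) for r, c in zip(rows, cols) if affinity[r, c] >= min_sim]
```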
3D Multi-Object Tracking: A Baseline and New Evaluation Metrics
Additionally, 3D MOT datasets such as KITTI evaluate MOT methods in 2D space, and standardized 3D MOT evaluation tools for a fair comparison of 3D MOT methods are missing.
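As a rough illustration of what such an evaluation computes, the sketch below scores a sequence with a heavily simplified, center-distance-based MOTA (MOTA = 1 - (FN + FP + IDSW) / GT); real 3D MOT toolkits match with 3D IoU and also report integrated metrics such as AMOTA. The data layout and 2 m gate are assumptions.

```python
# Simplified 3D MOTA: greedy center-distance matching per frame,
# then MOTA = 1 - (FN + FP + IDSW) / num_gt. For illustration only.
import numpy as np

def simple_mota(gt_frames, pred_frames, dist_thresh=2.0):
    """gt_frames / pred_frames: per frame, dicts {object_id: 3D center np.array}."""
    fn = fp = idsw = num_gt = 0
    last_match = {}                                # gt_id -> pred_id from earlier frames
    for gts, preds in zip(gt_frames, pred_frames):
        num_gt += len(gts)
        pairs = sorted((np.linalg.norm(g - p), gid, pid)
                       for gid, g in gts.items() for pid, p in preds.items())
        matched_g, matched_p = set(), set()
        for d, gid, pid in pairs:
            if d > dist_thresh or gid in matched_g or pid in matched_p:
                continue
            if gid in last_match and last_match[gid] != pid:
                idsw += 1                          # same GT object, different track id
            last_match[gid] = pid
            matched_g.add(gid); matched_p.add(pid)
        fn += len(gts) - len(matched_g)            # missed ground-truth objects
        fp += len(preds) - len(matched_p)          # spurious predictions
    return 1.0 - (fn + fp + idsw) / max(num_gt, 1)
```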