Sensor Fusion
89 papers with code • 0 benchmarks • 2 datasets
Sensor fusion is the process of combining sensor data, or data derived from disparate sources, such that the resulting information has less uncertainty than would be possible if these sources were used individually. [Wikipedia]
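As a minimal illustration of this uncertainty reduction, the sketch below fuses two noisy measurements of the same quantity by inverse-variance weighting (the static form of a Kalman update); the sensor names and noise values are made up for the example.

```python
# Two independent measurements of the same quantity (e.g., range to an object),
# each with its own variance. Values are illustrative only.
z_lidar, var_lidar = 10.3, 0.04   # low-noise sensor
z_radar, var_radar = 10.9, 0.25   # higher-noise sensor

# Inverse-variance weighting: the optimal linear fusion of two
# independent Gaussian estimates of the same quantity.
w_lidar = (1 / var_lidar) / (1 / var_lidar + 1 / var_radar)
w_radar = 1.0 - w_lidar

z_fused = w_lidar * z_lidar + w_radar * z_radar
var_fused = 1.0 / (1 / var_lidar + 1 / var_radar)

print(f"fused estimate = {z_fused:.3f}, fused variance = {var_fused:.4f}")
# The fused variance (~0.034) is smaller than either input variance,
# which is exactly the "less uncertainty" the definition refers to.
```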
Most implemented papers
Improvements to Target-Based 3D LiDAR to Camera Calibration
The homogeneous transformation between a LiDAR and monocular camera is required for sensor fusion tasks, such as SLAM.
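For context, here is a generic sketch of how such a LiDAR-to-camera transformation is used once it has been calibrated: points are mapped into the camera frame with a 4x4 homogeneous matrix and then projected with the camera intrinsics. The matrices and point values below are placeholders, not results from the paper.

```python
import numpy as np

# 4x4 homogeneous LiDAR-to-camera extrinsic (rotation + translation).
# Identity rotation and a small translation are placeholders.
T_cam_lidar = np.eye(4)
T_cam_lidar[:3, 3] = [0.05, -0.10, 0.02]

# Pinhole camera intrinsics (fx, fy, cx, cy are illustrative).
K = np.array([[720.0,   0.0, 640.0],
              [  0.0, 720.0, 360.0],
              [  0.0,   0.0,   1.0]])

def project_lidar_to_image(points_lidar):
    """Project Nx3 LiDAR points into pixel coordinates."""
    n = points_lidar.shape[0]
    pts_h = np.hstack([points_lidar, np.ones((n, 1))])   # homogeneous coordinates
    pts_cam = (T_cam_lidar @ pts_h.T).T[:, :3]            # transform into camera frame
    in_front = pts_cam[:, 2] > 0                          # keep points ahead of the camera
    uv = (K @ pts_cam[in_front].T).T
    return uv[:, :2] / uv[:, 2:3], in_front                # normalize by depth

pixels, mask = project_lidar_to_image(np.array([[5.0, 1.0, 0.5],
                                                [4.0, -2.0, 1.0]]))
print(pixels)
```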
A General Optimization-based Framework for Global Pose Estimation with Multiple Sensors
We highlight that our system is a general framework, which can easily fuse various global sensors in a unified pose graph optimization.
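As a toy sketch of this kind of unified formulation (not the authors' implementation): relative odometry constraints between consecutive poses and absolute constraints from a global sensor such as GPS are stacked into one nonlinear least-squares problem, here in 2D with made-up data.

```python
import numpy as np
from scipy.optimize import least_squares

# Toy 2D pose graph: 4 positions along a line. Odometry gives relative
# displacements (drifting), GPS gives noisy absolute positions.
odom = np.array([[1.0, 0.0], [1.1, 0.0], [0.9, 0.1]])               # relative steps
gps = np.array([[0.0, 0.0], [1.0, 0.0], [2.0, 0.0], [3.0, 0.0]])     # absolute fixes

def residuals(x):
    poses = x.reshape(-1, 2)
    r = []
    # Local (relative) factors from odometry.
    for i, d in enumerate(odom):
        r.append((poses[i + 1] - poses[i] - d) / 0.05)   # odometry sigma = 5 cm
    # Global (absolute) factors from GPS.
    for i, g in enumerate(gps):
        r.append((poses[i] - g) / 0.5)                   # GPS sigma = 50 cm
    return np.concatenate(r)

x0 = np.zeros(8)                       # initial guess: all poses at the origin
sol = least_squares(residuals, x0)
print(sol.x.reshape(-1, 2))            # fused, globally consistent trajectory
```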
LiDARTag: A Real-Time Fiducial Tag System for Point Clouds
Because of the nature of LiDAR sensors, rapidly changing ambient lighting will not affect the detection of a LiDARTag; hence, the proposed fiducial marker can operate in a completely dark environment.
PointPainting: Sequential Fusion for 3D Object Detection
Surprisingly, lidar-only methods outperform fusion methods on the main benchmark datasets, suggesting a gap in the literature.
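The sequential fusion the title refers to can be sketched roughly as follows: each lidar point is projected into the output of an image semantic-segmentation network and the per-class scores are appended ("painted") onto the point before it is passed to a lidar-based detector. The projection function and array shapes below are assumptions for illustration.

```python
import numpy as np

def paint_points(points_lidar, seg_scores, project_fn):
    """Append per-pixel semantic class scores to each lidar point.

    points_lidar : (N, 4) array of x, y, z, intensity
    seg_scores   : (H, W, C) per-pixel class scores from an image segmentation net
    project_fn   : maps (N, 3) lidar points to (N, 2) pixel coordinates
                   (assumed to exist, e.g., built from the camera calibration)
    """
    h, w, c = seg_scores.shape
    uv = project_fn(points_lidar[:, :3]).astype(int)
    # Keep only points that land inside the image.
    valid = (uv[:, 0] >= 0) & (uv[:, 0] < w) & (uv[:, 1] >= 0) & (uv[:, 1] < h)
    scores = np.zeros((points_lidar.shape[0], c))
    scores[valid] = seg_scores[uv[valid, 1], uv[valid, 0]]
    # "Painted" points: original lidar features plus class scores,
    # ready to feed into any point-cloud 3D detector.
    return np.hstack([points_lidar, scores])
```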
PointFusion: Deep Sensor Fusion for 3D Bounding Box Estimation
We present PointFusion, a generic 3D object detection method that leverages both image and 3D point cloud information.
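A rough sketch of this style of feature-level fusion, with layer sizes and output parameterization chosen for illustration rather than taken from the paper: a global image feature is broadcast onto per-point features and a small MLP regresses 3D box corner offsets per point.

```python
import torch
import torch.nn as nn

class FusionHead(nn.Module):
    """Concatenate per-point features with a global image feature and
    regress 3D box corner offsets per point (illustrative sizes only)."""
    def __init__(self, point_dim=64, image_dim=128, out_dim=24):  # 8 corners * xyz
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(point_dim + image_dim, 128), nn.ReLU(),
            nn.Linear(128, 128), nn.ReLU(),
            nn.Linear(128, out_dim),
        )

    def forward(self, point_feats, image_feat):
        # point_feats: (B, N, point_dim), image_feat: (B, image_dim)
        n = point_feats.shape[1]
        fused = torch.cat(
            [point_feats, image_feat.unsqueeze(1).expand(-1, n, -1)], dim=-1)
        return self.mlp(fused)   # (B, N, out_dim) per-point box predictions

head = FusionHead()
boxes = head(torch.randn(2, 100, 64), torch.randn(2, 128))
print(boxes.shape)  # torch.Size([2, 100, 24])
```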
Multi-Resolution Multi-Modal Sensor Fusion For Remote Sensing Data With Label Uncertainty
It is valuable to fuse outputs from multiple sensors to boost overall performance.
MonoLayout: Amodal scene layout from a single image
We dub this problem amodal scene layout estimation, which involves "hallucinating" scene layout for even parts of the world that are occluded in the image.
CenterFusion: Center-based Radar and Camera Fusion for 3D Object Detection
In this paper, we focus on the problem of radar and camera sensor fusion and propose a middle-fusion approach to exploit both radar and camera data for 3D object detection.
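A simplified sketch of middle fusion in this spirit: radar detections are rendered into extra image-aligned feature channels (here depth and radial velocity) and concatenated with the camera feature map before the detection heads. The association and projection steps are heavily simplified assumptions, not the paper's frustum-based pipeline.

```python
import numpy as np

def build_radar_channels(radar_points, feat_hw, project_fn):
    """Render radar depth and radial velocity into image-aligned feature channels.

    radar_points : (M, 4) array of x, y, z, radial_velocity in camera coordinates
    feat_hw      : (H, W) spatial size of the image feature map
    project_fn   : maps (M, 3) 3D points to (M, 2) feature-map coordinates (assumed)
    """
    h, w = feat_hw
    channels = np.zeros((2, h, w), dtype=np.float32)   # [depth, radial velocity]
    uv = project_fn(radar_points[:, :3]).astype(int)
    for (u, v), pt in zip(uv, radar_points):
        if 0 <= u < w and 0 <= v < h:
            channels[0, v, u] = pt[2]                   # depth
            channels[1, v, u] = pt[3]                   # radial velocity
    return channels

# Middle fusion: concatenate radar channels with the camera feature map
# (here a random placeholder) along the channel axis.
image_feats = np.random.rand(64, 112, 200).astype(np.float32)
radar_feats = build_radar_channels(np.array([[2.0, 0.5, 15.0, -3.2]]),
                                   (112, 200), lambda p: p[:, :2] * 10)
fused = np.concatenate([image_feats, radar_feats], axis=0)   # (66, 112, 200)
print(fused.shape)
```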
EagerMOT: 3D Multi-Object Tracking via Sensor Fusion
Multi-object tracking (MOT) enables mobile robots to perform well-informed motion planning and navigation by localizing surrounding objects in 3D space and time.
R3LIVE: A Robust, Real-time, RGB-colored, LiDAR-Inertial-Visual tightly-coupled state Estimation and mapping package
Moreover, to make R3LIVE more extensible, we develop a series of offline utilities for reconstructing and texturing meshes, which further minimizes the gap between R3LIVE and various 3D applications such as simulators and video games (see our demo video).