Novel LiDAR View Synthesis
3 papers with code • 0 benchmarks • 0 datasets
Synthesize a novel frame of LiDAR point clouds from an arbitrary LiDAR sensor pose, given source point clouds and their corresponding sensor poses. For dynamic scenes, the task also covers synthesis at novel viewpoints and novel time steps (spatial and temporal view synthesis).
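As a point of reference, the task can be approximated without any learning by reprojection: aggregate the source point clouds in world coordinates, transform them into the target sensor frame, and rasterize a range image. Below is a minimal sketch of such a baseline; the function name and the 64x1024 spherical sensor geometry are illustrative assumptions, not part of any paper listed here.

import numpy as np

def points_to_range_image(points_world, target_pose, h=64, w=1024,
                          fov_up=np.deg2rad(2.0), fov_down=np.deg2rad(-24.8)):
    """Rasterize world-frame points into a range image at `target_pose`.

    points_world: (N, 3) aggregated source points in world coordinates.
    target_pose:  (4, 4) world-from-sensor transform of the novel view.
    """
    # Transform points into the target sensor frame.
    sensor_from_world = np.linalg.inv(target_pose)
    pts_h = np.concatenate([points_world, np.ones((len(points_world), 1))], axis=1)
    pts = (sensor_from_world @ pts_h.T).T[:, :3]

    # Spherical projection: range, azimuth, elevation per point.
    r = np.linalg.norm(pts, axis=1)
    azimuth = np.arctan2(pts[:, 1], pts[:, 0])
    elevation = np.arcsin(pts[:, 2] / np.maximum(r, 1e-8))

    # Map angles onto the h x w pixel grid.
    u = ((azimuth + np.pi) / (2 * np.pi) * w).astype(int) % w
    v = ((fov_up - elevation) / (fov_up - fov_down) * h).astype(int)

    # Keep points inside the vertical field of view; z-buffer by writing
    # far points first so the nearest return wins in each pixel.
    valid = (v >= 0) & (v < h)
    order = np.argsort(r[valid])[::-1]
    vv, uu, rr = v[valid][order], u[valid][order], r[valid][order]
    range_img = np.zeros((h, w))  # 0 marks pixels with no return
    range_img[vv, uu] = rr
    return range_img

Such reprojection leaves holes and cannot model occlusion changes or ray drop; closing that gap is what the neural-field methods below aim at.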
Benchmarks
These leaderboards are used to track progress in Novel LiDAR View Synthesis. No benchmark results have been reported for this task yet.
Most implemented papers
LiDAR-NeRF: Novel LiDAR View Synthesis via Neural Radiance Fields
We address this challenge by formulating, to the best of our knowledge, the first differentiable end-to-end LiDAR rendering framework, LiDAR-NeRF, leveraging a neural radiance field (NeRF) to facilitate the joint learning of geometry and the attributes of 3D points.
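The rendering step such a framework needs is NeRF-style volume rendering adapted to laser rays: march samples along each ray, query a field for density and point attributes such as intensity, and composite an expected range differentiably. Below is a minimal sketch of that step; the tiny MLP is a placeholder standing in for LiDAR-NeRF's actual architecture.

import torch

field = torch.nn.Sequential(  # placeholder field: (x, y, z) -> (density, intensity)
    torch.nn.Linear(3, 64), torch.nn.ReLU(), torch.nn.Linear(64, 2))

def render_range(rays_o, rays_d, near=1.0, far=80.0, n_samples=64):
    """rays_o, rays_d: (R, 3) ray origins and unit directions."""
    t = torch.linspace(near, far, n_samples)                          # (S,)
    pts = rays_o[:, None, :] + t[None, :, None] * rays_d[:, None, :]  # (R, S, 3)
    out = field(pts)                                                  # (R, S, 2)
    sigma = torch.relu(out[..., 0])               # non-negative volume density
    intensity = torch.sigmoid(out[..., 1])        # per-sample intensity
    delta = t[1] - t[0]                           # uniform step size
    alpha = 1.0 - torch.exp(-sigma * delta)       # per-sample opacity
    trans = torch.cumprod(                        # transmittance before each sample
        torch.cat([torch.ones_like(alpha[:, :1]), 1.0 - alpha + 1e-10], dim=1),
        dim=1)[:, :-1]
    weights = alpha * trans                                           # (R, S)
    exp_range = (weights * t[None, :]).sum(dim=1)      # expected depth per ray
    exp_intensity = (weights * intensity).sum(dim=1)   # expected intensity
    return exp_range, exp_intensity

Supervising the expected range and intensity against the measured source scans is what makes such a pipeline end-to-end differentiable.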
PC-NeRF: Parent-Child Neural Radiance Fields Using Sparse LiDAR Frames in Autonomous Driving Environments
Extensive experiments show that PC-NeRF achieves high-precision novel LiDAR view synthesis and 3D reconstruction in large-scale scenes.
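The title's parent-child structure points to a hierarchical scene decomposition: a coarse parent field covering a large block, with child fields refining smaller segments. The sketch below illustrates that general idea only; the cell routing, module sizes, and additive refinement are illustrative guesses and do not reproduce PC-NeRF's actual partitioning or training losses.

import torch

class HierarchicalField(torch.nn.Module):
    """Coarse parent field plus per-cell child fields (illustrative only)."""
    def __init__(self, scene_min, scene_max, grid=2):
        super().__init__()
        self.register_buffer("scene_min", torch.tensor(scene_min, dtype=torch.float32))
        self.register_buffer("scene_max", torch.tensor(scene_max, dtype=torch.float32))
        self.grid = grid
        make = lambda: torch.nn.Sequential(torch.nn.Linear(3, 32),
                                           torch.nn.ReLU(),
                                           torch.nn.Linear(32, 1))
        self.parent = make()
        self.child_fields = torch.nn.ModuleList(make() for _ in range(grid ** 3))

    def forward(self, x):  # x: (N, 3) world-frame query points
        # Normalize into [0, 1) and route each point to its child cell.
        u = (x - self.scene_min) / (self.scene_max - self.scene_min)
        cell = (u.clamp(0, 1 - 1e-6) * self.grid).long()
        idx = (cell[:, 0] * self.grid + cell[:, 1]) * self.grid + cell[:, 2]
        coarse = self.parent(x)                # (N, 1) coarse density
        out = coarse.clone()
        for i, child in enumerate(self.child_fields):
            mask = idx == i
            if mask.any():                     # child refines the parent's output
                out[mask] = coarse[mask] + child(x[mask])
        return out

# Example: query densities inside a 100 m x 100 m x 10 m block.
field = HierarchicalField([-50.0, -50.0, -5.0], [50.0, 50.0, 5.0])
density = field(torch.randn(8, 3))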
LiDAR4D: Dynamic Neural Fields for Novel Space-time View LiDAR Synthesis
In light of this, we propose LiDAR4D, a differentiable LiDAR-only framework for novel space-time LiDAR view synthesis.
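Handling dynamic scenes amounts to conditioning the field on time as well as position, so that both the sensor pose and the query timestamp can be novel. Below is a minimal hedged sketch of such a 4D query, using a plain xyzt MLP rather than LiDAR4D's actual representation; it plugs into the same ray-marching loop sketched above for LiDAR-NeRF.

import torch

field_4d = torch.nn.Sequential(  # (x, y, z, t) -> (density, intensity)
    torch.nn.Linear(4, 64), torch.nn.ReLU(), torch.nn.Linear(64, 2))

def query_4d(points, timestamps):
    """points: (N, 3) sample locations; timestamps: (N,) normalized times."""
    xyzt = torch.cat([points, timestamps[:, None]], dim=1)  # (N, 4)
    out = field_4d(xyzt)
    density = torch.relu(out[:, 0])        # non-negative volume density
    intensity = torch.sigmoid(out[:, 1])   # LiDAR return intensity in [0, 1]
    return density, intensity

# Example: the same locations queried at two different timestamps.
pts = torch.randn(5, 3)
d0, _ = query_4d(pts, torch.zeros(5))
d1, _ = query_4d(pts, torch.ones(5))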