Autonomous Driving
1415 papers with code • 4 benchmarks • 66 datasets
Autonomous driving is the task of driving a vehicle without human intervention.
Many of the state-of-the-art results can be found at more general task pages such as 3D Object Detection and Semantic Segmentation.
(Image credit: Exploring the Limitations of Behavior Cloning for Autonomous Driving)
Libraries
Use these libraries to find Autonomous Driving models and implementations.
Most implemented papers
YOLOX: Exceeding YOLO Series in 2021
In this report, we present several improvements to the YOLO series, forming a new high-performance detector, YOLOX.
PointPillars: Fast Encoders for Object Detection from Point Clouds
These benchmarks suggest that PointPillars is an appropriate encoding for object detection in point clouds.
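PointPillars encodes a lidar point cloud by binning points into vertical columns ("pillars") on an x-y grid and pooling per-pillar features into a dense pseudo-image that a standard 2D CNN can consume. A minimal sketch of that idea, with hypothetical grid and range parameters and simple per-pillar max-pooling standing in for the paper's learned pillar feature network:

```python
import numpy as np

def pillarize(points, grid=(4, 4), x_range=(0.0, 8.0), y_range=(0.0, 8.0)):
    """Bin lidar points (N, 3) into vertical pillars on an x-y grid and
    max-pool the raw point features per pillar into a dense pseudo-image.
    (Illustrative only: the paper uses a learned PointNet-style encoder.)"""
    gx, gy = grid
    # Map each point's x-y position to a pillar index, clipped to the grid.
    xs = np.clip(((points[:, 0] - x_range[0]) / (x_range[1] - x_range[0]) * gx).astype(int), 0, gx - 1)
    ys = np.clip(((points[:, 1] - y_range[0]) / (y_range[1] - y_range[0]) * gy).astype(int), 0, gy - 1)
    pseudo_image = np.zeros((gx, gy, points.shape[1]))
    for i, j, p in zip(xs, ys, points):
        pseudo_image[i, j] = np.maximum(pseudo_image[i, j], p)  # per-pillar max-pool
    return pseudo_image
```

The resulting `(gx, gy, C)` tensor is what makes the encoding fast: after this step, detection reduces to 2D convolutions over a bird's-eye-view image.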
MultiNet: Real-time Joint Semantic Reasoning for Autonomous Driving
While most approaches to semantic reasoning have focused on improving accuracy, in this paper we argue that computational time is just as important for enabling real-time applications such as autonomous driving.
nuScenes: A multimodal dataset for autonomous driving
Most autonomous vehicles, however, carry a combination of cameras and range sensors such as lidar and radar.
SqueezeDet: Unified, Small, Low Power Fully Convolutional Neural Networks for Real-Time Object Detection for Autonomous Driving
In addition to requiring high accuracy to ensure safety, object detection for autonomous driving also requires real-time inference speed to guarantee prompt vehicle control, as well as small model size and energy efficiency to enable embedded system deployment.
Complex-YOLO: Real-time 3D Object Detection on Point Clouds
We introduce Complex-YOLO, a state-of-the-art real-time 3D object detection network that operates on point clouds only.
Key Points Estimation and Point Instance Segmentation Approach for Lane Detection
In the case of traffic line detection, an essential perception module, many conditions should be considered, such as the number of traffic lines and the computing power of the target system.
The Double Sphere Camera Model
We evaluate the model using a calibration dataset with several different lenses and compare the models using metrics relevant for visual odometry, i.e., reprojection error, as well as computation time for the projection and unprojection functions and their Jacobians.
Learning by Cheating
We first train an agent that has access to privileged information.
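The "cheating" refers to a two-stage scheme: a teacher policy is first trained with privileged ground-truth state, and a sensorimotor student is then trained to imitate the teacher from raw observations only. A toy sketch of that structure, using least-squares fits as a hypothetical stand-in for both training stages (all data and weights below are synthetic):

```python
import numpy as np

def train_linear(inputs, targets):
    """Least-squares fit: a toy stand-in for supervised policy training."""
    w, *_ = np.linalg.lstsq(inputs, targets, rcond=None)
    return w

rng = np.random.default_rng(0)
state = rng.normal(size=(100, 4))                   # privileged ground-truth state
true_w = np.array([1.0, -0.5, 0.2, 0.0])            # synthetic expert policy
actions = state @ true_w                            # expert driving actions

# Stage 1: "cheat" by training the teacher directly on privileged state.
teacher_w = train_linear(state, actions)

# Stage 2: the student sees only noisy observations and imitates the teacher.
obs = state + 0.01 * rng.normal(size=state.shape)
student_w = train_linear(obs, state @ teacher_w)
```

The point of the decomposition is that stage 1 isolates the decision-making problem (easy with ground truth) from stage 2's perception problem (imitating a known-good target), which the paper argues is easier than learning both end to end.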
Explaining How a Deep Neural Network Trained with End-to-End Learning Steers a Car
This eliminates the need for human engineers to anticipate what is important in an image and foresee all the necessary rules for safe driving.