Out of Distribution (OOD) Detection
231 papers with code • 3 benchmarks • 8 datasets
Out of Distribution (OOD) Detection is the task of detecting instances that do not belong to the distribution the classifier has been trained on. OOD data is often referred to as "unseen" data, as the model has not encountered it during training.
OOD detection is typically performed by training a model to distinguish between in-distribution (ID) data, which the model has seen during training, and OOD data, which it has not seen. This can be done using a variety of techniques, such as training a separate OOD detector, or modifying the model's architecture or loss function to make it more sensitive to OOD data.
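A common post-hoc baseline along these lines is maximum softmax probability (MSP): the model's softmax confidence is used directly as an in-distribution score, and inputs scoring below a threshold are flagged as OOD. Below is a minimal NumPy sketch; the function names, example logits, and the 0.5 threshold are illustrative assumptions, not part of any specific library.

```python
import numpy as np

def softmax(logits):
    # Numerically stable softmax over the last axis.
    z = logits - logits.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def msp_score(logits):
    # Maximum softmax probability: higher means more in-distribution.
    return softmax(logits).max(axis=-1)

def is_ood(logits, threshold=0.5):
    # Flag inputs whose top softmax probability falls below the threshold.
    # The threshold is a hypothetical value; in practice it is tuned on
    # held-out ID data (e.g., for a target true-positive rate).
    return msp_score(logits) < threshold

# Illustrative logits: a confident ID prediction vs. a near-uniform OOD one.
id_logits = np.array([[8.0, 0.5, 0.2]])
ood_logits = np.array([[1.1, 1.0, 0.9]])
```

Confident ID inputs yield a top softmax probability near 1, while near-uniform logits yield roughly 1/num_classes, so the two separate cleanly under the threshold.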
Libraries
Use these libraries to find Out of Distribution (OOD) Detection models and implementations.

Most implemented papers
Deep Anomaly Detection with Outlier Exposure
We also analyze the flexibility and robustness of Outlier Exposure, and identify characteristics of the auxiliary dataset that improve performance.
Detecting Out-of-Distribution Examples with In-distribution Examples and Gram Matrices
We find that characterizing activity patterns by Gram matrices and identifying anomalies in Gram matrix values can yield high OOD detection rates.
Likelihood Ratios for Out-of-Distribution Detection
We propose a likelihood ratio method for deep generative models which effectively corrects for these confounding background statistics.
Improved Contrastive Divergence Training of Energy Based Models
Contrastive divergence is a popular method of training energy-based models, but is known to have difficulties with training stability.
Hierarchical VAEs Know What They Don't Know
Deep generative models have been demonstrated as state-of-the-art density estimators.
SSD: A Unified Framework for Self-Supervised Outlier Detection
We demonstrate that SSD outperforms most existing detectors based on unlabeled data by a large margin.
A Simple Fix to Mahalanobis Distance for Improving Near-OOD Detection
Mahalanobis distance (MD) is a simple and popular post-processing method for detecting out-of-distribution (OOD) inputs in neural networks.
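The Mahalanobis-distance approach fits class-conditional Gaussians (per-class means with a shared covariance) on in-distribution features, then scores a test input by its distance to the nearest class mean. A minimal NumPy sketch, with hypothetical function names and toy data; the paper's "simple fix" (using relative Mahalanobis distance) is not shown here:

```python
import numpy as np

def fit_gaussians(features, labels):
    # Fit per-class means and a shared (tied) covariance on ID features.
    classes = np.unique(labels)
    means = {c: features[labels == c].mean(axis=0) for c in classes}
    centered = np.vstack([features[labels == c] - means[c] for c in classes])
    cov = centered.T @ centered / len(features)
    # Small ridge term keeps the covariance invertible.
    precision = np.linalg.inv(cov + 1e-6 * np.eye(cov.shape[0]))
    return means, precision

def mahalanobis_score(x, means, precision):
    # Negative squared distance to the closest class mean:
    # higher score means more in-distribution.
    dists = [(x - mu) @ precision @ (x - mu) for mu in means.values()]
    return -min(dists)
```

Applied to features from a trained network's penultimate layer, ID inputs land near a class mean and score high, while OOD inputs fall far from all means and score low.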
Generalized Out-of-Distribution Detection: A Survey
In this survey, we first present a unified framework called generalized OOD detection, which encompasses the five aforementioned problems, i.e., anomaly detection (AD), novelty detection (ND), open set recognition (OSR), OOD detection, and outlier detection (OD).
MUAD: Multiple Uncertainties for Autonomous Driving, a benchmark for multiple uncertainty types and tasks
However, disentangling the different types and sources of uncertainty is non-trivial for most datasets, especially since there is no ground truth for uncertainty.
Training a Helpful and Harmless Assistant with Reinforcement Learning from Human Feedback
We apply preference modeling and reinforcement learning from human feedback (RLHF) to finetune language models to act as helpful and harmless assistants.