Out-of-Distribution Detection
326 papers with code • 50 benchmarks • 22 datasets
The task is to detect out-of-distribution (OOD) or anomalous examples, i.e. test inputs drawn from a distribution different from the one the model was trained on.
Most implemented papers
CutMix: Regularization Strategy to Train Strong Classifiers with Localizable Features
Regional dropout strategies have been proposed to enhance the performance of convolutional neural network classifiers.
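CutMix is one such strategy: it pastes a random rectangular patch from one image into another and mixes the labels in proportion to the patch area. A minimal NumPy sketch (function and parameter names are illustrative, not from the paper's code):

```python
import numpy as np

def cutmix(img_a, img_b, label_a, label_b, alpha=1.0, rng=None):
    """Mix a random rectangular patch of img_b into img_a.

    Labels are mixed in proportion to the patch area.
    Shapes: img_* are (H, W, C); label_* are one-hot vectors.
    """
    rng = rng or np.random.default_rng()
    h, w = img_a.shape[:2]
    lam = rng.beta(alpha, alpha)            # mixing ratio ~ Beta(alpha, alpha)
    # Box with area (1 - lam) * H * W, centred at a random point.
    cut_h, cut_w = int(h * np.sqrt(1 - lam)), int(w * np.sqrt(1 - lam))
    cy, cx = rng.integers(h), rng.integers(w)
    y1, y2 = max(cy - cut_h // 2, 0), min(cy + cut_h // 2, h)
    x1, x2 = max(cx - cut_w // 2, 0), min(cx + cut_w // 2, w)
    mixed = img_a.copy()
    mixed[y1:y2, x1:x2] = img_b[y1:y2, x1:x2]
    # Adjust lambda to the actual (clipped) patch area.
    lam = 1 - (y2 - y1) * (x2 - x1) / (h * w)
    return mixed, lam * label_a + (1 - lam) * label_b
```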
A Baseline for Detecting Misclassified and Out-of-Distribution Examples in Neural Networks
We consider the two related problems of detecting if an example is misclassified or out-of-distribution.
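The baseline scores each input by its maximum softmax probability (MSP) and flags low-scoring inputs as likely misclassified or OOD. A NumPy sketch (the threshold value here is a placeholder; in practice it is tuned on validation data):

```python
import numpy as np

def msp_score(logits):
    """Maximum softmax probability: higher = more in-distribution."""
    z = logits - logits.max(axis=-1, keepdims=True)   # numerical stability
    probs = np.exp(z) / np.exp(z).sum(axis=-1, keepdims=True)
    return probs.max(axis=-1)

def is_ood(logits, threshold=0.75):
    # threshold is an illustrative placeholder, not a value from the paper
    return msp_score(logits) < threshold
```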
Deep Anomaly Detection with Outlier Exposure
We also analyze the flexibility and robustness of Outlier Exposure, and identify characteristics of the auxiliary dataset that improve performance.
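Outlier Exposure fine-tunes the classifier on an auxiliary outlier dataset, adding a term that pushes predictions on outliers toward the uniform distribution. A NumPy sketch of such a loss (the weight `lam` and helper names are illustrative assumptions):

```python
import numpy as np

def log_softmax(logits):
    z = logits - logits.max(axis=-1, keepdims=True)
    return z - np.log(np.exp(z).sum(axis=-1, keepdims=True))

def oe_loss(in_logits, in_labels, out_logits, lam=0.5):
    """Cross-entropy on in-distribution data plus a term pushing
    predictions on auxiliary outliers toward the uniform distribution."""
    ls_in = log_softmax(in_logits)
    ce = -ls_in[np.arange(len(in_labels)), in_labels].mean()
    # Cross-entropy between softmax(out_logits) and the uniform distribution.
    uniform_ce = -log_softmax(out_logits).mean(axis=-1).mean()
    return ce + lam * uniform_ce
```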
Enhancing The Reliability of Out-of-distribution Image Detection in Neural Networks
We show in a series of experiments that ODIN is compatible with diverse network architectures and datasets.
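ODIN combines temperature scaling of the softmax with a small gradient-based input perturbation. The perturbation step needs autodiff, so this NumPy sketch shows only the temperature-scaled score:

```python
import numpy as np

def odin_score(logits, temperature=1000.0):
    """Temperature-scaled maximum softmax probability.

    Full ODIN also perturbs the input along the gradient of the scaled
    softmax score; that step needs autodiff and is omitted here.
    """
    z = logits / temperature
    z = z - z.max(axis=-1, keepdims=True)
    probs = np.exp(z) / np.exp(z).sum(axis=-1, keepdims=True)
    return probs.max(axis=-1)
```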
Detecting Out-of-Distribution Examples with In-distribution Examples and Gram Matrices
We find that characterizing activity patterns by Gram matrices and identifying anomalies in Gram matrix values can yield high OOD detection rates.
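One way to read the idea: record elementwise min/max bounds on Gram matrix entries over the training data, then score a test input by how far its Gram entries fall outside those bounds. A simplified single-layer NumPy sketch (the paper aggregates deviations across many layers and higher-order Gram matrices):

```python
import numpy as np

def gram(features):
    """Gram matrix F F^T of a (channels, H*W) feature map."""
    f = features.reshape(features.shape[0], -1)
    return f @ f.T

def fit_bounds(train_feature_maps):
    """Elementwise min/max of Gram entries over the training set."""
    grams = np.stack([gram(f) for f in train_feature_maps])
    return grams.min(axis=0), grams.max(axis=0)

def deviation(features, lo, hi, eps=1e-8):
    """Total normalized amount by which Gram entries leave the training range."""
    g = gram(features)
    below = np.clip(lo - g, 0, None) / (np.abs(lo) + eps)
    above = np.clip(g - hi, 0, None) / (np.abs(hi) + eps)
    return (below + above).sum()
```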
Energy-based Out-of-distribution Detection
We propose a unified framework for OOD detection that uses an energy score.
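The energy score is E(x) = -T · logsumexp(f(x)/T) over the logits f(x); in-distribution inputs tend to get lower energy than OOD inputs. A NumPy sketch:

```python
import numpy as np

def energy_score(logits, temperature=1.0):
    """E(x) = -T * logsumexp(f(x) / T); lower energy = more in-distribution."""
    z = logits / temperature
    m = z.max(axis=-1, keepdims=True)      # stable logsumexp
    lse = m + np.log(np.exp(z - m).sum(axis=-1, keepdims=True))
    return -temperature * lse.squeeze(-1)
```

Inputs whose energy exceeds a validation-tuned threshold are flagged as OOD.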
Learning Confidence for Out-of-Distribution Detection in Neural Networks
Modern neural networks are very powerful predictive models, but they are often incapable of recognizing when their predictions may be wrong.
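In this approach the network outputs a confidence estimate c alongside its prediction: during training, predictions are interpolated toward the label by (1 - c) while low confidence is penalized, and at test time c itself serves as the OOD score. A simplified NumPy sketch of one formulation of this loss (`lam` is an illustrative weight):

```python
import numpy as np

def confidence_loss(probs, confidence, one_hot, lam=0.1, eps=1e-12):
    """Interpolate predictions toward the label by (1 - c), then
    penalise low confidence so the network cannot always 'ask for hints'."""
    hinted = confidence * probs + (1 - confidence) * one_hot
    task = -(one_hot * np.log(hinted + eps)).sum(axis=-1).mean()
    penalty = -np.log(confidence + eps).mean()
    return task + lam * penalty
```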
A Simple Unified Framework for Detecting Out-of-Distribution Samples and Adversarial Attacks
Detecting test samples drawn sufficiently far away from the training distribution, statistically or adversarially, is a fundamental requirement for deploying a good classifier in many real-world machine learning applications.
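The framework fits class-conditional Gaussians with a shared (tied) covariance to the network's features and scores an input by its Mahalanobis distance to the nearest class mean. A NumPy sketch on raw feature vectors (the paper additionally ensembles over layers and applies input perturbation):

```python
import numpy as np

def fit_gaussians(feats, labels):
    """Class means and the inverse of a shared covariance over features."""
    classes = np.unique(labels)
    means = np.stack([feats[labels == c].mean(axis=0) for c in classes])
    centred = feats - means[np.searchsorted(classes, labels)]
    cov = centred.T @ centred / len(feats)
    return means, np.linalg.inv(cov + 1e-6 * np.eye(cov.shape[0]))

def mahalanobis_score(x, means, prec):
    """Negative distance to the nearest class mean; higher = in-distribution."""
    d = x - means                           # (n_classes, dim)
    return -np.min(np.einsum('cd,de,ce->c', d, prec, d))
```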
Likelihood Ratios for Out-of-Distribution Detection
We propose a likelihood ratio method for deep generative models which effectively corrects for these confounding background statistics.
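The score is the log-likelihood under the full model minus the log-likelihood under a background model trained on perturbed inputs, so shared background statistics cancel. A toy NumPy sketch with independent Bernoulli pixel models standing in for the deep generative models:

```python
import numpy as np

def log_likelihood(x, pixel_probs, eps=1e-9):
    """Log-likelihood of a binary image under independent Bernoulli pixels."""
    p = np.clip(pixel_probs, eps, 1 - eps)
    return (x * np.log(p) + (1 - x) * np.log(1 - p)).sum()

def llr_score(x, full_probs, background_probs):
    """Likelihood ratio: subtracting the background model's log-likelihood
    cancels statistics shared by in- and out-of-distribution inputs."""
    return log_likelihood(x, full_probs) - log_likelihood(x, background_probs)
```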
Probabilistic Autoencoder
The PAE is fast and easy to train and achieves small reconstruction errors, high sample quality, and good performance in downstream tasks.
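Among those downstream tasks is OOD detection via reconstruction error. As a heavily simplified illustration of reconstruction-error scoring (the actual PAE is a deep autoencoder with a normalizing flow over its latent space, and also uses latent-space likelihood), a linear autoencoder built from PCA:

```python
import numpy as np

def fit_linear_ae(train, n_components=2):
    """PCA as a minimal linear autoencoder: encoder/decoder share the
    top principal components. Purely illustrative, not the PAE itself."""
    mean = train.mean(axis=0)
    _, _, vt = np.linalg.svd(train - mean, full_matrices=False)
    return mean, vt[:n_components]

def recon_error(x, mean, components):
    """Squared reconstruction error; larger = more likely OOD."""
    z = (x - mean) @ components.T          # encode
    x_hat = z @ components + mean          # decode
    return np.sum((x - x_hat) ** 2)
```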