Learning with noisy labels
121 papers with code • 18 benchmarks • 13 datasets
Learning with noisy labels is the problem of training a classifier when some labels have been corrupted: "noisy labels" are labels that an adversary (or a noisy annotation process) has altered, and that would otherwise have come from a "clean" distribution. Learning from only positive and unlabeled data can also be cast in this setting.
Most implemented papers
Sharpness-Aware Minimization for Efficiently Improving Generalization
In today's heavily overparameterized models, the value of the training loss provides few guarantees on model generalization ability.
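The core of SAM is a two-step update: first ascend to an approximate worst-case point within a small neighborhood of the current weights, then descend using the gradient taken there. A minimal numpy sketch, where `grad_fn`, the learning rate, and the radius `rho` are illustrative assumptions:

```python
import numpy as np

def sam_step(w, grad_fn, lr=0.1, rho=0.05):
    """One Sharpness-Aware Minimization step (minimal sketch).

    grad_fn(w) is assumed to return the gradient of the training
    loss at weights w; lr and rho are illustrative hyperparameters.
    """
    g = grad_fn(w)
    # Ascend to the approximate worst-case point within an L2 ball
    # of radius rho around w (first-order approximation).
    eps = rho * g / (np.linalg.norm(g) + 1e-12)
    # Descend using the gradient evaluated at the perturbed weights,
    # which steers optimization toward flatter minima.
    return w - lr * grad_fn(w + eps)
```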
Co-teaching: Robust Training of Deep Neural Networks with Extremely Noisy Labels
Deep learning with noisy labels is practically challenging, as the capacity of deep models is so high that they can eventually memorize the noisy labels entirely during training.
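Co-teaching counters this memorization by training two peer networks that cross-select training examples for each other: each network keeps its small-loss (likely clean) examples and hands them to its peer for the update. A minimal numpy sketch of the selection rule, with illustrative names:

```python
import numpy as np

def coteaching_select(loss_a, loss_b, keep_ratio):
    """Pick which examples each peer network trains on (sketch).

    loss_a, loss_b: per-example losses from the two networks.
    keep_ratio: fraction of small-loss examples kept; in the paper
    it shrinks over epochs toward 1 minus the assumed noise rate.
    """
    n_keep = int(keep_ratio * len(loss_a))
    # Each network ranks the batch by its own loss and keeps the
    # smallest-loss examples, which are the most likely to be clean.
    idx_a = np.argsort(loss_a)[:n_keep]
    idx_b = np.argsort(loss_b)[:n_keep]
    # Cross update: network A trains on B's selection and vice versa,
    # so each network filters label noise for the other.
    return idx_b, idx_a
```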
Generalized Cross Entropy Loss for Training Deep Neural Networks with Noisy Labels
Here, we present a theoretically grounded set of noise-robust loss functions that can be seen as a generalization of MAE and CCE.
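The generalized cross entropy loss is L_q(f(x), y) = (1 - f_y(x)^q) / q: as q approaches 0 it recovers categorical cross entropy, and q = 1 gives a scaled MAE, trading noise robustness against ease of optimization. A minimal numpy sketch, with q = 0.7 as an illustrative setting:

```python
import numpy as np

def gce_loss(probs, labels, q=0.7):
    """Generalized cross entropy L_q = (1 - p_y^q) / q (sketch).

    probs: (N, C) predicted class probabilities.
    labels: (N,) integer class labels; q = 0.7 is illustrative.
    """
    # Probability the model assigns to each example's given label.
    p_y = probs[np.arange(len(labels)), labels]
    # q -> 0 recovers cross entropy; q = 1 gives an MAE-like loss.
    return np.mean((1.0 - p_y ** q) / q)
```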
Symmetric Cross Entropy for Robust Learning with Noisy Labels
In this paper, we show that DNN learning with Cross Entropy (CE) exhibits overfitting to noisy labels on some classes ("easy" classes), but more surprisingly, it also suffers from significant under-learning on some other classes ("hard" classes).
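Symmetric cross entropy adds a reverse cross entropy (RCE) term that swaps the roles of the prediction and the one-hot label, giving SCE = alpha * CE + beta * RCE; the undefined log 0 from the one-hot label is clipped to a constant A. A minimal numpy sketch with illustrative alpha, beta, and A:

```python
import numpy as np

def sce_loss(probs, labels, alpha=0.1, beta=1.0, A=-4.0):
    """Symmetric cross entropy: alpha * CE + beta * RCE (sketch)."""
    n = len(labels)
    p_y = probs[np.arange(n), labels]
    ce = -np.log(np.clip(p_y, 1e-12, 1.0))
    # RCE = -sum_k p(k|x) log q(k|x) with one-hot q: the label term
    # contributes log 1 = 0 and every off-label term contributes A,
    # so RCE collapses to -A * (1 - p_y).
    rce = -A * (1.0 - p_y)
    return np.mean(alpha * ce + beta * rce)
```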
Confident Learning: Estimating Uncertainty in Dataset Labels
Confident learning (CL) is an alternative approach which focuses instead on label quality by characterizing and identifying label errors in datasets, based on the principles of pruning noisy data, counting with probabilistic thresholds to estimate noise, and ranking examples to train with confidence.
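In its simplest form, confident learning estimates a per-class threshold (the average self-confidence of examples given that label) and flags examples whose predicted probability for a different class clears that class's threshold. A simplified numpy sketch; the cleanlab library implements the full method, and the function name here is illustrative:

```python
import numpy as np

def flag_label_issues(pred_probs, labels):
    """Flag likely label errors via confident learning (simplified sketch).

    pred_probs: (N, C) out-of-sample predicted probabilities.
    labels: (N,) given (possibly noisy) integer labels.
    """
    n, c = pred_probs.shape
    # Per-class threshold: mean self-confidence over examples that
    # were *given* that class label.
    thresholds = np.array(
        [pred_probs[labels == j, j].mean() for j in range(c)]
    )
    best = pred_probs.argmax(axis=1)
    # An example is confidently counted as class `best` if its
    # predicted probability clears that class's threshold.
    confident = pred_probs[np.arange(n), best] >= thresholds[best]
    # Flag examples confidently belonging to a class other than the
    # one they were labeled with.
    return np.where(confident & (best != labels))[0]
```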
Normalized Loss Functions for Deep Learning with Noisy Labels
However, in practice, simply being robust is not sufficient for a loss function to train accurate DNNs.
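Normalization divides a loss by its sum over all candidate labels, e.g. normalized cross entropy NCE = CE(f(x), y) / sum_j CE(f(x), j), which the paper shows is noise-robust; because normalized losses alone can underfit, the paper pairs them with a second "active/passive" term. A minimal numpy sketch of the normalization itself:

```python
import numpy as np

def normalized_ce(probs, labels):
    """Normalized cross entropy (sketch).

    probs: (N, C) predicted class probabilities.
    labels: (N,) integer class labels.
    """
    logp = np.log(np.clip(probs, 1e-12, 1.0))
    n = len(labels)
    # Numerator: ordinary per-example CE with the given label.
    ce = -logp[np.arange(n), labels]
    # Denominator: CE summed over every candidate label, which
    # bounds the loss and yields robustness to label noise.
    return np.mean(ce / (-logp.sum(axis=1)))
```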
Open-set Label Noise Can Improve Robustness Against Inherent Label Noise
Learning with noisy labels is a practically challenging problem in weakly supervised learning.
How does Disagreement Help Generalization against Label Corruption?
Learning with noisy labels is one of the most actively studied problems in weakly-supervised learning.
Probabilistic End-to-end Noise Correction for Learning with Noisy Labels
Deep learning has achieved excellent performance in various computer vision tasks, but requires large amounts of training data with clean labels.
Making Deep Neural Networks Robust to Label Noise: a Loss Correction Approach
We present a theoretically grounded approach to train deep neural networks, including recurrent networks, subject to class-dependent label noise.
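With an estimated noise transition matrix T, where T[i][j] is the probability that true class i is observed as class j, the forward correction trains on the noisy labels after pushing the model's clean-class predictions through T. A minimal numpy sketch, assuming T is given (the paper also shows how to estimate it):

```python
import numpy as np

def forward_corrected_ce(probs, noisy_labels, T):
    """Forward loss correction (sketch).

    probs: (N, C) predicted clean-class probabilities.
    noisy_labels: (N,) observed noisy integer labels.
    T: (C, C) noise transition matrix, T[i, j] = P(observed j | true i).
    """
    # Predicted distribution over *noisy* labels: p_noisy = T^T p_clean,
    # computed row-wise as probs @ T.
    noisy_probs = probs @ T
    n = len(noisy_labels)
    p_y = noisy_probs[np.arange(n), noisy_labels]
    # Ordinary cross entropy against the noisy labels, now consistent
    # with the corrupted label distribution.
    return np.mean(-np.log(np.clip(p_y, 1e-12, 1.0)))
```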