Missing Labels
40 papers with code • 0 benchmarks • 0 datasets
The challenge in multi-label learning with missing labels is that the training data often carries incomplete label information. Annotating multi-label datasets is a manual process that frequently depends on external sources, so typically only a subset of the relevant labels is collected. The common assumption of complete label information therefore does not hold, especially when the label space is large. Naively handling the unobserved labels (for example, treating them all as negatives) can cause a model to capture inaccurate label-label and label-feature relationships, leading to suboptimal solutions.
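The setting above can be sketched with a masked binary cross-entropy that computes the loss only over observed labels. This is a minimal illustration under our own naming, not the loss of any specific paper listed below:

```python
import numpy as np

def masked_bce(logits, targets, observed):
    """Binary cross-entropy over observed labels only.

    targets: 0/1 labels; observed: boolean mask that is False where
    a label is missing. Missing entries contribute nothing to the loss.
    """
    probs = 1.0 / (1.0 + np.exp(-logits))  # sigmoid
    eps = 1e-12                            # guards log(0)
    loss = -(targets * np.log(probs + eps)
             + (1 - targets) * np.log(1 - probs + eps))
    mask = observed.astype(float)
    return (loss * mask).sum() / max(mask.sum(), 1.0)

logits = np.array([2.0, -1.0, 0.5])
targets = np.array([1.0, 0.0, 1.0])
observed = np.array([True, True, False])   # third label is missing
print(masked_bce(logits, targets, observed))
```

Many of the methods below can be read as refinements of this baseline, replacing the hard mask with soft weights or pseudo-labels for the unobserved entries.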
Benchmarks
These leaderboards are used to track progress in Missing Labels.
Most implemented papers
Multi-Label Learning from Single Positive Labels
When the number of potential labels is large, human annotators find it difficult to annotate every applicable label for each training image.
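A common baseline in this single-positive setting is to "assume negative": treat every unobserved label as a negative and apply standard binary cross-entropy. The sketch below is illustrative (the function name is ours, not the paper's), and this baseline is known to be biased, which is precisely what motivates more careful single-positive losses:

```python
import numpy as np

def assume_negative_bce(logits, positive_idx):
    """BCE when exactly one positive label is observed per example
    and all other labels are treated as negatives (a simple, biased
    baseline; illustrative, not the paper's proposed method)."""
    probs = 1.0 / (1.0 + np.exp(-logits))
    targets = np.zeros_like(logits)
    targets[positive_idx] = 1.0            # the single observed positive
    eps = 1e-12
    loss = -(targets * np.log(probs + eps)
             + (1 - targets) * np.log(1 - probs + eps))
    return loss.mean()

logits = np.array([3.0, -2.0, -2.0, 0.0])
print(assume_negative_bce(logits, positive_idx=0))
```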
Label-set Loss Functions for Partial Supervision: Application to Fetal Brain 3D MRI Parcellation
Deep neural networks have increased the accuracy of automatic segmentation; however, their accuracy depends on the availability of a large number of fully segmented images.
Simple and Robust Loss Design for Multi-Label Learning with Missing Labels
Multi-label learning in the presence of missing labels (MLML) is a challenging problem.
The Dice loss in the context of missing or empty labels: Introducing $\Phi$ and $\epsilon$
We find and propose heuristic combinations of $\Phi$ and $\epsilon$ that work in a segmentation setting with either missing or empty labels.
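The $\epsilon$ in question is the additive smoothing term of the soft Dice loss. A minimal sketch (assuming a simple global reduction; the function name and default value are illustrative) of how it keeps the loss well defined for empty labels:

```python
import numpy as np

def soft_dice_loss(pred, target, eps=1.0):
    # epsilon in both numerator and denominator keeps the loss finite
    # (and equal to 0) when prediction and target are both empty,
    # instead of the undefined 0/0 of the unsmoothed Dice loss
    inter = float((pred * target).sum())
    denom = float(pred.sum() + target.sum())
    return 1.0 - (2.0 * inter + eps) / (denom + eps)

empty = np.zeros((4, 4))
print(soft_dice_loss(empty, empty))   # empty label, empty prediction
```

With eps > 0, an empty prediction on an empty label scores a loss of exactly 0; how to choose the smoothing and reduction in the missing-label case is what the paper studies.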
DICNet: Deep Instance-Level Contrastive Network for Double Incomplete Multi-View Multi-Label Classification
To deal with the double incomplete multi-view multi-label classification problem, we propose a deep instance-level contrastive network, namely DICNet.
Towards Semi-supervised Learning with Non-random Missing Labels
Semi-supervised learning (SSL) tackles the missing-label problem by enabling effective use of unlabeled data.
Cross-Prediction-Powered Inference
We show that cross-prediction is consistently more powerful than an adaptation of prediction-powered inference in which a fraction of the labeled data is split off and used to train the model.
Generalized test utilities for long-tail performance in extreme multi-label classification
Extreme multi-label classification is characterized by long-tail labels, i.e., most labels have very few positive instances.
Max-Margin Deep Generative Models for (Semi-)Supervised Learning
Deep generative models (DGMs) are effective at learning multilayered representations of complex data and performing inference on input data by exploiting their generative ability.
Learning Deep Latent Spaces for Multi-Label Classification
Multi-label classification is a practical yet challenging task in machine learning and related fields, since it requires predicting more than one label category for each input instance.