Semi-Supervised Image Classification
124 papers with code • 58 benchmarks • 13 datasets
Semi-supervised image classification leverages unlabelled data as well as labelled data to increase classification performance.
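The common pattern behind most of the methods listed below is a supervised loss on the labelled batch plus a weighted consistency term on the unlabelled batch. A minimal PyTorch sketch of that pattern, assuming a generic `model` and a placeholder `augment` function standing in for a real augmentation pipeline:

```python
import torch
import torch.nn.functional as F

def augment(x, noise_std=0.1):
    # Stand-in for a real stochastic augmentation pipeline (crops, flips, ...).
    return x + noise_std * torch.randn_like(x)

def semi_supervised_step(model, labeled_x, labeled_y, unlabeled_x, lambda_u=1.0):
    """One generic SSL step: supervised cross-entropy on labelled data plus
    a consistency penalty between two augmented views of unlabelled data."""
    sup_loss = F.cross_entropy(model(labeled_x), labeled_y)
    p1 = F.softmax(model(augment(unlabeled_x)), dim=1)
    p2 = F.softmax(model(augment(unlabeled_x)), dim=1)
    unsup_loss = F.mse_loss(p1, p2)
    return sup_loss + lambda_u * unsup_loss
```

The individual methods below differ mainly in how they construct the unsupervised term.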
You may want to read some blog posts to get an overview before reading the papers and checking the leaderboards:
- An overview of proxy-label approaches for semi-supervised learning - Sebastian Ruder
- Semi-Supervised Learning in Computer Vision - Amit Chaudhary
(Image credit: Self-Supervised Semi-Supervised Learning)
Most implemented papers
A Simple Framework for Contrastive Learning of Visual Representations
This paper presents SimCLR: a simple framework for contrastive learning of visual representations.
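A minimal sketch of the NT-Xent loss at the core of SimCLR, assuming `z1` and `z2` are projection-head outputs for two augmented views of the same batch; names and the temperature value are illustrative:

```python
import torch
import torch.nn.functional as F

def nt_xent(z1, z2, temperature=0.5):
    """NT-Xent: each embedding's positive is its counterpart from the other
    view; all other embeddings in the batch act as negatives."""
    n = z1.size(0)
    z = F.normalize(torch.cat([z1, z2], dim=0), dim=1)  # (2N, D)
    sim = z @ z.t() / temperature                        # cosine similarities
    sim.fill_diagonal_(float('-inf'))                    # mask self-similarity
    targets = torch.cat([torch.arange(n, 2 * n), torch.arange(0, n)])
    return F.cross_entropy(sim, targets)
```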
mixup: Beyond Empirical Risk Minimization
mixup trains a neural network on convex combinations of pairs of examples and their labels; the authors also find that it reduces the memorization of corrupt labels, increases robustness to adversarial examples, and stabilizes the training of generative adversarial networks.
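A minimal sketch of the mixing operation, assuming one-hot labels; as in the paper, a single lambda drawn from Beta(alpha, alpha) mixes both inputs and targets:

```python
import numpy as np
import torch

def mixup(x, y_onehot, alpha=0.2):
    """mixup: train on convex combinations of example pairs and their labels."""
    lam = np.random.beta(alpha, alpha)
    perm = torch.randperm(x.size(0))           # random pairing within the batch
    x_mix = lam * x + (1 - lam) * x[perm]
    y_mix = lam * y_onehot + (1 - lam) * y_onehot[perm]
    return x_mix, y_mix
```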
Learning Transferable Visual Models From Natural Language Supervision
State-of-the-art computer vision systems are trained to predict a fixed set of predetermined object categories; CLIP instead pre-trains on image-text pairs with a contrastive objective, learning transferable representations that enable zero-shot transfer to downstream tasks.
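A minimal sketch of the symmetric contrastive objective described in the paper's pseudocode, assuming the image and text encoders have already produced batch embeddings for matched pairs:

```python
import torch
import torch.nn.functional as F

def clip_style_loss(image_emb, text_emb, temperature=0.07):
    """Symmetric contrastive loss: matching image-text pairs sit on the
    diagonal of the similarity matrix; all other pairs are negatives."""
    img = F.normalize(image_emb, dim=1)
    txt = F.normalize(text_emb, dim=1)
    logits = img @ txt.t() / temperature       # (N, N) pairwise similarities
    targets = torch.arange(img.size(0))
    return (F.cross_entropy(logits, targets) +
            F.cross_entropy(logits.t(), targets)) / 2
```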
Improved Techniques for Training GANs
We present a variety of new architectural features and training procedures that we apply to the generative adversarial networks (GANs) framework.
Bootstrap your own latent: A new approach to self-supervised Learning
From an augmented view of an image, we train the online network to predict the target network representation of the same image under a different augmented view.
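A minimal sketch of the two ingredients this description mentions: a normalized regression loss between the online prediction and the target projection, and an exponential-moving-average update for the target network (function names are illustrative):

```python
import torch
import torch.nn.functional as F

def byol_loss(online_pred, target_proj):
    """Online network predicts the target network's projection of a
    different augmented view of the same image."""
    p = F.normalize(online_pred, dim=1)
    z = F.normalize(target_proj, dim=1)
    return (2 - 2 * (p * z).sum(dim=1)).mean()

@torch.no_grad()
def ema_update(target_net, online_net, tau=0.996):
    """Target network weights track an exponential moving average of the
    online network; no gradients flow through the target."""
    for t_param, o_param in zip(target_net.parameters(), online_net.parameters()):
        t_param.mul_(tau).add_((1 - tau) * o_param)
```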
MixMatch: A Holistic Approach to Semi-Supervised Learning
Semi-supervised learning is a powerful paradigm for leveraging unlabeled data to mitigate the reliance on large labeled datasets; MixMatch unifies the dominant approaches by guessing low-entropy labels for augmented unlabelled examples and mixing labelled and unlabelled data using MixUp.
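A minimal sketch of the label-guessing step, assuming `unlabeled_views` is a list of K augmented versions of the same unlabelled batch; the sharpening temperature `T` follows the paper's notation:

```python
import torch

def guess_labels(model, unlabeled_views, T=0.5):
    """MixMatch-style label guessing: average predictions over K augmented
    views of each unlabelled image, then sharpen with temperature T."""
    with torch.no_grad():
        probs = torch.stack([model(v).softmax(dim=1) for v in unlabeled_views])
        avg = probs.mean(dim=0)                    # (N, C) averaged prediction
        sharpened = avg ** (1 / T)                 # lower entropy as T -> 0
        return sharpened / sharpened.sum(dim=1, keepdim=True)
```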
Representation Learning with Contrastive Predictive Coding
Contrastive Predictive Coding extracts useful representations from high-dimensional data; the key insight is to learn such representations by predicting the future in latent space using powerful autoregressive models.
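A minimal sketch of the InfoNCE objective used by CPC, assuming a prediction head has already mapped each autoregressive context into the space of the future latents, so that matching context/future pairs sit on the diagonal:

```python
import torch
import torch.nn.functional as F

def info_nce(context, future_latents, temperature=1.0):
    """InfoNCE: score each sequence's true future latent against the
    futures of the other sequences in the batch (the negatives).
    context: (N, D) predicted futures; future_latents: (N, D) true futures."""
    logits = context @ future_latents.t() / temperature  # (N, N)
    targets = torch.arange(context.size(0))              # diagonal is positive
    return F.cross_entropy(logits, targets)
```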
Improved Regularization of Convolutional Neural Networks with Cutout
Cutout is a simple regularization technique that randomly masks out square regions of the input during training, improving the robustness and overall performance of convolutional neural networks.
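A minimal sketch of the augmentation itself, assuming a single (C, H, W) image tensor; the patch centre can fall anywhere and is clipped at the borders, as in the paper:

```python
import torch

def cutout(img, size=16):
    """Cutout: zero a random square patch of the input image (C, H, W)."""
    _, h, w = img.shape
    cy, cx = torch.randint(h, (1,)).item(), torch.randint(w, (1,)).item()
    y1, y2 = max(0, cy - size // 2), min(h, cy + size // 2)
    x1, x2 = max(0, cx - size // 2), min(w, cx + size // 2)
    img = img.clone()
    img[:, y1:y2, x1:x2] = 0.0
    return img
```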
FixMatch: Simplifying Semi-Supervised Learning with Consistency and Confidence
Semi-supervised learning (SSL) provides an effective means of leveraging unlabeled data to improve a model's performance; FixMatch generates a pseudo-label from the model's prediction on a weakly augmented image, keeps it only when the prediction is confident, and trains the model to produce that label on a strongly augmented view of the same image.
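A minimal sketch of the unlabelled-data loss, assuming weakly and strongly augmented views of the same batch; the confidence threshold follows the paper's default:

```python
import torch
import torch.nn.functional as F

def fixmatch_unlabeled_loss(model, weak_x, strong_x, threshold=0.95):
    """FixMatch: pseudo-label from the weakly augmented view, retained only
    when confident; the model must predict it on the strongly augmented view."""
    with torch.no_grad():
        probs = model(weak_x).softmax(dim=1)
        conf, pseudo = probs.max(dim=1)
        mask = (conf >= threshold).float()       # drop low-confidence labels
    loss = F.cross_entropy(model(strong_x), pseudo, reduction='none')
    return (loss * mask).mean()
```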
Barlow Twins: Self-Supervised Learning via Redundancy Reduction
Barlow Twins measures the cross-correlation matrix between the embeddings of two distorted versions of a sample and pushes it toward the identity matrix; this causes the embedding vectors of distorted versions of a sample to be similar, while minimizing the redundancy between the components of these vectors.
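A minimal sketch of the redundancy-reduction objective, following the pseudocode in the paper; `lambd` weights the off-diagonal terms:

```python
import torch

def barlow_twins_loss(z1, z2, lambd=5e-3):
    """Barlow Twins: drive the cross-correlation matrix of the two views'
    embeddings toward the identity. z1, z2: (N, D) batch embeddings."""
    n, _ = z1.shape
    z1 = (z1 - z1.mean(0)) / z1.std(0)          # normalize each dimension
    z2 = (z2 - z2.mean(0)) / z2.std(0)
    c = (z1.t() @ z2) / n                        # (D, D) cross-correlation
    on_diag = (torch.diagonal(c) - 1).pow(2).sum()
    off_diag = (c - torch.diag(torch.diagonal(c))).pow(2).sum()
    return on_diag + lambd * off_diag
```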