Unsupervised Few-Shot Learning
12 papers with code • 0 benchmarks • 0 datasets
In contrast to supervised few-shot learning, unsupervised few-shot learning assumes that only unlabeled data are available during the pre-training or meta-training stage.
Benchmarks
These leaderboards track progress in Unsupervised Few-Shot Learning.
Most implemented papers
Self-Supervision Can Be a Good Few-Shot Learner
Specifically, we maximize the mutual information (MI) of instances and their representations with a low-bias MI estimator to perform self-supervised pre-training.
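The idea of maximizing the mutual information (MI) between instances and their representations can be sketched with an InfoNCE-style lower bound, a minimal assumption here since the paper's own low-bias estimator takes a different exact form; `info_nce_lower_bound` and its arguments are illustrative names:

```python
import numpy as np

def info_nce_lower_bound(z, z_aug, temperature=0.1):
    """InfoNCE-style lower bound on the MI between instances and their
    representations (hypothetical helper; the paper's low-bias estimator
    differs in its exact form)."""
    # L2-normalize both views of each instance
    z = z / np.linalg.norm(z, axis=1, keepdims=True)
    z_aug = z_aug / np.linalg.norm(z_aug, axis=1, keepdims=True)
    logits = z @ z_aug.T / temperature           # pairwise similarities
    logits -= logits.max(axis=1, keepdims=True)  # numerical stability
    log_probs = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    # positives sit on the diagonal: each instance vs. its own augmentation
    n = len(z)
    return log_probs[np.arange(n), np.arange(n)].mean() + np.log(n)
```

The bound is capped at log N for a batch of N instances and rises as representations of an instance and its augmentation become more predictive of each other.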
Self-Supervised Prototypical Transfer Learning for Few-Shot Classification
Building on these insights and on advances in self-supervised learning, we propose a transfer learning approach which constructs a metric embedding that clusters unlabeled prototypical samples and their augmentations closely together.
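A minimal sketch of this clustering objective, under the simplifying assumption that each unlabeled sample acts as the one-shot prototype of its own class and its augmentations are pulled toward it; `proto_clustering_loss` is an illustrative name, not the paper's API:

```python
import numpy as np

def proto_clustering_loss(embed_samples, embed_augs):
    """Prototypical self-supervised loss sketch: augmentations should embed
    closer to their source sample than to any other sample (assumed
    simplification of the transfer-learning objective described above)."""
    # squared Euclidean distances between augmentations and prototypes
    diff = embed_augs[:, None, :] - embed_samples[None, :, :]
    dists = (diff ** 2).sum(-1)
    logits = -dists
    logits -= logits.max(axis=1, keepdims=True)  # numerical stability
    log_probs = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    n = len(embed_samples)
    # cross-entropy against the matching prototype on the diagonal
    return -log_probs[np.arange(n), np.arange(n)].mean()
```

Minimizing this loss clusters each sample with its augmentations while pushing different samples apart, which is the metric-embedding behavior the abstract describes.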
Program synthesis performance constrained by non-linear spatial relations in Synthetic Visual Reasoning Test
Here we reconsidered the human and machine experiments, because they followed different protocols and yielded different statistics.
Rethinking Class Relations: Absolute-relative Supervised and Unsupervised Few-shot Learning
The majority of existing few-shot learning methods describe image relations with binary labels.
Diversity Helps: Unsupervised Few-shot Learning via Distribution Shift-based Data Augmentation
Importantly, we highlight the importance of distribution diversity in the augmentation-based pretext few-shot tasks, which effectively alleviates overfitting and helps the few-shot model learn more robust feature representations.
Revisiting Unsupervised Meta-Learning via the Characteristics of Few-Shot Tasks
Meta-learning has become a practical approach towards few-shot image classification, where "a strategy to learn a classifier" is meta-learned on labeled base classes and can be applied to tasks with novel classes.
Meta-GMVAE: Mixture of Gaussian VAE for Unsupervised Meta-Learning
Then, the learned model can be used for downstream few-shot classification tasks: we obtain task-specific parameters by performing semi-supervised EM on the latent representations of the support and query sets, and predict labels of the query set by computing aggregated posteriors.
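The semi-supervised EM step on latent representations can be sketched as follows, assuming isotropic unit-variance Gaussians and clamping support responsibilities to their labels; `semi_supervised_em` is an illustrative simplification, not the paper's implementation:

```python
import numpy as np

def semi_supervised_em(z_support, y_support, z_query, n_classes, n_iters=10):
    """Semi-supervised EM sketch on latent codes: support responsibilities
    are fixed one-hot by their labels, query responsibilities are
    re-estimated each E-step; returns predicted query labels."""
    r_support = np.eye(n_classes)[y_support]  # clamped to the given labels
    means = r_support.T @ z_support / r_support.sum(0)[:, None]
    for _ in range(n_iters):
        # E-step: query posteriors under isotropic unit-variance Gaussians
        d = ((z_query[:, None, :] - means[None, :, :]) ** 2).sum(-1)
        logits = -0.5 * d
        logits -= logits.max(1, keepdims=True)  # numerical stability
        r_query = np.exp(logits)
        r_query /= r_query.sum(1, keepdims=True)
        # M-step: update means from clamped support + soft query responsibilities
        r_all = np.vstack([r_support, r_query])
        z_all = np.vstack([z_support, z_query])
        means = r_all.T @ z_all / r_all.sum(0)[:, None]
    return r_query.argmax(1)
```

Query labels are then read off from the final responsibilities, which play the role of the aggregated posteriors mentioned above.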
UVStyle-Net: Unsupervised Few-shot Learning of 3D Style Similarity Measure for B-Reps
Boundary Representations (B-Reps) are the industry standard in 3D Computer Aided Design/Manufacturing (CAD/CAM) and industrial design due to their fidelity in representing stylistic details.
Trip-ROMA: Self-Supervised Learning with Triplets and Random Mappings
However, in small data regimes, we cannot obtain a sufficient number of negative pairs, nor effectively avoid overfitting when negatives are not used at all.
Self-Attention Message Passing for Contrastive Few-Shot Learning
Humans have a unique ability to learn new representations from just a handful of examples with little to no supervision.