Few-Shot Learning
1037 papers with code • 22 benchmarks • 41 datasets
Few-Shot Learning is an example of meta-learning: a learner is trained on several related tasks during the meta-training phase so that it can generalize well to unseen (but related) tasks, given just a few examples, during the meta-testing phase. An effective approach to the few-shot learning problem is to learn a common representation across tasks and train task-specific classifiers on top of this representation.
Source: Penalty Method for Inversion-Free Deep Bilevel Optimization
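As a concrete sketch of that recipe (hypothetical PyTorch code; module names, sizes, and data below are illustrative, not from any particular paper): a shared encoder provides the common representation, and each new task fits only a small classifier head on its few labeled examples.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

# Shared representation, learned across many tasks during meta-training.
encoder = nn.Sequential(
    nn.Linear(784, 256), nn.ReLU(),
    nn.Linear(256, 64),
)

# One new 5-way, 5-shot task at meta-test time: keep the encoder fixed
# and fit only a lightweight task-specific head on 25 support examples.
support_x = torch.randn(25, 784)            # illustrative data
support_y = torch.arange(5).repeat(5)       # labels 0..4, five shots each
with torch.no_grad():
    feats = encoder(support_x)              # the common representation

head = nn.Linear(64, 5)                     # task-specific classifier
opt = torch.optim.SGD(head.parameters(), lr=0.1)
for _ in range(100):
    opt.zero_grad()
    loss = F.cross_entropy(head(feats), support_y)
    loss.backward()
    opt.step()
```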
Most implemented papers
Model-Agnostic Meta-Learning for Fast Adaptation of Deep Networks
We propose an algorithm for meta-learning that is model-agnostic, in the sense that it is compatible with any model trained with gradient descent and applicable to a variety of different learning problems, including classification, regression, and reinforcement learning.
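A minimal sketch of the MAML loop on a toy 1-D regression family (all data and hyperparameters here are illustrative): take one inner gradient step on a task's support set, then update the shared initialization from the query loss of the adapted parameters, backpropagating through the inner step.

```python
import torch

# Hypothetical tiny model: y = x @ w + b, with meta-learned w, b.
w = torch.zeros(1, 1, requires_grad=True)
b = torch.zeros(1, requires_grad=True)
inner_lr, outer_lr = 0.01, 0.001
meta_opt = torch.optim.SGD([w, b], lr=outer_lr)

def mse(pred, target):
    return ((pred - target) ** 2).mean()

for _ in range(1000):                      # meta-training iterations
    meta_opt.zero_grad()
    for _task in range(4):                 # a small batch of sampled tasks
        # Each task is a random linear function (stand-in for a task dist.).
        slope = torch.randn(1)
        x_s, x_q = torch.randn(10, 1), torch.randn(10, 1)
        y_s, y_q = slope * x_s, slope * x_q
        # Inner loop: one gradient step on the task's support set.
        loss_s = mse(x_s @ w + b, y_s)
        gw, gb = torch.autograd.grad(loss_s, [w, b], create_graph=True)
        w2, b2 = w - inner_lr * gw, b - inner_lr * gb
        # Outer loss: evaluate the adapted parameters on the query set;
        # gradients flow through the inner update (second-order MAML).
        loss_q = mse(x_q @ w2 + b2, y_q)
        loss_q.backward()
    meta_opt.step()
```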
Language Models are Few-Shot Learners
By contrast, humans can generally perform a new language task from only a few examples or from simple instructions - something which current NLP systems still largely struggle to do.
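The paper's few-shot setting is purely in-context: the task specification and a handful of demonstrations are given in the input text, and no weights are updated. A sketch of such a prompt (the translation pairs echo an example from the paper):

```python
# In-context few-shot learning as popularized by GPT-3: the "training
# examples" live in the prompt itself, and no gradient steps are taken.
prompt = """Translate English to French:
sea otter => loutre de mer
peppermint => menthe poivrée
cheese =>"""
# A capable model is expected to continue the pattern with "fromage".
```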
Prototypical Networks for Few-shot Learning
We propose prototypical networks for the problem of few-shot classification, where a classifier must generalize to new classes not seen in the training set, given only a small number of examples of each new class.
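The method is compact enough to sketch: average each class's embedded support examples into a prototype, then classify queries by softmax over negative squared distance to the prototypes. Dimensions below are illustrative, and random tensors stand in for a real encoder's output.

```python
import torch

def prototypical_logits(support, support_y, query, n_way):
    """Classify queries by negative squared distance to class prototypes.

    support: [n_way * k_shot, d] embedded support examples
    support_y: [n_way * k_shot] integer labels in [0, n_way)
    query: [n_query, d] embedded query examples
    """
    # Prototype = mean embedding of each class's support examples.
    protos = torch.stack([support[support_y == c].mean(0) for c in range(n_way)])
    # Squared Euclidean distance from each query to each prototype.
    dists = torch.cdist(query, protos) ** 2
    return -dists  # softmax over -distance gives class probabilities

# Toy usage with random "embeddings" in place of an encoder.
d, n_way, k_shot = 64, 5, 5
support = torch.randn(n_way * k_shot, d)
support_y = torch.arange(n_way).repeat_interleave(k_shot)
query = torch.randn(8, d)
pred = prototypical_logits(support, support_y, query, n_way).argmax(dim=1)
```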
Matching Networks for One Shot Learning
Our algorithm improves one-shot accuracy on ImageNet from 87.6% to 93.2% and from 88.0% to 93.8% on Omniglot compared to competing approaches.
Learning to Compare: Relation Network for Few-Shot Learning
Once trained, an RN is able to classify images of new classes by computing relation scores between query images and the few examples of each new class, without further updating the network.
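A rough sketch of that idea, with an illustrative embedding dimension and a deliberately tiny relation module (the paper uses convolutional features rather than the vectors assumed here): concatenate a query embedding with each class's support embedding and let a learned module score the pair.

```python
import torch
import torch.nn as nn

embed_dim = 64
relation = nn.Sequential(
    nn.Linear(2 * embed_dim, 32), nn.ReLU(),
    nn.Linear(32, 1), nn.Sigmoid(),   # relation score in (0, 1)
)

def relation_scores(query_emb, class_embs):
    # query_emb: [d]; class_embs: [n_way, d] (e.g. pooled support features).
    pairs = torch.cat([query_emb.expand_as(class_embs), class_embs], dim=1)
    return relation(pairs).squeeze(1)  # one score per candidate class

scores = relation_scores(torch.randn(embed_dim), torch.randn(5, embed_dim))
predicted_class = scores.argmax()
```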
On First-Order Meta-Learning Algorithms
This paper considers meta-learning problems, where there is a distribution of tasks, and we would like to obtain an agent that performs well (i.e., learns quickly) when presented with a previously unseen task sampled from this distribution.
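The best-known algorithm from this paper is Reptile, which needs no second-order gradients: train a copy of the model on a sampled task for a few SGD steps, then nudge the meta-parameters toward the adapted weights. A toy sketch with illustrative task data:

```python
import copy
import torch
import torch.nn as nn

model = nn.Linear(1, 1)
meta_step_size = 0.1

for _ in range(1000):
    # Sample a task (here: a random linear function) and adapt a clone.
    slope = torch.randn(1)
    x = torch.randn(20, 1)
    fast = copy.deepcopy(model)
    opt = torch.optim.SGD(fast.parameters(), lr=0.01)
    for _ in range(5):                         # a few inner SGD steps
        opt.zero_grad()
        loss = ((fast(x) - slope * x) ** 2).mean()
        loss.backward()
        opt.step()
    # Reptile update: theta <- theta + eps * (phi - theta).
    with torch.no_grad():
        for p, fp in zip(model.parameters(), fast.parameters()):
            p += meta_step_size * (fp - p)
```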
A Closer Look at Few-shot Classification
Few-shot classification aims to learn a classifier to recognize unseen classes during training with limited labeled examples.
Meta-Baseline: Exploring Simple Meta-Learning for Few-Shot Learning
The boundary between these two lines of work remains underexplored, and the effectiveness of meta-learning in few-shot learning remains unclear.
The Power of Scale for Parameter-Efficient Prompt Tuning
More remarkably, through ablations on model size using T5, we show that prompt tuning becomes more competitive with scale: as models exceed billions of parameters, our method "closes the gap" and matches the strong performance of model tuning (where all model weights are tuned).
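Mechanically, prompt tuning freezes the pretrained model and learns only a short sequence of continuous "soft prompt" embeddings prepended to the input. A sketch with a small stand-in transformer (all sizes illustrative; the paper tunes prompts for a frozen T5):

```python
import torch
import torch.nn as nn

vocab, d_model, prompt_len = 1000, 128, 20
embed = nn.Embedding(vocab, d_model)
lm = nn.TransformerEncoder(
    nn.TransformerEncoderLayer(d_model, nhead=4, batch_first=True),
    num_layers=2)
for p in list(embed.parameters()) + list(lm.parameters()):
    p.requires_grad_(False)                  # the model itself stays frozen

soft_prompt = nn.Parameter(torch.randn(prompt_len, d_model) * 0.02)
opt = torch.optim.Adam([soft_prompt], lr=1e-3)   # only the prompt is trained

tokens = torch.randint(0, vocab, (8, 32))    # a batch of token ids
x = torch.cat([soft_prompt.expand(8, -1, -1), embed(tokens)], dim=1)
hidden = lm(x)                               # gradients reach soft_prompt only
```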
Meta-SGD: Learning to Learn Quickly for Few-Shot Learning
In contrast, meta-learning uses many related tasks to learn a meta-learner that can learn a new task faster and more accurately from fewer examples, where the choice of meta-learner is crucial.
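Meta-SGD's specific twist is to meta-learn a per-parameter learning rate alpha alongside the initialization theta, so the one-step inner update becomes theta' = theta - alpha * grad (elementwise). A toy sketch, with an illustrative task of matching a noisy target vector:

```python
import torch

theta = torch.zeros(10, requires_grad=True)
alpha = torch.full((10,), 0.01, requires_grad=True)   # learned, not fixed
meta_opt = torch.optim.Adam([theta, alpha], lr=1e-3)

def loss_fn(params, target):
    return ((params - target) ** 2).mean()

for _ in range(1000):
    meta_opt.zero_grad()
    target = torch.randn(10)                      # sample a task
    support = target + 0.1 * torch.randn(10)      # few-shot views of it
    query = target + 0.1 * torch.randn(10)
    # One inner step with learned, elementwise learning rates.
    g, = torch.autograd.grad(loss_fn(theta, support), theta,
                             create_graph=True)
    theta_adapted = theta - alpha * g
    # The query loss trains both the initialization and the step sizes.
    loss_fn(theta_adapted, query).backward()
    meta_opt.step()
```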