Few-Shot Class-Incremental Learning
39 papers with code • 3 benchmarks • 3 datasets
Most implemented papers
Constrained Few-shot Class-incremental Learning
Moreover, it is imperative that such learning respects certain memory and computational constraints: (i) training samples are limited to only a few per class, (ii) the computational cost of learning a novel class remains constant, and (iii) the memory footprint of the model grows at most linearly with the number of classes observed.
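A minimal sketch of a classifier that satisfies these constraints, assuming a prototype-style memory rather than the paper's exact architecture: adding a novel class costs one averaging pass over its few shots (constant per class), and memory grows by exactly one vector per class (linear in the number of classes).

```python
import torch

class PrototypeMemory:
    def __init__(self, feat_dim):
        self.prototypes = torch.empty(0, feat_dim)  # one row per class seen so far

    def add_class(self, few_shot_features):
        # few_shot_features: (k, feat_dim) embeddings of the k support samples
        proto = few_shot_features.mean(dim=0, keepdim=True)
        self.prototypes = torch.cat([self.prototypes, proto], dim=0)

    def classify(self, query_features):
        # cosine similarity to every stored prototype; predict the closest class
        q = torch.nn.functional.normalize(query_features, dim=-1)
        p = torch.nn.functional.normalize(self.prototypes, dim=-1)
        return (q @ p.T).argmax(dim=-1)
```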
On the Soft-Subnetwork for Few-shot Class Incremental Learning
Inspired by the Regularized Lottery Ticket Hypothesis (RLTH), which hypothesizes that there exist smooth (non-binary) subnetworks within a dense network that achieve performance competitive with the dense network, we propose a few-shot class-incremental learning (FSCIL) method referred to as Soft-SubNetworks (SoftNet).
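A rough illustration of the soft-subnetwork idea, assuming a sigmoid-gated mask over a single linear layer; the paper's actual split into major and minor weights and its training procedure differ.

```python
import torch
import torch.nn as nn

class SoftMaskedLinear(nn.Module):
    def __init__(self, in_features, out_features):
        super().__init__()
        self.weight = nn.Parameter(torch.randn(out_features, in_features) * 0.01)
        self.score = nn.Parameter(torch.zeros(out_features, in_features))  # mask logits

    def forward(self, x):
        soft_mask = torch.sigmoid(self.score)    # smooth, non-binary values in (0, 1)
        return x @ (self.weight * soft_mask).T   # soft subnetwork of the dense layer
```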
Neural Collapse Terminus: A Unified Solution for Class Incremental Learning and Its Variants
Beyond the standard setting, long-tail class-incremental learning and few-shot class-incremental learning have also been proposed to address data imbalance and data scarcity, respectively; both are common in real-world deployments and further exacerbate the well-known problem of catastrophic forgetting.
Continual Learning: Forget-free Winning Subnetworks for Video Representations
Inspired by the Lottery Ticket Hypothesis (LTH), which highlights the existence of efficient subnetworks within larger, dense networks, a high-performing Winning Subnetwork (WSN) is identified under appropriate sparsity conditions and applied to various continual learning tasks.
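A hedged sketch of how a winning subnetwork can be carved out of fixed dense weights with learned per-weight scores; the actual WSN training (per-task masks, a straight-through estimator for learning the scores) is not shown here.

```python
import torch
import torch.nn as nn

def topk_mask(scores, sparsity=0.5):
    # keep the highest-scoring fraction of weights; zero out the rest
    k = int(scores.numel() * (1.0 - sparsity))
    threshold = scores.flatten().kthvalue(scores.numel() - k + 1).values
    return (scores >= threshold).float()

class WinningSubnetLinear(nn.Module):
    def __init__(self, in_features, out_features, sparsity=0.5):
        super().__init__()
        self.weight = nn.Parameter(torch.randn(out_features, in_features) * 0.01,
                                   requires_grad=False)  # dense weights stay frozen
        self.score = nn.Parameter(torch.rand(out_features, in_features))
        self.sparsity = sparsity

    def forward(self, x):
        # binary mask selects the sparse subnetwork used for this task
        mask = topk_mask(self.score, self.sparsity)
        return x @ (self.weight * mask).T
```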
A Streamlined Approach to Multimodal Few-Shot Class Incremental Learning for Fine-Grained Datasets
Few-shot Class-Incremental Learning (FSCIL) poses the challenge of retaining prior knowledge while learning from limited new data streams, all without overfitting.
Few-Shot Class-Incremental Learning
FSCIL requires CNN models to incrementally learn new classes from very few labelled samples, without forgetting the previously learned ones.
Few-Shot Incremental Learning with Continually Evolved Classifiers
First, we adopt a simple but effective strategy that decouples the learning of representations and classifiers: only the classifiers are updated in each incremental session, which avoids knowledge forgetting in the representations.
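A minimal sketch of this decoupling, assuming a frozen backbone and a linear classifier that is extended each session; the paper's continually evolved classifier additionally adapts old classifiers with a graph model, which is omitted here.

```python
import torch
import torch.nn as nn

backbone = nn.Sequential(nn.Conv2d(3, 64, 3, padding=1), nn.ReLU(),
                         nn.AdaptiveAvgPool2d(1), nn.Flatten())
classifier = nn.Linear(64, 60)          # e.g. 60 base classes

# After base training: freeze the representation so it cannot be forgotten.
for p in backbone.parameters():
    p.requires_grad = False

# In each incremental session, extend and update only the classifier.
def extend_classifier(classifier, num_new_classes):
    old_out, feat_dim = classifier.weight.shape
    new = nn.Linear(feat_dim, old_out + num_new_classes)
    with torch.no_grad():
        new.weight[:old_out] = classifier.weight   # keep old class weights
        new.bias[:old_out] = classifier.bias
    return new
```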
Self-Promoted Prototype Refinement for Few-Shot Class-Incremental Learning
Few-shot class-incremental learning aims to recognize new classes given only a few samples without forgetting the old classes.
Subspace Regularizers for Few-Shot Class Incremental Learning
The key to this approach is a new family of subspace regularization schemes that encourage weight vectors for new classes to lie close to the subspace spanned by the weights of existing classes.
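One plausible instance of such a regularizer, sketched here as a squared projection distance onto the span of the frozen old-class weights; the exact schemes in the paper may differ.

```python
import torch

def subspace_regularizer(new_weights, old_weights):
    # old_weights: (C_old, d) frozen class vectors; new_weights: (C_new, d)
    # Orthonormal basis of the old-class subspace via QR decomposition.
    basis, _ = torch.linalg.qr(old_weights.T)         # (d, C_old)
    projected = new_weights @ basis @ basis.T          # projection onto the subspace
    return ((new_weights - projected) ** 2).sum()      # squared distance to the subspace
```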
Overcoming Catastrophic Forgetting in Incremental Few-Shot Learning by Finding Flat Minima
Our study shows that existing methods severely suffer from catastrophic forgetting, a well-known problem in incremental learning, which is aggravated by data scarcity and imbalance in the few-shot setting.
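A rough sketch of one way to bias base training toward flat minima, averaging gradients over random weight perturbations; the paper's procedure differs in detail (e.g., which parameters are perturbed and how flatness is enforced), and `loss_fn`, `batch`, and `optimizer` are placeholder names.

```python
import torch

def flat_minima_step(model, loss_fn, batch, optimizer, noise_std=0.01, n_perturb=2):
    optimizer.zero_grad()
    scale = 1.0 / (n_perturb + 1)
    # gradient at the current weights
    (loss_fn(model(batch["x"]), batch["y"]) * scale).backward()
    for _ in range(n_perturb):
        noise = [torch.randn_like(p) * noise_std for p in model.parameters()]
        with torch.no_grad():
            for p, n in zip(model.parameters(), noise):
                p.add_(n)                               # perturb the weights
        # accumulate the gradient of the loss in the perturbed neighborhood
        (loss_fn(model(batch["x"]), batch["y"]) * scale).backward()
        with torch.no_grad():
            for p, n in zip(model.parameters(), noise):
                p.sub_(n)                               # restore the weights
    optimizer.step()                                    # step on the averaged gradient
```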