Class Incremental Learning
209 papers with code • 6 benchmarks • 1 dataset
Incremental learning of a sequence of tasks when the task-ID is not available at test time.
Most implemented papers
Overcoming catastrophic forgetting in neural networks
The ability to learn tasks in a sequential fashion is crucial to the development of artificial intelligence.
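This paper introduces Elastic Weight Consolidation (EWC), which slows forgetting by adding a quadratic penalty that anchors parameters important for previous tasks (as measured by a diagonal Fisher information estimate) to their old values. A minimal numpy sketch of that penalty term (the function name and toy values are illustrative, not the paper's code):

```python
import numpy as np

def ewc_penalty(params, old_params, fisher, lam=1.0):
    """EWC-style quadratic penalty: parameters with high Fisher
    importance are strongly anchored to their old-task values."""
    return 0.5 * lam * np.sum(fisher * (params - old_params) ** 2)

# Toy usage: only the first parameter has drifted, and it is the
# important one, so it dominates the penalty.
old = np.array([1.0, 2.0])
new = np.array([1.5, 2.0])
fisher = np.array([4.0, 0.1])   # per-parameter importance
loss = ewc_penalty(new, old, fisher, lam=2.0)  # loss == 1.0
```

In practice this term is added to the new task's loss, so training trades off new-task accuracy against drift on weights that mattered before.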
Supervised Contrastive Learning
Contrastive learning applied to self-supervised representation learning has seen a resurgence in recent years, leading to state-of-the-art performance in the unsupervised training of deep image models.
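The supervised contrastive (SupCon) loss extends this idea by using labels: each anchor is pulled toward all other samples of its class and pushed away from the rest. A small numpy sketch of the loss over normalized embeddings (an illustrative reimplementation, not the authors' code):

```python
import numpy as np

def supcon_loss(z, labels, tau=0.1):
    """Supervised contrastive loss over embeddings z of shape (n, d):
    for each anchor, average the log-probability of its positives
    (same-label samples) under a temperature-scaled softmax."""
    n = z.shape[0]
    z = z / np.linalg.norm(z, axis=1, keepdims=True)
    sim = z @ z.T / tau
    not_self = ~np.eye(n, dtype=bool)          # exclude self-similarity
    log_denom = np.log((np.exp(sim) * not_self).sum(axis=1, keepdims=True))
    log_prob = sim - log_denom
    pos = (labels[:, None] == labels[None, :]) & not_self
    per_anchor = -(log_prob * pos).sum(axis=1) / np.maximum(pos.sum(axis=1), 1)
    return per_anchor.mean()
```

With well-separated classes the positives dominate each softmax and the loss is near zero; mixing labels across clusters drives it up.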
Learning without Forgetting
We propose our Learning without Forgetting method, which uses only new task data to train the network while preserving the original capabilities.
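Learning without Forgetting preserves old capabilities via knowledge distillation: before training on the new task, the old network's outputs on the new data are recorded, and a loss keeps the updated network's old-task outputs close to them. A minimal numpy sketch of that distillation term (function names and temperature value are illustrative):

```python
import numpy as np

def softmax(x, T=1.0):
    e = np.exp(x / T - np.max(x / T, axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

def lwf_distillation_loss(old_logits, new_logits, T=2.0):
    """Cross-entropy between the old network's softened outputs
    (recorded targets) and the current network's outputs on the
    same old-task head."""
    p_old = softmax(old_logits, T)
    p_new = softmax(new_logits, T)
    return -np.mean(np.sum(p_old * np.log(p_new + 1e-12), axis=-1))
```

The temperature T > 1 softens the distributions so that the relative probabilities of non-argmax classes also constrain the new network.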
iCaRL: Incremental Classifier and Representation Learning
A major open problem on the road to artificial intelligence is the development of incrementally learning systems that learn about more and more concepts over time from a stream of data.
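iCaRL stores a small exemplar set per class and classifies with a nearest-mean-of-exemplars rule in feature space, which stays valid as the representation evolves. A numpy sketch of that classification rule (an illustrative reimplementation under the assumption that features are already extracted):

```python
import numpy as np

def nme_classify(features, exemplar_sets):
    """Nearest-mean-of-exemplars: assign each feature vector to the
    class whose normalized exemplar mean is closest in Euclidean
    distance. exemplar_sets maps class label -> (m, d) array."""
    classes = sorted(exemplar_sets)
    means = []
    for cls in classes:
        m = exemplar_sets[cls].mean(axis=0)
        means.append(m / np.linalg.norm(m))
    means = np.stack(means)
    f = features / np.linalg.norm(features, axis=1, keepdims=True)
    dists = np.linalg.norm(f[:, None, :] - means[None, :, :], axis=2)
    return [classes[i] for i in dists.argmin(axis=1)]
```

The full method additionally selects exemplars by herding and trains the representation with a distillation loss; only the prediction step is sketched here.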
Three scenarios for continual learning
Standard artificial neural networks suffer from the well-known issue of catastrophic forgetting, which makes continual or lifelong learning difficult.
On Tiny Episodic Memories in Continual Learning
For successful knowledge transfer, however, the learner needs to remember how to perform previous tasks.
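This line of work shows that even a tiny episodic memory helps: a few stored examples from past tasks are mixed into each new-task batch so old tasks keep contributing to the gradient. A stdlib-only sketch of that rehearsal step (names are illustrative):

```python
import random

def replay_batch(current_batch, memory, k=2):
    """Experience-replay step: augment the incoming batch with up to
    k examples sampled from the episodic memory of past tasks."""
    if not memory:
        return list(current_batch)
    replayed = random.sample(memory, min(k, len(memory)))
    return list(current_batch) + replayed
```

The combined batch is then used for a single gradient update, so the cost over a plain single-task update is just k extra examples per step.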
Continual Learning with Deep Generative Replay
Attempts to train a comprehensive artificial intelligence capable of solving multiple tasks have been impeded by a chronic problem called catastrophic forgetting.
Rehearsal-Free Continual Learning over Small Non-I.I.D. Batches
Ideally, continual learning should be triggered by the availability of short videos of single objects and performed online on on-board hardware with fine-grained updates.
A Simple Unified Framework for Detecting Out-of-Distribution Samples and Adversarial Attacks
Detecting test samples drawn sufficiently far away from the training distribution statistically or adversarially is a fundamental requirement for deploying a good classifier in many real-world machine learning applications.
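The framework fits class-conditional Gaussians with a shared covariance to a network's features and scores a test input by its Mahalanobis distance to the closest class mean; far-away inputs are flagged as out-of-distribution. A numpy sketch of the scoring rule (illustrative, assuming features and per-class statistics are already computed):

```python
import numpy as np

def mahalanobis_ood_score(x, class_means, cov):
    """Confidence score: negative squared Mahalanobis distance from
    feature x to the nearest class-conditional Gaussian with shared
    covariance. Lower scores suggest out-of-distribution inputs."""
    prec = np.linalg.inv(cov)
    scores = []
    for mu in class_means:
        d = x - mu
        scores.append(-float(d @ prec @ d))
    return max(scores)
```

A threshold on this score then separates in-distribution from out-of-distribution (or adversarial) samples.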
Gradient based sample selection for online continual learning
To prevent forgetting, a replay buffer is usually employed to store the previous data for the purpose of rehearsal.
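A common baseline for filling such a buffer from a stream is reservoir sampling, which keeps every example seen so far in the buffer with equal probability; this paper's contribution (gradient-based selection) replaces that uniform rule. A stdlib-only sketch of the reservoir baseline:

```python
import random

class ReservoirBuffer:
    """Fixed-capacity replay buffer filled by reservoir sampling:
    after n items have streamed past, each has probability
    capacity / n of being stored."""
    def __init__(self, capacity):
        self.capacity = capacity
        self.data = []
        self.n_seen = 0

    def add(self, item):
        self.n_seen += 1
        if len(self.data) < self.capacity:
            self.data.append(item)
        else:
            j = random.randrange(self.n_seen)
            if j < self.capacity:
                self.data[j] = item
```

Gradient-based sample selection instead scores candidates by how much gradient diversity they add, aiming to keep a more informative subset than a uniform sample.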