Disjoint 15-1
6 papers with code • 1 benchmark • 0 datasets
Most implemented papers
Learning without Forgetting
We propose our Learning without Forgetting method, which uses only new task data to train the network while preserving the original capabilities.
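A minimal sketch of an LwF-style objective, assuming a frozen copy of the old model provides soft targets on the new-task images; the function name, temperature, and weighting are illustrative assumptions, not the authors' code.

```python
import torch
import torch.nn.functional as F

def lwf_loss(new_logits, old_logits, labels, num_old_classes, T=2.0, lambda_old=1.0):
    """new_logits: [B, C_new] from the current model; old_logits: [B, C_old] from the frozen old model."""
    # Standard cross-entropy on the new task's labels.
    ce = F.cross_entropy(new_logits, labels)
    # Temperature-scaled distillation on the old-class outputs only,
    # so the network keeps its original responses while learning new classes.
    p_old = F.softmax(old_logits / T, dim=1)
    log_p_new = F.log_softmax(new_logits[:, :num_old_classes] / T, dim=1)
    kd = F.kl_div(log_p_new, p_old, reduction="batchmean") * (T * T)
    return ce + lambda_old * kd
```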
Incremental Learning Techniques for Semantic Segmentation
To tackle this task, we propose to distill the knowledge of the previous model to retain the information about previously learned classes, whilst updating the current model to learn the new ones.
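In segmentation, the same distillation idea is applied densely over the output maps. The sketch below is a hedged rendering of output-level distillation under that assumption; the signature and temperature are not taken from the paper.

```python
import torch
import torch.nn.functional as F

def dense_distillation_loss(new_logits, old_logits, T=2.0):
    """new_logits: [B, C_new, H, W]; old_logits: [B, C_old, H, W] from the frozen previous model."""
    c_old = old_logits.shape[1]
    p_old = F.softmax(old_logits / T, dim=1)
    log_p_new = F.log_softmax(new_logits[:, :c_old] / T, dim=1)
    # Per-pixel KL between old and new predictions on the previously learned classes,
    # averaged over batch and spatial positions.
    kd = F.kl_div(log_p_new, p_old, reduction="none").sum(dim=1).mean()
    return kd * (T * T)
```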
Modeling the Background for Incremental Learning in Semantic Segmentation
Current strategies fail on this task because they do not consider a peculiar aspect of semantic segmentation: since each training step provides annotation only for a subset of all possible classes, pixels of the background class (i.e., pixels that do not belong to any other class) exhibit a semantic distribution shift.
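A rough sketch of this background-modeling idea, as an assumption-labeled illustration rather than the authors' implementation: when the current annotation says "background", the loss credits the combined probability mass of the background and all previously learned classes, since such pixels may in fact belong to old classes.

```python
import torch
import torch.nn.functional as F

def background_aware_ce(logits, labels, num_old_classes):
    """logits: [B, C, H, W] with channel 0 = background; labels: [B, H, W], 0 = background in this step."""
    log_p = F.log_softmax(logits, dim=1)
    # Fold background + old-class probability mass into a single channel.
    merged_bg = torch.logsumexp(log_p[:, :num_old_classes + 1], dim=1, keepdim=True)
    adjusted = torch.cat([merged_bg, log_p[:, num_old_classes + 1:]], dim=1)
    # Remap labels: new classes shift down to follow the merged background channel.
    remapped = torch.where(labels > num_old_classes,
                           labels - num_old_classes,
                           torch.zeros_like(labels))
    return F.nll_loss(adjusted, remapped)
```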
PLOP: Learning without Forgetting for Continual Semantic Segmentation
We design an entropy-based pseudo-labeling of the background with respect to classes predicted by the old model, to deal with background shift and avoid catastrophic forgetting of the old classes.
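An illustrative sketch of entropy-based pseudo-labeling of background pixels, assuming a frozen old model; the threshold value and function name are assumptions: background pixels on which the old model is confident (low entropy) are relabeled with its predicted class before training.

```python
import torch
import torch.nn.functional as F

def pseudo_label_background(labels, old_logits, entropy_threshold=0.5, bg_index=0):
    """labels: [B, H, W] from the current step; old_logits: [B, C_old, H, W] from the frozen old model."""
    p_old = F.softmax(old_logits, dim=1)
    entropy = -(p_old * torch.log(p_old.clamp_min(1e-8))).sum(dim=1)   # [B, H, W]
    old_pred = p_old.argmax(dim=1)                                     # [B, H, W]
    # Only background pixels with a confident old-model prediction are relabeled.
    confident_bg = (labels == bg_index) & (entropy < entropy_threshold)
    return torch.where(confident_bg, old_pred, labels)
```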
SSUL: Semantic Segmentation with Unknown Label for Exemplar-based Class-Incremental Learning
While the recent CISS algorithms utilize variants of the knowledge distillation (KD) technique to tackle the problem, they fail to fully address the critical challenges in CISS that cause catastrophic forgetting: the semantic drift of the background class and the multi-label prediction issue.
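A hedged illustration of treating pixel-wise prediction as multi-label: each class gets an independent sigmoid output, so new classes can be added without renormalizing against a drifting background channel. The function and parameter names are assumptions and this is not the paper's interface.

```python
import torch
import torch.nn.functional as F

def multilabel_seg_loss(logits, labels, num_classes, ignore_index=255):
    """logits: [B, C, H, W]; labels: [B, H, W] with class indices."""
    valid = labels != ignore_index
    safe_labels = torch.where(valid, labels, torch.zeros_like(labels))
    # One-hot targets per class channel; each channel is trained as an independent binary problem.
    targets = F.one_hot(safe_labels, num_classes).permute(0, 3, 1, 2).float()
    per_pixel = F.binary_cross_entropy_with_logits(logits, targets, reduction="none").sum(dim=1)
    return (per_pixel * valid.float()).sum() / valid.float().sum().clamp_min(1.0)
```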
Representation Compensation Networks for Continual Semantic Segmentation
In this work, we study the continual semantic segmentation problem, where deep neural networks are required to incorporate new classes continually without catastrophic forgetting.